\section{Introduction} \label{sec:Intro} If one follows the by-now-standard procedures of perturbative quantum field theory, then one finds that quantum gravity suffers from the problem that it is not perturbatively renormalizable. The natural coupling constant is $\kappa=2/M$, where $M$ is the reduced Planck mass. In terms of Newton's gravitational constant $G$, we have $\kappa^2 = 32\pi G$. Given that $\kappa$ has negative mass dimension, perturbative non-renormalizability is expected already from simple power counting arguments. Kinematic accidents allow pure gravity at one loop to be free of divergences \cite{tHooft:1974toh} (after a reparametrisation of the metric $g_{\mu\nu}$), but with generic matter or at two loops, no such miracle occurs \cite{tHooft:1974toh,Goroff:1985sz,Goroff:1985th,vandeVen:1991gw}. We will show, however, that within quantum gravity, perturbative in $\kappa$ and starting from the (kinetic parts of the) Einstein-Hilbert action,\footnote{This is thus not related to asymptotic safety \cite{Weinberg:1980,Reuter:1996}, although we will draw on some insight from that field.} there exists a distinguished set of composite operators, dependent on the conformal factor of the metric and non-perturbative in $\hbar$, that are promising for a route out of this dead end. Even at the linearised level, \ie for vanishingly small coupling(s), they have novel infrared properties which have the potential to explain long-standing puzzles in cosmology and black holes, and maybe even lead to experimentally measurable quantum gravity effects, as discussed later in the introduction and in secs. \ref{sec:compact-linear} and \ref{sec:QG}. To understand clearly why there is this possibility, we will need to work with the deeper understanding of renormalization afforded by the Wilsonian RG (renormalization group) \cite{Wilson:1973,Morris:1998}. 
Since an essential ingredient in this framework is the quasi-local effective action constructed from integrating out fluctuations at short distances, we will need to work with a Euclidean signature metric.\footnote{so that indeed for two points $x$ and $y$, $|x-y|\to0\implies x\to y$.} Then one meets the infamous problem that the Euclidean Einstein-Hilbert action,\footnote{Our conventions are $R_{\mu\nu}=R^\alpha_{\ \mu\alpha\nu}$, and $[\nabla_\mu,\nabla_\nu]v^\lambda = R_{\mu\nu\phantom{\lambda}\sigma}^{\phantom{\mu\nu}\lambda}v^\sigma$.} \begin{equation} \label{EH} S_{EH} = \int\!\! d^4x \, \mathcal{L}_{EH}\,,\qquad \mathcal{L}_{EH} = -2\sqrt{g} R/\kappa^2\,, \end{equation} is unbounded from below, so that the Euclidean partition function \begin{equation} \label{Z} \mathcal{Z} = \int\!\! \mathcal{D}g_{\mu\nu}\ {\rm e}^{-S_{EH}} \end{equation} will fail to converge. Expanding the metric about flat space as \begin{equation} \label{h} g_{\mu\nu} = \delta_{\mu\nu} + \kappa\, H_{\mu\nu}\,, \end{equation} we have \begin{equation} \label{EHbilinear} \mathcal{L}_{EH} = \frac12 \left(\partial_\lambda H_{\mu\nu}\right)^2 -2 \left(\partial_\lambda \varphi\right)^2 - \left(\partial^\mu H_{\mu\nu}\right)^2 +2\,\partial^\alpha\! \varphi\, \partial^\beta H_{\alpha\beta} + O(H^3)\,, \end{equation} where contraction is with the background metric $\delta_{\mu\nu}$, and we have defined $\varphi = \tfrac{1}{2} H^{\,\mu}_\mu$. 
Adding a Feynman -- De Donder gauge fixing term \begin{equation} \label{Feynman-DeDonder} \left(\partial^\alpha H_{\alpha\beta} -\partial_\beta \varphi\right)^2 \end{equation} and splitting the fluctuation field into its SO$(4)$ irreducible parts \begin{equation} \label{ph-traceful} H_{\mu\nu} = h _{\mu\nu} + \tfrac{1}{2} \delta_{\mu\nu} \varphi \end{equation} (so $h _{\mu\nu}$ is traceless), the problem is clearly visible in the wrong sign kinetic term for $\varphi$: \begin{equation} \label{Gaussian} \mathcal{L}^{\rm kinetic}_{EH} = \frac12 \left(\partial_\lambda h _{\mu\nu}\right)^2 -\frac12 \left(\partial_\lambda\varphi \right)^2\,. \end{equation} Since the metric is now expressed as \begin{equation} \label{param-perturbative} g_{\mu\nu} = \delta_{\mu\nu} \left( 1+\frac{\kappa}{2}\,\varphi \right) +\kappa \,h _{\mu\nu}\,, \end{equation} we see that $\varphi$ is the perturbation that leads to an overall local rescaling of the metric. It is called the conformal factor, or the dilaton (even though it is not a separate field here but part of the metric). The authors of ref. \cite{Gibbons:1978ac} proposed to fix the problem by continuing the conformal factor functional integral along the imaginary axis: $\varphi\mapsto i\varphi$. Instead, we will keep this ``conformal factor instability'', and find another way of coping, which moreover has a clear physical motivation. Indeed it seems that the conformal factor instability is the key that opens the door to formulating continuum quantum gravity. Mathematically, the first step is to recast \eqref{Z} into differential form by using an exact RG equation for the corresponding effective action. Then there is no immediate difficulty in solving for the latter \cite{Reuter:1996}. 
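The diagonalisation above, from \eqref{EHbilinear} plus the gauge fixing term \eqref{Feynman-DeDonder} to the form \eqref{Gaussian} via the split \eqref{ph-traceful}, can be verified symbolically. The following sketch (ours, not part of the paper) works in momentum space, schematically replacing each derivative $\partial_\mu$ by a factor $p_\mu$, which is enough to compare the quadratic forms:

```python
# Symbolic check that the bilinear Einstein-Hilbert Lagrangian plus the
# Feynman-De Donder gauge fixing term equals (1/2)(dh)^2 - (1/2)(dphi)^2,
# working in momentum space with explicit SO(4) index sums.
import sympy as sp

p = sp.symbols('p0:4')                  # Euclidean momentum p_mu
# Symmetric fluctuation H_mu_nu built from 10 independent components:
H = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'H{min(i, j)}{max(i, j)}'))
phi = H.trace() / 2                     # conformal factor, phi = H^mu_mu / 2
p2 = sum(pi**2 for pi in p)

pH = [sum(p[a] * H[a, b] for a in range(4)) for b in range(4)]  # (p.H)_beta

# Bilinear Lagrangian, eq. (EHbilinear), term by term:
L_EH = (sp.Rational(1, 2) * p2 * sum(H[i, j]**2 for i in range(4) for j in range(4))
        - 2 * p2 * phi**2
        - sum(v**2 for v in pH)
        + 2 * phi * sum(p[a] * pH[a] for a in range(4)))

# Gauge fixing term, eq. (Feynman-DeDonder):
L_gf = sum((pH[b] - p[b] * phi)**2 for b in range(4))

# Traceless split H = h + (1/2) delta phi, eq. (ph-traceful):
h = H - sp.Rational(1, 2) * phi * sp.eye(4)
L_kin = (sp.Rational(1, 2) * p2 * sum(h[i, j]**2 for i in range(4) for j in range(4))
         - sp.Rational(1, 2) * p2 * phi**2)   # eq. (Gaussian)

diff = sp.expand(L_EH + L_gf - L_kin)
print(diff)  # -> 0
```

In particular the cross term between $h_{\mu\nu}$ and $\varphi$ cancels because $h_{\mu\nu}$ is traceless, leaving only the wrong sign $\varphi$ kinetic term.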
Within this Wilsonian framework, the problem with perturbative renormalizability is simply that the interactions \begin{equation} \label{irrelevant} \sim H^n \partial H \partial H \qquad (n\ge1) \end{equation} form \emph{irrelevant} operators (of dimension $n+4$). This follows by na\"\i ve scaling arguments which are nevertheless correct at the Gaussian fixed point \eqref{Gaussian}. Such interactions cannot therefore build a continuum field theory around the Gaussian fixed point, since a continuum field theory requires operators corresponding to (marginally) relevant directions. Of course this only repackages the power counting arguments, although if taken as gospel it already implies that miraculous cancellations of divergences were never a way out. But why rule out non-polynomial interactions? As we will review in the next section, for theories with the right sign kinetic term, the polynomial interactions form a complete orthonormal set of eigenoperators (operators with a well defined scaling dimension). Non-polynomial perturbations with definite scaling dimension at finite field do not scale correctly at large field. They do not emanate from the Gaussian fixed point and after RG evolution to the IR (infrared), they can be re-expanded in terms of the polynomial perturbations and thus do not lead to new continuum physics \cite{Morris:1996nx,Morris:1996xq,Bridle:2016nsu}. When we change the sign of the kinetic term, this conclusion changes radically. The same arguments that ruled out non-polynomial interactions for ordinary scalar field theory now imply that the eigenoperator spectrum degenerates, and even includes a continuous component \cite{Dietz:2016gzg}. Completeness and orthonormality properties are lost. Furthermore, the Wilsonian RG now naturally flows in the reverse direction, meaning that generic flows to the infrared fail at some finite cutoff scale \cite{Bonanno:2012dg,Dietz:2016gzg}. Now we add just one, albeit crucial, observation. 
As part of the definition of quantization, we are free to impose that bare interactions are exponentially decaying for large $\varphi$ (see sec. \ref{sec:quantisation}). Stated more precisely, we require them to be square integrable over amplitude $\varphi\in (-\infty,\infty)$ with weight \begin{equation} \label{weight} \exp \left(\varphi^2/2\Omega_\Lambda\right)\,, \end{equation} where $\Omega_\Lambda= |\langle \varphi(x) \varphi(x) \rangle |$ is the (magnitude of the) free propagator at coincident points, regularised by a UV (ultraviolet) cutoff $\Lambda$. Then as we will see, the eigenoperator spectrum is again discrete, complete, and orthonormal. Working within the conformal sector (\ie retaining only $\varphi$), the rest of the properties of this remarkable quantum field theory follow ineluctably. We will see that the eigenoperators are non-perturbative in $\hbar$, and are evanescent \cite{Bollini:1973wu} \ie vanish when the ultraviolet regulator is removed. In $\mathbb{R}^4$, the physical (renormalized) operators become proportional to ($\varphi$-derivatives of) $\delta(\varphi)$. On other spacetimes, the physical operators are instead exponentially decaying with the amplitude decay scale related to $1/L$, where $L$ is a typical length scale in the manifold. However if the manifold is sufficiently inhomogeneous, in the sense of inducing more than an $O(1)$ change to a certain universal finite size effect (see sec. \ref{sec:compact}), each operator individually ceases to exist because the flow to the infrared ends prematurely. Infinitely many of the eigenoperators are relevant. They therefore can be used to build a non-trivial continuum limit about the Gaussian fixed point, in other words a perturbatively renormalizable quantum field theory. In the case that an infinite number of these relevant couplings are non-vanishing, which is inevitable beyond first order perturbations, new effects emerge. 
In fact even at the linearised level, when an infinite number of these relevant couplings are non-vanishing, it typically happens that at some lower scale $\Lambda\sim\Lambda_\mathrm{p}>0$, the expansion over eigenoperators no longer converges. The result can nevertheless be resummed by transforming to field conjugate momentum space. As we will show, convergence fails either because the RG flow itself ceases to exist, or because the interactions are no longer square integrable under \eqref{weight} but instead have exponential decay set by $\Lambda_\mathrm{p}$, which we therefore recognise as an \emph{amplitude suppression scale}. Now on other manifolds the flow exists only if the inhomogeneity remains smaller than the $O(1)$ correction plus $2\pi L^2\Lambda^2_\mathrm{p}$. As already mentioned in the Abstract, this property is clearly significant for the theory of cosmology, but also surely for black holes and more generally (see secs. \ref{sec:compact-linear} and \ref{sec:QG}). The fact that such dramatic behaviour is already evident at the linearised level, \ie even at vanishing overall coupling, suggests that such quantum gravity effects could be experimentally measurable. However confirming this will require understanding the dynamics, which in turn requires the full development of the quantum gravity, \ie not just the conformal sector. Indeed a further significant step is to embed this structure into gravity, where we need also to maintain a quantum version of diffeomorphism invariance at the renormalized level. We discuss the issues in sec. \ref{sec:QG}. Although the conformal sector has an infinite number of renormalized couplings, these get subsumed effectively into the parametrisation of the metric. As we will see, renormalizability of the diffeomorphism invariant local operators is controlled by one particular eigenoperator, which turns out to have just the right dimension to rule in the Einstein-Hilbert term and rule out all the higher derivative terms. 
The wrong sign kinetic term makes the scalar theory non-unitary (see sec. \ref{sec:unitarity}) but this problem will not affect gravity when continued back to Minkowski signature, where only the two transverse traceless modes actually propagate and the conformal mode is not dynamical. Since the quantum field theory is built around the Gaussian fixed point, it will be perturbatively renormalizable, in particular in $\kappa$. Although the theoretical structure is so constraining that General Relativity is guaranteed to be the low energy effective classical description, the eigenoperators in the conformal sector are non-perturbative in $\hbar$, and indeed vanish in the limit $\hbar\to0$; in reality the theory of gravity will therefore be non-perturbatively quantum and have no classical limit, no matter how small $\kappa$ is taken to be.\footnote{unless $\kappa$ is set to zero, in which case we are left with only free gravitons} \bigskip The structure of the rest of the paper is as follows. Until the final two sections we will be almost exclusively concerned with the conformal sector considered on its own. In Euclidean flat space, this is just a single component scalar field theory with the wrong sign kinetic term. The significance of this change in sign for the Wilsonian RG about the Gaussian fixed point can only be properly understood once the standard case with positive kinetic term is thoroughly understood. Therefore in the next section (sec. \ref{sec:plus}) we review the latter case. In sec. \ref{sec:minus} we change the sign of the kinetic term and develop the consequences for the Wilsonian RG, working in flat Euclidean $\mathbb{R}^4$ spacetime and with linearised perturbations. With the example of the potential, we see in sec. \ref{sec:non-deriv} that typical flows for the RG exist only in the reverse direction and that the eigenspectrum degenerates. We show that one sequence of perturbations has however a Hilbert space structure. In sec. 
\ref{sec:quantisation} we define the bare interactions to lie in this space as part of the definition of quantisation. As intimated earlier, everything else follows as a logical consequence. In particular we develop the properties of these eigenoperators, which for the potential are all relevant, and introduce $\Lambda_\mathrm{p}$ which (up to a non-universal constant) marks the infrared scale where the expansion over eigenoperators breaks down. In sec. \ref{sec:general} we see that for entire flows, $\Lambda_\mathrm{p}$ is a physical quantity, namely the \emph{amplitude suppression scale}. In sec. \ref{sec:examples}, we illustrate with a simple representative example. In sec. \ref{sec:derivative-ops}, we derive the form of the general eigenoperator \ie containing also space-time derivatives. In sec. \ref{sec:perturbation-th}, we start the development of the full non-linear theory. In sec. \ref{sec:unitarity}, we highlight the physical flaws that such a scalar field theory has, if considered in its own right. As already addressed above, these problems are not expected to be inherited by a full theory of quantum gravity. In sec. \ref{sec:QG} we consider what form this latter theory must take (and the phenomenological consequences). However first in sec. \ref{sec:compact} we examine the behaviour of RG flows on a manifold other than $\mathbb{R}^4$. There we see that $\Lambda_\mathrm{p}$ has another dramatic r\^ole to play, limiting the degree of inhomogeneity according to the size of the universe. \section{Scalar field theory with positive kinetic term} \label{sec:plus} In this section we review the RG structure of scalar field theory about the Gaussian fixed point, establishing that the eigenoperator spectrum is given by a complete set of orthonormal polynomial interactions. 
In particular we explain why non-polynomial interactions that satisfy the eigenoperator equation do not behave correctly in the UV (ultraviolet) and after RG evolution to the IR (infrared) can be re-expanded in terms of the polynomial interactions. This was analysed in great detail in ref. \cite{Bridle:2016nsu} (see also \cite{Morris:1996nx,Morris:1996xq}); however, the focus there was different and model approximations were used (in particular the so-called Local Potential Approximation). Here, and in the rest of this paper, we make no approximations beyond the use of perturbation theory where it is legitimate to do so. Not only do we need to work in Euclidean signature (as already remarked in the Abstract and the beginning of the Introduction) but we also need to work on $\mathbb{R}^4$, since for fixed points to exist, the space-time itself should look exactly the same at all scales. Momentum is therefore a useful concept. These remarks may seem trivial but it is important to underline these points for when we adapt this framework to gravitation. After integrating out high momentum modes, we can rewrite the partition function exactly in terms of a Wilsonian effective action \cite{Wilson:1973,Morris:1993} \begin{equation} \label{total-Wilsonian} S^{\mathrm{tot},\Lambda}[\varphi] = S^\Lambda[\varphi] + \frac{1}{2}\varphi\cdot (\Delta^{\Lambda})^{\!-1}\!\!\cdot \varphi\,, \end{equation} where \begin{equation} \label{DeltaUV} \Delta^\Lambda(p) := \frac{C^\Lambda(p)}{p^2} \end{equation} is here the massless propagator regularised by some smooth ultraviolet cutoff profile $C^\Lambda(p)\equiv C(p^2/\Lambda^2)$. Later, when we change the sign of the propagator, we will still define $\Delta^\Lambda$ to be \eqref{DeltaUV}, \ie positive as displayed above. Qualitatively, for $|p|<\Lambda$, $C^\Lambda(p)\approx1$ and mostly leaves the modes unaffected, while for $|p|>\Lambda$ its r\^ole is to suppress modes. 
We require that $C(p^2/\Lambda^2)$ is a monotonically decreasing function of its argument, that $C^\Lambda(p) \to 1$ for $|p|/\Lambda\to0$, and for $|p|/\Lambda\to\infty$, $C^\Lambda(p) \to0$ sufficiently fast to ensure that all momentum integrals are regulated in the ultraviolet. After discarding a field independent part, the interactions satisfy the Wilson/Polchinski flow equation \cite{Polchinski:1983gv,Morris:1993} \begin{equation} \label{pol+} \frac{\partial}{\partial \Lambda}S^\Lambda[\varphi]=\frac{1}{2}\frac{\delta S^\Lambda}{\delta\varphi}\cdot \frac{\partial\Delta^\Lambda}{\partial \Lambda}\cdot \frac{\delta S^\Lambda}{\delta\varphi}-\frac{1}{2}\text{tr}\bigg[\frac{\partial\Delta^\Lambda}{\partial \Lambda}\cdot \frac{\delta^{2}S^\Lambda} {\delta\varphi\delta\varphi}\bigg]\,. \end{equation} The first term on the right hand side encodes the tree level corrections, while the second term encodes the quantum corrections. Had we carried $\hbar$, it would appear in front of this latter term. We want the quasi-local solutions of this equation, \ie solutions $S^\Lambda$ that can be written as the space-time integral of a Lagrangian, which in turn can be written as an (infinite) expansion in space-time derivatives of $\varphi$. Such solutions correspond to a local Kadanoff blocking and exist if $C^\Lambda$ is smooth. The Gaussian fixed point is the trivial solution $S^\Lambda[\varphi]=0$. To find the eigenoperators we linearise around the fixed point: \begin{equation} \label{d-pol} \frac{\partial}{\partial \Lambda}\,\delta S^\Lambda[\varphi]=-\frac{1}{2}\,\text{tr}\bigg[\frac{\partial\Delta^\Lambda}{\partial \Lambda}\cdot \frac{\delta^{2} }{\delta\varphi\delta\varphi}\bigg] \delta S^\Lambda[\varphi]\,. \end{equation} Let us first consider non-derivative interactions. Thus we write: \begin{equation} \label{linearised-v} \delta S^\Lambda = \epsilon\! 
\int\!\!d^4x\, V\!\left(\varphi(x),\Lambda\right)\,, \end{equation} where $\epsilon$ is taken small enough to justify the linearised approximation. The Wilsonian RG consists of a Kadanoff blocking followed by a rescaling back to the original size. This second step is conveniently incorporated by using scale independent variables formed from the dimensionless combinations using $\Lambda$: \begin{equation} \label{scale} x^\mu = \tilde{x}^\mu/\Lambda\,,\qquad\varphi = \Lambda\, \tilde{\varphi}\,,\qquad V = \Lambda^4\, \tilde{V}\,, \qquad t = \ln(\mu/\Lambda)\,. \end{equation} We have noted that at the Gaussian fixed point the scaling dimension of $\varphi$ is its engineering dimension. We have also defined the so-called RG time $t$ to increase in the direction of coarse graining, as in ref. \cite{Wilson:1973}, and introduced the usual arbitrary finite energy scale $\mu$. Eigenoperators are then operators with well defined scaling dimension $4 -\lambda$, when expressed in these variables, which thus take the form \begin{equation} \label{lambda-v} \tilde{V}(\tilde{\varphi},t) = \left(\frac{\mu}{\Lambda}\right)^\lambda \tilde{V}(\tilde{\varphi})\,, \end{equation} the prefactor being the RG evolution of the scaled coupling $\tilde{g}_\lambda=\epsilon\, {\rm e}^{\lambda t}$ at linearised order, the associated dimensionful coupling thus being \begin{equation} \label{dimensionful-g-lambda} g_\lambda= \epsilon\mu^\lambda\,. \end{equation} Such operators are relevant if $\lambda>0$, marginal if $\lambda=0$, and irrelevant if $\lambda<0$. The continuum limit is constructed by giving non-vanishing values for the couplings associated to relevant and marginally relevant directions since these shoot out of the fixed point as $\Lambda$ is lowered from $\Lambda=\infty$ (\ie $\tilde{g}_\lambda\to0$ as $t\to-\infty$), and also to any strictly marginal couplings. 
The continuum limit is parametrised by these couplings, and characterised by the resulting ``RG trajectory'' as $\Lambda$ is lowered. The (marginally) irrelevant couplings do not survive as separate parameters in the continuum limit since they lead to trajectories that fall back into the fixed point, rather they parametrise the basin of attraction of the fixed point \cite{Wilson:1973,Morris:1998}. Although we will mostly restrict ourselves to this linear regime in the current paper, to be precise and to set the context let us briefly sketch the complete construction. Since the (marginally) relevant couplings increase as $\Lambda$ is lowered, we need to handle the full non-linear exact RG. Then we need to define what we still mean by such $\tilde{g}_\lambda(\Lambda)$ in the non-linear regime, which we can do conveniently by imposing some renormalization conditions on $S^\Lambda$. (Such a renormalization condition is also needed for the kinetic term and leads to rescaling the field, \ie wavefunction renormalization.) The dimensionful $g_\lambda(\Lambda)$ will then run with scale once we enter the non-linear regime. Since, as described in the previous paragraph, the asymptotic UV behaviour for these couplings provides the boundary conditions that completely fix the flow, solutions on the RG trajectory can be written in self-similar form as $S^\Lambda = S(\tilde{g}_\lambda)$, \ie where $\Lambda$ dependence only enters through the dimensionless (marginally) relevant couplings. Substituting this form back into the flow equation, the corresponding $\beta_\lambda$ functions can be read off from the renormalization conditions. Choosing finite values for the couplings at a finite scale $\Lambda$, and integrating up these $\beta$ functions, thus solves for the full RG trajectory. To the extent that there is something to prove, it is only that one should establish that there exist such solutions that match into the asymptotic UV regime. 
Since the $\tilde{g}_\lambda(\Lambda)$, or equivalently $g_\lambda(\Lambda)$, are finite at finite scales they are \textit{de facto} renormalized couplings. Since renormalization is in this sense automatic, we will not tend to use this terminology. On the other hand, we should distinguish these from the finitely related \emph{physical} couplings. We will define these later via the Legendre effective action. Returning to the linear regime we will mostly treat in this paper, we note that since each dimensionful coupling then does not run, its `bare' value in the far UV and the `renormalized' value in the IR both coincide with \eqref{dimensionful-g-lambda}. We can and will also choose a physical renormalization condition so that \eqref{dimensionful-g-lambda} coincides with the physical coupling. From \eqref{pol+}, the eigenoperator equation is thus \begin{equation} \label{eigen+} -\lambda\, \tilde{V}(\tilde{\vp}) -\tilde{\vp}\, \tilde{V}' + 4\, \tilde{V} = -\frac{\tilde{V}''}{2{a}^2}\,, \end{equation} where a prime is differentiation with respect to the field argument, and we have defined the dimensionless one-loop massless tadpole integral\footnote{Although $a$ is a pure number, it is non-universal, clearly dependent on the cutoff profile.} \begin{equation} \label{a} \frac1{2{a}^2} = \frac1{2\Lambda}\, \frac{\partial}{\partial\Lambda}\, \Omega_\Lambda = \int\!\frac{d^4\tilde{p}}{(2\pi)^4}\, \frac{C(\tilde{p}^2)}{\tilde{p}^2}\,, \end{equation} taking $a>0$, and $\Omega_\Lambda= \Lambda^2/2a^2$ is the dimensionful version: \begin{equation} \label{Omega} \Omega_\Lambda := |\langle \varphi(x) \varphi(x) \rangle | = \int\!\frac{d^4p}{(2\pi)^4}\, \Delta^\Lambda(p)\,. \end{equation} We have defined it as the magnitude of the propagator evaluated at a point. Here the propagator is positive anyway, but later it won't be. Equation \eqref{eigen+} is of Sturm-Liouville type. 
Its quantised solutions are in fact the Hermite polynomials \begin{equation} \label{On} \cO{n}(\tilde{\vp}) = H_n(a\tilde{\vp})/(2a)^n = \tilde{\vp}^n -n(n-1)\tilde{\vp}^{n-2}/4a^2 +\cdots\,, \end{equation} with $\lambda = 4-n$ and $n$ a non-negative integer. The (scaling) dimension of the operator $\cO{n}$ is thus $4-\lambda=n$, coinciding with the engineering dimension $[\varphi^n]$. The lower powers in \eqref{On} are there to correct for operator mixing as $\Lambda$ is varied and appear with increasing powers of $\hbar$. They arise from tadpole corrections, which are the only quantum corrections remaining at linearised order. As is well known, for a marginal operator we need to go beyond linearised order to decide its fate. And once we go beyond linearised order, $\cO{4}$ becomes (marginally) irrelevant. For a true continuum limit, the only relevant directions (and thus renormalized couplings) in this case are therefore the mass term $\cO{2}$ and the vacuum energy $\cO{0}$ (which however without gravity carries no physics), so that we are left with a massive free theory, a somewhat inconvenient conclusion for illustrating the general structure -- but we trust the latter will be sufficiently clear despite these specific facts. From the general Sturm-Liouville theory we know that the $\cO{n}$ form an orthonormal set: \begin{equation} \label{orthonormal+} \int^\infty_{-\infty}\!\!\!\! d\tilde{\vp}\,\, {\rm e}^{-a^2\tilde{\vp}^2} \cO{n}(\tilde{\vp}) \cO{m}(\tilde{\vp}) = \frac{1}{a}\left(\frac{1}{2a^2}\right)^n\! n!\sqrt{\pi}\,\delta_{nm}\,, \end{equation} which is complete in $\Lm+$, the natural space for Wilsonian interactions around a positive kinetic energy term. This Hilbert space is the space of functions that are square integrable under the Sturm-Liouville measure ${\rm e}^{-a^2\tilde{\vp}^2}$. 
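These statements can be verified directly. The following sketch (ours, not part of the paper) checks symbolically that the Hermite polynomial operators of eq. \eqref{On} solve the eigenoperator equation \eqref{eigen+} with $\lambda=4-n$, and that they obey the orthonormality relation \eqref{orthonormal+}:

```python
# Direct verification that O_n = H_n(a*phi)/(2a)^n solves the eigenoperator
# equation (eigen+) with lambda = 4 - n, and obeys the orthonormality
# relation (orthonormal+) under the Sturm-Liouville weight exp(-a^2 phi^2).
import sympy as sp

phi = sp.Symbol('varphi')
a = sp.Symbol('a', positive=True)

def O(n):
    """Eigenoperator of eq. (On): Hermite polynomial H_n(a*phi)/(2a)^n."""
    return sp.expand(sp.hermite(n, a * phi) / (2 * a)**n)

# eq. (eigen+):  -lambda*V - phi*V' + 4V = -V''/(2a^2),  with lambda = 4 - n:
for n in range(7):
    V, lam = O(n), 4 - n
    lhs = -lam * V - phi * sp.diff(V, phi) + 4 * V
    rhs = -sp.diff(V, phi, 2) / (2 * a**2)
    assert sp.expand(lhs - rhs) == 0

# eq. (orthonormal+): weighted integrals over the full field amplitude range:
for n in range(4):
    for m in range(4):
        I = sp.integrate(sp.exp(-a**2 * phi**2) * O(n) * O(m),
                         (phi, -sp.oo, sp.oo))
        expected = (sp.factorial(n) * sp.sqrt(sp.pi) / (a * (2 * a**2)**n)
                    if n == m else 0)
        assert sp.simplify(I - expected) == 0

print("checks of (eigen+) and (orthonormal+) passed")
```

The loop bounds are merely what we sampled; the identities hold for all non-negative integer $n$, $m$ by the classical Hermite recursion and orthogonality relations.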
By all this we mean that if $\tilde{V}(\tilde{\vp})\in\Lm+$, and we set \begin{equation} \label{tg+} \tilde{g}_n = \frac{a}{\sqrt{\pi}}\frac{(2a^2)^n}{n!}\int^\infty_{-\infty}\!\!\!\! d\tilde{\vp}\,\, {\rm e}^{-a^2\tilde{\vp}^2} \cO{n}(\tilde{\vp}) \tilde{V}(\tilde{\vp})\,, \end{equation} the norm-squared of the remainder vanishes as we extend to an infinite series, \ie \begin{equation} \label{completeness-proof} \int^\infty_{-\infty}\!\!\!\! d\tilde{\vp}\,\, {\rm e}^{-a^2\tilde{\vp}^2} \left( \tilde{V}(\tilde{\vp}) - \sum_{n=0}^N \tilde{g}_n \cO{n}(\tilde{\vp})\right)^{\!2}\to0\quad{\rm as}\quad N\to\infty\,. \end{equation} In this sense, all perturbations in $\Lm+$ are described by a countable infinity of couplings $\tilde{g}_n$, and their RG evolution is just given by the RG evolution of these couplings. To form the bare action at $\Lambda=\Lambda_0$, which we can take to be the initial condition for the flow equation \eqref{pol+}, we need to choose the bare couplings $\tilde{g}^{(\lambda)}_0\equiv\tilde{g}^{(\lambda)}( {\Lambda_0} )$. The simplest choice is to set the bare irrelevant couplings to zero. A more general choice that stays within the basin of attraction of the Gaussian fixed point (at least in perturbation theory) is to set them to some finite fixed value \ie to set $g^{(\lambda)}_0 = \tilde{g}^{(\lambda)}_0 \Lambda_0^\lambda$, where $\tilde{g}^{(\lambda)}_0$ is a fixed pure number if $\lambda<0$. In contrast the bare relevant couplings $\tilde{g}^{(\lambda)}_0$ need to follow the flow and thus vanish as $ {\Lambda_0} \to\infty$. At the linearised level, $\tilde{g}^{(\lambda)}_0 = g^{(\lambda)} \Lambda_0^{-\lambda}$ where now $g^{(\lambda)}$ is some fixed finite dimension-$\lambda$ coupling (the renormalized coupling) if $\lambda>0$. Note that as $\Lambda_0\to\infty$ in order to form the continuum limit, the linearised approximation for the relevant couplings becomes ever more valid at scales close to the bare scale. 
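As a worked illustration (ours, not from the paper) of the projection formula \eqref{tg+} and the expansion \eqref{completeness-proof}, consider the perturbation $\tilde{V}=\tilde{\vp}^4$, which lies in the Hilbert space since a polynomial times the Gaussian weight is integrable. It decomposes exactly into $\cO{0}$, $\cO{2}$ and $\cO{4}$, so the remainder in \eqref{completeness-proof} already vanishes at $N=4$:

```python
# Expand V = varphi^4 over the eigenoperators O_n using the projection
# formula (tg+), then confirm the finite sum reconstructs V exactly.
import sympy as sp

x = sp.Symbol('varphi')
a = sp.Symbol('a', positive=True)

def O(n):
    return sp.expand(sp.hermite(n, a * x) / (2 * a)**n)

V = x**4

def g(n):
    """Coupling from the projection formula, eq. (tg+)."""
    I = sp.integrate(sp.exp(-a**2 * x**2) * O(n) * V, (x, -sp.oo, sp.oo))
    return sp.simplify(a / sp.sqrt(sp.pi) * (2 * a**2)**n / sp.factorial(n) * I)

coeffs = [g(n) for n in range(5)]
print(coeffs)  # odd coefficients vanish by symmetry

recon = sp.expand(sum(c * O(n) for n, c in enumerate(coeffs)))
assert recon == sp.expand(V)  # remainder in (completeness-proof) is zero at N=4
```

The non-zero couplings are the $\varphi^4$ coupling itself plus induced mass and vacuum pieces, reflecting the tadpole-induced operator mixing discussed below eq. \eqref{On}.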
The effective action \eqref{total-Wilsonian} can in this way provide the bare action, and studying its evolution away from the bare action provides us with direct access to the Wilsonian RG framework, but does not directly furnish us with physical quantities. We can access these latter in a useful way by replacing the cutoff $C^\Lambda$ in \eqref{DeltaUV} by \begin{equation} \label{sum-rule} C^ {\Lambda_0} _k(p) = C^ {\Lambda_0} (p) - C^k(p)\,, \end{equation} thus the theory is now also infrared regulated at scale $k$ \cite{Morris:2015oca}. Then writing the Legendre effective action as \begin{equation} \label{total-Gamma} \Gamma^{\text{tot},\, {\Lambda_0} }_{k}[\varphi]= \Gamma^ {\Lambda_0} _{k}[\varphi]+ \frac{1}{2}\varphi\cdot \left(\Delta_{k}^ {\Lambda_0} \right)^{\!-1}\!\!\cdot \varphi\,, \end{equation} where \begin{equation} \label{sum-ruled-Delta} \Delta^ {\Lambda_0} _k = \Delta^ {\Lambda_0} - \Delta^k\,, \end{equation} we have the identity (up to discarding a field independent part on the right hand side) \begin{equation} \Gamma^ {\Lambda_0} _ {\Lambda_0} [\varphi] = S^ {\Lambda_0} [\varphi]\,, \end{equation} which provides us with the initial condition for a flow with respect to the infrared cutoff, the latter taking the form \cite{Nicoll1977,Morris:1993,Morris:2015oca} (see also \cite{Bonini:1992vh,Wetterich:1992}): \begin{equation} \label{Gamma.flow+} \frac{\partial}{\partial k}\Gamma^ {\Lambda_0} _{k}[\varphi]=-\frac{1}{2}\text{tr}\bigg[\bigg(1+\Delta^ {\Lambda_0} _k\cdot \frac{\delta^{2}\Gamma^ {\Lambda_0} _{k}}{\delta\varphi\delta\varphi}\bigg)^{\!-1}\frac{1} {\Delta^ {\Lambda_0} _k}\frac{\partial\Delta^ {\Lambda_0} _k}{\partial k}\bigg]\,. \end{equation} At the Gaussian fixed point the Legendre effective action has just the field independent part $\Gamma^ {\Lambda_0} _k[\varphi]=-\tfrac{1}{2}\,\text{tr}\ln\Delta^ {\Lambda_0} _k$. 
Once again looking at linearised perturbations, we have: \begin{equation} \label{d-Gamma.flow+} \frac{\partial}{\partial k}\, \delta\Gamma^ {\Lambda_0} _{k}[\varphi]=-\frac{1}{2}\,\text{tr}\bigg[\frac{\partial\Delta^k}{\partial k}\cdot \frac{\delta^{2} }{\delta\varphi\delta\varphi}\bigg] \delta \Gamma^ {\Lambda_0} _{k}[\varphi]\,, \end{equation} where we have used \eqref{sum-rule} to simplify the expression. We see that $\delta \Gamma^ {\Lambda_0} _{k}[\varphi]$ satisfies an identical equation to \eqref{d-pol} with $k$ now playing the r\^ole of a UV cutoff. The reason for this is as follows. Since at the linearised level the flow equation has become insensitive to the overall UV cutoff $ {\Lambda_0} $, we can send this to infinity. Then we can note that $\Gamma_\Lambda:=\Gamma_\Lambda^\infty$ is related to $S^\Lambda$ by a Legendre transform: $\Gamma_\Lambda$ carries the purely quantum, 1PI (one particle irreducible), parts of $S^\Lambda$ \cite{Morris:1993,Morris:1998,Morris:2015oca}.\footnote{See also \cite{Keller:1990ej,Bonini:1992vh}. The existence of $\Lambda\to\infty$ flows is a different matter, and is why in general such a complete (renormalized) trajectory must terminate at a fixed point.} However at the linearised level there are only quantum corrections and thus the flow equations coincide. Setting \begin{equation} \label{linearised-v-G} \delta \Gamma^ {\Lambda_0} _k[\varphi] = \epsilon\! \int\!\!d^4x\, V\left(\varphi(x),k\right)\,, \end{equation} the interaction potential will therefore satisfy the same eigenoperator equation \eqref{eigen+} as that for the Wilsonian effective action, only with $\Lambda$ replaced by $k$ in \eqref{scale} and \eqref{lambda-v}. Now suppose that we add $g_n\pO{n}_ {\Lambda_0} (\varphi)$ to the bare action \ie at $k=\Lambda= {\Lambda_0} $. By this we mean that we add in scaled units $\tilde{g}_n \cO{n}(\tilde{\vp})$, where $\tilde{g}_n = \tilde{g}_n( {\Lambda_0} )=g_n /\Lambda_0^{4-n}$. 
To linearised order, and in scaled units, this evolves in a self-similar way by construction, \ie keeps the same form, with the dimensionless variables formed using the appropriate scale. In particular we recognise that the coupling becomes \begin{equation} \label{mult-evolve} \left(\frac{ {\Lambda_0} }{k}\right)^{4-n}\!\!\tilde{g}_n( {\Lambda_0} ) = \frac{g_n}{\ k^{4-n}}= \tilde{g}_n(k) \,. \end{equation} Therefore, using \eqref{scale}, the dimensionful (unscaled) interaction is \begin{equation} \label{physical-Onk} g_n\,\pO{n}_k(\varphi) =k^4 \frac{g_n}{\ k^{4-n}} \,\,\cO{n}\!\left({\varphi}/{k}\right) = g_n\left( \varphi^n -n(n-1) \frac{k^2}{4a^2}\varphi^{n-2}+\cdots\right)\,, \end{equation} \ie \begin{equation} \label{physical-OnL} \pO{n}_\Lambda(\varphi)= \Lambda^n\,\cO{n}(\varphi/\Lambda) = \varphi^n -n(n-1) \frac{\Lambda^2}{4a^2}\varphi^{n-2}+\cdots\,. \end{equation} Again we note that in the Wilsonian RG framework, the operator and associated coupling are already the renormalized ones once the cutoff $k$ falls to physical scales. In addition in the limit $k\to0$, we find the universal \emph{physical} interaction, as it appears in the Legendre effective action. In this case we thus find $\pO{n}(\varphi) :=\lim_{k\to0} \pO{n}_k(\varphi)$, where: \begin{equation} \label{physical-On} g_n \pO{n}(\varphi) = g_n\varphi^n\,. \end{equation} Recalling the discussions above, we see that for relevant directions this is finite and $g_n$ indeed corresponds to the physical coupling, while for the irrelevant directions $g_n$ is proportional to an inverse power of $ {\Lambda_0} $ and thus tends to zero in the continuum limit $ {\Lambda_0} \to\infty$. Note that Wilsonian RG properties are only manifest in scaled variables. For example the statement that relevant perturbations emanate from the Gaussian fixed point in the ultraviolet, \ie vanish as $\Lambda\to\infty$, is only true in scaled variables. 
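For orientation, using $a=\Lambda/\sqrt{2\Omega_\Lambda}$ the marginal case can be written out explicitly: \begin{equation} \pO{4}_\Lambda(\varphi) = \varphi^4 - 3\,\frac{\Lambda^2}{a^2}\,\varphi^2 + \frac{3\Lambda^4}{4a^4} = \varphi^4-6\,\Omega_\Lambda\varphi^2+3\,\Omega_\Lambda^2\,, \end{equation} \ie the Wick-ordered monomial with self-contraction $\langle\varphi^2\rangle=\Omega_\Lambda$, the subtractions being precisely the tadpole diagrams formed with the cutoff propagator.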
In dimensionful terms the tadpole correction terms actually diverge in this limit, as can be seen from \eqref{physical-OnL}. For example, the negative mass term correction in the marginal operator $\cO{4} = \tilde{\vp}^4-3\tilde{\vp}^2/a^2+3/(4a^4)$, which is fixed and finite in scaled variables, is there to cancel exactly the quadratic mass term divergence (the divergence responsible for the naturalness problem in Higgs physics), thus automatically giving the renormalized $\varphi^4$ interaction (at linearised level) in the continuum limit as we saw above. The evolution \eqref{mult-evolve} can be understood in this way more conventionally in terms of Feynman diagrams. We will make that connection clearer later for the novel operators we discover for scalar field theory with wrong sign kinetic term. Similarly we could continue the development by including (spacetime) derivative interactions, and also in going beyond linearised order into perturbation theory with the (marginally) relevant couplings. Of course we are only rephrasing standard knowledge here, so instead we make these developments directly for the novel operators in sec. \ref{sec:minus}. Now we address the fate of non-polynomial solutions to \eqref{eigen+}, which cannot be understood purely in terms of Feynman diagrams since non-perturbative physics is required (although of a rather trivial sort). At first sight the general solution of \eqref{eigen+}, which can be written in terms of Kummer functions, allows for new eigenoperators, in particular ones for which $\lambda>0$ and which thus can be used to build exotic continuum limits \cite{HHOrig}. Their large field behaviour grows as $\sim \tilde{\vp}^{\lambda-5}\exp (a^2\tilde{\vp}^2)$, so they lie outside $\Lm+$. However it is not true that these solutions provide new continuum limits \cite{Morris:1996nx,Morris:1996xq,Bridle:2016nsu}.
The reason is that for fixed $\epsilon$, no matter how small, the linearised approximation, \eqref{linearised-v} or \eqref{linearised-v-G}, is not valid for large field. To find the correct evolution for such a perturbation, one needs to use the full non-linear flow equation in the large field regime. Thus such solutions will also evolve differently depending on whether we regard this as a perturbation that is purely quantum or includes the classical corrections \cite{Bridle:2016nsu}. The simplest picture arises from taking it to be purely quantum. In fact since $\Gamma_\Lambda$ diverges at large field, it follows from \eqref{Gamma.flow+} that the right hand side vanishes and thus the dimensionful (unscaled) interaction does not evolve at all in this limit. Correspondingly in scaled units the interaction will follow ``mean field evolution''. Adding such an operator to the bare $\Gamma_ {\Lambda_0} $, we thus find at any other scale $\Lambda$, in the large field regime $\tilde{\vp}\gg( {\Lambda_0} /\Lambda)\sqrt{\ln(1/\epsilon)}$, \begin{equation} \sim \epsilon\, \tilde{\vp}^{\lambda-5} \left(\frac{\Lambda}{ {\Lambda_0} }\right)^{\lambda-1}\!\!\!\exp\left\{ a^2\tilde{\vp}^2 \Lambda^2/\Lzp2\right\}\,. \end{equation} To be a relevant perturbation we want this scaled version to vanish as $\Lambda\to\infty$ so that we return to the Gaussian fixed point in this limit, but we see that actually the scaled perturbation diverges in this limit. On the other hand for RG evolution into the IR, once $\Lambda< {\Lambda_0} /\sqrt{2}$, the interaction is inside $\Lm+$ and thus can be expanded as a convergent series in terms of the $\cO{n}$. Actually, also when we add the perturbation $\tilde{g}_n \cO{n}$ to the bare $\Gamma_ {\Lambda_0} $, the linearised approximation is not valid for large field for $n>2$. Mean field evolution therefore takes over here too, and thus at scale $\Lambda$ it becomes \begin{equation} \left(\frac{ {\Lambda_0} }{\Lambda}\right)^{4}\!\!
\tilde{g}_n( {\Lambda_0} )\,\cO{n}(\tilde{\vp} \Lambda/ {\Lambda_0} ) \,. \end{equation} The difference is that at large field this just gives us back self-similar evolution and \eqref{mult-evolve} \cite{Morris:1996nx,Morris:1996xq,Bridle:2016nsu}. At the same time these observations establish that a general (not necessarily small) 1PI perturbation $\tilde{V}_ {\Lambda_0} (\tilde{\vp})$ that starts in $\Lm+$, remains in $\Lm+$ under evolution to the IR, and thus the complete evolution can be understood in terms of the corresponding $\tilde{g}_n(k)$. However note that $\Lm+$ is not defined when the cutoff reaches $k=0$. In the limit $k\to0$, the relevant interactions diverge, so $\tilde{V}_k(\tilde{\vp})$ is itself ill defined in this limit. This can be seen in \eqref{mult-evolve}, although of course the linearised approximation breaks down before this happens. Nevertheless the mass and vacuum energy terms clearly will in general diverge in scaled units using $k$ (see also \eg \cite{Bridle:2016nsu}). For these reasons the property $\tilde{V}_k(\tilde{\vp})\in\Lm+$ can only be defined for all $ {\Lambda_0} \ge k>0$ (\ie excluding the limit $k\to0$). \section{Scalar field theory with negative kinetic term} \label{sec:minus} Now we change the sign of the kinetic term. At face value this makes no sense, since now the functional integral in the partition function no longer even na\"\i vely converges, while the momentum cutoff profile, instead of exponentially suppressing the integrand, makes matters worse. But gravity presents us with this problem if we are to understand it in Wilsonian terms, since then we must consider fluctuations about Euclidean $\mathbb{R}^4$ (\cf beginning sec. \ref{sec:plus}). Therefore we need to generalise what we mean by quantum field theory in this case in order to make progress. Instead of following ref. 
\cite{Gibbons:1978ac} and analytically continuing so as to remove the sign, we keep the sign and seek an appropriate generalisation of the structure outlined in the previous section. We begin by replacing \eqref{total-Wilsonian} and \eqref{total-Gamma} by\footnote{Note that for convenience $\Delta^\Lambda$ in \eqref{DeltaUV}, \cf also \eqref{a} and \eqref{Omega}, are defined to be positive.} \begin{equation} \label{total-} S^{\mathrm{tot},\Lambda}[\varphi] = S^\Lambda[\varphi] - \frac{1}{2}\varphi\cdot (\Delta^{\Lambda})^{\!-1}\!\!\cdot \varphi\,,\qquad \Gamma^{\text{tot},\, {\Lambda_0} }_{k}[\varphi]= \Gamma^ {\Lambda_0} _{k}[\varphi]- \frac{1}{2}\varphi\cdot \left(\Delta_{k}^ {\Lambda_0} \right)^{\!-1}\!\!\cdot \varphi\,. \end{equation} As a result, $\Delta\mapsto -\Delta$ in the flow equations \eqref{pol+}, \eqref{Gamma.flow+} and \eqref{d-Gamma.flow+}:\footnote{In preparation for later we have reinstated $\Delta^ {\Lambda_0} _k$ in the last equation.} \begin{eqnarray} \label{pol-} \frac{\partial}{\partial\Lambda} S^\Lambda[\varphi] &=&{-} \frac{1}{2}\,\frac{\delta S^\Lambda}{\delta\varphi}\cdot \frac{\partial\Delta^\Lambda}{\partial\Lambda}\cdot \frac{\delta S^\Lambda}{\delta\varphi}+\frac{1}{2}\,\text{tr}\bigg[\frac{\partial\Delta^\Lambda}{\partial\Lambda}\cdot \frac{\delta^{2}S^\Lambda} {\delta\varphi\delta\varphi}\bigg]\,, \\ \label{Gamma.flow-} \frac{\partial}{\partial k}\Gamma^ {\Lambda_0} _{k}[\varphi] &=& -\frac{1}{2}\,\text{tr}\bigg[\bigg(1-\Delta^ {\Lambda_0} _k\cdot \frac{\delta^{2}\Gamma^ {\Lambda_0} _{k}}{\delta\varphi\delta\varphi}\bigg)^{\!-1}\frac{1}{\Delta^ {\Lambda_0} _k}\frac{\partial\Delta^ {\Lambda_0} _k}{\partial k}\bigg]\,,\\ \label{d-Gamma.flow-} \frac{\partial}{\partial k}\, \delta\Gamma^ {\Lambda_0} _{k}[\varphi] &=&{-} \frac{1}{2}\,\text{tr}\bigg[\frac{\partial\Delta^ {\Lambda_0} _k}{\partial k}\cdot \frac{\delta^{2} }{\delta\varphi\delta\varphi}\bigg] \delta \Gamma^ {\Lambda_0} _{k}[\varphi]\,. 
\end{eqnarray} This makes these equations backward-parabolic, which means in particular that the Cauchy initial value problem for flow towards the IR is not well posed. To elucidate this and further consequences, we will again begin by considering non-derivative interactions at the linearised level. \subsection{Non-derivative eigenoperators} \label{sec:non-deriv} The linearised flow for the potential \begin{equation} \label{flow-V} \partial_t V(\varphi,t) = -\Omega_\Lambda\, V''(\varphi,t)\,, \end{equation} can be written: \begin{equation} \label{heat} \frac{\partial}{\partial T}\, V(\varphi,T) = \frac1{4a^2} V''(\varphi,T)\,, \end{equation} which is now in the form of the heat diffusion equation, with a `time' $T=\Lambda^2$, which runs towards the UV. This means that for a general `initial' potential $V(\varphi,T_0)$, well-defined flows only exist towards the UV (which is thus also an issue for the full flow equations \cite{Bonanno:2012dg,Dietz:2016gzg}). In the other direction, the bare action must be chosen carefully if the flow is to exist all the way to $k\to0$. Indeed, this is already intuitively clear from the connection to heat diffusion. Flowing in the UV direction, the potential will diffuse out, becoming ever smoother. By contrast, flows towards the IR will reverse the diffusion process, typically resulting in a $V(\varphi,T)$ that develops singularities in $\varphi$ at some critical `time' $T=T_\mathrm{p} :=a^2\Lambda^2_\mathrm{p}$, after which the flow ceases to exist, \ie the flow typically ends at some $k =a\Lambda_\mathrm{p}>0$.\,\footnote{Although we do not address the asymptotic safety scenario in this paper, since the flow is again backward-parabolic, it is clear that generic flows towards the IR will end at some critical scale there also \cite{Bonanno:2012dg,Dietz:2016gzg}.} (We include the factor $a$ in the definition of $\Lambda_\mathrm{p}$ for convenience: as we will see in sec.
\ref{sec:general}, in other circumstances $\Lambda_\mathrm{p}$ can then have a universal meaning.) The fact that flow is more naturally in the reverse direction suggests that universality should be found in the UV limit rather than the IR. Indeed we are about to find that the Gaussian fixed point now supports eigenoperators of arbitrarily high relevancy (\ie for RG time reversed flows, playing the r\^ole of the usual hierarchy of irrelevant operators). In fact without further restriction, the situation is worse than that. To realise the Wilsonian RG, we need to use the scaled variables \eqref{scale}, giving \begin{equation} \label{scaled-flow-V} \Lambda\frac{\partial}{\partial \Lambda}\tilde{V}_\Lambda(\tilde{\vp}) -\tilde{\varphi}\, \tilde{V}'_\Lambda(\tilde{\vp}) +4\, \tilde{V}_\Lambda(\tilde{\vp}) = {\tilde{V}''_\Lambda}(\tilde{\vp})/{(2{a}^2)}\,. \end{equation} Then setting $\tilde{V}_\Lambda(\tilde{\vp})=\, {\rm e}^{\lambda t}\, \tilde{V}(\tilde{\vp})$, we get the eigenoperator equation \eqref{eigen+} except with a plus sign on the right hand side: \begin{equation} \label{eigen-} -\lambda\, \tilde{V}(\tilde{\vp}) -\tilde{\varphi}\, \tilde{V}' + 4\, \tilde{V} = \frac{\tilde{V}''}{2{a}^2}\,. \end{equation} The change in relative sign between the $\tilde{\vp} \tilde{V}'$ and $\tilde{V}''$ term means that at large field one no longer has exponentially growing solutions. Instead they behave at worst as \begin{equation} \label{GaussianEigenasymptotic} \tilde{V}\propto\tilde{\vp}^{4-\lambda}+\frac{(4-\lambda)(3-\lambda)}{4a^2}\tilde{\vp}^{2-\lambda}+ {O}(\tilde{\vp}^{-\lambda})\,, \end{equation} which is generically an asymptotic series which is also subject to exponentially decaying corrections $\sim\tilde{\vp}^{\lambda-5} \,{\rm e}^{-a^2\tilde{\vp}^2}$. 
For $\lambda>2$, such solutions justify linearisation of the right hand side of \eqref{Gamma.flow-} ever more accurately as $\tilde{\vp}\to\infty$ and thus are not ruled out by the large field analysis reviewed in sec. \ref{sec:plus}, while for $\lambda\le2$ mean field analysis still allows these perturbations since it just gives back the correct multiplicative evolution \ie $( {\Lambda_0} /k)^\lambda \tilde{V}$. Thus the large field test rules out none of the solutions \cite{Dietz:2016gzg}. These solutions divide into three sets as follows \cite{Dietz:2016gzg}. For every $\lambda$ there are two linearly independent solutions, an odd and even Kummer function, which thus form a continuous eigenoperator spectrum. For $\lambda$ not an integer, by adjustment of their ratio, one can arrange for zero coefficient for the asymptotic series in \eqref{GaussianEigenasymptotic} on one side $\tilde{\vp}\to\pm\infty$, leaving behind the exponentially decaying corrections, but on the other side $\tilde{\vp}\to\mp\infty$ it will then have \eqref{GaussianEigenasymptotic} as its asymptotic behaviour. At $\lambda$ an integer, one of the two Kummer functions degenerates, thus forming two discrete spectra: at $\lambda=4-n$ there are the polynomial solutions, which now read $\cO{n}(\tilde{\vp})=H_n(ia\tilde{\vp})/(2ia)^n$; for $\lambda=5+n$, we have an infinite tower of exponentially decaying `super-relevant' eigenoperators \begin{equation} \label{delta} \delta_n(\tilde{\vp}) := \frac{a}{\sqrt{\pi}} \frac{\partial^n}{\partial\tilde{\vp}^n} \, {\rm e}^{-a^2\tilde{\vp}^2} = \frac{a}{\sqrt{\pi}} (-a)^n H_n(a\tilde{\vp}) \, {\rm e}^{-a^2\tilde{\vp}^2} \,,\qquad \lambda=5+n\,, \end{equation} $n$ a non-negative integer, whose dimension is thus \begin{equation} \label{delta-dim} [\delta_n]=4-\lambda = -1-n\,. \end{equation} Solutions corresponding to these latter also existed for \eqref{eigen+} but were exponentially growing and thus by the large field analysis did not evolve correctly.
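As a simple consistency check, the lowest member solves \eqref{eigen-} directly: substituting $\tilde{V}=\delta_0(\tilde{\vp})$, for which $\tilde{V}'=-2a^2\tilde{\vp}\,\tilde{V}$ and $\tilde{V}''=(4a^4\tilde{\vp}^2-2a^2)\,\tilde{V}$, both sides of \eqref{eigen-} at $\lambda=5$ reduce to \begin{equation} \left(2a^2\tilde{\vp}^2-1\right)\tilde{V}\,. \end{equation} Differentiating \eqref{eigen-} with respect to $\tilde{\vp}$ then shows that $\partial_{\tilde{\vp}}$ maps a solution with eigenvalue $\lambda$ to one with eigenvalue $\lambda+1$, generating the whole tower \eqref{delta}.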
The second expression in \eqref{delta} follows from substituting $\tilde{V}\mapsto \tilde{V}\, {\rm e}^{-a^2\tilde{\vp}^2}$ into \eqref{eigen-} and comparing to \eqref{eigen+}. The first expression can be found by substituting the Fourier transform: \begin{equation} \label{Fourier} \tilde{V}(\tilde{\vp}) = \int^\infty_{-\infty}\! \frac{d\tilde{\vpi}}{2\pi}\, \tilde{\mathcal{V}}(\tilde{\vpi})\, \mathrm{e}^{i\tilde{\vpi}\tilde{\vp}}\,, \end{equation} where $\tilde{\vpi}=\uppi\Lambda$ is the scaled conjugate momentum, giving the general solution: \begin{equation} \label{general-sol} \tilde{\mathcal{V}}(\tilde{\vpi}) = (i\tilde{\vpi})^{\lambda-5} \exp\left(-\frac{\tilde{\vpi}^2}{4a^2}\right)\,. \end{equation} This has power-law asymptotics \eqref{GaussianEigenasymptotic}, generated by the singularity at $\tilde{\vpi}=0$, except that the singularity is absent when $\lambda=5+n$ where it gives \eqref{delta}. Equation \eqref{eigen-} is still of Sturm-Liouville type, but the Sturm-Liouville weight function is now ${\rm e}^{+a^2\tilde{\vp}^2}$. Defining $\Lm-$ to be the space of square integrable functions under this measure, the polynomials and the continuous spectrum of Kummer functions lie outside this space. However the exponentially decaying solutions lie inside $\Lm-$ and indeed form a complete orthonormal basis for this Hilbert space: \begin{equation} \label{orthonormal-} \int^\infty_{-\infty}\!\!\!\! d\tilde{\vp}\,\, {\rm e}^{a^2\tilde{\vp}^2} \delta_n(\tilde{\vp})\, \delta_m(\tilde{\vp}) = \frac{a}{\sqrt{\pi}}\left({2a^2}\right)^n\! n!\,\delta_{nm}\,, \end{equation} (where we used the 2$^{\rm nd}$ eqn in \eqref{delta}) so that if $\tilde{V}(\tilde{\vp}) \in\Lm-$ and \begin{equation} \label{tg} \tilde{g}_n = \frac{\sqrt{\pi}}{2^na^{2n+1}n!} \int^\infty_{-\infty}\!\!\!\!
d\tilde{\vp}\,\, {\rm e}^{a^2\tilde{\vp}^2} \delta_n(\tilde{\vp})\, \tilde{V}(\tilde{\vp})\,, \end{equation} the norm-squared of the remainder vanishes as we extend to an infinite series, \ie \begin{equation} \label{completeness-proof-} \int^\infty_{-\infty}\!\!\!\! d\tilde{\vp}\,\, {\rm e}^{a^2\tilde{\vp}^2} \left( \tilde{V}(\tilde{\vp}) - \sum_{n=0}^N \tilde{g}_n\, \delta_n(\tilde{\vp})\right)^{\!2}\to0\quad{\rm as}\quad N\to\infty\,. \end{equation} This structure is the generalisation we are looking for. \subsection{Quantisation condition} \label{sec:quantisation} Although we cannot exclude the solutions outside $\Lm-$ by their large field RG properties, we can exclude them by fiat. We thus choose, \emph{as part of the definition of quantisation}, to insist that the bare interactions must lie in $\Lm-$. If we consider a finite sum of the basis operators \eqref{delta} then this quantisation condition is clearly respected by the RG at the linear level, since the operators evolve multiplicatively. Indeed if at the bare scale $\Lambda= {\Lambda_0} $, $\delta_n(\tilde{\vp})$ appears linearly with a sufficiently small coupling $ \tilde{g}_n = g_n / {\Lambda_0} ^{5+n} $, then at some other scale it will still take this form but with $\tilde{g}_n=g_n/\Lambda^{5+n}$ (where $g_n$ is held fixed). If an infinite number of couplings are switched on, then by our quantisation condition we require: \begin{equation} \label{tv-bare} \tilde{V}_{ {\Lambda_0} }(\tilde{\vp}) = \sum_{n=0}^\infty \tilde{g}_n\, \delta_n(\tilde{\vp})\ \in \Lm-\,. \end{equation} Again, if $\tilde{V}$ is small enough to trust the linear RG evolution, then at another scale $\tilde{V}_\Lambda(\tilde{\vp})$ takes the same form with $ {\Lambda_0} $ replaced by $\Lambda$ (\ie both explicitly, and implicitly in the scaled quantities): \begin{equation} \label{tv-evolved} \tilde{V}_{\Lambda}(\tilde{\vp}) = \sum_{n=0}^\infty \tilde{g}_n\, \delta_n(\tilde{\vp})\,. 
\end{equation} Using \eqref{orthonormal-}, we can compute the norm-squared of the evolved potential: \begin{equation} \label{tv-norm-squared} \int^\infty_{-\infty}\!\!\!\! d\tilde{\vp}\,\, {\rm e}^{a^2\tilde{\vp}^2} \tilde{V}_\Lambda^2(\tilde{\vp}) = \frac{a}{\Lambda^{10}\sqrt{\pi}} \sum_{n=0}^\infty n!\, g_n^2 \left(\frac{2a^2}{\Lambda^2}\right)^{\!n}\,. \end{equation} By \eqref{tv-bare}, the series on the right hand side converges for $\Lambda=\Lambda_0$. We thus see that $\tilde{V}_\Lambda(\tilde{\vp})\in\Lm-$ and remains small for all $\Lambda\ge {\Lambda_0} $. This is why we interpret the quantisation condition $\tilde{V}_\Lambda(\tilde{\vp})\in\Lm-$ as operating at the bare level. Since all the couplings $g_n$ are relevant, we set them to be finite at physical scales, whence they parametrise the most general RG trajectory. The above properties ensure that the Wilsonian effective interaction continues to satisfy the quantisation condition as $\Lambda\to\infty$. Indeed $\tilde{V}_\Lambda(\tilde{\vp})\to0$ in this limit, \ie it emanates from the Gaussian fixed point, as it should to describe the RG trajectory. Like any continuum limit, it can be regarded conceptually as existing in its own right, without the need to postulate a microscopic theory. However if we do entertain that possibility, then the quantisation condition provides a hint as to the form this microscopic theory would have to take. On the other hand the generic case will be that the $g_n$ are such that the series \eqref{tv-norm-squared} has a finite radius of convergence $1/\Lambda=1/(a\Lambda_\mathrm{p})$ where, by \eqref{tv-bare}, $a\Lambda_\mathrm{p}\le {\Lambda_0} $. Then $\tilde{V}_\Lambda(\tilde{\vp})\notin\Lm-$ for all $\Lambda<a\Lambda_\mathrm{p}$, although also generically as $\Lambda$ decreases, the linearised approximation breaks down. In any case once $\tilde{V}_\Lambda(\tilde{\vp})\notin\Lm-$, the expansion over the basis \eqref{delta} no longer converges. 
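In fact the critical scale can be read off directly from the large-$n$ behaviour of the couplings: by the root test, the series in \eqref{tv-norm-squared} converges precisely when $\Lambda^2>2a^2\limsup_{n\to\infty}\left(n!\,g_n^2\right)^{1/n}$, so that \begin{equation} \Lambda^2_\mathrm{p} = 2\limsup_{n\to\infty}\left(n!\,g_n^2\right)^{1/n}\,. \end{equation}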
There are two possible reasons for $\tilde{V}_\Lambda(\tilde{\vp})$ exiting $\Lm-$: either $\tilde{V}_\Lambda(\tilde{\vp})$ itself has developed divergences, or it grows too fast for large $\tilde{\vp}$ so that the integral in \eqref{tv-norm-squared} no longer converges for $\tilde{\vp}\to\pm\infty$. In the former case the flow ceases to exist, as we anticipated earlier by using the heat equation. We will see an explicit example later. In the latter case its evolution can still be described by the appropriate flow equation, namely \eqref{scaled-flow-V}, more generally \eqref{pol-} or \eqref{Gamma.flow-}. Since the flow is first order in $\Lambda$, it can be uniquely determined by supplying as boundary condition the expansion over the basis, for any $\Lambda>a\Lambda_\mathrm{p}$. At a formal level, we can still write $\tilde{V}_\Lambda(\tilde{\vp})$ as an expansion over the basis, even for $\Lambda<a\Lambda_\mathrm{p}$. Indeed at the linearised level it will continue to be \eqref{tv-evolved}, since each term separately satisfies \eqref{scaled-flow-V}. However in this region we need a prescription for resumming the series. We will see that this is provided by working in conjugate momentum space. The eigenoperators have novel physical properties. Analogously to \eqref{physical-OnL}, we identify the dimensionful bare operator $\dd{ {\Lambda_0} }n$ as the conjugate to the dimension $5\!+\!n$ unscaled coupling $g_n$ in the bare action. 
Thus, either directly from its dimension \eqref{delta-dim} or by re-expressing the coupling and using \eqref{scale}, \begin{equation} \label{def-physical} \dd{ {\Lambda_0} }n = { \delta_n(\varphi/ {\Lambda_0} )}/{\Lzp{1+n}}\,, \end{equation} and hence (using $a = {\Lambda_0} /\sqrt{2\Omega_ {\Lambda_0} }$): \begin{equation} \label{physical-dnL} \dd{ {\Lambda_0} }{n} := \frac{\partial^n}{\partial\varphi^n}\, \dd{ {\Lambda_0} }{0}\,, \qquad{\rm where}\qquad \dd{ {\Lambda_0} }0 := \frac{1}{\sqrt{2\pi\Omega_ {\Lambda_0} }}\,\exp\left(-\frac{\varphi^2}{2\Omega_ {\Lambda_0} }\right)\,. \end{equation} If we restore $\hbar$, it multiplies the right hand side of \eqref{eigen+}, similarly \eqref{eigen-} or \eqref{flow-V}, and thus makes its appearance as the combination $\Omega_ {\Lambda_0} \propto\hbar\, \Lzp2$. We see that the operators are ``evanescent'' \cite{Bollini:1973wu} in the sense that for fixed field $\varphi$, the operators vanish as the UV cutoff is removed ($ {\Lambda_0} \to\infty$). They are also non-perturbative in $\hbar$ with a similar functional form in this respect to instanton \cite{Belavin:1975fg,tHooft:1976snw} or renormalon \cite{tHooft:1977xjm} contributions. By construction, $V=\dd\Lambda{n}$ is a solution of the unscaled flow equation \eqref{flow-V}. A general solution of the linearised RG is the sum of these with constant coefficients $g_n$: \begin{equation} \label{expand-V} V(\varphi,\Lambda)= \sum_{n=0}^\infty g_n \,\dd\Lambda{n}\,. \end{equation} This is nothing but the sum \eqref{tv-evolved} in dimensionful terms (\ie the same except for overall multiplication by $\Lambda^4$). Since by \eqref{tv-bare}, the sum converges for all $\Lambda\ge {\Lambda_0} $, it follows that even for an infinite number of non-zero couplings, the potential inherits the properties above, \ie it is non-perturbative in $\hbar$, and $V(\varphi,\Lambda)\to0$ as $\Lambda\to\infty$, \ie the full potential is evanescent. 
Note that this property is logically distinct from the `relevancy' property $\tilde{V}_\Lambda(\tilde{\vp})\to0$ in this limit, established below \eqref{tv-norm-squared}, \cf the discussion for normal field theory below \eqref{physical-On}. Despite the description so far of an essentially UV structure, there is nevertheless a dramatic imprint on the far IR limit, that is, the continuum physics. Since the scaled eigenoperator is form invariant under the linearised RG, the corresponding dimensionful (and automatically renormalized) operator in the IR cutoff Legendre effective action is just \begin{equation} \label{physical-dnk} \dd{k}{n} = \frac{\partial^n}{\partial\varphi^n}\, \dd{k}{0}\,, \qquad{\rm where}\qquad \dd{k}0 = \frac{1}{\sqrt{2\pi\Omega_k}}\,\exp\left(-\frac{\varphi^2}{2\Omega_k}\right)\,. \end{equation} Removing the IR cutoff gives us the physical operators in an $\mathbb{R}^4$ spacetime: \begin{equation} \label{physical-dn} \lim_{k\to0} \dd{k}n = \dd{}n \,, \end{equation} \ie the $n^{\rm th}$ derivative of the delta-function.\footnote{The unit normalization here explains our choice in \eqref{delta}.} If we keep only a finite number of couplings, then, since these interactions have support only at vanishing field amplitude, the physics of the renormalized theory is presumably trivial: effectively just a free theory. This is true in a flat spacetime of infinite extent only when we remove the IR cutoff. In sec. \ref{sec:compact} we will see that on a homogeneous non-trivial spacetime (with inherent length scales), the amplitude is only suppressed. However once the manifold is sufficiently asymmetric, the physical operator fails to exist because the flow to the IR ends prematurely.
\begin{figure}[ht] \centering \includegraphics[scale=0.35]{tadpoles.png} \caption{The renormalized eigenoperator is the bare one plus its quantum corrections at linearised level.} \label{fig:tadpoles} \end{figure} The same distributions \eqref{physical-dn} are reached by taking the $\hbar\to0$ limit. In this sense the dynamics is always essentially and non-perturbatively quantum: there is no classical limit. Let us show how the passage from bare \eqref{physical-dnL} to renormalized \eqref{physical-dnk} can nevertheless be understood in terms of Feynman diagrams. The solution to \eqref{d-Gamma.flow-} can be written as: \begin{equation} \label{tadpoles-evolution} \int_x \dd{k}n = \exp\left(-\frac{1}{2}\,\text{tr}\left[\Delta^ {\Lambda_0} _k\cdot \frac{\delta^{2} }{\delta\varphi\delta\varphi}\right]\right) \int_x \dd{ {\Lambda_0} }n\,. \end{equation} The expansion of the exponential gives the expected 1PI Feynman diagrams, as illustrated in fig. \ref{fig:tadpoles}, where the propagator for each tadpole, $-\Delta^ {\Lambda_0} _k$, is defined as in \eqref{sum-ruled-Delta}, and has the sign required from \eqref{total-}. On the other hand the bare eigenoperator \eqref{physical-dnL} can be written \begin{equation} \label{bare-delta-Omega} \dd{ {\Lambda_0} }n = \exp\left(\frac12\Omega_ {\Lambda_0} \frac{\partial^2}{\partial\varphi^2}\right) \dd{}n\,, \end{equation} as can be seen from \eqref{Fourier} and \eqref{general-sol}. Indeed, translating the Fourier transform to unscaled variables using \eqref{def-physical} gives \begin{equation} \label{dn-Fourier} \dd{ {\Lambda_0} }n = \int^\infty_{-\infty}\!\! \frac{d\uppi}{2\pi}\, (i\uppi)^n\, \mathrm{e}^{-\frac12\uppi^2\Omega_ {\Lambda_0} +i\uppi\varphi}\,, \end{equation} after which the result follows by pulling the $\Omega_ {\Lambda_0} $ piece outside the integral. 
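The last step is just the elementary identity \begin{equation} \exp\left(\frac12\Omega_ {\Lambda_0} \frac{\partial^2}{\partial\varphi^2}\right)\mathrm{e}^{i\uppi\varphi} = \mathrm{e}^{-\frac12\uppi^2\Omega_ {\Lambda_0} }\,\mathrm{e}^{i\uppi\varphi}\,, \end{equation} each $\partial^2/\partial\varphi^2$ acting on the plane wave supplying a factor $-\uppi^2$.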
Thus \begin{equation} \label{bare-delta-prop} \int_x \dd{ {\Lambda_0} }n = \exp\left(\frac{1}{2}\,\text{tr}\left[\Delta^ {\Lambda_0} \cdot \frac{\delta^{2} }{\delta\varphi\delta\varphi}\right]\right) \int_x \dd{}n\,. \end{equation} Combining this and \eqref{tadpoles-evolution}, and using \eqref{sum-ruled-Delta}, we see that the renormalized operator is given by \eqref{bare-delta-Omega} with $ {\Lambda_0} $ replaced by $k$, and thus by the expression \eqref{physical-dnk}. \subsection{General RG flows of the potential at first order in the couplings} \label{sec:general} The situation becomes more subtle when an infinite number of couplings are switched on: as well as solutions that fail to make it to the far IR, there is an infinite dimensional space of solutions where the physical (\ie $k=0$) interaction has support on finite field amplitude. However if, at scale $k$, the (total) interaction lies inside $\Lm-$, we know that, written in dimensionful terms, it must vanish faster than $\exp(-a^2\varphi^2/2k^2)/\sqrt{\varphi}$ for large $\varphi$, which implies that large amplitudes remain significantly damped. In particular if the interaction remains in $\Lm-$ for all $k>0$, then the dimensionful interaction must vanish faster than any such exponential at large $\varphi$. We furnish an example that resolves a puzzle with the form of the physical operators \eqref{physical-dn} at the linear level. The Gaussian fixed point is clearly invariant under the shift of the field by a space-time constant: $\varphi(x)\mapsto\varphi(x)+\varphi_0$. At first sight this symmetry is broken by the operators \eqref{physical-dn}, all of which constrain $\varphi$ to zero amplitude. Note that this is not forced by the restriction to be integrable under the measure ${\rm e}^{+a^2\tilde{\vp}^2}$ at the appropriate scales.
In fact this breaking is illusory since in the bare action we can add an infinite number of eigenoperators: \begin{equation} \label{delta-shift+RG} \tilde{g}_m\, \delta_m(\tilde{\vp}+\tilde{\vp}_0) = \tilde{g}_m \sum_{n=0}^\infty \frac{\tilde{\vp}_0^n}{n!}\,\delta_{n+m}(\tilde{\vp})\,, \end{equation} where, from the first of \eqref{delta}, we have noted that \begin{equation} \label{d-delta} \partial_{\tilde{\vp}}\,\delta_n(\tilde{\vp}) = \delta_{n+1}(\tilde{\vp})\,. \end{equation} We see that the corresponding series in \eqref{tv-norm-squared} has an infinite radius of convergence and thus \eqref{delta-shift+RG} remains in $\Lm-$ for all $k>0$. (As with the discussion at the end of sec. \ref{sec:plus}, $k=0$ is excluded.) Under RG evolution $\delta_{n+m}(\tilde{\vp})$ supplies $( {\Lambda_0} /k)^{5+m+n}$ which is precisely right to convert $\tilde{g}_m\tilde{\vp}_0^n$ from scaled quantities at $ {\Lambda_0} $ into scaled quantities at $k$. Therefore this shifted operator is respected by the RG at linearised order: \eqref{delta-shift+RG} is form invariant under change of scale. Repeating the analysis \eqref{def-physical} and \eqref{physical-dnk}, we thus find that the physical operator also exists and takes the form: \begin{equation} \label{renormOpshifted} \lim_{k\to0}\ddp{k}n{\varphi+\varphi_0} = \ddp{}n{\varphi+\varphi_0}\,. \end{equation} We can connect this observation to the most general form of the physical potential $V_\mathrm{p}(\varphi)$ at the linearised level, when it exists. Indeed for solutions that exist for all $\Lambda\ge0$, we have that \begin{equation} \label{general-sol-V} V(\varphi,\Lambda) = \int^\infty_{-\infty}\!\!\!\!\!d\varphi_0\, V_\mathrm{p}(\varphi_0)\,\ddp\Lambda0{\varphi-\varphi_0}\,, \end{equation} since this clearly satisfies \eqref{flow-V}, whilst from \eqref{renormOpshifted} we see it satisfies the required boundary condition $V(\varphi,0)=V_\mathrm{p}(\varphi)$. 
We see that $\ddp\Lambda0{\varphi-\varphi_0}$ plays the r\^ole of a Green's function, but in \emph{theory space}, giving the form of the potential at any cutoff scale in terms of its \emph{final} functional form. By Taylor expanding $\ddp\Lambda0{\varphi-\varphi_0}$ about $\varphi$, we recover the expansion \eqref{expand-V}, but also find a formula for the dimensionful couplings $g_n$ in terms of the physical potential: \begin{equation} \label{gnVp} g_n = \frac{(-)^n}{n!}\int^\infty_{-\infty}\!\!\!\!\!d\varphi\,\varphi^n\, V_\mathrm{p}(\varphi) \end{equation} (renaming $\varphi_0$ as $\varphi$). Actually, substituting the second of \eqref{delta} into \eqref{tg} and using the expression \eqref{On} for the eigenoperator in normal scalar field theory we also have that\footnote{Similarly the couplings \eqref{tg+} in normal field theory can be written as an overlap of the potential with the $\delta_{n}(\tilde{\vp})$.} \begin{equation} \tilde{g}_n = \frac{(-)^n}{n!}\int^\infty_{-\infty}\!\!\!\!\!d\tilde{\vp}\,\cO{n}(\tilde{\vp}) \tilde{V}_\Lambda(\tilde{\vp})\,, \end{equation} which in dimensionful variables gives, using \eqref{physical-OnL}, \begin{equation} \label{gnVL} g_n = \frac{(-)^n}{n!}\int^\infty_{-\infty}\!\!\!\!\!d\varphi\,\pO{n}_{\,\Lambda}(\varphi)\, V(\varphi,\Lambda)\,. \end{equation} Despite appearances, this expression is independent of $\Lambda$ (at the linear level at which we are operating). Associated to any physical potential $V_\mathrm{p}(\varphi)$ is the scale $\Lambda_\mathrm{p}$, which we can now regard as being a dynamical scale characteristic of this particular solution. As before it is defined through the following property of the evolved solution \eqref{general-sol-V}: \begin{equation} \label{VpLp} V(\varphi,\Lambda) \in\Lm- \qquad \forall \Lambda>a\Lambda_\mathrm{p}\,. \end{equation} This dynamical scale is the smallest non-negative value satisfying this equation. It can vanish for example if only finitely many $g_n$ are non-vanishing. 
Since we impose the quantisation condition \eqref{tv-bare}, which then holds for all $\Lambda> {\Lambda_0} $, a characteristic scale $\Lambda_\mathrm{p}=\infty$ can only be arranged by tuning the $g_n$ in a particular way as the overall UV cutoff is removed. For $\Lambda<a\Lambda_\mathrm{p}$, the sum \eqref{expand-V} does not converge. However, the corresponding expression in conjugate momentum space does make sense. Either from \eqref{dn-Fourier} (with $n=0$, and $ {\Lambda_0} $ replaced with $\Lambda$) and \eqref{general-sol-V}, or directly by Fourier transforming \eqref{flow-V}, \begin{equation} \label{fourier-sol} V(\varphi,\Lambda) = \int^\infty_{-\infty}\!\frac{d\uppi}{2\pi}\, \mathcal{V}_\mathrm{p}(\uppi)\, {\rm e}^{-\frac{\uppi^2}{2}\Omega_\Lambda+i\uppi\varphi} \,, \end{equation} where $\mathcal{V}_\mathrm{p}$ is the Fourier transform of $V_\mathrm{p}$, as is clear by setting $\Lambda=0$. From \eqref{dn-Fourier} and \eqref{expand-V}, \begin{equation} \label{fourier-expansion} \mathcal{V}_\mathrm{p}(\uppi) = \sum_{n=0}^\infty g_n (i\uppi)^n\,. \end{equation} Since the $g_n$ yield the series \eqref{tv-norm-squared}, which converges for $\Lambda>a\Lambda_\mathrm{p}$, we see that the above series has an infinite radius of convergence. Therefore $\mathcal{V}_\mathrm{p}$ is an entire function. Indeed we see that $\Lambda_\mathrm{p}$ characterises the behaviour of the couplings $g_n$ at large $n$, which from \eqref{tv-norm-squared} roughly behave as \begin{equation} g_n \sim \TRM{\frac{\Lambda_\mathrm{p}^{n+5}}{\sqrt{n!}}}\,. \end{equation} The expansion \eqref{fourier-expansion} is the Fourier transform of the formal $\Lambda\to0$ limit of \eqref{expand-V}, {\it viz.}\ ``$V_\mathrm{p}(\varphi) = \sum_{n=0}^\infty g_n \,\dd{}n$''. We see that the expansion of the potential in terms of its eigenoperators is most naturally expressed in conjugate momentum space, through \eqref{fourier-sol} and \eqref{fourier-expansion}. 
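As a numerical sanity check of \eqref{fourier-sol} (an aside, again assuming $\Omega_\Lambda=\Lambda^2/2a^2$), take the entire function $\mathcal{V}_\mathrm{p}(\uppi)=\sqrt{\pi}\,\Lambda_\mathrm{p}^5\,{\rm e}^{-\uppi^2\Lambda_\mathrm{p}^2/4}$; the $\uppi$ integral can then be done numerically and compared with the evolved Gaussian \eqref{exampleVL} of the next subsection:

```python
import math

a, Lp, Lam = 1.0, 1.2, 0.8
Omega = Lam * Lam / (2 * a * a)              # assumed convention for Omega_Lambda

def Vp_tilde(q):
    # entire function: the Fourier transform of Lp^4 exp(-phi^2/Lp^2)
    return math.sqrt(math.pi) * Lp ** 5 * math.exp(-q * q * Lp * Lp / 4)

def V(phi, N=4000, cut=20.0):
    # eq. (fourier-sol); the i*pi*phi phase leaves just a cosine by symmetry
    h = 2 * cut / N
    s = 0.0
    for i in range(N + 1):
        q = -cut + i * h
        s += Vp_tilde(q) * math.exp(-q * q * Omega / 2) * math.cos(q * phi)
    return s * h / (2 * math.pi)

def V_exact(phi):
    # the evolved Gaussian, cf. eq. (exampleVL)
    w = Lam * Lam + a * a * Lp * Lp
    return a * Lp ** 5 / math.sqrt(w) * math.exp(-a * a * phi * phi / w)

for phi in (0.0, 0.7, 2.0):
    assert abs(V(phi) - V_exact(phi)) < 1e-9
```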
By \eqref{VpLp} we know that asymptotically we have the leading behaviour for large $\varphi$: \begin{equation} \label{asympVpLp} V(\varphi,a\Lambda_\mathrm{p})\sim \exp\left(-\frac{a^2\varphi^2}{2a^2\Lambda_\mathrm{p}^2}\right) = \exp\left(-\frac{\varphi^2}{2\Lambda_\mathrm{p}^2}\right)\,, \end{equation} since by assumption the physical potential exists and thus the only allowed reason for exiting $\Lm-$ is the lack of large field convergence in the integral for the norm-squared. Taking the inverse Fourier transform and using \eqref{fourier-sol}, we thus find the $\uppi$ dependence of the physical potential corresponding to this large $\varphi$ limit: \begin{equation} \label{fourierVplargephi} \mathcal{V}_\mathrm{p}(\uppi) \sim {\rm e}^{-\uppi^2\Lambda_\mathrm{p}^2/4}\,. \end{equation} Fourier transforming this gives us the leading asymptotic dependence of the physical potential itself at large $\varphi$: \begin{equation} \label{Vplargephi} V_\mathrm{p}(\varphi)\sim {\rm e}^{-\varphi^2/\Lambda_\mathrm{p}^2}\,. \end{equation} This final result can be confirmed by substituting it into \eqref{general-sol-V}, which recovers \eqref{asympVpLp} but in a way where we clearly rely only on the large field behaviour of $V_\mathrm{p}$. We see therefore that $\Lambda_\mathrm{p}$ is a physical quantity, the \emph{amplitude suppression scale} that characterises the rate of exponential fall-off in the physical potential\footnote{At the linear level, keeping only potential interactions, the Legendre effective potential itself will be universal. In general such a potential is not universal \cite{Jackiw:1974cv} and instead one must appeal directly to equations of motion \cite{Nielsen:1975fs}.} at large $\varphi$. Our reason for including the non-universal factor $a$ in \eqref{VpLp} (and similar earlier equations) is finally apparent: it is so that $\Lambda_\mathrm{p}$ in this case is indeed universally related to a physical quantity. 
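The step from \eqref{Vplargephi} back to \eqref{asympVpLp} can also be checked numerically: convolving $V_\mathrm{p}\sim{\rm e}^{-\varphi^2/\Lambda_\mathrm{p}^2}$ with the Gaussian kernel of \eqref{general-sol-V} at $\Lambda=a\Lambda_\mathrm{p}$ (variance $\Omega_{a\Lambda_\mathrm{p}}=\Lambda_\mathrm{p}^2/2$, as assumed in these asides) exactly doubles the squared width:

```python
import math

a, Lp = 0.9, 1.0
Om = Lp * Lp / 2      # Omega at Lambda = a*Lambda_p, assuming Omega_L = L^2/(2 a^2)

def V(phi, N=4000, cut=12.0):
    # eq. (general-sol-V) at Lambda = a*Lambda_p, with Vp = exp(-phi^2/Lp^2)
    h = 2 * cut / N
    s = 0.0
    for i in range(N + 1):
        x = -cut + i * h
        s += (math.exp(-x * x / (Lp * Lp))
              * math.exp(-(phi - x) ** 2 / (2 * Om))
              / math.sqrt(2 * math.pi * Om))
    return s * h

# the squared width doubles: V(phi, a Lp) / V(0, a Lp) = exp(-phi^2/(2 Lp^2))
for phi in (0.5, 1.0, 2.0):
    assert abs(V(phi) / V(0.0) - math.exp(-phi * phi / (2 * Lp * Lp))) < 1e-9
```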
From here on we take \eqref{Vplargephi} as the primary definition of $\Lambda_\mathrm{p}$, whenever the physical potential exists. In sec. \ref{sec:compact} we will see another physical consequence of this scale. If we restore $\hbar$, it sits in front of $\Omega_{a\Lambda_\mathrm{p}}=\Lambda_\mathrm{p}^2/2$. Therefore \eqref{Vplargephi} establishes that even outside $\Lm-$ the potential, and in particular the physical potential, remains non-perturbatively quantum. Since \eqref{fourier-sol} is the general solution of \eqref{flow-V}, it gives the RG flow starting from any bare potential $V(\varphi, {\Lambda_0} )$, except of course that $\mathcal{V}_\mathrm{p}$ is no longer the Fourier transform of the physical potential if the flow ends prematurely. Rewriting the solution in terms of the Fourier transform of the bare potential, we have \begin{equation} \label{fourier-sol-fromBare} V(\varphi,\Lambda) = \int^\infty_{-\infty}\!\frac{d\uppi}{2\pi}\, \mathcal{V}(\uppi, {\Lambda_0} )\, \exp\left( \frac{\uppi^2}{4a^2}(\Lzp2-\Lambda^2)+i\uppi\varphi \right)\,. \end{equation} From this expression we see clearly why a generic choice of bare potential leads to the flow ending in a singularity: for sufficiently small $\Lambda$ the integrand diverges at large $\uppi$. If the integral first fails to converge at $\Lambda=a\Lambda_\mathrm{p}$, then precisely at this point the typical result will be a distributional $V(\varphi,a\Lambda_\mathrm{p})$. \subsection{Examples at first order in the couplings} \label{sec:examples} \begin{figure}[ht] \begin{center} $ \begin{array}{cc} \includegraphics[width=0.45\textwidth]{before.png} & \includegraphics[width=0.45\textwidth]{after.png} \\[-0.3cm] a\tilde{\vp} & a\tilde{\vp} \end{array} $ \end{center} \caption{Plotted in dashed red is the exact potential \eqref{exampleVL} normalized to $V(0,\Lambda)=1$, and in solid blue its finite sum up to and including $g_{20}$. 
The left panel is the situation when $\tilde{\Lambda}_\mathrm{p}=0.9$, \ie just inside $\Lm-$, while the right panel is the situation having just exited, with $\tilde{\Lambda}_\mathrm{p}=1.1$.} \label{fig:completeness} \end{figure} The simplest example nevertheless illustrates and confirms the general behaviour derived above. We need an entire function for $\mathcal{V}_\mathrm{p}$. We take just \eqref{fourierVplargephi} with coefficient $\Lambda_\mathrm{p}^5\sqrt{\pi}$, consistent with dimensions. Then \begin{equation} \label{exampleVp} V_\mathrm{p}(\varphi) = \Lambda_\mathrm{p}^4\, {\rm e}^{-{\varphi^2}/{\Lambda_\mathrm{p}^2}} \,, \end{equation} while from \eqref{fourier-expansion}, the odd-$n$ couplings vanish and the even-$n$ ones are given by \begin{equation} \label{exampleg2m} g_{2m} =\frac{\sqrt{\pi}}{m!4^m} \Lambda_\mathrm{p}^{5+2m}\,. \end{equation} One can confirm that these couplings are reproduced by \eqref{gnVp}, or \eqref{gnVL} using the formula below. Performing the integral in \eqref{fourier-sol} gives the evolved potential: \begin{equation} \label{exampleVL} V(\varphi,\Lambda) = \frac{a\Lambda_\mathrm{p}^5}{\sqrt{\Lambda^2+a^2\Lambda^2_\mathrm{p}}}\, \exp\left(-\frac{a^2\varphi^2}{\Lambda^2+a^2\Lambda^2_\mathrm{p}}\right)\,. \end{equation} We see explicitly that $V(\varphi,\Lambda)\in\Lm-$ only for $\Lambda>a\Lambda_\mathrm{p}$, exiting at $a\Lambda_\mathrm{p}$ through failure of the integral to converge at large $\varphi$. Computing the norm-squared integral gives \begin{equation} \label{example-norm-squared} \frac{\sqrt{\pi}\,\tilde{\Lambda}^{10}_\mathrm{p}}{a^9\sqrt{1-\tilde{\Lambda}^4_\mathrm{p}}}\,, \end{equation} where $\tilde{\Lambda}_\mathrm{p} = a\Lambda_\mathrm{p}/\Lambda$, which indeed can be expressed as the series in \eqref{tv-norm-squared} when $\Lambda>a\Lambda_\mathrm{p}$. The Hilbert space property, in particular \eqref{completeness-proof-}, is illustrated in fig. 
\ref{fig:completeness}, by comparing the exact result \eqref{exampleVL} to the finite sum, namely \eqref{expand-V} with the upper limit replaced by $N=20$. We can take the bare potential to be \eqref{exampleVL} for any $\Lambda= {\Lambda_0} >a\Lambda_\mathrm{p}$. Qualitatively, the property that allows it to survive all the way down to $\Lambda=0$ is that it is at least as spread out as the eigenoperators themselves (although if it is more spread out, then it exits $\Lm-$ through failure of the integral to converge at large $\varphi$, as we have seen). In particular therefore, for a physical potential to exist, the bare potential $\tilde{V}_ {\Lambda_0} (\tilde{\vp})\in\Lm-$ must decay for large $\tilde{\vp}$ as $\exp(-a_0^2\,\tilde{\vp}^2)$, where $1/2<a^2_0/a^2\le 1$, but also there can be no smaller-scale features in the bare potential. By contrast, if we take a bare potential with finer features than the eigenoperators, for example the more compact choice ($ {\Lambda_0} >a\Lambda_\mathrm{p}$): \begin{equation} \label{exampleVL0} V(\varphi,\Lambda_0) = \frac{a\Lambda_\mathrm{p}^5}{\sqrt{\Lambda_0^2-a^2\Lambda^2_\mathrm{p}}}\, \exp\left(-\frac{a^2\varphi^2}{\Lambda_0^2-a^2\Lambda_\mathrm{p}^2}\right)\,, \end{equation} then the flow fails before reaching $\Lambda=0$. By comparing to \eqref{exampleVL}, we see that for this example $V(\varphi,\Lambda)$ is just given by the above expression with $ {\Lambda_0} $ replaced by $\Lambda$. The couplings $g_{2m}$ are then those of \eqref{exampleg2m} but with a $(-)^m$ factor on the right hand side, and the norm-squared integral is the same as \eqref{example-norm-squared}. However this time the exit from $\Lm-$ is due to the fact that as $\Lambda$ approaches $a\Lambda_\mathrm{p}$, the width of the exponential vanishes; indeed \begin{equation} \label{sticky-end} \lim_{\Lambda\to a\Lambda_\mathrm{p}^+} V(\varphi,\Lambda) = \Lambda^5_\mathrm{p} {\sqrt{\pi}}\, \delta(\varphi)\,. 
\end{equation} Attempting to flow below this point by analytic continuation gives a complex answer in general, in this case pure imaginary: \begin{equation} \label{imaginary-V} V(\varphi,\Lambda) = i \frac{a\Lambda_\mathrm{p}^5}{\sqrt{a^2\Lambda^2_\mathrm{p}-\Lambda^2}}\, \exp\left(\frac{a^2\varphi^2}{a^2\Lambda^2_\mathrm{p}-\Lambda^2}\right)\,,\qquad\Lambda<a\Lambda_\mathrm{p}\,. \end{equation} For completeness, let us mention that by using \eqref{general-sol-V} and an appropriate choice of $V_\mathrm{p}$, one can generate flows $V(\varphi,\Lambda)$ that exist for all $\Lambda\ge0$ but which never enter $\Lm-$. For example choose \begin{equation} \label{bad1} V_\mathrm{p}(\varphi) = \frac1{\Lambda_\mathrm{p}^2+\varphi^2}\qquad\implies\qquad \mathcal{V}_\mathrm{p}(\uppi) = \frac{\pi}{\Lambda_\mathrm{p}}\,{\rm e}^{-\Lambda_\mathrm{p}|\uppi|}\,. \end{equation} Since the latter has no Taylor expansion, the couplings do not exist, \cf \eqref{fourier-expansion}. By \eqref{general-sol-V} or \eqref{fourier-sol}, \begin{equation} V(\varphi,\Lambda) = \frac{a\sqrt{\pi}}{\Lambda\Lambda_\mathrm{p}} \,{\rm Re} \left\{ {\rm e}^{(\tilde{\Lambda}_\mathrm{p}+ia\tilde{\vp})^2} {\rm Erfc}(\tilde{\Lambda}_\mathrm{p}+ia\tilde{\vp})\right\}\,, \end{equation} whose large $\varphi$ behaviour is the same as at $\Lambda=0$, \ie \eqref{bad1}. On the other hand, choose \begin{equation} \label{bad2} \mathcal{V}_\mathrm{p}(\uppi) = \frac1{1+\Lambda_\mathrm{p}^2\uppi^2}\qquad\implies\qquad V_\mathrm{p}(\varphi) = \frac{1}{2\Lambda_\mathrm{p}}\,{\rm e}^{-|\varphi|/\Lambda_\mathrm{p}}\,. \end{equation} In this case, the couplings exist ($g_n = \Lambda_\mathrm{p}^n\, \delta_{n={\rm even}}$) but clearly from \eqref{tv-norm-squared}, $V(\varphi,\Lambda)$ is never in $\Lm-$. Indeed from \eqref{general-sol-V} one finds its large $\varphi$ behaviour is again unchanged from what it was at $\Lambda=0$, namely \eqref{bad2}. 
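The transform pair in \eqref{bad1} is easily confirmed numerically (an aside; the inverse transform converges rapidly thanks to the exponential decay in $\uppi$):

```python
import math

Lp = 1.1

def Vp_inv(phi, N=60000, cut=40.0):
    # inverse transform of Vp~(q) = (pi/Lp) e^{-Lp|q|}, cf. eq. (bad1)
    h = 2 * cut / N
    s = 0.0
    for i in range(N + 1):
        q = -cut + i * h
        s += (math.pi / Lp) * math.exp(-Lp * abs(q)) * math.cos(q * phi)
    return s * h / (2 * math.pi)

# should recover the Lorentzian physical potential 1/(Lp^2 + phi^2)
for phi in (0.0, 0.8, 3.0):
    assert abs(Vp_inv(phi) - 1 / (Lp * Lp + phi * phi)) < 1e-5
```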
In both cases $V$ is never in $\Lm-$ because its large $\varphi$ decay is too weak for all $\Lambda$. The difficulty is making physical sense out of these behaviours. In the latter case, Green's functions and $S$ matrix elements do not exist because $V_\mathrm{p}$ is not differentiable at $\varphi=0$. In both cases, there is no well defined way to isolate relevant and irrelevant parts and thus to define what one means by the continuum limit. \subsection{Derivative eigenoperators} \label{sec:derivative-ops} Now we derive the form of the general eigenoperator, with spacetime derivative interactions. It will be sufficient to consider adding kinetic term interactions to \eqref{linearised-v}, to see the general pattern. Thus we set: \begin{equation} \label{linearised-k} \delta S^\Lambda = -\epsilon\! \int\!\!d^4x\,\left\{ V\!\left(\varphi(x),\Lambda\right)+ \frac12\left(\partial_\mu\varphi\right)^2K\!\left(\varphi(x),\Lambda\right)\right\} \,. \end{equation} Recall that the linearised flow is the same whether we consider this to be part of the Wilsonian or Legendre effective action. Note the overall sign. In view of the negative sign kinetic term, this is the natural sign for the interactions, \ie assuming $K>0$. Up until now the overall sign of the potential term in the action has not mattered;\footnote{The equations in the previous subsection are blind to this sign.} however, classical stability \TRM{would now require} that the potential \TRM{is} bounded above.\footnote{Without \TRM{this}, the consequent classical instability also leads \TRM{inevitably} to a pole in \eqref{Gamma.flow-}.} \TRM{Changing its sign as in \eqref{linearised-k} then returns it to being bounded below.} Working in scaled variables \eqref{scale}, $K=\tilde{K}$, the eigenoperators are defined by the $K$ component: \begin{equation} \tilde{K}(\tilde{\vp},t) = \left(\frac{\mu}{\Lambda}\right)^\lambda \tilde{K}(\tilde{\vp}) \end{equation} and $V$ component \eqref{lambda-v}. 
We thus find the simultaneous equations: \begin{eqnarray} \label{eigenK} -\lambda\, \tilde{K}(\tilde{\vp}) - \tilde{\vp}\, \tilde{K}' &=& \frac{\tilde{K}''}{2a^2}\,,\\ \label{eigenK-V} -\lambda\, \tilde{V}(\tilde{\vp}) -\tilde{\varphi}\, \tilde{V}' + 4\, \tilde{V} &=& \frac{\tilde{V}''}{2{a}^2}+2b\tilde{K}\,, \end{eqnarray} where we have set \begin{equation} b = \int\!\frac{d^4\tilde{p}}{(2\pi)^4}\, C(\tilde{p}^2)\,. \end{equation} Of course we still have the solutions $\tilde{V}(\tilde{\vp}) = \delta_n(\tilde{\vp})$, $\tilde{K}(\tilde{\vp})=0$. We also clearly have solutions $\tilde{V}= b \tilde{K}/2$. By comparing to \eqref{eigen-}, we see that these $O(\partial^2)$ eigenoperators thus take the form: \begin{equation} -\frac12\, \delta_n(\tilde{\vp})\left [\left(\tilde{\partial}_\mu\tilde{\vp}\right)^2 + b\, \right]\,,\qquad \lambda= 1+n\,, \end{equation} implying that these operators have dimension $3-n$. Clearly the $\tilde{K}$ and $\tilde{V}$ parts are in $\Lm-$. We can extend the definition of $\Lm-$ by stripping off the purely space-time derivative parts in this way. All the other (polynomial and Kummer function) solutions to \eqref{eigenK} and \eqref{eigenK-V} lie outside $\Lm-$ and thus are excluded from the bare action. Importantly note that the kinetic term $\left(\tilde{\partial}_\mu\tilde{\vp}\right)^2$ is not itself an eigenoperator, since a constant is not integrable under ${\rm e}^{+a^2\tilde{\vp}^2}$. Equivalently we can define $\Lm-$ to be the space of interactions that are integrable under ${\rm e}^{+a^2\tilde{\vp}^2_0}$, where we shift the field by a spacetime independent constant, $\tilde{\vp}(\tilde{x})\mapsto\tilde{\vp}(\tilde{x})+\tilde{\vp}_0$. So far we have been assuming that the interaction is localised, \ie all fields in the interaction have the same spacetime argument $x$. 
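Both families of solutions can be checked numerically. Taking $\delta_n(\tilde{\vp})=\partial^n_{\tilde{\vp}}\,{\rm e}^{-a^2\tilde{\vp}^2}$ (the overall normalisation is irrelevant for these linear equations), the statement that $\tilde{K}=\delta_n$ solves \eqref{eigenK} with $\lambda=1+n$ reduces to a Gaussian derivative identity, which the following aside verifies:

```python
import math

a = 0.8

def delta(n, x):
    # delta_n(x) = d^n/dx^n e^{-a^2 x^2} = (-a)^n H_n(a x) e^{-a^2 x^2}
    y = a * x
    H = [1.0, 2.0 * y]                     # physicists' Hermite polynomials
    for k in range(1, n):
        H.append(2.0 * y * H[k] - 2.0 * k * H[k - 1])
    return (-a) ** n * H[n] * math.exp(-y * y)

# residual of eq. (eigenK) for K~ = delta_n with lambda = 1+n,
# using delta_n' = delta_{n+1} and delta_n'' = delta_{n+2}:
for n in range(5):
    for x in (-1.5, 0.3, 2.0):
        res = (-(1 + n) * delta(n, x) - x * delta(n + 1, x)
               - delta(n + 2, x) / (2 * a * a))
        assert abs(res) < 1e-12
```

The same identity, shifted by the eigenvalue difference, underlies the pure potential solutions $\tilde{V}=\delta_n$, $\tilde{K}=0$.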
This latter definition of $\Lm-$ allows us to extend it to non-local interactions, although such interactions can only then be expanded in terms of the eigenoperators if they are quasi-local, \ie if they possess a space-time derivative expansion. Like the potential operators $\delta_n$, these $O(\partial^2)$ operators are all relevant, and thus all associated with renormalized couplings in the continuum limit (in this case $\tilde{g}_n = g_n/\Lambda^{1+n}$). Since $b>0$, the associated potential contribution naturally has the right sign for classical stability. As might have been expected, given that these eigenoperators are defined at a Gaussian fixed point, their scaling dimension equals the sum of the dimensions of the components: \begin{equation} \label{dimension-rule-K} 3-n = [\left(\partial_\mu\varphi\right)^2]+\left[\delta_n\right]\,, \end{equation} where the scaling dimension of the first term is also its engineering dimension, and the second is given by \eqref{delta-dim}. The dimensionful operators are given by multiplying by $\Lambda^{3-n}$ and thus take the form: \begin{equation} \label{K-physical} -\frac12\, \dd\Lambda{n}\left [\left({\partial}_\mu\varphi\right)^2 + b\Lambda^4\, \right]\,, \end{equation} and consequently, taking the IR limit $\Lambda\to0$, the physical operators are: \begin{equation} \label{K-renormalised} -\frac12\, \dd{}{n}\left({\partial}_\mu\varphi\right)^2 \,. \end{equation} It is straightforward to see how this generalises to arbitrary derivative interactions. We add to the effective Lagrangian a term \begin{equation} \epsilon L\!\left(\varphi,\Lambda\right) \sigma_p(\partial,\partial\varphi)\,, \end{equation} where $\sigma_p$ is some Lorentz invariant monomial with $2p$ space-time derivatives, of definite engineering dimension $d_p$, and where each instance of $\varphi$ appears differentiated at least once. 
Tadpole corrections will generate subleading terms $\sigma_{0\le\, p'<p}$ of lower dimension $d_{p'}$, which thus must also be added, together with their coefficient functions. For the eigenfunctions, the top function, $\tilde{L}(\tilde{\vp})$, satisfies the same equation as \eqref{eigen-} except that by scaling as in \eqref{scale}, the dimension $4$ is replaced by $4-d_p$. We thus find that the interactions in $\Lm-$ are again formed by setting $\tilde{L}(\tilde{\vp})\propto \delta_n(\tilde{\vp})$, where they form a basis for such $\sigma_p$ interactions. Similarly to \eqref{dimension-rule-K} their dimensions are thus $d_p-1-n$, while the dimension of the associated coupling is $5+n-d_p$. Thus again infinitely many of this tower of higher derivative operators are relevant. However for $d_p\ge 5$, the $n=d_p-5$ operator is marginal. And once $d_p\ge6$, those $n< d_p-5$ operators are irrelevant, and thus in the continuum limit have couplings that are determined by the relevant ones. The coefficient functions for the subleading terms will satisfy equations somewhat similar to \eqref{eigenK-V}, for which we want the special solution which will be tied to $\delta_n(\tilde{\vp})$. Since their dimension $d_{p'}<d_p$, they will appear in the dimensionful eigenoperators with positive powers of $\Lambda$ like in \eqref{K-physical}. Finally the physical operators will simply be \begin{equation} \label{derivative-operator-basis-renormalised} \dd{}n\,\sigma_p(\partial,\partial\varphi)\,. \end{equation} We see that the novel physical properties, namely non-perturbative in $\hbar$, evanescence and IR suppression, are also true of all the derivative interactions. Apart from the role of the polynomial basis \eqref{On} now being played by $\delta_n(\tilde{\vp})$, this structure closely mimics that of scalar field theory with positive kinetic term. 
Similarly therefore, we anticipate that a more convenient basis for the Hilbert space of interactions is to use the top term and discard the subleading corrections:\footnote{although we are discarding only the $\sigma_{p'<p}$ terms, not the crucial tadpole corrections to $\dd{}n$. Of course the maximal subset of $\sigma_p$ should be chosen so that \eqref{derivative-operator-basis-physical} are independent under integration by parts.} \begin{equation} \label{derivative-operator-basis-physical} \dd\Lambda{n}\,\sigma_p(\partial,\partial\varphi)\,, \end{equation} and with a slight abuse of terminology, classify these as relevant, marginal, or irrelevant. Thus for example we recognise that $\dd\Lambda0\,(\Box\varphi)^2$ is an irrelevant operator, $\dd\Lambda1\,(\Box\varphi)^2$ is marginal, and all the $\dd\Lambda{n>1}\,(\Box\varphi)^2$ are relevant. \section{Perturbation theory} \label{sec:perturbation-th} We have seen that already at the linear level, the structure is non-perturbative in $\hbar$, but nevertheless calculable. This is also true for corrections which can be developed as a perturbation theory in the couplings $g_n$, while staying non-perturbative in $\hbar$. That this can be done consistently rests upon the fact that, term by term, the corrections remain in $\Lm-$. Indeed, in these terms we will find differentials of the eigenoperators, which by \eqref{d-delta} trivially remain in $\Lm-$. As we will see in sec. \ref{sec:QG}, when applied to quantum gravity we can expect to obtain terms with $\delta_m(\tilde{\vp})$ times a positive integer power of $\tilde{\vp}$. This is again in $\Lm-$. In fact from \eqref{dn-Fourier} it is straightforward to derive \begin{equation} \label{dn-phi} \varphi\, \dd\Lambda{n} = -n\, \dd{\ \Lambda}{n-1}\,-\,\Omega_\Lambda\, \dd{\ \Lambda}{n+1} \end{equation} (which from \eqref{delta} is just the Hermite polynomial recurrence relation in disguise). 
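As an aside, \eqref{dn-phi} is easily verified numerically, assuming the dimensionful operators take the Gaussian derivative form $\partial^n_\varphi\left[{\rm e}^{-\varphi^2/2\Omega_\Lambda}/\sqrt{2\pi\Omega_\Lambda}\right]$ (consistent with \eqref{d-delta} and the Hermite structure just mentioned):

```python
import math

Om = 0.37                                  # Omega_Lambda (test value)

def d(n, phi):
    # delta^(n)_Lambda(phi): n-th derivative of the normalised Gaussian
    # of variance Omega_Lambda (assumed form of the dimensionful operators)
    t = phi / math.sqrt(Om)
    He = [1.0, t]                          # probabilists' Hermite polynomials
    for k in range(1, n):
        He.append(t * He[k] - k * He[k - 1])
    gauss = math.exp(-phi * phi / (2 * Om)) / math.sqrt(2 * math.pi * Om)
    return (-1) ** n * Om ** (-n / 2) * He[n] * gauss

# eq. (dn-phi): phi delta^(n) = -n delta^(n-1) - Omega delta^(n+1)
for n in range(1, 6):
    for phi in (-0.9, 0.2, 1.7):
        lhs = phi * d(n, phi)
        rhs = -n * d(n - 1, phi) - Om * d(n + 1, phi)
        assert abs(lhs - rhs) < 1e-12
```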
Finally, we will also obtain products of the eigenoperators. Clearly such products are again in $\Lm-$, and thus, if quasi-local, we can expand them back into the eigenbasis. We are thus faced generically with \begin{equation} \label{prod} \delta_m(\tilde{\vp})\,\delta_n(\tilde{\vp}) =\sum_{j=0}^\infty\cc{mn}j\, \delta_j(\tilde{\vp}) \end{equation} (where the fields are all at the same spacetime point). From \eqref{tg} and a Hermite linearization formula \cite{Gradshteyn1980}, the expansion coefficients are: \begin{equation} \cc{mn}j=\frac{2^{s-j}a^{2s-2j}}{2\pi^2j!} \Gamma(s-j)\Gamma(s-m)\Gamma(s-n)\, \delta_{j+m+n\,=\,{\rm even}} \,,\quad {\rm where}\quad 2s = j+m+n+1\,. \end{equation} However, using Stirling's formula for large $j$, we find \begin{equation} j! \left(\cc{mn}j\right)^2 \sim \frac{a^{2(m+n+1)}}{\sqrt{2\pi^3}} \frac{\ j^{m+n-\tfrac{1}{2}}}{(4a^2)^j}\,, \end{equation} therefore we see that this is a case where \eqref{tv-norm-squared} has a finite radius of convergence. Assume for the moment that \eqref{prod} appears in the bare action, thus with coupling $\tilde{g}_{mn}=g_{mn}/\Lzp{6+m+n}$, and that we evolve the product itself at the linearised level (this is not exactly how it arises, but this discussion will be useful shortly). It then leaves $\Lm-$ for $\Lambda\le a\Lambda_\mathrm{p}$ where \begin{equation} \label{prodLp} a\Lambda_\mathrm{p} = {\Lambda_0} /\sqrt{2}\,. \end{equation} To see this we note that the corresponding dimensionful coefficients are: \begin{equation} \label{c-dim} c^j_{mn} := \cc{mn}j\,\Lzp{j-m-n-1}\,, \end{equation} and then we use \eqref{tv-norm-squared} to compute the norm-squared at scale $\Lambda$. Having defined the dimensionful coefficients by \eqref{c-dim}, the dimensionless expansion evolves self-similarly, in particular $\tc{mn}j = c^j_{mn}/k^{j-m-n-1}$, this fact being guaranteed for the couplings by dimensional analysis. 
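The linearization formula can be confirmed numerically: summing $\sum_j \cc{mn}j\,\delta_j(\tilde{\vp})$ with the coefficients above and comparing pointwise with the product $\delta_m(\tilde{\vp})\,\delta_n(\tilde{\vp})$, taking $\delta_n(\tilde{\vp})=\partial^n_{\tilde{\vp}}\left[(a/\sqrt{\pi})\,{\rm e}^{-a^2\tilde{\vp}^2}\right]$ for the scaled operators (an assumed normalisation, consistent with \eqref{d-delta}):

```python
import math

a = 1.0

def delta(n, x):
    # scaled operators: delta_n(x) = d^n/dx^n [(a/sqrt(pi)) e^{-a^2 x^2}]
    y = a * x
    H = [1.0, 2.0 * y]                     # physicists' Hermite polynomials
    for k in range(1, n):
        H.append(2.0 * y * H[k] - 2.0 * k * H[k - 1])
    return (a / math.sqrt(math.pi)) * (-a) ** n * H[n] * math.exp(-y * y)

def c(j, m, n):
    # the expansion coefficients of eq. (prod)
    if (j + m + n) % 2:
        return 0.0
    s = (j + m + n + 1) / 2
    return (2 ** (s - j) * a ** (2 * s - 2 * j)
            / (2 * math.pi ** 2 * math.factorial(j))
            * math.gamma(s - j) * math.gamma(s - m) * math.gamma(s - n))

for m, n in ((0, 0), (1, 1), (1, 2)):
    for x in (0.0, 0.6, 1.4):
        series = sum(c(j, m, n) * delta(j, x) for j in range(80))
        assert abs(series - delta(m, x) * delta(n, x)) < 1e-8
```

The truncated sum converges geometrically, reflecting the finite radius of convergence discussed above.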
However the relation \eqref{prod} is not respected by the RG already at the linearised level: the evolved expansion \begin{equation} \label{prod-evolved} \left[\delta_m(\tilde{\vp})\,\delta_n(\tilde{\vp})\right]^ {\Lambda_0} _k := \sum_{j=0}^\infty\tc{mn}j\, \delta_j(\tilde{\vp})\,, \end{equation} is only equal to $\delta_m(\tilde{\vp})\,\delta_n(\tilde{\vp})$ at the original scale $k= {\Lambda_0} $. Since the $\cc{mn}j$ are pure numbers, we see that the relevant couplings $g_{mn}\, c^j_{mn}$ are large (for large enough $j$), as set by the bare cutoff scale $ {\Lambda_0} $. Since (at finite scales) the relevant couplings must be finite in the continuum limit, we see that we would need to compensate by adjusting the bare values of $g_j$; in other words they would need renormalization. In fact the single term $g_{mn}\delta_m(\tilde{\vp})\,\delta_n(\tilde{\vp})$ in the bare potential is anyway unacceptable at the linearised level, because such a potential is more compact than the eigenoperators. Thus the flow in fact ends at \eqref{prodLp} with a distributional effective potential. Indeed the bare potential can be rewritten in this case as \begin{equation} P\left(\partial_\varphi\right)\,\left(\dd\Lz0\right)^2\,, \end{equation} where the first factor is a degree $m+n$ polynomial in $\varphi$ derivatives. The second factor is proportional to \eqref{exampleVL0}, with $a\Lambda_\mathrm{p}$ again given by \eqref{prodLp}, and thus the whole combination evolves to this constant of proportionality times $P\left(\partial_\varphi\right)$ acting on \eqref{sticky-end}. Now we demonstrate how perturbation theory can be developed. Since we need results that are non-perturbative in $\hbar$, we must in effect sum over all Feynman diagrams to infinite order. What promises to keep this manageable is that we can nevertheless expand perturbatively in the couplings. To get insight we first proceed this way, working directly from the functional integral. 
Then we will turn to solving the flow equations, which provides a more elegant and more powerful approach for our purposes. \subsection{Second order in the couplings by summing Feynman diagrams} \label{sec:pert2-textbook} \begin{figure}[ht] \centering \includegraphics[scale=0.4]{melons.png} \caption{Feynman diagrams at second order in the coupling but all orders in $\hbar$.} \label{fig:melons} \end{figure} At second order in the couplings, the 1PI contribution will be computed from all such Feynman diagrams involving two bare operators at spacetime points $x_1$ and $x_2$, each taking the form of \eqref{derivative-operator-basis-physical} with $\Lambda= {\Lambda_0} $. If for illustrative purposes we keep all and only the non-derivative operators, then this can be written as the $\varphi$ dependent 1PI part of the functional integral \begin{equation} \label{V2nd} \frac12\int\!\! \mathcal{D}\varphi_q\ {\rm e}^{\frac{1}{2}\varphi_q\cdot \left(\Delta_{k}^ {\Lambda_0} \right)^{\!-1}\!\!\!\!\!\cdot \varphi_q}\, \int_{x_1} V\!\left(\varphi_q(x_1)\!+\!\varphi(x_1), {\Lambda_0} \right)\,\int_{x_2} V\!\left(\varphi_q(x_2)\!+\!\varphi(x_2), {\Lambda_0} \right)\,. \end{equation} The exponential of the fluctuation field $\varphi_q(x)$ has the wrong sign for promoting convergence. As mentioned at the beginning of sec. \ref{sec:minus}, at first sight this makes no sense and, as is clear from \eqref{sum-ruled-Delta}, the exponential divergence gets dramatically worse as $k\to {\Lambda_0} $, rather than suppressing the integral. However this latter divergence belongs only to the field independent part and we are not interested in that. By using \eqref{fourier-sol} at $\Lambda= {\Lambda_0} $, the dependence on the fluctuation field from the interactions can be isolated through ${\rm e}^{iJ\cdot\varphi_q}$, where \begin{equation} J(z) = i\sum_{j=1,2}\uppi_j\,\delta(x_j-z)\,, \end{equation} and $\uppi_j$ is the corresponding conjugate momentum. 
Performing the now-Gaussian functional integral gives \begin{equation} \label{V2ndResult} \frac12\int_{x_1,x_2}\int \frac{d\uppi_1d\uppi_2}{(2\pi)^2}\, \mathcal{V}_\mathrm{p}(\uppi_1, {\Lambda_0} )\mathcal{V}_\mathrm{p}(\uppi_2, {\Lambda_0} )\, {\rm e}^{-\frac12\uppi_iM_{ij}\uppi_j+i\uppi_i\varphi(x_i)} \Big|_{\rm 1PI}\,. \end{equation} Anticipating that the dimensionful couplings $g_n$ will now run with scale, we set them to their bare values $g_n( {\Lambda_0} )$, or equivalently through \eqref{fourier-expansion}, set $\mathcal{V}_\mathrm{p}$ to its bare value. We have also introduced the $O(\hbar)$ $2\!\times\!2$ matrix \begin{equation} M = \begin{pmatrix} \Omega_k & -\Delta^{\! {\Lambda_0} }_k(x_1,x_2)\\ -\Delta^{\! {\Lambda_0} }_k(x_1,x_2) & \Omega_k \end{pmatrix}\,. \end{equation} The $\Omega_k$ entries arise in the same way as in \eqref{tadpoles-evolution}, and thus re-sum the tadpole graphs in fig. \ref{fig:tadpoles}, turning the constituent bare eigenoperators into renormalized ones. Expanding perturbatively in $\Delta^{\! {\Lambda_0} }_k(x_1,x_2)$ generates the graphs in fig. \ref{fig:melons} that connect the two renormalized eigenoperators. Finally, the restriction to 1PI means that one should subtract the terms zeroth and first-order in $\Delta^{\! {\Lambda_0} }_k(x_1,x_2)$. If individual eigenoperator contributions were representative of the whole, for example if only a finite number of couplings were non-vanishing, we see via \eqref{fourier-expansion} that the $\uppi$ integral in \eqref{V2ndResult} would diverge as soon as $M$ is no longer positive definite. Since $\Delta^{\! {\Lambda_0} }_k(x_1,x_2)$ is a decreasing function of $|x_1-x_2|$,\footnote{This is \eg clear from the fact that $\Delta^{\! {\Lambda_0} }(r)-\Delta^{\! {\Lambda_0} }(r') > \Delta^k(r)-\Delta^k(r')$ for $r=|x_1-x_2|<r'=|x'_1-x'_2|$.} this happens first at coincident points where \begin{equation} \label{coincidentDelta} \Delta^{\! 
{\Lambda_0} }_k(x_1,x_1) = \Omega_ {\Lambda_0} - \Omega_k = \frac{ {\Lambda_0} ^2-k^2}{2a^2}\,, \end{equation} meaning that $k$ could not be lowered below $ {\Lambda_0} /\sqrt{2}$, as in \eqref{prodLp}. We recognise that the flow has broken down for the reasons given in the previous subsection. But operator mixing will switch on all couplings, which furthermore will run with scale. Their bare values will be weighted by the appropriate power of $ {\Lambda_0} $ as set by dimensions (but such that the couplings nevertheless behave correctly so as to access the Gaussian continuum limit). At the bare level, for large $\uppi$, we therefore expect something like \begin{equation} \mathcal{V}_\mathrm{p}(\uppi, {\Lambda_0} ) \sim {\rm e}^{-\uppi^2\Lzp2/4c^2_0}\,, \end{equation} for some bare coefficient $c_0( {\Lambda_0} )>0$ (compare \eqref{fourierVplargephi}). Then provided $c_0<a$, the same arguments as in \eqref{coincidentDelta} show that \eqref{V2ndResult} would be well defined for all $k\ge0$. However, as well as resorting to guesswork, we are also ignoring the contributions from the (marginally) relevant derivative operators \eqref{derivative-operator-basis-physical}, all of which will also contribute. \subsection{Second order in the couplings by solving the flow equation} \label{sec:pert2-flow} This complexity is much better handled by solving the flow equations directly. The simplest description arises from taking the 1PI part $\Gamma_\Lambda:=\Gamma_\Lambda^\infty$ of the Wilsonian effective action $S^\Lambda$ \cite{Morris:1993,Morris:1998,Morris:2015oca} since this will give us direct access to the $\beta$ functions induced by quantum corrections, and involves only the one scale, $\Lambda$. At the same time this solves for the IR cutoff Legendre effective action directly in the continuum limit. 
Writing $\Gamma^{(n)}$ for the part of $n^{\rm th}$ order in the couplings, and expanding the right hand side of \eqref{Gamma.flow-} to second order in the couplings, we have $\Gamma_\Lambda=\Gamma^{(1)}+\Gamma^{(2)}$, where\footnote{and from \eqref{DeltaUV} and \eqref{sum-rule}, $\Delta_\Lambda(p) =\Delta^\infty_\Lambda(p)= [1-C^\Lambda(p)]/p^2$.} \begin{equation} \label{flow-2nd} \dot{\Gamma}^{(1)}[\varphi]+\dot{\Gamma}^{(2)}[\varphi] = -\frac{1}{2}\,\text{tr}\left[\dot{\Delta}_\Lambda\cdot \frac{\delta^{2}\Gamma^{(1)} }{\delta\varphi\delta\varphi}\right] -\frac{1}{2}\,\text{tr}\left[\dot{\Delta}_\Lambda\cdot \frac{\delta^{2}\Gamma^{(2)} }{\delta\varphi\delta\varphi}\right] -\frac12\,\text{tr}\left[\dot{\Delta}_\Lambda\cdot\frac{\delta^{2}\Gamma^{(1)} }{\delta\varphi\delta\varphi}\cdot\Delta_\Lambda\cdot\frac{\delta^{2}\Gamma^{(1)}}{\delta\varphi\delta\varphi} \right]\,. \end{equation} As we have already emphasised, we need to work non-perturbatively in the loop expansion. It is therefore important to recall that the flow equations \eqref{pol-} and \eqref{Gamma.flow-} are indeed non-perturbative, in fact exact, RG equations. Written in the form \eqref{flow-2nd} the flow equation is now second order in the couplings, but it is still exact in $\hbar$. If we were to solve \eqref{flow-2nd} by iteration, we would reproduce the Feynman diagrams just considered; in particular the last term gives those in fig. \ref{fig:melons}. Now we again concentrate on the potential. We have seen that at first order we have the solution \begin{equation} \label{linearised} \Gamma_\Lambda[\varphi] = \Gamma^{(1)}= -\int_x V\!\left(\varphi(x),\Lambda\right)\,, \end{equation} where $V$ is given by \eqref{fourier-sol}, for some $\Lambda$-independent $\mathcal{V}_\mathrm{p}$, which when expanded as in \eqref{fourier-expansion} thus gives $\Lambda$-independent $g_n$. 
If the flow survives down to $\Lambda=0$, then $\mathcal{V}_\mathrm{p}$ is the Fourier transform of the resulting physical potential $V_\mathrm{p}$. When $V(\varphi,\Lambda)\in\Lm-$, we can instead expand it directly, as in \eqref{expand-V}. Beyond linearised order, we need to define the couplings by an appropriate renormalization condition. Since the IR cutoff ensures that $\Gamma_\Lambda$ has a spacetime derivative expansion, we choose to define the $g_n$ to be the Taylor expansion coefficients of the corresponding $\mathcal{V}_\mathrm{p}$, which thus now runs: \begin{equation} \label{fourier-expansion-running} \mathcal{V}_\mathrm{p}(\uppi,\Lambda) = \sum_{n=0}^\infty g_n\!(\Lambda) \left(i\uppi\right)^n\,. \end{equation} While $V\in\Lm-$, this is equivalent to requiring that $g_n(\Lambda)$ is the coefficient of the operator $\dd\Lambda{n}$. By the renormalization conditions, $\Gamma^{(2)}$ has no interaction potential. Thus the only piece that contributes to the running of the potential is the $O(\partial^0)$ part of the final term, which evaluates to $c \int_x \left(\partial^2_\varphi V\right)^2$, where $c$ is a universal coefficient, given by the one-loop diagram: \begin{equation} c = -\frac12\int\!\frac{d^4p}{(2\pi)^4}\, \Delta_\Lambda(p)\dot{\Delta}_\Lambda(p) = -\frac{1}{32\pi^2} \int^\infty_0\!\!\!\!\!\! dp\,\, \frac{\partial}{\partial p} C^2_\Lambda = -\frac{1}{32\pi^2} \,. \end{equation} By \eqref{physical-dnk} and \eqref{expand-V}, while $V\in\Lm-$ we have \begin{equation} \label{Vpp} \partial^2_\varphi V(\varphi,\Lambda) = \sum^\infty_{n=0} g_n\, \dd\Lambda{n+2}\,.
\end{equation} Converting to scaled operators using \eqref{def-physical}, applying the product formula \eqref{prod}, and then converting back, we thus find \begin{equation} \label{beta-physical} \dot{g}_j = \frac{\Lambda^{j-5}}{32\pi^2}\sum_{m,n=0}^\infty \frac{\cc{m+2,n+2}j }{\Lambda^{m+n}} g_mg_n\,, \end{equation} or in autonomous form: \begin{equation} \label{beta-scaled} \Lambda \frac{\partial}{\partial\Lambda} \tilde{g}_j = -(5+j)\tilde{g}_j-\frac{1}{32\pi^2}\sum_{m,n=0}^\infty {\cc{m+2,n+2}j } \tilde{g}_m\tilde{g}_n\,. \end{equation} Relying on the existence of flows in the reverse direction, we can now solve these equations for $\Lambda>\mu$ for any given choices of `initial' couplings $g_j(\mu)$. Indeed it is straightforward to solve \eqref{beta-physical} as a perturbative series in powers of $g_j(\mu)$: \begin{equation} g_j(\Lambda) = g_j(\mu) + \frac{1}{32\pi^2}\sum_{m,n=0}^\infty \frac{\cc{m+2,n+2}j }{m+n+5-j}g_m(\mu)g_n(\mu) \left(\Lambda^{j-m-n-5}-\mu^{j-m-n-5}\right) + O\left(g^3(\mu)\right)\,. \end{equation} Note that since $\tilde{g}_j(\Lambda) =g_j(\Lambda)/\Lambda^{j+5}$, order by order in perturbation theory all these solutions emanate from the Gaussian fixed point in the $\Lambda\to\infty$ limit, as required. We have only kept track of the $O(\partial^0)$ parts.\footnote{We cannot therefore directly compare this to the calculation in sec. \ref{sec:pert2-textbook}, where the induced higher derivative contributions are implicitly included at scales $k< {\Lambda_0} $, through $\Delta^ {\Lambda_0} _k(x_1,x_2)$.} The last term in \eqref{flow-2nd} provides a spacetime derivative expansion to all orders. Expanded over the basis \eqref{derivative-operator-basis-physical}, it will contribute to the $\beta$ functions for all the other relevant couplings. A continuum limit can therefore be achieved only by working simultaneously with all the relevant couplings, as expected on general grounds.
Defining their renormalization conditions in a similar way will mean that $\Gamma^{(2)}$ contains no relevant operators. Its only purpose is to solve for the couplings of the irrelevant operators which, in the continuum limit, are determined by the irrelevant operator parts extracted from the last term. Of course once we recognise that all the other relevant couplings must be switched on, the second-order $\beta$ functions above will receive contributions from them as well. We note that the arbitrarily negative powers of $\Lambda$ that appear in \eqref{beta-physical} prevent a smooth $\Lambda\to0$ limit from existing, unless all the couplings $g_n$ vanish in this limit. To show this we assume a $\Lambda\to0$ limit does exist for which $V(\varphi,0)\ne0$ and show that $\partial_\Lambda V(\varphi,\Lambda)$ must then diverge in this limit. First note that outside $\Lm-$, we would get the same formula by using \eqref{fourier-expansion-running} and \eqref{fourier-sol} and Fourier transforming the final $c \int_x \left(\partial^2_\varphi V\right)^2$ term. In fact having isolated the $O(\partial^0)$ part, this last term is the only term that survives the $\Lambda\to0$ limit on the right hand side of \eqref{flow-2nd}, and is non-vanishing if the couplings are non-vanishing in this limit. This implies that $\Lambda\partial_\Lambda V(\varphi,\Lambda)$ has a finite, non-vanishing limit, which in turn implies that $\partial_\Lambda V(\varphi,\Lambda)$ itself must diverge in the $\Lambda\to0$ limit. However, as we will address in sec. \ref{sec:infinite}, these couplings generate a mass $m$, which must then be handled non-perturbatively. Then it is no longer true that the evolution of the couplings $g_j$ is tied to the scale $\Lambda$, and we can expect that they generically freeze out at values set by the scale $m$, as $\Lambda\to0$. We similarly expect finite size effects (see sec. \ref{sec:compact}) to provide a freeze-out scale $1/L$ on a sufficiently homogeneous manifold.
\subsection{Higher orders and infinite order} \label{sec:infinite} \begin{figure}[ht] \centering \includegraphics[scale=0.4]{chains.png} \caption{Part of these Feynman diagrams needs to be resummed to all orders in the coupling.} \label{fig:chains} \end{figure} Although we have only sketched explicitly how to compute the $O(g^2)$ contributions (which however, through the $\beta$ functions \eqref{beta-physical} or \eqref{beta-scaled}, furnish higher order contributions and indeed resum these in the usual fashion), we trust the treatment of higher order contributions along these lines is also clear. We note that the scalar field theory will also be subject to some corrections that must be handled non-perturbatively in the IR. In particular, classes of Feynman diagrams made by replacing the propagators $\Delta_\Lambda$ by the chain of corrections shown in fig. \ref{fig:chains}, as well as providing higher order $\varphi$ interactions, induce a mass $m^2(\Lambda)$. From \eqref{fourier-sol}, setting $\varphi=0$ in \eqref{Vpp} and iterating \eqref{dn-phi}:\footnote{or consulting known formulae for Hermite polynomials} \begin{equation} \label{m2} m^2(\Lambda) = -\int^\infty_{-\infty}\!\frac{d\uppi}{2\pi}\, \uppi^2\, \mathcal{V}_\mathrm{p}(\uppi,\Lambda)\, {\rm e}^{-\frac{\uppi^2}{2}\Omega_\Lambda}\ =\ \frac{a}{\sqrt{\pi}\Lambda} \sum^\infty_{n=0} (2n+1)!!\left(-\frac{2a^2}{\Lambda^2}\right)^{\!n+1}\!\!\!\!\!\!g_{2n}(\Lambda)\,. \end{equation} The corresponding $O(\varphi^0)$ corrections in fig. \ref{fig:chains} thus appear as a power series in $m^2/p^2$. If we try to treat these order by order perturbatively in the couplings, when inserted into loop corrections (such as those of figs. \ref{fig:tadpoles} or \ref{fig:melons}) we obtain diagrams of ever increasing divergence as the IR cutoff $\Lambda\to0$. This problem is clearly related to the one we noted at the end of the previous subsection.
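For the reader's convenience, we note that the second equality in \eqref{m2} follows from the standard Gaussian moments. With $\Omega_\Lambda = \Lambda^2\!/2a^2$ we have \begin{equation} \int^\infty_{-\infty}\!\frac{d\uppi}{2\pi}\, \uppi^{2m}\, {\rm e}^{-\frac{\uppi^2}{2}\Omega_\Lambda} = \frac{(2m-1)!!}{\sqrt{2\pi}\,\Omega_\Lambda^{m+1/2}}\,, \end{equation} so that only the even couplings $g_{2n}$ in \eqref{fourier-expansion-running} survive, each contributing $(i\uppi)^{2n}=(-1)^n\uppi^{2n}$; setting $m=n+1$ and collecting the powers of $2a^2\!/\Lambda^2$ then assembles the double-factorial series quoted.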
Instead therefore we need to replace $\Delta_\Lambda(p)$ by $C_\Lambda(p)/(p^2+m^2)$, singling out $m(\Lambda)$ for non-perturbative treatment in the IR. At the same time we should use \eqref{m2} to eliminate one degree of freedom, for example $g_0(\Lambda)$, in favour of $m^2(\Lambda)$ in the equations. We recognise that the $-\tfrac{1}{2}\, \dd\Lambda{n}\left({\partial}_\mu\varphi\right)^2$ operators, through the chain of diagrams in fig. \ref{fig:chains}, similarly induce a wavefunction renormalization. These do not, however, give rise to IR divergences in the same way. Similarly all higher derivative operators \eqref{derivative-operator-basis-physical} are IR safe in this sense. Note that in a correctly formed continuum limit, all contributions from all operators are UV safe and do not need non-perturbative resummation in this regime, apart from using the $\beta$ function to resum the evolution of any marginally relevant coupling. This follows because such a continuum limit depends only on the (marginally) relevant couplings, whose scaled versions must vanish in the limit $\Lambda\to\infty$ so that the flow emanates from the Gaussian fixed point as required. \section{Unitarity and universality} \label{sec:unitarity} We are not of course claiming that scalar field theory with wrong sign kinetic term, when considered as a continuum quantum field theory in its own right, is free from physical problems. In Minkowski signature, the wrong sign for the kinetic term implies either a Hamiltonian unbounded from below, or a Fock space with negative norm states (see \eg sec. 8 of \cite{Arnone:2001iy}). Presumably related, the dimensions $[\delta_n]<1$, \cf \eqref{delta-dim}, all violate the unitarity bound. The existence of higher derivative relevant eigenoperators, \cf \eqref{derivative-operator-basis-renormalised}, leads to further concerns for unitarity.
Finally the fact that it is specified by an infinite number of relevant couplings is phenomenologically useless, and raises questions about universality as already touched on in sec. \ref{sec:non-deriv}. However it is natural to expect that these problems disappear when the structure is appropriately embedded into gravity, as discussed in the Introduction and sec. \ref{sec:QG}. \section{RG evolution on a manifold} \label{sec:compact} As we have seen, even at the linearised level, RG evolution plays a crucial r\^ole. By the quantisation condition, the eigenoperators are given at the bare level by the operators in eqn. \eqref{derivative-operator-basis-physical} with $\Lambda= {\Lambda_0} $, as given by the coefficient functions \eqref{physical-dnL}. At the linear level these composite operators do not interact with each other, but they nevertheless evolve under lowering the cutoff, by tadpole quantum corrections as in fig. \ref{fig:tadpoles}. In $\mathbb{R}^4$, by the eigenoperator property, they are form invariant under this evolution, with the inherent scale now equal to the infrared cutoff, as in eqn. \eqref{physical-dnk}, becoming the distributions \eqref{derivative-operator-basis-renormalised} in the physical limit in which the infrared cutoff is removed, \ie as $k\to0$. \subsection{Eigenoperators on a manifold} \label{sec:compact-eigen} On a (Euclidean) spacetime manifold $\mathcal{M}$ that is not $\mathbb{R}^4$, the bare operators are still the same, because these operators are defined at $ {\Lambda_0} $, the UV scale that is eventually taken to infinity, corresponding to vanishing distances where the spacetime is indistinguishable from $\mathbb{R}^4$. However the quantum corrections are modified at long distances by the spacetime geometry.
To be specific it is sufficient to consider the evolution of the potential operator $\dd {\Lambda_0} {n}$, as defined in \eqref{physical-dnL}, since a general eigenoperator is also made with this term, and the top part, \eqref{derivative-operator-basis-physical} with covariant derivatives as appropriate, evolves in the same way. The evolution will be given by \eqref{tadpoles-evolution}, where the propagation now takes place on the manifold (and thus also a $\sqrt{g}$ is included in the integral over $x$). Actually, until we know the form of the full theory of quantum gravity, we do not know for sure what replaces \eqref{tadpoles-evolution} in the general case.\footnote{For example whether $\varphi$ is conformally coupled to the background curvature, \cf sec. \ref{sec:QG}.} For the general arguments below we do not need the precise definition, only that it reduces to the flat space version when the background metric $g_{\mu\nu}\to\delta_{\mu\nu}$. Then in the fully worked example we choose the metric to be $\delta_{\mu\nu}$. On the other hand, since the bare operator is the same, the identity \eqref{bare-delta-Omega} still holds and thus the bare operator can still be expressed as \eqref{bare-delta-prop}, \emph{where the integration is still over} $\mathbb{R}^4$. Thus combining \eqref{bare-delta-Omega} and \eqref{tadpoles-evolution}, the quantum corrections above $k$ no longer precisely cancel to give \eqref{bare-delta-Omega} with $ {\Lambda_0} $ replaced by $k$, but leave a modified version where: \begin{equation} \label{delta-M-Omega} \dd{\!k, {\Lambda_0} }n = \exp\left(\frac12\,\Omega_{k, {\Lambda_0} }(x)\, \frac{\partial^2}{\partial\varphi^2}\right) \dd{}n\,, \end{equation} and \begin{equation} \label{Omega-M-kL} \Omega_{k, {\Lambda_0} }(x) = |\langle \varphi(x) \varphi(x) \rangle |_{\mathbb{R}^4}-|\langle \varphi(x) \varphi(x) \rangle |_{\mathcal{M}}\,. 
\end{equation} Here the first term is $\Omega_ {\Lambda_0} $, as defined in \eqref{Omega}, while the second term is from propagation on the manifold $\mathcal{M}$ and is regulated by $C^ {\Lambda_0} _k$. In general the second term depends on the position of the point $x$ in $\mathcal{M}$, and thus $\dd{\!k, {\Lambda_0} }n$ has $x$ dependence through $\Omega_{k, {\Lambda_0} }(x)$ as well as through its dependence on the field $\varphi(x)$. Consequently, the operators are no longer form invariant, but pick up ``finite size'' corrections, and will retain some dependence on the UV regularisation while $ {\Lambda_0} $ is finite. However we can expect that $\Omega_{k, {\Lambda_0} }(x)$ becomes independent of the latter in the limit $ {\Lambda_0} \to\infty$; in particular the operators will again be automatically renormalized, because the tadpole corrections will continue to wipe out all dependence on higher scales providing $k\gg 1/L$, where $L$ is a characteristic length scale for the manifold. This will continue to work as $k$ is lowered, until $k$ is comparable to $1/L$, after which the infrared properties should primarily be set by the geometry. In particular in the limit that $k\to0$, we expect that $\Omega_{k, {\Lambda_0} }(x)$ will therefore become a finite universal function of this geometry. We call this function \begin{equation} \label{Omega-p} \Omega_\mathrm{p}(x) := \lim_{ {\Lambda_0} \to\infty\atop k\to0} \Omega_{k, {\Lambda_0} }(x)\,. \end{equation} By comparing \eqref{bare-delta-Omega} and \eqref{physical-dnL}, we see immediately that evaluating \eqref{delta-M-Omega} gives again the same form for eigenoperators on $\mathcal{M}$ as in \eqref{physical-dnk}, but with $\Omega_k$ replaced by $\Omega_{k, {\Lambda_0} }(x)$.
Taking the limits \eqref{Omega-p} we get the physical eigenoperators $\dd{\mathrm{p}}{n}$, which are thus given by \begin{equation} \label{physical-p} \dd\mathrm{p}{n} = \frac{\partial^n}{\partial\varphi^n}\, \dd\p0\,, \qquad{\rm where}\qquad \dd\p0 = \frac{1}{\sqrt{2\pi\Omega_\mathrm{p}}}\,\exp\left(-\frac{\varphi^2}{2\Omega_\mathrm{p}}\right)\,. \end{equation} Evidently, $\Omega_\mathrm{p}=0$ if the manifold is $\mathbb{R}^4$, and we return to $\dd{\mathrm{p}}{n}=\dd{}n$. Otherwise, by dimensions \begin{equation} \label{shape} \Omega_\mathrm{p}(x) = \frac{\mathcal{S}(x)}{4\pi L^2}\,, \end{equation} where $\mathcal{S}$ is a (universal) dimensionless `shape' function that can thus only depend on dimensionless characterisations of the manifold (the factor $4\pi$ is included for convenience). Providing $\mathcal{S}(x)>0$, $\Omega_\mathrm{p}$ acts to suppress large amplitudes $\varphi>1/L$. However as we will see, it is also possible for $\mathcal{S}$ to be negative. \subsection{General linear RG flows on a manifold} \label{sec:compact-linear} In this latter case, \ie when $\mathcal{S}(x)$ is somewhere negative, the operators $\dd{\!k, {\Lambda_0} }n$ themselves cease to exist below some positive IR cutoff $k$, being the value where, for some $x$, $\Omega_{k, {\Lambda_0} }(x)$ first vanishes and then turns negative. (Here $ {\Lambda_0} $ can be finite or the continuum limit, $ {\Lambda_0} \to\infty$, could have been taken.) At this point we get a distribution, namely $\dd{}n$, and attempting to flow below this $k$ will result in the operator turning imaginary, as in \eqref{imaginary-V}. Once more, a full understanding at the linearised level is only gained by switching on infinitely many couplings. Consider again the general solution \eqref{fourier-sol} for the potential.
This solution now takes the form \begin{equation} \label{compact-fourier-sol} V(\varphi,k, {\Lambda_0} ) = \int^\infty_{-\infty}\!\frac{d\uppi}{2\pi}\, \mathcal{V}_\mathrm{p}(\uppi)\, {\rm e}^{-\frac{\uppi^2}{2}\Omega_{k, {\Lambda_0} }+i\uppi\,\varphi} \,, \end{equation} where the choice of bare (relevant) couplings fixes the theory, and in particular determines the amplitude suppression scale $\Lambda_\mathrm{p}$. As before, the above expression is meaningful even when $V\notin\Lm-$. Additionally it remains meaningful even when the eigenoperators themselves fail to exist, since by \eqref{fourierVplargephi} the integral still converges for large $\uppi$ providing $\Omega_{k, {\Lambda_0} }(x)>-\Lambda_\mathrm{p}^2/2$ for all $x\in\mathcal{M}$. Taking the limits $ {\Lambda_0} \to\infty$ and $k\to0$, the physical potential is now: \begin{equation} \label{compact-fourier-phys} V_\mathrm{p}\left(\varphi(x),x\right) = \int^\infty_{-\infty}\!\frac{d\uppi}{2\pi}\, \mathcal{V}_\mathrm{p}(\uppi)\, {\rm e}^{-\frac{\uppi^2}{2}\Omega_\mathrm{p}(x)+i\uppi\,\varphi(x)} \,, \end{equation} and thus asymptotically for large field: \begin{equation} \label{compact-large-field} V_\mathrm{p}\left(\varphi(x),x\right) \sim \exp\left(-\frac{\varphi^2(x)}{\Lambda_\mathrm{p}^2+2\Omega_\mathrm{p}(x)}\right)\,. \end{equation} Thus $\Omega_\mathrm{p}(x)$ modifies the amplitude suppression scale, increasing or decreasing it, depending on the sign. In particular from \eqref{shape}, the given theory only makes sense on manifolds where\footnote{It might be possible to make sense of the limiting case where $\Omega_\mathrm{p}(x)=-\Lambda_\mathrm{p}^2/2$ for some points or subspace in $\mathcal{M}$.} \begin{equation} \label{lower-bound} \mathcal{S}(x) > -2\pi L^2\Lambda^2_\mathrm{p}\qquad \forall x\in \mathcal{M}\,. \end{equation} Judging from the example below, and confirmed in further examples in ref.
\cite{Matt1}, manifolds where $\mathcal{S}(x)$ is somewhere negative have the characteristic that at least one other finite length scale is sufficiently different, already at the $O(1)$ level, from some appropriately defined average length scale $L$. For the given theory ({\it viz.}\ choice of couplings) such manifolds must thus be larger than a minimum size \begin{equation} \label{Lmin} L> L_{\rm min}= \frac1{\Lambda_\mathrm{p}}\sqrt{\frac{-\mathcal{S}_{\rm min}}{2\pi}}\,, \end{equation} where $\mathcal{S}_{\rm min}$ is the infimum value over all $x\in\mathcal{M}$. On the other hand, the larger the characteristic length scale $L$, the more inhomogeneous the manifold (the more negative $\mathcal{S}$) is allowed to be. Indeed we can rephrase this effect in terms of inhomogeneity. Let $\mathcal{S}_{\rm max}>0$ be the maximum (strictly, the supremum) value for $\mathcal{S}_{\rm min}$ over a suitable set of such manifolds $\mathcal{M}$ with the same topology. This is naturally a number of $O(1)$, characteristic of what the theory regards as the most symmetric manifold in the set. Then for a given manifold $\mathcal{M}$, the quantity $\mathcal{I}_\mathcal{M}= \mathcal{S}_{\rm max} - \mathcal{S}_{\rm min} >0$ is a universal measure of its inhomogeneity (in the sense of being independent of the details of regularisation). Rephrasing \eqref{Lmin}, the inhomogeneity is bounded above depending on the size of the universe: \begin{equation} \label{inhomogeneity} \mathcal{I}_\mathcal{M} < \mathcal{S}_{\rm max} + 2\pi L^2\Lambda_\mathrm{p}^2\,. \end{equation} Evidently, such behaviour could be very attractive within a complete theory of quantum gravity (\cf sec. \ref{sec:QG}), although a full, and dynamical, understanding will have to wait until the non-linear theory is developed. In particular it cries out for application to cosmology. It would explain why the initial conditions for inflation had to be sufficiently smooth.
It possibly implies, from quantum gravity alone, that the early universe must approximate a highly symmetric state such as a de Sitter inflationary phase. The restriction on inhomogeneity may be sufficient to forbid eternal inflation. Since (classical) fluctuations are restricted anyway, it may do away with the need for inflation altogether. See \eg refs. \cite{Hollands:2002yb,Kofman:2002cj,Hollands:2002xi,Carroll:2010aj} for discussions relevant to these ideas. Since it ties the minimum size of the universe to the degree of inhomogeneity, and large amplitude inhomogeneities have appeared only recently in the history of the universe, it could also explain the infamous ``Why now?'' problem, namely that the energy density of matter (including dark matter) is now similar in magnitude to the apparent energy density of dark energy deduced from the current acceleration of the universe. Finally, assuming spacetime singularities induce infinite inhomogeneity $\mathcal{I}_\mathcal{M}$, it implies ``cosmic censorship'' and somehow a softening of the causal structure of black holes. \subsection{Eigenoperators on a hyper-torus} \label{sec:torus-eigen} We now evaluate $\Omega_\mathrm{p}(x)$ in a simple example, verify that it is universal, and demonstrate that requiring $\mathcal{S}>0$ restricts the amount of asymmetry in the manifold. We choose the manifold to be a four-dimensional (untwisted) hyper-torus. Such a manifold is of course not a very realistic representation of our universe. The same effects however also appear for other examples \cite{Matt1}, including cases where the time direction is non-compact. We choose the minimum lengths of the non-contractible loops to be $L_\mu$, and choose flat coordinates such that $g_{\mu\nu}=\delta_{\mu\nu}$.
In this case \begin{equation} \label{torus-tadpole} |\langle \varphi(x) \varphi(x) \rangle |_{\mathcal{M}} = \frac1V \sum_{n\ne0} \frac{C^ {\Lambda_0} _k(p_n)}{p^2_n}\,, \end{equation} where $p^\mu_n = 2\pi n_\mu/L_\mu$ (no summation over $\mu$), the sum is over all vectors of integers $n\in \mathbb{Z}^4\backslash \{0\}$, and $V=\Pi_{\mu=1}^4 L_\mu$ is the volume of the hyper-torus. Note that since the hypertorus has translation invariance, in this case there is actually no $x$ dependence. Then $\mathcal{S}$ can only depend on ratios of length scales. Also note that since this is a manifold of finite volume, the constant mode ({a.k.a.}\ zero mode) $\varphi(x)=\varphi_0$ is normalizable. It needs to be divided out from the functional measure since a pure kinetic term, and thus the integrand of the partition function at the Gaussian fixed point, does not depend on this (recall related comments at the beginning of sec. \ref{sec:general}). This is the reason for excluding $n=0$ from the sum in \eqref{torus-tadpole}, making it manifestly IR finite. Therefore the limit $k\to0$ in \eqref{Omega-p} can be safely taken, and $\Omega_\mathrm{p}$ is clearly independent of the choice of IR regularisation. With the infrared cutoff $k>0$ in place, the $n=0$ contribution is not singular. Indeed \begin{equation} \label{limit1} \lim_{p\to0} \frac{C^ {\Lambda_0} _k(p)}{p^2} = C'(0)\left(\frac1{\Lzp{2}}-\frac1{k^2}\right)\,, \end{equation} where we have used \eqref{sum-rule} and below \eqref{DeltaUV}. 
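To see \eqref{limit1} explicitly: by \eqref{sum-rule} and the definitions we may write $C^{ {\Lambda_0} }_k(p) = C(p^2/\Lzp{2}) - C(p^2/k^2)$, so Taylor expanding the cutoff profile about vanishing argument (recall $C(0)=1$) gives \begin{equation} \frac{C^{ {\Lambda_0} }_k(p)}{p^2} = \frac1{p^2}\left[C'(0)\,p^2\left(\frac1{\Lzp{2}}-\frac1{k^2}\right)+O(p^4)\right] \longrightarrow C'(0)\left(\frac1{\Lzp{2}}-\frac1{k^2}\right)\,, \end{equation} as $p\to0$. Note that for a monotonically decreasing profile $C'(0)<0$, so this limit is positive, consistent with its interpretation as a regulated propagator at zero momentum.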
Using this to add back the $n=0$ contribution, we can then employ the Poisson summation formula to write \eqref{torus-tadpole} as a sum over winding numbers: \begin{equation} |\langle \varphi(x) \varphi(x) \rangle |_{\mathcal{M}} = \int \frac{d^4p}{(2\pi)^4} \frac{C^ {\Lambda_0} _k(p)}{p^2} \sum_n {\rm e}^{il_n\cdot p}\ -\frac{C'(0)}{V}\left(\frac1{\Lzp{2}}-\frac1{k^2}\right)\,, \end{equation} where $l_{n\,\mu} = L_\mu n_\mu$ (not summed over $\mu$) and $n\in \mathbb{Z}^4$ are now the winding numbers. Using \eqref{sum-rule} and \eqref{Omega}, we see that the zero winding number sector, \ie $n=0$, yields the $\mathbb{R}^4$-quantity $\Omega_ {\Lambda_0} -\Omega_k$, and thus from \eqref{Omega-M-kL} we find that \begin{equation} \label{step1} \Omega_{k, {\Lambda_0} }\ =\ \Omega_k +\frac{C'(0)}{V}\left(\frac1{\Lzp{2}}-\frac1{k^2}\right) -\int \frac{d^4p}{(2\pi)^4} \frac{C^ {\Lambda_0} _k(p)}{p^2} \sum_{n\ne0} {\rm e}^{il_n\cdot p}\,. \end{equation} Since the last term is a sum of propagators to separated points, we see that $\Omega_{k, {\Lambda_0} }$ is manifestly UV finite, as we already argued above on general grounds. We can therefore safely take the limit $ {\Lambda_0} \to\infty$, with the result clearly independent of the method of UV regularisation (in this case the UV cutoff profile). As we have already seen that it is IR safe, we have thus proved that $\Omega_\mathrm{p}$ is well-defined and universal, as we claimed. We are free to choose the IR cutoff profile to facilitate the remaining calculation. We set $C(p^2/k^2)= {\rm e}^{-p^2/k^2}$.\footnote{For a different choice see ref. \cite{Hasenfratz:1989pk}; we otherwise essentially follow their derivation.} Recall that by \eqref{sum-rule}, $C_k(p)=1-C(p^2/k^2)$.
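With this choice the combined UV and IR regulated propagator takes a simple proper-time form. Taking $ {\Lambda_0} \to\infty$, \begin{equation} \frac{C^{\infty}_k(p)}{p^2} = \frac{1-{\rm e}^{-p^2/k^2}}{p^2} = \int^{1/k^2}_0\!\!\! d\alpha\, {\rm e}^{-\alpha p^2}\,, \end{equation} while $C'(0)=-1$, so that the middle term in \eqref{step1} reduces to $1/Vk^2$. This is the Schwinger-parameter representation employed below.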
Taking limits where it is safe to do so, we can thus write: \begin{equation} \Omega_\mathrm{p} = \frac1{Vk^2}-\int \frac{d^4p}{(2\pi)^4}\, \int^{1/k^2}_0\!\!\!\!\!\!\!\!\!\!d\alpha\,\,{\rm e}^{-\alpha p^2} \sum_{n\ne0} {\rm e}^{il_n\cdot p}\,, \end{equation} where we have expressed the IR cutoff through a Schwinger parameter, and the $k\to0$ limit should hereafter be understood. Performing the momentum integral, and substituting $\alpha = L^2 t/4\pi$, where $L = V^{1/4}$ is the geometric mean of the $L_\mu$, gives \begin{equation} \label{stepTheta} \Omega_\mathrm{p} = \frac1{Vk^2} -\frac1{4\pi L^2}\int_0^{\frac{4\pi}{L^2k^2}} \frac{dt}{t^2} \left[ \Pi_{\mu=1}^4\, \Theta\left(\frac{L^2_\mu}{tL^2}\right)-1\right]\,, \end{equation} where we have introduced the third Jacobi theta function (at Jacobi argument $\nu=0$, defined for $x>0$): \begin{equation} \Theta(x) := \sum_{n=-\infty}^\infty\!\! {\rm e}^{-\pi n^2 x}\,. \end{equation} Splitting the integral into two pieces about $t=1$, the first piece is given by $s(L_\mu/L)$ where \begin{equation} \label{s} s(\ell_\mu) := \int_0^1 \frac{dt}{t^2} \bigg( \Pi_{\mu=1}^4\, \Theta\left({\ell^2_\mu}/{t}\right)-1\bigg)\,. \end{equation} In the $t\ge1$ part we substitute $t\mapsto1/t$ and use the identity $\Theta(x) = (1/\sqrt{x})\, \Theta(1/x)$ (which straightforwardly follows from a further application of Poisson resummation) to cast it in terms of the above function plus a remainder. The latter in particular cancels the explicit IR divergence in \eqref{stepTheta}. Thus finally, using \eqref{shape}, we find \begin{equation} \Omega_\mathrm{p} = \frac{\mathcal{S}(L_\mu/L)}{4\pi\sqrt{V}}\qquad{\rm where}\qquad \mathcal{S}(\ell_\mu) := 2-s(\ell_\mu)-s(1/\ell_\mu)\,. \end{equation} By dimensions, $\mathcal{S}$ only depends on the ratios $L_\mu/L$. Symmetry under permutation of the $L_\mu$ follows from the symmetries of the torus.
However we note further that $\Omega_\mathrm{p}$ and $\mathcal{S}$ are invariant under the simultaneous inversion of all moduli: $L_\mu \mapsto L^2/L_\mu$ (which also preserves the overall volume $V$). It can be extended to a larger group involving the modular group and twisted tori. This intriguing symmetry is reminiscent of T-duality in String Theory \cite{Green:1982sw,Kikkawa:1984cp,Sakai:1985cs}, except that there the radii are inverted using the string scale $\alpha'$, whereas here the scale is set by the manifold itself. Again a comprehensive understanding of its significance in the current context will have to await the development of the full quantum gravity. At the symmetric point where all $L_\mu = L$, we find numerically that $\mathcal{S}\equiv \mathcal{S}_{\rm max}=1.765$, in agreement with ref. \cite{Hasenfratz:1989pk}, and confirming the general expectation that $\mathcal{S}_{\rm max}$ is a number of $O(1)$. On the other hand $\mathcal{S}$ vanishes already if for example: \begin{enumerate} \item[(a)] $L_1 = 2.709\, L$ with the other three $L_\mu$ equal (thus to $0.7173\, L$), \item[(b)] thus also the dual version $L_1 = 0.3691\, L$ and the other three $L_\mu = 1.394\, L$, \item[(c)] $L_1 = L_2 = 2.457\, L$ with the other pair $L_3=L_4=0.4069\, L$. \item[(d)] $L_\mu = 1.487\, L_{\mu+1}$\ ($\mu=1,2,3$). \end{enumerate} (Combined with permutation symmetry, (c) and (d) are self-dual.) With the $L_\mu$ further apart, these configurations result in $\mathcal{S}<0$, which implies a minimum allowed size for such a manifold; for example, from \eqref{lower-bound} we can write this in terms of the space-time volume as: \begin{equation} V > \frac{\mathcal{S}^2(L_\mu/L)}{4\pi^2\Lambda^4_\mathrm{p}}\,. \end{equation} \section{Implications for quantum gravity} \label{sec:QG} The discoveries we have reported in this paper point towards gravity being after all a perturbatively renormalizable quantum field theory, albeit of a new and dramatically different kind.
Of course physical processes are described by working with the theory in Minkowski signature, or by using some continuation appropriately adapted to the process at hand (see \eg the recent discussion \cite{Feldbrugge:2017mbc}). However before such processes can be investigated, one must actually construct such a theory. To do this we need to formulate it in Wilsonian terms, which means that we need to study its fluctuations around Euclidean $\mathbb{R}^4$ (see secs. \ref{sec:Intro} and \ref{sec:plus}). Then, reflecting the unboundedness of the Euclidean signature action, the conformal factor has the wrong sign kinetic term. Considering the conformal factor on its own, we have shown in the previous sections how to make sense of its Wilsonian RG behaviour, uncovering novel and promising properties (further explored in ref. \cite{Matt1}). Now we discuss what this implies for the full theory of quantum gravity. The key observation from the Wilsonian RG is that the continuum theory can be constructed if the scaled bare action in the limit $ {\Lambda_0} \to\infty$ is just the Gaussian fixed point plus a vanishing perturbation which is the linearised interaction expanded only over (marginally) relevant eigenoperators. This provides the boundary condition for the renormalized trajectory, and renormalizability can then be expected to follow provided that one includes all the bare relevant couplings that are induced by requiring finite couplings at physical scales. More generally, if bare irrelevant couplings are needed, they must stay close enough to the Gaussian fixed point to remain within its domain of attraction. Just as discussed in sec. \ref{sec:plus}, we can then anticipate that their dimensionful values must actually vanish in the limit as $ {\Lambda_0} \to\infty$. For the conformal factor on its own, this means in particular that the bare theory must sit inside $\Lm-$, using the relevant interactions of the form \eqref{derivative-operator-basis-physical}.
Since these eigenoperators are non-perturbative in $\hbar$, quantum gravity must also be non-perturbative in $\hbar$. Therefore we cannot organise contributions by the loop expansion; however, calculations can proceed perturbatively in $\kappa$ (\ie in Newton's coupling, \cf sec. \ref{sec:Intro}). Since the traceless fluctuation $h_{\mu\nu}$ has the right sign for its kinetic term, \cf \eqref{Gaussian}, eigenoperators involving only $h_{\mu\nu}$ are built in $\Lm+$, \ie are polynomials of $h_{\mu\nu}$ and its space-time derivatives, generalising sec. \ref{sec:plus} (see also sec. \ref{sec:derivative-ops}). In particular $[h_{\mu\nu}]=1$ and $\tilde{h}_{\mu\nu} = h_{\mu\nu}/\Lambda$, as follows from the canonically normalized kinetic term \eqref{Gaussian}, and the Hilbert space $\Lm+$ is defined through the norm ${\rm e}^{-a^2\tilde{h}_{\mu\nu}^2}$. Extending sec. \ref{sec:derivative-ops}, it is thus clear that the general eigenoperator is built using a top term \begin{equation} \label{top} \dd\Lambda{n} \,\sigma(h,\partial,\partial\varphi)\,, \end{equation} where $\sigma(h,\partial,\partial\varphi)$ is a Lorentz invariant monomial involving some or all of the components indicated (and thus $h_{\mu\nu}$ can appear here differentiated or undifferentiated or not at all). These perturbations form the Hilbert space ``$\Lm{}$'' of interactions that are square integrable under ${\rm e}^{a^2(\tilde{\vp}^2-\tilde{h}_{\mu\nu}^2)}$. Clearly this includes the $\varphi$ eigenperturbations that are purely in $\Lm-$, since these interactions are still square integrable under the new measure. But $h_{\mu\nu}$ eigenperturbations that are purely in $\Lm+$ are not allowed, since they are not square integrable under the new measure (there is nothing to mitigate the ${\rm e}^{a^2\tilde{\vp}^2}$ part). If we included such interactions we would destroy the $\varphi$ part of the Hilbert space structure and, as we will see, also renormalizability.
The scaling dimensions of the eigenoperators are the ones expected at the Gaussian fixed point, in particular if $[\sigma(h ,\partial,\partial\varphi)]=d_\sigma$, then the scaling dimension of the full eigenoperator is $d_\sigma+[\delta_n] = d_\sigma-1-n$. It is tempting to assume that all symmetries are preserved and that we can discuss the issue within the framework of a classical action. But neither of these assumptions is true: the regularisation (and not only this as we will discuss) breaks or at least deforms local symmetries, and thanks to the conformal factor, the action is never classical but always non-perturbatively quantum. The usual arguments proceed by assuming diffeomorphism invariance, leading at the classical level to a series of interactions \eqref{irrelevant} organised by powers of $\kappa$, after which quantum corrections can be analysed. Here the interactions at each new power of $\kappa$ arise simultaneously from both directions: on the one hand from the quantum corrections induced by interactions with a lower power of $\kappa$, and on the other hand by the constraints of the quantum (BRST) version of diffeomorphism invariance. Provided the latter at least incorporates the linearised diffeomorphism invariance enjoyed by \eqref{EHbilinear}, and that the kinetic term remains second order in derivatives at the bare level, back in Minkowski signature this is a theory of gravitons with just two transverse polarisations. In particular this also ensures that in Minkowski signature, the conformal mode is non-dynamical, and thus that the wrong-sign kinetic term does not lead to a break-down of unitarity. 
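As a concrete instance of this power counting, take $\sigma = h_{\mu\nu}\partial^m h^{\mu\nu}$ (with even $m$, the derivative indices contracted in some Lorentz invariant way) and $n=0$. Then $d_\sigma = m+2$, and the full eigenoperator has dimension
\begin{equation}
\left[\dd\Lambda0\, h_{\mu\nu}\partial^m h^{\mu\nu}\right] = d_\sigma-1-0 = m+1\,,
\end{equation}
so in four dimensions it is relevant for $m\le2$, while for $m\ge4$ it is irrelevant, its coupling then carrying dimension $3-m\le-1$.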
To the extent that the low energy effective description can be assumed to be classical, many related consistency arguments then effectively enforce that it coincides with General Relativity \cite{Gupta:1954zz,Kraichnan:1955zz,Feynman:1996kb,Weinberg:1965rz,Ogievetsky:1965,Wyss:1965,Deser:1969wk,Boulware:1974sr,Fang:1978rc,Wald:1986bj,Boulanger:2000rq}. Given all the experimental tests, this surely seems to be required phenomenologically. As we have been emphasising however, according to the theory we are uncovering, gravity must in reality be non-perturbatively quantum at all scales. This aspect lies at the heart of the restrictions on inhomogeneity, which as discussed in sec. \ref{sec:compact-linear}, themselves look so promising phenomenologically. We can add that the tendency to IR divergence at the interacting level (see the end of sec. \ref{sec:pert2-flow}) makes it tempting to speculate that gravitational dynamics will receive important corrections at large scales, raising the prospect that these effects could be ones currently attributed to dark matter, and perhaps even have a r\^ole in explaining conflicting experimental measurements of Newton's coupling \cite{Mohr:2015ccw}. Clearly there is some tension with the conclusion we reached at the beginning of this paragraph. The actual extent to which General Relativity is modified will only be revealed once the full theory is developed. Since the BRST invariance is broken by our regularisation, bare operators corresponding to its breaking will have non-vanishing couplings, even though the corresponding physical expressions are tuned to vanish. To avoid the breaking of this quantum version of diffeomorphism invariance, one might hope to reformulate the arguments using dimensional regularisation. However, since quadratic divergences of a massless field are crucial to the definition of the $\varphi$ eigenoperators, dimensional regularisation would appear to be inapplicable. 
In principle we could try to finesse the difficulties by basing the formulation on the fact that $\Omega_\mathrm{p}$ in \eqref{Omega-p} is actually independent of regularisation and thus also the physical operators \eqref{physical-p} are independent of regularisation. But to discuss renormalizability we need access to the bare operator, which requires using only the first term in \eqref{Omega-M-kL}. This vanishes in dimensional regularisation, which by \eqref{physical-dnL} implies that all the bare operators also vanish. We could try the usual expedient of adding a mass term for $\varphi$ by hand. However adding a mass term breaks the realisation of diffeomorphism invariance we were trying to preserve, meaning that we appear to be no better off than with the rigorously more secure regularisation scheme we are currently using. We need to avoid being forced by the parametrisation, equivalently the realisation of diffeomorphism invariance, to include irrelevant operators with corresponding non-vanishing couplings in the limit $ {\Lambda_0} \to\infty$ (this being the usual problem). To gain some feeling for the parametrisation required, let us imagine for the moment that the theory can be constructed by starting from a diffeomorphism invariant classical action. Then since the action will be \eqref{EH}, and the kinetic terms have to appear explicitly as in \eqref{Gaussian}, any parametrisation can be reduced to the question of how to parametrise the metric $g_{\mu\nu}$. To linear order in the fields we know already that this takes the form \eqref{param-perturbative}, in order to obtain \eqref{Gaussian} after using the Feynman -- De Donder gauge \eqref{Feynman-DeDonder}. 
This suggests writing \begin{equation} \label{metric-overparam} g_{\mu\nu} = \left(1+\frac\kappa4\,\varphi\right)^2 \hat{g}_{\mu\nu}\,, \end{equation} so that \eqref{EH} becomes: \begin{equation} \label{EH-phi} \mathcal{L}_{EH} = -\frac34 \sqrt{\hat{g}}\,\hat{g}^{\mu\nu}\partial_\mu\varphi\partial_\nu\varphi -\frac2{\kappa^2}\,\sqrt{\hat{g}}\hat{R}\left(1+\frac\kappa4\,\varphi\right)^2\,. \end{equation} If $\hat{g}_{\mu\nu}=\delta_{\mu\nu}$, this gives us the required kinetic term for $\varphi$ (before getting $\frac14(\partial\varphi)^2$ from gauge fixing) and nothing else. From \eqref{param-perturbative} we then know that to linear order in the fields, $\hat{g}_{\mu\nu} = \delta_{\mu\nu} + \kappa\,h _{\mu\nu}$. But such an unadorned $\h_{\mu\nu}$ will lead us straight back into the space of non-renormalizable finite irrelevant interactions \eqref{irrelevant}, and take us outside $\Lm{}$. Instead we need to protect it by using the $\varphi$ operators \eqref{delta}. For example, we could try replacing $h_{\mu\nu}$ with the marginal operator $\dd\Lambda0\,h_{\mu\nu}$, or with $\dd\Lambda{n}\,h_{\mu\nu}$ for some $n>0$, which is a relevant operator. On the other hand, once we use one such basis operator, perturbative quantum corrections (\ie in $\kappa$, non-perturbative in $\hbar$) will generate infinitely many others via \eqref{prod}. Thus to renormalize the theory we expect to need to extend this to an infinite sum over such operators, so we are led to try $\hat{g}_{\mu\nu} = \delta_{\mu\nu} + \kappa\,f_{1}h _{\mu\nu}$, where $f_{1}(\varphi, {\Lambda_0} )\in\Lm-$ is a general \emph{coefficient function}. Thus the general structure described in sec. \ref{sec:general} can be expected: the effective interaction will be in $\Lm{}$ at cutoff scales $\Lambda$ higher than some $ {\Lambda_0} $, leaving $\Lm{}$ at some $a\Lambda_\mathrm{p}$; with further care, complete flows exist, leading to the inhomogeneity effects discussed in sec. \ref{sec:compact}. 
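As a consistency check on the $\varphi$ kinetic term, note that for $\hat{g}_{\mu\nu}=\delta_{\mu\nu}$ the metric \eqref{metric-overparam} is conformally flat, $g_{\mu\nu}=\Omega^2\delta_{\mu\nu}$ with $\Omega = 1+\frac\kappa4\varphi$. In the curvature conventions of \eqref{EH} one then has $\sqrt{g}\,R = -6\,\Omega\,\partial^2\Omega$, so that, discarding a total derivative,
\begin{equation}
\mathcal{L}_{EH} = \frac{12}{\kappa^2}\,\Omega\,\partial^2\Omega = -\frac{12}{\kappa^2}\left(\partial\Omega\right)^2 = -\frac34\left(\partial\varphi\right)^2\,,
\end{equation}
reproducing the first term of \eqref{EH-phi}.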
Substituting such an expansion into \eqref{EH-phi} will lead to higher order $\h_{\mu\nu}$ interactions, with $\varphi$-dependent coefficients that can be expanded over the $\dd\Lambda{n}$ basis using \eqref{prod}. At this point we have to face the fact, as we saw in eqn. \eqref{prod-evolved}, that the flow even at the linearised level does not respect the product structure, and thus here does not respect the fact that these operators came from some power of (differentials of) $f_1$. This would be true even if we were able to construct a diffeomorphism invariant flow equation \cite{Morris:2016nda}. The only way we can match the result to $\hat{g}_{\mu\nu}$ at some other scale is to give the latter sufficiently many parameters to reproduce the result of this evolution. We are thus led to consider very general expansions, schematically (derivative operators might also be needed) \begin{equation} \label{g-exp} \hat{g}_{\mu\nu} = \delta_{\mu\nu} +\kappa f_{1}\, h _{\mu\nu} +\kappa^2 f_{2}\, h ^{\ \alpha}_\mu h _{\alpha\nu} +\cdots\,, \end{equation} each operator with its own coefficient function $f_i(\varphi,\Lambda)$. Substituting this expansion into \eqref{EH-phi}, it is clear that this can come from a bare level action where all the interactions are of form \eqref{top}, in particular cubic and higher $\h_{\mu\nu}$ interactions appear together with their `protection' via $\varphi$ interactions in $\Lm-$. Indeed since $\hat{R}$ vanishes for flat $\hat{g}_{\mu\nu}$, it is reconstructed from interactions all of which contain at least one coefficient function. Then the observations in sec. \ref{sec:perturbation-th} apply. Thus $\partial_\mu f_{j} =\partial_\varphi f_{j}\, \partial_\mu\varphi$ is in $\Lm-$ by \eqref{d-delta}, products of the $f_{j}$ are in $\Lm-$ by \eqref{prod}, and the explicit instances of $\varphi$ in the last term in \eqref{EH-phi} are absorbed into $\Lm-$ by \eqref{dn-phi}. 
We thus see that the r\^ole of the infinite number of relevant couplings in the conformal sector, \cf \eqref{delta} and sec. \ref{sec:derivative-ops}, is to allow for such a sufficiently general parametrisation. So far we have only discussed what happens when we aim for the Einstein-Hilbert action \eqref{EH}. With infinitely many relevant directions of arbitrarily high dimension, one should worry that covariant higher derivative contributions could also be relevant. In particular ones which have an $O(h^2)$ piece, that can for example come from $g_{s} R^2/\kappa^2$ (where $g_s$ is its coupling) and the other squared curvatures, are dangerous since they can destroy unitarity by introducing poles of the wrong sign into the propagator \cite{Stelle:1976gc}. In fact the dimensions \eqref{delta-dim} are just right to ensure that this does not happen! From \eqref{g-exp} such terms look like $g_sf_1^2 h\partial^mh$ for $m\ge4$. For the generic $f_1$ which we are anyway forced to have, such a term contains $\dd\Lambda0\, h\partial^mh$ which is an irrelevant operator of dimension $m+1\ge5$. Thus the corresponding couplings $[g_s]\le-1$, must be set to vanish in the continuum limit. In essentially the same way, one shows that none of the covariant higher derivative operators can be associated with their own bare couplings. From \eqref{metric-overparam} and \eqref{g-exp} we would deduce that a cosmological constant term is not allowed, since it leads to non-vanishing $\varphi$ and $\varphi^2$ terms. These operators are not in $\Lm-$ so do not appear at the bare level, and cannot be generated from products of operators that start in $\Lm{}$. Such a conclusion would be clearly attractive, especially given that the theory already has the potential to explain the current acceleration of the universe (\cf sec. \ref{sec:compact-linear}). However at this point we have to confess to a flaw in these arguments. 
Nevertheless they show how these structures are important for quantum gravity, and the flaw indicates the path we have to take. The problem is that substituting \eqref{g-exp} does not (after appropriate modification of the Feynman -- De Donder gauge fixing) give the kinetic terms \eqref{Gaussian} plus interactions in $\Lm{}$, because the $\h_{\mu\nu}$ kinetic term also gets multiplied by $f_1^2$. Writing it as \eqref{Gaussian} plus the interaction \begin{equation} \label{killer} \frac12 (f_1^2-1) \left(\partial_\lambda h _{\mu\nu}\right)^2\,, \end{equation} makes this look harmless, particularly if we can arrange for $f_1|_{\varphi=0}=1$ so that it genuinely contains only interactions. However \eqref{killer} is not in $\Lm{}$. Although the unprotected $(\partial h)^2$ is marginal (thus perturbatively renormalizable), the Hilbert space structure is destroyed and with it the guarantee that quantum corrections are also in $\Lm{}$ (at sufficiently high scales). Indeed \eqref{killer}, together with the other $O(h^2)$ interactions, when strung together as in fig. \ref{fig:chains} and inserted into Feynman diagrams made using the other interactions, cancels the $f_1$ appearances on internal legs. In fact all the $f_i$ cancel inside loops. Despite the novel context, the equivalence theorem still applies \cite{Bergere:1975tr,Itzykson:1980rh}. Reparametrising the metric does not help, cosmological constant terms are after all generated, and gravity is still non-renormalizable -- with the same structure of divergences. The root cause of the failure is where we flagged it to be, in the paragraphs above \eqref{metric-overparam}. We cannot start from a diffeomorphism invariant classical action. Instead we must go directly to a quantum action subject to some quantum version of diffeomorphism invariance. 
The known consistency constraints \cite{Gupta:1954zz,Kraichnan:1955zz,Feynman:1996kb,Weinberg:1965rz,Ogievetsky:1965,Wyss:1965,Deser:1969wk,Boulware:1974sr,Fang:1978rc,Wald:1986bj,Boulanger:2000rq} appear at first sight to leave no room for an alternative quantum theory. However all of these works assume one or more properties, in particular justified by the assumed existence of a classical limit, that either now do not apply or become significantly softened. \bigskip\bigskip \section*{Acknowledgments} It is a pleasure to thank Chris Sachrajda for helpful conversations about finite size effects, and Matt Kellett for helpful discussions stemming from the further examples of $\Omega_\mathrm{p}$ \cite{Matt1}. I acknowledge support from both the Leverhulme Trust and the Royal Society as a Royal Society Leverhulme Trust Senior Research Fellow, and from STFC through Consolidated Grants ST/L000296/1 and ST/P000711/1. \vfill \newpage \bibliographystyle{hunsrt}
\section{Introduction} When performing a complex action, humans do not think or act at the level of granular primitive actions at the individual muscle or joint level. Instead, humans decompose complicated actions into a set of simpler actions. By combining simpler actions or motor primitives, humans can learn more complicated and unseen challenges quickly and easily. Moreover, human cognition separates a task into several levels of temporal abstraction. The same occurs in robotics: complicated tasks are composed of sub-tasks at different levels of granularity, ranging from motor primitives to higher-level tasks such as grasping, where different time scales interact. The majority of deep reinforcement learning (DRL) techniques focus on individual actions at single time steps, resulting in low sample efficiency when training robots, lack of adaptability to unseen new tasks, and low transfer capabilities between related tasks. Hierarchical reinforcement learning methods allow the robot to learn to perform a certain task at the level of macro-actions, each a set of individual actions, thereby reducing the search space. This makes the learning process faster and more scalable, and allows the robot to generalize across unseen tasks or environments. Modular robots present a novel approach to building robots in which each component of the system is independent but works in symbiosis with the other components, forming a flexible system which can be reconfigured and easily assembled. Compared to traditional robotics as described in \cite{hros}, modular robots can reduce integration time, ease re-purposing, and accelerate the development of different behaviours. This work focuses on exploring hierarchical techniques for DRL methods, with the goal of allowing different behaviours to be trained on a re-configurable modular robot. 
The approach presented in this paper evaluates the output of a hierarchical neural network trained for two different configurations of the Scara modular robot, namely the 3DoF and 4DoF configurations. Moreover, we evaluate the error when each configuration of the modular robot tries to reach different targets. \section{Previous work} In order to develop robots that learn in an efficient and structured manner, temporally-extended actions and temporal abstraction are required. The first hierarchical approach for RL was introduced by \cite{dayan1993feudal}. The authors propose a method that speeds up learning by enabling it to happen at multiple resolutions in space and time, by introducing a management hierarchy. In that work, high-level managers learn how to set tasks for their sub-managers and, in turn, the sub-managers learn how to complete these tasks. The Options framework introduced by \cite{Sutton99betweenmdps} set the path towards more structured approaches to reinforcement learning. Options consist of courses of action extended over different time-scales. In the past, such policies were learned by explicitly defining sub-goals and engineering rewards. However, explicitly defining the sub-goals that policies subsequently learn is not scalable when learning complex behaviours. Thus, recent research has focused on learning them automatically, as in: Strategic Attentive Writer for Learning Macro-Actions \cite{DBLP:journals/corr/VezhnevetsMAOGV16}, Stochastic Neural Networks for Hierarchical Reinforcement Learning \cite{DBLP:journals/corr/FlorensaDA17}, Probabilistic inference for determining options in reinforcement learning \cite{Daniel2016}, and The option-critic architecture \cite{DBLP:journals/corr/BaconHP16}. 
The recent work of \cite{frans2017meta} presented a method for learning hierarchies in which sample efficiency on unseen tasks is improved through the use of shared policies that are executed for a large number of timesteps. The goal of this work is to evaluate the meta-learning shared hierarchies (MLSH) method and its applicability to modular robots. In section \ref{sec:mlsh}, we present the theoretical foundation of the MLSH method, and in section \ref{sec:experiments}, we present our experimental evaluation of MLSH for modular robots. \section{Meta-learning shared hierarchies (MLSH) for modular robots} \label{sec:mlsh} The goal of MLSH is to maximize the accumulated reward across a distribution of tasks $P_{M}$ with a common state and action space, \begin{equation} \max_{\phi}\; \mathbb{E}_{M \sim P_{M}} \left[ r_{0}+r_{1}+\ldots+r_{T-1} \right]\,, \end{equation} by following a stochastic policy $\pi_{\phi, \theta}(a|s)$, where $\theta$ are the task-specific parameters and $\phi$ is the parameter vector shared between tasks. In order to find the optimal parameters of the stochastic policy, MLSH finds meaningful sub-policies, parametrized by $\phi_{k}$, that are selected by a master policy parametrized by $\theta$. Given that meaningful sub-policies are discovered, the agent learns to perform new tasks quickly by simply adapting the master policy to the new task. Learning the master policy and sub-policy parameters $\theta$ and $\phi_{k}$ is divided into two stages, namely the warm-up period and the joint update period. The warm-up period corresponds to the update period for the master policy parameters, where the sub-policy parameters are fixed and the agent interacts with the environment following the sub-policy selected by the master policy. In the joint update period, both the master policy $\theta$ and the selected sub-policy $\phi_{k}$ are updated. 
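The two-phase structure can be sketched schematically. The following is a minimal, hypothetical tabular caricature (not the authors' implementation, which uses neural policies trained with policy gradients): a master table $\theta$ selects one of $K$ sub-policy tables $\phi_k$ every few steps, and the sub-policies are frozen during warm-up.

```python
import random

class MLSHAgent:
    """Minimal tabular caricature of the MLSH structure: a master policy
    (theta) picks one of n_subs sub-policies (phi) every `macro` steps."""
    def __init__(self, n_states, n_actions, n_subs, macro=5):
        self.macro = macro
        self.theta = [[0.0] * n_subs for _ in range(n_states)]
        self.phi = [[[0.0] * n_actions for _ in range(n_states)]
                    for _ in range(n_subs)]

    def select_sub(self, s):
        return self.theta[s].index(max(self.theta[s]))

    def select_action(self, k, s):
        return self.phi[k][s].index(max(self.phi[k][s]))


def train(agent, env_step, episodes, warmup, steps=20, lr=0.1, eps=0.1):
    """Warm-up episodes update only the master values; afterwards both
    the master and the active sub-policy are updated (the joint phase)."""
    for ep in range(episodes):
        joint = ep >= warmup
        s = 0
        k = agent.select_sub(s)
        for t in range(steps):
            if t % agent.macro == 0:           # master acts on the slower timescale
                k = agent.select_sub(s)
            if random.random() < eps:          # epsilon-greedy exploration
                a = random.randrange(len(agent.phi[k][s]))
            else:
                a = agent.select_action(k, s)
            s2, r = env_step(s, a)
            agent.theta[s][k] += lr * (r - agent.theta[s][k])     # master update
            if joint:                                             # sub-policy update
                agent.phi[k][s][a] += lr * (r - agent.phi[k][s][a])
            s = s2
```

On a toy one-state task this separates the phases cleanly: during warm-up only the master values move, and only in the joint phase does the selected sub-policy learn which primitive action pays off.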
For more details, the pseudo-code of MLSH can be found in Appendix \ref{appendix:pseud}. Despite the high precision of state-of-the-art DRL methods such as Proximal Policy Optimization (PPO) \cite{schulman2017proximal}, Actor Critic using Kronecker-Factored Trust Region (ACKTR) \cite{wu2017scalable}, or Trust Region Policy Optimization (TRPO) \cite{schulman2015trust}, these techniques were developed with a single task and environment in mind. This leads to poor generalization across the tasks the robot can perform. We hypothesize that different robots with a similar action and state space share motor primitives that we could leverage when training the robot. Hence, our experiments aim to validate whether MLSH manages to converge to the corresponding target position when training the master and sub-policy neural networks on different robot configurations and different target positions. \section{Experiments} \label{sec:experiments} In our experiments, we trained a master policy and corresponding sub-policies that generalize across different robot configurations and target positions by switching the robot configuration and corresponding target position. The simulation environment is described in more detail in Appendix \ref{appendix:simulation_env}. Our experimental setup consisted of a modular 3DoF robot to be extended by 1 DoF, both in simulation and on the real robot, and two target positions: the center of the H at $[0.3305805, -0.1326121, 0.3746]$ for the 3DoF robot and $[0.3305805, -0.1326121, 0.4868]$ for the 4DoF robot, and the center of the O at $[0.3325683, 0.0657366, 0.3746]$ for the 3DoF robot and $[0.3325683, 0.0657366, 0.4868]$ for the 4DoF robot. Since our experiment consisted of two different robot configurations and two different target positions, we trained the MLSH network with the number of sub-policies set to 4, macro-action duration set to 5, warm-up time set to 20, and training time set to 200. 
After training the network in simulation, we evaluated the learned MLSH network on the modular robot where the different target positions were reached for the different robot configurations, see Table \ref{table:3dof_eucledian} for details. \begin{table}[ht] \centering \begin{tabular}{c|c|c|c|} \cline{2-4} & \multirow{2}{*}{Target} & \multicolumn{2}{|c|}{Euclidean Distance (mm)} \\ \cline{3-4} &&real robot& simulation \\ \hline \multirow{2}{*}{\bf 3DoF} & Center of O & \textbf{31.06$\pm$0.15} & 33.69$\pm(1.9 \times 10^{-7})$\\ \cline{2-4} &Center of H& 60.36$\pm$0.12 & 60.07$\pm$0.02\\ \hline \multirow{2}{*}{\bf 4DoF} & Center of O &37.02$\pm$0.12 & 58.39$\pm$0.01\\ \cline{2-4} &Center of H& 48.82$\pm$0.15 & 46.83 $\pm (3.49\times10^{-15})$\\ \hline \end{tabular} \caption{Summarized results when executing a network trained with different goals and DoF. The first target is set to the middle of the H, the second target is set to the middle of the O for the 3DoF and 4DoF robots. The trained network is executed both in simulated environment and on the real robot. 
The MLSH network continuously outputs trajectory points even after convergence; therefore, the standard deviation (STD) of the last 10 end-effector points is calculated.} \label{table:3dof_eucledian} \end{table} \begin{figure}[ht] \label{fig_method_vs_sim_time} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{images/results/dual_3dof_trajectories.png} \vspace{1ex} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{images/results/dual_4dof_trajectories.png} \vspace{1ex} \end{minipage} \caption{Output trajectories for the 3DoF (left) and 4DoF (right) Scara robot, when loading a previously trained network for different targets.} \end{figure} During the evaluation of MLSH we noticed that, while executing the trained network, the master policy selects the same sequence of sub-policies for a particular robot configuration and target position. This behaviour validates the initial claim in the original work of \cite{frans2017meta} that the sub-policies are underlying motor primitives that generalize across different tasks. Future work involves investigating which motor primitives are being learned during training.
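The evaluation metric described above (mean and standard deviation of the Euclidean distance between the last 10 end-effector points and the target) can be sketched as follows; the function name is our own, as an illustration of the stated procedure rather than the authors' code.

```python
import math

def end_effector_error(points, target, last_n=10):
    """Euclidean-distance metric sketched from the description above:
    take the last `last_n` end-effector points emitted after convergence,
    compute each point's distance to the target, and report mean and
    (population) standard deviation."""
    tail = points[-last_n:]
    d = [math.dist(p, target) for p in tail]
    mean = sum(d) / len(d)
    std = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
    return mean, std
```

For instance, a stationary end effector 5 mm away from the target yields a mean error of 5 mm with zero standard deviation.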
\section{Introduction} \label{sec:intro} Most spoken language translation (SLT) systems integrate (loosely or closely) two main modules: source language speech recognition (ASR) and source-to-target text translation (MT). In these approaches, a symbolic sequence of words (or characters) in the source language is used as an intermediary representation during the speech translation process. However, recent works have attempted to build end-to-end speech-to-text translation without using source language transcription during learning or decoding. One attempt to translate a source speech signal directly into target-language text is that of \cite{Duong2016}. However, the authors focus on the alignment between source speech utterances and their text translation without proposing a complete end-to-end translation system. The first attempt to build an end-to-end speech-to-text translation system (which does not use the source language transcription) is our own work \cite{Berard2016}, but it was applied to a synthetic (TTS) speech corpus. A similar approach was then proposed and evaluated on a real speech corpus by \cite{Weiss2017}. This paper is a follow-up of our previous work \cite{Berard2016}. We now investigate end-to-end speech-to-text translation on a corpus of audiobooks - \textit{LibriSpeech} \cite{Panayotov2015} - specifically augmented to perform end-to-end speech translation \cite{Alican2018}. While previous works \cite{Berard2016,Weiss2017} investigated the extreme case where the source language transcription is not available during learning or decoding (the unwritten language scenario defined in \cite{Adda2016,Anastasopoulos2017}), we also investigate, in this paper, a midway case where a certain amount of source language transcription is available during training. In this intermediate scenario, a unique (end-to-end) model is trained to decode source speech into target text in a single pass (which can be interesting if compact speech translation models are needed). 
This paper is organized as follows: after presenting our corpus in section \ref{corpus}, we present our end-to-end models in section \ref{models}. Section \ref{exp} describes our evaluation on two datasets: the synthetic dataset used in \cite{Berard2016} and the audiobook dataset described in section \ref{corpus}. Finally, section \ref{concl} concludes this work. \section{Audiobook Corpus for End-to-End Speech Translation} \label{corpus} \subsection{Augmented LibriSpeech} Large quantities of parallel texts (e.g. Europarl or OpenSubtitles) are available for training text machine translation systems, but there are no large (\textgreater 100h) and publicly available parallel corpora that include speech in a source language aligned to text in a target language. The \textit{Fisher/Callhome} Spanish-English corpora \cite{Post2013} are only of medium size (38h), contain low-bandwidth recordings, and are not available for free. We very recently built a large English to French corpus for direct speech translation training and evaluation \cite{Alican2018}\footnote{The Augmented LibriSpeech corpus is available for download here: \url{https://persyval-platform.univ-grenoble-alpes.fr/DS91/detaildataset}}, which is much larger than the existing corpora described above. We started from the \textit{LibriSpeech} corpus used for Automatic Speech Recognition (ASR), which has 1000 hours of speech aligned with their transcriptions \cite{Panayotov2015}. The read audiobook recordings derive from LibriVox, a collaborative project. The speech recordings are based on public domain books available on the \textit{Gutenberg Project}\footnote{\url{https://www.gutenberg.org/}}, which are distributed in \textit{LibriSpeech} along with the recordings. Our augmentation of \textit{LibriSpeech} is straightforward: we automatically aligned e-books in a foreign language (French) with English utterances of \textit{LibriSpeech}. 
This led to 236 hours of English speech aligned to French translations at utterance level (more details can be found in \cite{Alican2018}). Since English (source) transcriptions are initially available for \textit{LibriSpeech}, we also translated them using \textit{Google Translate}. To summarize, for each utterance of our 236h corpus, the following quadruplet is available: English speech signal, English transcription, French text translation 1 (from alignment of e-books) and translation 2 (from MT of English transcripts). \begin{table*}[t] \centering \begin{tabular}{|c|c|c|c||c|c|c||c|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{Corpus}} & \multicolumn{2}{c||}{Total} & \multicolumn{3}{c||}{Source (per segment)} & Target (per segment) \\ \cline{3-8} \multicolumn{2}{|l|}{} & segments & hours & frames & chars & (sub)words & chars \\ \hline \hline \multirow{4}{*}{LibriSpeech (Real)} & train 1 & \multirow{2}{*}{47271} & \multirow{2}{*}{100:00} & \multirow{2}{*}{762} & \multirow{2}{*}{111} & \multirow{2}{*}{20.7} & 143 \\ & train 2 & & & & & & 126 \\ \cline{2-8} & dev & 1071 & 2:00 & 673 & 93 & 17.9 & 110 \\ & test & 2048 & 3:44 & 657 & 95 & 18.3 & 112 \\ \hline \hline \multirow{3}{*}{BTEC (Synthetic)} & train & 19972 & 15:51 & 276 & 50 & 10 & 42 \\ & dev & 1512 & 0:59 & 236 & 40 & 8.1 & 33 \\ & test & 933 & 0:36 & 236 & 41 & 8.2 & 34 \\ \hline \end{tabular} \caption{Size of the Augmented LibriSpeech and BTEC corpora, with the average frame, character and word counts (subword count for LibriSpeech) per segment. Character counts take whitespaces into account. The source side of BTEC actually has six times this number of segments and hours, because we concatenate multiple speakers (synthetic voices). 
LibriSpeech \emph{train 1} (alignments) and \emph{train 2} (automatic translation) share the same source side.} \label{tab:corpus_size} \end{table*} \subsection{MT and AST tasks} This paper focuses on the speech translation (AST) task of audiobooks from English to French, using the Augmented LibriSpeech corpus. We compare a direct (end-to-end) approach with a cascaded approach that combines a neural speech transcription (ASR) model with a neural machine translation model (MT). The ASR and MT results are also reported as baselines for future uses of this corpus. Augmented LibriSpeech contains 236 hours of speech in total, which is split into four parts: a test set of 4 hours, a dev set of 2 hours, a clean train set of 100 hours, and an extended train set with the remaining 130 hours. Table~\ref{tab:corpus_size} gives detailed information about the size of each corpus. All segments in the corpus were sorted according to their alignment confidence scores, as produced by the alignment software used by the authors of the corpus \cite{Alican2018}. The test, dev and train sets correspond to the highest-rated alignments. The remaining data (extended train) is noisier, as it contains more incorrect alignments. The test set was manually checked, and incorrect alignments were removed. We perform all our experiments using \emph{train} only (without \emph{extended train}). Furthermore, we double the training size by concatenating the aligned references with the Google Translate references. We also mirror our experiments on the BTEC synthetic speech corpus, as a follow-up to \cite{Berard2016}. \section{End-to-End Models} \label{models} For the three tasks, we use encoder-decoder models with attention \cite{Bahdanau2015,Chorowski2015,Bahdanau2016,Berard2016,Weiss2017}. Because we want to share some parts of the model between tasks (multi-task training), the ASR and AST models use the same encoder architecture, and the AST and MT models use the same decoder architecture. 
\newcommand{\cev}[1]{\reflectbox{\ensuremath{\vec{\reflectbox{\ensuremath{#1}}}}}} \subsection{Speech Encoder} The speech encoder is a mix between the convolutional encoder presented in \cite{Weiss2017} and our previously proposed encoder \cite{Berard2016}. It takes as input a sequence of audio features: $\mathbf{x}=(x_1,\ldots,x_{T_x})\in\mathbb{R}^{T_x \times n}$. Like \cite{Berard2016}, these features are given as input to two non-linear ($tanh$) layers, which output new features of size $n'$. Like \cite{Weiss2017}, this new set of features is then passed to a stack of two convolutional layers. Each layer applies 16 convolution filters of shape $(3, 3, depth)$ with a stride of $(2, 2)$ w.r.t. time and feature dimensions; $depth$ is $1$ for the first layer, and $16$ for the second layer. We get features of shape $(T_x/2,n'/2,16)$ after the 1\textsuperscript{st} layer, and $(T_x/4,n'/4,16)$ after the 2\textsuperscript{nd} layer. This latter tensor is flattened to shape $(T'_x=T_x/4,4n')$ before being passed to a stack of three bidirectional LSTMs. This set of features has a quarter of the time length of the initial features, which speeds up training, as the complexity of the model is quadratic with respect to the source length. In our models, we use $n'=128$, which gives features of size $512$. The last bidirectional LSTM layer computes a sequence of annotations $\mathbf{h}=h_1,\cdots,h_{T'_x}$, where each annotation $h_i$ is a concatenation of the corresponding forward and backward states: $h_i=(\vec{h_i}\oplus\cev{h_i})\in\mathbb{R}^{2m}$, with $m$ the encoder cell size. This model differs from \cite{Berard2016}, which did not use convolutions, but time pooling between each LSTM layer, resulting in a shorter sequence (pyramidal encoder). \subsection{Character-level decoder} We use a character-level decoder composed of a conditional LSTM \cite{Sennrich2017}, followed by a dense layer. 
\begin{align} s_t,o_t=update^1(s'_{t-1},E(y_{t-1})) \\ c_t = look(o_t,\mathbf{h}) \\ s'_t,o'_t=update^2(s_t,c_t) \\ y_t = generate(o'_t\oplus c_t \oplus E(y_{t-1})) \end{align} where $update^1$ and $update^2$ are two LSTMs with cell size $m'$. $look$ is a vanilla global attention mechanism \cite{Bahdanau2015}, which uses a feed-forward network with one hidden layer of size $m'$. $E\in\mathbb{R}^{k\times |V|}$ is the target embedding matrix, with $k$ the embedding size and $|V|$ the vocabulary size. $c_t\in\mathbb{R}^{2m}$ is a context vector which summarizes the input states to help the decoder generate a new symbol and update its state. \noindent $generate$ uses a non-linear layer followed by a linear projection to compute a score for each symbol in the target vocabulary $V$. It then picks the target symbol $y_t$ with the highest score: \begin{align} generate(x) = \arg \max_{i=1}^{|V|} {z_i} \\ z = W_{proj} \tanh(W_{out}^T x + b_{out}) + b_{proj} \end{align} with $W_{proj}\in\mathbb{R}^{|V|\times l},b_{proj}\in\mathbb{R}^{|V|}$, $W_{out}\in\mathbb{R}^{(m'+2m+k)\times l}$, $b_{out}\in\mathbb{R}^l$, where $l$ is the output layer size. \section{Experiments} \label{exp} \subsection{Model Settings} Speech files were preprocessed using Yaafe \cite{mathieu2010}, to extract 40 MFCC features and frame energy for each frame, with a step size of 10 ms and a window size of 40 ms, following \cite{Chan2016,Berard2016}. We tokenize and lowercase all the text, and normalize the punctuation, with the Moses scripts\footnote{\url{http://www.statmt.org/moses/}}. For BTEC, the same preprocessing as \cite{Berard2016} is applied. Character-level vocabularies for LibriSpeech are of size $46$ for English (transcriptions) and $167$ for French (translations). The decoder outputs are always at the character level (for AST, MT and ASR). For the MT task, the LibriSpeech English (source) side is preprocessed into subword units \cite{Sennrich2016}.
We limit the number of merge operations to $30k$, which gives a vocabulary of size $27k$. The MT encoder for BTEC takes entire words as input. Our BTEC models use an LSTM size of $m=m'=256$, while the LibriSpeech models use a cell size of $512$, except for the speech encoder layers which use a cell size of $m=256$ in each direction. We use character embeddings of size $k=64$ for BTEC, and $k=128$ for LibriSpeech. The MT encoders are shallower, with a single bidirectional layer. The source embedding sizes for words (BTEC) and subwords (LibriSpeech) are respectively $128$ and $256$. The input layers in the speech encoders have a size of $256$ for the first layer and $n'=128$ for the second. The LibriSpeech decoders use an output layer size of $l=512$. For BTEC, we do not use any non-linear output layer, as we found that this led to overfitting. \subsection{Training settings} We train our models with Adam \cite{Kingma2015}, with a learning rate of $0.001$ and a mini-batch size of 64 for BTEC and 32 for LibriSpeech (because of memory constraints). We use variational dropout \cite{Kingma2015b}, i.e., the same dropout mask is applied to all elements in a batch at all time steps, with a rate of $0.2$ for LibriSpeech and $0.4$ for BTEC. In the MT tasks, we also drop source and target symbols at random, with probability $0.2$. Dropout is not applied on recurrent connections \cite{Zaremba2014}. We train all our models on \emph{LibriSpeech train} augmented with the Google Translate references, i.e., the source side of the corpus (speech) is duplicated, and the target side (translations) is a concatenation of the aligned references with the Google Translate references. Because of GPU memory limits, we set the maximum length to $1400$ frames for LibriSpeech input, and $300$ characters for its output. This covers about $90\%$ of the training corpus. Longer sequences are kept but truncated to the maximum size.
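As a sanity check on the encoder geometry described above (two $tanh$ input layers to $n'=128$ features, two stride-$(2,2)$ convolutions, flatten, bidirectional LSTMs with cell size $m=256$), the shape bookkeeping can be sketched as follows; the helper name is ours and, for simplicity, the sketch assumes $T_x$ is divisible by 4.

```python
def speech_encoder_shapes(T_x, n_prime=128, m=256):
    """Track tensor shapes through the speech encoder (illustrative sketch)."""
    shape = (T_x, n_prime)                # after the two tanh input layers
    shape = (T_x // 2, n_prime // 2, 16)  # after 1st conv layer, stride (2, 2)
    shape = (T_x // 4, n_prime // 4, 16)  # after 2nd conv layer, stride (2, 2)
    T_prime = shape[0]                    # T'_x = T_x / 4
    flat = shape[1] * shape[2]            # flattened: 4 * n' features per step
    annotation = 2 * m                    # forward + backward LSTM states
    return T_prime, flat, annotation

# A maximum-length input of 1400 frames becomes 350 time steps of
# 4 * n' = 512 features, and each annotation h_i has dimension 2m = 512.
print(speech_encoder_shapes(1400))
```

This makes concrete why the time downsampling matters: the attention cost, quadratic in source length, drops by a factor of 16.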
We evaluate our models on the dev set every 1000 mini-batch updates using BLEU for AST and MT, and WER for ASR, and keep the best performing checkpoint for final evaluation on the test set. Our models are implemented with TensorFlow~\cite{Abadi2015} as part of the LIG-CRIStAL NMT toolkit\footnote{The toolkit and configuration files are available at: \url{https://github.com/eske/seq2seq}}. \subsection{Results} Table \ref{tab:BTEC} presents the results for the ASR and MT tasks on BTEC and LibriSpeech. The MT task (and by extension the AST task) on LibriSpeech (translating novels) looks particularly challenging, as we observe BLEU scores around 20\%\footnote{Google Translate is also scored as a topline (22.2\%).}. \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|} \hline & Model & ASR (WER $\downarrow$) & MT (BLEU $\uparrow$) \\ \hline \multirow{3}{*}{\rotatebox{90}{BTEC}} & greedy & 14.9 & 47.4\\ & beam-search & 13.8 & 49.2\\ & ensemble & \textbf{11.3} & \textbf{50.7} \\ \hline \hline \multirow{4}{*}{\rotatebox{90}{\footnotesize LibriSpeech}} & greedy & 19.9 & 19.2 \\ & beam-search & 17.9 & 18.8 \\ & ensemble & \textbf{15.1} & 19.3 \\ & Google Translate & \cellcolor{black!20} & \textbf{22.2} \\ \hline \end{tabular} \caption{MT and ASR results on test set for \emph{BTEC} and \emph{Augmented LibriSpeech}. 
We use a beam size of 8, and ensembles of 2 models trained from scratch.} \label{tab:BTEC} \end{table} \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline & greedy & beam & ensemble & params \\ \cline{2-4} & \multicolumn{3}{c|}{Test BLEU} & (million) \\ \hline Baseline \cite{Berard2016} & 29.1 & 31.3 & 37.9$\dagger$ & 10.4 \\ \hline Cascaded & 38.9 & 40.7 & \textbf{43.8} & 7.9 + 3.4 \\ \hline End-to-End & 31.3 & 33.7 & \cellcolor{black!20} & \multirow{3}{*}{6.7} \\ \cline{1-4} Pre-trained & 33.7 & 36.3 & \multirow{2}{*}{\textbf{40.4}} & \\ \cline{1-3} Multi-task & 35.1 & 37.6 & & \\ \hline \end{tabular} \caption{Results of the AST task on \emph{BTEC test}. $\dagger$ was obtained with an ensemble of 5 models, while we use ensembles of 2 models. The non-cascaded ensemble combines the \emph{pre-trained} and \emph{multi-task} models. Contrary to \cite{Berard2016}, we only present mono-reference results.} \label{tab:BTEC-AST} \end{table} \captionsetup[figure]{aboveskip=0pt} \captionsetup[figure]{belowskip=0pt} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{multitask.pdf} \caption{Augmented LibriSpeech Dev BLEU scores for the MT task, and WER scores for the ASR task, with the initial (mono-task) models, and when multi-task training picks up.} \label{fig:multitask_BLEU} \end{figure} For Automatic Speech Translation (AST), we try four settings. The \emph{cascaded} model combines both the ASR and MT models (as a pipeline). The \emph{end-to-end} model (described in section~\ref{models}) does not make any use of source language transcripts. The \emph{pre-trained} model is identical to \emph{end-to-end}, but its encoder and decoder are initialized with our ASR and MT models. The \emph{multi-task} model is also pre-trained, but continues training for all tasks, by alternating updates as in \cite{Luong2016}, with 60\% of the updates for AST and 20\% each for ASR and MT.
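For reference, the WER figures reported for ASR follow the standard definition: word-level edit distance divided by the number of reference words. A minimal sketch (not the exact scoring script used for these experiments) looks like this:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # Standard dynamic-programming (Levenshtein) table over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i          # i deletions
    for j in range(len(h) + 1):
        d[0][j] = j          # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)
```

For example, `wer("the cat sat", "a cat sat down")` is 2/3: one substitution plus one insertion against a three-word reference.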
Tables~\ref{tab:BTEC-AST} and~\ref{tab:LIBR-AST} present the results for the end-to-end AST task on BTEC and LibriSpeech. On both corpora, we show that: (1) it is possible to train compact end-to-end AST models with a performance close to cascaded models; (2) pre-training and multi-task learning\footnote{if source transcriptions are available at training time} improve AST performance; (3) contrary to \cite{Weiss2017}, in both BTEC and LibriSpeech settings, best AST performance is observed when a sequence of symbols in the source language is used as an intermediary representation during the speech translation process (cascaded system); (4) finally, the AST results presented on LibriSpeech demonstrate that our augmented corpus provides a useful, although challenging, benchmark for end-to-end AST systems on real speech at a large scale. We hope that the baseline we established on Augmented LibriSpeech will be challenged in the future. The large improvements on MT and AST on the BTEC corpus, compared to~\cite{Berard2016}, are mostly due to our use of a better decoder, which outputs characters instead of words. \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline & greedy & beam & ensemble & params \\ \cline{2-4} & \multicolumn{3}{c|}{Test BLEU} & (million) \\ \hline Cascaded & 14.6 & 14.6 & \textbf{15.8} & 6.3 + 15.9 \\ \hline End-to-End & 12.3 & 12.9 & \multirow{3}{*}{\textbf{15.5$\dagger$}} & \multirow{3}{*}{9.4} \\ \cline{1-3} Pre-trained & 12.6 & 13.3 & & \\ \cline{1-3} Multi-task & 12.6 & 13.4 & & \\ \hline \end{tabular} \caption{AST results on \emph{Augmented LibriSpeech test}. $\dagger$ combines the end-to-end, pre-trained and multi-task models.} \label{tab:LIBR-AST} \end{table} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{AST.pdf} \caption{Dev BLEU scores on 3 models for end-to-end AST of audiobooks.
Best scores on the dev set for the end-to-end (mono-task), pre-trained and multi-task models were achieved at steps 369k, 129k and 95k, respectively.} \label{fig:AST_BLEU} \end{figure} \vspace{-.1cm} \subsection{Analysis} Figure~\ref{fig:multitask_BLEU} shows the evolution of BLEU and WER scores for the MT and ASR tasks with single models, and when we continue training them as part of a multi-task model. The multi-task procedure performs more updates on AST, which explains the degraded results, but we observe that the speech encoder and text decoder are still able to generalize well to the other tasks. Figure~\ref{fig:AST_BLEU} shows the evolution of dev BLEU scores for our three AST models on LibriSpeech. We see that pre-training helps the model converge much faster. Eventually, the end-to-end system reaches a similarly good solution, but after three times as many updates. Multi-task training does not seem to be helpful when combined with pre-training. \section{Conclusion} \label{concl} We present baseline results on end-to-end Automatic Speech Translation on a new speech translation corpus of audiobooks, and on a synthetic corpus extracted from BTEC (a follow-up to \cite{Berard2016}). We show that, while cascading two neural models for ASR and MT gives the best results, end-to-end methods that incorporate the source language transcript come close in performance. \vfill\pagebreak \bibliographystyle{IEEEbib}
\section{Experimental setup} \begin{figure} \centering% \includegraphics[width=\linewidth]{fig1} \caption{(Color online) Schematic of the experimental setup and the definition of the coordinate system. a) Sketch of the cross sections of the molecular beam and the laser beam to illustrate the working principle of the knife edge. b) Zoom into the knife edge region, showing the mechanical setup and motorization. c) Definition of the volumes $A$ and $B$, the beam radius $r$, and the width $p$ used for the theoretical limit; see text for details.} \label{fig:setup} \end{figure} A schematic of the experimental setup is shown in FIG.~\ref{fig:setup}. A pulsed molecular beam was provided by expanding a few millibar of indole and a trace of water in 80~bar of helium through a position-adjustable Even-Lavie valve~\cite{Even:JCP112:8068}. The valve was operated at a temperature of \celsius{110} and at a repetition rate of 250~Hz. Two transversely, in $X$--$Y$, adjustable conical skimmers (Beam Dynamics, model 50.8 with $\varnothing=3.0$~mm, model 40.5 with $\varnothing=1.5$~mm) were placed 6.5~cm and 30.2~cm downstream from the nozzle, respectively. The transversely adjustable electrostatic deflector was located 4.4~cm behind the tip of the second skimmer. Using the b-type electrostatic deflector~\cite{Kienitz:JCP147:024304}, the molecular beam was dispersed according to the specific quantum states of the molecular species~\cite{Chang:IRPC34:557, Filsinger:JCP131:064309, Trippel:PRA86:033202}. The vertically, $Y$, adjustable knife edge was placed 1.3~cm behind the end of the deflector. For the measurements with the knife edge, its vertical position was chosen such that the undeflected molecular beam was cut roughly in its center. For the measurements without the knife edge, it was moved vertically out of the molecular beam. A third, transversely adjustable skimmer (Beam Dynamics, model 50.8 with $\varnothing=1.5$~mm) was placed 2.5~cm downstream of the front of the knife edge.
The molecular beam entered a time-of-flight mass spectrometer (TOF-MS) centered 17.6~cm downstream of the last skimmer, where the molecules and clusters were strong-field ionized by a laser pulse with a pulse duration of 30~fs, centered at a wavelength of 800~nm, and focused to $\varnothing\approx50$~\um. FIG.~\ref{fig:setup}{a} shows a cross section, in the $X$--$Y$ plane, of the molecular beam to schematically illustrate the working principle of the knife edge. On the left, a molecular beam profile defined by the shape of a round skimmer is depicted. Its deflected part is shown by a vertical shift. On the right, the corresponding profiles are depicted for the case with the knife edge. The laser probes the molecules in the deflected part of the beam, resulting in a higher column density compared to the case without the knife edge. FIG.~\ref{fig:setup}{b} highlights the region of the setup where the knife edge was located. It depicts the knife edge with its holder, which was mounted on a motor (SmarAct SLC-1750-S-UHV) that allows the knife edge to be positioned vertically. The molecular beam is indicated by the green cylinder, which is cut in half by the knife edge. We used the separation of indole and indole-water clusters to demonstrate the advantage of using the knife edge in combination with the electrostatic deflector. FIG.~\ref{fig:deflection-profile}{a} shows the measured vertical density profiles of the undeflected and deflected molecular beam when the knife edge was used. The TOF mass spectrum was gated on specific masses, which corresponded to either parent ions or specific fragments, to obtain each individual profile. The undeflected (0~kV) profile of the signal corresponding to the indole mass of $m=117$~u is shown in dark blue.
All molecules and clusters were deflected downwards when voltages of $\pm10$~kV were applied to the deflector electrodes, as all quantum states were high-field seeking at the electric field strengths experienced inside the deflector~\cite{Chang:IRPC34:557}. The deflection profiles for the gates set to the masses of indole, \ensuremath{\text{indole}(\HHO)}\xspace, \ensuremath{\text{indole}(\HHO)_2}\xspace and \ensuremath{(\text{indole})_2}\xspace are shown in red, black, green, and orange, respectively. The profiles for \ensuremath{\text{indole}(\HHO)}\xspace, \ensuremath{\text{indole}(\HHO)_2}\xspace, and \ensuremath{(\text{indole})_2}\xspace were multiplied by a factor of five. The \ensuremath{\text{indole}(\HHO)_3}\xspace cluster was not observed in the mass spectrum. Furthermore, the profile of \ensuremath{(\text{indole})_2(\HHO)}\xspace had the same shape as the one for \ensuremath{(\text{indole})_2}\xspace and is not shown in the figure. Several edges were observed in the profiles which correspond to various molecules and fragments. Going from left to right, the outermost edge at $-1.25$~mm is attributed to \ensuremath{\text{indole}(\HHO)}\xspace because this cluster showed the largest Stark effect of all molecules and clusters under consideration and was, therefore, deflected the most~\cite{Trippel:PRA86:033202, Chang:IRPC34:557}. The shape of this edge matches the corresponding edge in the indole-ion profile, which confirms that the \ensuremath{\text{indole}(\HHO)}\xspace ion fragmented to the indole ion with a probability of $\ordsim53$~\%. The edge at $-0.9$~mm in the indole-cation signal was attributed to the indole monomer, since indole had the second largest Stark effect. The edge on top of the \ensuremath{\text{indole}(\HHO)}\xspace profile at $-0.6$~mm was produced by \ensuremath{\text{indole}(\HHO)_2}\xspace clusters which fragmented into \ensuremath{\text{indole}(\HHO)}\xspace with a probability of $\ordsim64$~\%.
A better separation of \ensuremath{\text{indole}(\HHO)}\xspace from indole and higher clusters was observed in comparison to our previous experiments on this system without the knife edge~\cite{Trippel:PRA86:033202, Chang:IRPC34:557}. Furthermore, the edge for the \ensuremath{\text{indole}(\HHO)_2}\xspace cluster has now been observed for the first time. FIG.~\ref{fig:deflection-profile}{b} shows the measured deflection profiles for indole, corrected by the known fragmentation probabilities, for the cases with and without the knife edge and with the deflector switched on (red) and off (blue). The profiles for the case without the knife edge were shifted by 0.975~mm to the left to match the edges on the left side for a better direct comparison. In both cases -- deflector on and off -- the left edge was steeper for the measurements with the knife edge. This is attributed to the higher column density as a result of the knife edge. Placing the probe laser at $-0.7$~mm in the deflected profile results in an enhancement factor of $R=1.5$ at this position. The measured molecular-beam diameter of $2r=2$~mm matches the expectation from geometric arguments assuming a point source for the molecular beam. The distances from the valve to the third skimmer and to the interaction region are 53.4~cm and 71.0~cm, respectively. This results in a magnification factor of $71.0/53.4=1.33$, in excellent agreement with the ratio between the measured molecular-beam diameter and the skimmer diameter, $2.0/1.5=1.33$. The deflected part of the molecular beam is, therefore, also expected to be far outside the geometric helium profile. Additional broadening mechanisms for the molecular beam, such as the finite temperature or deviations from a point source, are not taken into account. The influence of these contributions on the purity of the molecular beam is beyond the scope of this manuscript.
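The point-source geometry argument above amounts to simple ratios of the quoted distances and apertures; a sketch (function and variable names are ours):

```python
def point_source_magnification(d_source_to_aperture_cm, d_source_to_probe_cm):
    """Magnification of an aperture image projected from an ideal point source."""
    return d_source_to_probe_cm / d_source_to_aperture_cm

# Valve-to-third-skimmer distance 53.4 cm, valve-to-interaction-region
# distance 71.0 cm, skimmer diameter 1.5 mm (values quoted in the text).
mag = point_source_magnification(53.4, 71.0)       # ~1.33
expected_beam_diameter_mm = 1.5 * mag              # ~2.0 mm, as measured
```

The agreement between the expected and measured beam diameters is what justifies treating the valve as an approximate point source here.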
\begin{figure} \includegraphics[width=\linewidth]{fig2} \caption{(Color online) a) Column density profile with knife edge of indole (dark blue), and deflection curves of indole (red open circles), \ensuremath{\text{indole}(\HHO)}\xspace (black open triangles), \ensuremath{\text{indole}(\HHO)_2}\xspace (green open triangles), and \ensuremath{(\text{indole})_2}\xspace (orange open squares). b) Column density profiles with deflector switched off without knife edge (blue triangles), deflector switched off with knife edge (dark blue circles), deflector switched on without knife edge (light red open triangles), and deflector switched on with knife edge (red open circles).} \label{fig:deflection-profile} \end{figure} The maximum enhancement factor $R$ for the increase in column density can be estimated, assuming a uniform molecular beam emitted from a point source and a uniform deflection force, from the molecular-beam radius $r$ in the interaction region and the width $p$ of the volume relevant for the interaction with the molecules. For $p\ll{}r$ the enhancement factor is given by $R=A/B=\tfrac{3}{4}\sqrt{2r/p}$, see FIG.~\ref{fig:setup}{c}. Taking the radius of our measured molecular beam profile of $r=1.0$~mm and the diameter of the ionization laser $p=50$~\um resulted in an expected enhancement factor of $R\approx4.7$. We attribute the reduced experimentally observed enhancement factor of $R=1.5$ to the following reasons: First, the experimental molecular beam profile was not completely collimated and, therefore, the edges of the profiles are not infinitely steep. This is ascribed to the finite size of the virtual source, which we estimate to be on the order of 0.6~mm. Due to the finite source size and the geometrical constraints given by the skimmers, we furthermore expect it to be advantageous to place the knife edge behind the deflector, compared to using, e.\,g., a slit-skimmer before the deflector, since it decreases the effective virtual source size.
Secondly, the volume relevant for the interaction of the molecules with the ionization laser was unknown and might be broader than the measured intensity diameter. A third contribution to the reduced enhancement is attributed to the fact that the deflector acts as a thick lens for the dispersion of the molecular beam, which leads to a softening of the edges. A further contribution could be a misalignment of the knife edge with respect to the propagation direction of the probe laser. The combination of the knife edge with the electrostatic deflector is of general use for all molecular beam experiments that benefit from a strong separation of molecular species or a strong separation from the seed gas. The presented approach is also especially useful for applications with low count rates or restricted measurement times, e.\,g., beamtimes at large facilities such as free electron lasers (FELs), synchrotrons, or high-power-laser facilities, where typically only a few days of beamtime are available for the measurements. Furthermore, for probing a collimated molecular beam with $r=2$~mm by the generally small x-ray beams, e.\,g., $p=5$~\um, a theoretical enhancement factor, according to the model described above, of $R>20$ is obtained. Taking into account the finite virtual source size and the resulting measured reduction of the enhancement factor by about $5/1.5$ leads to an expected enhancement factor of 6, in line with preliminary results from a recent beamtime at the LCLS. \begin{acknowledgments} We acknowledge Benjamin Erk and the CAMP team for a significant equipment loan.
Besides DESY, this work has been supported by the excellence cluster ``The Hamburg Center for Ultrafast Imaging -- Structure, Dynamics and Control of Matter at the Atomic Scale'' (CUI, DFG-EXC1074), the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) through the Consolidator Grant COMOTION (ERC-Küpper-614507), by the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement 641789 ``Molecular Electron Dynamics investigated by Intense Fields and Attosecond Pulses'' (MEDEA), and by the Helmholtz Association through the Virtual Institute 419 ``Dynamic Pathways in Multidimensional Landscapes'' and the ``Initiative and Networking Fund''. J.O.\ gratefully acknowledges a fellowship by the Alexander von Humboldt Foundation. \end{acknowledgments}
\section*{Acknowledgments} The author is grateful to G. Duplan\v{c}i\'c, B. Klajn and B. Meli\'c for discussions. This work was supported by the Ministry of Science of the Republic of Croatia and by H2020 Twinning project No. 692194, ``RBI-T-WINNING''.
\section{Introduction} Differential privacy \cite{dwork2006calibrating} has emerged as a rigorous notion of privacy which allows accurate data analysis with a guaranteed bound on the increase in harm for each individual who contributes her data. Methods to guarantee differential privacy have been widely studied, and recently adopted in industry~\cite{208167,erlingsson2014rappor}.\par Two main user models have emerged for differential privacy: the central model and the local one. In the local model, each individual manages his/her own data and discloses them to a server through some differentially private mechanism. The server collects the (now private) data of each individual and combines them into a resulting data analysis. A classical use case for this model is collecting statistics from user devices, as in Google's Chrome browser~\cite{erlingsson2014rappor} and Apple's iOS-10 \cite{208167,DBLP:journals/corr/abs-1709-02753}. In the local model, there are two basic kinds of protocols: interactive and non-interactive. \citet{bassily2015local} have recently investigated the power of non-interactive differentially private protocols. These protocols are more natural for the classical use cases of the local model: both the projects from Google and Apple use the non-interactive model. Moreover, implementing efficient interactive protocols in such applications is more difficult due to network latency and communication cost. Despite being used in industry, the local model has been much less studied than the central one. Part of the reason for this is that there are intrinsic limitations in what one can do in the local model. As a consequence, many basic questions that are well studied in the central model have not yet been completely understood in the local model. In this paper, we study differentially private Empirical Risk Minimization in the non-interactive local model.
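To make the local model concrete, the canonical $\epsilon$-LDP primitive for a single bit is randomized response: each user reports the true bit with probability $\mathrm{e}^\epsilon/(1+\mathrm{e}^\epsilon)$, and the server debiases the aggregate. This is standard background, not the mechanism developed in this paper; the function names below are ours.

```python
import random
from math import exp

def randomized_response(bit, eps, rng=random):
    """Report the true bit with probability e^eps / (1 + e^eps).

    This satisfies eps-local differential privacy for a single bit:
    the ratio of report probabilities for inputs 0 and 1 is at most e^eps.
    """
    p_true = exp(eps) / (1 + exp(eps))
    return bit if rng.random() < p_true else 1 - bit

def debias(reports, eps):
    """Unbiased server-side estimate of the true mean from noisy reports.

    E[report] = (2p - 1) * mu + (1 - p), so invert that affine map.
    """
    p = exp(eps) / (1 + exp(eps))
    mean = sum(reports) / len(reports)
    return (mean - (1 - p)) / (2 * p - 1)
```

Note this protocol is non-interactive: each user sends a single bit once, with no further rounds, which is exactly the regime studied in this paper.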
Before presenting our contributions and discussing comparisons with previous works, we first discuss our motivation. \paragraph{Problem setting \cite{smith2017interaction}\cite{kasiviswanathan2011can}} Given a convex, closed and bounded constraint set $\mathcal{C}\subseteq \mathbb{R}^{p}$, a data universe $\mathcal{D}$, and a loss function $\ell:\mathcal{C}\times \mathcal{D}\mapsto \mathbb{R}$, a dataset $D=\{x_1,x_2\cdots,x_n\}\in \mathcal{D}^n$ defines an \emph{empirical risk} function: $\hat{L}(\theta;D)=\frac{1}{n}\sum_{i=1}^{n}\ell(\theta,x_i)$. When the inputs are drawn i.i.d.\ from an unknown underlying distribution $\mathcal{P}$ on $\mathcal{D}$, we can also define the \emph{population risk} function: $L_\mathcal{P}(\theta)=\mathbb{E}_{x\sim \mathcal{P}}[\ell(\theta,x)]$. We then have the following two kinds of excess risk: the empirical one, {\em i.e.} $\text{Err}_{D}(\theta_{\text{priv}})=\hat{L}(\theta_{\text{priv}};D)-\min_{\theta\in \mathcal{C}}\hat{L}(\theta;D)$, and the population one, {\em i.e.} $\text{Err}_{\mathcal{P}}(\theta_{\text{priv}})=L_{\mathcal{P}}(\theta_{\text{priv}})-\min_{\theta\in \mathcal{C}} L_{\mathcal{P}}(\theta).$\par The problem that we study in this paper is finding $\theta_{\text{priv}}\in \mathcal{C}$ under non-interactive local differential privacy (see Definition \ref{def:1}) which makes the empirical and population excess risk as low as possible. Alternatively, when the dimensionality $p$ is constant or low, we can express this problem in terms of \emph{sample complexity}: finding as small an $n$ as possible for achieving $\text{Err}_{D}\leq \alpha$ and $\text{Err}_{\mathcal{P}}\leq \alpha$, where $\alpha$ is the user-specified error tolerance (or simply the error). \paragraph{Motivation} \citet{smith2017interaction} prove the following result concerning the problem for general convex 1-Lipschitz loss functions over a bounded constraint set.
\begin{theorem}\label{theorem:1} Under the assumptions above, there is a non-interactive $\epsilon$-LDP algorithm such that for every distribution $\mathcal{P}$ on $\mathcal{D}$, with probability $1-\beta$, we have \begin{equation}\label{equation:1} \text{Err}_{\mathcal{P}}(\theta_{\text{priv}})\leq \tilde{O}\big(( \frac{\sqrt{p}\log^2(1/\beta)}{\epsilon^2 n} )^{\frac{1}{p+1}}\big). \end{equation} A similar result holds for $\text{Err}_{D}$, with computation and communication complexity at least $\Omega(n^{\frac{1}{p+1}})$ for each user. Alternatively, to achieve error $\alpha$, the sample complexity must satisfy $n=\tilde{\Omega}(\sqrt{p}c^p\epsilon^{-2} \alpha^{-(p+1)})$, where $c$ is some constant (approximately 2). More importantly, they also show that, in general, the dependence of the sample size on the dimensionality $p$, in the terms $\alpha^{-(p+1)}$ and $c^p$, is unavoidable. \end{theorem} This situation is somewhat undesirable: when the dimensionality is high and the target error is low, the dependency on $\alpha^{-(p+1)}$ could make the sample size quite large. However, several results have already shown that for some specific loss functions, the exponential dependency on the dimensionality can be avoided. For example, \citet{smith2017interaction} show that, in the case of linear regression, there is a non-interactive $(\epsilon,\delta)$-LDP algorithm\footnote{Although these two results are formulated for non-interactive $(\epsilon,\delta)$-LDP, in the rest of the paper we will focus on non-interactive $\epsilon$-LDP algorithms.} whose sample complexity for achieving error $\alpha$ for the empirical risk is $n=\Omega(p\log(1/\delta)\epsilon^{-2}\alpha^{-2})$.
Similarly, \citet{DBLP:conf/icml/0007MW17} showed that for logistic regression, if the sample complexity satisfies $n> O\big(( \frac{8r}{\alpha})^{4r\log\log(8r/\alpha)}(\frac{4r}{\epsilon})^{2cr\log(8r/\alpha)+2}(\frac{1}{\alpha^2 \epsilon^2})\big),$ where $c$ and $r$ are independent of $p$, then there is a non-interactive $(\epsilon,\delta)$-LDP algorithm such that $\text{Err}_{\mathcal{P}}(\theta_{\text{priv}})\leq \alpha$. \par In this paper we first study the following natural questions: $\RN{1})$ Can we get an algorithm which has lower sample complexity than in Theorem \ref{theorem:1}? $\RN{2})$ From the discussion above, we have a gap between the general case and the case of specific loss functions. Can we give natural conditions on the loss function that guarantee non-interactive $\epsilon$-LDP with sample complexity that is not exponential in the dimensionality $p$? $\RN{3})$ As we can see from the above, the computation and communication costs are relatively high when $n$ is large; can we reduce them to constant? $\RN{4})$ Finally, we consider the problem in the high dimensional case. \citet{smith2017interaction} assumes that the dimensionality is low or constant compared with $n$; however, in machine learning it is common to be in the high dimensional case, that is, $n\ll p$. We can see that the above bound is meaningless in this case. So the question is how we can get a smaller upper bound. \paragraph{Our Contributions} \begin{enumerate} \item For the low (constant) dimensional case, we first show that there is a non-interactive $\epsilon$-LDP algorithm such that, if the loss function is $(8, T)$-smooth (Definition \ref{def:5}) and $n=\tilde{\Omega}\big( (c_0p^{\frac{1}{4}})^p\alpha^{-(2+\frac{p}{2})}\epsilon^{-2}\big)$, where $c_0$ is a universal constant, then the empirical excess risk satisfies $\text{Err}_{D}\leq \alpha$.
If the loss function is $(\infty, T)$-smooth, then when $n\geq \tilde{\Omega}(4^{p(p+1)}D^2_p p\epsilon^{-2}\alpha^{-4})$, we have empirical excess risk $\text{Err}_{D}\leq \alpha$, where $D_p$ depends only on $p$. Interestingly, to obtain this result we do not need the loss function to be convex. However, if the loss function is convex and 1-Lipschitz, bounds on the population excess risk can also be achieved. Note that when $\alpha\leq O(\frac{1}{p})$ the sample complexity of our first result is lower than that in \cite{smith2017interaction}; also, in the latter result the dependence on $\alpha$ is $\alpha^{-4}$, rather than $\alpha^{-(p+1)}$. Our method is based on Bernstein polynomial approximation. \item Next, we address the efficiency issue, which has not been well studied before \cite{smith2017interaction}. Following an approach similar to~\cite{bassily2015local}, we propose an algorithm for our loss functions which has only $1$-bit communication cost and $O(1)$ computation cost for each client, and which achieves asymptotically the same error bound as the original one. Additionally, we give a novel analysis for the server. This shows that if the loss function is convex and Lipschitz and the convex set satisfies some natural conditions, then we have an algorithm which achieves an error bound of $O(p\alpha)$ when $n$ is the same as in the previous part; moreover, the running time is polynomial in $\frac{1}{\alpha}$ if the loss function is $(\infty, T)$-smooth, whereas it is exponential in $p$ in \cite{smith2017interaction}. \item Later, we show the generality of our technique by applying polynomial approximation to other problems. We give a non-interactive LDP algorithm for answering the class of $k$-way marginal queries and the class of smooth queries, by using different types of polynomial approximation (details are in the Appendix).
\item For the high dimensional case, we show that if the loss function is a convex generalized linear function, then we have an $\epsilon$-LDP algorithm whose risk bound depends only on $n$ and the Gaussian width of $\mathcal{C}$; this is much smaller than the bound in \cite{smith2017interaction}. When $\mathcal{C}$ is the $\ell_1$-norm ball or the probability simplex, we show the bound depends only on $n$ and $\log p$ instead of $p$. \end{enumerate} \begin{table*}[h] \large \begin{center} \resizebox{1\linewidth}{!}{% \begin{tabular}[t]{ |p{1.5cm}|p{4cm}|p{2.5cm}|p{2cm}|p{2cm}|p{3cm}|} \hline Method & Sample Complexity (omitting $\text{Poly}(p)$ terms) & Communication Cost (each user) & Computation Cost (each user) & Running time for the server & Assumption\\[0.5ex] \hline Claim 4 in \cite{smith2017interaction} & $\tilde{\Omega}(4^p\alpha^{-(p+2)}\epsilon^{-2})$ & $1$ & $O(1)$ & $O\big((\frac{1}{\alpha})^p\big)$ & Lipschitz\\ [1ex] \hline Theorem 10 in \cite{smith2017interaction} & $\tilde{\Omega}(2^p\alpha^{-(p+1)}\epsilon^{-2})$ & $\Omega(n^{\frac{1}{p+1}})$ & $\Omega(n^{\frac{1}{p+1}})$ & Not mentioned & Lipschitz and convex \\ [1ex] \hline \textbf{This Paper} & $\tilde{\Omega}\big( (c_0 p^{\frac{1}{4}})^p\alpha^{-(2+\frac{p}{2})}\epsilon^{-2}\big)$ & $1$ & $O(1)$ & $O( (\frac{1}{\alpha})^{\frac{p}{2}})$ & $(8, T)$-smooth \\ [1ex] \hline \textbf{This Paper} & $\tilde{\Omega}(4^{p(p+1)}D^2_p\epsilon^{-2}\alpha^{-4})$ & $1$ & $O(1)$ & $O\big(\text{Poly}(\frac{1}{\alpha})\big)$ & $(\infty, T)$-smooth \\ [1ex] \hline \end{tabular}} \end{center} \caption{Comparison with previous works on the empirical risk in the low dimensional case. We can see that when the error satisfies $\alpha \leq O(\frac{1}{p})$, the sample complexity in the $(8, T)$-smooth case is lower than in previous works.
When the error satisfies $\alpha\leq O(\frac{1}{16^p})$, the sample complexity in the $(\infty, T)$-smooth case is lower than in previous works.} \label{Table:1} \end{table*} We list some of our results in Table \ref{Table:1}. Due to the space limit, all the proofs and some details of the algorithms can be found in the Appendix. We also note that, for convenience of presentation, many of the upper bounds are quite loose. \section{Related Works} ERM in the local model of differential privacy has been studied in \cite{kasiviswanathan2011can,beimel2008distributed,duchi2017minimax,duchi2013local,DBLP:conf/icml/0007MW17,smith2017interaction}. \citet{kasiviswanathan2011can} showed a general equivalence between learning in the local model and learning in the statistical query model. \citet{duchi2017minimax,duchi2013local} gave the lower bound $O(\frac{\sqrt{p}}{\epsilon\sqrt{n}})$ and optimal algorithms for general convex optimization; however, their optimal procedure needs many rounds of interaction. The works most related to ours are \cite{DBLP:conf/icml/0007MW17,smith2017interaction}. \citet{DBLP:conf/icml/0007MW17} considered some specific loss functions in high dimensions, such as sparse linear regression and kernel ridge regression. Note that although that work also studied a class of loss functions ({\em i.e.,} smooth generalized linear loss functions) and used the polynomial approximation approach, the functions investigated in our paper are more general, including linear regression and logistic regression, and the approximation techniques are quite different. \citet{smith2017interaction} studied general convex loss functions for the population excess risk and showed that the exponential dependence on the dimensionality is unavoidable.
In this paper, we show that such a dependence in the $\alpha$ term is actually avoidable for a class of loss functions, and this even holds for non-convex loss functions, which is a big difference from all existing works; we also consider the high dimensional case. In addition, our algorithms are simpler and more efficient. The polynomial approximation approach has been used in the central model in \cite{alda2017bernstein,wang2016differentially,thaler2012faster,DBLP:conf/icml/0007MW17}, and dimension reduction has been used in the local model in \cite{bassily2015local,DBLP:conf/icml/0007MW17}. \section{Preliminaries} \paragraph{Differential privacy in the local model.} In LDP, we have a data universe $\mathcal{D}$, $n$ players, each holding a private data record $x_i\in \mathcal{D}$, and a server that is in charge of coordinating the protocol. An LDP protocol proceeds in $T$ rounds. In each round, the server sends a message, which we sometimes call a query, to a subset of the players, requesting them to run a particular algorithm. Based on the queries, each player $i$ in the subset selects an algorithm $Q_i$, runs it on her data, and sends the output back to the server. \begin{definition}\cite{kasiviswanathan2011can,smith2017interaction}\label{def:1} An algorithm $Q$ is $\epsilon$-locally differentially private (LDP) if for all pairs $x,x'\in \mathcal{D}$, and for all events $E$ in the output space of $Q$, we have $\text{Pr}[Q(x)\in E]\leq e^{\epsilon}\text{Pr}[Q(x')\in E].$ A multi-player protocol is $\epsilon$-LDP if for all possible inputs and runs of the protocol, the transcript of player $i$'s interaction with the server is $\epsilon$-LDP. If $T=1$, we say that the protocol is $\epsilon$ non-interactive LDP.
\end{definition} \begin{wrapfigure}{R}{0.5\linewidth}\vspace{-1cm} \begin{minipage}{\linewidth} \begin{algorithm}[H] \caption{1-dim LDP-AVG} \label{alg:1} \begin{algorithmic}[1] \State {\bfseries Input:} Player $i\in [n]$ holding data $v_i\in [0,b]$, privacy parameter $\epsilon$. \For{Each Player $i$} \State Send $z_i=v_i+\text{Lap}(\frac{b}{\epsilon})$ \EndFor \For{The Server} \State Output $a=\frac{1}{n}\sum_{i=1}^{n}z_i$. \EndFor \end{algorithmic} \end{algorithm} \vspace{-0.7cm} \end{minipage} \end{wrapfigure} Since we only consider non-interactive LDP throughout the paper, we will write LDP for non-interactive LDP below. As an example that will be useful in the sequel, the next lemma gives an $\epsilon$-LDP algorithm for computing a 1-dimensional average. \begin{lemma}\label{lemma:1} Algorithm \ref{alg:1} is $\epsilon$-LDP. Moreover, if player $i\in [n]$ holds value $v_i\in [0,b]$ and $n>\log \frac{2}{\beta}$ with $0<\beta<1$, then, with probability at least $1-\beta$, the output $a\in \mathbb{R}$ satisfies: $|a-\frac{1}{n}\sum_{i=1}^{n}v_i|\leq \frac{2b\sqrt{\log \frac{2}{\beta}}}{\sqrt{n}\epsilon}$. \end{lemma} \paragraph{Bernstein polynomials and approximation.} We give here some basic definitions that will be used in the sequel; more details can be found in \cite{alda2017bernstein,lorentz1986bernstein,micchelli1973saturation}. \begin{definition}\label{def:2} Let $k$ be a positive integer. The Bernstein basis polynomials of degree $k$ are defined as $b_{v,k}(x)=\binom{k}{v}x^{v}(1-x)^{k-v}$ for $v=0,\cdots,k$. \end{definition} \begin{definition}\label{def:3} Let $f:[0,1]\mapsto \mathbb{R}$ and let $k$ be a positive integer. Then, the Bernstein polynomial of $f$ of degree $k$ is defined as $B_k(f;x)=\sum_{v=0}^{k}f(v/k)b_{v,k}(x)$. We denote by $B_k$ the Bernstein operator $B_k(f)(x)=B_k(f;x)$. \end{definition} \begin{definition}\cite{micchelli1973saturation} \label{def:4} Let $h$ be a positive integer.
The iterated Bernstein operator of order $h$ is defined as $B_k^{(h)}=I-(I-B_k)^h=\sum_{i=1}^{h}\binom{h}{i}(-1)^{i-1}B_k^i$, where $I=B_k^0$ denotes the identity operator and $B_k^i$ is defined iteratively as $B_k^i=B_k\circ B_k^{i-1}$. The iterated Bernstein polynomial of order $h$ can be computed as $B_k^{(h)}(f;x)=\sum_{v=0}^{k}f(\frac{v}{k})b_{v,k}^{(h)}(x),$ where $b^{(h)}_{v,k}(x)=\sum_{i=1}^{h}\binom{h}{i}(-1)^{i-1}B^{i-1}_k(b_{v,k};x)$. \end{definition} The iterated Bernstein operator can well approximate multivariate $(h,T)$-smooth functions. \begin{definition}\cite{micchelli1973saturation} \label{def:5} Let $h$ be a positive integer and $T>0$ be a constant. A function $f:[0,1]^{p}\mapsto \mathbb{R}$ is $(h,T)$-smooth if it is in class $\mathcal{C}^{h}([0,1]^{p})$ and its partial derivatives up to order $h$ are all bounded by $T$. We say it is $(\infty,T)$-smooth if it is $(h,T)$-smooth for every $h\in \mathbb{N}$. \end{definition} \begin{definition}\label{def:6} Assume $f:[0,1]^p \mapsto \mathbb{R}$ and let $k_1,\cdots,k_p,h$ be positive integers. The multivariate iterated Bernstein polynomial of order $h$ at $y=(y_1,\ldots,y_p)$ is defined as: \begin{equation}\label{equation:2} B^{(h)}_{k_1,\ldots, k_p}(f;y)=\sum_{v_1=0}^{k_1}\cdots\sum_{v_p=0}^{k_p}f(\frac{v_1}{k_1},\ldots,\frac{v_p}{k_p})\prod_{i=1}^p b^{(h)}_{v_i,k_i}(y_i). \end{equation} We denote $B^{(h)}_k=B^{(h)}_{k_1,\ldots, k_p}(f;y)$ if $k=k_1=\cdots=k_p$. \end{definition} \begin{theorem}\cite{alda2017bernstein}\label{theorem:2} If $f:[0,1]^p\mapsto \mathbb{R}$ is a $(2h,T)$-smooth function, then for all positive integers $k$ and $y\in[0,1]^p$, we have $|f(y)-B_k^{(h)}(f;y)|\leq O(pTD_h k^{-h})$, where $D_h$ is a universal constant depending only on $h$. \end{theorem} \paragraph{Our settings} We conclude this section by making explicit the setting that we will consider throughout the paper.
We assume that there is a constraint set $\mathcal{C}\subseteq [0,1]^p$ and that, for every $x\in \mathcal{D}$ and $\theta\in \mathcal{C}$, $\ell(\cdot,x)$ is well defined on $[0,1]^p$ and $\ell(\theta,x)\in [0,1]$. These closed intervals can be extended to arbitrary bounded closed intervals. Our settings are similar to the `Typical Settings' in \cite{smith2017interaction}, where $\mathcal{C}\subseteq [0,1]^p$ appears in their Theorem 10, and $\ell(\theta,x)\in [0,1]$ follows from their 1-Lipschitz requirement and $\|\mathcal{C}\|_2\leq 1$. \section{Low Dimensional Case} Definition \ref{def:6} and Theorem \ref{theorem:2} tell us that if we know the values of the empirical risk function, {\em i.e.} the average of the sum of loss functions, on each of the grid points $(\frac{v_1}{k},\frac{v_2}{k},\cdots,\frac{v_p}{k})$, where $(v_1,\cdots,v_p)\in \mathcal{T}=\{0,1,\cdots,k\}^p$ for some large $k$, then we can approximate it well. Our main observation is that this can be done in the local model by estimating the average of the sum of loss functions on each of the grid points using Algorithm~\ref{alg:1}. This is the idea of Algorithm \ref{alg:2}. \begin{algorithm} \caption{Local Bernstein Mechanism} \label{alg:2} \begin{algorithmic}[1] \State {\bfseries Input:} Player $i\in [n]$ holding data $x_i\in \mathcal{D}$, public loss function $\ell:[0,1]^p \times \mathcal{D}\mapsto [0,1]$, privacy parameter $\epsilon>0$, and parameter $k$. \State Construct the grid $\mathcal{T}=\{\frac{v_1}{k},\ldots,\frac{v_p}{k}\}_{\{v_1,\ldots,v_p\}}$, where $\{v_1,\ldots,v_p\}\in\{0,1,\cdots,k\}^p$. \For {Each grid point $v=(\frac{v_1}{k},\ldots,\frac{v_p}{k})\in \mathcal{T}$} \For{Each Player $i\in [n]$} \State Calculate $\ell(v;x_i)$. \EndFor \State Run Algorithm \ref{alg:1} with privacy parameter $\frac{\epsilon}{(k+1)^p}$ and $b=1$, and denote the output by $\tilde{L}(v;D)$.
\EndFor \For{The Server} \State Construct the Bernstein polynomial, as in (\ref{equation:2}), for the perturbed empirical loss $\tilde{L}(v;D)$. Denote by $\tilde{L}(\cdot;D)$ the corresponding function. \State Compute $\theta_{\text{priv}}=\arg\min_{\theta\in \mathcal{C}}\tilde{L}(\theta;D)$. \EndFor \end{algorithmic} \end{algorithm} \begin{theorem}\label{theorem:3} For $\epsilon>0, 0<\beta<1$, Algorithm \ref{alg:2} is $\epsilon$-LDP. Assume that the loss function $\ell(\cdot,x)$ is $(2h,T)$-smooth for all $x\in \mathcal{D}$, for some positive integer $h$ and constant $T$. If $n,\epsilon$ and $\beta$ satisfy $n=\Omega\Big (\frac{\log \frac{1}{\beta}4^{p(h+1)}}{\epsilon^2 D_{h}^2}\Big )$, then setting $k=O\Big((\frac{D_h\sqrt{pn}\epsilon}{2^{(h+1)p}\sqrt{\log \frac{1}{\beta}}})^{\frac{1}{h+p}}\Big)$ we have, with probability at least $1-\beta$: \begin{equation}\label{equation:3} \text{Err}_{D}(\theta_{\text{priv}})\leq \tilde{O}\Big (\frac{\log^{\frac{h}{2(h+p)}} (\frac{1}{\beta}) D_h^{\frac{p}{p+h}}p^{\frac{p}{2(h+p)}}2^{(h+1)p\frac{h}{h+p}}}{n^{\frac{h}{2(h+p)}}\epsilon^{\frac{h}{h+p}}}\Big ), \end{equation} where $\tilde{O}$ hides the $\log$ and $T$ terms. \end{theorem} From (\ref{equation:3}) we can see that in order to achieve error $\alpha$, the sample complexity needs to be $n=\tilde{\Omega}(\log \frac{1}{\beta}D_h^{\frac{2p}{h}}p^{\frac{p}{h}}4^{(h+1)p}\epsilon^{-2}\alpha^{-(2+\frac{2p}{h})})$. As particular cases, we have the following.
\begin{corollary}\label{col1} If the loss function $\ell(\cdot,x)$ is $(8,T)$-smooth for all $x\in \mathcal{D}$ for some constant $T$, and if $n,\epsilon, \beta, k$ satisfy the conditions in Theorem~\ref{theorem:3} with $h=4$, then with probability at least $1-\beta$, the sample complexity to achieve error $\alpha$ is $n=\tilde{O}\big(\alpha^{-(2+\frac{p}{2})}\epsilon^{-2}(4^5\sqrt{D_4}p^{\frac{1}{4}})^p\big)$, whereas for general convex loss functions $n=\tilde{\Omega}\big(\alpha^{-(p+1)}\epsilon^{-2}2^p\big)$ in Theorem \ref{theorem:1}. We can easily see that when the error satisfies $\alpha \leq O(\frac{1}{p})$, this sample complexity is lower than that in \cite{smith2017interaction}; we note that this case frequently appears in real applications (see below). \end{corollary} \begin{corollary}\label{col2} If the loss function $\ell(\cdot,x)$ is $(\infty,T)$-smooth for all $x\in \mathcal{D}$ for some constant $T$, and if $n,\epsilon, \beta, k$ satisfy the conditions in Theorem~\ref{theorem:3} with $h=p$, then with probability at least $1-\beta$, the output $\theta_{\text{priv}}$ of Algorithm \ref{alg:2} satisfies: $\text{Err}_{D}(\theta_{\text{priv}})\leq \tilde{O}\Big (\frac{\log^{\frac{1}{4}} (\frac{1}{\beta}) D_p^{\frac{1}{2}}p^{\frac{1}{4}}\sqrt{2}^{(p+1)p}}{n^{\frac{1}{4}}\epsilon^{\frac{1}{2}}}\Big)$, where $\tilde{O}$ hides the $\log$ and $T$ terms. So, to achieve error $\alpha$ with probability at least $1-\beta$, the sample complexity is: \begin{equation}\label{equation:4} n=\tilde{\Omega}\Big (\max\{4^{p(p+1)}\log(\frac{1}{\beta})D_p^2 p \epsilon^{-2}\alpha^{-4}, \frac{\log \frac{1}{\beta}4^{p(p+1)}}{\epsilon^2 D_{p}^2} \}\Big ). \end{equation} \end{corollary} It is worth noticing from (\ref{equation:3}) that, as the ratio $\frac{h}{p}$ grows, the exponent of $\frac{1}{\alpha}$ in the sample complexity decreases. Thus, for loss functions that are $(\infty,T)$-smooth, we can get a smaller dependency than the term $\alpha^{-4}$ in (\ref{equation:4}).
For example, if we take $h=2p$, then the sample complexity is $n=\Omega(\max\{c_2^{p^2}\log \frac{1}{\beta}D_{2p} \sqrt{p} \epsilon^{-2}\alpha^{-3}, \frac{\log \frac{1}{\beta}c^{p^2}}{\epsilon^2 D_{2p}^2} \})$ for some constants $c, c_2$. When $h\rightarrow \infty$, the dependency on the error becomes $\alpha^{-2}$, which is the optimal bound, even for convex functions. Our analysis of the empirical excess risk does not use the convexity assumption. While this gives a bound which is not optimal, even for $p=1$, it also means that our result holds for non-convex loss functions and constrained domain sets, as long as they are smooth enough.\par By (\ref{equation:4}), we can see that our sample complexity is lower than that in \cite{smith2017interaction} when $\alpha\leq O(\frac{1}{16^p})$ (we assume $D_p$ is a constant here, since $p$ is a constant). We note that this is often the case when studying ERM in low dimensions: in order to get the best performance, one usually wants to achieve an error of $\alpha= 10^{-10}\sim10^{-14}$ \cite{johnson2013accelerating}. \par Using the convexity assumption on the loss function and a lemma in \cite{shalev2009stochastic}, we can also give a bound on the population excess risk; details are in the supplemental material. Corollaries \ref{col1} and \ref{col2} provide answers to our motivating questions. That is, for loss functions which are $(8, T)$-smooth, we can get a lower sample complexity, and if they are $(\infty,T)$-smooth, there is an $\epsilon$-LDP algorithm for the empirical and population excess risks achieving error $\alpha$ with a sample complexity whose $\alpha$ term is independent of the dimensionality $p$. This result does not contradict the results by \citet{smith2017interaction}: the example they provide, whose sample complexity must depend on $\alpha^{-\Omega(p)}$ to achieve error $\alpha$, is actually non-smooth.
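To make the mechanism and the role of smoothness concrete, the following minimal sketch (our own illustration, not the paper's experiment) runs a 1-dimensional analogue of the Local Bernstein Mechanism with the plain $h=1$ Bernstein operator on a smooth, non-convex loss; the loss, the constants, and the grid search over candidates are illustrative choices only:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

# Toy 1-dimensional run of the Local Bernstein Mechanism: the loss below is
# smooth but NON-convex, illustrating that convexity plays no role in the
# empirical-risk analysis.  All constants and the loss are illustrative only.
n, k, eps = 500_000, 16, 1.0
data = rng.uniform(0.375, 0.625, size=n)                 # private records x_i
loss = lambda th, x: 0.5 * (1.0 - np.cos(2.0 * np.pi * (th - x)))  # in [0, 1]

grid = np.arange(k + 1) / k                              # grid points v/k
# Each player perturbs its loss value at every grid point with Laplace noise
# of scale (k+1)/eps (budget eps split over the k+1 points, b = 1); the
# server averages the n reports per grid point, as in the 1-dim LDP-AVG.
noisy_avg = np.array([
    (loss(v, data) + rng.laplace(scale=(k + 1) / eps, size=n)).mean()
    for v in grid
])

# The server builds the (plain, h = 1) Bernstein polynomial from the noisy
# grid values and minimizes it over [0, 1] by a fine grid search.
def bernstein(vals, t):
    kk = len(vals) - 1
    return sum(vals[v] * comb(kk, v) * t ** v * (1.0 - t) ** (kk - v)
               for v in range(kk + 1))

candidates = np.linspace(0.0, 1.0, 1001)
theta_priv = candidates[np.argmin([bernstein(noisy_avg, t) for t in candidates])]
# The data are centred at 0.5, where the true empirical risk is minimized,
# and the private minimizer lands close to it despite the noise.
```

Note that nothing in the sketch exploits convexity of the loss: the grid search over the Bernstein surrogate works for any sufficiently smooth loss, mirroring the discussion above.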
\par In our result for the $(\infty, T)$-smooth case, as in the one by~\citet{smith2017interaction}, the sample complexity still contains a term $c^p$ for some constant $c$; furthermore, ours also depends on the term $D_p$. It remains an open question which conditions would allow a sample complexity independent of these terms. We leave this question for future work and focus instead on the efficiency and further applications of our method. \section{More Efficient Algorithms} \label{sec:efficient} Algorithm \ref{alg:2} has computation time and communication complexity for each player which are exponential in the dimensionality. This is clearly problematic for any realistic practical application. For this reason, in this section, we study more efficient algorithms. For convenience, in this part we only focus on the case of $(\infty, T)$-smooth loss functions; the results can be easily extended to the general case. Consider the following lemma, showing an $\epsilon$-LDP algorithm for computing a $p$-dimensional average (notice the extra conditions on $n$ and $p$ compared with Lemma \ref{lemma:1}). \begin{lemma}\cite{DBLP:journals/corr/NissimS17aa}\label{lemma:3} Consider players $i\in [n]$, each holding data $v_i\in \mathbb{R}^p$ with each coordinate in $[0,b]$. Then for $0<\beta<1,\, 0<\epsilon$ such that $n\geq 8p\log (\frac{8p}{\beta})$ and $\sqrt{n}\geq \frac{12}{\epsilon}\sqrt{\log \frac{32}{\beta}}$, there is an $\epsilon$-LDP algorithm, LDP-AVG, such that with probability at least $1-\beta$, the output $a\in \mathbb{R}^p$ satisfies: $\max_{j\in[p]}|a_j-\frac{1}{n}\sum_{i=1}^{n}[v_i]_j|\leq O(\frac{bp}{\sqrt{n}\epsilon}\sqrt{\log \frac{p}{\beta}})$. Moreover, the computation cost for each user is $O(1)$\footnote{Note that here we use a weaker version of their result}.
\end{lemma} By using this lemma and by discretizing the grid with some interval steps, we can design an algorithm which requires $O(1)$ computation time and $O(\log n)$-bit communication per player (see Appendix). However, we would like to do even better and obtain constant communication complexity. Instead of discretizing the grid, we apply a technique, first proposed by \citet{bassily2015local}, which permits to transform any `sampling resilient' $\epsilon$-LDP protocol into a protocol with $1$-bit communication complexity. Roughly speaking, a protocol is sampling resilient if its output on any dataset $S$ can be approximated well by its output on a random subset of half of the players. Since our algorithm only uses the LDP-AVG protocol, we can show that it is indeed sampling resilient. Inspired by this result, we propose Algorithm~\ref{alg1:3} and obtain the following theorem. \begin{theorem}\label{thm:5} For $\epsilon\leq\ln 2$ and $0<\beta<1$, Algorithm \ref{alg1:3} is $\epsilon$-LDP. If the loss function $\ell(\cdot,x)$ is $(\infty,T)$-smooth for all $x\in \mathcal{D}$ and $n=\Omega(\max\{\frac{\log \frac{1}{\beta}4^{p(p+1)}}{\epsilon^2 D_{p}^2}, p(k+1)^p\log (k+1), \frac{1}{\epsilon^2}\log \frac{1}{\beta}\})$, then by setting $k=O\big((\frac{D_p\sqrt{pn}\epsilon}{2^{(p+1)p}\sqrt{\log \frac{1}{\beta}}})^{\frac{1}{2p}}\big)$, the results in Corollary \ref{col2} hold with probability at least $1-4\beta$. Moreover, for each player the time complexity is $O(1)$, and the communication complexity is $1$ bit. \end{theorem} \begin{algorithm} \caption{Player-Efficient Local Bernstein Mechanism with 1-bit communication per player} \label{alg1:3} \begin{algorithmic}[1] \State {\bfseries Input:} Player $i\in [n]$ holding data $x_i\in \mathcal{D}$, public loss function $\ell:[0,1]^p \times \mathcal{D}\mapsto [0,1]$, privacy parameter $\epsilon\leq \ln 2$, and parameter $k$.
\State {\bfseries Preprocessing:} \State Generate $n$ independent public strings\\ $y_1=\text{Lap}(\frac{1}{\epsilon}), \cdots, y_n=\text{Lap}(\frac{1}{\epsilon})$. \State Construct the grid $\mathcal{T}=\{\frac{v_1}{k},\ldots,\frac{v_p}{k}\}_{\{v_1,\ldots,v_p\}}$, where $\{v_1,\ldots,v_p\}\in\{0,1,\cdots,k\}^p$. \State Partition $[n]$ randomly into $d=(k+1)^p$ subsets $I_1,I_2,\cdots,I_d$, and associate each $I_l$ to a grid point $\mathcal{T}(l)\in \mathcal{T}$. \For{Each Player $i\in[n]$} \State Find $I_{l}$ such that $i\in I_{l}$. Calculate $v_i=\ell(\mathcal{T}(l);x_i)$. \State Compute $p_i=\frac{1}{2}\frac{\text{Pr}[v_i+\text{Lap}(\frac{1}{\epsilon})=y_i]}{\text{Pr}[\text{Lap}(\frac{1}{\epsilon})=y_i]}$ \State Sample a bit $b_i$ from $\text{Bernoulli}(p_i)$ and send it to the server. \EndFor \For{The Server} \For {$i=1\cdots n$} \State If $b_i=1$, set $\tilde{z_i}=y_i$; otherwise set $\tilde{z_i}=0$. \EndFor \For {each $l\in[d]$} \State Compute $v_{l}=\frac{2}{|I_{l}|}\sum_{i\in I_{l}}\tilde{z_i}$ \State Denote by $(\frac{v_1}{k},\ldots,\frac{v_p}{k})\in \mathcal{T}$ the grid point corresponding to $I_l$, and set $\tilde{L}((\frac{v_1}{k},\cdots,\frac{v_p}{k});D)=v_{l}$. \EndFor \State Construct the Bernstein polynomial for the perturbed empirical loss $\tilde{L}$ as in Algorithm \ref{alg:2}. Denote by $\tilde{L}(\cdot;D)$ the corresponding function. \State Compute $\theta_{\text{priv}}=\arg\min_{\theta\in \mathcal{C}}\tilde{L}(\theta;D)$. \EndFor \end{algorithmic} \end{algorithm} Now we study the algorithm from the server's complexity perspective. The polynomial construction time complexity is $O(n)$; the most inefficient part is finding $\theta_{\text{priv}}=\arg\min_{\theta\in \mathcal{C}}\tilde{L}(\theta;D)$.
In fact, this function may be non-convex; but unlike general non-convex functions, it can be $\alpha$-uniformly approximated by a convex function $\hat{L}(\cdot;D)$ if the loss function is convex (by the proof of Theorem \ref{theorem:3}), although we do not have access to it. Thus, we can see this problem as an instance of Approximately-Convex Optimization, which has been studied recently by \citet{risteski2016algorithms}. \begin{definition}\cite{risteski2016algorithms}\label{def:7} We say that a convex set $\mathcal{C}$ is $\mu$-well-conditioned for $\mu\geq 1$ if there exists a function $F:\mathbb{R}^p\mapsto \mathbb{R}$ such that $\mathcal{C}=\{x|F(x)\leq 0\}$ and for every $x\in \partial \mathcal{C}: \frac{\|\nabla^2F(x)\|_2}{\|\nabla F(x)\|_2}\leq \mu$. \end{definition} \begin{lemma}[Theorem 3.2 in \cite{risteski2016algorithms}]\label{lemma:4} Let $\epsilon,\Delta$ be two real numbers such that $\Delta\leq \max\{\frac{\epsilon^2}{\mu\sqrt{p}},\frac{\epsilon}{p}\}\times \frac{1}{16348}$. Then, there exists an algorithm $\mathcal{A}$ such that for any given $\Delta$-approximately convex function $\tilde{f}$ over a $\mu$-well-conditioned convex set $\mathcal{C}\subseteq\mathbb{R}^p$ of diameter 1 (that is, there exists a 1-Lipschitz convex function $f:\mathcal{C}\mapsto \mathbb{R}$ such that for every $x\in \mathcal{C}, |f(x)-\tilde{f}(x)|\leq \Delta$), $\mathcal{A}$ returns a point $\tilde{x}\in\mathcal{C}$ with probability at least $1-\delta$ in time $\text{Poly}(p,\frac{1}{\epsilon},\log \frac{1}{\delta})$ with the guarantee $\tilde{f}(\tilde{x})\leq \min_{x\in \mathcal{C}}\tilde{f}(x)+\epsilon$. \end{lemma} Based on Lemma \ref{lemma:4} (applied to $\tilde{L}(\theta;D)$) and Corollary \ref{col2}, and taking $\epsilon=O(p\alpha)$, we have the following.
\begin{theorem}\label{thm:6} Under the conditions of Corollary \ref{col2}, assume that $n=\tilde{\Omega}(4^{p(p+1)}\log(1/\beta)D_p^2 p \epsilon^{-2}\alpha^{-4})$, that the loss function $\ell(\cdot, x)$ is $1$-Lipschitz and convex for every $x\in \mathcal{D}$, and that the constraint set $\mathcal{C}$ is convex with $\|\mathcal{C}\|_2\leq 1$ and satisfies the $\mu$-well-conditioned property (see Definition \ref{def:7}). If the error $\alpha$ satisfies $\alpha\leq C\frac{\mu}{p\sqrt{p}}$ for some universal constant $C$, then there is an algorithm $\mathcal{A}$ which runs in $\text{Poly}(n, \frac{1}{\alpha},\log \frac{1}{\beta})$ time\footnote{Note that since here $n$ is at least exponential in $p$, the algorithm is not fully polynomial.} for the server, and with probability $1-2\beta$ the output $\tilde{\theta}_{\text{priv}}$ of $\mathcal{A}$ satisfies $\tilde{L}(\tilde{\theta}_{\text{priv}};D)\leq \min_{\theta\in \mathcal{C}}\tilde{L}(\theta;D)+O(p\alpha),$ which means that $\text{Err}_{D}(\tilde{\theta}_{\text{priv}})\leq O(p\alpha)$. \end{theorem} Combining Theorems \ref{thm:5} and \ref{thm:6} with Corollary \ref{col2}, and replacing $\alpha$ by $\frac{\alpha}{p}$, we have our final result: \begin{theorem}\label{thm:7} Under the conditions of Corollary \ref{col2}, Theorems \ref{thm:5} and \ref{thm:6}, and for any $C\frac{\mu}{\sqrt{p}}>\alpha>0$, if we further set $n=\tilde{\Omega}(4^{p(p+1)}\log(1/\beta)D_p^2 p^5 \epsilon^{-2}\alpha^{-4})$, then there is an $\epsilon$-LDP algorithm, with $O(1)$ running time and $1$-bit communication per player, and $\text{Poly}(\frac{1}{\alpha},\log \frac{1}{\beta})$ running time for the server. Furthermore, with probability at least $1-5\beta$, the output $\tilde{\theta}_{\text{priv}}$ satisfies $\text{Err}_{D}(\tilde{\theta}_{\text{priv}})\leq O(\alpha)$.
\end{theorem} Note that, compared with the sample complexity in Corollary \ref{col2}, Theorem \ref{thm:7} has an additional factor of $p^4$; however, the $\alpha$ terms are the same. In fact, we can extend our method to other LDP problems: below we study how to answer the class of k-way marginal queries and the class of smooth queries under LDP, which has not been studied before. \section{LDP Algorithms for Learning K-way Marginal Queries and Smooth Queries by Using Polynomial Approximation} In this section, we will show further applications of our idea by giving $\epsilon$-LDP algorithms for answering sets of queries. All the queries we consider in this section are linear, that is, of the form $q_f(D)=\frac{1}{|D|}\sum_{x\in D}f(x)$ for some function $f$. It will be convenient to have a notion of accuracy of an algorithm with respect to a set of queries. This is defined as follows: \begin{definition} Let $\mathcal{Q}$ denote a set of queries. An algorithm $\mathcal{A}$ is said to have $(\alpha,\beta)$-accuracy for size-$n$ databases with respect to $\mathcal{Q}$ if for every $n$-size dataset $D$, the following holds: $\text{Pr}[ \exists q\in \mathcal{Q}, |\mathcal{A}(D,q)-q(D)|\geq \alpha]\leq \beta.$ \end{definition} \subsection{K-way Marginal Queries} Now we consider a database $D=(\{0,1\}^p)^n$, where each row corresponds to an individual's record. A marginal query is specified by a set $S\subseteq [p]$ and a pattern $t\in \{0,1\}^{|S|}$. Each such query asks: `What fraction of the individuals in $D$ has each attribute $j\in S$ set to $t_j$?'. We will consider here k-way marginals, which are the subset of marginal queries specified by a set $S\subseteq[p]$ with $|S|\leq k$. K-way marginals can represent several statistics over datasets, including contingency tables, and the problem of releasing them under differential privacy has been studied extensively in the literature~\cite{hardt2012private,gupta2013privately,thaler2012faster,GaboardiAHRW14}.
All these previous works have considered the central model of differential privacy; only the recent work~\cite{Kulkarni17} studies this problem in the local model, and their methods are based on the Fourier transform. We now use the LDP version of Chebyshev polynomial approximation to give an efficient way of constructing a sanitizer for releasing k-way marginals. Since learning the class of $k$-way marginals is equivalent to learning the class of monotone k-way disjunctions \cite{hardt2012private}, we will only focus on the latter. The reason why we can learn them locally privately is that they form a $\mathcal{Q}$-Function Family. \begin{definition}[$\mathcal{Q}$-Function Family]\label{def:10} Let $\mathcal{Q}=\{q_y\}_{y\in Y_{\mathcal{Q}}\subseteq \{0,1\}^m}$ be a set of counting queries on a data universe $\mathcal{D}$, where each query is indexed by an $m$-bit string. We define the index set of $\mathcal{Q}$ to be the set $Y_{\mathcal{Q}}=\{y\in \{0,1\}^m| q_y\in \mathcal{Q}\}$.\\ We define a $\mathcal{Q}$-Function Family $\mathcal{F}_{\mathcal{Q}}=\{f_{\mathcal{Q},x}:\{0,1\}^m \mapsto \{0,1\}\}_{x\in\mathcal{D}}$ as follows: for every data record $x\in \mathcal{D}$, the function $f_{\mathcal{Q},x}:\{0,1\}^m\mapsto \{0,1\}$ is defined as $f_{\mathcal{Q},x}(y)=q_y(x)$. Given a database $D\in \mathcal{D}^n$, we define $f_{\mathcal{Q},D}(y)=\frac{1}{n}\sum_{i=1}^{n}f_{\mathcal{Q},x^i}(y)=\frac{1}{n}\sum_{i=1}^{n}q_y(x^i)=q_y(D)$, where $x^i$ is the $i$-th row of $D$. \end{definition} This definition guarantees that $\mathcal{Q}$-function queries can be computed from their values on the individuals' data $x^i$. We can now formally define the class of monotone k-way disjunctions. \begin{definition}\label{def:11} Let $\mathcal{D}=\{0,1\}^p$. The query set $\mathcal{Q}_{disj,k}=\{q_y\}_{y\in Y_k\subseteq \{0,1\}^p}$ of monotone $k$-way disjunctions over $\{0,1\}^p$ contains a query $q_y$ for every $y\in Y_k=\{y\in\{0,1\}^p| |y|\leq k\}$.
Each query is defined as $q_y(x)= \vee_{j=1}^{p}y_jx_j$. The $\mathcal{Q}_{disj,k}$-function family $\mathcal{F}_{\mathcal{Q}_{disj,k}}=\{f_x\}_{x\in\{0,1\}^p}$ contains a function $f_x(y_1,y_2,\cdots,y_p)=\vee_{j=1}^{p}y_jx_j$ for each $x\in \{0,1\}^p$. \end{definition} Definition \ref{def:10} guarantees that if we can uniformly approximate the functions $f_{\mathcal{Q},x}$ by polynomials $p_x$, then we also have an approximation of $f_{\mathcal{Q},D}$, {\em i.e.} we can approximate $q_y(D)$ for every $y$, that is, for all the queries in the class $\mathcal{Q}$. Thus, if we can locally privately estimate the sums of coefficients of the monomials of the $m$-multivariate functions $\{p_x\}_{x\in D}$, we can uniformly approximate $f_{\mathcal{Q},D}$. Clearly, this can be done by Lemma 2, provided the coefficients of the approximating polynomials are bounded. In order to uniformly approximate the class $\mathcal{Q}_{disj,k}$, we use Chebyshev polynomials. \begin{definition}[Chebyshev Polynomials]\label{def:9} For every $k\in \mathbb{N}$ and $\gamma>0$, there exists a univariate real polynomial $p_k(x)=\sum_{i=0}^{t_k}c_ix^i$ of degree $t_k$ such that $t_k=O(\sqrt{k}\log(\frac{1}{\gamma}))$; for every $i\in [t_k], |c_i|\leq 2^{O(\sqrt{k}\log(\frac{1}{\gamma}))}$; and $p_k(0)=0, |p_k(x)-1|\leq \gamma, \forall x\in[k]$. \end{definition} \begin{algorithm}[h] \caption{Local Chebyshev Mechanism for $\mathcal{Q}_{\text{disj,k}}$ } \label{Aalg:1} \begin{algorithmic}[1] \State{\bfseries Input:} Player $i\in [n]$ holding data $x_i\in \{0,1\}^p$, privacy parameter $\epsilon>0$, error bound $\alpha$, and $k\in \mathbb{N}$. \For{Each Player $i\in[n]$} \State Consider the $p$-multivariate polynomial $q_{x_i}(y_1,\ldots,y_p)= p_k(\sum_{j=1}^{p}y_j[x_i]_j)$, where $p_k$ is defined as in Definition \ref{def:9} with $\gamma=\frac{\alpha}{2}$.
\State Denote the coefficients of $q_{x_i}$ by a vector $\tilde{q}_{i}\in \mathbb{R}^{\binom{p+t_k}{t_k}}$ (since there are $\binom{p+t_k}{t_k}$ coefficients in a $p$-variate polynomial of degree $t_k$); note that each $\tilde{q}_{i}$ can be seen as a $p$-multivariate polynomial $q_{x_i}(y)$. \EndFor \For{The Server} \State Run LDP-AVG from Lemma 1 on $\{\tilde{q}_i\}_{i=1}^{n}\in\mathbb{R}^{\binom{p+t_k}{t_k}}$ with parameter $\epsilon$, $b=p^{O(\sqrt{k}\log(\frac{1}{\gamma}))}$, and denote the output by $\tilde{p}_D\in \mathbb{R}^{\binom{p+t_k}{t_k}}$; note that $\tilde{p}_D$ also corresponds to a $p$-multivariate polynomial. \State For each query $y$ in $\mathcal{Q}_{\text{disj,k}}$ (seen as a $p$-dimensional vector), compute the $p$-multivariate polynomial $\tilde{p}_D(y_1,\ldots,y_p)$. \EndFor \end{algorithmic} \end{algorithm} \begin{lemma}\cite{thaler2012faster}\label{Alemma:1} For every $k,p \in \mathbb{N}$ such that $k\leq p$, and every $\gamma>0$, there is a family of $p$-multivariate polynomials of degree $t=O(\sqrt{k}\log(\frac{1}{\gamma}))$ with coefficients bounded by $T=p^{O(\sqrt{k}\log(\frac{1}{\gamma}))}$ which uniformly approximates the family $\mathcal{F}_{\mathcal{Q}_{\text{disj,k}}}$ over the set $Y_k$ (Definition \ref{def:11}) with error bound $\gamma$. That is, there is a family of polynomials $\mathcal{P}$ such that for every $f_x\in\mathcal{F}_{\mathcal{Q}_{\text{disj,k}}}$, there is $p_x\in \mathcal{P}$ which satisfies $\sup_{y\in Y_k}|p_x(y)-f_x(y)|\leq \gamma$. \end{lemma} By combining the ideas discussed above with Lemma \ref{Alemma:1}, we obtain Algorithm \ref{Aalg:1} and the following theorem. \begin{theorem} For $\epsilon >0$, Algorithm \ref{Aalg:1} is $\epsilon$-LDP.
Also, for $0<\beta<1$, there are constants $C, C_1$ such that for every $k,p,n\in\mathbb{N}$ with $k\leq p$, if $n\geq \Omega(\max\{\frac{p^{C\sqrt{k}\log \frac{1}{\alpha}}\log \frac{1}{\beta}}{\epsilon^2\alpha^2}, \frac{\log \frac{1}{\beta}}{\epsilon^2},p^{C_1\sqrt{k}\log \frac{1}{\alpha}}\log \frac{1}{\beta}\})$, the algorithm is $(\alpha,\beta)$-accurate with respect to $\mathcal{Q}_{\text{disj},k}$. The running time for each player is $\text{Poly}(p^{O(\sqrt{k}\log\frac{1}{\alpha})})$, the running time for the server is at most $O(n)$, and the time for answering a query is $O(p^{C_2\sqrt{k}\log \frac{1}{\alpha}})$ for some constant $C_2$. Moreover, as in Section 5, the communication complexity can be improved to 1 bit per player. \end{theorem} \subsection{Smooth Queries} We now consider the case where each player $i\in[n]$ holds a data point $x_i\in \mathbb{R}^p$ and we want to estimate the kernel density at a given point $x_0\in\mathbb{R}^p$. A natural question is: if we want to estimate the Gaussian kernel density at a given point $x_0$ with many different bandwidths, can we do it simultaneously under $\epsilon$ local differential privacy? These queries can be seen as a subclass of the smooth queries. So, as in the case of $k$-way marginal queries, we will give an $\epsilon$-LDP sanitizer for smooth queries. We now consider the data universe $\mathcal{D}=[-1,1]^p$ and a dataset $D\in \mathcal{D}^n$. For a positive integer $h$ and constant $T>0$, we denote by $C^{h}_T$ the set of all $p$-dimensional $(h,T)$-smooth functions (Definition \ref{def:5}), and by $\mathcal{Q}_{C^{h}_T}=\{q_f(D)=\frac{1}{n}\sum_{x\in D}f(x), f\in C^{h}_T\}$ the corresponding set of queries. The idea of the algorithm is similar to the one used for the $k$-way marginals; but instead of Chebyshev polynomials, we will use trigonometric polynomials. We now assume that the dimensionality $p$, as well as $h$ and $T$, are constants, so factors depending only on them are omitted from the big-$O$ notation.
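To see concretely why a cosine basis captures smooth queries, note that the substitution $x=\cos\theta$ turns a Chebyshev expansion into an even trigonometric polynomial, since $T_r(\cos\theta)=\cos(r\theta)$. A minimal one-dimensional sketch (the query $f$ and the truncation degree are illustrative choices, not taken from the paper):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Smooth query f on [-1,1]; its Chebyshev expansion f(x) ~ sum_r c_r T_r(x)
# becomes the even trigonometric polynomial g_f(theta) ~ sum_r c_r cos(r*theta)
# after x = cos(theta), because T_r(cos(theta)) = cos(r*theta).
f = lambda x: np.exp(-x ** 2)     # a Gaussian-kernel-style query (assumed)
t = 12                            # truncation degree t(gamma) (assumed)
c = C.chebinterpolate(f, t)       # Chebyshev coefficients c_0, ..., c_t

theta = np.linspace(-np.pi, np.pi, 201)
approx = sum(c[r] * np.cos(r * theta) for r in range(t + 1))
err = np.max(np.abs(approx - f(np.cos(theta))))
assert err < 1e-6                 # uniform approximation on [-pi, pi]
```

In the multivariate mechanism the same cosine products $\prod_i \cos(r_i\theta_i)$ serve as a shared basis for every smooth query, which is what lets the server average the basis once and reuse it.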
The idea of Algorithm \ref{Aalg:2} is based on the following lemma. \begin{lemma}\cite{wang2016differentially}\label{Alemma:2} Assume $\gamma>0$. For every $f\in C^h_T$ defined on $[-1,1]^p$, let $g_f(\theta_1,\ldots,\theta_p)=f(\cos(\theta_1),\ldots,\cos(\theta_p))$ for $\theta_i\in [-\pi,\pi]$. Then there is an even trigonometric polynomial $p$ whose degree in each variable is $t(\gamma)=(\frac{1}{\gamma})^{\frac{1}{h}}$: \begin{equation}\label{Aeq:2} p(\theta_1,\ldots,\theta_p)=\sum_{0\leq r_1,\ldots,r_p< t(\gamma)}c_{r_1,\ldots,r_p }\prod_{i=1}^{p}\cos(r_i\theta_i), \end{equation} such that 1) $p$ $\gamma$-uniformly approximates $g_f$, i.e. $\sup_{x\in[-\pi,\pi]^p}|p(x)-g_f(x)|\leq \gamma$; 2) the coefficients are uniformly bounded by a constant $M$ which depends only on $h, T$ and $p$; 3) moreover, the whole set of coefficients can be computed in time $O\big((\frac{1}{\gamma})^{\frac{p+2}{h}+\frac{2p}{h^2}}\text{poly}\log \frac{1}{\gamma}\big)$. \end{lemma} By (\ref{Aeq:2}), we can see that all the polynomials $p(x)$ corresponding to $g_f(x)$, representing functions $f\in C_T^{h}$, share the same basis $\prod_{i=1}^{p}\cos(r_i\theta_i)$. So, we can use Lemma 1 or 2 to estimate the average of the basis. Then, for each query $f$, the server only needs to compute the corresponding coefficients $\{c_{r_1,r_2,\cdots,r_p}\}$. This idea is implemented in Algorithm \ref{Aalg:2}, for which we have the following result. \begin{theorem} For $\epsilon>0$, Algorithm \ref{Aalg:2} is $\epsilon$-LDP. Also, for $\alpha>0$, $0<\beta<1$, if $n\geq \Omega(\max \{\log^{\frac{5p+2h}{2h}}(\frac{1}{\beta})\epsilon^{-2}\alpha^{-\frac{5p+2h}{h}}, \frac{1}{\epsilon^2}\log(\frac{1}{\beta})\})$ and $t=O((\sqrt{n}\epsilon)^{\frac{2}{5p+2h}})$, then Algorithm \ref{Aalg:2} is $(\alpha,\beta)$-accurate with respect to $\mathcal{Q}_{C^h_T}$.
The time for answering each query is $\tilde{O}((\sqrt{n}\epsilon)^{\frac{4p+4}{5p+2h}+\frac{4p}{5ph+2h^2}})$, where $\tilde{O}$ omits $h,T,p$ and some $\log$ terms. For each player, the computation and communication costs can be improved to $O(1)$ and 1 bit, respectively, as in Section 5. \end{theorem} \vspace{-0.05in} \begin{algorithm}[h] \caption{Local Trigonometry Mechanism for $\mathcal{Q}_{C^{h}_T}$ } \label{Aalg:2} \begin{algorithmic}[1] \State {\bfseries Input:} Player $i\in [n]$ holding data $x_i\in [-1,1]^p$, privacy parameter $\epsilon>0$, error bound $\alpha$, and $t\in \mathbb{N}$. $\mathcal{T}_{t}^p=\{0,1,\cdots,t-1\}^p$. For a vector $x=(x_1,\ldots,x_p)\in [-1,1]^p$, denote the operators $\theta_i(x)=\arccos(x_i), i\in[p]$. \For{Each Player $i\in[n]$} \For{Each $v=(v_1,v_2,\cdots,v_p)\in \mathcal{T}_{t}^p$} \State Compute $p_{i;v}=\cos(v_1\theta_1(x_i))\cdots \cos(v_p\theta_p(x_i))$ \EndFor \State Let $p_i=(p_{i;v})_{v\in \mathcal{T}_{t}^p}$. \EndFor \For{The Server} \State Run LDP-AVG from Lemma 1 on $\{p_i\}_{i=1}^{n}\in\mathbb{R}^{t^p}$ with parameters $\epsilon$ and $b=1$; denote the output by $\tilde{p}_D$. \State For each query $q_f\in \mathcal{Q}_{C^h_T}$, let $g_f(\theta)=f(\cos(\theta_1),\cos(\theta_2),\cdots,\cos(\theta_p))$. \State Compute the trigonometric polynomial approximation $p_t(\theta)$ of $g_f(\theta)$, where $p_t(\theta)=\sum_{r=(r_1,r_2\cdots r_p),\|r\|_{\infty}\leq t-1}c_r\cos(r_1\theta_1)\cdots \cos(r_p\theta_p)$ as in (\ref{Aeq:2}). Denote the vector of coefficients by $c\in \mathbb{R}^{t^p}$. \State Compute $\tilde{p}_D\cdot c$. \EndFor \end{algorithmic} \end{algorithm} \section{High Dimensional Case} In the previous parts and in \cite{smith2017interaction}, it is always assumed that $n\geq p$ (while ours needs $\log n\geq O(p)$). However, many problems in machine learning are high dimensional, {\em i.e.} $n\ll p$. We will show a general method for this case when the loss function is a generalized linear function.
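Before the formal definition given next in the text, a quick sanity check on a canonical generalized linear loss: for logistic regression, $\ell(w,(y,z))=f(\langle w,y\rangle,z)$ with $f(t,z)=\log(1+\mathrm{e}^{-zt})$, and $f$ is convex and $1$-Lipschitz in its first argument. A minimal numerical sketch (values are illustrative):

```python
import numpy as np

# Logistic loss as a generalized linear function: l(w,(y,z)) = f(<w,y>, z)
# with f(t,z) = log(1 + exp(-z*t)). Since |df/dt| = |z|/(1+exp(z*t)) <= 1
# for |z| <= 1, f is 1-Lipschitz in its first argument, as assumed.
f = lambda t, z: np.log1p(np.exp(-z * t))

rng = np.random.default_rng(3)
ts = rng.uniform(-5.0, 5.0, 100)
z = 1.0
lipschitz_ok = all(
    abs(f(t1, z) - f(t2, z)) <= abs(t1 - t2) + 1e-12
    for t1, t2 in zip(ts[:-1], ts[1:])
)
assert lipschitz_ok
assert abs(f(0.0, z) - np.log(2.0)) < 1e-12  # f(0, 1) = log 2
```

The hinge loss $f(t,z)=\max(0,1-zt)$ and linear regression (with bounded data) fit the same template.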
Due to the space limit, all the definitions and general statements are in the Appendix.\par A function $\ell(w, x)$ is called a generalized linear function \cite{shalev2009stochastic} if $\ell(w,x)=f(\langle w,y \rangle, z)$ for $x=(y, z)$, where $y\in\mathbb{R}^p$ is the data and $z$ is the label. Many loss functions satisfy this condition, such as logistic regression, the hinge loss, linear regression, etc. We will assume that the dataset satisfies $\|y_i\|\leq 1, \|z_i\|\leq 1$ for all $i\in[n]$. Following \cite{smith2017interaction}, we will also assume that $f$ is $1$-Lipschitz and convex in the first argument, and that the constraint set satisfies $\|\mathcal{C}\|_2\leq 1$ and is isotropic\footnote{A convex set $\mathcal{C}$ is isotropic if a random vector chosen uniformly from $\mathcal{C}$ according to the volume is isotropic. A random vector $a$ is isotropic if for all $b\in\mathbb{R}^p$, $\mathbb{E}[\langle a,b \rangle^2]=\|b\|^2$; suitably normalized polytopes are examples.}.\par Our algorithm is inspired by \cite{kasiviswanathan2016efficient}: we first apply dimension reduction to each data point $y_i$, obtaining $D'=\{(\Phi y_1,z_1),\cdots, (\Phi y_n, z_n)\}$, where $\Phi\in \mathbb{R}^{m\times p}$. Then we run a modified version of the algorithm in \cite{smith2017interaction}. After obtaining the private estimator $\bar{w}\in \mathbb{R}^m$, we use a compressed sensing technique, solving an optimization problem \cite{vershynin2015estimation}, to recover $w^{\text{priv}}\in \mathbb{R}^p$. \par Note that we cannot use the $\epsilon$-LDP algorithm of \cite{smith2017interaction} (see Figure 5 there) directly, since it needs $n\geq k$, where $k=O\big(\frac{2^{\frac{p-1}{2}}\sqrt{p}}{\alpha^{p-1}}\big)$ and $\alpha=O\big( (\frac{\sqrt{p}}{\epsilon^2n}\log^3(\epsilon^2n))^{\frac{1}{p+1}}\big)$. This means that $n\geq O(c^p)$, which contradicts our assumption. Below we provide a similar algorithm which removes this assumption.
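The dimension-reduction step relies on a random sub-Gaussian matrix $\Phi$ approximately preserving the norms of the $n$ data points even when $m\ll p$ (this is what Lemma \ref{lemma:4} formalizes). A minimal sketch with a Gaussian $\Phi$; all sizes are illustrative, not from the paper:

```python
import numpy as np

# Johnson-Lindenstrauss-style check: Phi in R^{m x p} with i.i.d.
# N(0, 1/m) entries nearly preserves the norms of n << p data points.
rng = np.random.default_rng(1)
p, m, n = 5000, 400, 8                            # hypothetical sizes, n << p
Y = rng.standard_normal((n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)     # enforce ||y_i|| = 1

Phi = rng.standard_normal((m, p)) / np.sqrt(m)    # reduction matrix
Yr = Y @ Phi.T                                    # reduced data Phi y_i

max_dev = np.max(np.abs(np.linalg.norm(Yr, axis=1) - 1.0))
assert max_dev < 0.3                              # norms within (1 +/- gamma)
```

The server only needs to broadcast the seed of the generator, so the players can reproduce $\Phi$ locally without communicating the full $m\times p$ matrix.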
The idea comes from \cite{DBLP:journals/corr/abs-1711-04740}, which shows that in the non-interactive local model, every $(\epsilon, \delta)$-LDP protocol can be transformed into an $\epsilon$-LDP one. Thus, in Figure 5 of \cite{smith2017interaction}, instead of partitioning the dataset into $k$ parts and running the subroutine of Figure 1 in \cite{smith2017interaction} on each part, we run all $k$ directions on the whole dataset. By the advanced composition theorem (Corollary 3.21 in \cite{dwork2014algorithmic}), if each direction is processed with an $(\epsilon_0=O(\frac{\epsilon}{\sqrt{k\log(1/\delta)}}),0)$-LDP mechanism, then the whole algorithm is $(\epsilon,\delta)$-LDP. After that, we use the protocol in \cite{DBLP:journals/corr/abs-1711-04740} to transform the $(\epsilon, \delta)$-LDP algorithm into an $O(\epsilon)$-LDP one. See Algorithm \ref{Ealg:6}. \begin{algorithm*}[h] \caption{$(\epsilon, \delta)$-LDP Protocol} \label{Ealg:6} \begin{algorithmic}[1] \State{\bfseries Input:} Each user $i\in [n]$ has data $x_i\in \mathcal{D}$, privacy parameters $\epsilon, \delta$, a public loss function $\ell:[0,1]^p \times \mathcal{D}\mapsto [0,1]$ satisfying the assumptions in \cite{smith2017interaction}, and a parameter $k$ (we will specify it later). \State {\bfseries Preprocessing:} \State Choose $k$ random directions $u_1, u_2, \cdots, u_k$ and send them to each user. \For{Each user $i\in[n]$} \State For each $j\in [k]$, invoke 1D-General (Figure 3 in \cite{smith2017interaction}) on $(x_i, u_j)$ with $\epsilon=\frac{\epsilon}{2\sqrt{2k\log(1/\delta)}}$ and $\gamma=\gamma/k$, producing $\mathcal{T}_{i,j}$. Then send $\mathcal{T}_i=(\mathcal{T}_{i,1}, \cdots, \mathcal{T}_{i,k})$ to the server. \EndFor \For{The server} \State After receiving $\{\mathcal{T}_i\}_{i=1}^n$, do the following steps \State For $j\in [k]$, invoke 1D-General (Figure 4 in \cite{smith2017interaction}) on $\{\mathcal{T}_{i,j}\}_{i=1}^{n}$ to get $\hat{f}^j$.
\State Compute $\theta_j=\arg\min_{\theta \parallel u_j}\hat{f}^j$ and then compute $\theta_{\text{priv}}=\arg\min_j\hat{f}^j(\theta_j)$; output $\theta_{\text{priv}}$. \EndFor \end{algorithmic} \end{algorithm*} We have the following theorem for Algorithm \ref{Ealg:6}; the proof is the same as in \cite{smith2017interaction}: \begin{theorem}\label{atheorem:1} Under the same assumptions as in Theorem 10 of \cite{smith2017interaction}, Algorithm \ref{Ealg:6} is $(\epsilon, \delta)$-LDP for any $1>\epsilon>0, 0<\delta<1$. Also, for every $k$, with probability at least $1-\gamma$, the output satisfies \begin{equation} \|\hat{f}^j-L_{\mathcal{P}}\|_{\infty}\leq O\big(\frac{\log(\epsilon^2 n/k\log(1/\delta))}{\epsilon} \sqrt{\frac{k\log(1/\delta)\log(\epsilon^2n/\log(1/\delta))\gamma}{n}}\big). \end{equation} Furthermore, if we take $k=O\big(\frac{2^{(p-1)/2}\log(1/\gamma)}{\alpha^{p-1}}\sqrt{\frac{\pi p}{2}}\big)$, where $\alpha=O\big( (\frac{\sqrt{p}}{\epsilon^2n}\log^3(\epsilon^2n)\log^2(1/\gamma))^{\frac{1}{p+1}}\big)$ and $O$ omits $\log(1/\delta)$ factors, then $\|\hat{f}-L_{\mathcal{P}}\|_{\infty}\leq \tilde{O}(\alpha)$ with probability at least $1-2\gamma$. \end{theorem} Now, we have almost the same upper bound as in Theorem 10 of \cite{smith2017interaction}. After applying GenProt from \cite{DBLP:journals/corr/abs-1711-04740}, we obtain a $10\epsilon$-LDP algorithm which has the same error bound as in Theorem \ref{atheorem:1}: \begin{theorem}\label{atheorem:2} Let $\epsilon\leq \frac{1}{4}$. If we set $\delta=O(\frac{\epsilon\gamma}{n\ln(2n/\gamma)})$ in Algorithm \ref{Ealg:6}, use it as the protocol, and run the GenProt algorithm in \cite{DBLP:journals/corr/abs-1711-04740}, then there is a $10\epsilon$-LDP algorithm such that, with probability at least $1-3\gamma$, the output $\theta_{\text{priv}}$ satisfies \begin{equation*} \text{Err}_{\mathcal{P}}(\theta_{\text{priv}})\leq \tilde{O}\big(( \frac{\sqrt{p}\log^2(1/\beta)}{\epsilon^2 n} )^{\frac{1}{p+1}}\big).
\end{equation*} \end{theorem} Our method is based on the following lemma from \cite{dirksen2016dimensionality}. \begin{algorithm}[h] \caption{DR-ERM-LDP} \label{alg:9} \begin{algorithmic}[1] \State {\bfseries Input:} Player $i\in [n]$ holding data $x_i=(y_i,z_i)\in \mathcal{D}$, where $\|y_i\|\leq 1$, and privacy parameter $\epsilon$. \State The server generates a random sub-Gaussian matrix $\Phi\in \mathbb{R}^{m\times p}$ as in Lemma \ref{lemma:4}, and sends the seed of this random matrix to all players. \For{Each Player $i$} \State Calculate $x_i'=(\Phi y_i,z_i)$ \State Run the modified $\epsilon$-LDP algorithm of \cite{smith2017interaction} on $D'=\{x_i'\}$ with constraint set $\Phi\mathcal{C}$ and loss function $f$. The server gets the output $\bar{w}\in\mathbb{R}^m$. \EndFor \State The server solves the following problem: $w^{\text{priv}}=\arg \min_{w\in\mathbb{R}^p}\|w\|_{\mathcal{C}}$ subject to $\Phi w=\bar{w}$. \end{algorithmic} \end{algorithm} \begin{lemma}\label{lemma:4} Let $\tilde{\Phi}\in \mathbb{R}^{m\times p}$ be a random matrix whose rows are i.i.d.\ mean-zero, isotropic, sub-Gaussian random vectors in $\mathbb{R}^p$ with $\psi=\|\tilde{\Phi}_i\|_{\psi_2}$. Let $\Phi=\frac{1}{\sqrt{m}}\tilde{\Phi}$, and let $S$ be a set of points in $\mathbb{R}^p$. Then there is a constant $C>0$ such that for any $0<\gamma,\beta<1$, $ \text{Pr}[\sup_{a\in S}\big|\|\Phi a\|^2-\|a\|^2\big|> \gamma\|a\|^2]\leq \beta, $ provided that $m\geq \frac{C\psi^4}{\gamma^2}\max\{\mathcal{G}_{\mathcal{S}},\log(1/\beta)\}^2$. \end{lemma} \begin{theorem}\label{theorem:7} Under the assumptions above, for any $\epsilon\leq \frac{1}{4}$, Algorithm \ref{alg:9} is $O(\epsilon)$-LDP. Moreover, set $m=\Theta(\frac{\psi^4(\mathcal{G}_{\mathcal{C}}+\sqrt{\log n})^2\log(n/\beta)}{\gamma^2})$, where $\gamma=\Theta(\frac{\psi\sqrt{(\mathcal{G}_{\mathcal{C}}+\sqrt{\log n})}\log(1/\beta)\sqrt[4]{\log(n/\beta)}}{\sqrt{n}\epsilon})$.
Then, with probability at least $1-\beta$, $$\text{Err}_D(w^{\text{priv}})=\tilde{O}\big(\big(\frac{\log(1/\beta)\psi\sqrt{(\mathcal{G}_{\mathcal{C}}+\sqrt{\log n})}\sqrt[4]{\log(n/\beta)}}{\sqrt{n}\epsilon}\big)^{\frac{1}{1+m}}\big),$$ where $\psi$ is the sub-Gaussian norm of the distribution of $\Phi$ and $\mathcal{G}_{\mathcal{C}}$ is the Gaussian width of $\mathcal{C}$. \end{theorem} \begin{corollary} If $\Phi$ is a standard Gaussian random matrix, $\mathcal{C}$ is the $\ell_1$-norm ball $B^p_1$ or the probability simplex in $\mathbb{R}^p$, and $n\ll p\leq e^{cn}$ for some constant $c$, then the bound in Theorem \ref{theorem:7} becomes $\tilde{O}\big(\big(\frac{\log(1/\beta)\sqrt[4]{\log p}\sqrt[4]{\log(n/\beta)}}{\sqrt{n}\epsilon}\big)^{\frac{1}{1+m}}\big)$, where $m=O(n\epsilon^2\log p\sqrt{\log(n/\beta)})$. Note that the bound in Theorem \ref{theorem:7} is always at most $O(1)$, and thus is never worse than the bound in Theorem \ref{theorem:1}. \end{corollary} \section{Conclusion and Discussion} In this paper, we studied ERM under non-interactive LDP and proposed an algorithm based on Bernstein polynomial approximation. We showed that if the loss function is smooth enough, then the sample complexity to achieve error $\alpha$ is $\alpha^{-c}$ for some positive constant $c$, which improves significantly on the previous result of $\alpha^{-(p+1)}$. Moreover, we proposed efficient algorithms for both the player and server sides. We also showed how similar ideas based on other polynomial approximations can be used to answer $k$-way marginal and smooth queries in the local model.\par In our algorithms the sample complexity still depends on the dimension $p$, through a factor of $c^{p}$ for a constant $c$. We will focus on removing this dependency in future work. Additionally, we will study the difference between strongly convex and convex loss functions in the non-interactive LDP setting. \bibliographystyle{plainnat}
\section{Introduction} In geotechnics, large deformations of rock and soil masses commonly occur in various geo-disasters such as landslides, debris flows, rock collapses, and ground subsidence. Moreover, when analyzing the stability of rock or soil slopes, the distribution and propagation of natural cracks are among the most crucial issues that need to be considered. To understand the mechanisms behind the above-mentioned geo-disasters, physical experiments and numerical investigations are commonly employed in practice. The large deformations and crack propagations of rock and soil masses have been examined using various mesh-based or meshfree numerical methods \cite{01,02,03,04,05,06}. When employing mesh-dependent numerical methods such as the Finite Element Method (FEM), the Finite Volume Method (FVM), or the Finite Difference Method (FDM) to analyze large deformation or crack propagation in geotechnics, the mesh elements generally become distorted or need to be broken. Mainly motivated by addressing these problems of mesh-based methods, meshfree / meshless methods such as SPH, MLPG, LBIM, EFG, and RPIM have been proposed; see several excellent reviews \cite{07,08,09}. Recently, several meshfree software packages have been developed or are under development. For example, Hsieh and Pan described the Essential Software Framework for Meshfree methods (ESFM) \cite{10}. Cercos-Pita \cite{11} introduced AQUAgpusph, a new free 3D SPH solver accelerated with OpenCL. Sinaie et \textit{al}. \cite{12} presented an implementation of the material point method (MPM) using Julia. Winkler et \textit{al}. \cite{13} introduced gpuSPHASE, a shared-memory caching implementation of 2D SPH using CUDA. Vanaverbeke et \textit{al}. \cite{14} presented GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism.
Zhang et \textit{al}. \cite{15,16,17} developed the 3D explicit parallel MPM code MPM3D. This short paper briefly reports the GeoMFree$^{3D}$, a meshfree / meshless software package for geomechanics. The objective of developing the GeoMFree$^{3D}$ is to numerically analyze large deformations \cite{18,19,20} and crack propagations \cite{21,22} of rock and soil masses in geomechanics. The package GeoMFree$^{3D}$ is currently under intensive development. The underlying algorithm behind the GeoMFree$^{3D}$ is the Radial Point Interpolation Method (RPIM) proposed by G.R. Liu \cite{23,24}. In addition, to improve the computational efficiency when analyzing large-scale problems \cite{25}, the GeoMFree$^{3D}$ is parallelized on the multi-core CPU and the many-core GPU using OpenMP \cite{26} and CUDA \cite{27}, respectively. \section{GeoMFree$^{3D}$} The GeoMFree$^{3D}$ is a meshfree software package designed for numerically analyzing the large deformations and crack propagations of rock and soil masses in geomechanics. The GeoMFree$^{3D}$ is currently capable of analyzing linear and nonlinear static problems, and is being extended to dynamic problems. The GeoMFree$^{3D}$ is written in C/C++ and accelerated by exploiting parallel computing on the multi-core CPU and the many-core GPU. The process of numerical modeling using the GeoMFree$^{3D}$ is illustrated in Figure \ref{fig1}. There are three major stages in the GeoMFree$^{3D}$. The first stage is to assemble the global stiffness matrix by looping over all field nodes to create the element stiffness matrix of each field node. The second is to enforce the boundary conditions. The third is to solve the system equations to obtain the displacements and then the stresses, etc. To improve the computational efficiency, the first stage of assembling the global stiffness matrix is parallelized on the multi-core CPU using the OpenMP API \cite{26}.
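The order-independence that makes the assembly stage embarrassingly parallel can be sketched in a few lines; a toy 1D "stiffness" contribution stands in for the real RPIM element matrices (the actual package uses C/C++ with OpenMP):

```python
import numpy as np

# The global stiffness matrix is a plain sum of per-node contributions,
# so the assembly order does not matter; each contribution can therefore
# be computed by an independent thread. Toy contributions for illustration.
def node_contribution(i, n):
    K = np.zeros((n, n))
    K[i, i] += 2.0
    if i + 1 < n:
        K[i, i + 1] -= 1.0
        K[i + 1, i] -= 1.0
    return K

n = 6
serial = sum(node_contribution(i, n) for i in range(n))
shuffled = sum(node_contribution(i, n) for i in reversed(range(n)))
assert np.allclose(serial, shuffled)   # any processing order gives the same K
```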
The meshfree RPIM is inherently suitable for parallelization since there are no data dependencies between the forming of any two element stiffness matrices for any pair of field nodes. That is, the assembly of the element stiffness matrix of one field node is completely independent of that of another field node. Therefore, we can allocate $n$ threads on the multi-core CPU, with each thread responsible for assembling the element stiffness matrix of one field node. In this case, the assembly of the element stiffness matrices of $n$ field nodes can be conducted concurrently. This is the essential idea behind parallelizing the assembly of the global / system stiffness matrix on the multi-core CPU. Similarly, to enhance the computational efficiency, the second stage of enforcing the boundary conditions can also be parallelized. More specifically, we adopt the penalty function method to enforce the displacement boundary conditions. This procedure is performed in parallel on the many-core GPU. Assuming there are $m$ field nodes on the displacement boundary, we can allocate $m$ GPU threads to enforce the displacement boundary conditions for the $m$ field nodes concurrently, where each thread takes responsibility for enforcing the displacement boundary condition of one boundary field node. The final stage is the solving of the system equations to obtain the nodal displacements and then the stresses. In the meshfree RPIM, the assembled global stiffness matrix is large, sparse, and asymmetric. When analyzing large-scale problems requiring a large number of field nodes, the global system matrix can be very large. To improve the computational efficiency of solving the system equations, we employ the \texttt{cuSparse} and \texttt{cuSolver} libraries integrated in CUDA \cite{27} to solve the system of equations.
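A minimal sketch of the penalty enforcement of a displacement boundary condition $u_i=g$ on an already-assembled system; the $3\times 3$ matrix and the penalty value are illustrative assumptions, and each boundary node could be handled by one GPU thread as described above:

```python
import numpy as np

# Penalty method: add a large penalty alpha to K[i,i] and alpha*g to f[i],
# so the solved u satisfies u[i] ~ g. Each boundary node is independent,
# hence one thread per boundary node in the GPU stage.
def apply_penalty(K, f, bc_nodes, bc_vals, alpha=1e12):
    K = K.copy(); f = f.copy()
    for i, g in zip(bc_nodes, bc_vals):
        K[i, i] += alpha
        f[i] += alpha * g
    return K, f

# Tiny 1D stiffness-like system (assumed for illustration only)
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
f = np.array([0., 1., 0.])
Kb, fb = apply_penalty(K, f, bc_nodes=[0], bc_vals=[0.5])
u = np.linalg.solve(Kb, fb)
assert abs(u[0] - 0.5) < 1e-6   # prescribed displacement recovered
```

The penalty must be large relative to the stiffness entries but not so large that the system becomes numerically singular; a common rule of thumb is several orders of magnitude above the largest diagonal term.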
\begin{figure}[!h] \centering \includegraphics[width=0.78\linewidth]{Figure1.pdf} \caption{Process of our meshfree software package GeoMFree$^{3D}$} \label{fig1} \end{figure} \section{Verification} This section presents several computational examples to verify the correctness and features of the reported meshfree software package GeoMFree$^{3D}$. \subsection{Example 1: Stresses of a Cubic Domain} First, to verify the correctness of the GeoMFree$^{3D}$, we calculate the distribution of displacements and stresses in a cubic domain; see Figure \ref{fig2}. In this quite simple verification example, only the force of gravity is considered and there are no other forces. The density of the cube is set to 2600 kg/m$^{3}$. The stress at the bottom of the cube can be calculated theoretically, and is denoted the \textit{theoretical} result. In contrast, we can also numerically calculate the nodal stress at the bottom using our meshfree package GeoMFree$^{3D}$, which is denoted the \textit{numerical} result. Then, by comparing the theoretical result to its numerical counterpart, we can validate the correctness of the GeoMFree$^{3D}$. The theoretical nodal stress at the bottom of the cube is 2.548 MPa, while the numerically calculated one is 2.376 MPa. There is only a slight difference between the theoretical and numerical results. Thus, we conclude that the correctness of the GeoMFree$^{3D}$ has been verified, although the employed verification example is extremely simple. \begin{figure}[htbp] \centering \subfigure[]{ \label{fig2a} \includegraphics[width=0.45\linewidth]{Figure2a.pdf} } \subfigure[]{ \label{fig2b} \includegraphics[width=0.45\linewidth]{Figure2b.png} } \caption{Verification example 1: stresses of a cubic domain.
(a) Computational model of a cubic domain; (b) Stresses calculated by using our package GeoMFree$^{3D}$.} \label{fig2} \end{figure} \subsection{Example 2: Displacements of a Cantilever Beam with a Crack} To further verify the effectiveness of our GeoMFree$^{3D}$, we employ it to calculate the displacement and stress fields of a cantilever beam with a crack; see Figure \ref{fig3}. Moreover, we also compute the displacements and stresses of the beam using the FDM numerical software FLAC$^{3D}$; see Figure \ref{fig3b}. The numerical results calculated by our package GeoMFree$^{3D}$ and the commercial numerical software FLAC$^{3D}$ are almost the same; see Figures \ref{fig3b} and \ref{fig3c}. This indicates that our package GeoMFree$^{3D}$ is currently capable of analyzing very simple cases of crack propagation. In the near future, we hope that the GeoMFree$^{3D}$ can be used to model and simulate dynamic crack propagation in three dimensions. \begin{figure}[htpb] \centering \subfigure[Computational model of a cantilever beam with a crack]{ \label{fig3a} \includegraphics[width=0.7\linewidth]{Figure3a.pdf} } \subfigure[Displacements calculated by using the software FLAC$^{3D}$]{ \label{fig3b} \includegraphics[width=0.75\linewidth]{Figure3b.png} } \subfigure[Displacements calculated by using our package GeoMFree$^{3D}$]{ \label{fig3c} \includegraphics[width=0.75\linewidth]{Figure3c.png} } \caption{Verification example 2: displacements of a cantilever beam with a crack} \label{fig3} \end{figure} \subsection{Example 3: Displacements of a Simplified Slope} As noted above, the motivation for developing our meshfree package GeoMFree$^{3D}$ is to employ one of the meshfree methods, i.e., the RPIM, to model and simulate the large deformations and crack propagations of rock and soil masses. Currently, the GeoMFree$^{3D}$ cannot yet model progressively developing large deformations of rock or soil masses.
But it can be used to calculate the displacements of a simplified slope; see Figures \ref{fig4} and \ref{fig5}. We are also working on analyzing the stability of slopes with the GeoMFree$^{3D}$ based upon the Strength Reduction Method (SRM). In meshfree methods, the study domain is discretized with a set of field nodes. These field nodes can be (1) \textit{regularly} or (2) \textit{irregularly} distributed in the domain. The pattern of the nodal distribution strongly influences both the computational accuracy and efficiency. To verify the flexibility of the GeoMFree$^{3D}$ in addressing problems with regular or irregular discretizations, we decompose a simplified slope model with (1) regular nodes (Figure \ref{fig4b}) and (2) irregular nodes (Figure \ref{fig5a}). We then calculate the displacements of the above two models using our package GeoMFree$^{3D}$, and compare the results to those obtained by the commercial software FLAC$^{3D}$. The numerical results illustrated in Figures \ref{fig4} and \ref{fig5} indicate that: (1) our package GeoMFree$^{3D}$ is capable of analyzing problems with regular or irregular nodal distributions; (2) our package GeoMFree$^{3D}$ can address problems with relatively complex geometric domains and boundaries.
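The SRM idea mentioned above can be sketched on a closed-form surrogate: for an infinite slope the factor of safety is $FS=\frac{c}{\gamma h\sin\beta\cos\beta}+\frac{\tan\varphi}{\tan\beta}$, and bisecting on a strength-reduction factor $F$ (dividing $c$ and $\tan\varphi$ by $F$) until the reduced slope reaches limiting equilibrium recovers exactly $FS$. All parameter values are illustrative, not from the paper:

```python
import math

# Closed-form factor of safety of an infinite slope (stand-in for the
# FEM/meshfree SRM, which repeats a full deformation analysis per trial F).
def fs_infinite_slope(c, phi, gamma, h, beta):
    return c / (gamma * h * math.sin(beta) * math.cos(beta)) \
           + math.tan(phi) / math.tan(beta)

def srm_factor(c, phi, gamma, h, beta):
    # Bisect on F: the slope is stable iff FS(c/F, atan(tan(phi)/F)) >= 1.
    lo, hi = 0.1, 10.0
    for _ in range(60):
        F = 0.5 * (lo + hi)
        stable = fs_infinite_slope(c / F, math.atan(math.tan(phi) / F),
                                   gamma, h, beta) >= 1.0
        lo, hi = (F, hi) if stable else (lo, F)
    return 0.5 * (lo + hi)

# Illustrative soil parameters: c = 10 kPa, phi = 30 deg, unit weight
# 20 kN/m^3, depth 5 m, slope angle 35 deg.
c, phi, gamma, h, beta = 10e3, math.radians(30), 20e3, 5.0, math.radians(35)
F = srm_factor(c, phi, gamma, h, beta)
assert abs(F - fs_infinite_slope(c, phi, gamma, h, beta)) < 1e-6
```

In the meshfree setting, each trial $F$ requires a full deformation analysis, with non-convergence (or runaway displacements) taken as the failure criterion.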
\begin{figure}[htbp] \centering \subfigure[Geometrical model of a simplified slope ]{ \label{fig4a} \includegraphics[width=0.7\linewidth]{Figure4a.pdf} } \subfigure[Displacements calculated by using the software FLAC$^{3D}$]{ \label{fig4b} \includegraphics[width=0.75\linewidth]{Figure4b.png} } \subfigure[Displacements calculated by using our package GeoMFree$^{3D}$]{ \label{fig4c} \includegraphics[width=0.75\linewidth]{Figure4c.png} } \caption{Displacements of a simplified slope when employing regularly-distributed field nodes} \label{fig4} \end{figure} \begin{figure}[!h] \centering \subfigure[Displacements calculated by using the software FLAC$^{3D}$]{ \label{fig5a} \includegraphics[width=0.75\linewidth]{Figure5a.png} } \subfigure[Displacements calculated by using our package GeoMFree$^{3D}$]{ \label{fig5b} \includegraphics[width=0.75\linewidth]{Figure5b.png} } \caption{Displacements of a simplified slope when employing irregularly-distributed field nodes} \label{fig5} \end{figure} \section{Conclusion and Future Work} A meshfree software package, GeoMFree$^{3D}$, has been briefly introduced in this paper. The package GeoMFree$^{3D}$ is designed for the numerical investigation of large deformations and crack propagations of rock and soil masses in geotechnics. The GeoMFree$^{3D}$ is developed based on the RPIM, and is currently under intensive development. To validate the effectiveness of the introduced GeoMFree$^{3D}$, several verifications have been conducted. The verification examples have demonstrated that the current version of the GeoMFree$^{3D}$ is capable of analyzing the deformation of simple study domains. We are focusing on improving the computational efficiency by developing accurate and efficient meshfree shape functions \cite{28,29,30}, for example, the parallel RBF \cite{31}, MLS \cite{32}, and Shepard \cite{33,34} interpolations.
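As a sketch of the RBF point-interpolation shape functions mentioned above: with moment matrix $R_{ij}=g(\|x_i-x_j\|)$ for a radial basis $g$, the shape functions $\phi(x)=r(x)^{\top}R^{-1}$ interpolate nodal values exactly (the Kronecker-delta property), which is what makes essential boundary conditions easy to impose in RPIM. The node set and the Gaussian kernel width are illustrative assumptions:

```python
import numpy as np

# Minimal RPIM-style shape functions from a Gaussian RBF (no polynomial
# augmentation, pure RBF basis for brevity).
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.4]])
g = lambda r: np.exp(-(r / 0.8) ** 2)     # Gaussian RBF, width assumed

R = g(np.linalg.norm(nodes[:, None] - nodes[None, :], axis=2))
Rinv = np.linalg.inv(R)

def shape(x):
    r = g(np.linalg.norm(nodes - x, axis=1))
    return r @ Rinv                        # shape-function values phi_i(x)

# Kronecker-delta property at the nodes: phi_i(x_j) = delta_ij
for j, xj in enumerate(nodes):
    assert np.allclose(shape(xj), np.eye(len(nodes))[j], atol=1e-8)
```

In practice RPIM augments the RBF basis with low-order polynomials to guarantee consistency, and evaluates shape functions only over a local support domain of each point rather than all nodes.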
Currently, we are also aiming at numerically modeling the propagation of multiple tensile and shear cracks in rock masses. In the future, we hope that the GeoMFree$^{3D}$ can be used to (1) model the large deformations of strata induced by underground mining and (2) analyze the stability of jointed rock slopes by modeling the very complex crack propagations of rock masses. \section*{Acknowledgements} This research was supported by the Natural Science Foundation of China (Grant Numbers 41772326 and 11602235), and the Fundamental Research Funds for the Central Universities. The authors would like to thank the editor and reviewers for their contributions to the paper. \bibliographystyle{splncs}%
\section{Introduction} In this work we investigate the regularity of $p$-orthotropic functions in the plane for $1<p<2$. Let $\Omega\subset\mathbb{R}^2$ be an open set. A weak solution of the orthotropic $p$-Laplace equation (also known as the pseudo $p$-Laplace equation) is a function $u\in W^{1,p}(\Omega)$ such that \begin{equation}\label{orthodeg} \sum_{i=1}^2 \int_\Omega |\partial_i u|^{p-2} \partial_i u \, \partial_i \phi \dx=0 \quad \text{for all} \quad \phi\in W_0^{1,p}(\Omega). \end{equation} Equation \eqref{orthodeg} arises as the Euler-Lagrange equation for the functional \begin{equation} I_{\Omega}(v) = \sum_{i=1}^2\int_{\Omega} \frac{|\partial_i v|^p}{p}\dx. \end{equation} The equation is singular where either one of the partial derivatives vanishes, and it does not fall into the category of equations with $p$-Laplacian structure. It was proved by Bousquet and Brasco in \cite{BB} that weak solutions of \eqref{orthodeg} for $1<p<\infty$ are $C^1(\Omega)$. A simple proof, which gives a logarithmic modulus of continuity for the derivatives, is contained in \cite{LR} for the case $p\geq 2$. The latter relies on a lemma on the oscillation of monotone functions due to Lebesgue \cite{L} and on the fact that derivatives of solutions are monotone (in the sense of Lebesgue). The purpose of this work is to extend this result to the case $1<p<2$ by employing the methods developed in \cite{LR}. We obtain the following: \begin{theorem}\label{C1} Let $\Omega\subset\mathbb{R}^2$ be an open set and let $u\in W^{1,p}(\Omega)$ be a solution of equation \eqref{orthodeg} for $1<p<2$. Fix a ball $B_{R}\subset\subset \Omega$. Then, for all $j\in\{1,2\}$ and $B_r\subset\subset B_{R/2}$, we have \begin{equation}\label{oscEst2} \osc{B_r} (\partial_ju)\leq C_p \left(\log\left(\frac{R}{r}\right)\right)^{-\frac{1}{2}}\left( \fint_{B_{R}} |\nabla u|^p \dx\right)^\frac{1}{p}, \end{equation} where $C_p$ is a constant depending only on $p$.
\end{theorem} \paragraph{\bf Notation.} We denote balls by $B_r=B_r(a)=\{x\in\mathbb{R}^2\;:\; |x-a|<r\}$ and we omit the center when it is not relevant. Whenever two balls $B_r\subset B_R$ appear in a statement they are implicitly assumed to be concentric. The variable $x$ denotes the vector $(x_1, x_2)$ and we denote the partial derivative of a function $f$ with respect to $x_j$ by $\partial_j f$. \section{Regularization} We consider a regularized problem obtained by introducing a non-degeneracy parameter $\epsilon>0$. \\ Fix $B_R\subset\subset \Omega\subset \mathbb{R}^2$ and consider the regularized Dirichlet problem \begin{equation}\label{orthonondeg2} \begin{split} \begin{cases} \sum_{i=1}^2 \int_{B_R} (|\partial_i u^\epsilon|^2 +\epsilon)^\frac{p-2}{2} \partial_i u^\epsilon\, \partial_i \phi \dx=0\\ u^\epsilon-u \in W_0^{1,p}(B_R). \end{cases} \end{split} \end{equation} Note that $u^\epsilon$ is the unique minimizer of the regularized functional \begin{equation} I^\epsilon_{B_R}(v)=\sum_{i=1}^2\int_{B_R}\frac{1}{p}(|\partial_i v|^2+\epsilon)^\frac{p}{2}\dx \end{equation} among $W^{1,p}(B_R)$ functions $v$ such that $v-u\in W^{1,p}_0(B_R)$. By elliptic regularity theory, the unique solution $u^\epsilon$ of \eqref{orthonondeg2} is smooth in $B_R$.\\ Fix an index $j\in\{1,2\}$. Then, replacing $\phi$ by $\partial_j\phi$ in equation \eqref{orthonondeg2}, integrating by parts, and using $\frac{d}{ds}\big[(\epsilon+s^2)^{\frac{p-2}{2}}s\big]=(\epsilon+s^2)^{\frac{p-4}{2}}\big(\epsilon+(p-1)s^2\big)$, we find that the derivative $\partial_j u^\epsilon$ satisfies the following equation \begin{equation}\label{orthoder2} \sum_{i=1}^2 \int_{B_R} (\epsilon+|\partial_i u^\epsilon|^2)^\frac{p-4}{2} (\epsilon+(p-1)|\partial_i u^\epsilon|^2)\, \partial_i \partial_j u^\epsilon \,\partial_i\phi \dx =0 \end{equation} for all $\phi\in C_0^\infty(B_R)$.\\ We now collect some uniform estimates and convergence results (see also \cite{BB}). \begin{Lemma} Let $u\in W^{1,p}(\Omega)$ be a solution of \eqref{orthodeg} and $u^\epsilon$ be a solution of \eqref{orthonondeg2} for $1<p<2$.
Then we have \begin{equation}\label{energy2} \int_{B_R} |\nabla u^\epsilon|^p\dx \leq C_p \left( \int_{B_{R}} |\nabla u|^p\dx +\epsilon^\frac{p}{2}R^2 \right) \end{equation} where $C_p$ is a constant depending only on $p$. \end{Lemma} \begin{proof} The estimate follows from $$I^\epsilon_{B_R}(u^\epsilon)\leq I^\epsilon_{B_R}(u).$$ \end{proof} \begin{Prop} Let $u\in W^{1,p}(\Omega)$ be a solution of \eqref{orthodeg} and $u^\epsilon$ be a solution of \eqref{orthonondeg2} for $1<p<2$. Then, for all $j\in\{1,2\}$, we have \begin{align} \sup_{B_{R/2}} (\epsilon+|\nabla u^\epsilon|^2)&\leq C_p\left( \fint_{B_R}(\epsilon+|\nabla u^\epsilon|^2)^\frac{p}{2}\dx \right)^\frac{2}{p}, \label{lip2} \\ \int_{B_{R/2}}|\nabla \partial_j u^\epsilon|^2\dx &\leq C_p\left(\fint_{B_R}(|\nabla u|^p+\epsilon^\frac{p}{2})\dx\right)^\frac{2}{p} \label{grad2} \end{align} where $C_p$ is a constant depending only on $p$. \end{Prop} \begin{proof} The proof of the Lipschitz bound can be found in \cite{FF} while \eqref{grad2} appears in \cite{BB}. We provide details for completeness. Note that by a change of variables, the function $u^\epsilon_R(x)=u^\epsilon(x_0+Rx)$ satisfies the equation \begin{equation}\label{orthonondeg2scaled} \sum_{i=1}^2\int_{B_1}(|\partial_iu^\epsilon_R|^2+R^2\epsilon)^\frac{p-2}{2}\partial_iu^\epsilon_R\partial_i\phi\dx=0 \quad \text{for all} \quad \phi\in W_0^{1,p}(B_1). 
\end{equation} Introduce the notation $w=\epsilon R^2+|\nabla u^\epsilon_R|^2$ and $a_i(z)=a_i(z_i)=(\epsilon R^2+|z_i|^2)^\frac{p-2}{2}z_i$ so that equation \eqref{orthonondeg2scaled} rewrites as $$\sum_{i=1}^2\int_{B_1}a_i(\partial_iu^\epsilon_R)\partial_i\phi\dx=0 \quad \text{for all}\quad \phi\in W_0^{1,p}(B_1).$$ For $j\in\{1,2\}$ and $\alpha\geq 0$ take $\phi=\partial_j(\partial_ju^\epsilon_R\,w^\frac{\alpha}{2}\xi^2)$ so that $\partial_i\phi=\partial_j(\partial_i\partial_ju^\epsilon_R w^\frac{\alpha}{2} \xi^2+\frac{\alpha}{2}\partial_iw \, w^\frac{\alpha-2}{2}\,\partial_ju^\epsilon_R\, \xi^2) + 2\partial_j(\xi\partial_i\xi\, w^\frac{\alpha}{2}\,\partial_ju^\epsilon_R)$. Sum in $j$ to get \begin{equation*} \begin{split} A+B:&= \sum_{i,j=1}^2\int_{B_1} a_i(\partial_iu^\epsilon_R)\partial_j(\partial_i\partial_ju^\epsilon_R w^\frac{\alpha}{2} \xi^2+\frac{\alpha}{2}\partial_iw \, w^\frac{\alpha-2}{2}\,\partial_ju^\epsilon_R\, \xi^2)\dx \\ &+2\sum_{i,j=1}^2\int_{B_1} a_i(\partial_iu^\epsilon_R) \partial_j(\xi\partial_i\xi\, w^\frac{\alpha}{2}\,\partial_ju^\epsilon_R)\dx=0. \end{split} \end{equation*} Note that $\partial_i w=2\sum_{j=1}^2\partial_i\partial_ju^\epsilon_R\,\partial_ju^\epsilon_R$ and $\partial_ia_i(\partial_iu^\epsilon_R)\geq c_pw^\frac{p-2}{2}$ since $1<p<2$. Integrate by parts in $A$. We get $A=A_1+A_2$ where \begin{equation*} \begin{split} A_1&:=\sum_{i,j=1}^2\int_{B_1}\partial_ia_i(\partial_iu^\epsilon_R)(\partial_i\partial_ju^\epsilon_R)^2\,w^\frac{\alpha}{2}\,\xi^2\dx \geq c_p\sum_{j=1}^2 \int_{B_1} w^\frac{p-2+\alpha}{2}|\nabla\partial_ju^\epsilon_R|^2\xi^2\dx,\\ A_2&:=c\alpha\sum_{i,j=1}^2\int_{B_1}\partial_ia_i(\partial_iu^\epsilon_R)\partial_i\partial_ju^\epsilon_R\,\partial_ju^\epsilon _R\,\partial_iw\, w^\frac{\alpha-2}{2}\,\xi^2\dx =c\alpha\sum_{i=1}^2 \int_{B_1}\partial_ia_i(\partial_iu^\epsilon_R) (\partial_i w)^2 w^\frac{\alpha-2}{2}\,\xi^2\dx\\ &\geq c_p\alpha \int_{B_1}w^\frac{p-4+\alpha}{2}|\nabla w|^2\xi^2\dx. 
\end{split} \end{equation*} Now we estimate $B=B_1+B_2+B_3$. \begin{equation*} \begin{split} B_1:&=\sum_{i,j=1}^2\int_{B_1}a_i(\partial_iu^\epsilon_R)w^\frac{\alpha}{2}|\partial_ju^\epsilon_R|\,|\partial_j(\xi\partial_i\xi)|\dx \leq C_p\int_{B_1}w^\frac{p+\alpha}{2} (|\nabla\xi|^2+|\nabla^2\xi|)\dx,\\ B_2:&=\frac{\alpha}{2}\sum_{i,j=1}^2\int_{B_1} a_i(\partial_iu^\epsilon_R)w^\frac{\alpha-2}{2}|\partial_jw|\,|\partial_ju^\epsilon_R| \,\xi\,|\partial_i\xi|\dx \leq C\alpha\int_{B_1} w^\frac{p+\alpha-2}{2}|\nabla w|\,\xi\,|\nabla\xi|\dx \\ &\leq \eta \alpha \int_{B_1}w^\frac{p-4+\alpha}{2}|\nabla w|^2\xi^2\dx+\frac{C \alpha}{\eta}\int_{B_1}|\nabla\xi|^2\,w^\frac{p+\alpha}{2}\dx,\\ B_3:&=\sum_{i,j=1}^2\int_{B_1}a_i(\partial_iu^\epsilon_R)w^\frac{\alpha}{2}|\partial_j\partial_ju^\epsilon_R|\,|\partial_ju^\epsilon_R| \,\xi\,|\partial_i\xi|\dx \leq \sum_{j=1}^2\int_{B_1}w^\frac{p-1+\alpha}{2}|\nabla\partial_ju^\epsilon_R|\,\xi\,|\nabla\xi|\dx\\ &\leq \eta\sum_{j=1}^2 \int_{B_1} w^\frac{p-2+\alpha}{2}|\nabla\partial_ju^\epsilon_R|^2\xi^2\dx+\frac{C}{\eta}\int_{B_1}|\nabla\xi|^2\, w^\frac{p+\alpha}{2}\dx \end{split} \end{equation*} where we used $a_i(\partial_iu^\epsilon_R)\leq w^\frac{p-1}{2}$ and Young's inequality with a parameter $\eta$ to be chosen suitably small. We get \begin{equation}\label{mixEst2} c_p\sum_{j=1}^2 \int_{B_1} w^\frac{p-2+\alpha}{2}|\nabla\partial_ju^\epsilon_R|^2\xi^2\dx + c_p\alpha \int_{B_1}w^\frac{p-4+\alpha}{2}|\nabla w|^2\xi^2\dx \leq C_p(\alpha+1)\int_{B_1} (|\nabla\xi|^2+|\nabla^2\xi|)\, w^\frac{p+\alpha}{2}\dx. 
\end{equation} Note that for $\alpha=0$ we get for all $j\in\{1,2\}$ \begin{equation}\label{gradientEst2} \int_{B_1} w^\frac{p-2}{2}|\nabla\partial_ju^\epsilon_R|^2\xi^2\dx \leq C_p\int_{B_1} (|\nabla\xi|^2+|\nabla^2\xi|)\, w^\frac{p}{2}\dx, \end{equation} and since $|\nabla w|^2\leq c\sum_j |\nabla \partial_j u^\epsilon_R|^2|\nabla u^\epsilon_R|^2$ we have \begin{equation}\label{alpha0} \begin{split} \int_{B_1}w^\frac{p-4}{2}|\nabla w|^2\xi^2\dx &\leq c \sum_{j=1}^2\int_{B_1}w^\frac{p-4}{2}|\nabla u^\epsilon_R|^2|\nabla \partial_j u^\epsilon_R|^2\xi^2\dx \leq c \sum_{j=1}^2\int_{B_1}w^\frac{p-2}{2}|\nabla \partial_j u^\epsilon_R|^2\xi^2\dx\\ &\leq C_p\int_{B_1} (|\nabla\xi|^2+|\nabla^2\xi|)\, w^\frac{p}{2}\dx. \end{split} \end{equation} Now for $\alpha\geq 1$, \eqref{mixEst2} implies \begin{equation} \int_{B_1}w^\frac{p-4+\alpha}{2}|\nabla w|^2\xi^2\dx \leq C_p \frac{\alpha+1}{\alpha}\int_{B_1} (|\nabla\xi|^2+|\nabla^2\xi|)\, w^\frac{p+\alpha}{2}\dx \end{equation} and combining with \eqref{alpha0} we get \begin{equation*} \int_{B_1}\lvert\nabla (w^\frac{p+\alpha}{4}\xi)\rvert^2\dx \leq C (p+\alpha)^2\int_{B_1} (|\nabla\xi|^2+|\nabla^2\xi|)\, w^\frac{p+\alpha}{2}\dx \end{equation*} for all $\alpha\geq 0$. Using the Sobolev embedding $W_0^{1,2}(B_1)\hookrightarrow L^{2q}(B_1)$ for a fixed $q>1$ we get \begin{equation}\label{moser2} \left(\int_{B_1} w^{q\frac{p+\alpha}{2}}\xi^{2q}\dx\right)^\frac{1}{q} \leq C_p (p+\alpha)^2\int_{B_1} (|\nabla\xi|^2+|\nabla^2\xi|)\, w^\frac{p+\alpha}{2}\dx. \end{equation} Now choose a sequence of radii $r_i=1/2^i+(1-1/2^{i})\frac{1}{2}$, cut-off functions $\xi$ between $r_i$ and $r_{i+1}$, and $\alpha_i=q^ip-p$, so that $\frac{p+\alpha_i}{2}=\frac{p}{2}q^i$.
Using these in \eqref{moser2}, raising to the power $1/q^{i}$ and iterating we get for all $i\in\mathbb{N}$ \begin{equation*} \left(\int_{B_{r_{i+1}}} w^{\frac{p}{2}q^{i+1}}\dx \right)^\frac{1}{q^{i+1}} \leq (C_p q^{2i} 2^i)^\frac{1}{q^i}\left( \int_{B_{r_i}} w^{\frac{p}{2}q^i}\dx\right)^\frac{1}{q^i} \leq \prod_{j=0}^i(C_p q^{2j} 2^j)^\frac{1}{q^j} \int_{B_{1}} w^{\frac{p}{2}}\dx. \end{equation*} Observe that $\prod_{i=0}^\infty(C_p q^{2i} 2^i)^\frac{1}{q^i}=C(p,q)<\infty$ so passing to the limit as $i\to\infty$ we get $$\sup_{B_{1/2}}w^\frac{p}{2}\leq C(p,q)\int_{B_{1}} w^{\frac{p}{2}}\dx$$ which, after rescaling, proves \eqref{lip2}. Now going back to \eqref{gradientEst2}, choosing a cut-off function between $B_{R/2}$ and $B_R$ and using $1<p<2$ we get \begin{equation*} \int_{B_{R/2}} |\nabla \partial_j u^\epsilon|^2\dx \leq C_p\sup_{B_{R/2}} (\epsilon+|\nabla u^\epsilon|^2)^\frac{2-p}{p} \fint_{B_R}(\epsilon+|\nabla u^\epsilon|^2)^\frac{p}{2}\dx. \end{equation*} Using \eqref{lip2} and \eqref{energy2} we obtain \eqref{grad2}. \end{proof} Next we collect some facts about the convergence of $u^\epsilon$ to the solution of the degenerate equation. These are sufficient for our purposes. \begin{Prop}\label{uni2} Let $u^\epsilon$ be the solution of \eqref{orthonondeg2} for $1<p<2$ and $u\in W^{1,p}(\Omega)$ the solution of \eqref{orthodeg}. We have \begin{itemize} \item $u^\epsilon$ converges to $u$ locally uniformly in $B_R$,\\ \item $\nabla u^\epsilon$ converges to $\nabla u$ in $L^p(B_R)$. \end{itemize} \end{Prop} \begin{proof} From the energy estimate \eqref{energy2} we obtain a uniform bound for the $L^p$ norm of $\nabla u^\epsilon$. Therefore (up to a subsequence) $u^\epsilon$ converges to some $v\in W^{1,p}(B_R)$ weakly in $W^{1,p}(B_R)$ and strongly in $L^p(B_R)$. Note that we have $v-u\in W_0^{1,p}(B_R)$. 
By weak lower semicontinuity we get \begin{equation*} \begin{split} I_{B_R}(v) = \sum_{i=1}^2\int_{B_R}\frac{|\partial_i v|^p}{p}\dx &\leq \liminf_{\epsilon\to 0} \sum_{i=1}^2\int_{B_R}\frac{|\partial_i u^\epsilon|^p}{p}\dx\\ &\leq \liminf_{\epsilon\to 0} \sum_{i=1}^2\int_{B_R}\frac{1}{p}(|\partial_i u^\epsilon|^2+\epsilon)^\frac{p}{2}\dx\\ &\leq \liminf_{\epsilon\to 0} \sum_{i=1}^2\int_{B_R}\frac{1}{p}(|\partial_i u|^2+\epsilon)^\frac{p}{2}\dx\\ &=\sum_{i=1}^2\int_{B_R}\frac{1}{p}|\partial_i u|^p\dx =I_{B_R}(u). \end{split} \end{equation*} Note that in the third inequality we used the minimality of $u^\epsilon$ subject to the boundary condition $u^\epsilon-u\in W_0^{1,p}(B_R)$. By uniqueness of the minimizer of $I_{B_R}$ among functions with boundary values $u$ on $\partial B_R$, we get $v=u$. By the uniform Lipschitz estimate \eqref{lip2} and the Arzel\`a-Ascoli theorem we obtain that the convergence is uniform.\\ Now we show $L^p(B_R)$ convergence of the gradient. Use $\phi = u^\epsilon-u$ as a test function in \eqref{orthonondeg2}, then add and subtract the term $(|\partial_i u|^2+\epsilon)^\frac{p-2}{2}\partial_i u$ to get \begin{equation*} \begin{split} \sum_{i=1}^2\int_{B_R} &\left((|\partial_i u^\epsilon|^2+\epsilon)^\frac{p-2}{2}\partial_i u^\epsilon-(|\partial_i u|^2+\epsilon)^\frac{p-2}{2}\partial_i u\right)\left(\partial_i u^\epsilon-\partial_i u\right)\dx\\ &=\sum_{i=1}^2\int_{B_R} (|\partial_i u|^2+\epsilon)^\frac{p-2}{2}\partial_i u (\partial_i u-\partial_i u^\epsilon)\dx. \end{split} \end{equation*} Since $\partial_i u-\partial_i u^\epsilon$ converges to $0$ weakly in $L^p(B_R)$, the integral on the right hand side converges to $0$.
We can bound the integral on the left hand side from below using the inequality $$ |a-b|^2(\epsilon+|a|^2+|b|^2)^\frac{p-2}{2}\leq C_p ((\epsilon+|a|^2)^\frac{p-2}{2}a-(\epsilon+|b|^2)^\frac{p-2}{2}b)(a-b), $$ valid for $1<p<2$, and obtain that \begin{equation}\label{conv0} \int_{B_R}\left(\epsilon+|\partial_i u^\epsilon|^2+|\partial_i u|^2\right)^\frac{p-2}{2}|\partial_i u^\epsilon-\partial_i u|^2\dx \longrightarrow 0 \end{equation} as $\epsilon\to 0$, for $i=1$, $2$. Finally, by H\"older's inequality, \begin{equation*} \begin{split} \int_{B_R} |\partial_iu^\epsilon-\partial_iu|^p\dx &= \int_{B_R} |\partial_iu^\epsilon-\partial_iu|^p \left(\epsilon+|\partial_i u^\epsilon|^2+|\partial_i u|^2\right)^\frac{p(p-2)}{4}\left(\epsilon+|\partial_i u^\epsilon|^2+|\partial_i u|^2\right)^\frac{p(2-p)}{4}\dx\\ &\leq \left( \int_{B_R} |\partial_iu^\epsilon-\partial_iu|^2 \left(\epsilon+|\partial_i u^\epsilon|^2+|\partial_i u|^2\right)^\frac{p-2}{2}\dx\right)^\frac{p}{2}\cdot\\ &\qquad\qquad\qquad\quad\cdot\left( \int_{B_R} \left(\epsilon+|\partial_i u^\epsilon|^2+|\partial_i u|^2\right)^\frac{p}{2}\dx\right)^\frac{2-p}{2}. \end{split} \end{equation*} Since the last integral is uniformly bounded in $\epsilon$, using \eqref{conv0} we get that $\partial_i u^\epsilon$ converges to $\partial_i u$ in $L^p(B_R)$. \end{proof} \section{Monotone functions and Lebesgue's lemma} A continuous function $v:\Omega\longrightarrow\mathbb{R}$ is monotone (in the sense of Lebesgue) if $$\max_{\overline{D}}v=\max_{\partial D}v\quad \text{and}\quad \min_{\overline{D}}v=\min_{\partial D}v$$ for all subdomains $D\subset\subset \Omega$. Monotone functions are further discussed in \cite{Manf}. The next lemma is due to Lebesgue \cite{L}. \begin{Lemma}\label{oscLeb} Let $B_R\subset \mathbb{R}^2$ and $v\in C(B_R)\cap W^{1,2}(B_R)$ be monotone in the sense of Lebesgue.
Then \begin{equation*} (\osc{ B_r} v)^2\log\left(\frac{R}{r}\right) \leq \pi \int_{B_R\setminus B_r} |\nabla v(x)|^2 \dx \end{equation*} for every $r<R$. \end{Lemma} \begin{proof} Assume first that $v$ is smooth. Let $(\eta, \zeta)$ be the center of $B_R$. Let $x_1$ and $x_2$ be two points on the circle of radius $t$ centered at $(\eta,\zeta)$, and let $\gamma:[0,2\pi]\longrightarrow \mathbb{R}^2$, $\gamma(s)=(\eta+t\cos(s),\zeta+ t\sin(s))$ be a parametrization of the circle such that $\gamma(a)=x_1$ and $\gamma(b)=x_2$. Then we have \begin{equation*} \begin{split} v(x_1)-v(x_2)=\int_a^b \frac{d}{ds} v(\gamma(s)) \ds = \int_a^b \langle \nabla v(\gamma(s)), \gamma '(s) \rangle \ds \leq \int_a^{b} t\, |\nabla v(\gamma(s))| \ds. \end{split} \end{equation*} Taking the supremum over angles $a$ and $b$ such that $|a-b|\leq \pi$ and using H\"older's inequality, we get \begin{equation*} (\osc{\partial B_t}v )^2 \leq \pi t^2 \int_0^{2\pi} |\nabla v (\gamma(s))|^2 \ds. \end{equation*} Now, dividing by $t$, integrating from $r$ to $R$, and using polar coordinates, we get \begin{equation*} \int_r^R\frac{(\text{osc}_{\partial B_t}v )^2}{t} \dt \leq \pi \int_r^R \int_0^{2\pi} t\,|\nabla v(\gamma(s))|^2 \ds \dt = \pi \int_{B_R\setminus B_r} |\nabla v(x)|^2 \dx. \end{equation*} Thanks to the monotonicity of $v$, for $t\geq r$ we have \begin{equation*} \osc{\partial B_t} v \geq \osc{B_t} v \geq \osc{B_r} v, \end{equation*} and we get the result for a smooth function. The general statement follows by approximation. \end{proof} The following is credited to \cite{BB} (see Lemma 2.14 there for the minimum principle). \begin{Lemma}[Minimum and maximum principles for the derivatives]\label{maxmin2} Let $u^\epsilon$ be the solution of \eqref{orthonondeg2}. Then $$\min_{\partial B_r}\partial_j u^\epsilon\leq \partial_j u^\epsilon (x)\leq\max_{\partial B_r}\partial_j u^\epsilon $$ for all $x\in B_r$, $B_r\subset\subset B_R$ and $j=1$, $2$.
In particular, $\partial_j u^\epsilon$ is monotone in the sense of Lebesgue. \end{Lemma} \begin{proof} We show that, given a constant $C$, if $\partial_j u^\epsilon \leq C$ (resp. $\partial_j u^\epsilon \geq C$) on $\partial B_r$ then $\partial_j u^\epsilon \leq C$ (resp. $\partial_j u^\epsilon \geq C$) in $B_r$. Take $\phi^{\pm}= 1_{B_r}(\partial_j u^\epsilon-C)^\pm =1_{B_r}\max\{\pm(\partial_j u^\epsilon-C),0\} $ as test functions in the equation satisfied by the derivative, \eqref{orthoder2}. Since $u^\epsilon$ is smooth and $\partial_j u^\epsilon \leq C$ (resp. $\partial_j u^\epsilon \geq C$) on $\partial B_r$, we have $\phi^+\in W_0^{1,2}(B_R)$ (resp. $\phi^-\in W_0^{1,2}(B_R)$), so these are admissible test functions. We get \begin{equation*} \begin{split} 0&=\sum_{i=1}^2 \int_{B_r} (\epsilon+|\partial_i u^\epsilon|^2)^\frac{p-4}{2} (\epsilon+(p-1)|\partial_i u^\epsilon|^2)\, |\partial_i (\partial_j u^\epsilon -C)^\pm |^2 \dx \\ &\geq \epsilon \sum_{i=1}^2\int_{B_r}(\epsilon+|\nabla u^\epsilon|^2)^\frac{p-4}{2} |\partial_i (\partial_j u^\epsilon -C)^\pm |^2 \dx \\ &= \epsilon \int_{B_r}(\epsilon+|\nabla u^\epsilon|^2)^\frac{p-4}{2} |\nabla (\partial_j u^\epsilon -C)^\pm |^2 \dx . \end{split} \end{equation*} This implies that $(\partial_j u^\epsilon -C)^\pm$ is constant in $B_r$, and since it vanishes on $\partial B_r$, we conclude that $(\partial_j u^\epsilon -C)^\pm=0$ in $B_r$.
\end{proof} \section{Proof of the Main Theorem} \begin{proof}[Proof of Theorem \ref{C1}] The derivatives $\partial_j u^\epsilon$ are monotone by Lemma \ref{maxmin2}, so applying Lemma \ref{oscLeb} and estimate \eqref{grad2} we get for all $r<R/2$ \begin{equation} (\osc{B_r} \partial_ju^\epsilon)^2 \log\left(\frac{R}{r}\right) \leq C \norm{\nabla \partial_ju^\epsilon}^2_{L^2(B_{R/2})} \leq C \left(\fint_{B_{R}} |\nabla u|^p\dx+\epsilon^\frac{p}{2}\right)^\frac{2}{p} \end{equation} and hence for all $r<R/2$ \begin{equation}\label{osc2} \osc{B_r} \partial_ju^\epsilon \leq C \left(\log\left(\frac{R}{r}\right)\right)^{-\frac{1}{2}} \left(\fint_{B_{R}} |\nabla u|^p\dx+\epsilon^\frac{p}{2}\right)^\frac{1}{p} \end{equation} where $C$ is a constant independent of $\epsilon$. Thanks to Proposition \ref{uni2} we can pass to the limit as $\epsilon\to 0$ and obtain \eqref{oscEst2}. \end{proof} \subsection{Acknowledgements} I thank Peter Lindqvist for useful comments and suggestions.
\section{Introduction} Due to their rapid deployment, flexibility, and collaborative sensing, self-organization, and intelligent-processing abilities, wireless sensor networks (WSNs) have found their way into a wide variety of industrial settings with varying requirements and characteristics: healthcare monitoring~\cite{AlemdarC10}, structural health monitoring, pipeline monitoring~\cite{YoonYHLS11}, agricultural monitoring~\cite{Riquelme09}, networked control systems (NCSs) where a spatially distributed feedback control system is required \cite{Ulusoy2011}, etc. In particular, in industrial WSN applications, an area needs to be covered with one or more WSNs, monitoring different parameters or different locations. Often, these subnetworks of simple devices send data via gateway nodes (or cluster heads) to a remote base station (BS) located at a central office, where the signal processing to produce strategic decisions runs on a more powerful computer. It is then necessary for the BS to regularly broadcast certain network details and commands to the nodes. Sustainable and environmentally friendly development of such industrial applications requires increased use of renewable energy, e.g., solar or wind power. Resource management is critical in the industrial environment. Since industrial WSNs are expected to be deployed in harsh or inaccessible environments for long periods of time, a remote BS may be needed to control the operation of these networks. Recently, employing energy harvesting (via ambient energy sources such as solar irradiation~\cite{Bogue12}, vibrations~\cite{MohamedWM11}, and wind~\cite{YenP11}) to power transmitters of wireless networks, such as BSs, has gained tremendous interest. Today, solar energy is becoming widely used, due to its high power density compared to other sources of ambient energy~\cite{NohK09}.
Therefore, the scenario considered in this paper involves an outdoor industrial WSN (or a combination of WSNs) controlled by a central node that is powered by solar panels, at least as an auxiliary energy source. During times of low or no daylight, the BS will need to rely on another power source. To reduce the dependence on these other sources and maximize the efficient use of solar power, it makes sense to schedule time-insensitive communications from the BS to the nodes at times of high solar irradiation. Of course, outdoor solar irradiation exhibits a daily periodicity. However, there are seasonal as well as short-term variations. Depending on such a varying energy source requires revisiting conventional resource management. When, for example, the size of a solar cell limits the available power, decisions about when to provide how much power, rate, service, etc. have to be made. As also stated in~\cite{Koksal2012}, conservative energy expenditure may lead to missed recharging opportunities if the battery is already full. On the other hand, aggressive usage of energy may result in reduced coverage or connectivity for certain time periods, which could make the BS temporarily incapable of transferring time-sensitive data. In industrial applications, this may lead to loss of production and may sometimes create hazardous situations. Hence, new resource allocation and scheduling schemes need to be developed to balance these contradictory goals, in order to maximize the network performance. In this paper, we focus on the scenario of a solar powered BS that needs to send protocol maintenance messages to sensor nodes (or to a set of gateways or cluster heads). It is well known that in WSNs, maintenance messages (topology updates, protocol information, etc.) constitute a significant fraction of all messages that are passed.
If the BS needs to deliver these routine messages to the nodes, how should it schedule them over the duration of a day, to maximize its efficient use of solar energy? Furthermore, how much power and time should it allocate to different nodes which, being spread over an area, may observe vastly different path losses? If the problem were formulated as a throughput maximization problem, the solution would essentially consist of always sending to the node that is closest to the transmitter, which defeats the purpose of serving all the gateways. Hence, it is more appropriate to construct a formulation that maximizes throughput in a \textit{proportionally fair} way, so that no particular gateway is starved due to its high path loss from the transmitter. The contribution of this paper is to combine a Kalman-filter based solar energy prediction algorithm with a proportional-fair scheduler. The proportional-fair energy harvesting resource allocation problem formulated in~\cite{6463487} was shown to be a {\emph{biconvex}} problem,\footnote{The problem of optimizing a biconvex function over a given (bi)convex or compact set, where a function $f : X\times{Y}\rightarrow{\Re}$ is called biconvex if $f(x,y)$ is convex in $y$ for fixed $x\in{X}$ and is convex in $x$ for fixed $y\in{Y}$~\cite{Pirsiavash10}. } which is nonconvex and has multiple optima. For the optimal off-line schedule developed in~\cite{tekbiyik2} (which assumes that the energy arrival profile at the transmitter is deterministic and known ahead of time), a Block Coordinate Descent based optimization algorithm, BCD, was shown to converge to an optimal solution for proportionally fair allocation of harvested energy in a wireless downlink. A simple heuristic, called PTF, that can closely track the performance of the BCD solution was also developed in~\cite{6463487}. However, in many practical scenarios, the energy harvests are not known a priori.
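To make the biconvex structure concrete, the following toy sketch (the objective and all values are illustrative assumptions, not the formulation of~\cite{6463487}) applies exact block coordinate descent to a function that is convex in each variable separately but not jointly convex:

```python
def f(x, y):
    # Toy biconvex objective: convex in x for fixed y and in y for fixed x,
    # but not jointly convex in (x, y).
    return (x * y - 1.0) ** 2 + 0.1 * (x ** 2 + y ** 2)

def block_min(t):
    # Exact minimizer of f(., t) over the free block:
    # solving d/dx [(x t - 1)^2 + 0.1 x^2] = 0 gives x = t / (t^2 + 0.1).
    return t / (t * t + 0.1)

def bcd(x, y, iters=200):
    # Alternate exact minimization over the two blocks; each half-step
    # solves a convex subproblem, so f never increases.
    history = [f(x, y)]
    for _ in range(iters):
        x = block_min(y)
        y = block_min(x)
        history.append(f(x, y))
    return x, y, history

x_star, y_star, hist = bcd(0.5, 2.0)
```

Each half-step solves a one-dimensional convex problem in closed form, so the objective sequence is monotonically non-increasing; the iterates converge to one of several stationary points, which is exactly the multiple-optima behavior noted above.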
Thus, this paper leverages the PTF algorithm to develop an \textit{online} resource allocation algorithm, PTF-On. PTF-On is a stand-alone algorithm that operates two algorithms in tandem: a Kalman filter-based solar energy prediction algorithm, and a modified version of the PTF algorithm. PTF-On can predict the BS's energy arrival profile throughout the day, and then act upon this profile to determine the power and time allocation that maximizes the throughput (the amount of data sent to the gateways) in a proportionally fair way. We start by summarizing related work in the next section and continue by describing our system model in Section \ref{sec:sm}. After that, in Section \ref{sec:psandst}, we explore the structure of the problem. Our Kalman-based solar prediction algorithm is described in Section \ref{sec:k-sep}. Section \ref{sec:ptf-on} proposes the online allocation algorithm, PTF-On. In Section \ref{sec:theresultsofchp7}, we test our proposed algorithms against several related schemes. We conclude in Section \ref{sec:conc} with an outline of further directions. \section{Related Work} \label{sec:rw} Recently, several works in the field of industrial WSNs have been conducted. In \cite{Manfredi2010}, a sink resource allocation strategy based on a log-utility fairness criterion is proposed. In \cite{Zhang2013}, the authors propose a solution to the problem of maximizing the minimum energy reserve. An algorithm that achieves arbitrarily close to optimal power efficiency, while satisfying the desired estimation accuracy of processes over some time, is presented in \cite{Vijayandran2011}. The authors later expanded this work by introducing a WSN model that includes fairness control among sensors as well as both energy harvesting batteries and backup battery units; the optimal resource allocation presented there is used for state estimation to meet a desired accuracy constraint by taking advantage of a Kalman filter.
Although the WSN framework proposed there includes energy harvesting batteries modelled as IID random processes, renewable solar energy cannot be well modelled as an IID process. Moreover, the Kalman filter in \cite{Vijayandran2011} is used to estimate the state of the processes observed by the sensors, not to predict solar irradiation as this paper presents. In \cite{Sharma2010}, Sharma et al. present throughput-optimal energy management policies for energy harvesting sensor networks. However, the results are valid only for stationary ergodic energy harvesting and data processes; outdoor solar irradiation, which has a general 24-hour variation, cannot be well modelled as a stationary process. On the other hand, Sharma et al. also propose optimal energy management policies for energy harvesting sensor nodes with solar energy harvesting: while solar irradiation cannot be considered a stationary ergodic process, it is assumed to be piecewise stationary over half-hour periods \cite{Sharma}. Differently from the work in \cite{Sharma}, the problem in our case admits only sub-optimal solutions, as proved in \cite{6463487}, and prediction intervals for solar energy harvesting are applied within half-hour periods. In \cite{Lalitha2013}, the problem of minimizing the average grid power consumption of a green BS downlink scheduling $N$ users with average delay constraints is considered and formulated as a Markov decision problem. No fairness criterion is considered in that work, and the harvested energy is taken as a stationary IID process, without taking into account the statistics and daily periodicity of solar irradiation. In \cite{Lin2007}, the authors present an asymptotically optimal energy-aware routing model for WSNs and propose an online algorithm that does not know future packet requests. However, as in most similar works, the energy model assumes that the short-term energy replenishment schedule for the nodes is known.
While there are studies on the prediction of solar irradiation, such as \cite{Chaabene2008}, \cite{Lonij2013} and \cite{Hassanzadeh}, few have used Kalman filtering techniques, and none of them combines an online proportionally fair resource allocation algorithm with such a predictor. To the best of our knowledge, there is no prior work in the literature on proportionally fair resource allocation in WSNs with a prediction algorithm based on a Kalman filter that operates on solar irradiation measurements. We propose an online method for scheduling nodes in an industrial WSN according to predicted solar energy levels, with a fair resource allocation criterion. \section{System Model} \label{sec:sm} The goal is to schedule transmissions from a BS to $N$ gateways (or cluster heads, or individual sensor nodes, without loss of generality), over a certain time window that we refer to as a \textit{frame}. The transmissions to different receivers are organized by time sharing over a bandwidth $W$. The setup is described in Figure \ref{fig:fig2}. We assume arbitrary channel gains $g_n$ from the BS to receiver $n$, $n\geq 1$, and, w.l.o.g., for simplicity, a constant noise power spectral density $N_o$ at all receivers. The channel gains are thought of as long-term average gains, with short-term channel variations averaged out. Hence, the $g_n$'s are constant throughout the frame and are known by the scheduler. \begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth, height=0.7\textwidth]{fig22.eps} \centering \vspace{-2.4in} \caption{Example industrial WSN application (agricultural monitoring) controlled by a remote base station ~\cite{Riquelme09}.} \label{fig:fig2} \end{figure} The BS is equipped with a rechargeable battery, powered by a solar panel, such that harvested energy becomes available at distinct instants. The duration between two harvest instants will be called a ``slot'' (as in~\cite{6463487}).
Our system model is based on the one illustrated in Figure \ref{fig:problemillustration1}. \begin{figure*}[tb] \centering \includegraphics[width=1\textwidth, height=0.15\textwidth]{problem_illustration_equal_slot_v2.eps} \vspace*{-0.22in} \caption{One of the multiple frames in a timeline. The highlighted frame $i$ (of 24 hours) includes $K$ energy arrivals. The time between consecutive arrivals is allocated to $N$ users.} \label{fig:problemillustration1} \end{figure*} To facilitate daily predictions, we set the length of a frame to 24 hours. Note that we restrict our attention to the case of periodic energy arrivals ($T_t=T$ for all $t\in \left\{1,\hdots,K\right\}$), as in~\cite{6463487}. Little generality is lost, since harvest amounts are arbitrary and the absence of a harvest in a certain duration can be expressed as a harvest of amount zero for the respective slot. The amount of energy harvested from the environment at the beginning of time slot $t$ of frame $i$ is $E_{i,t}$. The BS chooses a power level $p_t$ and a time allocation vector $\tau_t=(\tau_{1t},...,\tau_{Nt})$ for each time slot $t$ of the frame, where $p_{nt}=p_t$ is the transmission power for gateway $n$ during slot $t$ and $\tau_{nt}$ is the time allocated for transmission to gateway $n$ during slot $t$. \section{Problem Statement and Structure} \label{sec:psandst} We define $R_n$ as the total achievable rate for user $n$ (the number of bits transmitted to user $n$) within a frame. Our goal is to maximize a total utility, i.e., the log-sum of the user rates $\sum_{n=1}^N\log_2(R_n)$, which is known to result in proportional fairness~\cite{MaoKS10}. Without much loss of generality\footnote{Most practical rate-power relationships will satisfy convexity and may be used in a similar formulation.
Our choice of rate function here, following AWGN capacity, is quite standard.}, by using the AWGN (Additive White Gaussian Noise) channel capacity as the rate function to construct a biconvex problem \cite{6463487}, we define $R_n=\sum_{t=1}^K\tau_{nt}W\log_2\left(1+\frac{p_{t}g_n}{N_oW}\right)$. Thus, we obtain the constrained optimization problem, Problem \ref{pr:DownlinkScheduling}, where (\ref{eq:nonnegativity2}) represents the nonnegativity constraints for $t=1,...,K$, $n=1,...,N$. The equations in (\ref{eq:Timeconstraint}), called time constraints, ensure that the total time allocated to users does not exceed the slot length and that every user gets a non-zero time allocation during the frame. Finally, the equations in (\ref{eq:Energycausality}), called energy causality constraints, ensure that no energy is consumed before becoming available. \begin{problem} \label{pr:DownlinkScheduling} \small \begin{align} \noindent\mbox{Maximize: } &U(\overline{\tau},\overline{p})=\sum_{n=1}^N\log_2\left(\sum_{t=1}^K\tau_{nt}W\log_2\left(1+\frac{g_np_{t}}{N_oW}\right)\right)\nonumber \\ \noindent \mbox{subject to: }&\tau_{nt}\geq{0} \ , \ p_{t}\geq{0} \label{eq:nonnegativity2} \\ &\sum_{n=1}^N\tau_{nt} = T_t \label{eq:Timeconstraint} \ , \ \sum_{t=1}^K\tau_{nt} \geq \epsilon \\ &\sum_{i=1}^tp_{i}T_{i} \leq{\sum_{i=1}^tE_i} \label{eq:Energycausality} \end{align}\normalsize \end{problem} Note that Problem \ref{pr:DownlinkScheduling} is a biconvex optimization problem with multiple optima, and there exists an offline heuristic algorithm, PTF~\cite{6463487}, that can closely track the optimal solution (the solution found by BCD~\cite{6463487}) of this problem. To adapt to real-life scenarios, in this paper we modify the PTF algorithm so that it can be used in an online setting, i.e., when the amounts of the energy harvests within a frame are not known a priori.
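As a concrete illustration of the objective and constraints above, the following sketch evaluates the proportionally fair utility for two simple candidate schedules; all numerical values (bandwidth, noise PSD, channel gains, harvest amounts) are assumed for illustration only and are not taken from the paper:

```python
import math

# Illustrative scenario (assumed values): N = 2 gateways, K = 3 slots.
W, N0 = 1e6, 1e-9            # bandwidth (Hz) and noise PSD (W/Hz)
g = [1e-6, 4e-6]             # long-term channel gains to the gateways
E = [5.0, 3.0, 2.0]          # energy (J) harvested at the start of each slot
T = [1.0, 1.0, 1.0]          # slot lengths (s), periodic arrivals

def utility(p, tau):
    """Log-sum (proportionally fair) utility for powers p[t] and time
    shares tau[n][t], after checking the constraints."""
    K, N = len(T), len(g)
    for t in range(K):
        # time constraint: allocations fill each slot exactly
        assert abs(sum(tau[n][t] for n in range(N)) - T[t]) < 1e-9
        # energy causality: cumulative spending never exceeds cumulative harvest
        assert sum(p[i] * T[i] for i in range(t + 1)) <= sum(E[:t + 1]) + 1e-9
    R = [sum(tau[n][t] * W * math.log2(1.0 + p[t] * g[n] / (N0 * W))
             for t in range(K)) for n in range(N)]
    return sum(math.log2(r) for r in R)

tau_eq = [[T[t] / len(g) for t in range(len(T))] for _ in range(len(g))]
p_greedy = [E[t] / T[t] for t in range(len(T))]   # spend each harvest in its own slot
p_const = [sum(E) / sum(T)] * len(T)              # spread the total energy evenly
```

For these numbers both schedules are feasible, and by concavity of $\log_2(1+\cdot)$ the constant-power schedule achieves at least the utility of the greedy one; an actual solver (such as BCD or PTF) would also optimize the time shares.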
The modified version of the PTF algorithm will need to be combined with an energy prediction algorithm, which is explained in the next section. \section{Kalman-Based Solar Energy Prediction} \label{sec:k-sep} In this section, we apply the Kalman filter algorithm to forecast the energy arrivals within a frame, for a BS powered by a solar panel. We consider sub-hourly prediction of the energy arrivals for a frame of 24 hours (one day) as an example, and formulate the Kalman filter for the following state and measurement models: \begin{align} x(k+1)&=\alpha_1x(k)+\alpha_2x(k-47)+\beta_1y(k)+w(k) \label{eq:state_model} \\ z(k)&=x(k) + v(k) \label{eq:meas_model} \end{align} where $x$ and $z$ represent the state (energy level) and the measurement, respectively. This model is based on the idea that, due to the diurnal cycle, the amount of energy that will be harvested in the $(k+1)^{th}$ sub-hour of an arbitrary day, $x(k+1)$, should be related to the energy harvested in the $k^{th}$ sub-hour of the same day, $x(k)$, to the solar irradiation received in the $k^{th}$ sub-hour of the same day, $y(k)$, and to the energy harvested in the $(k+1)^{th}$ sub-hour of the previous day, i.e., the energy harvested 48 sub-hours earlier, $x((k+1)-48)=x(k-47)$. In (\ref{eq:state_model}), $w(k)$ is a modeling error, which represents the effects of uncontrolled events on the harvested energy (such as shadowing caused by passing clouds, disturbance to the solar panel, or damage due to malicious acts). It is modelled as IID Gaussian with zero mean and variance $\sigma^2_w$. The parameters $\alpha_1$, $\alpha_2$ and $\beta_1$ are weights that reflect the relative importance of the quantities used for prediction. In the measurement model, $v$ denotes IID Gaussian measurement noise with zero mean and variance $\sigma^2_v$.
By considering that there are 48 sub-hours in a day, the overall state equations can be re-stated in matrix form as in (\ref{eq:stateeq}). \begin{align} \label{eq:stateeq} \footnotesize \begin{bmatrix} x(k+1)\\ x(k)\\ x(k-1)\\ \vdots \\ x(k-46) \end{bmatrix} = \textbf{A} \begin{bmatrix} x(k)\\ x(k-1)\\ x(k-2)\\ \vdots \\ x(k-47) \end{bmatrix} +\footnotesize \overline{\beta} y(k)+ \overline{\Gamma} w(k) \end{align} Now, we define an augmented state vector, $\overline{\xi_{k}}$, which contains the energy amounts harvested over the last 48 sub-hours (one day): \begin{align} \overline{\xi_{k}}= \begin{bmatrix} x(k)&x(k-1)&\hdots&x(k-47) \end{bmatrix}' \end{align} The matrix $\textbf{A}$ and the column vectors $\overline{B}$ and $\overline{\Gamma}$ are defined as: \begin{align} \textbf{A}&= \footnotesize\begin{bmatrix} \alpha_1 & 0 & 0 & \hdots & 0 & 0 & \alpha_2\\ 1 & 0 & 0 & \hdots & 0 & 0 & 0 \\ \vdots & & & \ddots & & & \vdots\\ 0 & 0 & 0 & \hdots & 0 & 1 & 0 \\ \end{bmatrix} \\ \overline{B}&= \footnotesize\begin{bmatrix} \beta_1 & 0 & \hdots &0 \end{bmatrix}' \hspace{0.2in}\overline{\Gamma}=\hspace{0.05in} \footnotesize\begin{bmatrix} 1 & 0 & \hdots &0 \end{bmatrix}' \end{align} Thus, the state model in (\ref{eq:stateeq}) and the measurement model in (\ref{eq:meas_model}) reduce to: \begin{align} \overline{\xi_{k+1}}&=\textbf{A}\overline{\xi_{k}}+\overline{B}y(k)+\overline{\Gamma}w(k) \label{eq:truth1}\\ z(k)&=x(k) + v(k) \label{eq:truth2} \end{align} which is structurally equivalent to the ``truth'' model described in (5.27) of~\cite{Crassidis04}.
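The augmented model above can be assembled directly; a minimal Python sketch (function names are ours, \texttt{numpy} assumed), where row 0 of $\textbf{A}$ carries the regression weights and the remaining rows shift the stored history down by one sub-hour:

```python
import numpy as np

def build_state_model(alpha1, alpha2, beta1, d=48):
    """Matrices of the augmented model xi_{k+1} = A xi_k + B y(k) + Gamma w(k).
    Row 0 implements x(k+1) = alpha1*x(k) + alpha2*x(k-47) + beta1*y(k);
    rows 1..d-1 shift x(k),...,x(k-46) down by one slot."""
    A = np.zeros((d, d))
    A[0, 0], A[0, -1] = alpha1, alpha2      # weights on x(k) and x(k-47)
    A[1:, :-1] = np.eye(d - 1)              # history shift
    B = np.zeros(d); B[0] = beta1
    Gamma = np.zeros(d); Gamma[0] = 1.0
    return A, B, Gamma

def predict_state(xi, y, A, B, Gamma, w=0.0):
    """One-step propagation of the augmented state vector."""
    return A @ xi + B * y + Gamma * w
```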
Thus, by applying the Discrete-Time Linear Kalman Filter described in~\cite{Crassidis04}, we are able to predict the amount of energy arrival in the next sub-hour by using only the amount of energy that arrived in this sub-hour, the solar irradiation received in this sub-hour, and the arrival in the previous day's next sub-hour. Note that, in order to compute the best weights $\alpha_1$, $\alpha_2$, and $\beta_1$ for the simulations, we use the following data fitting method: using the 18 days of data (real power measurements belonging to 01-18.10.2009 for Amherst, Massachusetts, USA) provided by Navin Sharma~\cite{SharmaGIS10}, we design a Newton algorithm that minimizes the Mean Squared Error (MSE) between the real measurements and the data estimated according to the state and measurement models in (\ref{eq:state_model}) and (\ref{eq:meas_model}). The objective function minimized by the Newton algorithm is: \begin{align} \frac{1}{N}\sum^{N}_{k=1}(z(k)-z_m(k))^2 \end{align} where $z$ denotes the data obtained from actual measurements and $z_m$ denotes the estimated data obtained from the models in (\ref{eq:state_model}) and (\ref{eq:meas_model}). Note that there are $48$ sub-hours in a day, and the prediction of a sub-hour's solar irradiation requires the previous day's data at the same sub-hour, so the first day of data is consumed by initialization. For the remaining 17 days of data ($17\times48=816$ sub-hours)~\cite{SharmaGIS10}, the objective function can be stated as: \begin{align} \frac{1}{816}\sum^{863}_{k=48}(z(k+1)-(\alpha_1x(k)+\alpha_2x(k-47)+\beta_1y(k)))^2 \end{align} Our simulation results, provided in Section \ref{sec:theresultsofchp7}, show that the best values for the weights $\alpha_1$, $\alpha_2$, and $\beta_1$ are 0.7184, 0.1439, and 0.0063, respectively, when the $x(k)$'s are in kilojoules and the initial values for the data fitting are taken as 0.9, 0.1, and 0.01, respectively.
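Since the model is linear in the weights, the same MSE objective can alternatively be minimized in closed form; a Python sketch using an ordinary least-squares solve in place of the Newton iteration (an implementation choice of ours, not the paper's method):

```python
import numpy as np

def fit_weights(x, y):
    """Least-squares estimate of (alpha1, alpha2, beta1) minimizing
    sum_k (x(k+1) - alpha1*x(k) - alpha2*x(k-47) - beta1*y(k))^2
    over all k for which the previous day's sample x(k-47) exists."""
    rows = [[x[k], x[k - 47], y[k]] for k in range(47, len(x) - 1)]
    rhs = [x[k + 1] for k in range(47, len(x) - 1)]
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return theta  # (alpha1, alpha2, beta1)
```

On noiseless synthetic data generated from known weights, the solve recovers them exactly, which is a convenient sanity check before fitting real measurements.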
Furthermore, given the equivalence to the ``truth'' model in \cite{Crassidis04}, the prediction and update equations of the Kalman estimator can be stated as: \begin{align} \overline{\xi_{k+1}^-}=\textbf{A}\overline{\xi_{k}^+}+\overline{B}y(k), \hspace{0.05in}\overline{\xi_{k}^+}=\overline{\xi_{k}^-}+\textbf{K}_k[z(k)-\overline{\Gamma}'\overline{\xi_{k}^-}] \end{align} where $\xi_k^-$ and $\xi_k^+$ denote the pre-measurement and post-measurement states, respectively. Moreover, $\textbf{K}$, $R$, $I$, and $P$ are the Kalman gain, the measurement noise variance, the identity matrix, and the error covariance matrix, respectively, defined as: \vspace{-0.1in} \begin{align} \textbf{K}_k&=P_k^-\overline{\Gamma}\,[\overline{\Gamma}'P_k^-\overline{\Gamma}+R]^{-1} \\ P_k^+&=[I-\textbf{K}_k\overline{\Gamma}']P_k^- \\ P^-_{k+1}&=\textbf{A}P_k^+\textbf{A}^T+\overline{\Gamma}\sigma_\omega^2\overline{\Gamma}^T \end{align} Similarly to the states $\xi_k^-$ and $\xi_k^+$, $P_k^-$ and $P_k^+$ denote the pre-measurement and post-measurement error covariance matrices. Thus, we obtain K-SEP with the state and measurement models in (\ref{eq:state_model}) and (\ref{eq:meas_model}). \section{PTF-On Algorithm} \label{sec:ptf-on} In this section, we propose an online proportional fair resource (power and time) allocation algorithm called PTF-On, the online version of the PTF heuristic proposed in~\cite{6463487}. Note that the PTF algorithm operates in an offline fashion, i.e., the energy arrival amounts within a frame are known at the beginning of that frame. The main motivation of the PTF-On algorithm can be explained as follows: there are 48 sub-hours and thus 48 energy arrivals within a frame (24 hours). At the beginning of each slot, the current amount of residual energy and the amounts of previous harvests are known, while the amounts of the next 47 energy arrivals must be predicted. Thus, at the beginning of each frame we perform two prediction operations to determine the energy amounts that will be harvested during the frame.
We perform this operation as follows: at the beginning of Slot 1, the energy arrives and is known to the BS. Thus, the BS can use K-SEP (Kalman-Based Solar Energy Prediction) to predict its next energy arrival, i.e., the arrival in Slot 2. Arrivals beyond the next slot cannot be predicted before a sub-hour passes without enlarging the error covariance matrix. This is because half an hour must pass to observe what is actually harvested in Slot 2, so that this value can be used to predict the value in Slot 3; predicting arrivals beyond the first would require handling larger covariance matrices. Thus, to keep the computations simple, we adopt S-SEP, which does not use the data harvested in previous slots (it predicts the amount of energy to be harvested in today's $k^{th}$ sub-hour as the average of the past two days' $k^{th}$ sub-hour harvests), to predict the next 46 arrivals. Thus, all energies (or at least their estimates) are known to the BS at the beginning of the frame. This way, at the beginning of each frame, one can run the PTF algorithm to determine a close-to-optimal power and time allocation that maximizes the throughput in a proportionally fair way for the upcoming 24 hours. PTF-On requires the past two days' data to predict the energy arrival amounts of the day on which it is used. Assume the days are numbered 1, 2, 3, 4, etc., and that PTF-On will be used to predict the arrivals and determine the most proportionally fair resource allocation for the second half of day 3 and the first half of day 4 (a 24-hour frame from 12:00 of day 3 to 12:00 of day 4). The operation of the PTF-On algorithm is explained below. \begin{enumerate} \item For the 24-hour frame starting at 12:00 of day 3, there will be 48 slots, each 30 minutes in length (note that this frame is called the original frame).
The beginning of the whole frame will be the beginning of Slot 1. Thus, when the frame starts, the energy arrival at the beginning of Slot 1 of day 3, $E_{3,1}$, is known, and the energy arrival at the beginning of Slot 2, $E_{3,2}$, can be predicted by using the K-SEP algorithm. Then, use the S-SEP algorithm to obtain rough predictions of the others, $E_{3,3},\hdots,E_{3,48}$, and form a predicted harvest series as follows: $\overline{E_{pred}}=[E_{3,1},E'_{3,2},E''_{3,3},\hdots,E''_{3,48}]$, where $E$, $E'$, and $E''$ represent the real, K-SEP predicted, and S-SEP predicted energy amounts, respectively. \item As all the energy amounts (or at least their estimates) are known at the beginning of the frame, use the first part of the PTF algorithm to determine the best proportionally fair power allocation across the sub-hours of the frame. \item In the first slot of the frame, apply the power allocation found by the PTF algorithm for Slot 1 of that frame. Let $B_{nt}=R_{nt}T$ be the number of bits that would be sent to gateway $n$ if the whole slot (of length $T$) were allocated to that gateway. If this slot is the first slot of the original frame, assign it to the gateway with the maximum rate, $R_{nt}$, in that slot. Otherwise, at the beginning of each slot $t\in\left\{2,\hdots,K\right\}$, determine the gateway with the maximum $\beta_n$, where $\beta_n=\frac{B_{nt}}{\sum^{t-1}_{i=1}B_{ni}}$, and assign the whole slot to that gateway. If multiple gateways share the same $\beta_n$, allocate the slot to the gateway with the best channel. \item When the first slot of the frame finishes and the second slot starts, designate Slot 2 of the current frame as the first slot of the upcoming frame (the half-hour-shifted version of the original frame), and estimate the related energy amounts. Then, add the remaining energy to the first harvest of the new frame to form a new predicted harvest series.
(Example: at 12:30, $E_{3,2}$ is known and $E_{3,3}$ can be predicted by K-SEP. The remaining 46 energy harvests are predicted by S-SEP. Thus, a new predicted harvest series is formed: $\overline{E_{pred}}=[E_{3,2}+(E_{3,1}-p_{1}T),E'_{3,3},E''_{3,4},\hdots,E''_{3,48},E''_{4,1}]$.) \label{it:4} \item Apply Steps 2, 3, and 4 in order until the 24 hours are completed, i.e., the last slot of the original frame has been assigned a power and time allocation. \end{enumerate} \section{Numerical and Simulation Results} \label{sec:theresultsofchp7} \subsection{K-SEP and S-SEP Related Results} \label{sec:theresultsofchp71} In this section, we present the numerical and simulation results for our Kalman filter based solar energy prediction algorithm, K-SEP, and the online resource allocation algorithm, PTF-On. Using the best weights computed by the Newton algorithm, we perform numerous simulations to test our predictor. The performance of the predictor is measured by the MSE criterion, computed as follows: \begin{align} MSE=\frac{1}{M}\sum_{i=1}^M(x_i-\widetilde{x}_i)^2 \end{align} where $x$ and $\widetilde{x}$ represent the real and estimated energies, respectively, and $M$ is the number of samples considered. For comparison, we use the simple solar energy predictor S-SEP, introduced in Section \ref{sec:ptf-on}. We first let $M=48$ (for the 48 sub-hours in a day) and compute daily MSE values for 16 days, as shown in Table \ref{tab:MSE171}. The MSEs averaged over the 16 days of October 2009 (03.10.2009-18.10.2009) for K-SEP and S-SEP are $MSE^{K-SEP}_{Aver}=4.3778$ and $MSE^{S-SEP}_{Aver}=84.1463$ kilojoules$^2$/sub-hour/day, respectively. Considering that the maximum power measured in~\cite{SharmaGIS10} was 60 Watts, this system can produce at most $E_{max}=60\,\mathrm{W}\times 1800\,\mathrm{s}=108$ kilojoules in a sub-hour.
Thus, S-SEP performs much worse than K-SEP in terms of average error: $\sqrt{MSE^{K-SEP}_{Aver}}=2.0923$ kilojoules/sub-hour, whereas $\sqrt{MSE^{S-SEP}_{Aver}}=9.1731$ kilojoules/sub-hour. \begin{table*}[tb]\footnotesize \centering \caption{MSEs for the 16 days} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline Days & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline K-SEP & 0.1687 & 12.0649 & 9.5177 & 17.0761 & 6.9371 & 3.6468 & 0.3298 & 2.6919 \\ \hline S-SEP & 106.6895 & 266.2716 & 122.6293 & 89.9413 & 141.0086 & 43.9074 & 122.3416 & 15.8741 \\ \hline \hline Days & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 \\ \hline K-SEP & 2.0283 & 2.3286 & 3.4519 & 3.6127 & 0.2873 & 1.0939 & 2.5699 & 0.1945 \\ \hline S-SEP & 90.8727 & 20.1912 & 69.3250 & 26.1086 & 32.3662 & 5.6346 & 57.2630 & 135.9156 \\ \hline \end{tabular} \label{tab:MSE171} \vspace{-0.2in} \end{table*} Figures \ref{fig:best} and \ref{fig:worst} illustrate the performances of the two predictors for the two days on which S-SEP performs best and worst over the 16 days. As can be seen from the figures, K-SEP outperforms S-SEP at all instances. Nevertheless, even S-SEP, as a simple solar energy prediction method, provides some benefit for solving Problem \ref{pr:DownlinkScheduling} in an online fashion. More importantly, the energy harvests predicted by the K-SEP algorithm always follow the original energies obtained from real measurements, as shown in Figures \ref{fig:best} and \ref{fig:worst}. From the numerical and simulation results obtained with K-SEP and S-SEP, we reach two main conclusions: a prediction method for estimating solar energy harvests can profitably be used in tandem with an offline allocation algorithm (PTF), and K-SEP performs very close to the ideal situation in which the energy harvests are known a priori. \begin{figure}[tb] \vspace{-0.2in} \centering \begin{psfrags} \psfrag{x}[t]{No.
of sub-hours} \psfrag{y}[t]{Energy (kilojoules)} \includegraphics[width=0.45\textwidth, height=0.35\textwidth]{best.eps} \end{psfrags} \caption{Performances of K-SEP and S-SEP compared with the real power measurements provided in~\cite{SharmaGIS10}, belonging to 16.10.2009 for Amherst, MA, USA.} \label{fig:best} \end{figure} \begin{figure}[tb] \vspace{-0.25in} \centering \begin{psfrags} \psfrag{x}[t]{No. of sub-hours} \psfrag{y}[t]{Energy (kilojoules)} \includegraphics[width=0.45\textwidth, height=0.35\textwidth]{worst.eps} \end{psfrags} \caption{Performances of K-SEP and S-SEP compared with the real power measurements provided in~\cite{SharmaGIS10}, belonging to 04.10.2009 for Amherst, MA, USA.} \label{fig:worst} \end{figure} \begin{figure*}[tb] \centering \begin{psfrags} \psfrag{number of subhours}[c]{No. of sub-hours} \psfrag{Energy in kJ}[b]{Energy (kilojoules)} \includegraphics[scale=0.35]{kalman_30min.eps} \end{psfrags} \vspace{-0.1in} \caption{Performance of K-SEP with respect to the real measurements from the University of Oregon Solar Radiation Laboratory, belonging to 07-19.05.2009 for Salem, OR, USA.} \label{fig:kalman_30min} \end{figure*} Next, in order to show the robustness and reliability of the results given in Figures \ref{fig:best} and \ref{fig:worst} and in Table \ref{tab:MSE171}, we further test the performance of K-SEP on solar irradiation measurements obtained from an entirely different data source than \cite{SharmaGIS10}. To that end, K-SEP is run on measurements obtained from the University of Oregon Solar Radiation Laboratory station in Salem, OR, USA. The results for the K-SEP algorithm can be seen in Figure \ref{fig:kalman_30min} for 12 days (7-19 May 2009). As in the previous tests, the K-SEP predictions stay very close to the original data, in which all energy harvests are known in an offline manner.
\subsection{PTF-On Related Results} \label{sec:theresultsofchp72} In this section, we present the numerical and simulation results for the proposed online heuristic, PTF-On. Throughout our simulations, we use the following setup: $W=10$\,MHz, $N_o=10^{-19}$\,W/Hz. As an example, we suppose that there are three sensor networks, and thus three gateways, in the system, similar to the one shown in Figure \ref{fig:fig2}. The path losses of the gateways are 78, 92, and 100 dB, respectively. We compare the performance of the proposed algorithm with that of the ``Spend What You Get'' policy (where the amount of energy harvested at the beginning of a slot is completely spent during that slot) combined with TDMA time allocation, and with that of the offline PTF heuristic shown to operate very close to optimal in the authors' previous work~\cite{6463487}. \begin{table}[tb]\footnotesize \caption{Fairness indexes (FI) obtained by PTF, PTF-On, and the SG+TDMA scheme.} \centering \begin{tabular}{|l|l|l|} \hline Algorithm & Worst Case FI & Average FI over 14 days \\ \hline SG+TDMA & 0.8954 & 0.9420 \\ \hline PTF & 0.9345 & 0.9531 \\ \hline PTF-On & 0.9084 & 0.9299 \\ \hline \end{tabular} \label{tab:fair} \end{table} \begin{table}[tb]\footnotesize \vspace{-0.2in} \caption{Average over 14 frames of the three gateways' throughputs (gigabytes/day).} \centering \begin{tabular}{|l|l|l|l|l|} \hline Algorithm & G1 & G2 & G3 & Total \\ \hline SG+TDMA & 188.0338 & 133.7222 & 105.8971 & 427.6531 \\ \hline PTF & 458.8113 & 340.0307 & 269.5249 & 1068.3669 \\ \hline PTF-On & 473.5146 & 326.1724 & 244.9720 & 1044.6595 \\ \hline \end{tabular} \label{tab:avthr} \vspace{-0.2in} \end{table} \begin{figure*}[tb] \vspace{-0.25in} \centering \begin{psfrags} \psfrag{No of frames}[l]{Frame no.} \psfrag{Total throughput(Gigabytes)}[l]{Total throughput (Gigabytes)} \includegraphics[scale=0.42]{figure_third_v4} \end{psfrags} \vspace{-0.05in} \caption{Total throughput
of the three gateways (in Gigabytes) over 14 frames, using the Amherst, MA solar irradiation data.} \label{fig:total} \end{figure*} We start our analysis at 12:00 on 03.10.2009 and finish it at 12:00 on 17.10.2009. Hence, we have 14 frames, each consisting of 24 hours (48 sub-hours). For this time period, we test the performances of the PTF-On and PTF algorithms and the SG+TDMA scheme. The results are illustrated in Tables \ref{tab:fair} and \ref{tab:avthr} and Figure \ref{fig:total}. Note that the fairness index ($FI$) in Table \ref{tab:fair} is Jain's index, a well-known measure of fairness~\cite{Jain84}. $FI$ equals 1 for a completely fair allocation and is defined as $FI=\frac{(\sum^N_{i=1}x_i)^2}{N\cdot \sum^N_{i=1}x_i^2}$. To compute $FI$, we use the numbers of bits transmitted to the gateways, $x_i$, $i=1,\hdots,N$. It is important to note that, as the utility is defined as the sum of ``logarithms'' of individual throughputs, even a 1\% improvement in utility is significant. From the viewpoint of fairness, as illustrated in Table \ref{tab:fair}, the performance of the proposed online algorithm, PTF-On, closely follows that of the offline PTF algorithm. Note that Jain's index of the proposed algorithm stays above 0.90 even in the worst cases observed during the simulations. Although the average FI obtained with PTF-On appears slightly lower than that of SG+TDMA, it should be remembered that the goal of PTF is to maximize proportional fairness rather than $FI$ in particular. Despite this, PTF-On fares quite well with respect to FI. \vspace{-0.02in} Average throughputs of the three gateways over the 14 simulated frames are also given in Table \ref{tab:avthr}. As illustrated in Figure \ref{fig:total} and Table \ref{tab:avthr}, PTF-On significantly outperforms SG+TDMA.
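Jain's index is straightforward to compute from the per-gateway bit counts; a minimal Python sketch:

```python
def jain_index(x):
    """Jain's fairness index: (sum x_i)^2 / (N * sum x_i^2).
    Equals 1 for a perfectly even allocation and 1/N in the extreme
    case where a single user receives everything."""
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))
```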
Moreover, the utility (the sum of logarithms of individual rates) and the total throughput obtained with PTF-On are very close to those of the offline PTF algorithm, which was already shown to perform very close to optimal \cite{6463487}. Hence, we believe that the PTF-On algorithm, proposed as a novel solution to the proportionally fair power and rate allocation problem, provides a good online solution. \vspace{-0.1in} \section{Conclusion} \vspace{-0.03in} \label{sec:conc} This paper investigated the proportional fair power and time allocation problem in an industrial wireless sensor network system with an energy harvesting BS. The paper focuses on finding the best \textit{online} schedule for this problem by predicting the energy amounts that will be harvested, and supplied to the BS, during the frame. Numerical evaluations show that the proposed joint prediction and resource allocation algorithm performs very close to the optimal offline resource allocation. The developed framework can be applied to various other scenarios, including other types of energy harvesting (such as wind, vibration, etc.) and other types of applications and utility functions (involving, e.g., delay or reliability).
\section{Introduction} \label{sec:1} Consider the dynamical evolution, with non conserved order parameter, of a system undergoing a first order phase transition. A basic paradigm of statistical mechanics is that the corresponding macroscopic behavior is described by the motion by curvature of the interfaces separating the two stable phases. For lattice systems with short range interaction, the lattice symmetries are still felt on the macroscopic scale and the resulting evolution is an anisotropic motion by curvature. For values of the temperature below the roughening transition, the Wulff shape is not strictly convex and the corresponding evolution is crystalline, i.e., it generates facets \cite{TCH,T}. On the other hand, for long range interactions, the resulting interface evolution is described by the (isotropic) motion by mean curvature. We refer to \cite{funaki} for a recent overview on stochastic interface evolutions. In principle, the macroscopic evolution of the interfaces should be derived from a microscopic Glauber-like dynamics, and the corresponding transport coefficients could be characterized in terms of the microscopic interaction and the jump rates. While there is plenty of numerical evidence that this is indeed the case, the analytical results are few and the derivation of motion by curvature, say for the Ising model with Glauber dynamics at positive temperature, remains a most challenging issue. For short range interactions, the only available results are in fact at zero temperature \cite{CMST,LST,S}. In the case of long range interactions, or more precisely for the Ising model with Kac potentials, the motion by mean curvature has been derived in \cite{DOPT1,KS}. The peculiar feature of this model is the presence of a parameter, the interaction range, that allows one to achieve this derivation in two separate steps.
First, one considers the evolution of the empirical magnetization in the Lebowitz-Penrose limit, showing that its limiting behavior is described by a non-local evolution equation. Second, it is shown that, under a diffusive rescaling of space and time, this evolution leads to the motion by mean curvature. This second step is quite similar to the analogous derivation starting from the Allen-Cahn equation \cite{BSS,I}. There is another model with the same features, the so-called Glauber+Kawasaki process, for which the derivation of motion by mean curvature has been achieved by the same procedure \cite{Bo,KS0}. The present purpose is to describe, in the sense of large deviations theory, the probability of deviations from the motion by curvature. Postponing the connection with the microscopic dynamics, let us first discuss this topic purely from a phenomenological point of view in the setting introduced in \cite{S}. On a scale large compared to the microscopic length scale, we can represent the interface between the two pure phases as a surface $\Gamma$ of codimension one embedded in $\bb R^d$. The typical evolution of $\Gamma$ can then be deduced by free energy considerations. We denote by $\tau$ the surface tension, which in general depends on the local orientation of the surface, i.e., on the local normal $\hat n$ at $\Gamma$. The surface free energy is then given by \begin{equation} \label{free} F = \int_{\Gamma} \!\mathrm{d}\sigma\, \tau (\hat n)\;, \end{equation} where $\mathrm{d} \sigma$ is the surface measure. Observe that in the isotropic case $\tau$ is constant and $F$ becomes proportional to the perimeter of $\Gamma$. Phenomenologically, it is postulated that the interface velocity along the local normal, denoted by $v$, is given by \begin{equation} \label{mbc0} v = -\mu \frac {\delta F}{\delta \Gamma}\;, \end{equation} where the \emph{mobility} $\mu$ may depend on the local orientation of the surface.
As shown in \cite{S}, for short range interactions the mobility $\mu$ can be computed from the microscopic dynamics, by either a Green-Kubo formula obtained via a linear response argument, or by looking at the fluctuations of the empirical order parameter. Let $\tilde\tau$ be the 1-homogeneous extension of $\tau$ to a function on $\bb R^d$, and introduce the stiffness matrix $A(\hat n)$ as the Hessian of $\tilde\tau$ at $\hat n$ (so that $A(\hat n) \hat n=0$). For $x\in\Gamma$ we define, \[ \kappa_A (x) := \tau(\hat n(x))^{-1} \sum_{i=1}^{d-1} \langle e_i(x),A(\hat n(x))e_i(x)\rangle \kappa_i(x)\;, \] where $\kappa_i(x)$ are the principal curvatures and $e_i(x)$ are the corresponding principal curvature directions of $\Gamma$ at $x$. Then \eqref{mbc0} reads, \[ v = \theta \kappa_A\;, \] where the \emph{transport coefficient} $\theta$ is given by the Einstein relation, \begin{equation} \label{theta} \theta = \mu \tau\;. \end{equation} In the isotropic case, $\tau$ and $\mu$ are constant, $A(\hat n) = \tau {1 \mskip -5mu {\rm I}}$ on the subspace orthogonal to $\hat n$, hence $\kappa_A = \kappa$, the mean curvature of $\Gamma$. Referring to \cite{S} for the analysis of (small) Gaussian fluctuations, we next introduce the rate function describing the asymptotics of the probability of large deviations around the motion by mean curvature. To this end, fix a time interval $[0,T]$ and a path $\Gamma(t)$, $t\in[0,T]$. On the basis of a Gaussian assumption on the noise and a fluctuation-dissipation relation, the rate function ought to be given by \begin{equation} \label{sac} S_\mathrm{ac}(\Gamma) = \frac 1{4\mu} \int_0^T\!\mathrm{d} t\int_{\Gamma(t)}\mathrm{d} \sigma\, (v-\theta \kappa_A)^2\;. \end{equation} This functional should capture the asymptotics of the probability of smooth paths, and we next discuss its extension to more general paths.
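As a simple consistency check in the isotropic case (a sketch of ours, not part of the derivation): for radially symmetric paths $\Gamma(t)=\partial B_{R(t)}$ in $\bb R^d$ one has $\kappa_A=\kappa=(d-1)/R$, and, up to the sign convention for $v$, the action \eqref{sac} reduces to \[ S_\mathrm{ac} = \frac 1{4\mu}\int_0^T\!\mathrm{d} t\; \big|\partial B_{R(t)}\big| \Big(\dot R + \theta\,\frac{d-1}{R}\Big)^2\;, \] which vanishes precisely on the shrinking sphere $\dot R = -\theta(d-1)/R$, i.e., $R(t)=\sqrt{R(0)^2-2\theta(d-1)\,t}$, the typical path of the motion by mean curvature.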
As shown in \cite{KORV} in the context of the Allen-Cahn equation, the path $t\mapsto \Gamma(t)$ need not be continuous since nucleation might occur at some intermediate times. In such cases, the appropriate rate function reads, \begin{equation} \label{s+s} S(\Gamma) = S_\mathrm{ac}(\Gamma) + S_\mathrm{nucl}(\Gamma) \;, \end{equation} where $S_\mathrm{nucl}$ measures, according to \eqref{free}, the free energy cost of the interfaces nucleated in the time interval $[0,T]$. As we discuss in Section \ref{sec:5}, $S_\mathrm{nucl}$ can be recovered from $S_\mathrm{ac}$ by approximating nucleation events with continuous paths. Moreover, interfaces need to be counted with their multiplicity and are not necessarily smooth, even away from the nucleation times. Suitable weak definitions of the curvature and velocity are thus needed. This is accomplished by using tools of geometric measure theory; we refer to \cite{MR} for the proper definition of the functional $S$ in the case of non smooth interfaces in the isotropic case. Finally, it cannot be excluded that the map $t\mapsto \Gamma(t)$ has a Cantor part, which does not affect the cost functional constructed in \cite{MR}. A variational definition of $S$ which also takes into account such a Cantor part is provided in \cite{BBP1}; its corresponding zero level set is given by the mean curvature flow in Brakke's formulation \cite{Brakke}. The rate functional $S$ should describe the large deviations asymptotics of microscopic stochastic dynamics that lead to the motion by curvature of the interfaces. The corresponding analysis has been carried out mostly for the Allen-Cahn evolution. In particular, the functional \eqref{s+s} has been identified by considering the sharp interface limit of the natural action functional associated to the Allen-Cahn equation, initially in \cite{KORV} and in greater detail in \cite{MR}.
A stochastic Allen-Cahn equation has been considered in \cite{BBP1}, where the large deviation upper bound with rate function $S$ is proven. Observe that, as discussed in \cite{S}, the Allen-Cahn evolution exhibits a trivial transport coefficient, $\mu=1/\tau$, so that $\theta=1$ regardless of the shape of the double well potential. The case of Glauber dynamics for Ising systems with Kac potentials, in the one dimensional case, has been considered in \cite{BT,BDT}, where the asymptotic probability of a displacement of an interface in a given finite time is evaluated. Here we discuss, in the case of smooth interfaces, the derivation of the rate function $S$ by considering either the Glauber dynamics for Ising systems with Kac potentials or the Glauber+Kawasaki process. For these models the large deviations asymptotics, respectively in the Lebowitz-Penrose and in the continuum limit, has been derived in \cite{C} and in \cite{JLV}. We thus analyze the sharp interface limit of the corresponding action functionals, deducing the rate functional \eqref{sac} and providing explicit formulae for the mobility coefficients. While the basic strategy is analogous to the one in \cite{KORV}, the non-local character of the action functionals requires a more careful choice of the optimizing sequences. More precisely, in order to obtain the right transport coefficient, we need to introduce a corrector in the ansatz for the recovery sequences and solve a variational problem to identify the optimal choice. In the case of the Ising model with Kac potentials, the mobility derived here agrees with that derived in \cite{B} by a linear response argument, thus validating the fluctuation-dissipation assumption. The computation of the mobility for the Glauber+Kawasaki process appears instead to be novel and provides a dynamical characterization of the surface tension.
Note indeed that, as the invariant measure of this process is not explicitly known, a static characterization along the guidelines of equilibrium statistical mechanics is not feasible. It would be interesting to extend the results of the present paper on the rate function $S$ to the case of general interfaces, possibly exhibiting nucleation events. In analogy with the results in \cite{MR} for the Allen-Cahn equation, a key step should be to describe, in both models considered here, the asymptotic behavior of sequences $\varphi_\eps$ with equibounded action (see equations \eqref{S1} and \eqref{Ihk2} below). In the Ising-Kac case a compactness property is expected, in analogy with the result in \cite{AB} for time-independent sequences with equibounded free energy (see \eqref{Fe} below), yielding paths of sharp interfaces in the limit $\eps \to 0$ with uniformly bounded perimeter. However, it is unclear how to associate to such configurations $\varphi_\eps$ corresponding paths of generalized surfaces $t \mapsto \Gamma_\eps(t)$ (as varifolds in the sense of geometric measure theory) with suitable uniform curvature and velocity bounds. As a consequence, we are not able to deduce curvature and velocity bounds on the limiting interfaces. Moreover, it remains to be proven that the well-prepared sequences $\varphi_\eps$ considered here (see \eqref{recovery} and \eqref{recovery1}) actually describe the typical asymptotic behavior of configurations assuming only boundedness of the action functionals. \section{ Glauber dynamics with Kac potentials} \label{sec:2} In this section we analyze the sharp interface limit of the action functional in the context of the Glauber dynamics for Ising systems with Kac potentials. \subsection{Microscopic model and its mean field limit.} \label{sec:2.0} Let $\bb T^d_L= (\bb R/ L\bb Z)^d$ be the torus of side $L\ge 1$ in $\bb R^d$; when $L=1$ we drop it from the notation, i.e., $\bb T^d = \bb T^d_1$.
We denote by $r,r'$ the elements of $\bb T^d_L$ and by $\mathrm{d} r$ the Haar measure on $\bb T^d_L$. Given a smooth non-negative function $j\colon \bb R_+ \to \bb R_+$, supported in $[0,\frac 12]$ and such that $\int_{\bb R^d}\!\mathrm{d} z\; j(|z|) = 1$, we let $J\colon \bb T^d_L \to \bb R_+$ be the probability density defined by $J(r) = j(|r|)$. In the sequel, $J*f(r) := \int_{\bb T^d_L} \!\mathrm{d} r'\, J(r-r') f(r')$ is the standard convolution on $\bb T^d_L$. Given $L>0$ and $\gamma>0$ such that $\gamma^{-1}L \in \bb N$, let $\bb T^d_{L,\gamma} := (\gamma \bb Z / L \bb Z)^d $ be the discrete approximation of $\bb T^d_L$ with lattice spacing $\gamma$. The microscopic configuration space is $\Omega_{L,\gamma} := \{-1,1\}^{\bb T^d_{L,\gamma}}$. The microscopic energy is the function $H_{L,\gamma} \colon \Omega_{L,\gamma} \to \bb R$ defined by \[ H_{L,\gamma}(\s) = -\frac 12 \sum_{i,j\in \bb T^d_{L,\gamma}} \gamma^d J(i-j) \sigma(i)\sigma(j)\;. \] Given the inverse temperature $\beta > 0$, the corresponding Gibbs measure $\mu_{L,\gamma}^\beta$ is the probability on $\Omega_{L,\gamma}$ defined by \begin{equation} \label{gibbs} \mu_{L,\gamma}^\beta(\sigma) = \frac 1{Z^\beta_{L,\gamma}} \exp \big\{-\beta H_{L,\gamma}(\sigma)\big\}\;, \end{equation} where $Z^\beta_{L,\gamma}$ is the partition function. \subsubsection*{Lebowitz-Penrose limit} We consider the supercritical case $\beta>1$ and define the spontaneous magnetization $m_\beta$ as the strictly positive solution of the Curie-Weiss equation, that is \begin{equation} \label{mbeta} m_\beta = \tanh (\beta m_\beta)\;, \quad m_\beta>0\;. \end{equation} Denoting by $\mc M(\bb T^d_L)$ the space of bounded measures on the torus $\bb T^d_L$, equipped with the weak*-topology, we define the \emph{empirical magnetization} as the map $M^\gamma\colon \Omega_{L,\gamma} \to \mc M(\bb T^d_L)$ given by \[ M^\gamma(\sigma) = \gamma^d \sum_{i\in \bb T^d_{L,\gamma}} \sigma(i) \, \delta_i\; .
\] As proven in \cite{EE}, in the Lebowitz-Penrose limit $\gamma\to 0$ the excess free energy functional for the Gibbs measures \eqref{gibbs} is the functional $F_L\colon L^\infty(\bb T^d_L;[-1,1]) \to [0,\infty)$ given by \begin{equation} \label{F} F_L(m) = \int\!\mathrm{d} r\; [f_\beta(m)-f_\beta(m_\b)] + \frac 14 \int\!\mathrm{d} r\!\int\!\mathrm{d} r'\; J(r-r') [m(r)-m(r')]^2\;, \end{equation} where \begin{equation} \label{fb} f_\beta(m) = - \frac{m^2}2 + \b^{-1} \imath(m)\;, \quad \imath(m) = \frac{1+m}2\log\frac{1+m}2 + \frac{1-m}2\log\frac{1-m}2\;. \end{equation} Observe that, since $\pm m_\beta$ are the minimizers of $f_\beta$, the functional $F_L$ vanishes on the pure phases $\pm m_\beta$. The probabilistic content of this statement is that the family $\{\mu^\beta_{L,\gamma} \circ (M^\gamma)^{-1} \}_{\gamma>0}$ of probabilities on $ \mc M(\bb T^d_L)$ satisfies a large deviation principle with speed $\beta^{-1}\gamma^d$ and rate function $\mc F_L$ given by $\mc F_L(\nu)=F_L(m)$ if $\nu= m \, \mathrm{d} r$ for some $m \in L^\infty(\bb T^d_L;[-1,1])$ and $+\infty$ otherwise. \subsubsection*{Glauber-Kac dynamics} The Glauber dynamics with Kac potentials is a continuous-time Markov chain on the state space $\Omega_{L,\gamma}$, reversible with respect to the Gibbs measure \eqref{gibbs}. It is defined by assigning the rates at which the value of the spin $\sigma$ at site $i$ is flipped. The corresponding generator $\bb L_\gamma$ is the operator acting on functions on $\Omega_{L,\gamma}$ as \begin{equation} \label{glauber} \bb L_\g f(\s) = \sum_{\substack{i\in \bb T^d_{L,\gamma}}} c(i,M^\gamma(\sigma)) \mathrm{e}^{-\beta J*M^\gamma(\sigma) (i) \s(i)} [f(\s^i)-f(\s)]\;, \end{equation} where $c \colon \bb T^d_L\times \mc M(\bb T^d_L) \to (0,+\infty)$ is a continuous function satisfying $c(r, \nu) = c(r, \nu-\nu(\{r\})\delta_r)$, which implies the detailed balance condition, namely, that $\bb L_\gamma$ is self-adjoint with respect to the Gibbs measure \eqref{gibbs}.
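The detailed balance condition can be checked numerically on a small system. The following sketch is illustrative and not taken from the paper: the lattice size, the kernel and the constant choice $c\equiv 1$ (which trivially satisfies the above condition on $c$) are arbitrary. It verifies that the rates of \eqref{glauber} satisfy $c(i,M^\gamma(\s))\,\mathrm{e}^{-\beta J*M^\gamma(\s)(i)\s(i)}\,\mathrm{e}^{-\beta H_{L,\gamma}(\s)} = c(i,M^\gamma(\s^i))\,\mathrm{e}^{-\beta J*M^\gamma(\s^i)(i)\s^i(i)}\,\mathrm{e}^{-\beta H_{L,\gamma}(\s^i)}$ on random configurations.

```python
import math
import random

random.seed(0)
n, beta, g = 8, 1.3, 1.0 / 8  # illustrative: sites, inverse temperature, spacing gamma

def J(i, j):
    # symmetric, translation-invariant kernel on the discrete torus (illustrative)
    d = min(abs(i - j), n - abs(i - j)) * g
    return max(0.0, 0.5 - d)

def H(s):
    # microscopic energy H_{L,gamma}
    return -0.5 * sum(g * J(i, j) * s[i] * s[j] for i in range(n) for j in range(n))

def field(s, i):
    # J * M^gamma(sigma) evaluated at site i
    return sum(g * J(i, j) * s[j] for j in range(n))

def rate(s, i):
    # flip rate of the generator, with the constant choice c == 1
    return math.exp(-beta * field(s, i) * s[i])

# detailed balance: rate(s -> s^i) e^{-beta H(s)} == rate(s^i -> s) e^{-beta H(s^i)}
for _ in range(50):
    s = [random.choice([-1, 1]) for _ in range(n)]
    i = random.randrange(n)
    si = s[:]
    si[i] = -s[i]
    lhs = rate(s, i) * math.exp(-beta * H(s))
    rhs = rate(si, i) * math.exp(-beta * H(si))
    assert abs(lhs - rhs) <= 1e-9 * max(lhs, rhs)
```

Note that the identity holds exactly (up to rounding) because the factor $c$ does not depend on the spin at the flipped site, so it cancels in the ratio of the two rates.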
In order to perform the sharp interface limit, we restrict to a special class of rates. More precisely, we assume that \begin{equation} \label{ctype} c(r,\nu) \equiv c(\nu)(r) = a(K*\nu(r))\;, \quad r\in \bb T^d_L\;, \end{equation} where $a\colon \bb R \to (0,+\infty)$ is a Lipschitz function and $K$ is a smooth radial function on $\bb R^d$ with support in the ball of radius $\frac 12$ and satisfying $K(0)=0$; i.e., $K(r) = k(|r|)$ for a smooth non-negative function $k\colon \bb R_+ \to \bb R_+$ with support in $[0,\frac 12]$. A standard choice, see \cite{DOPT1}, is \begin{equation} \label{c} c(i,M^\gamma(\sigma)) = \frac{1}{2\cosh\big\{\b \sum_{j\ne i} \gamma^d J(i-j) \sigma(j)\big\}}\;, \end{equation} which, provided $J(0)=0$, corresponds to $c(r,\nu) = (2\cosh \b J* \nu(r))^{-1}$. \subsubsection*{Mean field evolution equation} As proven in \cite{DOPT1}, in the Lebowitz-Penrose limit (mesoscopic limit) the empirical magnetization under the Glauber dynamics becomes absolutely continuous and its density $m$ evolves according to the non-local equation, \begin{equation} \label{mfe1} \frac{\partial m}{\partial t} = - 2c(m) \sqrt{1-m^2} \sinh (\mathop{\rm arctanh}\nolimits m -\b J*m)\;. \end{equation} Expanding the $\sinh$, Eq.\eqref{mfe1} reads, \begin{equation} \label{mfe2} \frac{\partial m}{\partial t} = 2c(m) \cosh(\b J*m) (\tanh(\b J*m) -m)\;. \end{equation} In particular, with the choice \eqref{c}, the mean field evolution becomes, \begin{equation} \label{mfee} \frac{\partial m}{\partial t} = \tanh(\b J*m) -m\;. \end{equation} The stationary solutions to \eqref{mfe1} do not depend on the particular choice of the rates. In particular, since we are assuming $\b>1$, recalling \eqref{mbeta}, the spatially homogeneous stationary solutions are $m=\pm m_\b$, which are stable, and $m=0$, which is unstable.
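As a quick illustration of \eqref{mfee} and of the stability of the pure phases, the following sketch integrates the equation by an explicit Euler scheme on a one-dimensional periodic grid; the grid size, the discrete kernel standing in for $J$ and the time step are illustrative choices, not taken from the paper. Starting from a strictly positive datum, the profile relaxes to the homogeneous solution $m_\b$ of \eqref{mbeta}.

```python
import math

beta = 2.0
n, dt, w = 64, 0.1, 9  # illustrative: grid points, time step, odd kernel width
# strictly positive initial datum
m = [0.2 + 0.1 * math.sin(2 * math.pi * i / n) for i in range(n)]

def conv(m):
    # normalized box kernel standing in for J (weights sum to one)
    return [sum(m[(i + k) % n] for k in range(-(w // 2), w // 2 + 1)) / w
            for i in range(n)]

# explicit Euler scheme for dm/dt = tanh(beta J*m) - m
for _ in range(2000):
    Jm = conv(m)
    m = [mi + dt * (math.tanh(beta * Jmi) - mi) for mi, Jmi in zip(m, Jm)]

m_beta = m[0]  # the profile flattens onto the pure phase, m_beta = tanh(beta m_beta)
```

Since the Euler map is order preserving and the datum is sandwiched between two positive constants, the profile converges to the spatially homogeneous fixed point $m_\b$ (for $\beta=2$, $m_\b\approx 0.9575$), consistently with the stability statement above.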
\subsubsection*{Action functional} The large deviation asymptotics for the empirical magnetization under the Glauber dynamics for an Ising spin system with Kac potentials has been analyzed in \cite{C}. We next recall the associated rate function. Let $B_1(L)$ be the unit ball in $L^\infty(\bb T_L^d)$ equipped with the (metrizable) weak*-topology. For $T>0$ we then let $C([0,T];B_1(L))$ be the set of $B_1(L)$-valued continuous functions equipped with the induced uniform distance. Let finally $C_*([0,T];B_1(L))$ be the subset of functions $\varphi$ in $C([0,T];B_1(L))$ such that there exists $\psi \in L^1([0,T] \times \bb T_L^d)$ for which \[ \varphi(t,r)-\varphi(0,r)=\int_0^t \psi(s,r) \, \mathrm{d} s \quad \text{$r$ - a.e.} \quad \forall\, t \in [0,T]\;. \] Clearly $\psi$ is unique and will be denoted by $\dot{\varphi}$. We define $I_{T,L} \colon C([0,T];B_1(L)) \to [0, \infty]$ by \begin{equation} \label{I} I_{T,L}(\varphi)= \begin{cases} \displaystyle{\int_0^T\!\mathrm{d} t\int \!\mathrm{d} r\; \mc L(\ph(t, \cdot),\dot\ph(t,\cdot))} & \text{if } \varphi \in C_*([0,T];B_1(L)) \, , \\ \\ +\infty & \text{otherwise} \, , \end{cases} \end{equation} where, given measurable functions $u \colon \bb T_L^d \to [-1,1]$ and $v \colon \bb T_L^d \to \bb R$, \begin{equation} \label{mc L=} \begin{split} & \mc L(u,v) = \frac v{2\beta} \log\frac{\displaystyle \tfrac{v}{2c(u)} + \sqrt{1-u^2+\tfrac{v^2}{4c(u)^2}}}{1-u} - \frac v2 J*u \\ & \quad + \frac{c(u)}\beta \Big(\cosh(\b J*u) - u \sinh(\b J*u) - \sqrt{1-u^2+\tfrac{v^2}{4c(u)^2}}\,\Big) \;. 
\end{split} \end{equation} Under suitable assumptions on the initial conditions, in \cite{C} it is proven that the empirical magnetization sampled according to the Glauber dynamics, regarded as a random variable taking values in the Skorokhod space $D([0,T];\mc M(\bb T^d_L))$, satisfies a large deviation principle with speed $\beta^{-1} \gamma^d$ and rate function $\mc I_{T,L}$ given by $\mc I_{T,L}(\nu) = I_{T,L}(\ph)$ if $\nu_t= \varphi_t \, \mathrm{d} r$ for some $\varphi\in C_*([0,T];B_1(L))$ and $+\infty$ otherwise. For our purposes, noticing that, since $\imath'(m) = \mathop{\rm arctanh}\nolimits m$, the functional derivative ($L^2$-gradient) of $F_L$ is given by \begin{equation} \label{dF} \frac{\delta F_L}{\delta m} = \b^{-1} \mathop{\rm arctanh}\nolimits m -J*m\;, \end{equation} we rewrite the Lagrangian $\mc L$ in \eqref{mc L=} in the form, \[ \begin{split} \mc L(u,v) & = \frac v{2\beta} \left( \mathop{\rm arctanh}\nolimits u - \b J*u + \mathop{\rm arcsinh}\nolimits \frac{v}{2c(u)\sqrt{1-u^2}} \right) \\ & \quad + \frac{c(u)}\beta \sqrt{1-u^2} \bigg(\cosh\big(\b J*u -\mathop{\rm arctanh}\nolimits u\big) - \sqrt{1+\tfrac{v^2}{4c(u)^2(1-u^2)}}\bigg) \\ & = \frac v2 \left(\frac{\delta F_L}{\delta u} + \frac1\beta\mathop{\rm arcsinh}\nolimits \frac{v}{2c(u)\sqrt{1-u^2}} \right) \\ & \quad + \frac{c(u)}\beta\sqrt{1-u^2} \left(\cosh\left(\b\frac{\delta F_L}{\delta u} \right) - \sqrt{1+\tfrac{v^2}{4c(u)^2(1-u^2)}}\right) \;. \end{split} \] Accordingly, the action functional becomes, \begin{equation} \label{IS} \begin{split} & I_{T,L}(\varphi)= \frac 12 \big[ F_L(\ph(T)) - F_L(\ph(0))\big] \\ & \;+ \int_0^T\!\mathrm{d} t\int\!\mathrm{d} r\; \bigg[ \frac{\dot\ph}{2\beta} \mathop{\rm arcsinh}\nolimits W(\ph,\dot\varphi)- \frac{c(\ph)}\beta\sqrt{1-\ph^2} \big(\sqrt{1+W(\ph,\dot\ph)^2}-1\big) \bigg] \\ & \;+ \int_0^T\!\mathrm{d} t\int\!\mathrm{d} r\; \frac{c(\ph)}\beta\sqrt{1-\ph^2} \left(\cosh\left(\b\frac{\delta F_L}{\delta \varphi} \right) - 1\right)\;,
\end{split} \end{equation} where \begin{equation} \label{W} W(\ph,\dot\varphi)= \frac{\dot\ph}{2c(\ph)\sqrt{1-\ph^2}}\;. \end{equation} It is worthwhile to remark that the above representation of the action functional reflects a Legendre duality. More precisely, for $\alpha>0$ let $G(\cdot;\alpha)$ and $G^*(\cdot;\alpha)$ be the Legendre pair of convex even functions, \begin{equation} \label{gg*=} G(q;\alpha) := \alpha(\cosh q -1)\;, \quad G^*(p;\alpha) = p \mathop{\rm arcsinh}\nolimits(p/\alpha) - \sqrt{\alpha^2 +p^2} + \alpha\;, \end{equation} so that $qp + G(q;\alpha) + G^*(p;\alpha) \ge 0$ with equality if and only if $p=-\alpha\sinh q$. Then \eqref{IS} can be rewritten as \begin{equation} \label{gg*} \begin{split} I_{T,L}(\varphi)& = \frac 12 \big[ F_L(\ph(T)) - F_L(\ph(0))\big] \\ & \quad + \frac 12\int_0^T\!\mathrm{d} t\int\!\mathrm{d} r\; \Big[G\big(\b\tfrac{\delta F_L}{\delta \varphi};\alpha(\ph)\big) + G^*\big(\beta^{-1}\dot\ph;\alpha(\ph)\big)\Big]\;, \end{split} \end{equation} where $\alpha(\varphi)= 2 \beta^{-1} c(\varphi)\sqrt{1-\varphi^2}$. From this representation we easily conclude that the solution $m$ to the mean field equation \eqref{mfe1} is characterized by $I_{T,L}(m) = 0$, or equivalently $I_{T,L}(m) \le 0$. The last inequality provides the following gradient flow formulation: $m$ is a solution to \eqref{mfe1} if and only if, for any $t\in [0,T]$, \[ F_L(m(t)) + \int_0^t\!\mathrm{d} s\int\!\mathrm{d} r\; \Big[G\big(\b\tfrac{\delta F_L}{\delta m};\alpha(m)\big) + G^*\big(\beta^{-1}\dot m;\alpha(m)\big)\Big] \le F_L(m(0))\;. \] \subsection{Sharp interface limit} \label{sec:3} A natural and physically relevant question is to investigate the limiting behavior of the Ising-Kac model in the sharp interface limit, in which the interface between the two stable phases $\pm m_\beta$ is described by surfaces of codimension one. 
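Before carrying out this program, we note that the Fenchel-Young inequality for the pair \eqref{gg*=} above, together with its equality case $p=-\alpha\sinh q$, can be verified numerically; in the following sketch the value of $\alpha$ is an arbitrary illustrative choice.

```python
import math
import random

random.seed(1)
alpha = 0.7  # illustrative value of the parameter alpha > 0

def G(q, a):
    # G(q; alpha) = alpha (cosh q - 1)
    return a * (math.cosh(q) - 1.0)

def Gstar(p, a):
    # its Legendre transform G*(p; alpha)
    return p * math.asinh(p / a) - math.sqrt(a * a + p * p) + a

# Fenchel-Young inequality: q p + G(q) + G*(p) >= 0 ...
for _ in range(200):
    q, p = random.uniform(-3, 3), random.uniform(-3, 3)
    assert q * p + G(q, alpha) + Gstar(p, alpha) >= -1e-9

# ... with equality exactly on the curve p = -alpha sinh(q)
gap = max(abs(q * p + G(q, alpha) + Gstar(p, alpha))
          for q in [-2.0, -0.5, 0.0, 1.0, 2.5]
          for p in [-alpha * math.sinh(q)])
assert gap < 1e-9
```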
\subsubsection*{Excess free energy and surface tension} We set $\eps=L^{-1}$ and rescale the space variable $r \in \bb T^d_L$ by setting $r=\eps^{-1}x$ with $x\in \bb T^d$. We then introduce the rescaled excess free energy, normalized by the factor $L^{d-1}$; namely, we define $F^{\eps} \colon L^\infty(\bb T^d;[-1,1]) \to [0,\infty)$ by $F^\eps(m) := \eps^{d-1} F_{\eps^{-1}}(m(\eps\, \cdot))$, i.e., \begin{equation} \label{Fe} F^\eps(m) = \int \!\mathrm{d} x\; \frac{f_\beta(m)-f_\beta(m_\b)}{\eps} + \frac \eps 4 \int \!\mathrm{d} x\!\int \!\mathrm{d} y\; J_\eps(x-y) \left[ \frac{m(x)-m(y)}{\eps}\right]^2\;, \end{equation} where $J_\eps(z) := \eps^{-d}J(\eps^{-1}z)$. The asymptotics of the excess free energy functional \eqref{Fe} has been discussed in \cite{AB,ABCP}, where it is proven that the limiting functional is finite only if $m$ takes the values $\pm m_\beta$, and in this case its value is proportional to the perimeter of the jump set of $m$. The proportionality factor defines the \emph{surface tension} of the Ising-Kac model, which is denoted by $\tau$ and will be characterized below. This result has been extended to the anisotropic case, i.e., when $J$ is not radial; then, the surface tension $\tau$ is no longer constant but a convex function of the orientation \cite{AB,BBP}. The surface tension is the excess free energy cost per unit area of the transition between the two stable phases. The characterization of $\tau$ reduces to a one-dimensional computation in the direction normal to the interface.
We introduce the \emph{instanton} $\bar m(\xi)$, $\xi\in \bb R$, as the optimal magnetization profile of such a transition, that is, $\bar m$ is the solution to \begin{equation} \label{inst} \bar m(\xi) = \tanh \b \tilde J * \bar m (\xi)\;, \quad \bar m(0) = 0\;, \quad \lim_{\xi\to\pm\infty}\bar m(\xi) = \pm m_\b\;, \end{equation} where, recalling $J(r) = j(|r|)$, \begin{equation} \label{tildeJ} \tilde J(\xi) = \int_{\bb R^{d-1}}\!\mathrm{d} \eta\; j\big(\sqrt{\xi^2+|\eta|^2}\big) \;. \end{equation} Then $\tau = \mc F (\bar m)$, where $\mc F$ is the free energy functional on $\bb R$, \begin{equation} \label{mcF} \mc F(m) = \int\!\mathrm{d} \xi\; [f_\beta(m)-f_\beta(m_\b)] + \frac 14 \int\!\mathrm{d} \xi\!\int\!\mathrm{d} \xi'\; \tilde J(\xi-\xi') [m(\xi)-m(\xi')]^2\;. \end{equation} It can be shown \cite{B} that \begin{equation} \label{tau} \tau = \int\!\mathrm{d}\xi \; \bar m'(\xi) \int\!\mathrm{d}\xi' \; \int_{\bb R^{d-1}}\!\mathrm{d} \eta\; j\big(\sqrt{(\xi-\xi')^2+|\eta|^2}\big) \; \bar m'(\xi') \; \frac{\eta_1^2}2\;. \end{equation} For later use, we recall the main properties of the instanton, see \cite{DOPT2,DOPT3,DGP}. It is an odd and strictly increasing function which converges exponentially fast to its asymptotes. More precisely, $\bar m'(\xi)>0$ and there are $a,c,\delta>0$ such that, for any $\xi\ge 0$, \begin{equation} \label{mexp} \big| \bar m(\xi) - (m_\beta-a\mathrm{e}^{-\alpha\xi}) \big| + \big| \bar m'(\xi) - a\alpha \mathrm{e}^{-\alpha\xi} \big| + \big| \bar m''(\xi) + a\alpha^2 \mathrm{e}^{-\alpha\xi} \big| \le c \mathrm{e}^{-(\alpha+\delta)\xi}\;, \end{equation} where $\alpha$ is the unique positive solution to the equation \begin{equation} \label{alpha} \beta(1-m_\beta^2)\int\!\mathrm{d}\xi\; \tilde J(\xi) \mathrm{e}^{-\alpha\xi} = 1\;.
\end{equation} \subsubsection*{Motion by mean curvature} Concerning the dynamical behavior, the sharp interface limit of the nonlocal evolution equation has been analyzed in \cite{DOPT,DOPT1,KS}, with the special choice of $c$ as in \eqref{c}. To describe these results, let $m$ be the solution to \eqref{mfee} and define, according to a diffusive rescaling of space and time, $m^\eps \colon \bb R_+ \times \bb T^d \to [-1,1]$ by $m^\eps(t,x) = m(\eps^{-2}t,\eps^{-1}x)$, which solves \begin{equation} \label{mfeee} \frac{\partial m^\eps}{\partial t} = \eps^{-2}\big(\tanh(\b J_\eps*m^\eps) -m^\eps\big)\;. \end{equation} In order to describe the limiting behavior of $m^\eps$, we briefly recall the notion of classical mean curvature flow. Given a $C^1$-family of oriented smooth surfaces $\Gamma=\{ \Gamma(t)\}_{t\geq 0}$, with $\Gamma(t) = \partial\Omega(t)$ for some open $\Omega(t) \subset \bb T^d$, we denote by $n_t=n_{\Gamma(t)}$ the inward normal of $\Gamma(t)$ and by $v_t\colon \Gamma(t) \to \bb R$ the normal velocity of $\Gamma$ at time $t$. Finally, we set $\kappa_t=\kappa_{\Gamma(t)}$, where $\kappa_{\Gamma(t)} \colon \Gamma(t) \to \bb R$ is the mean curvature of $\Gamma(t)$. Then, given $\theta>0$, $\Gamma$ evolves according to the mean curvature flow with transport coefficient $\theta$ if \begin{equation} \label{mbc} v_t=\theta \kappa_t \, , \qquad t\geq 0 \, . \end{equation} Given a mean curvature flow as above, if the initial datum for \eqref{mfeee} satisfies $m^\eps(0,\cdot) \to m_\beta{1 \mskip -5mu {\rm I}}_{\Omega(0)}-m_\beta {1 \mskip -5mu {\rm I}}_{\Omega(0)^\complement}$, then $m^\eps(t,\cdot) \to m_\beta{1 \mskip -5mu {\rm I}}_{\Omega(t)}-m_\beta {1 \mskip -5mu {\rm I}}_{\Omega(t)^\complement}$ for any $t>0$. The actual value of $\theta$ obtained in \cite{DOPT,KS} will be discussed later. In \cite{DOPT1,KS} the convergence to the mean curvature flow is proven also starting directly from the microscopic Glauber dynamics.
More precisely, letting $M^{\gamma,\eps}$ be the diffusively rescaled empirical magnetization, it is shown that if $\eps = |\log\gamma|^{-1}$ then $M^{\gamma,\eps}$ satisfies a law of large numbers as $\gamma\to 0$, and the limiting evolution is given by the mean curvature flow. \subsubsection*{Transport coefficients and Einstein relation} The value of the transport coefficient $\theta$, for arbitrary $c(m)$ of the form \eqref{ctype}, can be inferred by using a linear response argument along the guidelines in \cite{S}. Consider the non-local mean field equation \eqref{mfe2} on $\bb R^d$ with external field $h$, that is, \[ \frac{\partial m}{\partial t} = 2c(m) \cosh(\b(J*m+h)) [\tanh(\b (J*m+h))-m]\;. \] In view of \eqref{ctype} and recalling that $J$ and $K$ are radial, solutions to the above equation with planar symmetry along a fixed direction $\hat n$ have the form $m(t,\eta) = \tilde m(\eta\cdot \hat n,t)$ with $\tilde m(\xi,t)$, $\xi\in\bb R$, solution to \begin{equation} \label{mfe1d0} \frac{\partial \tilde m}{\partial t} = 2 a(\tilde K *\tilde m) \cosh(\b(\tilde J *\tilde m+h)) [\tanh(\b (\tilde J *\tilde m+h))-\tilde m]\;, \end{equation} where $\tilde J$ is defined in \eqref{tildeJ} and, analogously, recalling $K(r)=k(|r|)$, \begin{equation} \label{tildeK} \tilde K(\xi) = \int_{\bb R^{d-1}}\!\mathrm{d} \eta\; k\big(\sqrt{\xi^2+|\eta|^2}\big) \;. \end{equation} In particular, if we look for a traveling wave solution along $\hat n$, i.e., a solution of the form $m(t,\eta) = q_h(\eta\cdot \hat n-v(h)t)$, we deduce that $q_h$ and the front velocity $v(h)$ do not depend on the direction $\hat n$ and solve (for \eqref{mfee} with $h$ small, their existence is proven in \cite{DGP}) \begin{equation} \label{mfe1d} -v(h) q_h' = 2 a(\tilde K *q_h) \cosh(\b(\tilde J *q_h+h)) [\tanh(\b (\tilde J *q_h+h))-q_h]\;.
\end{equation} In order to compute the linear response to the external field we expand, \[ v(h) = v_1h + O(h^{2})\;,\qquad q_h = \bar m + h \psi + O(h^{2}) \;, \] where $\bar m$ is the instanton which solves \eqref{mfe1d} with $h=0$ and $v(0)=0$, see \eqref{inst}. In the sequel we set \begin{equation} \label{cxi} \bar a(\xi) := a(\tilde K*\bar m(\xi))\;, \quad \xi\in \bb R\;. \end{equation} By \eqref{mfe1d}, at the first order in $h$, we obtain the following identity, \[ - v_{1} \bar m' = \frac{2\bar a}{\sqrt{1-\bar m^2}} \big[ - \psi + (1- \bar m^2) \b \tilde J * \psi + \b (1 - \bar m^2) \big]\;, \] where we used that $\cosh(\b\tilde J*\bar m) = 1/\sqrt{1-\tanh^2(\b\tilde J*\bar m)} = 1/\sqrt{1-\bar m^2}$. We multiply both sides of the above equation by $\bar m'/(2\bar a \sqrt{1-\bar m^2})$ and then integrate; using that $\bar m' = (1-\bar m^2)\b \tilde J *\bar m'$ we obtain, \[ v_{1} = - 2N \beta m_{\beta}\;, \] where \begin{equation} \label{N} N = \left[\int\!\mathrm{d}\xi\; \frac{(\bar m')^2}{2\bar a\sqrt{1-\bar m^2}}\right]^{-1}. \end{equation} But, by the definition of the (macroscopic) mobility $\mu$, see \cite{S}, we must have $v(h) = - 2m_{\beta} \mu h + O(h^{2}) $. We conclude that \begin{equation} \label{mu} \mu = N \beta\;. \end{equation} We finally remark that in the case \eqref{c} we have $2\bar a = \sqrt{1-\bar m^2}$, so that $N = \left[\int\!\mathrm{d}\xi\; \frac{(\bar m')^2}{1-\bar m^2}\right]^{-1}$. \subsubsection*{Sharp interface limit of the action functional} The main purpose of this section is to discuss the sharp interface limit of the action functional. To this end, we perform a diffusive rescaling of space and time of parameter $\eps=L^{-1}$ and normalize the resulting action with a factor $L^{d-1}$.
Namely, given $T>0$, we define $S_\eps \colon C([0,T];B_1) \to [0,\infty]$ (here $B_1$ is a short notation for the unit ball $B_1(1)$ in $L^\infty(\bb T^d)$) by \begin{equation} \label{S} S_\eps(\varphi)= \eps^{d-1} I_{\eps^{-2}T,\eps^{-1}}(\ph(\eps^2 \cdot, \eps \cdot ))= \eps^{-1} \int_0^T \mathrm{d} t \int \mathrm{d} x \, \mc L_\eps (\varphi(t,\cdot), \dot{\varphi}(t,\cdot))\;, \end{equation} where, given measurable functions $u \colon \bb T^d \to [-1,1]$, $v \colon \bb T^d \to \bb R$ and recalling $J_\eps(\cdot):= \eps^{-d} J(\cdot /\eps)$, \begin{equation} \label{mc Leps=} \begin{split} & \mc L_\eps(u,v) = \frac {v}{2\beta} \log\frac{\displaystyle \frac{\eps^{2} v}{2c_\eps(u)} + \sqrt{1-u^2+\left(\frac{\eps^{2} v}{2c_\eps(u)}\right)^2}}{1-u} - \frac { v}2 J_\eps*u \\ & \quad + \frac{c_\eps(u)}{\beta\eps^2} \left(\cosh(\b J_\eps*u) - u \sinh(\b J_\eps*u) - \sqrt{1-u^2+\left(\frac{\eps^{2} v}{2c_\eps(u)}\right)^2}\right)\;, \end{split} \end{equation} with, recalling \eqref{ctype} and letting $K_\eps(\cdot):= \eps^{-d} K(\cdot /\eps)$, \begin{equation} \label{ceps} c_\eps(u) := a(K_\eps*u)\;. \end{equation} Given a $C^1$-family of oriented smooth surfaces $\Gamma=\{ \Gamma(t)\}_{t\in [0,T]}$, with $\Gamma(t) = \partial\Omega(t)$ for some open $\Omega(t) \subset \bb T^d$, as before we denote by $n_t=n_{\Gamma(t)}$ the inward normal of $\Gamma(t)$, by $v_t\colon \Gamma(t) \to \bb R$ the normal velocity of $\Gamma$ at time $t$, and by $\kappa_t$ the mean curvature of $\Gamma(t)$. Letting $\tilde d(\cdot,\Gamma(t))$ be the signed distance from $\Gamma(t)$, i.e., $\tilde d(\cdot,\Gamma(t)) := \mathrm{dist}(\cdot,\Omega(t)^\complement) - \mathrm{dist}(\cdot, \Omega(t))$, we denote by $d(\cdot,\Gamma(t))$ a regularized version of $\tilde d(\cdot,\Gamma(t))$ such that they coincide on a neighborhood of $\Gamma(t)$. 
For such families of surfaces we consider the action functional \eqref{sac}, i.e., \begin{equation} \label{Rpn} S_\mathrm{ac}(\Gamma) = \frac 1{4\mu}\int_0^T\!\mathrm{d} t\int_{\Gamma(t)}\!\mathrm{d}\s\; (v_t-\theta \kappa_t)^2\;, \end{equation} with $\mu$ as given in \eqref{mu} and $\theta=\mu\tau$ with $\tau$ as defined in \eqref{tau}. As stated next, it describes the sharp interface limit of the rescaled action functional associated to the Glauber dynamics for an Ising system with Kac potentials. \begin{theorem} \label{thm:3.1} Given a $C^1$-family of oriented smooth surfaces $\Gamma=\{ \Gamma(t)\}_{t\in [0,T]}$, with $\Gamma(t) = \partial\Omega(t)$, consider sequences $\{ \varphi_\eps \} \subset C([0,T];B_1)$ converging to $m_\beta{1 \mskip -5mu {\rm I}}_{\Omega(\cdot)}-m_\beta {1 \mskip -5mu {\rm I}}_{\Omega(\cdot)^\complement}$ of the form \begin{equation} \label{recovery} \varphi_\eps(t,x) = \bar m\left(\frac{d(x,\Gamma(t))}\eps + \eps Q\left(t,x,\frac{d(x,\Gamma(t))}\eps\right)\right)+\eps R_\eps(t,x)\;, \end{equation} where $\bar{m}$ is the instanton, $Q \colon [0,T] \times \bb T^d \times \bb R \to \bb R$ is a smooth function such that \begin{equation} \label{Q} \sup_{(t,x,\xi) \in [0,T] \times \bb T^d \times \bb R} \frac{\big| Q(t,x,\xi) \big| + \big|\partial_t Q(t,x,\xi) \big| +\big| \partial_\xi Q(t,x,\xi)\big|}{1+|\xi|} <+\infty\;, \end{equation} and $R_\eps \colon [0,T] \times \bb T^d \to \bb R$ is a smooth function. \begin{itemize} \item[(a)] If $\| R_\eps\|_\infty +\| \partial_t R_\eps \|_\infty \to 0$ as $\eps \to 0$ then, for any $Q$, \[ \liminf_{\eps\to 0} S_\eps(\varphi_\eps) \ge S_\mathrm{ac}(\Gamma)\;. \] \item[(b)] There exists $Q^*$ such that, choosing $Q=Q^*$ and $R_\eps=0$, we have \[ \lim_{\eps\to 0} S_\eps(\varphi_\eps) = S_\mathrm{ac}(\Gamma)\;. \] \end{itemize} \end{theorem} From a physical viewpoint, the main content of the result is the identification of the transport coefficients in the limiting rate function $S$.
As expected, the mobility $\mu$, which is initially introduced via a linear response argument, coincides with the variance of the fluctuations around the motion by mean curvature. The mechanism behind this identification is an averaging property, common to homogenization problems. At the mathematical level, this is achieved by the introduction (in the same spirit as \cite{KS}) of the corrector $Q$ in the ansatz \eqref{recovery}: the transport coefficients are then identified by solving an optimization problem on $Q$. As mentioned in the Introduction, this issue does not appear in the Allen-Cahn case, in which the introduction of correctors is not needed. The above statement is the analogue for the Ising-Kac model of the one in \cite[Prop.\ 2.2]{KORV} for the Allen-Cahn action functional. While these results hint at the variational convergence (more precisely, $\Gamma$-convergence) of the sequence of functionals $S_\eps$ to $S_\mathrm{ac}$, from a technical viewpoint there are several missing steps. Concerning the lower bound, i.e., statement (a) in the theorem, the main difficulty consists in showing that the sequences $\varphi_\eps$ satisfying $S_\eps(\varphi_\eps)\le C$ are of the one-dimensional form given by \eqref{recovery} for a suitable (not necessarily smooth) path $\Gamma$ and some $Q$ and $R_\eps$. In the Allen-Cahn case this is proven in \cite{MR}, where this structure of the sequence $\varphi_\eps$ is deduced as a consequence of the vanishing property of the discrepancy measures. As we have no analogue of the discrepancy measures for the Ising-Kac model, it is not clear how to handle this issue in the present case. Concerning the upper bound, statement (b) provides the construction of the recovery sequence when the limiting path $\Gamma$ is smooth and without nucleations. Combining, via a diagonal argument, this statement with the argument presented in Section \ref{sec:5}, it is also possible to construct a recovery sequence for piecewise $C^1$-paths.
The missing step, which is common to the Allen-Cahn case, is the proof that general paths of finite action can be approximated by piecewise $C^1$-paths. A natural further step is the analysis of the large deviation properties of the empirical magnetization for the underlying microscopic dynamics in the joint limit $\gamma\to 0$ and $\eps\to 0$, for instance when $\eps = |\log\gamma|^{-1}$. For the stochastic Allen-Cahn equation, the large deviations upper bound, with rate function $S$, is proven in \cite{BBP1} by constructing suitable exponential martingales. This strategy seems applicable also to the Ising-Kac model, but requires, as a crucial step, the $\Gamma$-convergence lower bound discussed above. \subsection{Proof of Theorem \ref{thm:3.1}} \label{sec:3b} To carry out the proof, we shall need the following results on the linearization of the nonlocal evolution. Consider Eq.\eqref{mfe1d0} for $h=0$; by \eqref{cxi} and using again the identity $\cosh(\b\tilde J*\bar m) = 1/\sqrt{1-\bar m^2}$, the linearization around the instanton gives rise to the linear operator, \begin{equation} \label{L} L\psi = \frac{2\bar a}{\sqrt{1-\bar m^2}} (-\psi + (1-\bar m^2)\b \tilde J*\psi)\;. \end{equation} We regard it as an operator on $L^2(\bb R,\nu(\mathrm{d}\xi))$, where \begin{equation} \label{nu} \nu(\mathrm{d}\xi) = \frac{\mathrm{d}\xi}{2 \bar a(\xi) \sqrt{1-\bar m^2(\xi)}}\;. \end{equation} We observe that $L$ is bounded, symmetric, and negative semidefinite, with $0$ a simple eigenvalue and $\bar m'$ the corresponding eigenvector. In fact, using again that $\bar m' = (1-\bar m^2)\b \tilde J *\bar m'$, it is easy to check that $L\bar m' = 0$ and that \[ \int\!\nu(\mathrm{d}\xi)\; \psi(\xi) L\psi(\xi) = -\frac 12 \int\!\mathrm{d} \xi\int\!\mathrm{d} \xi'\; \b\tilde J(\xi-\xi') \bar m'(\xi)\bar m'(\xi') \left[\frac{\psi}{\bar m'}(\xi) - \frac{\psi}{\bar m'}(\xi')\right]^2 \;.
\] As $\tilde J(0)>0$ and $\tilde J$ is continuous, we infer that the integral on the right-hand side is zero if and only if $\psi/\bar m'$ is constant. An application of Weyl's theorem shows that $L$ has the gap property, i.e., that $0$ is an isolated eigenvalue. The above arguments can be found in \cite{DOPT2} for the case \eqref{c}. A similar result holds also in $L^\infty$. This is done in \cite{DGP} for the case \eqref{c}; the extension to the general case is straightforward. For expository reasons, we prove the statements in reverse order. \noindent {\it Proof of \rm (b).} Recalling \eqref{IS}, the decomposition \eqref{gg*}, and \eqref{gg*=}, we rewrite the rescaled action functional \eqref{S} as \begin{equation} \label{S1} S_\eps(\varphi) = S_\eps^{(1)}(\varphi)+ S_\eps^{(2)}(\varphi) + S_\eps^{(3)}(\ph)\;, \end{equation} where \[ \begin{split} S_\eps^{(1)}(\varphi)& = \frac 12 \big[ F^\epsilon(\ph(T)) - F^\epsilon(\ph(0))\big]\;, \\ S_\eps^{(2)}(\varphi) & = \frac 12 \int_0^T\!\mathrm{d} t\int \!\mathrm{d} x\; G^*((\beta\eps)^{-1}\dot \ph;\alpha_\eps(\ph)) \;, \\ S_\eps^{(3)}(\varphi) & = \frac 12 \int_0^T\!\mathrm{d} t\int \!\mathrm{d} x\; G\big(\eps\b\tfrac{\delta F^\epsilon}{\delta m} (\varphi);\alpha_\eps(\ph)\big)\;, \end{split} \] with $F^\epsilon$ as in \eqref{Fe} and \begin{equation} \label{Weps} \alpha_\eps(\varphi)= 2 \frac{c_\eps(\ph)}{\beta\eps^3} \sqrt{1-\ph^2} \;, \quad \eps\b\frac{\delta F^\epsilon}{\delta m}(\varphi) = \mathop{\rm arctanh}\nolimits \varphi -\b J_\eps*\varphi\;. \end{equation} In the sequel we choose $\ph=\varphi_\eps$ as in \eqref{recovery}, with $R_\eps =0$ and $Q$ to be determined later, and analyze separately the contribution of the three terms in \eqref{S1}. \noindent 1) As proven in \cite{P}, the free energy $F^\epsilon$ $\Gamma$-converges to $\tau \mathrm{Per}(\cdot)$, where $\mathrm{Per}(\cdot)$ is the perimeter functional.
Moreover, for any choice of the corrector $Q$ and $t\in [0,T]$, the function $\varphi_\eps(t,\cdot)$ is a recovery sequence. Hence, \[ \lim_{\eps\to 0} S_\eps^{(1)}(\varphi_\eps) = \frac{\tau}2 \left[\mathrm{Per}(\Omega(T)) - \mathrm{Per}(\Omega(0))\right] = -\frac{\tau}2 \int_0^T\!\mathrm{d} t \int_{\Gamma(t)} \!\mathrm{d}\s\; \kappa_t v_t\;, \] where in the last equality we used that $-\int_{\Gamma(t)} \!\mathrm{d}\s\, \kappa_t v_t$ is the time derivative of $\mathrm{Per}(\Omega(t)) $. By \eqref{theta} we thus have, \begin{equation} \label{s1} \lim_{\eps\to 0} S_\eps^{(1)}(\varphi_\eps) = -\frac1\mu \int_0^T\!\mathrm{d} t \int_{\Gamma(t)} \!\mathrm{d}\s\; \frac{\theta\kappa_t v_t}2\;. \end{equation} \noindent 2) We first notice that, by Taylor expansion, $G^*(p,\alpha) = \alpha \big[\tfrac 12 \big(\tfrac p\alpha\big)^2 + O\big(\big(\tfrac p\alpha\big)^4\big)\big]$. By \eqref{Q}, \begin{equation} \label{phid} \dot\varphi_\eps(t,x) = -\tfrac {\partial_td(x,\Gamma(t))}\eps \; \bar m'\left(\tfrac{d(x,\Gamma(t))}\eps(1+O(\eps)) \right) \left(1+\left(1+\tfrac{|d(x,\Gamma(t))|}\eps\right) O(\eps)\right)\;. \end{equation} As $\bar m'(\xi)$ converges exponentially fast to zero as $|\xi|\to\infty$, see \eqref{mexp}, and in view of \eqref{Weps}, the integrand appearing in $S_\eps^{(2)}$ is smaller than any power of $\eps$ if $|d(x,\Gamma(t))|>C\eps(\log\eps)^2$. Therefore, we can restrict the domain of integration to a small neighborhood of $\Gamma(t)$.
In view of the expansion of $G^*$, using the co-area formula, we then get, \[ \lim_{\eps \to 0} S_\eps^{(2)}(\varphi_\eps) = \lim_{\eps \to 0} \int_0^T\!\mathrm{d} t\int\limits_{|s|\le C\eps(\log\eps)^2}\!\mathrm{d} s\int_{d=s}\!\mathrm{d}\s\; \eps^{-1}\;\frac{\bar m'(s/\eps)^2}{2c_\eps(\varphi_\eps)\sqrt{1-\bar m(s/\eps)^2}}\; \frac {(\partial_t d)^2}{4\beta}\;, \] where $\mathrm{d}\sigma$ is the surface measure on the level set of the distance function $d$, and, by \eqref{ceps}, $c_\eps(\varphi_\eps) = a(K_\eps*\varphi_\eps)$. To compute the asymptotic behavior of $c_\eps(\varphi_\eps)$, we choose an orthonormal frame with origin in the orthogonal projection $x_{\Gamma(t)}$ of $x$ on $\Gamma(t)$ and the first direction $\mathrm{e_0}$ along the normal to $\Gamma(t)$ at $x_{\Gamma(t)}$. If $d(x,\Gamma(t)) = s$ then $x = s\; \mathrm{e_0}$ and therefore, using \eqref{tildeK}, \[ \begin{split} K_\eps*\varphi_\eps(x) & = \int\!\mathrm{d} y\; \eps^{-d} k\left(\eps^{-1}\sqrt{(s-y\cdot \mathrm{e_0})^2+|y-(y\cdot \mathrm{e_0})\mathrm{e_0}|^2}\right) \bar m \left(\frac{y\cdot \mathrm{e_0}}\eps\right) + O(\eps) \\ & = \int\!\mathrm{d}\xi'\; \tilde K\left(\frac{s}\eps - \xi'\right) \bar m(\xi') + O (\eps) = \tilde K*\bar m \left(\frac{s}\eps \right) + O(\eps)\;. \end{split} \] We conclude that, recalling the definition of $\bar a$ in \eqref{cxi}, \begin{equation} \label{s2} \lim_{\eps \to 0} S_\eps^{(2)}(\varphi_\eps) = \int_0^T\!\mathrm{d} t\int_{\Gamma(t)} \!\mathrm{d}\s\; \frac{v_t^2}{4\beta}\int\!\mathrm{d} \xi\; \frac{\bar m'(\xi)^2}{2\bar a(\xi) \sqrt{1-\bar m(\xi)^2}} = \frac 1\mu \int_0^T\!\mathrm{d} t\int_{\Gamma(t)} \!\mathrm{d}\s\; \frac{v_t^2}4\;, \end{equation} where we used that $-\partial_t d(\cdot,\Gamma(t)) = v_t$ on $\Gamma(t)$, and \eqref{N} and \eqref{mu} in the last identity. \noindent 3) We are left with the limit of $S_\eps^{(3)}(\varphi_\eps)$. This is the point where the corrector $Q$ plays a role and has to be chosen appropriately. 
As $\mathop{\rm arctanh}\nolimits \bar m = \b \tilde J *\bar m$, \[ \begin{split} \eps\b\frac{\delta F^\epsilon}{\delta m}(\varphi_\eps)(t,x)& = \b \int\!\mathrm{d}\xi'\; \tilde J\left(\frac{d(x,\Gamma(t))}\eps + \eps Q\left(t,x,\frac{d(x,\Gamma(t))}\eps \right) - \xi'\right) \bar m(\xi') \\ & \qquad - \b \int\!\mathrm{d} y\; J_\eps(x-y) \; \bar m \left(\frac{d(y,\Gamma(t))}\eps + \eps Q\left(t,y,\frac{d(y,\Gamma(t))}\eps \right)\right)\;. \end{split} \] Since $\bar m(\xi)$ converges exponentially fast to $\pm m_\b$ as $\xi\to\pm\infty$, see \eqref{mexp}, the above expression is smaller than any power of $\eps$ if $|d(x,\Gamma(t))|>C\eps(\log\eps)^2$. Therefore, as $G(q,\alpha) = \alpha \big[\tfrac 12 q^2 + O(q^4)\big]$, restricting the domain of integration, and using the previous computation for the limit of $c_\eps(\varphi_\eps)$, we obtain, \[ \begin{split} & \lim_{\eps \to 0}S_\eps^{(3)}(\varphi_\eps) \\ & = \lim_{\eps \to 0} \frac1{2\beta\eps^3} \int_0^T\!\mathrm{d} t\int\limits_{|s|\le C\eps(\log\eps)^2}\!\mathrm{d} s \int_{d=s}\!\mathrm{d}\s\; \bar a(s/\eps)\sqrt{1-\bar m(s/\eps)^2} \; \left(\eps\b\frac{\delta F^\epsilon}{\delta m}(\varphi_\eps)\right)^2. \end{split} \] To compute $\eps\b\frac{\delta F^\epsilon}{\delta m}(\varphi_\eps)$ we choose an orthonormal frame with origin in the orthogonal projection $x_{\Gamma(t)}$ of $x$ on $\Gamma(t)$, the first direction $\mathrm{e_0}$ along the normal to $\Gamma(t)$, and the remaining directions $\{\mathrm{e}_1,\ldots,\mathrm{e}_{d-1}\}$ along the principal curvature directions of $\Gamma(t)$. In this way, if $d(x,\Gamma(t)) = s$ with $|s|\le C\eps(\log\eps)^2$ and $|x-y|\le \eps$, we have, \[ x = s\; \mathrm{e_0}\;, \qquad d(y,\Gamma(t)) =y\cdot \mathrm{e_0} - \sum_{i=1}^{d-1} \kappa^{(i)}_t \frac{(y\cdot \mathrm{e}_i)^2}2+o(\eps^2)\;, \] where $\kappa^{(i)}_t$ are the principal curvatures of $\Gamma(t)$ at $x_{\Gamma(t)}$; in particular, the mean curvature reads $\kappa_t = \sum_{i=1}^{d-1} \kappa^{(i)}_t$.
Therefore, if $d(x,\Gamma(t)) = s$, \[ \begin{split} \b \int\!\mathrm{d}\xi'\; & \tilde J\left(\frac{d(x,\Gamma(t))}\eps + \eps Q\left(t,x,\frac{d(x,\Gamma(t))}\eps \right) - \xi'\right) \bar m(\xi') \\ & = \b \int\!\mathrm{d}\xi'\; \tilde J\left(\frac{s}\eps - \xi'\right) \left[\bar m(\xi') + \eps Q\left(t,x,\frac{s}\eps \right) \bar m'(\xi')\right] + o(\eps) \end{split} \] and \[ \begin{split} & \b \int\!\mathrm{d} y\; J_\eps(x,y) \; \bar m \left(\frac{d(y,\Gamma(t))}\eps + \eps Q\left(t,y,\frac{d(y,\Gamma(t))}\eps \right)\right) \\ &\;\; = \b \int\!\mathrm{d} y\; \eps^{-d} j\left(\eps^{-1}\sqrt{(s-y\cdot \mathrm{e_0})^2+|y-(y\cdot \mathrm{e_0})\mathrm{e_0}|^2}\right) \bigg[\bar m \left(\frac{y\cdot \mathrm{e_0}}\eps\right) \\ & \qquad +\eps Q\left(t,x,\frac{y\cdot \mathrm{e_0}} \eps \right)\bar m'\left(\frac{y\cdot \mathrm{e_0}}\eps\right) - \sum_{i=1}^{d-1} \kappa^{(i)}_t \frac{(y\cdot \mathrm{e}_i)^2}2\;\bar m'\left(\frac{y\cdot \mathrm{e_0}}\eps\right)\bigg] + o (\eps) \\ & \;\; = \b \int\!\mathrm{d}\xi'\; \tilde J\left(\frac{s}\eps - \xi'\right) \big[\bar m(\xi') + \eps Q(t,x,\xi')\bar m'(\xi')\big] \\ &\qquad - \eps \b \kappa_t \int\!\mathrm{d}\xi'\int_{\bb R^{d-1}}\!\mathrm{d}\eta\; j\left(\sqrt{\left(\frac s\eps -\xi'\right)^2+|\eta|^2}\right) \bar m'(\xi')\; \frac{\eta_1^2}2 + o (\eps)\;. \end{split} \] We now choose $Q(t,x,\xi) = Q^*(t,x,\xi) := \mc K(t,x) \bar{Q}(\xi)$, where $\mc K \colon [0,T] \times \bb T^d \to \bb R$ is any smooth function satisfying $\mc K(t, x)= \kappa_t(x)$ for all $x \in \Gamma(t)$, while $\bar{Q} \colon \bb R \to \bb R$ is a suitable smooth function, to be fixed later and satisfying \begin{equation} \label{Q1} \sup_{\xi\in\bb R} \frac{\big| \bar Q(\xi) \big| + \big|\bar Q'(\xi)\big|}{1+|\xi|}<+\infty\;.
\end{equation} Therefore, under this assumption, \begin{equation} \label{edf} \begin{split} \eps\b\frac{\delta F^\epsilon}{\delta m}(\varphi_\eps) & = \eps\b \mc K(t,x)\int\!\mathrm{d}\xi'\; \tilde J\left(\frac{s}\eps - \xi'\right) \bar m'(\xi') \left[ \bar{Q}\left(\frac{s}\eps \right) -\bar{Q}(\xi')\right] \\ & +\eps \b \kappa_t \int\!\mathrm{d}\xi'\int_{\bb R^{d-1}}\!\mathrm{d}\eta\; j\left(\sqrt{\left(\frac s\eps -\xi'\right)^2+|\eta|^2}\right) \bar m'(\xi')\; \frac{\eta_1^2}2 + o (\eps)\;. \end{split} \end{equation} Inserting this expansion into the approximate expression for $S_\eps^{(3)}(\varphi_\eps)$ we obtain, \[ \lim_{\eps\to 0} S_\eps^{(3)}(\varphi_\eps) = \int_0^T\!\mathrm{d} t\int_{\Gamma(t)} \!\mathrm{d}\s\; A_{\bar Q}(\kappa_t)\;, \] where \[ A_{\bar Q}(\kappa_t) := \frac {\kappa_t^2}{2\beta} \int\!\mathrm{d}\xi\; \bar a \sqrt{1-\bar m^2} \; \left[\b\tilde J*(\bar m' \bar{Q})- \b(\tilde J*\bar m')\bar{Q} - \b f \right]^2\;, \] with \begin{equation} \label{f} f(\xi) =\int\!\mathrm{d}\xi'\int_{\bb R^{d-1}}\!\mathrm{d}\eta\; j\left(\sqrt{\left(\xi -\xi'\right)^2+|\eta|^2}\right) \bar m'(\xi')\; \frac{\eta_1^2}2\;. \end{equation} Recalling the definitions \eqref{L}, \eqref{nu}, and using $\bar m' = (1-\bar m^2)\b \tilde J *\bar m'$, we get, \[ A_{\bar Q}(\kappa_t) = \frac {\kappa_t^2}{4\beta} \int\!\nu(\mathrm{d}\xi)\; \left[L(\bar m' \bar{Q}) - H \right]^2\;, \] where \begin{equation} \label{H} H := \b\, 2\bar a \sqrt{1-\bar m^2}\, f \;. \end{equation} By \eqref{N} and \eqref{nu}, \[ \frac{\int\!\nu(\mathrm{d}\xi)\; \bar m'(\xi) H(\xi)}{\int\!\nu(\mathrm{d}\xi)\;(\bar m')^2} = N\b \int \!\mathrm{d}\xi\;\bar m'(\xi)f(\xi) = N\b\tau = \theta\;, \] where in the last equalities we used that, by \eqref{tau}, $\tau = \int \!\mathrm{d}\xi\;\bar m'(\xi)f(\xi)$, and the relations \eqref{mu} and \eqref{theta}.
It follows that the component of $H$ orthogonal to $\bar m'$ in $L^2(\bb R,\nu(\mathrm{d}\xi))$ is \begin{equation} \label{cH} \hat H = H - \theta \bar m'\;. \end{equation} Therefore, by the symmetry of $L$ and $L\bar m'=0$, \[ A_{\bar Q}(\kappa_t) = \frac{(\theta\kappa_t)^2}{4 \mu}+ \frac 1{4\beta} \int\!\nu(\mathrm{d}\xi)\; \left[L(\bar m' \bar Q) - \hat H \right]^2\;. \] The corrector $\bar Q$ is now determined by minimizing the above expression. More precisely, $\bar Q$ is the solution to the equation $L(\bar m' \bar Q) =\hat H$ which satisfies \eqref{Q1} and $\bar{Q}(0)=0$, whose existence and uniqueness is the content of Lemma \ref{lem:Q} in Appendix \ref{app:a}. Moreover, with this choice, \begin{equation} \label{s4} \lim_{\eps\to 0} S_\eps^{(3)}(\varphi_\eps) = \frac{1}{\mu} \int_0^T\!\mathrm{d} t\int_{\Gamma(t)} \!\mathrm{d}\s\; \frac{(\theta\kappa_t)^2}4\;. \end{equation} By \eqref{s1}, \eqref{s2}, and \eqref{s4} the statement (b) of the theorem follows. \medskip \noindent {\it Proof of \rm (a).} By Legendre duality, $\mc L_\eps(u,v)= \sup_p \; \{pv-\mc H_\eps(u,p)\}$, where, given measurable functions $u\colon \bb T^d \to [-1, 1]$ and $\eta \colon \bb T^d \to \bb R$, \begin{equation} \label{Heps=} \begin{split} \mc H_\eps(u,\eta) & = \eps^{-2}\frac{c_\eps(u)}{\beta} \Big[\cosh(\beta J_\eps*u+2\beta \eta) -\cosh(\beta J_\eps*u) \\ & \qquad\qquad - u\sinh(\beta J_\eps*u+2\beta \eta) + u\sinh(\beta J_\eps*u)\Big]\;. \end{split} \end{equation} Whence, letting $\varphi_\eps$ be as in \eqref{recovery}, for each $g=g(t,x)$, \[ \begin{split} S_\eps(\varphi_\eps) & \ge \eps^{-1}\int_0^T\!\mathrm{d} t\int \!\mathrm{d} x\; \bigg\{ \dot\varphi_\eps g - \eps^{-2} \frac{c_\eps(\varphi_\eps)}{\beta} \Big[\cosh(\beta J_\eps*\varphi_\eps+2\beta g) \\ & \qquad -\cosh(\beta J_\eps*\varphi_\eps)- \varphi_\eps\sinh(\beta J_\eps*\varphi_\eps+2\beta g) + \varphi_\eps\sinh(\beta J_\eps*\varphi_\eps)\Big]\bigg\} \\ & =: \Lambda_\eps(\varphi_\eps,g) \;. 
\end{split} \] Given a fixed smooth function $p=p(t,x)$, we choose \[ g(t,x) = g_\eps(t,x) = \eps N p(t,x) \left[\frac{\bar m'(s/\eps)}{2\bar a(s/\eps)\sqrt{1-\bar m(s/\eps)^2}} \right]_{s=d(x,\Gamma(t))} \] and compute the limit of $\Lambda_\eps(\varphi_\eps,g_\eps)$ as $\eps\to 0$. By a second order Taylor expansion of $\mc H_\eps(u,\cdot)$, observing that the remainders are equibounded and converge to zero pointwise as $\eps\to 0$, we have \[ \Lambda_\eps(\varphi_\eps,g_\eps) = \Lambda_\eps^{(1)}(\varphi_\eps,g_\eps) + \Lambda_\eps^{(2)}(\varphi_\eps,g_\eps) + \Lambda_\eps^{(3)}(\varphi_\eps,g_\eps) + o(1) \;, \] where \[ \Lambda_\eps^{(1)}(\varphi_\eps,g_\eps) = \int_0^T\!\mathrm{d} t\int \!\mathrm{d} x\; \eps^{-1} \dot\varphi_\eps g_\eps\;, \] \[ \begin{split} \Lambda_\eps^{(2)}(\varphi_\eps,g_\eps) & = \int_0^T\!\mathrm{d} t\int \!\mathrm{d} x\; \eps^{-3} c_\eps(\varphi_\eps)\cosh(\beta J_\eps*\varphi_\eps) \big[ \varphi_\eps - \tanh(\beta J_\eps*\varphi_\eps) \big]2g_\eps \\ & = \int_0^T\!\mathrm{d} t\int \!\mathrm{d} x\; \eps^{-3} c_\eps(\varphi_\eps) \sqrt{1-\varphi_\eps^2} \sinh\bigg(\eps\b\frac{\delta F^\epsilon}{\delta m}(\varphi_\eps)\bigg) 2g_\eps\;, \end{split} \] \[ \Lambda_\eps^{(3)}(\varphi_\eps,g_\eps) = - \int_0^T\!\mathrm{d} t\int \!\mathrm{d} x\; \eps^{-3} c_\eps(\varphi_\eps) \cosh(\beta J_\eps*\varphi_\eps) \big[ 1-\varphi_\eps \tanh(\beta J_\eps*\varphi_\eps) \big] 2\beta g_\eps^2\;. \] By \eqref{Q} and the assumptions on $R_\eps$, the expansion \eqref{phid} holds with an extra additive $o(\eps)$ due to the presence of $R_\eps$.
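For the reader's convenience, we spell out the quadratic Taylor expansion behind the above decomposition: writing $a=\b J_\eps*\varphi_\eps$, \[ \cosh(a+2\b g) -\cosh a - \varphi_\eps\sinh(a+2\b g) + \varphi_\eps\sinh a = -2\b\cosh a\, \big[\varphi_\eps-\tanh a\big]\, g + 2\b^2\cosh a\, \big[1-\varphi_\eps\tanh a\big]\, g^2 + O(g^3)\;, \] which, once multiplied by $-\eps^{-3} c_\eps(\varphi_\eps)/\b$ and integrated, yields, up to the $o(1)$ remainder, the sum $\Lambda_\eps^{(2)}(\varphi_\eps,g_\eps)+\Lambda_\eps^{(3)}(\varphi_\eps,g_\eps)$.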
Therefore, as $g_\eps$ is equibounded and recalling \eqref{N}, the same reasoning leading to \eqref{s2} gives, \[ \begin{split} \lim_{\eps\to 0}\Lambda_\eps^{(1)}(\varphi_\eps,g_\eps) & = \lim_{\eps\to 0}\int_0^T\!\mathrm{d} t\int\limits_{|s|\le C\eps(\log\eps)^2}\!\mathrm{d} s\int_{d=s}\!\mathrm{d}\s\; \eps^{-1}\;\frac{-\bar m'(s/\eps)^2Np\,\partial_td }{2\bar a(s/\eps)\sqrt{1-\bar m(s/\eps)^2}} \\ & =- \int_0^T\!\mathrm{d} t\int_{\Gamma(t)} \!\mathrm{d}\s\; v_t p\;. \end{split} \] Concerning $\Lambda_\eps^{(2)}$ and $\Lambda_\eps^{(3)}$, we observe that, as $g_\eps=O(\eps) \bar m'(d/\eps)$ and the dependence on $\varphi_\eps$ of the integrands is locally Lipschitz, the contribution due to $R_\eps$ is $o(1)$ as $\eps\to 0$, and therefore can be neglected. Noticing that \eqref{edf} holds true here with $Q$ in place of $\mc K\bar Q$ and recalling \eqref{f} we have, \[ \begin{split} \lim_{\eps\to 0}\Lambda_\eps^{(2)}(\varphi_\eps,g_\eps) & = \lim_{\eps\to 0} \int_0^T\!\mathrm{d} t\int\limits_{|s|\le C\eps(\log\eps)^2}\!\mathrm{d} s\int_{d=s}\!\mathrm{d}\s\; \;N p \\ & \quad\qquad \times \eps^{-1}\Big\{\bar m' \big[-\b\tilde J*(\bar m' Q)+ \b(\tilde J*\bar m')Q+ \b \mc K f \big]\Big\}(s/\eps) \\ & = \int_0^T\!\mathrm{d} t\int_{\Gamma(t)} \!\mathrm{d}\s\; Np \int\!\nu(\mathrm{d}\xi)\; \bar m' \big[\b\kappa_t\, 2\bar a \sqrt{1-\bar m^2}\, f -L(\bar m' Q)\big] \\ & = \int_0^T\!\mathrm{d} t\int_{\Gamma(t)} \!\mathrm{d}\s\;\theta\kappa_t p\;, \end{split} \] where in the last identity we used that $\int\!\nu(\mathrm{d}\xi)\; \bar m' L(\bar m' Q) = 0$. 
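This last cancellation follows from the symmetry of $L$ in $L^2(\bb R,\nu(\mathrm{d}\xi))$ together with the identity $L\bar m'=0$: \[ \int\!\nu(\mathrm{d}\xi)\; \bar m'\, L(\bar m' Q) = \int\!\nu(\mathrm{d}\xi)\; (L\bar m')\, \bar m' Q = 0\;. \]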
Finally, as $\cosh(\beta J_\eps*\varphi_\eps) \big[ 1-\varphi_\eps \tanh(\beta J_\eps*\varphi_\eps) \big] = \sqrt{1-\bar m(s/\eps)^2} + o(1)$, \[ \begin{split} \lim_{\eps\to 0}\Lambda_\eps^{(3)}(\varphi_\eps,g_\eps) & = \lim_{\eps\to 0} \int_0^T\!\mathrm{d} t\int_{\bb R}\!\mathrm{d} s\int_{d=s}\!\mathrm{d}\s\; \; \eps^{-1}\beta N^2 p^2 \frac{-\bar m'(s/\eps)^2}{2\bar a(s/\eps)\sqrt{1-\bar m(s/\eps)^2}} \\ & = \int_0^T\!\mathrm{d} t\int_{\Gamma(t)} \!\mathrm{d}\s\; (-\mu p^2) \;. \end{split} \] We conclude that, for any function $p=p(t,x)$, \[ \varliminf_{\eps\to 0} S_\eps(\varphi_\eps) \ge \int_0^T\!\mathrm{d} t\int_{\Gamma(t)} \!\mathrm{d}\s\;(-v_tp+\theta\kappa_t p - \mu p^2)\;, \] whence, by optimizing over $p$, \[ \varliminf_{\eps\to 0} S_\eps(\varphi_\eps) \ge \int_0^T\!\mathrm{d} t\int_{\Gamma(t)} \!\mathrm{d}\s\;\sup_p\; (-v_tp+\theta\kappa_t p - \mu p^2) = \frac 1{4\mu}\int_0^T\!\mathrm{d} t\int_{\Gamma(t)}\!\mathrm{d}\s\; (v_t-\theta \kappa_t)^2 \;. \] The statement (a) of the theorem is thus proved. \qed \section{Glauber+Kawasaki process} \label{sec:4} In this section we analyze the sharp interface limit of the action functional in the context of the Glauber+Kawasaki process. \subsection{Motivation} \label{sec:4.1} The so-called Glauber+Kawasaki process is a simple stochastic model describing a chemical reaction between two species together with their diffusion. Recall that $\bb T_{L}^d$ denotes the $d$-dimensional torus of side $L$ and, given an integer $N\ge 1$, in this section we let $\bb T^d_{L,N} := (LN^{-1} \bb Z / L \bb Z)^d $ be the discrete approximation of $\bb T^d_L$ with lattice spacing $L/N$. Set also $\Omega_{L,N}:=\{0,1\}^{\bb T_{L,N}^d}$; if $\eta\in\Omega_{L,N}$, we regard its value at the site $i\in \bb T_{L,N}^d$, which can be either zero or one, as representing the species occupying $i$.
The Glauber+Kawasaki process is a continuous-time Markov chain on the state space $\Omega_{L,N}$, whose dynamics is obtained by superimposing two elementary mechanisms, respectively modeling the reaction (Glauber) and the diffusion (Kawasaki). Namely, the generator of the chain is \begin{equation} \label{g+k} \mc L_{N} := \mc L_{\mathrm{G}} + N^2 \mc L_{\mathrm{K}}\;. \end{equation} Let $c$, a strictly positive local function of the configuration, be the rate of the reaction; then, given $f\colon \Omega_{L,N}\to \bb R$, \begin{equation*} \mc L_{\mathrm{G}} f \, (\eta) := \sum_{i\in \bb T_{L,N}^d} c(\tau_i \eta) \big[ f(\eta^i) - f(\eta)\big] \;, \end{equation*} where $\tau_i$ is the translation, i.e., $(\tau_i\eta)_j:= \eta_{j-i}$, and $\eta^i$ is the configuration obtained from $\eta$ by flipping the occupation number at $i$. The Kawasaki dynamics is instead defined by the generator, \begin{equation*} \mc L_{\mathrm{K}} f \, (\eta) := \frac 12 \sum_{\{i,j\}} \big[ f (\eta^{i,j}) - f(\eta) \big]\;, \end{equation*} where the sum runs over the (unordered) nearest-neighbor pairs $\{i,j\} \subset \bb T_{L,N}^d$ and $\eta^{i,j}$ is the configuration obtained from $\eta$ by exchanging the occupation numbers at the sites $i$ and $j$. Note that in \eqref{g+k} the Kawasaki dynamics has been sped up by $N^2$; as the lattice spacing is $L/N$ this corresponds to a diffusive rescaling. Let $\mc M_{+}(\bb T_L^d)$ be the set of positive measures on $\bb T_L^d$ and define the \emph{empirical density} as the map $\pi^N\colon \Omega_{L,N} \to \mc M_{+}(\bb T_L^d)$ given by \begin{equation*} \pi^N (\eta) = \frac 1{N^d} \sum_{i\in \bb T_{L,N}^d} \eta_i \delta_{i}\;.
\end{equation*} Assuming that the initial datum $\eta(0)$ for the Glauber+Kawasaki process is well prepared, in the sense that $\pi^N (\eta(0)) \to u_0(x) \mathrm{d} x$ for some Borel function $u_0\colon \bb T_L^d \to [0,1]$, in \cite{DFL} it is proven that $\pi^N (\eta(t)) \to u(t,x) \mathrm{d} x$ in probability, where $u\colon [0,\infty)\times \bb T_L^d \to [0,1]$ solves the reaction diffusion equation, \begin{equation} \label{hy} \begin{cases} \partial_t u = \frac 12 \Delta u + B(u) - D(u)\;,\\ u(0) =u_0\;. \end{cases} \end{equation} The reaction term is described by the coefficients $B,D\colon [0,1] \to [0,+\infty)$ that can be obtained from the microscopic rate $c\colon \Omega_{L,N} \to (0,\infty)$ according to the following procedure. For $\rho\in [0,1]$ let $\nu_\rho$ be the Bernoulli measure with parameter $\rho$, namely the product probability on $\Omega_{L,N}$ with marginals $\nu_\rho(\eta_i=1) = \rho$. Then, \begin{equation*} B(\rho) = \nu_\rho\big( (1-\eta_0) c \big)\;, \qquad\qquad D(\rho) = \nu_\rho\big( \eta_0 c \big)\;. \end{equation*} Observe that, as $c$ is a strictly positive local function, $B$ and $D$ are strictly positive polynomials in $(0,1)$, while $B(1)=0$ and $D(0)=0$. The hydrodynamic equation \eqref{hy} describes the typical behavior of the Glau\-ber+Kawasaki process in the diffusive scaling limit. On the other hand, the statistics of the fluctuations cannot be described simply by adding a Gaussian noise to \eqref{hy}. In fact, as made precise by large deviation theory, the Poissonian nature of the underlying Glauber dynamics is still felt in the diffusive limit. A main motivation for analyzing the large deviations properties of the empirical density is the following. Since the Glauber+Kawasaki process is irreducible, by a general criterion for Markov chains, there exists a unique stationary probability $\mu_{L,N}$ on $\Omega_{L,N}$.
As the dynamics does not satisfy the detailed balance condition, $\mu_{L,N}$ cannot be written in a closed form (with the exception of the special choices discussed in \cite{GJLV}) and, as shown in \cite{BJ}, it exhibits long range correlations. According to the general ideology of thermodynamic limits, we are not really interested in the full details of the probability $\mu_{L,N}$, but mainly in the statistics of the empirical density in the limit $N\to \infty$. It is therefore natural to introduce the sequence of probabilities $\{\wp_{L,N}\}_{N\in \bb N}$ on $\mc M_+(\bb T^d_L)$ defined by $\wp_{L,N} := \mu_{L,N} \circ (\pi^N)^{-1}$ and look for its asymptotic behavior as $N\to \infty$. Let $W\colon [0,1] \to \bb R$ be such that $B-D =-W'$. If $W$ has a unique minimizer, it is natural to expect that the sequence $\{\wp_{L,N}\}_{N\in \bb N}$ converges to the stationary solution of \eqref{hy} corresponding to the minimizer of $W$. Indeed, in the one dimensional case, this is proven in \cite{BPSV1} when $W$ has a single well, and in \cite{BPSV2} when $W$ has a double well. A finer description of the asymptotics of $\{\wp_{L,N}\}_{N\in \bb N}$ can be achieved by looking at its large deviations. In sloppy notation, this means the estimate \begin{equation*} \wp_{L,N} \big( \pi \sim u\, \mathrm{d} x \big) \asymp \exp\{ -N^d F_L(u) \}\;, \end{equation*} for a suitable functional $F_L$ on the set of densities $u\colon \bb T_L^d\to [0,1]$. Here $F_L$ plays the same role as the Cahn-Hilliard functional in the gradient theory of phase transitions or the Lebowitz-Penrose functional \eqref{F}, with the minor inconvenience that it is not known. According to the Freidlin-Wentzell theory for diffusions on $\bb R^n$, see \cite{FW}, the functional $F_L$ can be characterized in terms of a dynamical problem. To this end, fix $T>0$, a sequence of initial configurations $\eta^N(0)$, and consider the large deviations asymptotics for the empirical measure in the time window $[0,T]$.
Under the assumption that $B$ and $D$ are concave, this large deviation principle has been proved in \cite{JLV,BL,LT} in one dimension (the result can, however, be extended to higher dimensions); the corresponding rate function, denoted by $I_{T,L}$, will be recalled later. As proven in \cite[Chp.~6]{FW} for diffusions on $\bb R^n$ and in \cite{FLT} for the present setting (in one dimension and with additional hypotheses on the coefficients $B$ and $D$ implying a complete characterization of the stationary solutions to \eqref{hy}), the functional $F_L$ is the \emph{quasi-potential} associated to the dynamical rate functional $I_{T,L}$. This means that $F_L$ can be obtained from $I_{T,L}$ by solving a suitable variational/combinatorial problem whose details are here omitted. Within this approach, in \cite{FLT} it is deduced that the cluster points of $\{\wp_{L,N}\}$ are supported by the stationary solutions to \eqref{hy} associated to the minimizers of $W$. We consider here the case of a bistable reaction term. Recalling that $W$ satisfies $B-D =-W'$, we thus assume that $W$ has a twofold degenerate quadratic minimum, namely, there exist $0<\rho_-<\rho_+<1$ such that for $\rho\neq \rho_\pm$ we have $W(\rho) > W(\rho_-)=W(\rho_+)$ and $W''(\rho_-),W''(\rho_+) >0$. In this situation, the probability $\mu_{L,N}$ describes the phase coexistence of the two stable phases, like a Gibbs measure undergoing a first-order phase transition. Our purpose is to characterize the corresponding surface tension, which measures the cost of a transition between the two stable phases $\rho_\pm$. As in the case of the Ising-Kac model, the surface tension is identified by considering the sharp interface limit. Setting $\eps=L^{-1}$ and as far as the dynamical behavior is concerned, the joint limit $N\to\infty$ and $\eps\to 0$ (with $\eps\gg N^{-1}$) of the empirical measure under diffusive rescaling has been analyzed in \cite{Bo,KS0}.
More precisely, it is proven there that the limiting dynamics is described by the motion by mean curvature of the interface separating the stable phases, respectively in the classical setting \cite{Bo} and in the level set formulation \cite{KS0}. In order to analyze the asymptotic behavior of the probability $ \wp_{L,N}$, let us introduce the family of functionals $F^\epsilon$ on the set of densities $u\colon \bb T^d \to [0,1]$ defined by \begin{equation*} F^\epsilon (u) := \epsilon^{d-1} F_{\epsilon^{-1}} \big( u (\epsilon \cdot) \big)\;. \end{equation*} We are next going to argue, but not rigorously prove, that as $\epsilon\to 0$ the sequence $ F^\epsilon$ converges to a functional $F$ that is finite only if $u\in BV \big(\bb T^d;\{\rho_-,\rho_+\}\big)$ and for such $u$ is proportional to the (measure theoretic) perimeter of the jump set of $u$. Namely, $F(u)=\tau \mc H^{d-1}(S_u)$, where $ \mc H^{d-1}$ is the $(d-1)$-dimensional Hausdorff measure on $\bb T^d$ and $S_u$ denotes the jump set of $u$. The constant $\tau>0$ is then identified with the surface tension for the Glauber+Kawasaki process and will be characterized in terms of the solution to a one-dimensional ODE. As the quasi-potential $F_L$ is not directly accessible, we shall consider the sharp interface limit of the dynamical rate function $I_{T,L}$. More precisely, let $S_\epsilon$ be the functional on the set of paths $\phi\colon [0,T]\times \bb T^d \to [0,1]$ defined by $S_\epsilon (\phi) := \epsilon^{d-1} I_{\epsilon^{-2}T, \epsilon^{-1}} \big( \phi(\epsilon^2\cdot, \epsilon \cdot)\big)$.
In Theorem~\ref{thm:4.1} below we prove that, for suitable sequences $\phi_\epsilon$ converging to \begin{equation*} \phi(t,x)= \begin{cases} \rho_+ & \textrm{ if } x\in \Omega(t)\;,\\ \rho_- & \textrm{ if } x\not\in \bar \Omega(t)\;,\\ \end{cases} \end{equation*} for some open $\Omega(t) \subset \bb T^d$ with smooth boundary, \begin{equation} \label{ldrf} \lim_{\epsilon\to 0} S_\epsilon(\phi_\epsilon) =\frac 1{2} \tau \int_0^T\!\mathrm{d} t\int_{\Gamma(t)}\!\mathrm{d}\s\; \left(v_t- \frac 12 \kappa_t\right)^2\;, \end{equation} where $\Gamma(t) =\partial \Omega(t)$, $\mathrm{d}\sigma$ is the surface measure on $\Gamma(t)$, $v_t$ is the normal velocity of $\Gamma(t)$, $\kappa_t$ is its mean curvature, and $\tau$ is a positive constant. Observe now that the limiting dynamical rate function in \eqref{ldrf} measures, in an $L^2$ sense, the deviations from the motion by mean curvature $v_t = \frac 12 \kappa_t$. Since this evolution is, informally, the gradient flow of (one half of) the perimeter, we deduce that the quasi-potential associated to the limiting dynamical rate function is proportional to the perimeter and we identify the proportionality constant with $\tau$, see \cite[Thm.~4.3.1]{FW} for a proof of this statement in the context of diffusions in $\bb R^n$. \subsection{Preliminaries} \label{sec:4.2} Let $\bar u$ be the instanton (standing wave) associated to the hydrodynamic equation \eqref{hy} in dimension one, namely the solution to \begin{equation} \label{inst1} \frac 12 \bar u '' + B(\bar u) - D(\bar u) = 0\;, \quad \bar u(\pm\infty) = \rho_\pm\;, \quad \bar u(0) = \frac{\rho_++\rho_-}2\;. \end{equation} Clearly $\bar u'(\xi)>0$ and it can be easily shown that \begin{equation} \label{asbaru} \sup_{\xi\in\bb R} \big(\big|\bar u'(\xi)\big| + \big|\bar u''(\xi)\big|\big) \mathrm{e}^{\gamma|\xi|} <+\infty\;, \end{equation} where $\gamma=\min\{D'(\rho_+)-B'(\rho_+);D'(\rho_-)-B'(\rho_-)\}$.
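As a purely illustrative example (this explicit choice plays no role in the sequel), if the reaction term is the cubic $B(u)-D(u) = 2u(1-u)(2u-1)$, for which $\rho_-=0$ and $\rho_+=1$, then the solution to \eqref{inst1} is \[ \bar u(\xi) = \frac{1+\tanh\xi}2\;; \] indeed, setting $m=\tanh\xi$, one has $\frac 12 \bar u'' = -\frac{m(1-m^2)}2 = -2\bar u(1-\bar u)(2\bar u-1)$, and \eqref{asbaru} holds with $\gamma=2$.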
The large deviation asymptotics for the empirical density under the Glauber+Ka\-wasaki dynamics has been analyzed in \cite{BL,JLV}. We next recall the associated rate function. Given $L$ positive, let $C(L):=\{\rho\in L^\infty(\bb T_L^d) \colon 0\le \rho \le 1\}$, where $\bb T_L^d$ is the $d$-dimensional torus of side $L>0$, equipped with the (metrizable) weak*-topology. We define $I_{T,L} \colon C([0,T];C(L)) \to [0, \infty]$ by \begin{equation} \label{Igk} I_{T,L}(\phi) := \sup_{H\in C^{1,2}([0,T]\times\bb T_L^d)} J^H_{T,L}(\phi) \;, \end{equation} where \begin{equation} \label{Jgk} \begin{split} &J^H_{T,L}(\phi) := \int\!\mathrm{d} r\, \big[\phi(T,\cdot)H(T,\cdot) - \phi(0,\cdot)H(0,\cdot)\big] \\ & \quad - \int_0^T\!\mathrm{d} t \int\!\mathrm{d} r\, \bigg[\phi\bigg(\partial_t H + \frac 12 \Delta H \bigg) + \frac 12 \phi (1-\phi) |\nabla H |^2 \bigg] \\ & \quad - \int_0^T\!\mathrm{d} t \int\!\mathrm{d} r\, \bigg[B(\phi) (\mathrm{e}^H-1) + D(\phi)(\mathrm{e}^{-H}-1)\bigg]\;. \end{split} \end{equation} Under suitable assumptions on the initial conditions, in \cite{BL,JLV,LT} it is proven that the empirical density sampled according to the Glauber+Kawasaki dynamics, regarded as a random variable taking values in the Skorokhod space $D([0,T];\mc M(\bb T^d_L))$, satisfies a large deviation principle with speed $N^d$ and rate function $\mc I_{T,L}$ given by $\mc I_{T,L}(\nu) = I_{T,L}(\phi)$ if $\nu_t= \phi_t \, \mathrm{d} r$ for some $\phi \in C([0,T];C(L))$ and $+\infty$ otherwise.
By \cite[Lemma 2.1]{JLV}, or rather its generalization in dimension $d\ge 1$, if $\phi\in C^{2,3}([0,T]\times \bb T_L^d;(0,1))$ then the supremum in \eqref{Igk} is achieved for $H=H(\phi) \in C^{1,2}([0,T]\times \bb T_L^d)$, the unique classical solution to the non-linear Poisson equation, \begin{equation} \label{Ihke} \partial_t \phi + \nabla\cdot [\phi(1-\phi)\nabla H] = \frac 12 \Delta \phi + B(\phi)\mathrm{e}^H - D(\phi)\mathrm{e}^{-H}\;, \end{equation} so that, for such $H$, \begin{equation} \label{Ihk1} \begin{split} & I_{T,L}(\phi) = J^H_{T,L}(\phi) = \frac 12 \int_0^T\!\mathrm{d} t \int \!\mathrm{d} r\, \phi(1-\phi) |\nabla H|^2 \\ & + \int_0^T\!\mathrm{d} t \int\!\mathrm{d} r\, B(\phi) \big(1-\mathrm{e}^H + H \mathrm{e}^H\big) + \int_0^T\!\mathrm{d} t \int\!\mathrm{d} r\, D(\phi) \big(1-\mathrm{e}^{-H} - H \mathrm{e}^{-H}\big)\;. \end{split} \end{equation} Due to the lack of reversibility of the underlying microscopic dynamics, it is not possible to decompose the action functional in a form analogous to \eqref{gg*}. \subsection{Sharp interface limit of the action functional} \label{sec:4.3} We set $\eps=L^{-1}$, perform a diffusive rescaling of space and time, and normalize the resulting action by a factor $L^{d-1}$. As in the previous section, the space variable in $\bb T^d$ is denoted by $x$.
We thus define the rescaled functional $S_\eps \colon C([0,T];C(1)) \to [0,\infty]$ by \begin{equation} \label{Sgk} S_\eps(\phi) = \eps^{d-1} I_{\eps^{-2}T,\eps^{-1}}(\phi(\eps^2\cdot, \eps \cdot )) \;, \end{equation} whose variational representation is \begin{equation} \label{Sgk1} S_\eps(\phi) := \sup_{H\in C^{1,2}([0,T]\times\bb T^d)} J^H_\eps(\phi) \;, \end{equation} where \begin{equation} \label{Jgk1} \begin{split} &J^H_\eps(\phi) := \frac 1\eps\int\!\mathrm{d} x\, \big[\phi(T,\cdot)H(T,\cdot) - \phi(0,\cdot)H(0,\cdot)\big] \\ & \quad - \frac 1\eps \int_0^T\!\mathrm{d} t \int\!\mathrm{d} x\, \bigg[\phi \bigg(\partial_t H + \frac 12 \Delta H \bigg) + \frac 12 \phi(1-\phi) |\nabla H|^2 \bigg] \\ & \quad - \frac 1\eps \int_0^T\!\mathrm{d} t \int\!\mathrm{d} x\, \bigg(B(\phi) \frac{\mathrm{e}^H-1}{\eps^2} + D(\phi) \frac{\mathrm{e}^{-H}-1}{\eps^2}\bigg)\;. \end{split} \end{equation} Moreover, the representation \eqref{Ihke}, \eqref{Ihk1} gives, for $\phi\in C^{2,3}([0,T]\times \bb T^d;(0,1))$, \begin{equation} \label{Ihke1} \partial_t \phi + \nabla\cdot [\phi(1-\phi)\nabla H] = \frac 12 \Delta \phi + \frac{B(\phi)\mathrm{e}^H - D(\phi)\mathrm{e}^{-H}}{\eps^2} \end{equation} and \begin{equation} \label{Ihk2} \begin{split} S_\eps(\phi) & = \frac 1{2\eps} \int_0^T\!\mathrm{d} t \int \!\mathrm{d} x\, \phi(1-\phi) |\nabla H|^2 \\ & \quad + \frac 1{\eps^3} \int_0^T\!\mathrm{d} t \int\!\mathrm{d} x\, B(\phi) \big(1-\mathrm{e}^H + H \mathrm{e}^H\big) \\ & \quad + \frac 1{\eps^3} \int_0^T\!\mathrm{d} t \int\!\mathrm{d} x\, D(\phi) \big(1-\mathrm{e}^{-H} - H \mathrm{e}^{-H}\big)\;. 
\end{split} \end{equation} As in the previous section, given a $C^1$-family of oriented smooth surfaces $\Gamma=\{ \Gamma(t)\}_{t\in [0,T]}$, with $\Gamma(t) = \partial\Omega(t)$ for some open $\Omega(t) \subset \bb T^d$, we denote by $n_t=n_{\Gamma(t)}$ the inward normal of $\Gamma(t)$, by $v_t\colon \Gamma(t) \to \bb R$ the normal velocity of $\Gamma$ at time $t$, by $\kappa_t$ the mean curvature of $\Gamma(t)$, and by $d(\cdot,\Gamma(t))$ a regularized version of the signed distance from $\Gamma(t)$. For such families of surfaces we define the limiting action functional, \begin{equation} \label{Rpn1} S_\mathrm{ac}(\Gamma) = \frac 1{4\mu}\int_0^T\!\mathrm{d} t\int_{\Gamma(t)}\!\mathrm{d}\s\; \left(v_t- \frac 12 \kappa_t\right)^2\;, \end{equation} where the mobility $\mu$ is computed according to the following procedure. Recalling the definition \eqref{inst1}, let $L_{\bar u}$ be the linear operator given by \begin{equation} \label{Lbar} L_{\bar u} \psi = \big[\bar u (1-\bar u) \psi'\big]' - [B(\bar u) + D(\bar u)]\psi\;, \end{equation} which is obtained by linearizing \eqref{Ihke} in dimension one at $\phi=\bar u$ around $H=0$. Then, \begin{equation} \label{mu1} \mu =\frac{2\langle\bar u', (-L_{\bar u})\bar u'\rangle_{L^2} }{ \|\bar u'\|_{L^2}^4}\;. \end{equation} For later purposes, we notice that, since $B+D$ is strictly positive, $L_{\bar u}$ is bijective on $L^2(\bb R)$. Moreover, the inverse of $L_{\bar u}$ preserves the decay properties, in the sense that if $L_{\bar u}\psi = w$ then, for any $\gamma'>0$, \begin{equation} \label{decayl} \sup_{\xi\in\bb R}|w(\xi)|\mathrm{e}^{\gamma'|\xi|} < + \infty \quad \Longrightarrow \quad \sup_{\xi\in\bb R} \big(|\psi(\xi)| + |\psi'(\xi)| + |\psi''(\xi)|\big) \mathrm{e}^{\gamma'|\xi|}< +\infty\;.
\end{equation} \begin{theorem} \label{thm:4.1} Given a $C^1$-family of oriented smooth surfaces $\Gamma=\{ \Gamma(t)\}_{t\in [0,T]}$, with $\Gamma(t) = \partial\Omega(t)$, consider sequences $\{ \phi_\eps \} \subset C([0,T];C(1))$, converging to $\rho_- + (\rho_+ - \rho_-) {1 \mskip -5mu {\rm I}}_{\Omega(\cdot)}$, of the form \begin{equation} \label{recovery1} \phi_\eps(t,x) = \bar u\left(\frac{d(x,\Gamma(t))}\eps + \eps Q\left(t,x,\frac{d(x,\Gamma(t))}\eps\right)\right)+\eps R_\eps(t,x)\;, \end{equation} where $\bar u$ is the instanton, $Q \colon [0,T] \times \bb T^d \times \bb R \to \bb R$ is a smooth function such that \begin{equation} \label{Q3} \begin{split} & \sup_{(t,x,\xi) \in [0,T] \times \bb T^d \times \bb R} \bigg\{ \frac{\big| Q(t,x,\xi) \big| + \big| \partial_\xi Q(t,x,\xi)\big|}{1+|\xi|} \\ & \quad\qquad\qquad + \frac{\big|\partial_t Q(t,x,\xi) \big| +\big|D_x Q(t,x,\xi) \big| + \big|D^2_{xx}Q(t,x,\xi)\big|}{1+|\xi|}\bigg\} < + \infty\;, \end{split} \end{equation} and $R_\eps \colon [0,T] \times \bb T^d \to \bb R$ is a smooth function. \begin{itemize} \item[(a)] If $\| R_\eps\|_\infty +\| \partial_t R_\eps \|_\infty +\| \Delta R_\eps \|_\infty \to 0$ as $\eps \to 0$ then, for any $Q$, \[ \liminf_{\eps\to 0} S_\eps(\phi_\eps) \ge S_\mathrm{ac}(\Gamma)\;. \] \item[(b)] There exists $Q^*$ such that, choosing $Q=Q^*$ and $R_\eps=0$, we have, \[ \lim_{\eps\to 0} S_\eps(\phi_\eps) = S_\mathrm{ac}(\Gamma)\;. \] \end{itemize} \end{theorem} For expository reasons, we prove the statements in reverse order. \noindent {\it Proof of \rm (b).} In the sequel, we assume $R_\eps =0$ and $Q(t,x,\xi) := A(t,x)\bar Q(\xi)$, where $A\colon [0,T]\times\bb T^d \to \bb R$ and $\bar Q\colon \bb R\to \bb R$ are smooth functions to be determined later, with $\bar Q$ such that \begin{equation} \label{Q2} \sup_{\xi\in\bb R} \bigg\{ \frac{\big| \bar Q(\xi) \big|}{1+|\xi|} + \big|\bar Q'(\xi)\big| + \big|\bar Q''(\xi)\big|\bigg\} < + \infty\;.
\end{equation} In order to compute the cost of the sequence \eqref{recovery1} with these choices, we start from the expansions, \begin{equation} \label{expsu} \begin{split} \phi_\eps & = \bar u(d_\eps) + \eps\bar u'(d_\eps) Q_\eps + \eps R^{(1)}_\eps \;, \qquad \partial_t\phi_\eps = \bar u'(d_\eps) \frac{\partial_t d}\eps + R^{(2)}_\eps \;, \\ \Delta\phi_\eps & = \frac{\bar u''(d_\eps)}{\eps^2} +\frac {\bar u'(d_\eps) \Delta d + \bar u'''(d_\eps) Q_\eps+ 2 \bar u''(d_\eps) Q_\eps' + \bar u'(d_\eps) Q_\eps''}\eps+ R^{(3)}_\eps \; , \end{split} \end{equation} where the notation $d=d(x,\Gamma(t))$, $d_\eps = d/\eps$, $Q_\eps = Q(t,x,d_\eps)$, $Q_\eps'=\partial_\xi Q(t,x,d_\eps)$, and $Q_\eps''=\partial_{\xi\xi}^2 Q(t,x,d_\eps)$ has been adopted, and $R^{(i)}_\eps=R^{(i)}_\eps(t,x,d_\eps)$, $i=1,2,3$, are such that \[ \limsup_{\eps \to 0} \sup_{(t,x,\xi) \in [0,T] \times \bb T^d \times \bb R} \mathrm{e}^{\gamma |\xi|/2} |R^{(i)}_\eps(t,x,\xi)| < \infty\;, \] with $\gamma$ as in \eqref{asbaru}. Eqs.~\eqref{expsu} can be easily derived using \eqref{asbaru} and recalling that $|\nabla d|=1$ in a neighborhood of $\Gamma(t)$. Next, assuming for the function $H$, the unique solution to \eqref{Ihke1}, an expansion of the form $H(t,x) = \eps H_1(t,x,d(x,\Gamma(t))/\eps) + O(\eps^2)$, we deduce a linear equation for $H_1$. To this end, we write \begin{equation} \label{expsu1} \begin{split} & \frac{B(\phi_\eps)\mathrm{e}^H - D(\phi_\eps)\mathrm{e}^{-H}}{\eps^2} = \frac{B(\bar u(d_\eps)) - D(\bar u(d_\eps)) }{\eps^2} \\ & \quad + \frac{B'(\bar u(d_\eps)) - D'(\bar u(d_\eps)) }{\eps} \bar u'(d_\eps) Q_\eps + \frac{B(\bar u(d_\eps)) + D(\bar u(d_\eps)) }{\eps} H_1 + O(1)\;.
\end{split} \end{equation} Plugging \eqref{expsu} and \eqref{expsu1} into \eqref{Ihke1} and making use of \eqref{inst1} and its derivative, we deduce, after some straightforward computations, that, for $(t,x)\in [0,T]\times\bb T^d$ fixed, $H_1(t,x,\cdot)$ satisfies \begin{equation} \label{H1} (\bar u(1-\bar u) H_1')' - [B(\bar u) + D(\bar u)] H_1 = \left(\frac 12 \Delta d -\partial_td\right) \bar u' + \bar u'' A\bar Q' + \frac 12 \bar u' A\bar Q''\;. \end{equation} Hence, by choosing $A = \partial_td - \frac 12 \Delta d$ and recalling \eqref{Lbar}, we get $H_1(t,x,\xi) = A(t,x) h(\xi)$ where $h\colon \bb R\to\bb R$ solves \begin{equation} \label{key} L_{\bar u} h = - \bar u' + \bar u'' \bar Q' + \frac 12 \bar u' \bar Q''\;. \end{equation} For later purposes, we remark that, in view of \eqref{asbaru}, \eqref{Q2}, and \eqref{decayl}, \begin{equation} \label{asbarh} \sup_{\xi\in\bb R} \big(|h(\xi)| + |h'(\xi)| + |h''(\xi)|\big) \mathrm{e}^{\gamma|\xi|} <+\infty\;. \end{equation} With this choice of $H_1$, the initial assumption on the expansion for $H$ holds. More precisely, in Appendix \ref{app:a} it is proven that $H(t,x) = \eps H_1(t,x,d(x,\Gamma(t))/\eps) + \eps^2 \tilde{H}_\eps(t,x)$ with \begin{equation} \label{Htilde} \limsup_{\eps \to 0} \sup_{(t,x) \in [0,T] \times \bb T^d } \big( |\tilde{H}_\eps(t,x)|+\eps |\nabla \tilde{H}_\eps(t,x)|\big) <\infty \, .
\end{equation} From \eqref{Ihk2}, the explicit form of $H_1$, and \eqref{Htilde} we then have, \begin{equation} \label{Ihk3} S_\eps(\phi_\eps) = S_\eps^{(1)} + S_\eps^{(2)} + O(\eps) \;, \end{equation} where, after an integration by parts, \[ S_\eps^{(1)} = \frac 1{2\eps} \int_0^T\!\mathrm{d} t \int \!\mathrm{d} x\, \bigg[-\nabla\cdot (\phi_\eps (1-\phi_\eps)\, \eps^2 \nabla H_1) + \frac{B(\phi_\eps) + D(\phi_\eps)}2 H_1\bigg] H_1 \] and \[ \begin{split} S_\eps^{(2)}& =\frac 12 \int_0^T\!\mathrm{d} t \int \!\mathrm{d} x\, \phi_\eps (1-\phi_\eps)\, \eps^2 \nabla H_1 \cdot \nabla \tilde H_\eps \\ & \quad + \int_0^T\!\mathrm{d} t \int\!\mathrm{d} x\, \bigg[ (B(\phi_\eps)+D(\phi_\eps)) H_1\tilde H_\eps + \frac{B(\phi_\eps)-D(\phi_\eps)}3 H_1^3 \bigg] \;. \end{split} \] Now, from \eqref{expsu}, \eqref{asbarh}, and \eqref{H1} we deduce, \begin{equation} \label{nuova} \begin{split} \nabla \cdot (\phi_\eps & (1-\phi_\eps) \nabla (\eps H_1)) - \frac{B(\phi_\eps)+D(\phi_\eps)}{\eps^2} (\eps H_1) \\ & = \frac 1\eps (\bar u(d_\eps)(1-\bar u(d_\eps)) H_1')' - \frac{B(\bar u(d_\eps)) + D(\bar u(d_\eps))}\eps H_1 + R^{(4)}_\eps \\ & = \frac 1\eps \bigg[ \left(\frac 12 \Delta d -\partial_td\right) \bar u'(d_\eps) + \bar u''(d_\eps) A\bar Q' + \frac 12 \bar u'(d_\eps) A\bar Q''\bigg] + R^{(4)}_\eps\;, \end{split} \end{equation} with $R^{(4)}_\eps = R^{(4)}_\eps(t,x,d_\eps)$ such that \[ \limsup_{\eps \to 0} \sup_{(t,x,\xi) \in [0,T] \times \bb T^d \times \bb R} \mathrm{e}^{\gamma |\xi|/2} |R^{(4)}_\eps(t,x,\xi)| < \infty\;. \] Recalling that $A(t,x) = \partial_td(x,\Gamma(t)) - \frac 12 \Delta d(x,\Gamma(t)) = v_t(x) - \frac 12 \kappa_t(x)$ for $x\in\Gamma(t)$ and using \eqref{asbaru}, \eqref{expsu}, \eqref{key}, and \eqref{Htilde}, by applying the co-area formula as done in the proof of Theorem \ref{thm:3.1}, we can compute the limits of $S_\eps^{(1)}$ and $S_\eps^{(2)}$ as $\eps\to 0$.
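The two singular orders retained in the expansion \eqref{expsu1} can be checked symbolically. The sketch below (with sympy) uses placeholder polynomial rates for $B$ and $D$, which are illustrative assumptions and not the rates of the model; the symbol $Q$ stands for the product $\bar u'(d_\eps)Q_\eps$, and $H_1$ for the first-order corrector.

```python
import sympy as sp

eps, u, Q, H1 = sp.symbols('eps u Q H1', positive=True)
B = lambda s: s**2 * (1 - s)   # placeholder birth rate (assumption)
D = lambda s: s**3             # placeholder death rate (assumption)

phi = u + eps * Q              # Q stands for u'(d_eps) * Q_eps
H = eps * H1
full = (B(phi) * sp.exp(H) - D(phi) * sp.exp(-H)) / eps**2

claimed = (B(u) - D(u)) / eps**2 \
    + ((sp.diff(B(u), u) - sp.diff(D(u), u)) * Q + (B(u) + D(u)) * H1) / eps

# the difference of the two sides must stay bounded as eps -> 0
assert sp.limit((full - claimed) * eps, eps, 0) == 0
```

The check is insensitive to the particular choice of $B$ and $D$, since only the first-order Taylor expansions of the rates and of the exponentials enter.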
By a few direct calculations (which we omit) we obtain, \begin{equation} \label{Ihk4} \lim_{\eps\to 0} S_\eps^{(1)}= C_{\bar Q} \int_0^T\!\mathrm{d} t\int_{\Gamma(t)}\!\mathrm{d}\s\; \bigg(v_t- \frac 12 \kappa_t\bigg)^2\;, \qquad \lim_{\eps\to 0} S_\eps^{(2)} = 0\;, \end{equation} where, denoting by $\langle\cdot,\cdot\rangle_{L^2}$ the scalar product in $L^2(\bb R;\mathrm{d}\xi)$, \[ C_{\bar Q} = \frac 12 \Big\langle \Big(\bar u' - \bar u''\bar Q'-\frac 12 \bar u' \bar Q''\Big), (- L_{\bar u})^{-1} \Big(\bar u' - \bar u''\bar Q'-\frac 12 \bar u' \bar Q''\Big) \Big\rangle_{L^2}\;. \] We observe that, since $\bar u''\bar Q'+\frac 12 \bar u' \bar Q'' \in L^2(\bb R;\mathrm{d}\xi)$ and $\big\langle\bar u', \bar u''\bar Q'+\frac 12 \bar u' \bar Q''\big\rangle_{L^2} = 0$, \begin{equation} \label{C*} C_{\bar Q} \ge C^* := \frac 12 \min_{\psi:\langle \bar u',\psi\rangle_{L^2}=0} \langle(\bar u' - \psi) , (- L_{\bar u})^{-1} (\bar u' - \psi)\rangle_{L^2}\;. \end{equation} The above minimum is achieved at \[ \bar\psi = \bar u' + \frac{\|\bar u'\|_{L^2}^2}{\langle \bar u', (- L_{\bar u})\bar u'\rangle_{L^2}} L_{\bar u} \bar u'\;, \] so that, recalling \eqref{mu1}, \begin{equation} \label{C*1} C^* = \frac 12 \langle(\bar u' - \bar\psi), (- L_{\bar u})^{-1} (\bar u' - \bar\psi)\rangle_{L^2} = \frac{\|\bar u'\|_{L^2}^4}{2\langle\bar u',(-L_{\bar u})\bar u'\rangle_{L^2}} = \frac{1}{4\mu}\;. \end{equation} In view of \eqref{Ihk3}, \eqref{Ihk4}, and \eqref{C*}, to conclude the proof of the statement (b), it remains to show that there is $\bar Q$ for which the minimum is attained, i.e., there exists a solution $\bar Q$ to the linear equation $\bar u''\bar Q'+\frac 12 \bar u' \bar Q'' = \bar \psi$ satisfying \eqref{Q2}. This solution can be computed explicitly; precisely, \[ \bar Q(\xi) = 2\int_0^\xi\!\mathrm{d}\xi'\, \frac{1}{\bar u'(\xi')^2} \int_{-\infty}^{\xi'}\mathrm{d}\xi''\, \bar u'(\xi'')\bar\psi(\xi'')\;, \] which satisfies \eqref{Q2} in view of \eqref{asbaru}.
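The variational computation behind \eqref{C*} and \eqref{C*1} has an elementary finite-dimensional analogue that can be verified numerically: with a symmetric positive definite matrix $M$ standing in for $-L_{\bar u}$ and a vector $v$ for $\bar u'$, the minimum of $\frac12\langle v-\psi, M^{-1}(v-\psi)\rangle$ over $\psi\perp v$ equals $|v|^4/(2\langle v,Mv\rangle)$ and is attained at $\psi^* = v - \frac{|v|^2}{\langle v,Mv\rangle}Mv$. A numerical sketch (the matrix and vectors below are random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)        # SPD stand-in for -L_{\bar u}
v = rng.standard_normal(n)

lam = v @ v / (v @ M @ v)
psi_star = v - lam * (M @ v)        # candidate minimizer
assert abs(v @ psi_star) < 1e-10    # satisfies the constraint <v, psi> = 0

def obj(psi):
    w = v - psi
    return 0.5 * w @ np.linalg.solve(M, w)

c_star = (v @ v) ** 2 / (2 * v @ M @ v)
assert abs(obj(psi_star) - c_star) < 1e-10

# any admissible perturbation increases the value (convexity)
for _ in range(100):
    p = rng.standard_normal(n)
    p -= (v @ p) / (v @ v) * v      # project onto the constraint plane
    assert obj(psi_star + 0.1 * p) >= c_star - 1e-12
```

The Lagrange-multiplier computation is identical in the infinite-dimensional setting, with $M^{-1}$ playing the role of $(-L_{\bar u})^{-1}$.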
\medskip \noindent {\it Proof of \rm (a).} Let $\phi_\eps$ be as in \eqref{recovery1}. By \eqref{Sgk1}, \eqref{Jgk1}, and an integration by parts we have, for any $H\in C^{1,2}([0,T]\times\bb T^d)$, \begin{equation} \label{Jgk2} \begin{split} S_\eps(\phi_\eps) & \ge \frac 1\eps \int_0^T\!\mathrm{d} t \int\!\mathrm{d} x\, \left[ \bigg(\partial_t \phi_\eps- \frac 12 \Delta \phi_\eps \bigg)H - \frac 12 \phi_\eps(1-\phi_\eps) |\nabla H|^2 \right] \\ & \quad + \frac 1\eps \int_0^T\!\mathrm{d} t \int\!\mathrm{d} x\, \left( B(\phi_\eps) \frac{1-\mathrm{e}^H}{\eps^2} + D(\phi_\eps) \frac{1-\mathrm{e}^{-H}}{\eps^2}\right)\;. \end{split} \end{equation} We choose $H$ of the form $H(t,x) = \eps H_1 (t,x,d(x,\Gamma(t))/ \eps)$, with $H_1\colon [0,T]\times\bb T^d \times \bb R \to \bb R$ a smooth function to be determined later such that, for some $\gamma'>0$, \begin{equation} \label{h1} \sup_{(t,x,\xi) \in [0,T] \times \bb T^d \times \bb R} \mathrm{e}^{\gamma' |\xi|} \big\{ |H_1(t,x,\xi)| + |\nabla_xH_1(t,x,\xi)| + |\partial_\xi H_1(t,x,\xi)|\big\} < \infty\;. \end{equation} Noticing that the dependence on $\phi_\eps$ of the integrands on the right-hand side of \eqref{Jgk2} is locally Lipschitz and recalling the hypothesis on $R_\eps$, in view of the above assumptions on $H_1$, it is readily seen that the contribution due to $R_\eps$ is $o(1)$ as $\eps\to 0$, and hence it can be neglected.
Therefore, a few direct calculations (using \eqref{expsu}, here applied to $\phi_\eps - \eps R_\eps$, and the co-area formula) give \[ \begin{split} \liminf_{\eps\to 0} S_\eps(\phi_\eps) & \ge \int_0^T\!\mathrm{d} t \int_{\Gamma(t)}\!\mathrm{d}\sigma \int\!\mathrm{d}\xi\, \bigg\{ \left[ \bar u' \Big( \partial_t d - \frac 12 \Delta d \Big) + \bar u'' Q' + \frac 12 \bar u' Q'' \right] H_1 \\ & \quad - \frac 12 \bar u(1-\bar u) (H_1')^2 - \frac{B(\bar u)+D(\bar u)}2 H_1^2 \bigg\}\;, \end{split} \] where the notation $Q'=\partial_\xi Q(t,x,\xi)$, $Q''=\partial_{\xi\xi}Q(t,x,\xi)$, $H_1'=\partial_\xi H_1(t,x,\xi)$, and $H_1''=\partial_{\xi\xi}H_1(t,x,\xi)$ has been adopted. The maximum of the expression on the right-hand side is attained at $H_1 = \mc H$ with, for any $(t,x)\in [0,T]\times\bb T^d$ fixed, $\mc H(t,x,\cdot)$ solution to \[ L_{\bar u} \mc H = \Big(\frac 12 \Delta d -\partial_td\Big) \bar u' - \bar u'' Q' - \frac 12 \bar u' Q'' =: F_Q \;, \] which satisfies assumption \eqref{h1} in view of \eqref{asbaru}, \eqref{Q3}, and \eqref{decayl}. Hence, \begin{equation} \label{fh} \liminf_{\eps\to 0} S_\eps(\phi_\eps) \ge \frac 12 \int_0^T\!\mathrm{d} t \int_{\Gamma(t)}\!\mathrm{d}\sigma \int\!\mathrm{d}\xi\, F_Q (-L_{\bar u})^{-1}F_Q\;. \end{equation} We next observe that, in view of \eqref{C*}, for each $(t,x)\in [0,T]\times\bb T^d$ fixed, \[ \frac 12 \int\!\mathrm{d}\xi\, F_Q (-L_{\bar u})^{-1}F_Q \ge \Big( \partial_t d - \frac 12 \Delta d \Big)^2 C^*\;. \] As $\partial_td(x,\Gamma(t)) - \frac 12 \Delta d(x,\Gamma(t)) = v_t(x) - \frac 12 \kappa_t(x)$ for $x\in \Gamma(t)$, the statement (a) follows from \eqref{fh} and \eqref{C*1}. \qed \section{Approximating nucleation events} \label{sec:5} In this section we discuss how the nucleation part of the rate function in \eqref{s+s} can be recovered from its absolutely continuous part. The general result should be the following.
Given $T>0$, let $\Gamma=\Gamma(t)$, $t\in [0,T]$, be a path of interfaces satisfying $S(\Gamma)<+\infty$ (with possible nucleation events); then there should exist a sequence of paths $\{\Gamma_\delta\}$ with zero nucleation cost such that $\Gamma_\delta\to \Gamma$ and $S_\mathrm{ac}(\Gamma_\delta) \to S(\Gamma)$ as $\delta\to 0$. We shall not discuss the issue at this level of generality but rather provide a strategy for a special class of paths. We also restrict the analysis to the two-dimensional isotropic case. The basic idea is that in dimension $d=2$ points (i.e., $(d-2)$-dimensional interfaces) can be nucleated with no cost and we can then let them evolve for a short time (vanishing as $\delta\to 0$) in such a way that at the final time the resulting interface approximates the one we want to nucleate. Moreover, as we are going to argue, it is possible to arrange the evolution so that the corresponding cost indeed approximates the nucleation one. Let us consider a path $\Gamma$ of the form, \begin{equation} \label{ori} \Gamma(t)= \begin{cases} \emptyset & \textrm{if } t\in [0,\bar t)\;, \\ \Gamma^0 (t) & \textrm{if } t \in [\bar t, T]\;, \end{cases} \end{equation} where $\bar t$ is the nucleation time and $\Gamma^0$ is a smooth path of smooth one-dimensional interfaces with initial value $\bar\Gamma := \Gamma^0(\bar t)$. We assume that $\Gamma^0(t)=\partial\Omega^0(t)$, for some open set $\Omega^0(t)$, when $t\in (\bar t,T]$, while $\bar \Gamma = \lim_{t\downarrow \bar t} \Omega^0(t)$. The corresponding nucleation cost is \begin{equation*} S_\mathrm{nucl} (\Gamma) = 2 \tau \mathrm{Per} (\bar\Gamma)\;, \end{equation*} where we recall that $\tau$ is the surface tension and $\mathrm{Per}(\bar\Gamma)$ denotes here the length of $\bar\Gamma$, while the factor $2$ is due to the fact that $\bar\Gamma$ has to be thought of as an interface with double multiplicity.
In view of the assumed smoothness of $\bar \Gamma$, by localization, it suffices to consider the case in which it is a segment, say of length $\ell$. In order to define the corresponding approximating path $\Gamma_\delta$, we first construct a path $\Sigma_\delta(s)$, $s\in [0,\sigma_\delta]$, with $\sigma_\delta\to 0$, satisfying $\Sigma_\delta(0) \to \bar\Gamma$, $\mathrm{Per}\big(\Sigma_\delta(0)\big)\to 2 \, \mathrm{Per} (\bar\Gamma)= 2\ell$, and $\Sigma_\delta(\sigma_\delta)=\emptyset$. To this end, chop the segment $\bar\Gamma$ into $N_\delta$ sub-segments (with $N_\delta$ diverging as $\delta\to 0$) and then fatten each sub-segment into an ellipse with major axis of length $\ell/N_\delta$ and minor axis of length $m_\delta \ll \ell/N_\delta$. Denoting by $\bar\Sigma_\delta$ the resulting interface, then $\bar\Sigma_\delta\to \bar\Gamma$ and $\mathrm{Per}\big(\bar\Sigma_\delta\big)\to 2 \ell$. The path $\Sigma_\delta(s)$, $s\ge 0$, is now defined as the evolution by mean curvature with initial datum $\bar\Sigma_\delta$ and transport coefficient $\theta$. Here, we understand that each ellipse evolves by mean curvature separately. By comparing the evolution of each ellipse with that of a circle of initial diameter equal to the major axis, we deduce that $\Sigma_\delta(\sigma_\delta)=\emptyset$ for some $\sigma_\delta \le (\ell/N_\delta)^2/(8\theta )$. We now set \begin{equation*} \Gamma_\delta(t) := \begin{cases} \emptyset & t\in [0,\bar t -\sigma_\delta)\;, \\ \Sigma_\delta(\bar t - t) & t\in [\bar t -\sigma_\delta, \bar t)\;, \\ \Gamma^0_\delta(t) & t\in [\bar t, T]\;, \end{cases} \end{equation*} where $\Gamma^0_\delta$ is a suitable approximation of the path $\Gamma^0$ in \eqref{ori}, satisfying $\Gamma^0_\delta(\bar t) = \Sigma_\delta(0)$, arranged so that $S_{\mathrm{ac}, [\bar t, T]}(\Gamma^0_\delta) \to S_{\mathrm{ac}, [\bar t, T]}(\Gamma^0)$; the details are omitted.
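Two elementary facts used in this construction can be checked numerically: the extinction bound for a circle shrinking by curvature (with the normalization $v=\theta\kappa$, so that $\dot r = -\theta/r$), and the convergence of the total perimeter of the $N_\delta$ flattened ellipses to $2\ell$. The numerical values of $\theta$, $\ell$, $N$ below are arbitrary illustrative choices.

```python
import numpy as np

theta, ell, N = 0.7, 3.0, 50

# (i) r(t)^2 = r0^2 - 2*theta*t for a circle with dr/dt = -theta/r,
#     so a circle of diameter l/N vanishes at time (l/N)^2 / (8*theta)
r0 = ell / (2 * N)
t_star = r0**2 / (2 * theta)
assert np.isclose(t_star, (ell / N) ** 2 / (8 * theta))

# (ii) perimeter of one ellipse with semi-axes a = l/(2N), b = m/2,
#      computed as a polyline length
def ellipse_perimeter(a, b, k=200_000):
    t = np.linspace(0.0, 2 * np.pi, k)
    x, y = a * np.cos(t), b * np.sin(t)
    return np.sum(np.hypot(np.diff(x), np.diff(y)))

for m in (1e-3, 1e-5):
    total = N * ellipse_perimeter(ell / (2 * N), m / 2)
    assert abs(total - 2 * ell) < 10 * m * N   # total -> 2*l as m -> 0
```

As the minor axis vanishes, each ellipse degenerates to its major axis counted twice, which is the source of the factor $2$ in the limiting perimeter.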
To conclude, we next show that \begin{equation*} S_{\mathrm{ac}, [0,\bar t]} (\Gamma_\delta) \to 2 \tau \ell\,. \end{equation*} Even if this is essentially a Freidlin-Wentzell argument for evaluating the quasi-potential in the reversible case, we provide the details of the computation. Denoting by $v_\delta$ and $\kappa_\delta$ the normal velocity and mean curvature of $\Gamma_\delta$, we write \begin{equation*} \frac 1{4\mu} (v_\delta -\theta \kappa_\delta)^2 = \frac 1{4\mu} (v_\delta +\theta \kappa_\delta)^2 - \frac{\theta}{\mu} \kappa_\delta v_\delta\;. \end{equation*} By construction of the path, which has been obtained by time reversal of the motion by mean curvature, the first term on the right-hand side above vanishes. Since \begin{equation*} \frac{\mathrm{d}}{\mathrm{d} t } \mathrm{Per}(\Gamma_\delta(t)) = -\int_{\Gamma_\delta(t)} v_\delta \kappa_\delta\,, \end{equation*} we conclude by using the Einstein relation \eqref{theta}.
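The completion of the square used above is elementary and can be verified in one line:

```python
import sympy as sp

v, theta, kappa, mu = sp.symbols('v theta kappa mu', positive=True)
lhs = (v - theta * kappa)**2 / (4 * mu)
rhs = (v + theta * kappa)**2 / (4 * mu) - theta * kappa * v / mu
assert sp.simplify(lhs - rhs) == 0
```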
\section{Introduction} \label{sec-intro} \setcounter{equation}{0} \subsection{Statement of the problem} Let $\Omega$ be an unbounded open set of ${\mathbb R}^3$ corresponding to a closed waveguide. Here by closed waveguide we mean that there exists a $\mathcal C^2$ bounded open simply connected set $\omega$ of $\mathbb{R}^2$ such that the following condition is fulfilled \begin{equation} \label{closed}\Omega \subset \omega\times {\mathbb R}.\end{equation} For $A\in L^\infty(\Omega)^3$, we define the magnetic Laplacian $\Delta_A$ by $$\Delta_A=\Delta+2iA\cdot\nabla+i\textrm{div}(A)-|A|^2.$$ According to \cite[Theorem 3.4 page 223]{EE}, for any $u\in H^1(\Omega)$ and $\varphi\in\mathcal C^\infty_0(\Omega)$, we have $u\varphi\in W^{1,1}_0(\Omega)$, where $W^{1,1}_0(\Omega)$ denotes the closure of $\mathcal C^\infty_0(\Omega)$ in $W^{1,1}(\Omega)$. Therefore, using a density argument we can prove that, for any $u\in H^1(\Omega)$ and $A\in L^\infty(\Omega)^3$, we have div$(A)u\in D'(\Omega)$ and $\Delta_A u\in D'(\Omega)$. Thus, for $q \in L^\infty(\Omega;\mathbb C)$ and $u\in H^1(\Omega)$, we can introduce the equation \begin{equation} \label{eq1} \Delta_Au + q u = 0,\quad\mbox{in}\ \Omega \end{equation} in the sense of distributions. Since we make no assumption on the boundary of $\Omega$, in a similar way to \cite{KU}, we define the trace map $\tau$ on $H^1(\Omega)$ by $\tau u=[u]$ with $[u]$ the class of $u$ in the quotient space $\frac{H^1(\Omega)}{H^1_0(\Omega)}$, where $H^1_0(\Omega)$ denotes the closure of $\mathcal C^\infty_0(\Omega)$ in $H^1(\Omega)$.
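For smooth data, the expanded expression of $\Delta_A$ agrees with the factorized form $(\mathrm{div}+iA\,\cdot)(\nabla+iA)$; this can be checked symbolically with abstract functions (a sketch, independent of the low-regularity analysis above):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
X = (x1, x2, x3)
u = sp.Function('u')(*X)
A = [sp.Function(f'a{j}')(*X) for j in (1, 2, 3)]
I = sp.I

# apply (grad + iA) to u, then (div + iA .) to the result
grad_u = [sp.diff(u, xj) + I * A[j] * u for j, xj in enumerate(X)]
lhs = sum(sp.diff(grad_u[j], X[j]) + I * A[j] * grad_u[j] for j in range(3))

# the expanded magnetic Laplacian: Delta u + 2i A.grad u + i div(A) u - |A|^2 u
rhs = sum(sp.diff(u, xj, 2) for xj in X) \
    + 2 * I * sum(A[j] * sp.diff(u, X[j]) for j in range(3)) \
    + I * sum(sp.diff(A[j], X[j]) for j in range(3)) * u \
    - sum(a**2 for a in A) * u

assert sp.simplify(sp.expand(lhs - rhs)) == 0
```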
We associate to any solution $u\in H^1(\Omega)$ of \eqref{eq1} the trace $ N_{A,q}u\in \left(\frac{H^1(\Omega)}{H^1_0(\Omega)}\right)'$, with $\left(\frac{H^1(\Omega)}{H^1_0(\Omega)}\right)'$ the dual space of $\frac{H^1(\Omega)}{H^1_0(\Omega)}$, defined by $$\left\langle N_{A,q}u,\tau g\right\rangle_{\left(\frac{H^1(\Omega)}{H^1_0(\Omega)}\right)',\frac{H^1(\Omega)}{H^1_0(\Omega)} }:=-\int_\Omega (\nabla+iA)u\cdot \overline{(\nabla+iA)g}dx+\int_\Omega qu\overline{g}dx,\ g\in H^1(\Omega).$$ Here, by using a density argument, one can prove that this map is well defined for $u$ solving \eqref{eq1} since for $g\in H^1_0(\Omega)$ the right-hand side of this identity vanishes. Recall that for $\Omega=\omega\times {\mathbb R}$ one can identify $\frac{H^1(\Omega)}{H^1_0(\Omega)}$ with $H^{\frac{1}{2}}(\partial\omega\times{\mathbb R}):=L^2({\mathbb R};H^{\frac{1}{2}}(\partial\omega))\cap H^{\frac{1}{2}}({\mathbb R};L^2(\partial\omega))$. Then, for $u\in H^1(\Omega)$ solving \eqref{eq1} and $A\in W^{1,\infty}(\Omega)^3$, we have $\tau u=u_{|\partial\Omega}$ and $$N_{A,q}u=-\partial_{\nu_A}u=-\partial_\nu u-i(A\cdot \nu)u\in H^{-\frac{1}{2}}(\partial\omega\times{\mathbb R})=(H^{\frac{1}{2}}(\partial\omega\times{\mathbb R}))',$$ with $\nu$ the outward unit normal vector to $\partial\omega\times {\mathbb R}$. This means that $-N_{A,q}$ is the natural extension of the magnetic normal derivative to the non-smooth setting of general unbounded domains satisfying \eqref{closed}. We introduce then the data \begin{equation} \label{data} \mathcal D_{A,q}:=\{(\tau u, N_{A,q}u):\ u\in H^1(\Omega), \textrm{ $u$ solves \eqref{eq1}}\}.\end{equation} Note that for $\Omega=\omega\times {\mathbb R}$, $A\in W^{1,\infty}(\Omega)^3$ and assuming that $0$ is not in the spectrum of $\Delta_A + q$ with Dirichlet boundary condition, $\mathcal D_{A,q}$ corresponds, up to the sign, to the graph of the so-called Dirichlet-to-Neumann map associated with \eqref{eq1}.
In this paper we consider the simultaneous recovery of the magnetic field associated with $A$ and of the electric potential $q$ from the data $\mathcal D_{A,q}$. We consider both results with full and partial data. \subsection{Physical motivations} Let us first observe that the problem addressed in this paper is linked to the so-called electrical impedance tomography (EIT in short) method and its applications in medical imaging and geophysical prospection (see \cite{Uh} for more details). The statement of the present inverse problem in an unbounded closed waveguide can be addressed in the context of problems of transmission over long distances or transmission through particular structures, with a large length-to-diameter ratio, such as nanostructures. Here the goal of the inverse problem can be described as the unique recovery of an electromagnetic impurity perturbing the guided propagation (see \cite{CL,KBF}). Let us also mention that in this paper we consider general closed waveguides, only subjected to condition \eqref{closed}, that do not necessarily have a cylindrical shape, in contrast to other related works like \cite{CKS2,CKS3,Ki4}. This means that we can consider our inverse problem in closed waveguides with different types of geometrical deformations, including bends and twisting, which can be used in several contexts for improving the propagation of signals (see for instance \cite{Sr}). \subsection{State of the art} We recall that the Calder\'on problem, addressed first in \cite{Ca}, has attracted much attention over the last decades (see for instance \cite{Ch,Uh} for an overview of several aspects of this problem). The first positive answer to this problem in dimension $n \geq 3$ has been given by Sylvester and Uhlmann in \cite{SU}. Here the authors introduced the so-called complex geometric optics (CGO in short) solutions which remain one of the most important tools for the study of this problem. This last result has been extended in several ways.
For instance, we can mention the problem stated with partial data by \cite{BU} and improved by \cite{KSU}. One of the first results about the recovery, modulo gauge invariance, of electromagnetic potentials has been given in \cite{Suu}, where the author proved the determination of the magnetic field associated with magnetic potentials $A$ lying in $ W^{2,\infty}$ by assuming that the magnetic field is sufficiently small. The smallness assumption of \cite{Suu} was removed by \cite{NSU} for smooth coefficients. Since then, \cite{T} extended this result to magnetic potentials lying in $\mathcal C^1$ and \cite{Sa1} extended it to magnetic potentials lying in a Dini class. To the best of our knowledge, the result with the weakest regularity assumption so far, for general bounded domains, is the one of \cite{KU} where the authors considered bounded electromagnetic potentials. More recently, in the specific case of a ball in ${\mathbb R}^3$, \cite{H} proved the recovery of unbounded magnetic potentials. Concerning results with partial data associated with this last problem, we mention the work of \cite{Chu,FKSU} and, concerning the stability issue, without being exhaustive, we refer to \cite{B,CDR1,CDR2,CP,Pot2,Pot1,Tz}. We mention also the work of \cite{CK,HK,Ki3} related to problems for hyperbolic and parabolic equations treated with an approach similar to the one considered for elliptic equations. Note that all the above mentioned results have been stated in a bounded domain. Only a small number of articles have studied such inverse boundary value problems in an unbounded domain. In \cite{LU}, the authors combined unique continuation results with CGO solutions and a Carleman estimate borrowed from \cite{BU} in order to prove the unique recovery of compactly supported electric potentials of a Schr\"odinger operator in a slab from partial boundary measurements.
This last result has been extended to magnetic Schr\"odinger operators by \cite{KLU} and the stability issue has been addressed by \cite{CM}. We refer also to \cite{Ik,L1,L2,SW,Y} for other related inverse problems stated in a slab. In \cite{CKS2,CKS3}, the authors considered the stable recovery of coefficients periodic along the axis of an infinite cylindrical domain. More recently, \cite{Ki4} considered, for what seems to be the first time, the recovery of non-compactly supported and non-periodic electric potentials appearing in an infinite cylindrical domain. The results of \cite{Ki4} include also an extension of the work of \cite{LU} to the recovery of non-compactly supported coefficients in a slab. We mention also the works \cite{BKS,BKS1, CS, KKS,Ki1, KPS1, KPS2} treating the determination of coefficients appearing in different PDEs on an infinite cylindrical domain from boundary measurements. \subsection{Statement of the main results} Let us recall that there is an obstruction to the simultaneous recovery of $A$, $q$ from the data $\mathcal D_{A,q}$ given by the gauge invariance. More precisely, according to \cite[Lemma 3.1]{KU}, which is stated for bounded domains but whose arguments can be extended without any difficulty to unbounded domains satisfying \eqref{closed}, the data $\mathcal D_{A,q}$ satisfies the following gauge invariance. \begin{equation} \label{gauge} \mathcal D_{A+\nabla\varphi,q}=\mathcal D_{A,q},\quad \varphi\in\{ h_{|\Omega}: \ h\in W^{1,\infty}_{loc}({\mathbb R}^3;{\mathbb R}),\ \nabla_xh\in L^\infty({\mathbb R}^3)^3,\ h_{|{\mathbb R}^3\setminus\Omega}=0\}.\end{equation} Taking into account this obstruction, for $A=(a_1,a_2,a_3)$, we consider the recovery of the magnetic field corresponding to the 2-form valued distribution $dA$ defined by $$dA:=\underset{1\leq j<k\leq 3}{\sum} (\partial_{x_j}a_k-\partial_{x_k}a_j)dx_j\wedge dx_k$$ and of $q$.
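The gauge invariance \eqref{gauge} rests on the transformation identity $\Delta_{A+\nabla\varphi}(\mathrm e^{-i\varphi}u)=\mathrm e^{-i\varphi}\Delta_A u$, valid with the sign convention of $\Delta_A$ fixed above. A symbolic check in one space dimension (the computation is identical coordinate-wise in ${\mathbb R}^3$):

```python
import sympy as sp

x = sp.symbols('x', real=True)
u = sp.Function('u')(x)
A = sp.Function('A')(x)
phi = sp.Function('phi')(x)
I = sp.I

def delta_mag(a, w):
    # 1D magnetic Laplacian: w'' + 2i a w' + i a' w - a^2 w
    return sp.diff(w, x, 2) + 2*I*a*sp.diff(w, x) \
        + I*sp.diff(a, x)*w - a**2*w

lhs = delta_mag(A + sp.diff(phi, x), sp.exp(-I*phi) * u)
rhs = sp.exp(-I*phi) * delta_mag(A, u)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

Since the phase factor $\mathrm e^{-i\varphi}$ equals $1$ on the boundary whenever $\varphi$ vanishes there, solutions of \eqref{eq1} for $A$ and for $A+\nabla\varphi$ produce the same traces.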
Assuming that $\Omega$ is simply connected and with some suitable regularity assumptions (see for instance Section 4.2), one can check that this result is equivalent to the recovery of the electromagnetic potential modulo gauge invariance. This paper contains three main results. In the first main result, stated in Theorem \ref{t1}, we consider the unique determination of electromagnetic potentials with low regularity from the full data $\mathcal D_{A,q}$. In our second main result, stated in Theorem \ref{c1}, we prove, for electromagnetic potentials known on a neighborhood of the boundary outside a compact set, that measurements restricted to a bounded subset of $\partial\Omega$ can also recover uniquely the magnetic field and the electric potential. Finally, in our last result, stated in Theorem \ref{t6}, we give a partial data result by proving the unique recovery of a magnetic field and an electric potential associated with a general class of electromagnetic potentials from a restriction of the data $\mathcal D_{A,q}$. In our first main result we consider a general class of bounded electromagnetic potentials and a general closed waveguide. This result can be stated as follows. \begin{theorem} \label{t1} Let $\Omega$ be an unbounded domain satisfying \eqref{closed}, let $A_1,A_2\in L^\infty(\Omega)^3\cap L^2(\Omega)^3$ be such that $A_1-A_2\in L^1(\Omega)^3$ and let $q_1,q_2\in L^\infty(\Omega;\mathbb C)$. Then the condition \begin{equation} \label{t1a} \mathcal D_{A_1,q_1}=\mathcal D_{A_2,q_2} \end{equation} implies $dA_1=dA_2$. Moreover, assuming that $q_1-q_2\in L^2(\Omega;\mathbb C)$, \eqref{t1a} implies $q_1=q_2$. \end{theorem} Let us remark that Theorem \ref{t1} is stated with boundary measurements on all parts of the unbounded boundary $\partial\Omega$. Despite the general setting of this problem, it may be difficult for several applications, like transmission over long distances, to have access to such data.
In order to make the measurements more relevant for some potential applications, we need to consider data restricted to a bounded portion of $\partial\Omega$. This will be the goal of our second result, where we extend Theorem \ref{t1} to the recovery of coefficients from measurements restricted to bounded portions of $\partial\Omega$. From now on, we assume that $\Omega$ is a domain with Lipschitz boundary. For all $s\in\left[0,\frac{1}{2}\right]$, we denote by $H^{s}_{loc}(\partial\Omega)$ the set of $f\in L^2_{loc}(\partial\Omega)$ such that for any $\chi\in \mathcal C^\infty_0({\mathbb R}^3)$, $\chi f \in H^{s}(\partial\Omega)$. For any $u\in H^1(\Omega)$, we can define $\tau_0 u=u_{|\partial\Omega}$ as an element of $H^{\frac{1}{2}}_{loc}(\partial\Omega)$. In the same way, for $U$ a closed (resp. open) subset of $\partial\Omega$ and for $u\in H^1(\Omega)$ solving $\Delta_Au+qu=0$, with $A\in L^\infty(\Omega)^3$ and $q\in L^\infty(\Omega)$, we denote by $N_{A,q}u_{|U}$ the restriction of $N_{A,q}u$ to the subspace $$\{\tau g:\ g\in H^1(\Omega),\ \textrm{supp}(\tau_0g)\subset U\}$$ of $\frac{H^1(\Omega)}{H^1_0(\Omega)}$. Note that here $N_{A,q}u_{|U}$ is the natural extension of the restriction, up to the sign, of the magnetic normal derivative of $u$ to the set $U$. For $r>0$ and $S_r=\partial\Omega\cap(\overline{\omega}\times[-r,r]) $, we can consider the restriction $\mathcal D_{A,q, r}$ of the data $\mathcal D_{A,q}$ given by \begin{equation} \label{pada}\mathcal D_{A,q, r}:=\{(\tau u, N_{A,q}u_{|S_r}):\ u\in H^1(\Omega), \textrm{ $u$ solves \eqref{eq1}},\ \textrm{supp}(\tau_0 u)\subset S_r\}.\end{equation} In the spirit of \cite[Corollary 1.3]{Ki4}, fixing $\delta\in(0,r/2)$, we will apply Theorem \ref{t1} in order to prove the recovery of coefficients known on a neighborhood of the boundary outside $\Omega\cap (\omega\times(\delta-r,r-\delta))$ from the data $\mathcal D_{A,q, r}$.
For this purpose we need the following assumption on $\Omega$ and the admissible coefficients.\\ \textbf{Assumption 1:} For $j=1,2$, and for any $F\in L^2(\Omega)$ the equations $\Delta_{A_j}u_j+\overline{q_j}u_j=F$ and $\Delta_{A_j}u_j+q_ju_j=F$ admit respectively a solution $u_j\in H^1_0(\Omega)$. We mention that Assumption 1 will be fulfilled if for instance $\Omega=\omega_1\times {\mathbb R}$, with $\omega_1$ a bounded open subset of ${\mathbb R}^2$ with Lipschitz boundary, and if $0$ is not in the spectrum of the operators $\Delta_{A_j}+q_j$ and $\Delta_{A_j}+\overline{q_j}$, $j=1,2$, with Dirichlet boundary condition. Let $\textbf{n}$ be the outward unit normal vector of $\partial\Omega$.\footnote{Since $\Omega$ is only subjected to the condition $\Omega\subset\Omega_1$, with $\Omega_1:=\omega\times{\mathbb R}$, we may have $\Omega\neq\Omega_1$; this is the reason why we use a different notation for the outward unit normal vectors of $\Omega_1$ and $\Omega$.}
Before we state our result, let us also recall that for any $A\in L^\infty(\Omega)^3$ satisfying $\textrm{div}(A)\in L^\infty(\Omega)$, we can define the trace map $A\cdot \textbf{n}$ as the unique element of $$\mathcal B\left(\frac{H^1(\Omega)}{H^1_0(\Omega)}; \left(\frac{H^1(\Omega)}{H^1_0(\Omega)}\right)'\right)$$ defined by \begin{equation} \label{An}\left\langle (A\cdot \textbf{n})\tau g,\tau h\right\rangle_{\left(\frac{H^1(\Omega)}{H^1_0(\Omega)}\right)',\frac{H^1(\Omega)}{H^1_0(\Omega)}}:=\int_{\Omega}\textrm{div}(A)h\overline{g}dx+\int_{\Omega}A\cdot\nabla h\overline{g}dx+\int_{\Omega} h(A\cdot\overline{\nabla g})dx,\ g,h\in H^1(\Omega).\end{equation} Again, by a density argument, one can easily check the validity of this definition by noticing that the right-hand side of the identity vanishes as soon as $g\in H^1_0(\Omega)$ or $h\in H^1_0(\Omega)$. Here we use again the fact that, for $u\in H^1(\Omega)$ and $\varphi\in\mathcal C^\infty_0(\Omega)$, we have $u\varphi\in W^{1,1}_0(\Omega)$. Assuming that Assumption 1 is fulfilled, we state our second main result as follows. \begin{theorem}\label{c1} Let $\Omega$ be a connected open set with Lipschitz boundary satisfying \eqref{closed}. For $j=1,2$, let $A_j\in L^\infty(\Omega)^3\cap L^2(\Omega)^3$ with div$(A_j)\in L^\infty(\Omega)$, let $q_j\in L^\infty(\Omega;\mathbb C)$, and assume that $A_1-A_2\in L^1(\Omega)^3$. In addition, let Assumption 1 be fulfilled and, for $A_j\cdot \textbf{n}$, $j=1,2$, defined by \eqref{An} with $A=A_j$, let the condition \begin{equation} \label{c1d}A_1\cdot \textbf{n} =A_2\cdot \textbf{n}\end{equation} be fulfilled.
Assume also that there exist $\delta\in(0,r/2)$ and two open connected sets $\Omega_{\pm}\subset\Omega$ with Lipschitz boundary such that \begin{equation} \label{c1a}\partial\Omega\cap (\overline{\omega}\times(-\infty,-r+\delta])\subset \partial\Omega_-,\quad \partial\Omega\cap (\overline{\omega}\times[r-\delta,+\infty))\subset \partial\Omega_+,\end{equation} \begin{equation} \label{c1b} A_1(x)=A_2(x),\quad q_1(x)=q_2(x),\quad x\in\Omega_-\cup\Omega_+.\end{equation} Then, the condition \begin{equation} \label{c1c} \mathcal D_{A_1,q_1, r}=\mathcal D_{A_2,q_2, r}\end{equation} implies $dA_1=dA_2$. Moreover, assuming that $q_1-q_2\in L^2(\Omega;\mathbb C)$, \eqref{c1c} implies $q_1=q_2$. \end{theorem} For our last main result we will consider the specific case where $\Omega=\omega\times{\mathbb R}$. This time we want to consider the recovery of the coefficients not from full boundary measurements but from partial boundary measurements, without assuming the knowledge of the coefficients close to the boundary. We remark that $\partial\Omega= \partial \omega\times{\mathbb R} $ and that the outward unit normal vector $\nu$ to $\partial\Omega$ takes the form $$ \nu(x',x_3)=(\nu'(x'),0)^T,\ x=(x',x_3)\in\partial\Omega, $$ with $\nu'$ the outward unit normal vector of $\partial \omega$. In light of this identity, from now on, we denote by $\nu$ both the exterior unit vectors normal to $\partial \omega$ and to $\partial \omega\times{\mathbb R}$.
We fix $\theta_0 \in \mathbb{S}^1 :=\{ y\in{\mathbb R}^2;\ \abs{y}=1\}$ and we introduce the $\theta_0$-illuminated (resp., $\theta_0$-shadowed) face of $\partial \omega$, defined by $$\partial \omega_{\theta_0}^- := \{ x \in \partial \omega;\ \theta_0 \cdot \nu(x) \leq 0 \}\ (\mbox{resp.},\ \partial \omega_{\theta_0}^+= \{x \in \partial \omega;\ \theta_0 \cdot \nu(x) \geq 0\}).$$ From now on, we denote by $x \cdot y := \sum_{j=1}^k x_j y_j$ the Euclidean scalar product of any two vectors $x:=(x_1,\ldots,x_k)^T$ and $y:=(y_1,\ldots,y_k)^T$ of $\mathbb C^k$. We fix a portion $V$ of $\partial\Omega$ of the form $V:=V'\times {\mathbb R}$, where $V'$ is an arbitrary open neighborhood of $\partial \omega_{\theta_0}^-$ in $\partial\omega$. We introduce also the set of data $$\mathcal D_{A,q, V}=\{(\tau u, N_{A,q}u_{|V}):\ u\in H^1(\Omega), \textrm{ $u$ solves \eqref{eq1}}\}.$$ Then we can state our last main result as follows. \begin{theorem} \label{t6} Let $\Omega=\omega\times{\mathbb R}$ and, for $j=1,2$, let $A_j\in L^\infty(\Omega)^3\cap L^2(\Omega)^3$, div$(A_j)\in L^\infty(\Omega)$, $q_j\in L^\infty(\Omega;\mathbb C)$, $A_1-A_2\in L^1(\Omega)^3$. Let also $A_1$ and $A_2$ satisfy \eqref{c1d}. Then the condition \begin{equation} \label{t6a} \mathcal D_{A_1,q_1, V}=\mathcal D_{A_2,q_2,V} \end{equation} implies $dA_1=dA_2$. Moreover, assuming that $q_1-q_2\in L^1(\Omega;\mathbb C)$, \eqref{t6a} implies also that $q_1=q_2$. \end{theorem} \subsection{Comments about our results} To the best of our knowledge, Theorem \ref{t1} is the first result of recovery of a magnetic field and an electric potential in an unbounded domain in such a general setting. This point can be seen through four different aspects of the theorem. First, Theorem \ref{t1} is stated in a general unbounded domain subject only to condition \eqref{closed}.
This makes an important difference from other related results which, to the best of our knowledge, have all been stated in specific unbounded domains like a slab, the half space or a cylindrical domain (see \cite{KLU,LU,CKS2,CKS3}). In particular, Theorem \ref{t1} holds true for domains having different types of geometrical deformations like bends or twisting, which are frequently used in transmission problems for improving the propagation. Second, to the best of our knowledge, in contrast to all other results stated for elliptic equations in an unbounded domain, Theorem \ref{t1} requires no assumptions about the spectrum of the magnetic Schr\"odinger operator associated with the electromagnetic potential under consideration. Usually such conditions impose restrictions on the class of coefficients under consideration; here we avoid such constraints. Third, we prove, for what seems to be the first time, the recovery of electromagnetic potentials that are neither compactly supported nor periodic. Actually we consider a class of electromagnetic potentials admitting various types of behavior outside a compact set (roughly speaking we consider magnetic potentials lying in $L^1(\Omega)^3$ and electric potentials lying in $L^2(\Omega)$). Fourth, Theorem \ref{t1} seems to be the first result stated for an unbounded domain with electromagnetic potentials having regularity comparable to \cite{KU}, where the recovery of electromagnetic potentials has been stated with the weakest regularity condition so far for general bounded domains. The main tools in our analysis are CGO solutions suitably designed for unbounded domains satisfying \eqref{closed}. Here, in contrast to \cite{CKS2,CKS3,KLU,LU}, we do not restrict our analysis to compactly supported or periodic coefficients, for which, by means of unique continuation or Floquet decomposition, one can transform the problem stated on an unbounded domain into a problem on a bounded domain.
As in \cite{Ki4}, we introduce a new class of CGO solutions designed for infinite cylindrical domains. The difficulties in the construction of such solutions stem both from the fact that we consider magnetic potentials that are not compactly supported and from the fact that we need to preserve the square integrability of the CGO solutions, which is not guaranteed by the usual CGO solutions in unbounded domains. In addition, as in \cite{KU}, we build CGO solutions designed for bounded magnetic potentials. The construction of our CGO solutions requires Carleman estimates in Sobolev spaces of negative order that we prove by extending some results, similar to those of \cite{FKSU,ST}, to infinite cylindrical domains. Let us observe that the construction of CGO solutions satisfying the square integrability property works only for domains contained in an infinite cylinder. For instance, we cannot apply our construction to domains such as a slab or a half-space. However, in a similar way to \cite[Corollary 1.4]{Ki4}, applying Theorems \ref{t1} and \ref{c1}, one can prove that the result of \cite{KLU} can be extended to electromagnetic potentials supported in an infinite cylinder. In this paper we consider electric potentials $q$ that can be complex valued but magnetic potentials $A$ taking values in ${\mathbb R}^3$. As in \cite{KLU,KU}, we could state our result for magnetic potentials taking values in $\mathbb C^3$, but for simplicity we restrict our analysis to real valued magnetic potentials. \subsection{Outline} This paper is organized as follows. In Section 2, we derive some Carleman estimates that will be useful both for building the CGO solutions and for restricting the data in Theorem \ref{t6}. In Section 3, we use the Carleman estimates in order to build our CGO solutions. Combining all these tools, in Sections 4, 5 and 6 we prove Theorem \ref{t1}, Theorem \ref{c1} and Theorem \ref{t6}, respectively.
Finally, in Section 7 we explain how our result can be extended to higher dimensions. \section{Carleman estimates} From now on, we fix $\Omega_1=\omega\times{\mathbb R}$. We associate to every point $x \in \Omega_1$ the coordinates $x=(x',x_3)$, where $x_3 \in {\mathbb R}$ and $x':= (x_1,x_2) \in \omega$. In a similar way to the discussion before the statement of Theorem \ref{t6}, we denote by $\nu$ both the exterior unit normal vector of $\partial \omega$ and the one of $\partial\Omega_1$. The goal of this section is to establish two Carleman estimates for the magnetic Laplace operator in the unbounded cylindrical domain $\Omega_1$. We start with a Carleman estimate which will be our first main tool. Then, using this Carleman estimate we will derive a Carleman estimate in Sobolev spaces of negative order. \subsection{General Carleman estimate} In order to prove our Carleman estimates we first introduce a weight function depending on two parameters $s,\rho\in(1,+\infty)$ and we consider, for $\rho>s>1$ and $\theta\in\mathbb S^1$, the perturbed weight \begin{equation} \label{phi}\varphi_{\pm,s}(x',x_3):=\pm \rho\theta\cdot x'-s{(x'\cdot\theta)^2\over 2},\quad x=(x',x_3)\in\omega\times{\mathbb R}=\Omega_1.\end{equation} We define \[ P_{A,q,\pm,s}:=e^{-\varphi_{\pm,s}}(\Delta +2iA\cdot\nabla +q)e^{\varphi_{\pm,s}}.\] As in \cite{FKSU,ST}, we consider a convexified weight, instead of the linear weight used in \cite[Proposition 31]{Ki4}, in order to absorb first order perturbations of the Laplacian. Our first Carleman estimates can be seen as an extension of \cite[Proposition 2.3]{FKSU}, stated with a linear weight, to unbounded cylindrical domains. These estimates take the following form. \begin{proposition}\label{p1} Let $A\in L^\infty(\Omega_1)^3$ and $q\in L^\infty(\Omega_1;\mathbb C)$.
Then there exist $s_1>1$ and, for all $s>s_1$, $\rho_1(s)>1$ such that for any $v\in\mathcal C^2_0({\mathbb R}^3)\cap H^1_0(\Omega_1)$ the estimate \begin{equation} \label{p1a} \begin{aligned}&\rho\int_{\partial\omega_{\pm,\theta}\times{\mathbb R}} |\partial_\nu v|^2|\theta\cdot \nu| d\sigma(x)+s\rho^{-2}\int_{\Omega_1}|\Delta v|^2dx+ s\int_{\Omega_1}|\nabla v|^2dx+s\rho^2\int_{\Omega_1}|v|^2dx \\ &\leq C\left[\norm{P_{A,q,\pm,s}v}^2_{L^2(\Omega_1)}+\rho\int_{\partial\omega_{\mp,\theta}\times{\mathbb R}} |\partial_\nu v|^2|\theta\cdot \nu| d\sigma(x)\right]\end{aligned}\end{equation} holds true for $s>s_1$, $\rho\geq \rho_1(s)$ with $C$ depending only on $\Omega_1$ and $M\geq \norm{q}_{L^\infty(\Omega_1)}+\norm{A}_{L^\infty(\Omega_1)^3}$. \end{proposition} \begin{proof} We start by proving that for all $s>1$ there exists $\rho_1(s)$ such that for $\rho >\rho_1(s)$ we have \begin{equation} \label{p1b}\begin{aligned}\norm{e^{-\varphi_{\pm,s}}\Delta e^{\varphi_{\pm,s}}v}^2_{L^2(\Omega_1)}\geq&\rho\int_{\partial\omega_{\pm,\theta}\times{\mathbb R}} |\partial_\nu v|^2|\theta\cdot \nu| d\sigma(x)-8\rho\int_{\partial\omega_{\mp,\theta}\times{\mathbb R}} |\partial_\nu v|^2|\theta\cdot \nu| d\sigma(x)+s\int_{\Omega_1}|\nabla v|^2dx\\ \ &+\frac{s\rho^2}{2}\int_{\Omega_1}|v|^2dx+cs\rho^{-2}\int_{\Omega_1}|\Delta v|^2dx,\end{aligned}\end{equation} with $c$ depending only on $\Omega_1$. Using this estimate, we will derive \eqref{p1a}. The proof of this result being similar for $e^{-\varphi_{+,s}}\Delta e^{\varphi_{+,s}}$ and $e^{-\varphi_{-,s}}\Delta e^{\varphi_{-,s}}$, we will only consider it for $e^{-\varphi_{+,s}}\Delta e^{\varphi_{+,s}}$.
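For the reader's convenience, let us record the elementary computation behind the decomposition used below. From \eqref{phi} we have \[\nabla'\varphi_{+,s}=(\rho-s(x'\cdot\theta))\theta,\qquad \Delta'\varphi_{+,s}=-s,\qquad \partial_{x_3}\varphi_{+,s}=0,\] so that, for any smooth function $v$, \[e^{-\varphi_{+,s}}\Delta e^{\varphi_{+,s}}v=\Delta v+2\nabla\varphi_{+,s}\cdot\nabla v+\left(|\nabla\varphi_{+,s}|^2+\Delta\varphi_{+,s}\right)v,\qquad |\nabla\varphi_{+,s}|^2=(\rho-s(x'\cdot\theta))^2.\]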
We decompose $e^{-\varphi_{+,s}}\Delta e^{\varphi_{+,s}}$ into three terms \[e^{-\varphi_{+,s}}\Delta e^{\varphi_{+,s}}=P_{1,+}+P_{2,+}+P_{3,+},\] with \[P_{1,+}=\Delta'+|\nabla\varphi_{+,s}|^2-\Delta' \varphi_{+,s}=\Delta'+\rho^2-2s\rho (x'\cdot\theta)+s^2 (x'\cdot\theta)^2+s,\] \[P_{2,+}=\partial_{x_3}^2,\quad P_{3,+}=2\nabla'\varphi_{+,s}\cdot\nabla' +2\Delta' \varphi_{+,s}=2(\rho-s (x'\cdot\theta))\theta \cdot\nabla'-2s.\] Here $\Delta':=\partial_{x_1}^2+\partial_{x_2}^2$, $\nabla':=(\partial_{x_1},\partial_{x_2})^T$ and $\theta\cdot\nabla'=\theta_1\partial_{x_1}+\theta_2\partial_{x_2}$. Using arguments similar to \cite[Proposition 2.3]{FKSU}, one can check that for all $s>1$ there exists $\rho_2(s)>1$ such that for $\rho >\rho_2(s)$ and $y\in\mathcal C^\infty(\overline{\omega})\cap H^1_0(\omega)$ we have $$\begin{aligned}&2\mathfrak R\int_\omega P_{1,+}y\overline{P_{3,+}y}dx'\\ &\geq \rho\int_{\partial\omega_{\pm,\theta}} |\partial_\nu y|^2|\theta\cdot \nu| d\sigma(x')-8\rho\int_{\partial\omega_{\mp,\theta}} |\partial_\nu y|^2|\theta\cdot \nu| d\sigma(x')+s\rho^2\int_{\omega}|y|^2dx'+s\int_\omega |\nabla' y|^2dx'.\end{aligned}$$ Applying this estimate to $v(\cdot,x_3):=x'\mapsto v(x',x_3)$, $x_3\in{\mathbb R}$, we obtain $$\begin{aligned}2\mathfrak R\int_\omega P_{1,+}v(\cdot,x_3)\overline{P_{3,+}v(\cdot,x_3)}dx'&\geq \rho\int_{\partial\omega_{\pm,\theta}} |\partial_\nu v(\cdot,x_3)|^2|\theta\cdot \nu| d\sigma(x')+s\int_\omega |\nabla' v(\cdot,x_3)|^2dx'\\ \ &\ \ \ -8\rho\int_{\partial\omega_{\mp,\theta}} |\partial_\nu v(\cdot,x_3)|^2|\theta\cdot \nu| d\sigma(x')+s\rho^2\int_{\omega}|v(\cdot,x_3)|^2dx',\quad x_3\in{\mathbb R}.\end{aligned}$$ Integrating this estimate with respect to $x_3\in{\mathbb R}$, we get \begin{equation} \label{p1c}\begin{aligned}&\norm{P_{1,+}v+P_{2,+}v+P_{3,+}v}^2_{L^2(\Omega_1)}\\ &\geq \norm{P_{1,+}v+P_{2,+}v}^2_{L^2(\Omega_1)}+2\mathfrak R\int_{\Omega_1} P_{1,+}v\overline{P_{3,+}v}dx+2\mathfrak R\int_{\Omega_1}
P_{2,+}v\overline{P_{3,+}v}dx \\ &\geq \norm{P_{1,+}v+P_{2,+}v}^2_{L^2(\Omega_1)}+2\mathfrak R\int_{\Omega_1} P_{2,+}v\overline{P_{3,+}v}dx+2\rho\int_{\partial\omega_{+,\theta}\times{\mathbb R}} |\partial_\nu v|^2|\theta\cdot \nu| d\sigma(x) \\ &\ \ \ -8\rho\int_{\partial\omega_{-,\theta}\times{\mathbb R}} |\partial_\nu v|^2|\theta\cdot \nu| d\sigma(x)+s\rho^2\int_{\Omega_1} |v|^2dx+s\int_{\Omega_1} |\nabla' v|^2dx .\end{aligned}\end{equation} On the other hand, integrating by parts with respect to $x_3\in{\mathbb R}$ and then with respect to $x'\in\omega$, we find \begin{equation} \label{p1d}\begin{aligned}\mathfrak R\int_{\Omega_1} P_{2,+}v\overline{P_{3,+}v}dx&=-\int_{\Omega_1}(\rho-s (x'\cdot\theta))\theta \cdot\nabla'|\partial_{x_3}v|^2dx+2s\int_{\Omega_1}|\partial_{x_3}v|^2dx\\ \ &=s\int_{\Omega_1}|\partial_{x_3}v|^2dx.\end{aligned}\end{equation} Moreover, fixing $$\tilde{c}=4\left(3+\sup_{x'\in\overline{\omega}}|x'|\right)^2,\quad \rho_1(s)=\rho_2(s)+\tilde{c}^{-1}\sqrt{s},$$ we deduce that, for $\rho>\rho_1(s)$, we have $$\norm{P_{1,+}v+P_{2,+}v}^2_{L^2(\Omega_1)}\geq s\tilde{c}^{-1}\rho^{-2}\norm{P_{1,+}v+P_{2,+}v}^2_{L^2(\Omega_1)}\geq s(2\tilde{c})^{-1}\rho^{-2}\norm{\Delta v}_{L^2(\Omega_1)}^2-\frac{s\rho^2}{2} \norm{ v}_{L^2(\Omega_1)}^2.$$ Combining this with \eqref{p1c}-\eqref{p1d} we deduce \eqref{p1b}. Now let us complete the proof of \eqref{p1a}. For this purpose, we introduce $$P_{4,\pm}=2iA\cdot\nabla+2iA\cdot\nabla \varphi_{\pm,s}+q=2iA\cdot\nabla+2(\pm\rho-s(x'\cdot\theta)) iA'\cdot\theta +q,$$ with $A=(a_1,a_2,a_3)$ and $A'=(a_1,a_2)$, and we recall that $P_{A,q,\pm,s}=e^{-\varphi_{\pm,s}}\Delta e^{\varphi_{\pm,s}}+P_{4,\pm}$. 
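In the next step we use the elementary inequality, valid in any normed space, \[\norm{a+b}^2\geq \frac{1}{2}\norm{a}^2-\norm{b}^2,\] which follows from $\norm{a}^2\leq(\norm{a+b}+\norm{b})^2\leq 2\norm{a+b}^2+2\norm{b}^2$.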
We find \[\begin{aligned}&\norm{P_{A,q,\pm,s}v}^2_{L^2(\Omega_1)}\\ &\geq {\norm{e^{-\varphi_{\pm,s}}\Delta e^{\varphi_{\pm,s}}v}^2_{L^2(\Omega_1)}\over 2}-\norm{P_{4,\pm}v}_{L^2(\Omega_1)}^2\\ &\geq {\norm{e^{-\varphi_{\pm,s}}\Delta e^{\varphi_{\pm,s}}v}^2_{L^2(\Omega_1)}\over 2}-3\norm{A}_{L^\infty(\Omega_1)}^2\int_{\Omega_1} |\nabla v|^2dx -3\left(16\norm{A}_{L^\infty(\Omega_1)}^2\rho +\norm{q}_{L^\infty(\Omega_1)}^2\right)\int_{\Omega_1} | v|^2dx .\end{aligned}\] Fixing $s_1=48\norm{A}_{L^\infty(\Omega_1)}^2+6$, we deduce \eqref{p1a} from \eqref{p1b}. \end{proof} A direct consequence of these Carleman estimates is the following result which will be useful for the proof of Theorem \ref{t6}. \begin{co}\label{c2} Let $A\in L^\infty(\Omega_1)^3$ and $q\in L^\infty(\Omega_1;\mathbb C)$. There exists $\rho_1'>0$ such that for any $u\in\mathcal C^2_0({\mathbb R}^3)\cap H^1_0(\Omega_1)$ the estimate \begin{equation}\label{c2a}\begin{array}{l}\rho\int_{\partial\omega_{+,\theta}\times{\mathbb R}}e^{-2\rho\theta\cdot x'}\abs{\partial_\nu u}^2\abs{\theta\cdot\nu(x) } d\sigma(x) +\rho^2\int_{\Omega_1} e^{-2\rho\theta\cdot x'}\abs{u}^2dx+\int_{\Omega_1} e^{-2\rho\theta\cdot x'}|\nabla u|^2dx\\ \leq C\left(\int_{\Omega_1} e^{-2\rho\theta\cdot x'}\abs{(-\Delta+2iA\cdot\nabla+q)u}^2dx+\rho\int_{\partial\omega_{-,\theta}\times{\mathbb R}}e^{-2\rho\theta\cdot x'}\abs{\partial_\nu u}^2\abs{\theta\cdot\nu(x) }d\sigma(x)\right)\end{array}\end{equation} holds true for $\rho\geq \rho_1'$ with $C$ depending only on $\Omega_1$ and $M\geq \norm{q}_{L^\infty(\Omega_1)}+\norm{A}_{L^\infty(\Omega_1)^3}$.
\end{co} \begin{proof} We fix $u\in\mathcal C^2_0({\mathbb R}^3)\cap H^1_0(\Omega_1)$ and we set $v=e^{-\varphi_{+,s}}u$, so that $$\int_{\Omega_1} e^{-2\varphi_{+,s}}|(-\Delta+2iA\cdot\nabla+q)u|^2dx=\int_{\Omega_1}|P_{A,q,+,s}v|^2dx.$$ The fact that $v\in H^1_0(\Omega_1)$ implies $\partial_\nu v_{\vert\partial\Omega_1}=e^{-\rho\theta\cdot x'}e^{{s(x'\cdot\theta)^2\over2}}\partial_\nu u_{\vert\partial\Omega_1}$ and we deduce that \begin{equation} \label{c2b}\int_{\partial\omega_{+,\theta}\times{\mathbb R}} |\partial_\nu v|^2|\theta\cdot\nu| d\sigma(x)\geq \int_{\partial\omega_{+,\theta}\times{\mathbb R}} e^{-2\rho\theta\cdot x'}|\partial_\nu u|^2|\theta\cdot\nu| d\sigma(x),\end{equation} \begin{equation} \label{c2c}\int_{\partial\omega_{-,\theta}\times{\mathbb R}} |\partial_\nu v|^2|\theta\cdot\nu| d\sigma(x)\leq e^{sb^2}\int_{\partial\omega_{-,\theta}\times{\mathbb R}} e^{-2\rho\theta\cdot x'}|\partial_\nu u|^2|\theta\cdot\nu| d\sigma(x),\end{equation} with $b=\left(2+2\sup_{x'\in\omega}|x'|\right)$. Moreover, since \[ \nabla u(x)=\nabla (e^{\varphi_{+,s}}v)=(\rho-s\,x'\cdot\theta)(\theta,0)\,u+e^{\rho\theta\cdot x'}e^{-{s(x'\cdot\theta)^2\over2}}\nabla v,\quad x=(x',x_3)\in\omega\times{\mathbb R},\] we obtain \[\int_{\Omega_1} e^{-2\rho\theta\cdot x'}|\nabla u|^2dx\leq 2\rho^2e^{sb^2}\int_{\Omega_1} |v|^2dx+2e^{sb^2}\int_{\Omega_1}|\nabla v|^2dx.\] Combining these estimates with \eqref{p1a} and \eqref{c2b}-\eqref{c2c}, for $s\geq s_1$ and $\rho>\rho_1(s)$, we get \begin{equation} \label{10}\begin{array}{l}\int_{\Omega_1} e^{-2\rho\theta\cdot x'}|\nabla u|^2dx+\rho^2\int_{\Omega_1} e^{-2\rho\theta\cdot x'}|u|^2dx+\rho\int_{\partial\omega_{+,\theta}\times{\mathbb R}} e^{-2\rho\theta\cdot x'}|\partial_\nu u|^2|\theta\cdot\nu| d\sigma(x)\\ \ \\ \leq \rho e^{sb^2}\int_{\partial\omega_{-,\theta}\times{\mathbb R}} e^{-2\rho\theta\cdot x'}|\partial_\nu u|^2|\theta\cdot\nu| d\sigma(x)+Ce^{sb^2}\int_{\Omega_1} e^{-2\rho\theta\cdot x'}|(-\Delta+2iA\cdot\nabla+q)u|^2dx.\end{array}\end{equation} From this last
estimate we deduce \eqref{c2a} by fixing $s=s_1+1$ and $\rho_1'=\rho_1(s_1+1)$. \end{proof} \begin{rem}\label{r1} By density the result of Proposition \ref{p1} and Corollary \ref{c2} can be extended to any $v\in H^1_0(\Omega_1)$ satisfying $\Delta v\in L^2(\Omega_1)$ and $\partial_\nu v\in L^2(\partial\Omega_1)$.\end{rem} \subsection{Carleman estimate in negative order Sobolev space} The goal of this subsection is to apply the result of Proposition \ref{p1} in order to derive Carleman estimates in Sobolev spaces of negative order, which will be one of the most important ingredients in the construction of the CGO solutions. We first recall some preliminary tools and then derive a Carleman estimate in Sobolev spaces of negative order. In a similar way to \cite{Ki3}, for all $m\in{\mathbb R}$, we introduce the space $H^m_\rho({\mathbb R}^{3})$ defined by \[H^m_\rho({\mathbb R}^{3})=\{u\in\mathcal S'({\mathbb R}^{3}):\ (|\xi|^2+\rho^2)^{m\over 2}\hat{u}\in L^2({\mathbb R}^{3})\},\] with the norm \[\norm{u}_{H^m_\rho({\mathbb R}^{3})}^2=\int_{{\mathbb R}^3}(|\xi|^2+\rho^2)^{m}|\hat{u}(\xi)|^2 d\xi .\] Here, for any tempered distribution $u\in \mathcal S'({\mathbb R}^3)$, we denote by $\hat{u}$ the Fourier transform of $u$ which, for $u\in L^1({\mathbb R}^3)$, is defined by $$\hat{u}(\xi):=\mathcal Fu(\xi):= (2\pi)^{-{3\over2}}\int_{{\mathbb R}^3}e^{-ix\cdot \xi}u(x)dx.$$ From now on, for $m\in{\mathbb R}$ and $\xi\in {\mathbb R}^3$, we set $$\left\langle \xi,\rho\right\rangle=(|\xi|^2+\rho^2)^{1\over2}$$ and we define $\left\langle D_x,\rho\right\rangle^m u$ by \[\left\langle D_x,\rho\right\rangle^m u=\mathcal F^{-1}(\left\langle \xi,\rho\right\rangle^m \mathcal Fu).\] For $m\in{\mathbb R}$ we define also the class of symbols \[S^m_\rho=\{c_\rho\in\mathcal C^\infty({\mathbb R}^3\times{\mathbb R}^3):\ |\partial_x^\alpha\partial_\xi^\beta c_\rho(x,\xi)|\leq C_{\alpha,\beta}\left\langle \xi,\rho\right\rangle^{m-|\beta|},\ \alpha,\beta\in\mathbb N^3\}.\] Following \cite[Theorem
18.1.6]{Ho3}, for any $m\in{\mathbb R}$ and $c_\rho\in S^m_\rho$, we define $c_\rho(x,D_x)$, with $D_x=-i\nabla $, by \[c_\rho(x,D_x)y(x)=(2\pi)^{-{3\over 2}}\int_{{\mathbb R}^3}c_\rho(x,\xi)\hat{y}(\xi)e^{ix\cdot \xi} d\xi,\quad y\in\mathcal S({\mathbb R}^3).\] For all $m\in{\mathbb R}$, we set also $OpS^m_\rho:=\{c_\rho(x,D_x):\ c_\rho\in S^m_\rho\}$. We fix $$P_{A,q,\pm}:=e^{\mp \rho x'\cdot\theta}(\Delta_{ A}+q) e^{\pm \rho x'\cdot\theta}$$ and, in the spirit of \cite[estimate (2.14)]{FKSU} and \cite[Lemma 2.1]{ST}, we consider the following Carleman estimate. \begin{proposition}\label{p2} Let $A\in L^\infty(\Omega_1)^3$ and $q\in L^\infty(\Omega_1;\mathbb C)$. Then, there exists $\rho_2>1$ such that for all $v\in \mathcal C^\infty_0(\Omega_1)$, we have \begin{equation} \label{p2a}\rho^{-1}\norm{v}_{H^1_\rho({\mathbb R}^3)}\leq C\norm{P_{A,q,\pm}v}_{H^{-1}_\rho({\mathbb R}^3)},\quad \rho>\rho_2,\end{equation} with $C>0$ depending on $\Omega_1$ and $\norm{q}_{L^\infty(\Omega_1)}+\norm{A}_{L^\infty(\Omega_1)^3}$. \end{proposition} \begin{proof} Since this result is similar for $P_{A,q,+}v$ and $P_{A,q,-}v$, we will only prove it for $P_{A,q,+}v$. For $\varphi_{+,s}$ given by \eqref{phi}, we consider $$R_{ A,q,+,s}:=e^{-\varphi_{+,s}}(\Delta_{ A}+q)e^{\varphi_{+,s}}$$ and in a similar way to Proposition \ref{p1} we decompose $R_{A,q,+,s}$ into three terms \[R_{ A,q,+,s}=P_{1,+}+P_{2,+}+P_{3,+,A},\] where this time \[P_{1,+}=\Delta+\rho^2-2s\rho (x'\cdot\theta)+s^2 (x'\cdot\theta)^2+s,\quad P_{2,+}=2(\rho-s (x'\cdot\theta))\theta \cdot\nabla'-2s,\] \[ P_{3,+,A}=2iA\cdot\nabla+2iA\cdot\nabla \varphi_{+,s}+q-|A|^2+i\textrm{div}(A)=2iA\cdot\nabla+2(\rho-s(x'\cdot\theta)) iA'\cdot\theta +q-|A|^2+i\textrm{div}(A).\] We pick a bounded open set $\tilde{\omega}\subset{\mathbb R}^2$ of class $\mathcal C^2$ such that $\overline{\omega}\subset\tilde{\omega}$ and we extend the functions $A$ and $q$ to ${\mathbb R}^3$ with $ A=0$, $q=0$ on ${\mathbb R}^3\setminus \Omega_1$.
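Let us note the origin of the zeroth order terms $-|A|^2+i\textrm{div}(A)$ appearing in $P_{3,+,A}$, compared with the operator $P_{4,\pm}$ used in the proof of Proposition \ref{p1}: assuming, as is standard, that $\Delta_A$ denotes the magnetic Laplacian $\Delta_Au:=(\nabla+iA)\cdot\left[(\nabla+iA)u\right]$, a direct computation gives \[\Delta_{A}u=\Delta u+2iA\cdot\nabla u+\left(i\,\textrm{div}(A)-|A|^2\right)u.\]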
We also consider $\tilde{\Omega}=\tilde{\omega}\times{\mathbb R}$. We first prove the Carleman estimate \begin{equation} \label{car}\rho^{-1}\norm{v}_{H^1_\rho({\mathbb R}^3)}\leq C\norm{R_{ A,q,+,s}v}_{H^{-1}_\rho({\mathbb R}^3)},\quad v\in\mathcal C^\infty_0(\Omega_1).\end{equation} For this purpose, we fix $w\in H^3({\mathbb R}^3)$ satisfying supp$(w)\subset\tilde{\Omega}$ and we consider the quantity \[\left\langle D_x,\rho\right\rangle^{-1}(P_{1,+}+P_{2,+})\left\langle D_x,\rho\right\rangle w.\] In the remaining part of this proof $C>0$ denotes a generic constant depending on $\Omega_1$ and $\norm{A}_{L^\infty(\Omega_1)^3}+\norm{q}_{L^\infty(\Omega_1)}$. Applying the properties of composition of pseudodifferential operators (e.g. \cite[Theorem 18.1.8]{Ho3}), we find \begin{equation} \label{l2c}\left\langle D_x,\rho\right\rangle^{-1}(P_{1,+}+P_{2,+})\left\langle D_x,\rho\right\rangle=P_{1,+}+P_{2,+}+S_\rho(x,D_x),\end{equation} where $S_\rho$ is defined by \[S_\rho(x,\xi)=\nabla_\xi\left\langle \xi,\rho\right\rangle^{-1}\cdot D_x(p_{1,+}(x,\xi)+p_{2,+}(x,\xi))\left\langle \xi,\rho\right\rangle+\underset{\left\langle \xi,\rho\right\rangle\to+\infty}{ o}(1),\] with $$p_{1,+}(x,\xi)=-|\xi|^2+\rho^2-2s\rho (x'\cdot\theta)+s^2 (x'\cdot\theta)^2+s,\quad p_{2,+}(x,\xi)=2i[\rho-s(x'\cdot\theta)]\theta\cdot\xi'-2s,\quad \xi=(\xi',\xi_3)\in {\mathbb R}^2\times{\mathbb R}.$$ Therefore, we have $$S_\rho(x,\xi)={[-2i\rho s+2is^2x'\cdot\theta+2s(\theta\cdot\xi')](\theta\cdot\xi')\over |\xi|^2+\rho^2}+\underset{\left\langle \xi,\rho\right\rangle\to+\infty}{ o}(1)$$ and it follows \begin{equation} \label{l2d} \norm{S_\rho(x,D_x)w}_{L^2( {\mathbb R}^3)}\leq Cs^2\norm{w}_{L^2( {\mathbb R}^3)}.\end{equation} On the other hand, applying \eqref{p1a} to $w$, which is permitted according to Remark \ref{r1}, with $\Omega_1$ replaced by $\tilde{\Omega}$ and $A=0$, $q=0$, we get \[\norm{P_{1,+}w+P_{2,+}w}_{L^2({\mathbb R}^3)}\geq C\left(s^{1/2}\rho^{-1}\norm{\Delta w}_{L^2({\mathbb
R}^3)}+s^{1/2}\norm{\nabla w}_{L^2({\mathbb R}^3)}+s^{1/2}\rho\norm{ w}_{L^2({\mathbb R}^3)}\right).\] Combining this estimate with \eqref{l2c}-\eqref{l2d}, for ${\rho\over s^2}$ sufficiently large, we obtain $$\begin{array}{l} \norm{(P_{1,+}+P_{2,+})\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}\\ =\norm{\left\langle D_x,\rho\right\rangle^{-1}(P_{1,+}+P_{2,+})\left\langle D_x,\rho\right\rangle w}_{L^2( {\mathbb R}^3)}\\ \geq Cs^{1/2}\left(\rho^{-1}\norm{\Delta w}_{L^2({\mathbb R}^3)}+\norm{\nabla w}_{L^2({\mathbb R}^3)}+\rho\norm{ w}_{L^2({\mathbb R}^3)}\right).\end{array}$$ On the other hand, using the fact that $w\in H^2(\tilde{\Omega})\cap H^1_0(\tilde{\Omega})$, the elliptic regularity for cylindrical domains (e.g. \cite[Lemma 2.2]{CKS}) implies $$\norm{w}_{H^2({\mathbb R}^3)}=\norm{w}_{H^2(\tilde{\Omega})}\leq C(\norm{\Delta w}_{L^2(\tilde{\Omega})}+\norm{ w}_{L^2(\tilde{\Omega})}).$$ Combining this with the previous estimate, for $s$ sufficiently large, we find \begin{equation} \label{l2e}\norm{(P_{1,+}+P_{2,+})\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}\geq Cs^{\frac{1}{2}}\rho^{-1}\norm{w}_{H^2_\rho({\mathbb R}^3)}.\end{equation} Moreover, we have \begin{equation} \label{p2c}\begin{aligned}&\norm{P_{3,+,A}\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}\\ &\leq \norm{[2i(\rho-s(x'\cdot\theta))A'\cdot\theta+(q-|A|^2)]\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}+2\norm{A\cdot\nabla \left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}\\ &\ \ \ \ +\norm{i\textrm{div}(A)\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}.\end{aligned}\end{equation} For the first term on the right hand side of this inequality, we have \begin{equation} \label{p2d}\begin{aligned}\norm{[2i(\rho-s(x'\cdot\theta))A'\cdot\theta+(q-|A|^2)]\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb
R}^3)}&\leq\rho^{-1}\norm{[2i(\rho-s(x'\cdot\theta))A'\cdot\theta+(q-|A|^2)]\left\langle D_x,\rho\right\rangle w}_{L^2({\mathbb R}^3)}\\ \ &\leq C\norm{\left\langle D_x,\rho\right\rangle w}_{L^2({\mathbb R}^3)}=C\norm{ w}_{H^1_\rho({\mathbb R}^3)},\end{aligned}\end{equation} with $C$ depending only on $\norm{A}_{L^\infty(\Omega_1)^3}+\norm{q}_{L^\infty(\Omega_1)}$. For the second term on the right hand side of \eqref{p2c}, we get \begin{equation} \label{p2e}\begin{aligned}\norm{A\cdot\nabla \left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}&\leq \rho^{-1}\norm{A\cdot\nabla \left\langle D_x,\rho\right\rangle w}_{L^2({\mathbb R}^3)}\\ \ &\leq \rho^{-1}\norm{A}_{L^\infty(\Omega_1)^3}\norm{\nabla \left\langle D_x,\rho\right\rangle w}_{L^2({\mathbb R}^3)}\\ \ &\leq \rho^{-1}\norm{A}_{L^\infty(\Omega_1)^3}\norm{ w}_{H^{2}_\rho({\mathbb R}^3)}.\end{aligned}\end{equation} Finally, for the last term on the right hand side of \eqref{p2c}, by duality, we find \begin{equation} \label{p2f}\begin{aligned}\norm{i\textrm{div}(A)\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}&\leq \rho^{-1} \norm{A\cdot\nabla \left\langle D_x,\rho\right\rangle w}_{L^2({\mathbb R}^3)}+\norm{(\left\langle D_x,\rho\right\rangle w) A}_{L^2({\mathbb R}^3)^3}\\ \ &\leq 2\rho^{-1}\norm{A}_{L^\infty(\Omega_1)^3}\norm{w}_{H^2_\rho({\mathbb R}^3)}.\end{aligned}\end{equation} Combining \eqref{p2c}-\eqref{p2f}, we obtain $$\norm{P_{3,+,A}\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}\leq C\rho^{-1}\norm{w}_{H^2_\rho({\mathbb R}^3)}$$ and combining this with \eqref{l2e} for $s>1$ sufficiently large, we get \begin{equation} \label{p2g}\norm{R_{ A,q,+,s}\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}\geq Cs^{\frac{1}{2}}\rho^{-1}\norm{w}_{H^2_\rho({\mathbb R}^3)}.\end{equation} Now let $\omega_j$, $j=1,2$, be two open subsets of $\tilde{\omega}$ such that
$\overline{\omega}\subset \omega_1$, $\overline{\omega_1}\subset \omega_2$, $\overline{\omega_2}\subset \tilde{\omega}$. We fix $\psi_0\in\mathcal C^\infty_0(\tilde{\omega})$ satisfying $\psi_0=1$ on $\overline{\omega_2}$, we set $w(x',x_3)=\psi_0(x') \left\langle D_x,\rho\right\rangle^{-1} v(x',x_3)$ and, for $\psi_1\in\mathcal C^\infty_0(\omega_1)$ satisfying $\psi_1=1$ on $\omega$, we get $$(1-\psi_0 )\left\langle D_x,\rho\right\rangle^{-1} v=(1-\psi_0 )\left\langle D_x,\rho\right\rangle^{-1}\psi_1 v,$$ where $\psi_1v$ denotes the function $(x',x_3)=x\mapsto \psi_1(x')v(x)$. According to \cite[Theorem 18.1.8]{Ho3}, since $1-\psi_0$ vanishes on a neighborhood of supp$(\psi_1)$, we have $(1-\psi_0) \left\langle D_x,\rho\right\rangle^{-1}\psi_1\in OpS^{-\infty}_\rho$ and it follows \[ \begin{aligned}\rho^{-1}\norm{v}_{H^{1}_\rho({\mathbb R}^3)}&=\rho^{-1}\norm{\left\langle D_x,\rho\right\rangle^{-1} v}_{H^2_\rho({\mathbb R}^3)}\\ \ &\leq \rho^{-1}\norm{w}_{H^2_\rho({\mathbb R}^3)}+\rho^{-1}\norm{(1-\psi_0)\left\langle D_x,\rho\right\rangle^{-1}\psi_1 v}_{H^2_\rho({\mathbb R}^3)}\\ \ &\leq \rho^{-1}\norm{w}_{H^2_\rho({\mathbb R}^3)}+{C\norm{v}_{L^2({\mathbb R}^3)}\over\rho^2} .\end{aligned}\] In the same way, we find $$\begin{aligned}\norm{R_{ A,q,+,s} v}_{H^{-1}_\rho({\mathbb R}^3)}&\geq \norm{R_{ A,q,+,s}\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}-\norm{R_{ A,q,+,s}\left\langle D_x,\rho\right\rangle (1-\psi_0 )\left\langle D_x,\rho\right\rangle^{-1}\psi_1 v}_{H^{-1}_\rho({\mathbb R}^3)}\\ \ &\geq \norm{R_{ A,q,+,s}\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}-C\norm{ (1-\psi_0 )\left\langle D_x,\rho\right\rangle^{-1}\psi_1 v}_{H^{2}_\rho({\mathbb R}^3)}\\ \ &\geq \norm{R_{ A,q,+,s}\left\langle D_x,\rho\right\rangle w}_{H^{-1}_\rho({\mathbb R}^3)}-{C\norm{v}_{L^2({\mathbb R}^{3})}\over\rho^2}.\end{aligned}$$ Combining these estimates with \eqref{p2g}, we deduce that \eqref{car} holds true for a sufficiently large value of $\rho$.
Then, fixing $s$, we deduce \eqref{p2a}. \end{proof} \section{CGO solutions} \label{sec2} In this section we introduce a class of CGO solutions, suitable for our problem stated in an unbounded domain, for magnetic Schr\"odinger equations. As in the previous section, we fix $\Omega_1=\omega\times{\mathbb R}$. Our goal is to build CGO solutions for the equations \eqref{eq1} extended to the cylindrical domain $\Omega_1$ in order to consider their restrictions on $\Omega$ for proving Theorem \ref{t1}, since according to \eqref{closed} we have $\Omega\subset\Omega_1$. We consider CGO solutions on $\Omega_1$ corresponding to some specific solutions $u_j\in H^1(\Omega_1)$, $j=1,2$, of $\Delta_{A_1} u_1+q_1u_1=0$, $\Delta_{A_2} u_2+\overline{q_2}u_2=0$ in $\Omega_1$ for $A_j\in L^\infty(\Omega_1)^3\cap L^2(\Omega_1)^3$ and $q_j\in L^\infty(\Omega_1;\mathbb C)$. More precisely, as in \cite{Ki4}, we start by considering $\theta\in\mathbb S^{1}:=\{y\in{\mathbb R}^2:\ |y|=1\}$, $\xi'\in\theta^\bot\setminus\{0\}$ with $\theta^\bot:=\{y\in{\mathbb R}^2:\ y\cdot\theta=0\}$, $\xi:=(\xi',\xi_3)\in {\mathbb R}^3$ with $\xi_3\neq0$.
Then, we define $\eta\in\mathbb S^2:=\{y\in{\mathbb R}^3:\ |y|=1\}$ by $$\eta=\frac{(\xi',-\frac{|\xi'|^2}{\xi_3})}{\sqrt{|\xi'|^2+\frac{|\xi'|^4}{\xi_3^2}}}.$$ It is clear that \begin{equation} \label{orth}\eta\cdot\xi=(\theta,0)\cdot\xi=(\theta,0)\cdot\eta=0.\end{equation} We also fix $\psi\in\mathcal C^\infty_0({\mathbb R};[0,1])$ such that $\psi=1$ on a neighborhood of $0$ in ${\mathbb R}$ and, for $\rho>1$, we consider solutions $u_j\in H^1(\Omega_1)$ of $\Delta_{A_1} u_1+q_1u_1=0$, $\Delta_{A_2} u_2+\overline{q_2}u_2=0$ in $\Omega_1$ taking the form \begin{equation} \label{CGO1}u_1(x',x_3)=e^{\rho \theta\cdot x'}\left(\psi\left(\rho^{-\frac{1}{4}}x_3\right)b_{1,\rho}e^{i\rho x\cdot\eta-i\xi\cdot x}+w_{1,\rho}(x',x_3)\right),\quad x'\in\omega,\ x_3\in{\mathbb R},\end{equation} \begin{equation} \label{CGO2}u_2(x',x_3)=e^{-\rho \theta\cdot x'}\left(\psi\left(\rho^{-\frac{1}{4}}x_3\right)b_{2,\rho}e^{i\rho x\cdot\eta}+w_{2,\rho}(x',x_3)\right),\quad x'\in\omega,\ x_3\in{\mathbb R}.\end{equation} Here $b_{j,\rho}\in \mathcal C^\infty(\overline{\Omega_1})$ and the remainder term $w_{j,\rho}\in H^1(\Omega_1)$ satisfies the decay property \begin{equation} \label{RCGO} \lim_{\rho\to+\infty}(\rho^{-1}\norm{w_{j,\rho}}_{H^1(\Omega_1)}+\norm{w_{j,\rho}}_{L^2(\Omega_1)})=0.\end{equation} This construction can be summarized in the following way. \begin{theorem}\label{t4} For $j=1,2$ and for all $\rho>\rho_2$, with $\rho_2$ the constant of Proposition \ref{p2}, the equations $\Delta_{A_1} u_1+q_1u_1=0$, $\Delta_{A_2} u_2+\overline{q_2}u_2=0$, admit respectively a solution $u_j\in H^1(\Omega_1)$ of the form \eqref{CGO1}-\eqref{CGO2} with $w_{j,\rho}$ satisfying the decay property \eqref{RCGO}.\end{theorem} \begin{rem} \emph{As in \cite{Ki4}, we cannot consider CGO solutions similar to those on bounded domains since they will not be square integrable in $\Omega_1$.
In a similar way to \cite{Ki4}, we consider this new expression of the CGO solutions, with principal parts that propagate in a suitable way along the axis of $\Omega_1$ with respect to the large parameter $\rho$. In contrast to \cite{Ki4}, we also need to take into account here the presence of non-compactly supported magnetic potentials. This part of our construction will be made precise in the next subsection.}\end{rem} In order to consider suitable solutions of the form \eqref{CGO1}-\eqref{CGO2}, we first need to define the expressions $b_{j,\rho}$ in the principal part, which will be solutions of some $\overline{\partial}$ type equation involving the magnetic potential $A_j$. Then, we will treat the remainder terms by using the Carleman estimates of the preceding section. \subsection{Principal parts of the CGO} In this subsection we will introduce the form of the principal part $b_{j,\rho}$, $j=1,2$, of our CGO solutions given by \eqref{CGO1}-\eqref{CGO2}. For this purpose, we assume that $b_{j,\rho}$, $j=1,2$, is an approximation of a solution $b_j$ of the equations \begin{equation} \label{trans}2(\tilde{\theta}+i\eta)\cdot \nabla b_1+2i[(\tilde{\theta}+i\eta)\cdot A_1(x)]b_1=0,\quad 2(-\tilde{\theta}+i\eta)\cdot \nabla b_2+2i[(-\tilde{\theta}+i\eta)\cdot A_2(x)]b_2=0,\quad x\in\Omega_1,\end{equation} where $\tilde{\theta}:=(\theta,0)\in\mathbb S^2$. This approach, also considered in \cite{BKS1,Ki4,KU,Sa1}, makes it possible to reduce the regularity assumption on the first order coefficients $A_j$. Indeed, by replacing the functions $b_1$, $b_2$, whose regularity depends on that of the coefficients $A_1$ and $A_2$, with their approximations $b_{1,\rho}$, $b_{2,\rho}$, we can weaken the regularity assumption imposed on the coefficients $A_j$, $j=1,2$, from $W^{2,\infty}(\Omega_1)^3$ to $ L^\infty(\Omega_1)^3$. Moreover, this approach also requires no information about the domain $\Omega$ and the coefficients $A_j$, $j=1,2$, on $\partial\Omega$.
More precisely, if in our construction we use the expressions $b_j$ instead of $b_{j,\rho}$, $j=1,2$, then, following our strategy, we can prove Theorem \ref{t1} only for specific domains and for coefficients $A_1,A_2\in W^{2,\infty}(\Omega)^3\cap L^1(\Omega)^3$ satisfying $$\partial_x^\alpha A_1(x)=\partial_x^\alpha A_2(x),\quad x\in\partial\Omega,\ \alpha\in\mathbb N^3,\ |\alpha|\leq1,$$ whereas in our case we make no assumption on the shape of $\Omega$ (except the condition $\Omega\subset\omega\times{\mathbb R}$) and about $A_j$ at $\partial\Omega$. Let us also mention that, in contrast to results stated on bounded domains (e.g. \cite{FKSU,KLU,KU}), the magnetic potentials $A_1$, $A_2$ cannot be extended to compactly supported functions of ${\mathbb R}^3$. However, we can extend them into functions of ${\mathbb R}^3$ supported in an infinite cylinder. Combining this with the fact that $A_j\in L^2(\Omega_1)^3$, we will show how to build CGO solutions having properties similar to those of \cite{KU}. In order to define $b_{j,\rho}$, $j=1,2$, we start by introducing a suitable approximation of the coefficients $A_j$, $j=1,2$. For all $r>0$, we define $B_r:=\{x\in{\mathbb R}^{3}:\ |x|<r\}$ and $B_r':=\{x'\in{\mathbb R}^{2}:\ |x'|<r\}$. We fix $\chi\in\mathcal C^\infty_0({\mathbb R}^{3})$ such that $\chi\geq0$, $\int_{{\mathbb R}^{3}}\chi(x)dx=1$, supp$(\chi)\subset B_1$, and we define $\chi_\rho$ by $\chi_\rho(x)=\rho^{{3\over 4}}\chi(\rho^{{1\over4}}x)$. Then, for $j=1,2$, we fix $$A_{j,\rho}(x):=\int_{{\mathbb R}^{3}}\chi_\rho(x-y)A_j(y)dy.$$ Here, we assume that, for $j=1,2$, $A_j=0$ on ${\mathbb R}^{3}\setminus \Omega_1$.
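Let us record the standard mollifier bounds behind the estimates stated below. Since $\partial_x^\alpha\chi_\rho(x)=\rho^{{3+|\alpha|\over4}}(\partial_x^\alpha\chi)(\rho^{{1\over4}}x)$ and hence $\norm{\partial_x^\alpha\chi_\rho}_{L^1({\mathbb R}^3)}=\rho^{{|\alpha|\over4}}\norm{\partial_x^\alpha\chi}_{L^1({\mathbb R}^3)}$, Young's inequality yields, for any multi-index $\alpha\in\mathbb N^3$, \[\norm{\partial_x^\alpha A_{j,\rho}}_{L^\infty({\mathbb R}^{3})}=\norm{(\partial_x^\alpha\chi_\rho)\ast A_j}_{L^\infty({\mathbb R}^{3})}\leq \rho^{{|\alpha|\over4}}\norm{\partial_x^\alpha\chi}_{L^1({\mathbb R}^3)}\norm{A_j}_{L^\infty({\mathbb R}^{3})},\] and similarly with the $L^\infty$ norm of $A_j$ replaced by its $L^2$ norm.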
For $j=1,2$, since $A_j\in L^2({\mathbb R}^{3})^3$, by density one can check that \begin{equation} \label{a1a}\lim_{\rho\to+\infty}\norm{A_{j,\rho}-A_j}_{L^2({\mathbb R}^{3})}=0,\end{equation} and, using the fact that $A_j\in L^\infty({\mathbb R}^{3})^3$, we deduce the estimates \begin{equation} \label{a1b}\norm{A_{j,\rho}}_{H^k({\mathbb R}^{3})}+\norm{A_{j,\rho}}_{W^{k,\infty}({\mathbb R}^{3})}\leq C_k\rho ^{{k\over 4}},\end{equation} with $C_k$ independent of $\rho$. We remark that $$A_{\rho}(x):=\int_{{\mathbb R}^{3}}\chi_\rho(x-y)A(y)dy=A_{1,\rho}(x)-A_{2,\rho}(x),$$ with $A=A_1-A_2$. Recall that, for $j=1,2$, supp$(A_{j,\rho})\subset \overline{\Omega_1}+B_1:=\{x+y:\ x\in \overline{\Omega_1}, y\in B_1\}$. Moreover, fixing $R:=\underset{x'\in\overline{\omega}}{\sup}|x'|$, $R_1:=2\sqrt{2}(R+2+\frac{R+2}{|\xi'|})$ and assuming that $|(s_1,s_2)|\geq R_1$, we find $|s_1|\geq\frac{R_1}{\sqrt{2}}$ or $|s_2|\geq\frac{R_1}{\sqrt{2}}$. In addition, since $\theta\cdot\xi'=0$, we get $$|(s_1,s_2)|\geq R_1\Longrightarrow |s_1\theta+s_2\xi'|=|(s_1,s_2|\xi'|)|\geq \max(|s_1|,|s_2||\xi'|)>2R+4$$ and, for all $x=(x',x_3)\in B_{R+1}'\times{\mathbb R}$, we get $$|(s_1,s_2)|\geq R_1\Longrightarrow |x'-s_1\theta-s_2\xi'|\geq |s_1\theta+s_2\xi'|- |x'|\geq R+3.$$ Thus, for all $x=(x',x_3)\in B_{R+1}'\times{\mathbb R}$, the function $$(s_1,s_2)\mapsto A_{j,\rho}(s_1\tilde{\theta}+s_2\eta+x)$$ will be supported in $B'_{R_1}$. 
Thus, we can define \begin{equation} \label{conde1} \begin{aligned}&\Phi_{1,\rho}(x):=\frac{-i}{2\pi} \int_{{\mathbb R}^2} \frac{(\tilde{\theta}+i\eta)\cdot A_{1,\rho}(x-s_1\tilde{\theta}-s_2\eta)}{s_1+is_2}ds_1ds_2,\\ &\Phi_{2,\rho}(x):=\frac{-i}{2\pi} \int_{{\mathbb R}^2} \frac{(-\tilde{\theta}+i\eta)\cdot A_{2,\rho}(x+s_1\tilde{\theta}-s_2\eta)}{s_1+is_2}ds_1ds_2.\end{aligned}\end{equation} Fixing \begin{equation} \label{conde2} b_{1,\rho}(x)=e^{\Phi_{1,\rho}(x)},\quad b_{2,\rho}(x)=e^{\Phi_{2,\rho}(x)},\end{equation} we obtain \begin{equation} \label{tt}(\tilde{\theta}+i\eta)\cdot \nabla b_{1,\rho}+i[(\tilde{\theta}+i\eta)\cdot A_{1,\rho}(x)]b_{1,\rho}=0,\quad (-\tilde{\theta}+i\eta)\cdot \nabla b_{2,\rho}+i[(-\tilde{\theta}+i\eta)\cdot A_{2,\rho}(x)]b_{2,\rho}=0,\quad x\in\Omega_1.\end{equation} Here, even though the functions $A_{j,\rho}$, $j=1,2$, are not compactly supported, one can use the fact that the functions $$(s_1,s_2)\mapsto A_{j,\rho}(s_1\tilde{\theta}+s_2\eta+s_3\xi),\quad s_3\in{\mathbb R},$$ are compactly supported to deduce \eqref{tt}. Moreover, using the fact that $$(x-s_1\tilde{\theta}-s_2\eta)\notin\textrm{supp}(A_{j,\rho}),\quad x\in B'_{R+1}\times{\mathbb R},\ |(s_1,s_2)|>R_1,\ j=1,2,$$ we deduce that, for all $x\in B_{R+1}'\times{\mathbb R}$ and $j=1,2$, $$\begin{aligned}|\Phi_{j,\rho}(x)|&\leq \frac{1}{2\pi} \int_{|(s_1,s_2)|\leq R_1} \frac{|A_{j,\rho}(x-s_1\tilde{\theta}-s_2\eta)|}{|s_1+is_2|}ds_1ds_2\\ \ &\leq \frac{\norm{A_{j,\rho}}_{L^\infty({\mathbb R}^3)}}{2\pi} \int_{|(s_1,s_2)|\leq R_1} \frac{1}{|(s_1,s_2)|}ds_1ds_2\\ \ &\leq C,\end{aligned}$$ with $C$ independent of $\rho$. This proves that $$\norm{\Phi_{j,\rho}}_{L^\infty(B_{R+1}'\times{\mathbb R})}\leq C.$$ In the same way, we can prove that \begin{equation} \label{tt1}\norm{\Phi_{j,\rho}}_{W^{k,\infty}(B_{R+1}'\times{\mathbb R})}\leq C_k\rho^{\frac{k}{4}},\quad k\geq0,\end{equation} with $C_k$ independent of $\rho$.
According to this estimate, we have \begin{equation} \label{cond31}\norm{b_{j,\rho}}_{W^{k,\infty}( B_{R+1}'\times{\mathbb R})}\leq C_k\rho^{{k\over4}},\quad k\geq0.\end{equation} Moreover, conditions \eqref{tt}, \eqref{cond31} and the fact that $$[\textrm{supp}(A_j)\cup \textrm{supp}(A_{j,\rho})]\subset \overline{\Omega_1}+B_1\subset B'_{R+1}\times{\mathbb R},\quad j=1,2,$$ imply that \begin{equation} \label{cond5} \begin{aligned}\norm{(\tilde{\theta}+i\eta)\cdot \nabla b_{1,\rho}+i[(\tilde{\theta}+i\eta)\cdot A_{1}]b_{1,\rho}}_{L^2(B_{R+1}'\times{\mathbb R})}&=\norm{[i[(\tilde{\theta}+i\eta)\cdot (A_{1}-A_{1,\rho})]]b_{1,\rho}}_{L^2(B_{R+1}'\times{\mathbb R})}\\ \ &\leq C\norm{A_{1}-A_{1,\rho}}_{L^2({\mathbb R}^3)},\end{aligned}\end{equation} \begin{equation} \label{cond6} \begin{aligned}\norm{(-\tilde{\theta}+i\eta)\cdot \nabla b_{2,\rho}+i[(-\tilde{\theta}+i\eta)\cdot A_2]b_{2,\rho}}_{L^2(B_{R+1}'\times{\mathbb R})}&=\norm{[i[(\tilde{\theta}+i\eta)\cdot (A_{2}-A_{2,\rho})]]b_{2,\rho}}_{L^2(B_{R+1}'\times{\mathbb R})}\\ \ &\leq C\norm{A_{2}-A_{2,\rho}}_{L^2({\mathbb R}^3)},\end{aligned}\end{equation} with $C>0$ independent of $\rho$. Using these properties of the expressions $b_{j,\rho}$, $j=1,2$, we will complete the construction of the solutions $u_j$ of the form \eqref{CGO1}-\eqref{CGO2}. \subsection{Remainder term of the CGO solutions} In this subsection we construct the remainder terms $w_{j,\rho}$, $j=1,2$, appearing in \eqref{CGO1}-\eqref{CGO2} and satisfying the decay property \eqref{RCGO}. For this purpose, we will combine the Carleman estimate \eqref{p2a} with the properties of the expressions $b_{j,\rho}$, $j=1,2$, in order to complete the construction of these solutions. In this subsection, we assume that $\rho>\rho_2$, with $\rho_2$ the constant introduced in Proposition \ref{p2}. The proofs of the existence of the remainder terms $w_{1,\rho}$ and $w_{2,\rho}$ being similar, we will only show the existence of $w_{1,\rho}$.
Let us first remark that $w_{1,\rho}$ should be a solution of the equation \begin{equation} \label{t4a} P_{A_1,q_1,+}w=e^{-\rho \theta\cdot x'}(\Delta_{A_1}+q_1)e^{\rho \theta\cdot x'}w=e^{i\rho \eta\cdot x}F_{1,\rho}(x),\quad x\in\Omega_1,\end{equation} with $F_{1,\rho}$ defined, for all $x=(x',x_3)\in B'_{R+1}\times{\mathbb R}$ (we recall that $B_r'=\{x'\in{\mathbb R}^{2}:\ |x'|<r\}$ and $R=\underset{x'\in\overline{\omega}}{\sup}|x'|$), by \begin{equation} \label{t4b}\begin{aligned}F_{1,\rho}(x)&=-e^{-\rho \theta\cdot x'-i\rho \eta\cdot x}(\Delta_{A_1}+q_1)\left[e^{\rho \theta\cdot x'+i\rho \eta\cdot x}\psi\left(\rho^{-\frac{1}{4}}x_3\right)b_{1,\rho}e^{-i\xi\cdot x}\right]\\ &=-\left((-|\xi|^2+\textrm{div}(A_1)+q_1)\psi\left(\rho^{-\frac{1}{4}}x_3\right)+2i\eta_3\rho^{\frac{3}{4}}\psi'\left(\rho^{-\frac{1}{4}}x_3\right)-2i\xi_3\rho^{-\frac{1}{4}}\psi'\left(\rho^{-\frac{1}{4}}x_3\right)\right)b_{1,\rho}e^{-i\xi\cdot x}\\ &\ \ \ -\left[\rho^{-\frac{1}{2}}\psi''\left(\rho^{-\frac{1}{4}}x_3\right)b_{1,\rho}+ 2\partial_{x_3}b_{1,\rho}\rho^{-\frac{1}{4}}\psi'\left(\rho^{-\frac{1}{4}}x_3\right)-i2\xi\cdot\nabla b_{1,\rho}\psi\left(\rho^{-\frac{1}{4}}x_3\right)\right]e^{-i\xi\cdot x}\\ &\ \ \ -2\rho[(\tilde{\theta}+i\eta)\cdot \nabla b_{1,\rho}+i[(\tilde{\theta}+i\eta)\cdot A_{1}]b_{1,\rho}]\psi\left(\rho^{-\frac{1}{4}}x_3\right)e^{-i\xi\cdot x}.\end{aligned}\end{equation} Here we consider $A_1$ as an element of $L^\infty({\mathbb R}^3)^3\cap L^2({\mathbb R}^3)^3$ satisfying $A_1=0$ on ${\mathbb R}^3\setminus\Omega_1$. 
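The structure of this expression rests on the conjugation identity for the Laplacian by the CGO phase: since $(\theta+i\eta)\cdot(\theta+i\eta)=|\theta|^2-|\eta|^2+2i\theta\cdot\eta=0$, the $\rho^2$ terms cancel and only the first-order transport term of size $\rho$ survives. The following sketch verifies this symbolically in the toy configuration $\theta=e_1$, $\eta=e_3$ (an orthonormal choice made only for this illustration):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
rho = sp.symbols('rho', positive=True)
u = sp.Function('u')(x1, x2, x3)

# CGO phase exp(rho*theta.x' + i*rho*eta.x) with theta = e1, eta = e3.
phase = sp.exp(rho * x1 + sp.I * rho * x3)

lap = lambda f: sp.diff(f, x1, 2) + sp.diff(f, x2, 2) + sp.diff(f, x3, 2)
conjugated = sp.expand(lap(phase * u) / phase)

# Expected: the rho^2 contributions rho^2*|theta|^2 + (i*rho)^2*|eta|^2 cancel,
# leaving the Laplacian plus the transport term 2*rho*(theta + i*eta).grad.
expected = lap(u) + 2 * rho * (sp.diff(u, x1) + sp.I * sp.diff(u, x3))
assert sp.simplify(conjugated - expected) == 0
```

This is exactly the mechanism producing the last line of the expression for $F_{1,\rho}$, where the size-$\rho$ term $-2\rho[(\tilde{\theta}+i\eta)\cdot\nabla b_{1,\rho}+i[(\tilde{\theta}+i\eta)\cdot A_1]b_{1,\rho}]$ is then made small through the transport equation satisfied by $b_{1,\rho}$.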
We fix $\varphi\in\mathcal C^\infty_0(B'_{R+1};[0,1])$ satisfying $\varphi=1$ on $B'_{R+\frac{1}{2}}$, and we define $$G_\rho(x',x_3):=\varphi(x')F_{1,\rho}(x',x_3),\quad x'\in{\mathbb R}^2,\ x_3\in{\mathbb R},$$ $$K_\rho(x):=G_\rho(x)-\varphi(x')\psi\left(\rho^{-\frac{1}{4}}x_3\right)\textrm{div}(A_1)b_{1,\rho}e^{-i\xi\cdot x} ,\quad x'\in{\mathbb R}^2,\ x_3\in{\mathbb R},\ x=(x',x_3).$$ It is clear that $K_\rho\in L^2({\mathbb R}^3)$. In view of \eqref{cond31}-\eqref{cond6} and the fact that, by a change of variable, $$\norm{\psi\left(\rho^{-\frac{1}{4}}x_3\right)}_{L^2(B'_{R+1}\times{\mathbb R})}+\norm{\psi'\left(\rho^{-\frac{1}{4}}x_3\right)}_{L^2(B'_{R+1}\times{\mathbb R})}+\norm{\psi''\left(\rho^{-\frac{1}{4}}x_3\right)}_{L^2(B'_{R+1}\times{\mathbb R})}\leq C\rho^{\frac{1}{8}},$$ we deduce that \begin{equation} \label{t4h}\norm{K_\rho}_{H^{-1}_\rho({\mathbb R}^3)}\leq \rho^{-1}\norm{K_\rho}_{L^2({\mathbb R}^3)}=\rho^{-1}\norm{K_\rho}_{L^2(B'_{R+1}\times{\mathbb R})}\leq C(\norm{A_1-A_{1,\rho}}_{L^2({\mathbb R}^3)^3}+\rho^{-\frac{1}{8}}).\end{equation} In the same way, since supp$(\textrm{div}(A_1))\subset \overline{\omega}\times{\mathbb R}\subset B'_{R+\frac{1}{2}}\times{\mathbb R}$, we have $$\varphi(x')\psi\left(\rho^{-\frac{1}{4}}x_3\right)\textrm{div}(A_1)b_{1,\rho}=\psi\left(\rho^{-\frac{1}{4}}x_3\right)\textrm{div}(A_1)b_{1,\rho}.$$ Moreover, fixing $$c_{1,\rho}(x):=\psi\left(\rho^{-\frac{1}{4}}x_3\right)b_{1,\rho}(x),\quad x=(x',x_3)\in{\mathbb R}^2\times{\mathbb R},$$ for any $h\in H^1_\rho({\mathbb R}^3)$, we obtain $$\begin{aligned}&\abs{\left\langle \textrm{div}(A_1)c_{1,\rho},h\right\rangle_{H^{-1}_\rho({\mathbb R}^3), H^1_\rho({\mathbb R}^3)}}\\ &\leq \abs{\left\langle A_1\cdot \nabla c_{1,\rho},h\right\rangle_{L^2({\mathbb R}^3)}}+\abs{\left\langle c_{1,\rho},A_1\cdot \nabla h\right\rangle_{L^2({\mathbb R}^3)}}\\ &\leq \abs{\left\langle A_1\cdot \nabla c_{1,\rho},h\right\rangle_{L^2({\mathbb R}^3)}}+\abs{\left\langle
c_{1,\rho},(A_1-A_{1,\rho})\cdot \nabla h\right\rangle_{L^2({\mathbb R}^3)}}+\abs{\left\langle c_{1,\rho},A_{1,\rho}\cdot \nabla h\right\rangle_{L^2({\mathbb R}^3)}}\\ &\leq \left(\norm{c_{1,\rho}}_{W^{1,\infty} (\Omega_1)}\norm{A_1}_{L^2(\Omega_1)^3}\rho^{-1}+\norm{c_{1,\rho}}_{L^\infty (B'_{R+1}\times{\mathbb R})}\norm{A_1-A_{1,\rho}}_{L^2({\mathbb R}^3)^3}\right)\norm{h}_{H^1_\rho({\mathbb R}^3)}+\abs{\left\langle \textrm{div}({c_{1,\rho}A_{1,\rho}}), h\right\rangle_{L^2({\mathbb R}^3)}}\\ &\leq \left(2\norm{c_{1,\rho}}_{W^{1,\infty} (B'_{R+1}\times{\mathbb R})}[\norm{A_1}_{L^2(\Omega_1)^3}+\norm{A_{1,\rho}}_{H^1({\mathbb R}^3)^3}]\rho^{-1}+\norm{c_{1,\rho}}_{L^\infty (B'_{R+1}\times{\mathbb R})}\norm{A_1-A_{1,\rho}}_{L^2({\mathbb R}^3)^3}\right)\norm{h}_{H^1_\rho({\mathbb R}^3)}.\end{aligned}$$ Here we use the fact that supp$(A_{1,\rho})\subset \Omega_1+B_1\subset B'_{R+1}\times{\mathbb R}$. Combining this with \eqref{a1b} and \eqref{cond31}, we find $$\abs{\left\langle \textrm{div}(A_1)c_{1,\rho},h\right\rangle_{H^{-1}_\rho({\mathbb R}^3), H^1_\rho({\mathbb R}^3)}}\leq C(\rho^{-\frac{3}{4}}+\norm{A_1-A_{1,\rho}}_{L^2({\mathbb R}^3)^3})\norm{h}_{H^1_\rho({\mathbb R}^3)}$$ and it follows $$\norm{\psi\left(\rho^{-\frac{1}{4}}x_3\right)\textrm{div}(A_1)b_{1,\rho}}_{H^{-1}_\rho({\mathbb R}^3)}\leq C(\rho^{-\frac{3}{4}}+\norm{A_1-A_{1,\rho}}_{L^2({\mathbb R}^3)^3}).$$ Then, \eqref{t4h} implies \begin{equation} \label{t4c} \norm{G_\rho}_{H^{-1}_\rho({\mathbb R}^3)}\leq C(\norm{A_{1}-A_{1,\rho}}_{L^2({\mathbb R}^3)^3}+\rho^{-\frac{1}{8}}).\end{equation} From now on, combining \eqref{p2a} with \eqref{t4c}, we will complete the construction of the remainder term $w_{1,\rho}$ by using a classical duality argument. 
More precisely, applying \eqref{p2a}, we consider the linear form $T_\rho$ defined on $\mathcal Q:=\{P_{A_1,\overline{q_1},-}w: w\in\mathcal C^\infty_0(\Omega_1)\}$ by $$T_\rho(P_{A_1,\overline{q_1},-}v):=\overline{\left\langle G_{\rho}, e^{-i\rho \eta\cdot x}v\right\rangle_{H^{-1}_\rho({\mathbb R}^3), H^1_\rho({\mathbb R}^3)}},\quad v\in\mathcal C^\infty_0(\Omega_1).$$ Here and from now on we define the duality bracket $\left\langle \cdot,\cdot\right\rangle_{H^{-1}_\rho({\mathbb R}^3), H^1_\rho({\mathbb R}^3)}$ in the complex sense, which means that $$\left\langle v,w\right\rangle_{H^{-1}_\rho({\mathbb R}^3), H^1_\rho({\mathbb R}^3)}=\left\langle v,w\right\rangle_{L^2({\mathbb R}^3)}=\int_{{\mathbb R}^3}v\overline{w}dx,\quad v\in L^2({\mathbb R}^3),\ w\in H^1({\mathbb R}^3).$$ Applying again \eqref{p2a}, for all $v\in\mathcal C^\infty_0(\Omega_1)$, we obtain $$\begin{aligned}|T_\rho(P_{A_1,\overline{q_1},-}v)|&\leq \norm{G_{\rho}}_{H^{-1}_\rho({\mathbb R}^3)}\norm{e^{-i\rho \eta\cdot x}v}_{H^1_\rho({\mathbb R}^3)}\\ \ &\leq 2\rho\norm{G_{\rho}}_{H^{-1}_\rho({\mathbb R}^3)}\rho^{-1}\norm{v}_{H^1_\rho({\mathbb R}^3)}\\ \ &\leq C\rho\norm{G_{\rho}}_{H^{-1}_\rho({\mathbb R}^3)}\norm{P_{A_1,\overline{q_1},-}v}_{H^{-1}_\rho({\mathbb R}^3)},\end{aligned}$$ with $C>0$ independent of $\rho$. Thus, applying the Hahn-Banach theorem, we deduce that $T_\rho$ admits an extension as a continuous linear form on ${H^{-1}_\rho({\mathbb R}^3)}$ whose norm will be upper bounded by $C\rho\norm{G_{\rho}}_{H^{-1}_\rho({\mathbb R}^3)}$. 
Therefore, there exists $w_{1,\rho}\in H^1_\rho({\mathbb R}^3)$ such that \begin{equation} \label{t4d}\left\langle P_{A_1,\overline{q_1},-}v, w_{1,\rho}\right\rangle_{H^{-1}_\rho({\mathbb R}^3), H^1_\rho({\mathbb R}^3)}=T_\rho(P_{A_1,\overline{q_1},-}v)=\overline{\left\langle G_{\rho}, e^{-i\rho \eta\cdot x}v\right\rangle_{H^{-1}_\rho({\mathbb R}^3), H^1_\rho({\mathbb R}^3)}},\quad v\in\mathcal C^\infty_0(\Omega_1),\end{equation} \begin{equation} \label{t4e}\norm{w_{1,\rho}}_{H^1_\rho({\mathbb R}^3)}\leq C\rho\norm{G_{\rho}}_{H^{-1}_\rho({\mathbb R}^3)}.\end{equation} From \eqref{t4d} and the fact that, for all $x\in\Omega_1$, $G_\rho(x)=F_{1,\rho}(x)$, we obtain $$\begin{aligned}\left\langle P_{A_1,q_1,+}w_{1,\rho},v\right\rangle_{D'(\Omega_1), \mathcal C^\infty_0(\Omega_1)}&=\overline{\left\langle P_{A_1,\overline{q_1},-}v, w_{1,\rho}\right\rangle_{H^{-1}_\rho({\mathbb R}^3), H^1_\rho({\mathbb R}^3)}}\\ \ &=\left\langle G_{\rho}, e^{-i\rho \eta\cdot x}v \right\rangle_{H^{-1}_\rho({\mathbb R}^3), H^1_\rho({\mathbb R}^3)}\\ \ &=\left\langle e^{i\rho \eta\cdot x}F_{1,\rho}, v \right\rangle_{D'(\Omega_1), \mathcal C^\infty_0(\Omega_1)}.\end{aligned}$$ It follows that $w_{1,\rho}$ solves $P_{A_1,q_1,+}w_{1,\rho}=e^{i\rho \eta\cdot x}F_{1,\rho}$ in $\Omega_1$ and $u_1$ given by \eqref{CGO1} is a solution of $\Delta_{A_1}u+q_1u=0$ in $\Omega_1$ lying in $H^1(\Omega_1)$. In addition, from \eqref{t4c} and \eqref{t4e}, we deduce that \begin{equation} \label{tutu}\rho^{-1}\norm{w_{1,\rho}}_{H^1(\Omega_1)}+\norm{w_{1,\rho}}_{L^2(\Omega_1)}\leq 2\rho^{-1}\norm{w_{1,\rho}}_{H^1_\rho({\mathbb R}^3)}\leq C(\norm{A_{1}-A_{1,\rho}}_{L^2({\mathbb R}^3)^3}+\rho^{-\frac{1}{8}})\end{equation} which implies the decay property \eqref{RCGO}. This completes the proof of Theorem \ref{t4}. \section{Uniqueness result} In this section we will use the result of the preceding section in order to complete the proof of Theorem \ref{t1}. 
Namely, under the assumptions of Theorem \ref{t1}, we will show that \eqref{t1a} implies that $dA_1=dA_2$. Then, assuming $A=A_1-A_2\in \mathcal C({\mathbb R}^3)$, we will prove that $q_1=q_2$. For $j=1,2$, we assume that $A_j\in L^\infty({\mathbb R}^3)^3\cap L^2({\mathbb R}^3)^3$ and $q_j\in L^\infty({\mathbb R}^3;\mathbb C)$, with $A_j$ and $q_j$ extended by $0$ on ${\mathbb R}^3\setminus \Omega$. We use here the notation of the previous sections and we assume that $A=A_1-A_2\in L^1({\mathbb R}^3)$. We start with the recovery of the magnetic field. \subsection{Recovery of the magnetic field} In this subsection we will prove that \eqref{t1a} implies that $dA_1=dA_2$. Let us first remark that $A_\rho=A_{1,\rho}-A_{2,\rho}=\chi_\rho*A$ and, since $A\in L^1({\mathbb R}^3)^3$, by density one can check that \begin{equation} \label{t1c}\lim_{\rho\to+\infty}\norm{A_\rho-A}_{L^1({\mathbb R}^3)}=0.\end{equation} We fix $u_1\in H^{1}(\Omega_1)$ a solution of $\Delta_{A_1} u_1+q_1u_1=0$ in $\Omega_1$ and $u_2\in H^{1}(\Omega_1)$ a solution of $\Delta_{A_2} u_2+\overline{q_2}u_2=0$ in $\Omega_1$, both of the form \eqref{CGO1}-\eqref{CGO2} with $\rho>\rho_2$ and with $w_{j,\rho}$ satisfying \eqref{RCGO}. In view of \eqref{closed}, we can see that the restriction of $u_1$ (resp. $u_2$) to $\Omega$ lies in $H^1(\Omega)$ and solves the equation $\Delta_{A_1} u_1+q_1u_1=0$ (resp. $\Delta_{A_2} u_2+\overline{q_2}u_2=0$) in $\Omega$. From now on, we consider the restriction to $\Omega$ of these CGO solutions initially defined on $\Omega_1$. In view of \eqref{t1a}, we can find $v_2\in H^1(\Omega)$ satisfying $\Delta_{A_2} v_2+q_2v_2=0$ with $\tau v_2=\tau u_1$ and $N_{A_1,q_1} u_1=N_{A_2,q_2} v_2$.
Therefore, we have $$\begin{aligned}0=\left\langle N_{A_1,q_1}u_1,\tau u_2\right\rangle-\left\langle N_{A_2,q_2}v_2,\tau u_2\right\rangle&=\left\langle N_{A_1,q_1}u_1,\tau u_2\right\rangle-\overline{\left\langle N_{A_2,\overline{q_2}}u_2,\tau v_2\right\rangle}\\ \ &=\left\langle N_{A_1,q_1}u_1,\tau u_2\right\rangle-\overline{\left\langle N_{A_2,\overline{q_2}}u_2,\tau u_1\right\rangle}\\ \ &=i\int_{{\mathbb R}^3}(A\cdot\nabla u_1)\overline{u_2}dx -i\int_{{\mathbb R}^3}u_1(\overline{A\cdot\nabla u_2})dx+\int_{{\mathbb R}^3}\tilde{q}u_1\overline{u_2}dx,\end{aligned}$$ where $\tilde{q}=|A_2|^2-|A_1|^2+q$, with $q=q_1-q_2$ extended by zero to ${\mathbb R}^3$. According to \eqref{RCGO}, \eqref{cond31} and the fact that $A\in L^1({\mathbb R}^3)$, multiplying this expression by $-i\rho^{-1}2^{-1}$ and sending $\rho\to+\infty$, we find $$\begin{aligned}&\lim_{\rho\to+\infty}\int_{{\mathbb R}^3}(A\cdot(\tilde{\theta}+i\eta))\exp\left(\Phi_{1,\rho}+\overline{\Phi_{2,\rho}}\right)e^{-ix\cdot\xi}dx\\ &=\lim_{\rho\to+\infty}\int_{{\mathbb R}^3}\psi^2(\rho^{-\frac{1}{4}}x_3)(A\cdot(\tilde{\theta}+i\eta))\exp\left(\Phi_{1,\rho}+\overline{\Phi_{2,\rho}}\right)e^{-ix\cdot\xi}dx=0.\end{aligned}$$ Here we use \eqref{tt1} and the fact that, by the Lebesgue dominated convergence theorem, $$\lim_{\rho\to+\infty}\norm{A-\psi^2(\rho^{-\frac{1}{4}}x_3)A}_{L^1({\mathbb R}^3)}=0.$$ Combining this with \eqref{tt1} and \eqref{t1c}, we obtain $$\lim_{\rho\to+\infty}\int_{{\mathbb R}^3}(A_\rho\cdot(\tilde{\theta}+i\eta))\exp\left(\Phi_{1,\rho}+\overline{\Phi_{2,\rho}}\right)e^{-ix\cdot\xi}dx=0.$$ On the other hand, one can easily check that $$\Phi_\rho=\Phi_{1,\rho}+\overline{\Phi_{2,\rho}}=\frac{-i}{2\pi}\int_{{\mathbb R}^2}\frac{(\tilde{\theta}+i\eta)\cdot A_{\rho}(x-s_1\tilde{\theta}-s_2\eta)}{s_1+is_2}ds_1ds_2$$ and we deduce that \begin{equation} \label{t1d}\lim_{\rho\to+\infty}\int_{{\mathbb R}^3}(A_\rho\cdot(\tilde{\theta}+i\eta))e^{\Phi_{\rho}}e^{-ix\cdot\xi}dx=0.\end{equation} Now let us
consider the following intermediate result. \begin{lemma} \label{l3} We have \begin{equation} \label{l3a} \int_{{\mathbb R}^3}(A_\rho\cdot(\tilde{\theta}+i\eta))e^{\Phi_{\rho}}e^{-ix\cdot\xi}dx=(\tilde{\theta}+i\eta)\cdot\left(\int_{{\mathbb R}^3}A_\rho(x)e^{-ix\cdot\xi}dx\right)=(2\pi)^{\frac{3}{2}}(\tilde{\theta}+i\eta)\cdot\mathcal F(A_\rho)(\xi).\end{equation} \end{lemma} \begin{proof} For $A_\rho$ compactly supported this result is well known and one can refer to \cite[Proposition 3.3]{KU} or \cite[Lemma 6.2]{Sa2} for its proof. Since here we deal with non-compactly supported magnetic potentials, a proof of this result is required. From now on, to every $x\in{\mathbb R}^3$, we associate the coordinates $(x'',x_*)\in{\mathbb R}^2\times {\mathbb R}$, with $x''=(x'_1,x'_2)=(x\cdot\tilde{\theta},x\cdot\eta)$ and $x_*=\frac{x\cdot\xi}{|\xi|}$. Recall that supp$(A_\rho)\subset B'_{R+1}\times{\mathbb R}$ and, fixing $\tilde{A}_\rho: (x'',x_*)\mapsto A_\rho(x)$, in a similar way to Subsection 3.1, we find $$\textrm{supp}(\tilde{A}_\rho)\subset (-R-1,R+1)\times\left(-\frac{(R+1)}{|\xi'|},\frac{R+1}{|\xi'|}\right)\times {\mathbb R}\subset B'_{R_1}\times{\mathbb R}.$$ Thus, fixing $\tilde{\Phi}_\rho: (x'',x_*)\mapsto \Phi_\rho(x)$, for $|x''|>R_1$ we have $$\tilde{\Phi}_\rho(x'',x_*)=\frac{-i}{2\pi}\int_{B'_{R_1}}\frac{(\tilde{\theta}+i\eta)\cdot\tilde{A}_\rho(y'',x_*)}{x'_1-y'_1+i(x'_2-y'_2)}dy''.$$ It follows that $$|\tilde{\Phi}_\rho(x'',x_*)|\leq \frac{\norm{A_\rho}_{L^\infty({\mathbb R}^3)}|B'_{R_1}|}{2\pi(|x''|-R_1)},\quad |x''|>R_1,\ x_*\in{\mathbb R}$$ and in particular, for every $x_*\in{\mathbb R}$, we get \begin{equation} \label{l3b}|\tilde{\Phi}_\rho(x'',x_*)|=\underset{|x''|\to+\infty}{\mathcal O}\left(|x''|^{-1}\right).\end{equation} On the other hand, using the fact that $$(\partial_{x'_1}+i\partial_{x'_2})\tilde{\Phi}_\rho(x'',x_*)=(\tilde{\theta}+i\eta)\cdot\nabla \Phi_\rho=-iA_\rho\cdot (\tilde{\theta}+i\eta)$$ and the fact that $A_\rho\in L^1({\mathbb
R}^3)$, by Fubini's theorem we find \begin{equation} \label{l3c}\int_{{\mathbb R}^3}(A_\rho\cdot(\tilde{\theta}+i\eta))e^{\Phi_{\rho}}e^{-ix\cdot\xi}dx=i\int_{{\mathbb R}}\left(\int_{{\mathbb R}^2}(\partial_{x'_1}+i\partial_{x'_2})e^{\tilde{\Phi}_\rho(x'',x_*)}dx''\right)e^{-ix_*|\xi|}dx_*.\end{equation} Moreover, for all $r>0$ fixing $n=(n_1,n_2)$ the outward unit normal vector to $B'_r$, we have $$\int_{|x''|<r}(\partial_{x'_1}+i\partial_{x'_2})e^{\tilde{\Phi}_\rho(x'',x_*)}dx''=\int_{|x''|=r}e^{\tilde{\Phi}_\rho(x'',x_*)}(n_1+in_2)d\sigma(x'').$$ Applying \eqref{l3b}, we find $$e^{\tilde{\Phi}_\rho(x'',x_*)}=1+\tilde{\Phi}_\rho(x'',x_*)+\underset{|x''|\to+\infty}{\mathcal O}\left(|x''|^{-2}\right)$$ and it follows \begin{equation} \label{l3d}\int_{|x''|<r}(\partial_{x'_1}+i\partial_{x'_2})e^{\tilde{\Phi}_\rho(x'',x_*)}dx''= \int_{|x''|=r}(n_1+in_2)d\sigma(x'')+\int_{|x''|=r}\tilde{\Phi}_\rho(x'',x_*)(n_1+in_2)d\sigma(x'')+\underset{r\to+\infty}{\mathcal O}\left(r^{-1}\right).\end{equation} In addition, we get $$\int_{|x''|=r}(n_1+in_2)d\sigma(x'')=\int_{|x''|<r}(\partial_{x'_1}+i\partial_{x'_2})1dx''=0,$$ $$\int_{|x''|=r}\tilde{\Phi}_\rho(x'',x_*)(n_1+in_2)d\sigma(x'')=\int_{|x''|<r}(\partial_{x'_1}+i\partial_{x'_2})\tilde{\Phi}_\rho(x'',x_*)dx''$$ and sending $r\to+\infty$ in \eqref{l3d}, we obtain $$\begin{aligned}\int_{{\mathbb R}^3}(A_\rho\cdot(\tilde{\theta}+i\eta))e^{\Phi_{\rho}}e^{-ix\cdot\xi}dx&=i\int_{{\mathbb R}}\left(\int_{{\mathbb R}^2}(\partial_{x'_1}+i\partial_{x'_2})\tilde{\Phi}_\rho(x'',x_*)dx''\right)e^{-ix_*|\xi|}dx_*\\ \ &=\int_{{\mathbb R}}\left(\int_{{\mathbb R}^2}(\tilde{\theta}+i\eta)\cdot\tilde{A}_\rho(x'',x_*)dx''\right)e^{-ix_*|\xi|}dx_*.\end{aligned}$$ From this identity, we deduce \eqref{l3a}. 
\end{proof} Combining \eqref{t1c} and \eqref{t1d}-\eqref{l3a}, we obtain $$(\tilde{\theta}+i\eta)\cdot\mathcal F(A)(\xi)=\lim_{\rho\to+\infty}(\tilde{\theta}+i\eta)\cdot\mathcal F(A_\rho)(\xi)=0.$$ In the same way, replacing $\eta$ by $-\eta$ in our analysis, we find $(\tilde{\theta}-i\eta)\cdot\mathcal F(A)(\xi)=0$ and it follows that $\tilde{\theta}\cdot\mathcal F(A)(\xi)=\eta\cdot\mathcal F(A)(\xi)=0$. Combining this with the fact that $(\tilde{\theta},\eta)$ is an orthonormal basis of $\xi^\bot=\{y\in{\mathbb R}^3:\ y\cdot\xi=0\}$, we find \begin{equation} \label{t1e}\zeta \cdot\mathcal F(A)(\xi)=0,\quad \zeta\in\xi^\bot.\end{equation} Moreover, for $1\leq j<k\leq3$, fixing $\zeta=\xi_ke_j-\xi_j e_k,$ with $$e_j=(0,\ldots,0,\underbrace{1}_{\textrm{position\ } j},0,\ldots0),\quad e_k=(0,\ldots,0,\underbrace{1}_{\textrm{position } k},0,\ldots0),$$ \eqref{t1e} implies \begin{equation} \label{t1f}\xi_k \mathcal F(a_j)(\xi)-\xi_j \mathcal F(a_k)(\xi)=0,\quad 1\leq j<k\leq3,\end{equation} where $A=(a_1,a_2,a_3)$. Recall that so far we have proved \eqref{t1f} for any $\xi=(\xi',\xi_3)\in{\mathbb R}^2\times{\mathbb R}$ with $\xi'\neq0$ and $\xi_3\neq0$. Since $A\in L^1({\mathbb R}^3)^3$, we can extend this identity to any $\xi\in{\mathbb R}^3$ by using the continuity of $\mathcal F(A)$. Then, we deduce from \eqref{t1f} that $$-i\mathcal F(\partial_{x_k}a_j-\partial_{x_j}a_k)(\xi)=\xi_k \mathcal F(a_j)(\xi)-\xi_j \mathcal F(a_k)(\xi)=0,\quad 1\leq j<k\leq3,\ \xi\in{\mathbb R}^3.$$ This proves that, in the sense of distributions, we have $dA=0$ and $dA_1=dA_2$. \subsection{Recovery of the electric potential} In this subsection we assume that \eqref{t1a}, $A\in L^\infty({\mathbb R}^3)^3$ and $dA=0$ are fulfilled, and we will prove that $q_1=q_2$. We start with the following lemma. \begin{lemma} \label{l4} Let $A=(a_1,a_2,a_3)\in L^\infty({\mathbb R}^3)^3$.
Assume that $dA=0$, and fix \begin{equation} \label{l4aa}\varphi(x):=\int_0^1A(sx)\cdot xds,\quad x\in{\mathbb R}^3.\end{equation} Then, we have $\varphi\in W^{1,\infty}_{loc}({\mathbb R}^3)$ and $\nabla \varphi=A$. \end{lemma} \begin{proof} Note first that since $A\in L^\infty({\mathbb R}^3)^3$, we have $\varphi\in L^\infty_{loc}({\mathbb R}^3)$. Let $\psi\in \mathcal C^\infty_0({\mathbb R}^3)$ and consider $j\in\{1,2,3\}$. We have $$\begin{aligned}\left\langle \partial_{x_j}\varphi,\psi\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}&=-\left\langle \varphi,\partial_{x_j}\psi\right\rangle_{L^2({\mathbb R}^3)}\\ \ &=-\sum_{k=1}^{3}\int_{{\mathbb R}^3}\int_0^1 x_ka_k(sx)\partial_{x_j}\psi(x)dsdx\\ \ &=-\sum_{k=1}^{3}\int_0^1 \int_{{\mathbb R}^3}x_ka_k(sx)\partial_{x_j}\psi(x)dxds.\end{aligned}$$ Applying the change of variable $y=sx$ and then $t=s^{-1}$, we obtain $$\begin{aligned}\left\langle \partial_{x_j}\varphi,\psi\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}&=-\sum_{k=1}^{3}\int_0^1 s^{-4}\left(\int_{{\mathbb R}^3}y_ka_k(y)\partial_{x_j}\psi(s^{-1}y)dy\right)ds\\ \ &=-\sum_{k=1}^{3}\int_1^{+\infty} t^{2}\int_{{\mathbb R}^3}y_ka_k(y)\partial_{x_j}\psi(ty)dydt\\ \ &=\int_1^{+\infty} t\left\langle \partial_{x_j} \left(\sum_{k=1}^{3}x_ka_k\right),\psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}dt,\end{aligned}$$ with, for $\tau\in{\mathbb R}$, $\psi(\tau\cdot):=x\mapsto\psi(\tau x)$.
On the other hand, we have $$\begin{aligned}&\left\langle \partial_{x_j} \left(\sum_{k=1}^{3}x_ka_k\right),\psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}\\ &=\left\langle a_j,\psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}+\left\langle \left(\sum_{k=1}^{3}x_k\partial_{x_j}a_k\right),\psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}\end{aligned}$$ and using the fact that $dA=0$, we get $$\begin{aligned}&\left\langle \partial_{x_j} \left(\sum_{k=1}^{3}x_ka_k\right),\psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}\\ &=\left\langle a_j,\psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}+\left\langle \left(\sum_{k=1}^{3}x_k\partial_{x_k}a_j\right),\psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}\\ &=-2\left\langle a_j,\psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}-t\left\langle a_j, \left(\sum_{k=1}^{3}x_k\partial_{x_k}\psi(t\cdot)\right)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}. 
\end{aligned}$$ It follows that $$\begin{aligned}&\left\langle \partial_{x_j}\varphi,\psi\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}\\ &=-\int_1^{+\infty} 2t\left\langle a_j,\psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}dt-\int_1^{+\infty} t^2\partial_t\left\langle a_j, \psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}dt\\ &=-\int_1^{+\infty}\partial_t\left[t^2\left\langle a_j, \psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}\right]dt\\ &=\left\langle a_j, \psi\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}-\lim_{t\to+\infty} t^2\left\langle a_j, \psi(t\cdot)\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}=\left\langle a_j, \psi\right\rangle_{D'({\mathbb R}^3),\mathcal C^\infty_0({\mathbb R}^3)}.\end{aligned}$$ This proves that $\nabla\varphi=A$, which completes the proof of the lemma.\end{proof} According to Lemma \ref{l4}, the function $\varphi\in W^{1,\infty}_{loc}({\mathbb R}^3)$ given by \eqref{l4aa} satisfies $\nabla\varphi=A$. Since $\omega$ is simply connected, $\Omega_1=\omega\times{\mathbb R}$ is also simply connected and ${\mathbb R}^3\setminus\Omega_1$ is connected. Therefore, since $A=0$ in ${\mathbb R}^3\setminus\Omega_1$, after subtracting a suitable constant from $\varphi$ we may assume that $\varphi=0$ on ${\mathbb R}^3\setminus\Omega_1$. Thus, we have $\varphi_{|\partial\Omega_1}=0$. Note also that, by enlarging $\omega$ if necessary, we may assume that $\Omega_1$ contains a neighborhood of $\overline{\Omega}$.
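As a quick numerical illustration of Lemma \ref{l4}, the following sketch works in a smooth toy case (an assumption made only for this illustration: $A=\nabla\phi$ for a hand-picked smooth $\phi$ with $\phi(0)=0$, so that $dA=0$ holds automatically). Since $\frac{d}{ds}\phi(sx)=\nabla\phi(sx)\cdot x=A(sx)\cdot x$, the radial integral \eqref{l4aa} should recover $\phi$, and hence its gradient should recover $A$:

```python
import numpy as np

# Toy potential with phi(0) = 0 and its exact gradient A = grad(phi).
phi = lambda x: np.sin(x[0]) * x[1] + x[2] ** 2
A = lambda x: np.array([np.cos(x[0]) * x[1],     # d(phi)/dx1
                        np.sin(x[0]),            # d(phi)/dx2
                        2.0 * x[2]])             # d(phi)/dx3

def varphi(x, n=20000):
    """Midpoint-rule approximation of varphi(x) = int_0^1 A(sx).x ds."""
    s = (np.arange(n) + 0.5) / n
    return np.mean([A(si * x) @ x for si in s])

x0 = np.array([0.7, -1.3, 0.4])
# varphi recovers phi (up to the constant phi(0) = 0)...
assert abs(varphi(x0) - phi(x0)) < 1e-6
# ...hence grad(varphi) recovers A, checked here by central differences.
eps = 1e-5
for i in range(3):
    e = np.zeros(3); e[i] = eps
    assert abs((varphi(x0 + e) - varphi(x0 - e)) / (2 * eps) - A(x0)[i]) < 1e-5
```

The same radial (Poincaré-type) formula underlies the gauge transformation used below, where $\varphi$ built from $A=A_1-A_2$ vanishes outside $\Omega_1$.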
Now, for $A\in L^\infty(\Omega_1)^3$ and $q\in L^\infty(\Omega_1)$ let us consider the set of data $$\mathcal D_{1,A,q}:=\{(\tau_1 u, N_{1,A,q}u):\ u\in H^1(\Omega_1),\ \Delta_Au+qu=0\},$$ where $\tau_1$ is the extension of the map $u\mapsto u_{|\partial\Omega_1}$ and, for any solution $u\in H^1(\Omega_1)$ of $\Delta_Au+qu=0$ on $\Omega_1$, $N_{1,A,q}u$ denotes the unique element of $H^{-\frac{1}{2}}(\partial\Omega_1)$ satisfying $$\left\langle N_{1,A,q}u,\tau_1 g\right\rangle_{H^{-\frac{1}{2}}(\partial\Omega_1),H^{\frac{1}{2}}(\partial\Omega_1)}=-\int_{\Omega_1} (\nabla+iA)u\cdot \overline{(\nabla+iA)g}dx+\int_{\Omega_1} qu\overline{g}dx,\quad g\in H^1(\Omega_1).$$ Repeating some arguments of \cite[Proposition 3.4]{KU} (see also \cite[Lemma 4.2]{Sa1}), one can easily check the following. \begin{proposition}\label{p4} For $j=1,2$, let $A_j\in L^\infty(\Omega_1)^3$, $q_j\in L^\infty(\Omega_1)$ and assume that $$A_1(x)=A_2(x),\quad q_1(x)=q_2(x),\quad x\in\Omega_1\setminus\Omega.$$ Then the condition \eqref{t1a} implies that $\mathcal D_{1,A_1,q_1}=\mathcal D_{1,A_2,q_2}$. \end{proposition} In view of this result and the fact that $A_1=A_2=0$ and $q_1=q_2=0$ on $\Omega_1\setminus \Omega$, we deduce that $\mathcal D_{1,A_1,q_1}=\mathcal D_{1,A_2,q_2}$. Moreover, using the fact that $A_1-A_2=\nabla\varphi$ with $\varphi\in W^{1,\infty}_{loc}(\Omega_1)$ satisfying $\varphi_{|{\mathbb R}^3\setminus\Omega_1}=0$, we obtain $$\mathcal D_{1,A_1,q_2}=\mathcal D_{1,A_2+\nabla\varphi,q_2}=\mathcal D_{1,A_2,q_2}=\mathcal D_{1,A_1,q_1}.$$ Therefore, repeating the arguments of Section 4.1 with $A_1=A_2$, we find \begin{equation} \label{t1g}\lim_{\rho\to+\infty}\int_{{\mathbb R}^3} q(x)\psi^2(\rho^{-\frac{1}{4}} x_3)e^{-ix\cdot \xi}dx=0,\end{equation} for all $\xi=(\xi',\xi_3)\in{\mathbb R}^2\times{\mathbb R}$ with $\xi'\neq0$ and $\xi_3\neq0$.
Here we have used the fact that, following our definition, $A_{1,\rho}=A_{2,\rho}$, $\overline{\Phi_{2,\rho}}=-\Phi_{1,\rho}$ and $b_{1,\rho}\overline{b_{2,\rho}}=1$. In \eqref{t1g}, we can assume for instance that $\psi=1$ on $[-1,1]$. We fix $q_\rho(x',x_3)=q(x',x_3)\psi^2(\rho^{-\frac{1}{4}} x_3)$, $(x',x_3)\in{\mathbb R}^2\times{\mathbb R}$, and we remark that $$\begin{aligned}\norm{\mathcal F(q_\rho)-\mathcal F(q)}_{L^2({\mathbb R}^3)}^2=\norm{q_\rho-q}_{L^2({\mathbb R}^3)}^2&\leq \int_{{\mathbb R}^3}(1-\psi^2(\rho^{-\frac{1}{4}} x_3))|q(x)|^2dx\\ \ &\leq \int_{|x_3|\geq \rho^{\frac{1}{4}}}\left(\int_{{\mathbb R}^2}|q(x',x_3)|^2dx'\right)dx_3.\end{aligned}$$ Combining this with the fact that, according to Fubini's theorem, $$x_3\mapsto \left(\int_{{\mathbb R}^2}|q(x',x_3)|^2dx'\right)\in L^1({\mathbb R}),$$ we deduce that $$\lim_{\rho\to+\infty}\norm{\mathcal F(q_\rho)-\mathcal F(q)}_{L^2({\mathbb R}^3)}=0.$$ Thus, there exists a sequence $(\rho_k)_{k\in\mathbb N}$ such that $\rho_k\to+\infty$ and, for a.e. $\xi\in{\mathbb R}^3$, we have $$\lim_{k\to+\infty}\mathcal F(q_{\rho_k})(\xi)=\mathcal F(q)(\xi).$$ Combining this with \eqref{t1g}, we obtain that $\mathcal F(q)=0$, which implies that $q=0$ and $q_1=q_2$. This completes the proof of Theorem \ref{t1}. \section{Recovery from measurements on a bounded portion of $\partial\Omega$} In this section we will prove Theorem \ref{c1}, and we assume that the conditions of this theorem are fulfilled. Recall that $\tau_0$ denotes the extension of the map $u\mapsto u_{|\partial\Omega}$ to $u\in H^1(\Omega)$, which takes values in $H^{\frac{1}{2}}_{loc}(\partial\Omega)$. Consider the sets of functions $$Q_{A,q}:=\{u\in H^1(\Omega):\ \Delta_{A} u+qu=0\},$$ $$ Q_{A,q,r}:=\{ u\in Q_{A,q}: \ \textrm{supp}(\tau_0 u)\subset S_r\}.$$ Here we recall that $S_r=\partial\Omega\cap(\overline{\omega}\times[-r,r])$. We have the following density result. \begin{proposition}\label{l5} The space $Q_{A_1,q_1,r}$ $($resp.
$Q_{A_2,\overline{q_2},r}$$)$ is dense in $Q_{A_1,q_1}$ $($resp. $Q_{A_2,\overline{q_2}}$$)$ for the topology induced by $L^2(\Omega\setminus(\Omega_{-}\cup\Omega_{+}))$.\end{proposition} \begin{proof} The proofs of these two statements being similar, we will only show the density of $Q_{A_1,q_1,r}$ in $Q_{A_1,q_1}$. We will prove the proposition by contradiction. Assume that $Q_{A_1,q_1,r}$ is not dense in $Q_{A_1,q_1}$. Then, there exist $h\in L^2(\Omega\setminus(\Omega_{-}\cup\Omega_{+}))$ and $v_0\in Q_{A_1,q_1}$ such that \begin{equation} \label{l5a}\int_{\Omega\setminus(\Omega_{-}\cup\Omega_{+})} h\overline{v}dx=0,\quad v\in Q_{A_1,q_1,r},\end{equation} \begin{equation} \label{l5b}\int_{\Omega\setminus(\Omega_{-}\cup\Omega_{+})} h\overline{v_0}dx\neq0.\end{equation} Let us mention that, in contrast to several other related density results (e.g. \cite[Proposition 3.1]{KLU} and \cite[Lemma 6.1]{Ki4}), we consider a general unbounded Lipschitz domain, so we cannot apply Green's formula in the usual sense. To avoid this difficulty, we proceed differently from these related works. From now on, we extend $h$ by $0$ to $\Omega$. In view of Assumption 1, there exists $u\in H^1_0(\Omega)$ such that $\Delta_{A_1}u + \overline{q_1}u=h$.
Then, condition \eqref{l5a} implies \begin{equation} \label{l5c}\int_{\Omega} (\Delta_{A_1} + \overline{q_1} ) u\overline{v}dx=0,\quad v\in Q_{A_1,q_1,r}.\end{equation} Moreover, for any $\varphi\in\mathcal C^\infty_0(\Omega)$ and any $w\in H^1(\Omega)$, we have \begin{equation} \label{ll5aa}\begin{aligned} 2i\int_\Omega (A_1\cdot\nabla\varphi)\overline{w}dx &=2i\overline{\left\langle wA_1, \nabla\varphi \right\rangle_{\left(\mathcal C^\infty_0(\Omega)^3\right)',\mathcal C^\infty_0(\Omega)^3}}\\ \ &=-2i\overline{\left\langle \textrm{div}(wA_1), \varphi \right\rangle_{D'(\Omega),\mathcal C^\infty_0(\Omega)}}\\ \ &=-2i\int_\Omega \textrm{div}(A_1)\varphi\overline{w} dx+\int_\Omega \varphi (\overline{2iA_1\cdot\nabla w}) dx.\end{aligned}\end{equation} By density we can extend this identity to $\varphi\in H^1_0(\Omega)$. Combining this with the fact that $u\in H^1_0(\Omega)$, for any $v\in Q_{A_1,q_1,r}$, we obtain \begin{equation} \label{ll5a}\begin{aligned}\int_\Omega \Delta u\overline{v}dx-\int_\Omega u\overline{\Delta v}dx&=\int_{\Omega} (\Delta_{A_1} + \overline{q_1}) u\overline{v}dx-\int_{\Omega} u \overline{(\Delta_{A_1} + q_1 )v}dx\\ \ &=\int_{\Omega\setminus(\Omega_{-}\cup\Omega_{+})} h\overline{v}dx=0.\end{aligned}\end{equation} On the other hand, in view of Assumption 1, for any $F\in\mathcal C^\infty_0({\mathbb R}^3)$, satisfying supp$(F_{|\partial\Omega})\subset S_r$, we can define $w_{F}\in H^1_0(\Omega)$ solving $\Delta_{A_1}w_F+q_1w_F=-\Delta_{A_1}F+q_1F$ and $v=w_F+F\in Q_{A_1,q_1,r}$. 
Using this choice for the element $v\in Q_{A_1,q_1,r}$ in \eqref{ll5a}, we deduce that \begin{equation} \label{ll5b}\int_\Omega \Delta u(\overline{w_F+F})dx-\int_\Omega u(\overline{\Delta w_F+\Delta F})dx=0.\end{equation} In addition, since $u\in H^1_0(\Omega)$ and $w_F\in H^1_0(\Omega)$, one can check by density that $$\int_\Omega \Delta u\overline{w_F}dx-\int_\Omega u\overline{\Delta w_F}dx=-\int_\Omega \nabla u\cdot\overline{\nabla w_F}dx+\int_\Omega \nabla u\cdot\overline{\nabla w_F}dx=0.$$ Combining this with \eqref{ll5b}, we get \begin{equation} \label{ll5c}\int_\Omega \Delta u\overline{F}dx-\int_\Omega u\overline{\Delta F}dx=0,\quad F\in\{G\in\mathcal C^\infty_0({\mathbb R}^3): \textrm{supp}(G_{|\partial\Omega})\subset S_r\}.\end{equation} We fix an open set $\gamma_1$ of $\partial\Omega$ such that $\gamma_1\subset (S_r\setminus[\partial\Omega\cap (\overline{\omega}\times[\delta-r,r-\delta])])$. Then, we consider a bounded subset $\Omega_*$ of ${\mathbb R}^3\setminus\Omega$ with nonempty interior such that $\Omega_*\cap\partial\Omega\subset\gamma_1$ and such that $\Omega_{-,*}:=\Omega_{-}\cup\Omega_*$ is an open connected set of ${\mathbb R}^3$. Applying \eqref{ll5aa} and \eqref{ll5c}, we deduce that the extension of $u$ by zero to $\Omega_{-,*}$ satisfies $$ \left\{ \begin{array}{l} (\Delta_{A_1} + \overline{q_1} )u = 0\ \ \mbox{in}\ \Omega_{-,*},\\ u\in H^1(\Omega_{-,*}),\\ u_{|\Omega_*} = 0. \end{array} \right. $$ Then, applying the unique continuation property for elliptic equations (e.g. \cite[Theorem 1.1]{GL} and \cite[Theorem 1]{SS}), we deduce that $u_{|\Omega_{-}}=0$. In the same way, we can prove that $u_{|\Omega_{+}}=0$. Using these properties, we would like to prove the following identity \begin{equation} \label{ll5d}\int_{\Omega}\Delta_{A_1}u\overline{v_0}dx=\int_{\Omega}u\overline{\Delta_{A_1}v_0}dx,\end{equation} where we recall that $v_0$ satisfies \eqref{l5b}.
For this purpose, we first recall that in a similar way to \eqref{ll5a}, we can show that $$\int_\Omega \Delta u\overline{v_0}dx-\int_\Omega u\overline{\Delta v_0}dx=\int_{\Omega} \Delta_{A_1} u\overline{v_0}dx-\int_{\Omega} u \overline{\Delta_{A_1} v_0}dx.$$ Thus, we only need to prove that \begin{equation} \label{ll5e}\int_{\Omega}\Delta u\overline{v_0}dx=\int_{\Omega}u\overline{\Delta v_0}dx,\end{equation} for showing \eqref{ll5d}. Let $\varphi_1,\varphi_2\in\mathcal C^\infty_0({\mathbb R}^3)$ be such that $\varphi_1=1$ on $\overline{\omega}\times\left[\frac{\delta}{2}-r,r-\frac{\delta}{2}\right]$, $\varphi_2=1$ on a neighborhood of supp$(\varphi_1)$ and supp$(\varphi_2)\cap\partial\Omega\subset (\overline{\omega}\times\left[\frac{\delta}{3}-r,r-\frac{\delta}{3}\right])$. Since supp$(\varphi_2 v_0)\cap\partial\Omega\subset S_r$ and $$\Delta_{A_1}(\varphi_2 v_0)=-q_1\varphi_2v_0 + 2\nabla\varphi_2\cdot\nabla v_0+(\Delta_{A_1}\varphi_2)v_0\in L^2(\Omega),$$ in a similar way to \eqref{ll5c}, we can apply Assumption 1 and \eqref{l5a} in order to get \begin{equation} \label{ll5f}\int_\Omega \Delta u\overline{\varphi_2 v_0}dx-\int_\Omega u\overline{\Delta (\varphi_2v_0)}dx=0.\end{equation} In addition, using the fact that $\varphi_2=1$ on a neighborhood of supp$(\varphi_1)$, we get \begin{equation} \label{ll5g}\int_{\Omega}\Delta u(\overline{(1-\varphi_2)v_0})dx=\int_{\Omega}\Delta [(1-\varphi_1)u](\overline{(1-\varphi_2)v_0})dx.\end{equation} On the other hand, using the fact that $$\Omega_-\cup \left(\omega\times\left[\frac{\delta}{2}-r,r-\frac{\delta}{2}\right]\cap\Omega\right)\cup\Omega_+$$ corresponds to the intersection between a neighborhood of $\partial\Omega$ and $\Omega$, with the fact that \begin{equation} \label{ll5gg}(1-\varphi_1)u(x)=0,\quad x\in \Omega_-\cup \left(\omega\times\left[\frac{\delta}{2}-r,r-\frac{\delta}{2}\right]\cap\Omega\right)\cup\Omega_+,\end{equation} we deduce that the function $(1-\varphi_1)u$ extended by zero to ${\mathbb R}^3$, 
satisfies $\nabla[(1-\varphi_1)u]\in L^2({\mathbb R}^3)$ and div$(\nabla[(1-\varphi_1)u])=\Delta[(1-\varphi_1)u]\in L^2({\mathbb R}^3)$. Moreover, combining \eqref{ll5gg} with the arguments used in the proof of \cite[Theorem 3.4 page 223]{EE}, we can find a sequence of functions $(G_k)_{k\in\mathbb N}$ lying in $\mathcal C^\infty_0(\Omega)^3$ such that $$\lim_{k\to+\infty}\norm{G_k-\nabla[(1-\varphi_1)u]}_{L^2(\Omega)}=\lim_{k\to+\infty}\norm{\textrm{div}(G_k)-\Delta[(1-\varphi_1)u]}_{L^2(\Omega)}=0.$$ Then, we have $$\begin{aligned}\int_{\Omega}\textrm{div}(G_k) (\overline{(1-\varphi_2)v_0})dx&=\overline{\left\langle (1-\varphi_2)v_0,\textrm{div}(G_k)\right\rangle_{D'(\Omega),\mathcal C^\infty_0(\Omega)}}\\ \ &=-\overline{\left\langle \nabla [(1-\varphi_2)v_0],G_k\right\rangle_{\left(\mathcal C^\infty_0(\Omega)^3\right)',\mathcal C^\infty_0(\Omega)^3}}\\ \ &=-\int_{\Omega}G_k \cdot(\overline{\nabla[(1-\varphi_2)v_0]})dx\end{aligned}$$ and sending $k\to+\infty$, we obtain $$\int_{\Omega}\Delta [(1-\varphi_1)u](\overline{(1-\varphi_2)v_0})dx=-\int_{\Omega}\nabla [(1-\varphi_1)u]\cdot(\overline{\nabla[(1-\varphi_2)v_0]})dx.$$ Then, using the fact that $(1-\varphi_1)u\in H^1_0(\Omega)$, we find $$\int_{\Omega}\Delta [(1-\varphi_1)u](\overline{(1-\varphi_2)v_0})dx=-\int_{\Omega}\nabla [(1-\varphi_1)u]\cdot(\overline{\nabla[(1-\varphi_2)v_0]})dx=\int_{\Omega}[(1-\varphi_1)u](\overline{\Delta[(1-\varphi_2)v_0]})dx.$$ Combining this with \eqref{ll5g} and applying again the fact that $\varphi_2=1$ on a neighborhood of supp$(\varphi_1)$, we find $$\int_{\Omega}\Delta u(\overline{(1-\varphi_2)v_0})dx=\int_{\Omega}[(1-\varphi_1)u](\overline{\Delta[(1-\varphi_2)v_0]})dx=\int_{\Omega}u(\overline{\Delta[(1-\varphi_2)v_0]})dx.$$ From this identity and \eqref{ll5f}, we deduce \eqref{ll5e} and, in the same way, \eqref{ll5d}.
Applying \eqref{ll5d}, we find $$\int_{\Omega} h\overline{v_0}dx=\int_{\Omega} (\Delta_{A_1} + \overline{q_1} ) u\overline{v_0}dx=\int_{\Omega} u\overline{(\Delta_{A_1} + q_1 )v_0}dx=0.$$ This contradicts \eqref{l5b}. We have completed the proof of the proposition.\end{proof} Applying this proposition, we will complete the proof of Theorem \ref{c1}. \\ \ \\ \textbf{Proof of Theorem \ref{c1}.} Let $u_1\in Q_{A_1,q_1,r}$ and $u_2\in Q_{A_2,\overline{q_2},r}$. In a similar way to Section 4, we can prove that \eqref{c1c} implies \begin{equation} \label{c1e}i\int_{\Omega}(A\cdot\nabla u_1)\overline{u_2}dx -i\int_{\Omega}u_1(\overline{A\cdot\nabla u_2})dx+\int_{\Omega}\tilde{q}u_1\overline{u_2}dx =0,\end{equation} with $A=A_1-A_2$ and $\tilde{q}=|A_2|^2-|A_1|^2+q_1-q_2$. On the other hand, according to \eqref{c1d}, we have $$\int_{\Omega} u_1(A\cdot\overline{\nabla u_2})dx=-\int_{\Omega}(A\cdot\nabla u_1)\overline{ u_2}dx-\int_{\Omega}\textrm{div}(A) u_1\overline{ u_2}dx.$$ Combining this with \eqref{c1e}, we obtain $$2i\int_{\Omega}(A\cdot\nabla u_1)\overline{u_2}dx +\int_{\Omega}[\tilde{q}+i\textrm{div}(A)]u_1\overline{u_2}dx =0.$$ Then, \eqref{c1b} implies $$2i\int_{\Omega\setminus(\Omega_-\cup\Omega_+)}(A\cdot\nabla u_1)\overline{u_2}dx +\int_{\Omega\setminus(\Omega_-\cup\Omega_+)}[\tilde{q}+i\textrm{div}(A)]u_1\overline{u_2}dx =0.$$ Applying Lemma \ref{l5}, we deduce by density that this last identity holds true for any $u_1\in Q_{A_1,q_1,r}$ and any $u_2\in Q_{A_2,\overline{q_2}}$. Then applying again \eqref{c1d} and \eqref{c1b}, we deduce that \eqref{c1e} holds true for any $u_1\in Q_{A_1,q_1,r}$ and any $u_2\in Q_{A_2,\overline{q_2}}$. In the same way, applying \eqref{c1d} and \eqref{c1b}, we can prove that \eqref{c1e} holds true for any $u_1\in Q_{A_1,q_1}$ and any $u_2\in Q_{A_2,\overline{q_2}}$. Finally, choosing $u_1,u_2$ in a similar way to Section 4, we can deduce that $dA_1=dA_2$.
Then by repeating the arguments at the end of Section 4, we deduce that, for $q_1-q_2\in L^2(\Omega)$, we have $q_1=q_2$.\qed \section{The partial data result} This section is devoted to the proof of Theorem \ref{t6}. For all $y\in\mathbb S^{1}$, $r>0$, we set \[\partial\omega_{+,r,y}=\{x\in\partial\omega:\ \nu(x)\cdot y>r\},\quad\partial\omega_{-,r,y}=\{x\in\partial\omega:\ \nu(x)\cdot y\leq r\}.\] We assume that $\Omega=\omega\times{\mathbb R}$ and, without loss of generality, that there exists $\varepsilon>0$ such that for any $\theta\in\{y\in\mathbb S^{1}:|y-\theta_0|\leq\varepsilon\} $ we have $\partial\omega_{-,\varepsilon,\theta}\subset V'$. We consider $\rho >\max(\rho_2,\rho_1') $, with $\rho_1'$ given in Corollary \ref{c2} and $\rho_2$ defined in Proposition \ref{p2}, and we fix $\theta\in\{y\in\mathbb S^{1}:|y-\theta_0|\leq\varepsilon\} $, $\xi:=(\xi',\xi_3)\in{\mathbb R}^3$ satisfying $\xi_3\neq0$ and $\xi'\in\theta^{\bot}\setminus\{0\}$. Then, we fix $u_1\in H^{1}(\Omega)$ a solution of $\Delta_{A_1} u_1+q_1u_1=0$ in $\Omega$ and $u_2\in H^{1}(\Omega)$ a solution of $\Delta_{A_2} u_2+\overline{q_2}u_2=0$ in $\Omega$ of the form \eqref{CGO1}-\eqref{CGO2} with $\rho>\rho_2$ and with $w_{j,\rho}$ satisfying \eqref{RCGO}.
Following the argument of Section 3, used to prove the decay property of $w_{j,\rho}$ (given for $j=1$ by \eqref{tutu}), we can show that $$\rho^{-1}\norm{w_{j,\rho}}_{H^1(\Omega)}+\norm{w_{j,\rho}}_{L^2(\Omega)}\leq C(\norm{A_{j}-A_{j,\rho}}_{L^2({\mathbb R}^3)^3}+\rho^{-\frac{1}{8}})$$ and, assuming that $\rho^{-\frac{1}{8}}$ decays faster than $\norm{A_{j}-A_{j,\rho}}_{L^2({\mathbb R}^3)^3}$, we get \begin{equation} \label{t6f}\rho^{-1}\norm{w_{j,\rho}}_{H^1(\Omega)}+\norm{w_{j,\rho}}_{L^2(\Omega)}\leq C\norm{A_{j}-A_{j,\rho}}_{L^2({\mathbb R}^3)^3}.\end{equation} In view of \eqref{t6a}, there exists $v_2\in H^1(\Omega)$ satisfying $\Delta_{A_2} v_2+q_2v_2=0$ and $\tau v_2=\tau u_1$, ${N_{A_2,q_2}v_2}_{|V}={N_{A_1,q_1}u_1}_{|V}$. Combining this with \eqref{c1d}, we deduce that $u=v_2-u_1$ solves the boundary value problem \begin{equation} \label{eq4} \left\{\begin{array}{ll} \Delta_{A_2} u+q_2u=2iA\cdot \nabla u_1+(q+i\textrm{div}(A)+|A_2|^2-|A_1|^2)u_1 &\mbox{in}\ \Omega, \\ u=0 & \mathrm{on}\ \partial \Omega.\\ \end{array}\right. \end{equation} In particular, we have $$\Delta u=-2iA_2\cdot \nabla u-(q_2+i\textrm{div}(A_2)-|A_2|^2)u+ 2iA\cdot \nabla u_1+(q+i\textrm{div}(A)+|A_2|^2-|A_1|^2)u_1\in L^2(\Omega)$$ and, in view of \cite[Lemma 2.2]{CKS}, we deduce that $u\in H^2(\Omega)$. Now let us show that $\partial_\nu u_{| V}=0$.
We fix $w\in H^2(\Omega)$ satisfying supp$( w_{|\partial\Omega})\subset V$ and using the fact that ${N_{A_2,q_2}v_2}_{|V}={N_{A_1,q_1}u_1}_{|V}$, we get $$\begin{aligned}0&=\left\langle N_{A_2,q_2}v_2,\tau w\right\rangle-\left\langle N_{A_1,q_1}u_1, \tau w\right\rangle\\ \ &=\int_\Omega (\nabla+iA_1)u_1\cdot \overline{(\nabla+iA_1)w}dx-\int_\Omega q_1u_1\overline{w}dx-\int_\Omega (\nabla+iA_2)v_2\cdot \overline{(\nabla+iA_2)w}dx+\int_\Omega q_2v_2\overline{w}dx\\ \ &=-\int_\Omega (\nabla+iA_2)u\cdot \overline{(\nabla+iA_2)w}dx+\int_\Omega q_2u\overline{w}dx+\int_\Omega [iu_1A\cdot\overline{\nabla w}-i(A\cdot\nabla u_1)\overline{w}-(|A_2|^2-|A_1|^2+q)u_1\overline{w}]dx.\end{aligned}$$ Applying \eqref{c1d} and the fact that $u\in H^1_0(\Omega)$, we get $$\begin{aligned}&\int_\Omega [iu_1A\cdot\overline{\nabla w}-i(A\cdot\nabla u_1)\overline{w}-(|A_2|^2-|A_1|^2+q)u_1\overline{w}]dx\\ &=-2i\int_\Omega (A\cdot\nabla u_1)\overline{w}dx-i\int_\Omega \textrm{div}(A) u_1\overline{w}dx-\int_\Omega (|A_2|^2-|A_1|^2+q)u_1\overline{w}dx\\ &=-\int_\Omega (\Delta_{A_2}u+q_2u)\overline{w}dx\\ &=-\int_\Omega \Delta u\overline{w}dx-2i\int_\Omega (A_2\cdot\nabla u)\overline{w}dx-i\int_\Omega \textrm{div}(A_2)u\overline{ w}dx+\int_\Omega (|A_2|^2-q_2)u\overline{w}dx\\ &=-\int_\Omega \Delta u\overline{w}dx-i\int_\Omega (A_2\cdot\nabla u)\overline{w}dx+i\int_\Omega A_2u\overline{ \nabla w}dx+\int_\Omega (|A_2|^2-q_2)u\overline{w}dx\\ &=-\int_\Omega \Delta u\overline{w}dx+\int_\Omega (\nabla+iA_2)u\cdot \overline{(\nabla+iA_2)w}dx-\int_\Omega \nabla u\cdot\overline{\nabla w}dx-\int_\Omega q_2u\overline{w}dx\end{aligned}$$ and it follows $$\int_{\partial\Omega}\partial_\nu u\overline{w}d\sigma(x)=\int_\Omega \Delta u\overline{w}dx+\int_\Omega \nabla u\cdot\overline{\nabla w}dx=0.$$ Allowing $w\in H^2(\Omega)$, satisfying supp$( w_{|\partial\Omega})\subset V$, to be arbitrary, we deduce $\partial_\nu u_{| V}=0$. 
In the same way, multiplying \eqref{eq4} by $\overline{u_2}$ and then applying \eqref{c1d} and the Green formula, we get \[\int_\Omega [2iA\cdot \nabla u_1\overline{u_2}+(q+i\textrm{div}(A)+|A_2|^2-|A_1|^2)u_1\overline{u_2}]dx=\int_{\partial\Omega} \partial_\nu u \overline{u_2}d\sigma(x).\] Moreover, we have $\partial_\nu u_{| V}=0$ and we get \begin{equation}\label{t6e} \int_\Omega [2iA\cdot \nabla u_1\overline{u_2}+(q+i\textrm{div}(A)+|A_2|^2-|A_1|^2)u_1\overline{u_2}]dx=\int_{\partial\Omega\setminus V}\partial_\nu u\overline{u_2}d\sigma(x).\end{equation} In view of \eqref{t6f}, we have \begin{equation} \label{tatita}\norm{w_{2,\rho}}_{L^2(\partial \Omega)}\leq C\norm{w_{2,\rho}}_{H^{1}(\Omega)}^{\frac{1}{2}}\norm{w_{2,\rho}}_{L^2(\Omega)}^{\frac{1}{2}}\leq C \rho^{\frac{1}{2}}\norm{A_{2}-A_{2,\rho}}_{L^2({\mathbb R}^3)^3}.\end{equation} Here we use the estimate $$\norm{f}_{L^2(\partial\Omega)}\leq C\norm{f}_{H^{1}(\Omega)}^{\frac{1}{2}}\norm{f}_{L^2(\Omega)}^{\frac{1}{2}},\quad f\in H^1(\Omega),$$ which can be proved, in a similar way to bounded domains, by using local coordinates associated with $\partial\omega$ in order to transform, locally with respect to $x'\in\overline{\omega}$ for $x=(x',x_3)\in\overline{\omega}\times{\mathbb R}=\overline{\Omega}$, $\overline{\Omega}$ into the half space. 
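For orientation, the one-dimensional model of this interpolation inequality is elementary (the following is only a model computation, under the simplifying assumption that $f$ is smooth with compact support in $[0,+\infty)$, and not the estimate used above): by the Cauchy-Schwarz inequality, $$|f(0)|^2=-\int_0^{+\infty}\frac{d}{dx}|f(x)|^2dx\leq2\int_0^{+\infty}|f(x)||f'(x)|dx\leq2\norm{f}_{H^1(0,+\infty)}\norm{f}_{L^2(0,+\infty)},$$ and the estimate on $\partial\Omega$ is obtained by applying such a computation in the direction normal to the boundary after flattening it in local coordinates.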
Applying \eqref{tatita} and the Cauchy-Schwarz inequality, we obtain \[\begin{aligned}\abs{\int_{\partial\Omega\setminus V}\partial_\nu u\overline{u_2}d\sigma(x)}&\leq\int_{\mathbb R}\int_{{\partial\omega}_{+,\varepsilon,\theta}}\abs{\partial_\nu ue^{-\rho x'\cdot \theta}\left(\psi\left(\rho^{-\frac{1}{4}}x_3\right)b_{2,\rho}e^{i\rho x\cdot\eta}+w_{2,\rho}(x)\right)} d\sigma(x')dx_3 \\ \ &\leq C\left(\int_{{\partial\omega}_{+,\varepsilon,\theta}\times {\mathbb R}}\abs{e^{-\rho x'\cdot \theta}\partial_\nu u}^2d\sigma(x)\right)^{\frac{1}{2}}\left(\norm{\psi\left(\rho^{-\frac{1}{4}}\cdot\right)}_{L^2({\mathbb R})}+\norm{w_{2,\rho}}_{L^2(\partial \Omega)}\right)\\ \ &\leq C\rho^{\frac{1}{2}}\norm{A_{2}-A_{2,\rho}}_{L^2({\mathbb R}^3)^3}\left(\int_{{\partial\omega}_{+,\varepsilon,\theta}\times {\mathbb R}}\abs{e^{-\rho x'\cdot \theta}\partial_\nu u}^2d\sigma(x)\right)^{\frac{1}{2}}\end{aligned}\] for some $C$ independent of $\rho$. This estimate and the Carleman estimate \eqref{c2a} imply \begin{eqnarray}&&\abs{\int_\Omega [2iA\cdot \nabla u_1\overline{u_2}+(q+i\textrm{div}(A)+|A_2|^2-|A_1|^2)u_1\overline{u_2}]dx}^2\cr &&\leq C\rho\norm{A_{2}-A_{2,\rho}}_{L^2({\mathbb R}^3)^3}^2\int_{{\partial\omega}_{+,\varepsilon,\theta}\times {\mathbb R}}\abs{e^{-\rho x'\cdot \theta}\partial_\nu u}^2d\sigma(x)\cr &&\leq \varepsilon^{-1}C\rho\norm{A_{2}-A_{2,\rho}}_{L^2({\mathbb R}^3)^3}^2\int_{{\partial\omega}_{+,\theta}\times {\mathbb R}}\abs{e^{-\rho x'\cdot \theta}\partial_\nu u}^2|\nu \cdot\theta| d\sigma(x)\cr &&\leq \varepsilon^{-1}C\norm{A_{2}-A_{2,\rho}}_{L^2({\mathbb R}^3)^3}^2\left(\int_\Omega\abs{ e^{-\rho x'\cdot \theta}(-\Delta_{A_2} +q_2)u}^2dx\right)\cr &&\leq \varepsilon^{-1}C\norm{A_{2}-A_{2,\rho}}_{L^2({\mathbb R}^3)^3}^2\left(\int_\Omega\abs{ e^{-\rho x'\cdot \theta}[2iA\cdot \nabla u_1+(q+i\textrm{div}(A)+|A_2|^2-|A_1|^2)u_1]}^2dx\right)\cr &&\leq \varepsilon^{-1}C\rho^2\norm{A_{2}-A_{2,\rho}}_{L^2({\mathbb R}^3)^3}^2\norm{A}_{L^2({\mathbb R}^3)}^2,\end{eqnarray}
where $C>0$ is a constant independent of $\rho$. Therefore, we have $$\abs{\int_\Omega [2iA\cdot \nabla u_1\overline{u_2}+(q+i\textrm{div}(A)+|A_2|^2-|A_1|^2)u_1\overline{u_2}]dx}\leq C\rho\norm{A_{2}-A_{2,\rho}}_{L^2({\mathbb R}^3)^3}$$ and, multiplying this inequality by $\rho^{-1}$ and sending $\rho\to+\infty$, we obtain from \eqref{a1a} that $$\lim_{\rho\to+\infty}\rho^{-1}\abs{\int_\Omega [2iA\cdot \nabla u_1\overline{u_2}+(q+i\textrm{div}(A)+|A_2|^2-|A_1|^2)u_1\overline{u_2}]dx}=0.$$ Combining this identity with the arguments of Section 4, we deduce that \begin{equation} \label{t6g}\xi_k \mathcal F(a_j)(\xi)-\xi_j \mathcal F(a_k)(\xi)=0,\quad 1\leq j<k\leq3\end{equation} for all $(\xi',\xi_3)\in{\mathbb R}^2\times{\mathbb R}$ such that $\xi'\in\theta^\bot\setminus\{0\}$, $\theta\in\{y\in\mathbb S^{1}:|y-\theta_0|\leq\varepsilon\} $, $\xi_3\neq0$. Since $A\in L^1({\mathbb R}^3)$, we can extend by continuity the identity \eqref{t6g} to all $(\xi',\xi_3)\in{\mathbb R}^2\times{\mathbb R}$ such that $\xi'\in\theta^\bot$, $\theta\in\{y\in\mathbb S^{1}:|y-\theta_0|\leq\varepsilon\} $, $\xi_3\in{\mathbb R}$. Consider the Fourier transform in $x'$ and $x_3$ given, for $f\in L^1({\mathbb R}^3)$, by $$\mathcal F'(f)(\xi',x_3)=(2\pi)^{-1}\int_{{\mathbb R}^2}f(x',x_3)e^{-ix'\cdot\xi'}dx',\quad \mathcal F_{x_3}(f)(x',\xi_3)=(2\pi)^{-\frac{1}{2}}\int_{{\mathbb R}}f(x',x_3)e^{-ix_3\xi_3}dx_3.$$ It is clear that $\mathcal F A=\mathcal F'[\mathcal F_{x_3}A]$ and, using the fact that, for all $\xi_3\in{\mathbb R}$, $x'\mapsto \mathcal F_{x_3}A(x',\xi_3)$ is supported in the compact set $\overline{\omega}$, we deduce that, for all $j=1,2,3$, $\xi'\mapsto \mathcal F(a_j)(\xi',\xi_3)$ is real analytic. Therefore, for all $\xi_3\in{\mathbb R}$, the function $\xi'\mapsto\xi_k \mathcal F(a_j)(\xi)-\xi_j \mathcal F(a_k)(\xi)$ is real analytic and it follows that the identity \eqref{t6g} holds true for all $\xi\in{\mathbb R}^3$. Since $\mathcal F(\partial_ja_k-\partial_ka_j)(\xi)=i(\xi_j\mathcal F(a_k)(\xi)-\xi_k\mathcal F(a_j)(\xi))=0$ and the Fourier transform is injective on $L^1({\mathbb R}^3)$, we deduce that $dA=0$, that is, $dA_1=dA_2$.
Then, in a similar way to Section 4, we can apply the gauge invariance to get $$ \mathcal D_{A_1,q_1, V}=\mathcal D_{A_1,q_2,V}.$$ Repeating the above argument (see also \cite[Section 5]{Ki4}) we deduce that $$\lim_{\rho\to+\infty}\int_{{\mathbb R}^3}\chi^2(\rho^{-\frac{1}{4}} x_3)q(x)e^{-i\xi\cdot x}dx=0,$$ for all $(\xi',\xi_3)\in{\mathbb R}^2\times{\mathbb R}$ such that $\xi'\in\theta^\bot\setminus\{0\}$, $\theta\in\{y\in\mathbb S^{1}:|y-\theta_0|\leq\varepsilon\} $, $\xi_3\neq0$. Then, using the fact that $q\in L^1({\mathbb R}^3)$, an application of the Lebesgue dominated convergence theorem implies that $\mathcal F(q)(\xi)=0$, for all $(\xi',\xi_3)\in{\mathbb R}^2\times{\mathbb R}$ such that $\xi'\in\theta^\bot$, $\theta\in\{y\in\mathbb S^{1}:|y-\theta_0|\leq\varepsilon\} $, $\xi_3\in{\mathbb R}$. Then, using the fact that $q\in L^1({\mathbb R}^3)$ and supp$(q)\subset\overline{\omega}\times{\mathbb R}$, we can repeat the above arguments in order to deduce that $q=0$ and $q_1=q_2$. This completes the proof of Theorem \ref{t6}. \section{Extension to higher dimension} In this section we discuss some possible extensions of our results to a class of domains $\Omega\subset{\mathbb R}^n$, $n\geq4$. For this purpose, let $n\geq4$ and consider $n_1,n_2\in\mathbb N$ such that $n_1+n_2=n$ and $n_1\geq3$. We also fix a bounded $\mathcal C^2$ open set $\omega$ of ${\mathbb R}^{n_1}$. Then our claim can be stated as follows: all the results of the present paper can be extended to any open and unbounded set $\Omega$ of ${\mathbb R}^n$ satisfying \begin{equation} \label{op}\Omega\subset \Omega_2:=\omega\times{\mathbb R}^{n_2}.\end{equation} Let us explain why our results can also be extended to unbounded domains $\Omega$ satisfying \eqref{op}. The main ingredient is the construction of suitable CGO solutions for our problem. Once this is done, one can easily complete the proof of the uniqueness result by repeating our argument.
Since here we know that $\omega$ is a bounded open set of ${\mathbb R}^{n_1}$ with $n_1\geq3$, instead of the construction of the present paper we will consider CGO solutions constructed by means of a projection argument inspired by the analysis of \cite{BKS1,Ki1}. More precisely, we fix $\xi=(\xi',\xi'')\in{\mathbb R}^{n_1}\times{\mathbb R}^{n_2}$ and we consider $\eta,\theta\in\mathbb S^{n_1-1}$ such that $\eta\cdot\theta=\eta\cdot\xi'=\theta\cdot\xi'=0$. For all $r>0$, we denote by $B_r'$ the ball of ${\mathbb R}^{n_1}$ of center zero and radius $r$, and we also fix $R:=\underset{x'\in\overline{\omega}}{\sup}|x'|$, $R_1:=2\sqrt{2}(R+2)$, $\tilde{\theta}=(\theta,0)\in{\mathbb R}^n$ and $\tilde{\eta}=(\eta,0)\in{\mathbb R}^n$. We set $\chi\in\mathcal C^\infty_0({\mathbb R}^{n})$ such that $\chi\geq0$, $\int_{{\mathbb R}^{n}}\chi(x)dx=1$, supp$(\chi)\subset \{x\in{\mathbb R}^n:\ |x|<1\}$, and we define $\chi_\rho$ by $\chi_\rho(x)=\rho^{{n\over 4}}\chi(\rho^{{1\over4}}x)$. Then, for $j=1,2$, we fix $$A_{j,\rho}(x):=\int_{{\mathbb R}^{n}}\chi_\rho(x-y)A_j(y)dy.$$ In a similar way to Section 3.1, one can check that for all $x=(x',x'')\in B_{R+1}'\times{\mathbb R}^{n_2}$ the function $$(s_1,s_2)\mapsto A_{j,\rho}(s_1\tilde{\theta}+s_2\tilde{\eta}+x)$$ will be supported in $\{z\in{\mathbb R}^2:\ |z|<R_1\}$. Thus, we can define $$ \begin{aligned}&\Phi_{1,\rho}(x):=\frac{-i}{2\pi} \int_{{\mathbb R}^2} \frac{(\tilde{\theta}+i\tilde{\eta})\cdot A_{1,\rho}(x-s_1\tilde{\theta}-s_2\tilde{\eta})}{s_1+is_2}ds_1ds_2,\\ &\Phi_{2,\rho}(x):=\frac{-i}{2\pi} \int_{{\mathbb R}^2} \frac{(-\tilde{\theta}+i\tilde{\eta})\cdot A_{2,\rho}(x+s_1\tilde{\theta}-s_2\tilde{\eta})}{s_1+is_2}ds_1ds_2.\end{aligned}$$ Fixing $$ b_{1,\rho}(x)=e^{\Phi_{1,\rho}(x)},\quad b_{2,\rho}(x)=e^{\Phi_{2,\rho}(x)},$$ we will obtain functions satisfying properties similar to those described in Section 3.1. Now let us fix a real-valued function $\psi\in\mathcal C^\infty_0({\mathbb R}^{n_2})$.
Applying the results of Section 3.2, which can be extended without any difficulty to this setting, one can construct solutions $u_j\in H^1(\Omega_2)$, $j=1,2$, of $\Delta_{A_j}u_j+q_ju_j=0$ on $\Omega_2$ of the form $$u_1(x',x'')=e^{\rho \theta\cdot x'}\left(\psi(x'')b_{1,\rho}(x',x'')e^{i\rho x'\cdot\eta-i\xi\cdot x}+w_{1,\rho}(x',x'')\right),\quad x'\in\omega,\ x''\in{\mathbb R}^{n_2},$$ $$u_2(x',x'')=e^{-\rho \theta\cdot x'}\left(\psi(x'')b_{2,\rho}(x',x'')e^{i\rho x'\cdot\eta}+w_{2,\rho}(x',x'')\right),\quad x'\in\omega,\ x''\in{\mathbb R}^{n_2},$$ with $w_{j,\rho}$ satisfying the decay property $$\lim_{\rho\to+\infty}(\rho^{-1}\norm{w_{j,\rho}}_{H^1(\Omega_2)}+\norm{w_{j,\rho}}_{L^2(\Omega_2)})=0.$$ After that, allowing the cut-off function $\psi\in\mathcal C^\infty_0({\mathbb R}^{n_2})$ to be arbitrary and repeating the arguments of Section 4, we can prove that all the results of this paper remain true when $\Omega\subset{\mathbb R}^n$ satisfies \eqref{op}. \section*{Acknowledgments} The author would like to thank Pedro Caro for fruitful discussions about recovery of bounded magnetic potentials. The author is grateful to the anonymous referees for their careful reading and their suggestions, which helped to improve the paper. This work was partially supported by the French National Research Agency ANR (project MultiOnde) grant ANR-17-CE40-0029.
\section{Introduction} The irrationality of the value of the Riemann $\zeta$-function at odd integers is a long-standing open problem. For even integer arguments, it was famously shown by Euler \cite{euler} that \begin{equation*} \zeta(2n) = (-1)^{n-1} \frac{2^{2n-1} \pi^{2n} B_{2n}}{(2n)!}, \end{equation*} where $B_{2n}$ is the $2n$'th Bernoulli number, which is rational. Lindemann \cite{lindemann} proved in 1882 that $\pi$ is transcendental, so it immediately follows that $\zeta(2n)$ is irrational for each $n \in \mathbb{N}$. By contrast, the value of $\zeta$ at odd integers largely remains a mystery, and not many results were known until 1979, when Ap\'ery \cite{apery} published a proof that $\zeta(3)$ is irrational. Rivoal \cite{rivoal} subsequently proved that infinitely many odd $\zeta$-values are irrational; and Zudilin \cite{zudilin} proved that at least one of $\zeta(5), \zeta(7), \zeta(9)$ and $\zeta(11)$ is irrational. We are not able to resolve the question of the irrationality of odd $\zeta$-values, but we will present a related result. To motivate our results, let $k \ge 2$ be an integer. We first observe that \begin{equation*} \zeta(k) = \sum_{n=1}^\infty \frac{1}{n^k} = \sum_{n=1}^\infty \frac{((n-1)!)^k}{(n!)^k} = \sum_{n=1}^\infty \frac{[((n-1)!)^k]}{(n!)^k}, \end{equation*} where $[x]$ denotes the integer part of $x$, so that the last equality is trivial. We will modify this last expression by putting a real parameter inside the square brackets in the numerator, that is, we consider the series \begin{equation} \label{eq:modified_zeta} \sum_{n=1}^\infty \frac{[((n-1)!)^k x]}{(n!)^k}. \end{equation} Below, we will prove that for any integer $k \ge 2$, the sum of this series is irrational for $\mu$-almost all $x$, whenever $\mu$ is a Radon measure with positive Fourier dimension (see the definition below). In particular, this holds for almost all $x$ with respect to Lebesgue measure.
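As a purely numerical illustration (not used in the proofs; the function name and the choice of exact rational arithmetic are ours), one can evaluate partial sums of the series \eqref{eq:modified_zeta} for a given rational $x$. At $x=1$ the integer part is exact and one simply recovers the partial sums of $\zeta(k)$:

```python
from fractions import Fraction
from math import factorial, floor, pi

def perturbed_zeta(k, x, terms=200):
    """Partial sum of sum_n [((n-1)!)^k x] / (n!)^k, with x an exact Fraction."""
    s = Fraction(0)
    for n in range(1, terms + 1):
        integer_part = floor(Fraction(factorial(n - 1) ** k) * x)  # [((n-1)!)^k x]
        s += Fraction(integer_part, factorial(n) ** k)
    return s

# At x = 1 each numerator is exactly ((n-1)!)^k, so this is a partial sum of zeta(2),
# approaching pi^2 / 6 = 1.6449... as the number of terms grows.
print(float(perturbed_zeta(2, Fraction(1))))
```

For irrational $x$ the integer part genuinely perturbs the terms, and it is this perturbation that the equidistribution argument below exploits.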
Appealing to results of Kaufman \cite{kaufman1} and \cite{kaufman2}, we immediately see that the result also holds for almost all badly approximable numbers in an appropriate sense, as well as for almost all numbers with irrationality measure greater than some prescribed $v > 2$. More details will follow below. In addition to the perturbed $\zeta$-values of \eqref{eq:modified_zeta}, we are able to modify other famous series in a corresponding way. Vacca's formula for the Euler--Mascheroni constant $\gamma$ in \cite{vacca} states that \begin{equation*} \gamma = \sum_{n=1}^\infty (-1)^n \frac{[\log_2 n]}{n}. \end{equation*} We turn this into a factorial series as before to obtain a family of series depending on a real parameter $x$, \begin{equation} \label{modified_euler} \sum_{n=1}^\infty (-1)^n \frac{[(n-1)! [\log_2 n] x]}{n!}. \end{equation} Again, the sums of these series are irrational almost surely with respect to any measure satisfying the above properties. It would be natural to suspect that the full set of perturbed $\zeta$-values at integers together with the perturbed Euler--Mascheroni constant will be linearly independent over $\mathbb{Q}$ almost surely with respect to any such measure. We are not able to prove this for the particular perturbation of the series given above, although we are able to establish linear independence for the set of series \begin{multline} \label{eq:lin_ind} \{\sum_{n=1}^\infty \frac{[((n-1)!)^Kn^{K-j} x]}{(n!)^K}; j\in\{ 2,\dots ,K\}\}\\ \cup \{\sum_{n=1}^\infty (-1)^n \frac{[((n-1)!)^Kn^{K-1} x[\log_2 n]]}{(n!)^K}, 1\}, \end{multline} where $K \in \mathbb{N}$. These series specialise to the $\zeta$-values and the Euler--Mascheroni constant respectively, if we let $x=1$. Note, however, that in this case the particular form of the series considered depends on their number.
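The unperturbed Vacca series above is also easy to test numerically (an illustration only; the function name is ours, and we use that $[\log_2 n]$ equals n.bit_length() - 1 for a positive integer $n$):

```python
def vacca_gamma(terms=2**20):
    """Partial sum of Vacca's series  gamma = sum_{n>=1} (-1)^n [log2 n] / n."""
    s = 0.0
    for n in range(1, terms + 1):
        s += (-1) ** n * (n.bit_length() - 1) / n  # n.bit_length() - 1 == [log2 n], exactly
    return s

print(vacca_gamma())  # slowly approaches the Euler--Mascheroni constant 0.57721...
```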
Finally, to illustrate the versatility of the method, we make similar modifications to two other famous series, whose irrationality is at present unknown, namely the series \begin{equation*} \sum_{n=1}^{\infty} \frac{1}{n^n} \quad \text{ and } \quad \sum_{n=1}^\infty \frac{1}{n! + 1}. \end{equation*} The first series is sometimes known as Sophomore's Dream, due to the seemingly `too-good-to-be-true' identity $$ \sum_{n=1}^{\infty} \frac{1}{n^n} = \int_0^1 x^{-x} dx, $$ discovered by J. Bernoulli in 1697. The first terms can be found in \cite{s}. The second one is due to Erd\H{o}s \cite{Erdos2}. In fact, he asked if for any integer $t$ the sum of the series $\sum_{n=1, n!\not= -t}^\infty \frac{1}{n! + t}$ is irrational. With these two series, the perturbed variants become \begin{equation} \label{eq:modified_series} \sum_{n=1}^\infty \frac{\left[\prod_{j=1}^{n-1} j^j x\right]}{\prod_{j=1}^{n} j^j} \quad \text{ and } \quad \sum_{n=1}^\infty \frac{\left[\prod_{j=1}^{n-1} (j! +1) x\right]}{\prod_{j=1}^{n} (j! +1)}. \end{equation} It is worth noting that our results are in the first instance metrical; the irrationality or linear independence is established for almost all real parameters in a set. However, our method also gives rise to an approach to proving the irrationality of the series in question for a particular, prescribed value of the parameter. Indeed, as the main idea of the proof is to establish the uniform distribution modulo $1$ of a certain sequence, we need only establish this in the particular case, as opposed to the `almost all' case, and in fact we can prove irrationality with a significantly weaker property than uniform distribution modulo $1$. In the final section of the paper, we give some seemingly new irrationality criteria for the original sequences. Our method is in the spirit of Schlage-Puchta \cite{p}, who proved the irrationality of $\sum_{n=1}^\infty \frac{[n^\alpha]+1}{n!}$ for all real $\alpha$.
This result was also proved by Han\v cl and Tijdeman \cite{ht2} by a different method. For more related results, see Han\v cl and Tijdeman \cite{ht1} and \cite{ht3}-\cite{ht5}. Throughout the paper, we let $\mathbb N$, $\mathbb Q$ and $\mathbb R$ denote the sets of all positive integers, rational numbers and real numbers, respectively. For a real number $x$, we denote by $[x]$, $\{ x\}$ and $\left\Vert x \right\Vert$ the integer part of $x$, the fractional part of $x$ and the distance from $x$ to the nearest integer, respectively. \section{A result on uniform distribution modulo $1$} In this section, we provide the first ingredient to our irrationality results. The ideas of the proof are found in Haynes, Jensen and Kristensen \cite{hjk}, where the method is applied to a different problem and stated in a different form. We state the result in the form needed here. We first need some notation. Let $\mu$ be a Radon measure on $\mathbb{R}$. The Fourier transform of $\mu$ is defined as \begin{equation*} \hat{\mu}(t) = \int_{-\infty}^\infty e^{-2\pi i xt}d\mu(x). \end{equation*} The behaviour of the Fourier transform of a measure at infinity is strongly related to the geometry of its support. Indeed, if we define the Fourier dimension of a measure $\mu$ to be \begin{equation*} \dim_F(\mu) = \sup\{\eta \ge 0 : \vert \hat{\mu}(t) \vert \ll (1+\vert t \vert)^{-\eta/2}\}, \end{equation*} the Fourier dimension of $\mu$ always gives a lower bound on the Hausdorff dimension of the support of $\mu$. For our purposes, it is only relevant that the Fourier dimension of the measures considered is positive. Examples of this of course include the Lebesgue measure on an interval, but other arithmetically interesting examples exist. Kaufman \cite{kaufman1} proved that the set \begin{equation*} F_M = \{x \in [0,1) \setminus \mathbb{Q} : a_n(x) \le M \text{ for all } n \in \mathbb{N}\}, \end{equation*} supports such a measure whenever $M \ge 3$.
Here, $a_n(x)$ is the $n$'th partial quotient in the simple continued fraction expansion of $x$. The result was extended to $M \ge 2$ by Queff\'elec and Ramar\'e \cite{qr}. A further example, also due to Kaufman, is the set of numbers with a lower bound on their irrationality measure. For a real number $x$, let \begin{equation*} w(x) = \sup\{w > 0 : \vert x - p/q \vert < q^{-w} \text{ for infinitely many } p/q \in \mathbb{Q}\}. \end{equation*} From Dirichlet's theorem in Diophantine approximation, $w(x) \ge 2$ for all irrational numbers $x$. Let $v \ge 2$. Kaufman \cite{kaufman2} constructed a measure $\mu_v$ on the set \begin{equation*} W(v) = \{x \in \mathbb{R} : w(x) \ge v\}, \end{equation*} such that $\dim_F \mu_v = \frac{2}{v}$. This coincides with the Hausdorff dimension of the set found by Jarn\'ik \cite{jarnik} and Besicovitch \cite{besicovitch}, and so is best possible. The following result of Haynes, Jensen and Kristensen \cite{hjk} is stated in terms of the Kaufman measure on $F_M$, but the proof only requires the Fourier dimension of the measure to be positive. We state Corollary 7 of that paper for general measures. \begin{Theorem} \label{thm:UD} Let $\mu$ be a Radon measure on $\mathbb{R}$ with $\dim_F \mu > 0$ and let $(a_n)$ be a sequence of real numbers such that for some $c > 0$, $\vert a_{k} - a_j \vert \ge c$ for all $k,j \in \mathbb{N}$ with $k \neq j$. Then, $(a_n x)$ is uniformly distributed modulo $1$ for $\mu$-almost all $x \in \mathbb{R}$. \end{Theorem} We give a few words on the relation between the above statement and that of \cite{hjk}. In \cite{hjk}, the sequence $(a_n)$ is assumed to be composed of integers. This will not be the case for our sequences below, but in order for the proof of \cite{hjk} to work, we only need the sequence to take its values in a discrete subset of the real numbers, i.e. a set with only isolated points. This is guaranteed by the assumption of uniformly lower bounded gaps.
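As a toy illustration of the conclusion of Theorem \ref{thm:UD} (with the simplest admissible sequence $a_n = n$, whose gaps are bounded below by $c = 1$, and the concrete irrational $x=\sqrt{2}$, rather than a point typical for one of the Kaufman measures), uniform distribution modulo $1$ can be probed by counting fractional parts in a fixed subinterval:

```python
from math import sqrt

def proportion_in_interval(x, terms, a=0.0, b=0.5):
    """Fraction of 1 <= n <= terms with {n x} in [a, b); equidistribution predicts b - a."""
    return sum(a <= (n * x) % 1.0 < b for n in range(1, terms + 1)) / terms

print(proportion_in_interval(sqrt(2), 100_000))  # close to the interval length 0.5
```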
Also, in \cite{hjk}, a bound on the discrepancy of the sequence $(a_n x)$ is obtained. This gives a quantitative variant of uniform distribution modulo $1$, which we will not be needing here. Note however that the faster the sequence $(a_n)$ increases, the better the discrepancy bound. It is curious to remark how Theorem \ref{thm:UD} yields a short proof of a result usually attributed to Kahane and Salem \cite{ks}, stating that the ternary Cantor set does not support a Radon measure with positive Fourier dimension. Indeed, suppose such a measure $\mu$ existed. Theorem \ref{thm:UD} applied with $a_n = 3^n$ would imply that for $\mu$-almost all numbers $x$ in the ternary Cantor set, $(3^n x)$ would be uniformly distributed modulo $1$, which is the same as saying that almost all numbers in the ternary Cantor set are normal to base $3$. Clearly this is not the case, which completes the proof of the result of Kahane and Salem. \section{Metrical Results} We proceed with the announced application of uniform distribution to irrationality. The idea of using uniform distribution in proofs of irrationality appears in \cite{p}. Our approach is inspired by this paper. We begin with the announced result on linear independence of the set given in \eqref{eq:lin_ind}. \begin{Theorem} \label{HanKris1.t1} Let $K$ be a positive integer and let $\mu$ be a Radon measure on $\mathbb{R}$ with positive Fourier dimension. For $\mu$-almost all numbers $x$ the set $\{\sum_{n=1}^\infty \frac{[((n-1)!)^Kn^{K-j} x]}{(n!)^K} : j\in\{ 2,\dots ,K\}\}\cup \{\sum_{n=1}^\infty (-1)^n \frac{[((n-1)!)^Kn^{K-1} x[\log_2 n]]}{(n!)^K},\ 1\}$ consists of numbers that are linearly independent over the rationals. \end{Theorem} \begin{proof} Let $x$ be a real number.
Suppose to the contrary that these numbers are linearly dependent over the rationals. Then, there are $A_0,\dots ,A_K\in \mathbb Z$, not all equal to $0$, such that \begin{multline} \label{HanKris1.1} \sum_{j=2}^KA_j\sum_{n=1}^\infty\frac{[((n-1)!)^Kn^{K-j} x]}{(n!)^K}\\ +A_1\sum_{n=1}^\infty (-1)^n \frac{[((n-1)!)^Kn^{K-1} x[\log_2 n]]}{(n!)^K}+A_0=0. \end{multline} Let $N\in\mathbb Z^+$. Multiplying (\ref{HanKris1.1}) by $(N!)^K$, we obtain that \begin{multline*} \sum_{j=2}^KA_j\sum_{n=N+1}^\infty\frac{[((n-1)!)^Kn^{K-j} x]}{((N+1)\dots n)^K}\\ +A_1\sum_{n=N+1}^\infty (-1)^n \frac{[((n-1)!)^Kn^{K-1} x[\log_2 n]]}{((N+1)\dots n)^K}+B=0, \end{multline*} where $B$ is a suitable integer constant which depends on $N$. The series in this expression converge, and both $$ \sum_{j=2}^KA_j\sum_{n=N^{N(K+1)}+1}^\infty\frac{[((n-1)!)^Kn^{K-j} x]}{((N+1)\dots n)^K} = O\left(\frac{1}{N}\right) $$ and $$ A_1\sum_{n=N^{N(K+1)}+1}^\infty (-1)^n \frac{[((n-1)!)^Kn^{K-1} x[\log_2 n]]}{((N+1)\dots n)^K}= O\left(\frac{1}{N}\right). $$ We remove these tails at the cost of introducing an error term of order $O(\frac{1}{N})$. Now, note that $[y] = y - \{y\} = y + O(1)$ and apply this to remove the integer parts in the numerators at the cost of a very small error, which is absorbed in the $O(\frac{1}{N})$. The upshot is that \begin{equation} \begin{split} \sum_{j=2}^KA_j\sum_{n=N+1}^{N^{N(K+1)}}\frac{((n-1)!)^Kn^{K-j} x}{((N+1)\dots n)^K}&+A_1\sum_{n=N+1}^{N^{N(K+1)}} (-1)^n \frac{((n-1)!)^Kn^{K-1} x[\log_2 n]}{((N+1)\dots n)^K}\\ &+ \label{eq:limit_zero} B+O\left(\frac 1N\right)=0. \end{split} \end{equation} As $N$ tends to infinity, the error term vanishes, and as $B$ is an integer, the fractional part of the first expression must converge to $0$.
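Before proceeding, we record the simplification that makes these sums tractable: since $(N+1)(N+2)\dots n = n!/N!$, each summand of the first group collapses to a scaled term of a partial sum of $\zeta(j)$,
\begin{equation*}
\frac{((n-1)!)^Kn^{K-j}}{((N+1)\dots n)^K}
= (N!)^K\,\frac{((n-1)!)^K n^{K-j}}{(n!)^K}
= (N!)^K\,\frac{n^{K-j}}{n^{K}}
= \frac{(N!)^K}{n^{j}}.
\end{equation*}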
However, we will see that by Theorem \ref{thm:UD}, the sequence \begin{alignat*}{2} (f_x(N)) =& \Bigg(\sum_{j=2}^KA_j\sum_{n=N+1}^{N^{N(K+1)}}\frac{((n-1)!)^Kn^{K-j} x}{((N+1)\dots n)^K}\\ &+A_1\sum_{n=N+1}^{N^{N(K+1)}} (-1)^n \frac{((n-1)!)^Kn^{K-1} x[\log_2 n]}{((N+1)\dots n)^K}\Bigg) \end{alignat*} is uniformly distributed modulo $1$ for $\mu$-almost all $x$. This will complete the proof, as it immediately implies that the expression in \eqref{eq:limit_zero} cannot tend to an integer. To apply Theorem \ref{thm:UD}, we need to check that the sequence considered satisfies the appropriate conditions. Namely, we need to check that the sequence $(a_N)$ given by \begin{multline*} a_N =\sum_{j=2}^KA_j\sum_{n=N+1}^{N^{N(K+1)}}\frac{((n-1)!)^Kn^{K-j} }{((N+1)\dots n)^K}\\ +A_1\sum_{n=N+1}^{N^{N(K+1)}} (-1)^n \frac{((n-1)!)^Kn^{K-1} [\log_2 n]}{((N+1)\dots n)^K} \end{multline*} takes its values in a discrete subset of the reals. This is however simple. Each of the interior sums in the first term is equal to \begin{equation} \label{eq:zetaUD} (N!)^K \sum_{n=N+1}^{N^{N(K+1)}}\frac{1}{n^j}, \end{equation} which, up to a factor polynomial in $N$, is of size $(N!)^K$; hence $a_N$ grows in absolute value like a power of $N!$. Clearly, such a sequence has the desired property. If $K=1$, and only the perturbed Euler--Mascheroni constant is considered, we similarly obtain a very rapid growth in the $a_N$. \end{proof} For the first perturbations of the series expansion of $\zeta$, we have the following almost sure irrationality statement. \begin{Theorem} Let $\mu$ be a Radon measure on $\mathbb{R}$ with positive Fourier dimension and let $\alpha \ge 2$ be an integer. For $\mu$-almost all numbers $x$, both the sum $\sum_{n=1}^\infty \frac{[((n-1)!)^\alpha x]}{(n!)^\alpha}$ and the sum $\sum_{n=1}^\infty (-1)^n \frac{[(n-1)! x[\log_2 n]]}{n!}$ are irrational.
\end{Theorem} \begin{proof} Fix one of the series, $\sum_{n=1}^\infty \frac{[((n-1)!)^\alpha x]}{(n!)^\alpha}$ say. Suppose to the contrary that the series is rational and pick $p,q \in \mathbb{N}$ such that \begin{equation} \label{eq:zeta_1} q \sum_{n=1}^\infty \frac{[((n-1)!)^\alpha x]}{(n!)^\alpha} = p. \end{equation} Let $N \in \mathbb{N}$ and multiply \eqref{eq:zeta_1} by $(N!)^\alpha$. We then find that $$ q \sum_{n=N+1}^\infty \frac{[((n-1)!)^\alpha x]}{((N+1)\dots n)^\alpha} = B $$ for some $B \in \mathbb{Z}$. We truncate the series at $(N!)^3$. Estimating the remainder by an integral, we easily find that \begin{multline*} q \sum_{n=(N!)^3+1}^\infty \frac{[((n-1)!)^\alpha x]}{((N+1)\dots n)^\alpha} \le qx (N!)^\alpha \sum_{n=(N!)^3+1}^\infty \frac{1}{n^\alpha} \\ \ll qx (N!)^{\alpha - 3 \alpha + 3} = O\left(\frac1N\right). \end{multline*} We now apply the property that $[y] = y - \{y\} = y + O(1)$ to remove the integer part in what remains, noting that $$ \sum_{n=N+1}^{(N!)^3} \frac{1}{((N+1)\dots n)^\alpha} = O\left(\frac1N\right). $$ The upshot is that \begin{equation} \label{eq:zeta_2} \left\Vert x q \sum_{n=N+1}^{(N!)^3} \frac{((n-1)!)^\alpha}{((N+1)\dots n)^\alpha} \right\Vert = O\left(\frac1N\right). \end{equation} But clearly the sequence $$ \left(q \sum_{n=N+1}^{(N!)^3} \frac{((n-1)!)^\alpha}{((N+1)\dots n)^\alpha}\right) $$ satisfies the assumptions of Theorem \ref{thm:UD}, so that the sequence inside the norm in \eqref{eq:zeta_2} is uniformly distributed modulo $1$ for $\mu$-almost all $x$. In particular, its distance to the nearest integer is close to $\frac{1}{2}$ infinitely often, which is a contradiction. For the perturbed Euler--Mascheroni constant, the same method and truncation apply. \end{proof} Note that we cannot prove the almost sure linear independence of these series by the present method.
The reason is simple: when we remove the square brackets to pass from the integer parts in the numerators to the series to which Theorem \ref{thm:UD} is applicable, the exponents in the denominators in the last result are all different, so we would get an error term too large to be of any use. We now prove almost sure irrationality of the last two perturbed series. \begin{Theorem} \label{HanKris1.t2} Let $\mu$ be a Radon measure on $\mathbb{R}$ with positive Fourier dimension. For $\mu$-almost all numbers $x$ the number $\sum_{n=1}^\infty \frac{[\prod_{j=1}^{n-1}j^j x]}{\prod_{j=1}^nj^j }$ is irrational. \end{Theorem} \begin{proof} Let $x$ be a real number and suppose to the contrary that the series is rational. Let $p,q\in\mathbb Z^+$ be such that \begin{equation} \label{HanKris1.2} q\sum_{n=1}^\infty \frac{[\prod_{j=1}^{n-1}j^j x]}{\prod_{j=1}^nj^j }=p. \end{equation} Let $N\in\mathbb Z^+$ and multiply \eqref{HanKris1.2} by $\prod_{j=1}^Nj^j$ to obtain $$ q\sum_{n=N+1}^\infty \frac{[\prod_{j=1}^{n-1}j^j x]}{\prod_{j=N+1}^nj^j }=B, $$ where $B$ is a suitable integer constant which depends on $N$. We now truncate at $N^2+1$ and observe that $$ q\sum_{n=N+1}^\infty \frac{\{\prod_{j=1}^{n-1}j^j x\}}{\prod_{j=N+1}^nj^j } = O\left( \frac 1N\right), \quad q\sum_{n=N^2+1}^\infty \frac{[\prod_{j=1}^{n-1}j^j x]}{\prod_{j=N+1}^nj^j } = O\left( \frac 1N\right). $$ As in the preceding proof, we remove the integer part from the remaining term and find that \begin{equation} \label{eq:UD_contradiction} \left\Vert q\sum_{n=N+1}^{N^2} \frac{\prod_{j=1}^{n-1}j^j x}{\prod_{j=N+1}^nj^j } \right\Vert =O\left(\frac 1N\right). \end{equation} Now, the sequence $$ (a_N) = \left(q\sum_{n=N+1}^{N^2} \frac{\prod_{j=1}^{n-1}j^j }{\prod_{j=N+1}^nj^j }\right) $$ is an increasing sequence of rationals taking values in a discrete set, so by Theorem \ref{thm:UD}, the sequence $(a_Nx)$ is uniformly distributed modulo $1$ for $\mu$-almost all $x$. This is in contradiction with \eqref{eq:UD_contradiction}.
\end{proof} We finish this section with the perturbed version of the series $\sum_{n=1}^\infty \frac{1}{n!+1}$. \begin{Theorem} \label{HanKris1.t3} Let $\mu$ be a Radon measure on $\mathbb{R}$ with positive Fourier dimension. Then for $\mu$-almost all numbers $x$ the number $\sum_{n=1}^\infty \frac{[\prod_{j=1}^{n-1}(j!+1) x]}{\prod_{j=1}^n(j!+1) }$ is irrational. \end{Theorem} \begin{proof} Let $x$ be a real number and suppose to the contrary that the series is rational. Let $p,q\in\mathbb Z^+$ be such that \begin{equation} \label{HanKris1.3} q\sum_{n=1}^\infty \frac{[\prod_{j=1}^{n-1}(j!+1) x]}{\prod_{j=1}^n(j!+1) }=p. \end{equation} Let $N\in\mathbb Z^+$ and multiply \eqref{HanKris1.3} by $\prod_{j=1}^N(j!+1)$ to obtain $$ q\sum_{n=N+1}^\infty \frac{[\prod_{j=1}^{n-1}(j!+1) x]}{\prod_{j=N+1}^n(j!+1) }=B, $$ where $B$ is a suitable integer constant which depends on $N$. We truncate again at $n=N^2+1$ and note that $$ q\sum_{n=N+1}^\infty \frac{\{\prod_{j=1}^{n-1}(j!+1) x\}}{\prod_{j=N+1}^n(j!+1) }= O\left(\frac 1N \right)$$ and $$ q\sum_{n=N^2+1}^\infty \frac{[\prod_{j=1}^{n-1}(j!+1) x]}{\prod_{j=N+1}^n(j!+1) }= O\left(\frac 1N \right). $$ As before, this implies that \begin{equation} \label{eq:UD_contradiction2} \left\Vert q\sum_{n=N+1}^{N^2} \frac{\prod_{j=1}^{n-1}(j!+1) x}{\prod_{j=N+1}^n(j!+1) }\right\Vert = O\left(\frac 1N\right). \end{equation} To obtain a contradiction, we need only note that $$ (a_N) = \left(q\sum_{n=N+1}^{N^2} \frac{\prod_{j=1}^{n-1}(j!+1) }{\prod_{j=N+1}^n(j!+1) }\right) $$ is an increasing sequence of rationals taking values in a discrete set and apply Theorem \ref{thm:UD}. \end{proof} \section{Criteria for irrationality} In the above proofs, we have used much stronger results than actually needed. In fact, the uniform distribution of the sequences in question is unnecessarily strong, and we only need the sequences of fractional parts in the proofs to have an accumulation point different from $0$ and $1$.
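The proofs above only use the behaviour of explicit sequences of fractional parts, and these sequences can be computed exactly. As an illustration (our own sketch, not part of the original argument; the function name is ours), the sequence from the proof of Theorem \ref{HanKris1.t2} at $x=1$ can be evaluated with exact rational arithmetic in Python:

```python
from fractions import Fraction
from math import floor

def sophomore_frac(N):
    # a_N = sum_{n=N+1}^{N^2} (prod_{j=1}^{N} j^j) / n^n, computed exactly;
    # this is the sequence from the proof of Theorem HanKris1.t2 at x = 1,
    # using the identity prod_{j=1}^{n-1} j^j / prod_{j=N+1}^{n} j^j
    # = prod_{j=1}^{N} j^j / n^n.
    P = 1
    for j in range(1, N + 1):
        P *= j ** j
    s = sum(Fraction(P, n ** n) for n in range(N + 1, N * N + 1))
    return s - floor(s)  # exact fractional part, as a Fraction

vals = [float(sophomore_frac(N)) for N in range(2, 11)]
```

The values lie in $[0,1)$ by construction; the irrationality criterion concerns their accumulation points as $N\to\infty$, which no finite computation can decide.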
Inserting $x=1$ in the various proofs recovers the original series, and in this way, we obtain some seemingly new criteria for the irrationality of the various series. This is where the explicit value of the truncation point is needed. We state these as corollaries. \begin{Corollary} Let $k \ge 2$ be an integer. If the sequence $$ \left(\left\{\sum_{n=N+1}^{(N!)^{(2k-1)/(k-1)}} \frac{((n-1)!)^{k}}{((N+1)\dots n)^{k}}\right\}\right) $$ has an irrational accumulation point or infinitely many accumulation points, then $\zeta(k)$ is irrational. \end{Corollary} It is tempting to conduct numerical experiments on the distribution of this sequence for some value of $k$. With the help of Alex Ghitza, we have run some numerical experiments on $\zeta(5)$ using \texttt{Sage}. It does not appear that the sequence arising from $k=5$ accumulates at the endpoints of the unit interval. This is however not surprising, as is seen from \eqref{eq:zetaUD}. Indeed, from a numerical point of view, the sum $\sum_{n=N+1}^{(N!)^{9/4}}\frac{1}{n^5}$ is virtually indistinguishable from the sum $\sum_{n=N+1}^{\infty}\frac{1}{n^5}$. On multiplying by $(N!)^5$ and adding the integer $(N!)^5\sum_{n=1}^{N}\frac{1}{n^5}$, which makes no difference as we are considering the sequence modulo $1$, numerically we are in fact just seeing the fractional parts of the sequence $\left((N!)^5 \zeta(5)\right)$, for which the criterion is clear: if $\zeta(5)$ were rational, this sequence would consist of integers for all $N$ larger than the denominator of $\zeta(5)$. We state the corresponding results for the Euler--Mascheroni constant and the remaining two series. \begin{Corollary} If the sequence $$ \left(\left\{\sum_{n=N+1}^{(N!)^3} (-1)^n \frac{(n-1)! [\log_2 n]}{(N+1)\dots n}\right\}\right) $$ has an irrational accumulation point or infinitely many accumulation points, then the Euler--Mascheroni constant $\gamma$ is irrational.
\end{Corollary} From the numerical point of view, this sequence has the same defect as the preceding ones, and one would just end up with an experiment on the original Euler--Mascheroni constant. The irrationality criteria for the Sophomore's Dream problem and the Erd\H{o}s problem are given in the following two corollaries. \begin{Corollary} If the sequence $$ \left(\left\{ \sum_{n=N+1}^{N^2} \frac{\prod_{j=1}^{n-1}j^j }{\prod_{j=N+1}^nj^j } \right\}\right) = \left(\left\{ \sum_{n=N+1}^{N^2} \frac{\prod_{j=1}^Nj^j }{n^n } \right\}\right) $$ has an irrational accumulation point or infinitely many accumulation points, then the sum $\sum_{n=1}^\infty \frac1{n^n}$ is irrational. \end{Corollary} \begin{Corollary} If the sequence $$ \left(\left\{ \sum_{n=N+1}^{N!+1} \frac{\prod_{j=1}^{n-1}(j!+1) }{\prod_{j=N+1}^n(j!+1) }\right\}\right) = \left(\left\{ \sum_{n=N+1}^{N!+1} \frac{\prod_{j=1}^N(j!+1) }{n!+1 }\right\}\right) $$ has infinitely many accumulation points or an irrational accumulation point, then the sum $\sum_{n=1}^\infty \frac1{n!+1}$ is irrational. \end{Corollary} Numerically these are less unwieldy than the series related to the $\zeta$-function. Nonetheless, the numbers involved grow extremely rapidly, and we have not been able to get any useful information from numerical experimentation. As a final remark, one can also obtain a criterion for the linear independence of $\zeta$-values from the above, although this is slightly more convoluted. Concretely, we get the following. \begin{Corollary} Let $K$ be a positive integer. Suppose that for any choice of $A_1, \dots, A_K \in \mathbb{Z}$ not all equal to $0$, the sequence of fractional parts of $$ \sum_{j=2}^KA_j\sum_{n=N+1}^{N^{N(K+1)}}\frac{((n-1)!)^Kn^{K-j} }{((N+1)\dots n)^K} +A_1\sum_{n=N+1}^{N^{N(K+1)}} (-1)^n \frac{((n-1)!)^Kn^{K-1} [\log_2 n]}{((N+1)\dots n)^K} $$ has an accumulation point different from $0$ and $1$.
Then, the set $$\{\gamma, \zeta(2), \zeta(3), \dots, \zeta(K)\}$$ consists of numbers that are linearly independent over the rationals. \end{Corollary} \paragraph{Acknowledgements:} We thank Alex Ghitza for helping us with \texttt{Sage}.
\subsection*{S1. Derivation of the spin-$\frac{1}{2}$ effective model from a single-band Hubbard model under a DC field} This section is devoted to the derivation of the spin-$\frac{1}{2}$ effective model~(2). We start from a half-filled, repulsive Hubbard model ($U>0$) with an arbitrary on-site potential term. The Hamiltonian reads \begin{align} \mathcal{H} &= \sum_{ij \sigma} t_{ij} c^\dagger_{i \sigma} c_{j \sigma} + U \sum_i n_{i \uparrow} n_{i \downarrow} + \sum_{i \sigma} V_i n_{i \sigma} \nonumber\\ &= \mathcal{H}_t + \mathcal{H}_U + \mathcal{H}_V \label{Model_Onsite}, \end{align} where $\mathcal{H}_t$, $\mathcal{H}_U$, and $\mathcal{H}_V$ denote the electron hopping, the on-site Coulomb interaction, and the on-site potential, respectively. We assume that all the on-site potential energies are smaller than the Coulomb interaction energy, i.e., $|V_i| < U$. In order to perform perturbative calculations for any quantum system, it is generally useful to introduce projection operators onto Hilbert subspaces. Let us divide the full Hilbert space into low- and high-energy subspaces spanned by $\{\ket{\Psi_g}\}$ and $\{\ket{\Psi_e}\}$, respectively, and define the projection operator onto the low-energy (high-energy) subspace as $P_g$ ($P_e$). Using these projectors, we arrive at the effective Hamiltonian for the low-energy subspace in second-order perturbation theory: \begin{align} \mathcal{H}_\mathrm{eff} = \mathcal{H}_{gg} + \mathcal{H}_{ge} \frac{1}{E_g - \mathcal{H}_{ee}} \mathcal{H}_{eg}, \label{Formula_Heff} \end{align} where $\mathcal{H}_{\alpha \beta} = P_\alpha \mathcal{H} P_\beta$ $(\alpha, \beta = g, e)$ and $E_g$ is defined by $\mathcal{H}_{gg} \ket{\Psi_g} = E_g \ket{\Psi_g}$. We apply the above formula (\ref{Formula_Heff}) to the Mott insulating state of the Hubbard model (\ref{Model_Onsite}) in the strong-coupling regime $U \gg |t_{ij}|$.
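Before turning to the general derivation, formula (\ref{Formula_Heff}) can be sanity-checked on the smallest nontrivial case, two sites and two electrons. The sketch below (ours, not part of the supplement; the parameter values are arbitrary) diagonalizes the $S^z=0$ sector exactly and compares the singlet--triplet splitting with the second-order exchange $\frac{4|t|^2}{U}\,[1-(\Delta V_{ij}/U)^2]^{-1}$ derived in this section:

```python
import numpy as np

def exchange_exact(t, U, Vi, Vj):
    # Two sites, two electrons, S^z = 0 sector of the Hubbard model,
    # basis {|up_i dn_j>, |dn_i up_j>, |doublon on i>, |doublon on j>}.
    E0 = Vi + Vj  # energy of the singly occupied configurations
    H = np.array([[E0, 0.0, t, t],
                  [0.0, E0, -t, -t],
                  [t, -t, U + 2 * Vi, 0.0],
                  [t, -t, 0.0, U + 2 * Vj]])
    E_singlet = np.linalg.eigvalsh(H).min()
    E_triplet = E0  # the triplet combination decouples from the doublons
    return E_triplet - E_singlet

def exchange_pert(t, U, Vi, Vj):
    # second-order exchange of the effective model derived in this section
    dV = Vi - Vj
    return (4 * t**2 / U) / (1 - (dV / U) ** 2)

J_ex = exchange_exact(0.01, 1.0, 0.2, -0.1)
J_pt = exchange_pert(0.01, 1.0, 0.2, -0.1)
```

For $t \ll U$ the two values agree to high accuracy, while the agreement degrades as $t/U$ grows, as expected for second-order perturbation theory.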
In this limit, the ground states of the unperturbed Hamiltonian $\mathcal{H}_U + \mathcal{H}_V$ are states where all sites are singly occupied. We treat the hopping term $\mathcal{H}_t$ as the perturbation, and define the low-energy (high-energy) subspace as the ground states (states with doubly occupied sites). First we consider $\mathcal{H}_{eg}$ in the Mott insulating state of the Hubbard model~(\ref{Model_Onsite}). Of the three terms $\mathcal{H}_t$, $\mathcal{H}_U$ and $\mathcal{H}_V$ of the Hamiltonian $\mathcal{H}$, only the hopping $\mathcal{H}_t$ has matrix elements between the high- and low-energy states, $\{\ket{\Psi_g}\}$ and $\{\ket{\Psi_e}\}$. Therefore $\mathcal{H}_{eg}$ is written as \begin{align} \mathcal{H}_{eg}&=P_e \left( \sum_{ij \sigma} t_{ij} c^\dagger_{i \sigma} c_{j \sigma} \right) P_g. \end{align} From the Pauli exclusion principle and the half-filled condition, we see that $\mathcal{H}_{eg}$ survives only when the spins on the $i$-th and $j$-th sites are different, i.e., $(S^z_i, S^z_j) = (\uparrow, \downarrow), (\downarrow,\uparrow)$, as shown in Fig.~\ref{process}~(a). We may thus rewrite $\mathcal{H}_{eg}$ as \begin{align} \mathcal{H}_{eg}&= \sum_{ij \sigma} t_{ij} c^\dagger_{i \sigma} c_{j \sigma} (S^z_i - S^z_j)^2. \end{align} Next we compute the energy difference between the ground and the intermediate high-energy states depicted in Fig.~\ref{process}. To this end, we may focus on the two neighboring sites $i$ and $j$. In the ground states, both sites are singly occupied and thus their energy is given by $V_i + V_j$. In contrast, in the intermediate states the $i$-th site is doubly occupied and the $j$-th site is vacant. Thus the energy is $U + 2V_i$.
These results lead to \begin{align} \frac{1}{E_g - \mathcal{H}_{ee}} \mathcal{H}_{eg} &= \sum_{ij \sigma} \frac{1}{(V_i + V_j) - (U + 2V_i)} t_{ij} c^\dagger_{i \sigma} c_{j \sigma} (S^z_i - S^z_j)^2 \nonumber \\ &= -\sum_{ij \sigma}\frac{1}{U - \Delta V_{ij}} t_{ij} c^\dagger_{i \sigma} c_{j \sigma} (S^z_i - S^z_j)^2. \label{eq:inter} \end{align} Here we define $\Delta V_{ij} = V_j - V_i$. \begin{figure}[thbp] \includegraphics[width=10cm]{process_v1.PNG} \caption{Spin configuration of states relevant to the second-order perturbative calculation: (a) A ground state and (b) an intermediate state.} \label{process} \end{figure} Finally, we apply $\mathcal{H}_{ge}$ to Eq.~(\ref{eq:inter}), and the second-order perturbation term is calculated as follows: \begin{align} \mathcal{H}_{ge} \frac{1}{E_g - \mathcal{H}_{ee}} \mathcal{H}_{eg} &= - P_g \sum_{i^\prime j^\prime \sigma^\prime} t_{j^\prime i^\prime} c^\dagger_{j^\prime \sigma^\prime} c_{i^\prime \sigma^\prime}\sum_{ij \sigma}\frac{1}{U - \Delta V_{ij}} t_{ij} c^\dagger_{i \sigma} c_{j \sigma} (S^z_i - S^z_j)^2 \nonumber\\ &= - \sum_{ij \sigma} \frac{|t_{ij}|^2}{U - \Delta V_{ij}} c^\dagger_{j \sigma} c_{i \sigma} c^\dagger_{i \sigma} c_{j \sigma} (S^z_i - S^z_j)^2 - \sum_{ij \sigma} \frac{|t_{ij}|^2}{U - \Delta V_{ij}} c^\dagger_{j \bar{\sigma}} c_{i \bar{\sigma}} c^\dagger_{i \sigma} c_{j \sigma} (S^z_i - S^z_j)^2 \nonumber\\ &= - \sum_{ij \sigma} \frac{|t_{ij}|^2}{U - \Delta V_{ij}} n_{j \sigma} (1-n_{i\sigma}) (S^z_i - S^z_j)^2 + \sum_{ij \sigma} \frac{|t_{ij}|^2}{U - \Delta V_{ij}} c^\dagger_{j \bar{\sigma}} c_{j \sigma} c^\dagger_{i \sigma}c_{i \bar{\sigma}} (S^z_i - S^z_j)^2 \nonumber \\ &= - \sum_{ij} \frac{|t_{ij}|^2}{U - \Delta V_{ij}} (S^z_i - S^z_j)^2 + \sum_{ij} \frac{|t_{ij}|^2}{U - \Delta V_{ij}} (S^-_j S^+_i + S^+_j S^-_i) (S^z_i - S^z_j)^2 \nonumber\\ &= - \sum_{ij} \frac{|t_{ij}|^2}{U - \Delta V_{ij}} (S^z_i - S^z_j)^2 + \sum_{ij} \frac{|t_{ij}|^2}{U - \Delta V_{ij}} (S^-_j S^+_i + S^+_j
S^-_i) \nonumber\\ &= \sum_{ij} \frac{|t_{ij}|^2}{U - \Delta V_{ij}} \left( -\frac{1}{2} + 2 S^z_i S^z_j + S^+_i S^-_j + S^-_i S^+_j \right)\nonumber\\ &= \sum_{ij} \frac{2 |t_{ij}|^2}{U - \Delta V_{ij}} \bm S_i \cdot \bm S_j + \mathrm{const.}, \label{eq:final} \end{align} where we have defined $\bar{\sigma} = - \sigma$. The first-order term $\mathcal{H}_{gg}$ gives only a constant term, and therefore the effective Hamiltonian up to second-order perturbation theory is given by \begin{align} \mathcal{H}_\mathrm{eff} &= \sum_{ij} \frac{2 |t_{ij}|^2}{U - \Delta V_{ij}} \bm S_i \cdot \bm S_j+ \mathrm{const.}\nonumber\\ &= \sum_{\langle ij \rangle} \frac{4 |t_{ij}|^2}{U} \frac{1}{1- \left( \frac{\Delta V_{ij}}{U} \right)^2} \bm S_i \cdot \bm S_j+ \mathrm{const.}, \end{align} where, in the last line, the summation is taken over every bond $\langle i, j \rangle$. This is the effective model (2) in the main text. \subsection*{S2. Derivation of the spin-1 effective model from a two-band Hubbard model under a DC field} In this section, we show the derivation of the effective spin-1 model (3). We start from a half-filled, two-orbital Hubbard model with an additional on-site potential. The Hamiltonian consists of three parts, namely hopping, potential, and interaction terms: \begin{eqnarray} \mathcal{H}&=& \mathcal{H}_t + \mathcal{H}_V + \mathcal{H}_\mathrm{int}.
\label{eq:spin1Hubbard} \end{eqnarray} These terms are given by \begin{align} \mathcal{H}_t &=\sum_{ij}\sum_{\alpha}\sum_{\sigma} t_{ij} c^\dagger_{i \alpha \sigma} c_{j \alpha \sigma},\\ \mathcal{H}_V &= \sum_{i \alpha \sigma} V_i n_{i \alpha \sigma},\\ \mathcal{H}_\mathrm{int} &= U \sum_{i}\sum_{\alpha} n_{i \alpha \uparrow} n_{i \alpha \downarrow} + U^\prime \sum_{i}\sum_{\sigma \sigma^\prime} n_{i 1 \sigma} n_{i 2 \sigma^\prime} \nonumber \\ &\qquad - J \sum_{i} \sum_{\sigma \sigma^\prime} c^\dagger_{i 1 \sigma} c_{i 1 \sigma^\prime } c^\dagger_{i 2 \sigma^\prime}c_{i 2\sigma} - J_P \sum_{i} \left( c^\dagger_{i 1 \uparrow} c^\dagger_{i 1 \downarrow } c_{i 2 \downarrow } c_{i 2 \uparrow} + \mathrm{h.c.} \right). \label{eq:twoorbHubint} \end{align} Here $\alpha(=1,2)$ is the orbital index and $\bar{\sigma}$ denotes the opposite spin $-\sigma$. In the interaction $\mathcal{H}_\mathrm{int}$, the $U$, $U'$, $J$, and $J_P$ terms denote an intra-orbital interaction, an inter-orbital interaction, a Hund's coupling, and a pair hopping, respectively. Due to the rotational symmetry of the Coulomb interaction, $J_P = J$ is required.
For convenience, we transform the interaction $\mathcal{H}_\mathrm{int}$ (\ref{eq:twoorbHubint}) as follows: \begin{align} \mathcal{H}_\mathrm{int} &= U \sum_{i} \sum_\alpha n_{i \alpha \uparrow} n_{i \alpha \downarrow} + \left( U^\prime - \frac{J}{2}\right) \sum_{i}\sum_{\sigma \sigma^\prime} n_{i 1\sigma} n_{i 2\sigma^\prime} - 2 J \sum_{i} \bm S_{i1} \cdot \bm S_{i2} - J \sum_{i} \left( c^\dagger_{i 1 \uparrow} c^\dagger_{i 1 \downarrow } c_{i 2 \downarrow } c_{i 2 \uparrow} + \mathrm{h.c.} \right), \label{eq:Hund} \end{align} where we have used the identity \begin{align} \sum_\sigma c^\dagger_{i 1 \sigma}c_{i 1 \bar{\sigma}} c^\dagger_{i 2 \bar{\sigma}} c_{i 2 \sigma} =2 \bm S_{i1} \cdot \bm S_{i2} - \frac{1}{2} \sum_\sigma n_{i1 \sigma} n_{i2 \sigma} + \frac{1}{2} \sum_\sigma n_{i1 \sigma} n_{i2 \bar{\sigma}}, \end{align} and $\bm S_{i\alpha}$ is the spin operator for an $\alpha$-orbital electron on the $i$-th site. First we discuss the ground states under the conditions of half filling and the strong-coupling regime $U > U^\prime > J \gg t$. Under these conditions, all the orbitals are singly occupied, so that there are two electrons per site in the ground states. We here introduce local bases $\ket{\psi}_i$ to represent the spin state on each site $i$. They are classified into the spin-triplet sector $\mathcal{T}_i$ and the spin-singlet sector $\mathcal{S}_i$: \begin{align} \mathcal{T}_i &= \{ \ket{+}_i, \ket{\circ}_i, \ket{-}_i \}, \\\mathcal{S}_i &= \{ \ket{s}_i \}.
\end{align} The four kinds of $\ket{\psi}_i$ are defined as \begin{align} \ket{+}_i &= c^\dagger_{i 1 \uparrow} c^\dagger_{i 2 \uparrow} \ket{0}, \\ \ket{-}_i &= c^\dagger_{i 1 \downarrow} c^\dagger_{i 2 \downarrow} \ket{0},\\ \ket{\circ}_i & = \frac{1}{\sqrt{2}} \left( c^\dagger_{i 1 \uparrow} c^\dagger_{i 2 \downarrow} \ket{0} + c^\dagger_{i 1 \downarrow} c^\dagger_{i 2 \uparrow} \ket{0} \right),\\ \ket{s}_i & = \frac{1}{\sqrt{2}} \left( c^\dagger_{i 1 \uparrow} c^\dagger_{i 2 \downarrow} \ket{0} - c^\dagger_{i 1 \downarrow} c^\dagger_{i 2 \uparrow} \ket{0} \right), \end{align} where $\ket{+}_i$, $\ket{-}_i$, and $\ket{\circ}_i$ are respectively the $S^z=+1$, $-1$, and $0$ states on the $i$-th site. Within this localized spin subspace, the correlation function of the two orbital spins on a single site is computed as \begin{align} \bra{\psi}_i \bm S_{i1} \cdot \bm S_{i2} \ket{\psi}_i = \begin{cases} \frac{1}{4} & (\ket{\psi}_i \in \mathcal{T}_i) \\ -\frac{3}{4} & (\ket{\psi}_i \in \mathcal{S}_i). \end{cases} \end{align} This result and the Hund's coupling in Eq.~(\ref{eq:Hund}) clearly show that the ground state on each site lies in the spin-triplet sector; namely, a localized spin-1 system is realized by Eq.~(\ref{eq:spin1Hubbard}). Next, we focus on the zero-potential case $V_i= 0$. As we will see shortly, the effective model for $V_i \neq 0$ can be easily derived by simply extending the result of the $V_i= 0$ case. Using the formula (\ref{Formula_Heff}), let us derive the effective spin model for the $V_i= 0$ case with the hopping $\mathcal{H}_t$ as the perturbation. To this end, we introduce the nine local bases $\ket{\psi_i}_i\ket{\psi_j}_j$ which represent the spin states on the neighboring sites $i$ and $j$ ($\psi_{i,j}\in \{+,\circ,-\}$).
In the matrix form, the bases are expressed as \begin{align} \ket{\Psi_{ij}} = \begin{pmatrix} \begin{array}{c} \ket{+}_i \ket{+}_j \\ \ket{+}_i \ket{\circ}_j \\ \ket{+}_i \ket{-}_j \\ \ket{\circ}_i \ket{+}_j \\ \ket{\circ}_i \ket{\circ}_j \\ \ket{\circ}_i \ket{-}_j \\ \ket{-}_i \ket{+}_j \\ \ket{-}_i \ket{\circ}_j \\ \ket{-}_i \ket{-}_j \end{array} \end{pmatrix}. \end{align} Through a straightforward calculation, we obtain \begin{align} \mathcal{H}_{ge}\mathcal{H}_{eg}\ket{+}_i \ket{-}_j&= |t_{ij}|^2 \left( - \ket{\circ}_i \ket{\circ}_j + 2 \ket{+}_i \ket{-}_j \right), \\ \mathcal{H}_{ge}\mathcal{H}_{eg} \ket{+}_i \ket{\circ}_j &= |t_{ij}|^2 \left( -\ket{\circ}_i \ket{+}_j + \ket{+}_i \ket{\circ}_j \right),\\ \mathcal{H}_{ge}\mathcal{H}_{eg} \ket{\circ}_i \ket{\circ}_j &= |t_{ij}|^2 \left( -\ket{+}_i \ket{-}_j - \ket{-}_i \ket{+}_j + \ket{\circ}_i \ket{\circ}_j \right), \end{align} and \begin{align} \mathcal{H}_{ge}\frac{1}{E_g-\mathcal{H}_{ee}}\mathcal{H}_{eg}\ket{+}_i \ket{-}_j&=\frac{|t_{ij}|^2}{\Delta E_{ij}} \left(- \ket{\circ}_i \ket{\circ}_j + 2 \ket{+}_i \ket{-}_j \right), \\ \mathcal{H}_{ge}\frac{1}{E_g-\mathcal{H}_{ee}}\mathcal{H}_{eg} \ket{+}_i \ket{\circ}_j &=\frac{|t_{ij}|^2}{\Delta E_{ij}}\left( -\ket{\circ}_i \ket{+}_j + \ket{+}_i \ket{\circ}_j \right),\\ \mathcal{H}_{ge}\frac{1}{E_g-\mathcal{H}_{ee}}\mathcal{H}_{eg} \ket{\circ}_i \ket{\circ}_j &=\frac{|t_{ij}|^2}{\Delta E_{ij}}\left( -\ket{+}_i \ket{-}_j - \ket{-}_i \ket{+}_j + \ket{\circ}_i \ket{\circ}_j \right), \end{align} where \begin{align} \Delta E_{ij} &= \left\{ 2 \times \left( U^\prime- \frac{J}{2}\right) - 2 \times 2 J \cdot \frac{1}{4}\right\} - \left\{ U + 2 \times \left( U^\prime- \frac{J}{2}\right) \right\} \nonumber\\ &=-(U+J).
\label{eq:deltaE} \end{align} From these results, the effective Hamiltonian on the $i$-th and $j$-th sites is given by \begin{align} \bra{\Psi_{ij}} \mathcal{H}_\mathrm{eff} \ket{\Psi_{ij}}&= E_g I + \frac{|t_{ij}|^2}{U+J} \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -2 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & -1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & -2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \label{eq:matele} \end{align} On the other hand, the matrix elements of the Heisenberg interaction between two spin-1 operators are computed as \begin{align} \bra{\Psi_{ij}} \bm S_{i} \cdot \bm S_{j} \ket{\Psi_{ij}}&= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}. \label{eq:spin-1_mat} \end{align} Comparing Eqs.~(\ref{eq:matele}) and (\ref{eq:spin-1_mat}), we see the identity \begin{align} \bra{\Psi_{ij}} \mathcal{H}_\mathrm{eff} \ket{\Psi_{ij}}&= E_g I + \frac{|t_{ij}|^2}{U+J} \left( \bra{\Psi_{ij}} \bm S_{i} \cdot \bm S_{j} \ket{\Psi_{ij}} - I \right). \label{Heff_matrixele} \end{align} Therefore, dropping the constant terms in Eq.~(\ref{Heff_matrixele}), the effective spin model for the zero-potential system is written as \begin{align} \mathcal{H}_\mathrm{eff} &= \sum_{ij} \frac{|t_{ij}|^2}{U + J} \bm S_i \cdot \bm S_j \nonumber\\ &=\sum_{\langle ij \rangle} \frac{ 2|t_{ij}|^2}{U + J} \bm S_i \cdot \bm S_j \nonumber\\ &=\sum_{\langle ij \rangle} J^{S=1}_{ij}\, \bm S_i \cdot \bm S_j , \end{align} where we have defined $J^{S=1}_{ij} = 2|t_{ij}|^2 / (U + J)$.
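The comparison between Eqs.~(\ref{eq:matele}) and (\ref{eq:spin-1_mat}) can also be checked numerically. A minimal sketch (ours, using NumPy; the basis ordering follows $\ket{\Psi_{ij}}$ above) builds $\bm S_i \cdot \bm S_j$ from the spin-1 operators and verifies that it equals the matrix of Eq.~(\ref{eq:matele}) plus the identity:

```python
import numpy as np

# spin-1 operators in the local basis {|+>, |o>, |->}, i.e. S^z = +1, 0, -1
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
Sm = Sp.T

# S_i . S_j on the nine-dimensional product basis |psi_i>|psi_j>
SdotS = np.kron(Sz, Sz) + 0.5 * (np.kron(Sp, Sm) + np.kron(Sm, Sp))

# the matrix multiplying |t_ij|^2/(U+J) in the effective Hamiltonian
M = np.array([[ 0,  0,  0,  0,  0,  0,  0,  0,  0],
              [ 0, -1,  0,  1,  0,  0,  0,  0,  0],
              [ 0,  0, -2,  0,  1,  0,  0,  0,  0],
              [ 0,  1,  0, -1,  0,  0,  0,  0,  0],
              [ 0,  0,  1,  0, -1,  0,  1,  0,  0],
              [ 0,  0,  0,  0,  0, -1,  0,  1,  0],
              [ 0,  0,  0,  0,  1,  0, -2,  0,  0],
              [ 0,  0,  0,  0,  0,  1,  0, -1,  0],
              [ 0,  0,  0,  0,  0,  0,  0,  0,  0]], dtype=float)

identity_holds = np.allclose(SdotS - np.eye(9), M)
```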
Finally, let us turn to the generic case with $V_i \neq 0$. In this case, most of the perturbative calculations are the same as those of the $V_i=0$ case. However, $\Delta E_{ij}$ of Eq.~(\ref{eq:deltaE}) should be changed into \begin{align} \Delta E_{ij} &= \left\{ 2 \times \left( U^\prime- \frac{J}{2} \right) - 2 \times 2 J \cdot \frac{1}{4} + 2V_i +2 V_j \right\} - \left\{ U + 2 \times \left( U^\prime- \frac{J}{2}\right) + 3 V_i + V_j \right\} \nonumber\\ &=-(U+J + \Delta V_{ij}), \end{align} where $\Delta V_{ij} = V_i -V_j$. Thus the effective spin model for the two-orbital Hubbard model with an on-site potential is written as \begin{align} \mathcal{H}_\mathrm{eff} &= \sum_{ij} \frac{|t_{ij}|^2}{U + J + \Delta V_{ij}} \bm S_i \cdot \bm S_j \nonumber\\ &=\sum_{\langle ij \rangle} \frac{ J^{S=1}_{ij}}{1 - \left(\frac{\Delta V_{ij}}{U+J}\right)^2} \bm S_i \cdot \bm S_j. \end{align} This is the effective model (3) in the main text. \subsection*{S3. Bosonization and Chain Mean-field Approach} In this section, we briefly explain how the critical temperature between the N\'eel-ordered and paramagnetic phases of our quasi-1D spin-$\frac{1}{2}$ model (5) is computed. First we summarize some results of bosonization for spin-$\frac{1}{2}$ chains~[44, 47-49]. Then, by combining the chain mean-field theory (MFT) with the bosonization results~[83-89], we determine the critical temperature of the quasi-1D model (5). We start from the definition of the 1D spin-$\frac{1}{2}$ XXZ chain model. The Hamiltonian is given by \begin{eqnarray} {\cal H}_{\rm xxz}&=& J\sum_j \left[S^x_{j}S^x_{j+1}+S^y_{j}S^y_{j+1}+\Delta_zS^z_{j}S^z_{j+1}\right] -H\sum_jS^z_j, \end{eqnarray} where $\bm S_j$ is the spin-$\frac{1}{2}$ operator on the $j$-th site, $J>0$ is the strength of the exchange interaction, $\Delta_z$ is the XXZ anisotropy parameter, and $H$ is the external magnetic field. The point $\Delta_z=1$, $H=0$ corresponds to the SU(2)-symmetric antiferromagnetic Heisenberg model.
The XXZ model is a prototypical integrable system, and at zero field $H=0$ a TL-liquid phase with gapless spinon excitations exists throughout the range $-1 < \Delta_z \leq 1$. The TL-liquid phase survives from zero field up to the saturation field. Bosonization accurately describes the low-energy properties in and around the TL-liquid phase. Through the standard bosonization procedure, the XXZ model in/around the TL-liquid phase is mapped to a low-energy gapless scalar-field theory, whose Hamiltonian is \begin{eqnarray} {\cal H}_{\rm eff}=\int dx\,\, \frac{v}{2}\, \Big[\frac{1}{K}(\partial_x\phi)^2+K(\partial_x\theta)^2\Big], \label{eq:eff} \end{eqnarray} where $x=j a_0$ is the continuous coordinate ($a_0$: lattice constant), and $\phi(x,t)$ and $\theta(x,t)$ are the canonical pair of real scalar fields satisfying the commutation relation $[\phi(x,t),\partial_y\theta(y,t)]=i\delta(x-y)$. The two symbols $v$ and $K$ respectively denote the spinon group velocity and the TL-liquid parameter. For instance, $K=1$ and $v=\pi J a_0/2$ at the SU(2) point. Spin operators are also bosonized as \begin{eqnarray} S_{j}^z &\approx& M\,+\,\frac{a_0}{\sqrt{2\pi}}\partial_x{\phi} +(-1)^j A_1\cos\left(\sqrt{2\pi}{\phi} + 2\pi Mj\right)+\cdots,\nonumber\\ S_{j}^+ &\approx& e^{i\sqrt{2\pi}\theta}\left[(-1)^j B_0 +B_1\cos\left(\sqrt{2\pi}{\phi} + 2\pi Mj\right)+\cdots\right], \label{eq:spin_boson} \end{eqnarray} where $M=\langle S_j^z\rangle$ is the $H$-induced uniform magnetization per site, and $A_n$ and $B_n$ are non-universal constants depending on the model parameters $J$, $\Delta_z$ and $H$. Accurate values of $v$, $K$, $A_n$ and $B_n$ have been computed using the Bethe ansatz and numerical methods~[103-108]. On the basis of the formulas (\ref{eq:eff}) and (\ref{eq:spin_boson}), one can accurately calculate the long-distance and long-time behavior of correlation functions in the TL-liquid phase.
Let us define the dynamical spin susceptibility with wave number $k$ and frequency $\omega$ as $\chi_R^{ab}(k,\omega)=-\int_{0}^\beta d\tau \sum_j e^{-ik j a_0 +i\omega_n \tau} \langle T_\tau S^a_j (\tau)S^b_0(0)\rangle|_{i\omega_n=\omega+i\eta}$, where $\tau$ is imaginary time, $\beta=1/(k_BT)$ is the inverse temperature, $\omega_n=2\pi n/\beta$ ($n$: integer), and $\eta$ is an infinitesimal positive constant. Through the bosonization technique with Eqs.~(\ref{eq:eff}) and (\ref{eq:spin_boson}), one can calculate the transverse dynamical susceptibility around $k=\pi+\delta k$ in the TL-liquid phase: \begin{eqnarray} \chi_R^{-+}(\pi+\delta k,\omega) &\approx& -B_0^2 \,\,\frac{a_0}{v}\,\,\sin\Big(\frac{\pi}{2K}\Big)\,\, \Big(\frac{2\pi a_0}{\beta v}\Big)^{1/K-2}\nonumber\\ &&\times B\Big(-i\frac{\beta(\omega-v \delta k)}{4\pi}+\frac{1}{4K},1-\frac{1}{2K}\Big) B\Big(-i\frac{\beta(\omega+v \delta k)}{4\pi}+\frac{1}{4K},1-\frac{1}{2K}\Big), \label{eq:transverse} \end{eqnarray} where $S^\pm_j=S^x_j\pm i S^y_j$ and $B(x,y)$ is the Beta function. This formula is quite reliable in the range $|\delta k|\ll a_0^{-1}$ and $|\omega|, k_BT\ll J$. In the TL-liquid phase of the XXZ chain, the relation $\chi_R^{xx}(k,\omega)=\chi_R^{yy}(k,\omega)=\frac{1}{2}\chi_R^{-+}(k,\omega)$ holds. Next, we apply the chain MFT to our quasi-1D spin-$\frac{1}{2}$ magnet (5) with the above bosonization results. In the chain MFT, quantum and thermal fluctuation effects are accurately taken into account in the strongly coupled chain direction, while the inter-chain interactions are treated within the standard MFT.
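As an illustration (a sketch with an arbitrary illustrative value for the non-universal amplitude $B_0$ and units $k_B=1$; the velocity is fixed at its SU(2)-point value $v=\pi J a_0/2$), the static staggered susceptibility $|\chi_R^{-+}(\pi,\omega\to 0)|$ of Eq.~(\ref{eq:transverse}) can be evaluated via $B(x,y)=\Gamma(x)\Gamma(y)/\Gamma(x+y)$, which at $\delta k=0$, $\omega=0$ involves only real arguments:

```python
import math

def beta_fn(x, y):
    # Euler Beta function via Gamma, for positive real arguments.
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def chi_staggered(T, K=1.0, B0=0.5, J=1.0, a0=1.0):
    # |chi_R^{-+}(pi, omega -> 0)| at delta k = 0, omega = 0, beta = 1/T
    # (k_B = 1).  B0 = 0.5 is an arbitrary illustrative amplitude; the
    # velocity is taken at the SU(2) point for simplicity.
    v = math.pi * J * a0 / 2
    return (B0**2 * (a0 / v) * math.sin(math.pi / (2 * K))
            * (2 * math.pi * a0 * T / v) ** (1 / K - 2)
            * beta_fn(1 / (4 * K), 1 - 1 / (2 * K)) ** 2)

# At K = 1 the susceptibility grows as T^{1/K-2} = 1/T on cooling:
assert chi_staggered(0.1) > chi_staggered(0.2) > 0
```

Since the exponent $1/K-2$ is negative for $K>1/2$, this susceptibility diverges as $T\to 0$, which is why the chain-MFT self-consistency condition can always be met at a finite N\'eel temperature.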
On the basis of this approach, the $S^x$ component of the dynamical spin susceptibility in the quasi-1D system (5) takes the following RPA-like form: \begin{eqnarray} \chi_{\rm 3D}^{xx}(k_x,k_y,k_z,\omega) &=& \frac{\chi_{R}^{xx}(k_x,\omega)} {1-2(J_y\cos k_y+J_z\cos k_z)\chi_{R}^{xx}(k_x,\omega)}, \label{eq:XXZchain-MFT} \end{eqnarray} where the wave number $k_x$ corresponds to the 1D-chain direction, and $k_{y,z}$ are the wave numbers along the inter-chain directions. This result is quantitatively valid in the sufficiently weak inter-chain regime $|J_{y,z}|\ll J$. The phase transition between the N\'eel and paramagnetic phases is determined as the point where $\chi_{\rm 3D}^{xx}(\pi,\pi,\pi,\omega\to0)$ diverges. This is equivalent to the condition that the denominator of Eq.~(\ref{eq:XXZchain-MFT}) vanishes at $\bm k=(\pi,\pi,\pi)$ and $\omega\to 0$: \begin{eqnarray} -2(J_y+J_z)\chi_{R}^{xx}(\pi,\omega\to0)=1. \label{eq:chain-MFT_3} \end{eqnarray} Substituting the bosonization result (\ref{eq:transverse}) into this condition, we arrive at the formula determining the transition temperature: \begin{eqnarray} B_0^2 \,\,\frac{(J_y+J_z) a_0}{v}\,\,\sin\Big(\frac{\pi}{2K}\Big)\,\, \Big(\frac{2\pi a_0}{\beta v}\Big)^{1/K-2}\,\,B\Big(\frac{1}{4K},1-\frac{1}{2K}\Big)^2=1. \label{eq:chain-MFT_4} \end{eqnarray} Using this result, we have drawn the phase boundaries in Figs.~4(b) and 4(c). We finally note a technical point: since the parameter $B_0$ is ill-defined exactly at the SU(2) point $\Delta_z=1$, $H=M=0$, in Fig.~4 we have used its value for a nearly SU(2)-symmetric model with a small magnetization $M=0.01$. \end{widetext} \end{document}
\section{Introduction} Assume that we have a Boolean function $f:\{0, 1\}^n\to\{0, 1\}$ called the \emph{outer function} and a Boolean function $g:A\times B\to\{0, 1\}$ called the \emph{gadget}. Consider a composed function $f\circ g : A^n \times B^n\to \{0, 1\}$, defined as follows: $$ (f\circ g)((a_1, \ldots, a_n), (b_1, \ldots, b_n) ) = f(g(a_1, b_1), \ldots, g(a_n, b_n)).$$ What can be said about the deterministic communication complexity of $f\circ g$, denoted below by $D^{cc}(f\circ g)$? Obviously, we have the following inequality: $$D^{cc}(f\circ g) \le D^{dt}(f) \cdot D^{cc}(g),$$ where $D^{dt}(f)$ stands for the deterministic query complexity of $f$. Indeed, we can transform a decision tree for $f$ making $q$ queries into a protocol of communication cost $q \cdot D^{cc}(g)$ by simulating each query to $f$ with $D^{cc}(g)$ bits. It turns out that for some gadgets $g$, and for all $f$ whose arity is at most some function of $g$'s size, this simple protocol is essentially optimal. The first gadget for which this was proved is the Indexing Function $$\mathsf{IND}_k:\{1, 2, \ldots, k\} \times \{0, 1\}^k \to\{0, 1\}, \qquad g(x, y) = y_x.$$ More specifically, in 2015 G\"{o}\"{o}s et al. (\cite{goos2015deterministic}) proved that for all $n\le 2^{k^{1/20}}$ and for all $f:\{0, 1\}^n\to\{0, 1\}$ it holds that \begin{equation} \label{index} D^{cc}(f\circ \mathsf{IND}_k) = \Omega(D^{dt}(f) \log k). \end{equation} In fact, $f$ need not be a Boolean function: it can be any relation $R\subset\{0, 1\}^n \times C$. The work of G\"{o}\"{o}s et al. was a generalization of the theorem of Raz and McKenzie (\cite{raz1997separation}), who in 1997 established \eqref{index} for a certain class of outer relations, called DNF-Search problems. Theorems of this kind, usually called \emph{simulation theorems}, can be viewed as a new method of proving lower bounds in communication complexity.
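The trivial upper bound above is easy to make concrete. The following sketch (our own toy helpers, not from the literature) composes an outer function with $\mathsf{IND}_k$ and counts the bits sent by the query-by-query simulation, using the obvious $(\lceil\log_2 k\rceil + 1)$-bit protocol for one copy of $\mathsf{IND}_k$:

```python
import math

def ind(x, y):
    # Indexing gadget IND_k: Alice holds x in {0, ..., k-1}, Bob holds
    # y in {0,1}^k; the answer is the bit y_x.
    return y[x]

def compose(f, g, a, b):
    # (f o g)((a_1..a_n), (b_1..b_n)) = f(g(a_1, b_1), ..., g(a_n, b_n))
    return f([g(ai, bi) for ai, bi in zip(a, b)])

def naive_cost(q, k):
    # Simulate each of the q decision-tree queries by one copy of IND_k:
    # Alice announces her index (ceil(log2 k) bits) and Bob replies with
    # the indexed bit, so the total cost is q * (ceil(log2 k) + 1).
    return q * (math.ceil(math.log2(k)) + 1)

parity = lambda bits: sum(bits) % 2
# Four copies of IND_4; the trivial decision tree queries all 4 coordinates.
assert compose(parity, ind, (0, 1, 2, 3), ((1, 0, 0, 0),) * 4) == 1
assert naive_cost(4, 8) == 16
```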
Namely, a lower bound on the communication complexity of a composed function reduces to a lower bound on the query complexity of the outer function, and usually it is much easier to deal with the latter. As was shown by Raz and McKenzie, this method turns out to be powerful enough to separate the monotone NC hierarchy. Moreover, as was discovered by G\"{o}\"{o}s et al., this method can be quadratically better than the logarithm of the partition number, another classical lower bound method in deterministic communication complexity. There are simulation theorems not only for deterministic communication and query complexities, but for other models too, see, e.g., \cite{de2016limited, goos2017query, hatami2016structure, goos2016rectangles}. Note that the input length of the gadget in \eqref{index} is even bigger than the input length of the outer function. G\"{o}\"{o}s et al. in \cite{goos2015deterministic} asked whether it is possible to prove a simulation theorem for a gadget whose input length is logarithmic in the input length of the outer function. This question was answered positively by Chattopadhyay et al. (\cite{chattopadhyay2017simulation}) and independently by Wu et al. (\cite{wu2017raz}). Moreover, Chattopadhyay et al. significantly generalized the proof of G\"{o}\"{o}s et al., having discovered a certain property of a gadget $g: A\times B \to\{0, 1\}$ which can be used as a black box to show new simulation theorems: once $g$ satisfies this property, we have a simulation theorem for $g$. Their property is defined through so-called ``hitting distributions''. Let $\mu$ be a probability distribution over rectangles $U\times V\subset A\times B$.
A distribution $\mu$ is called \emph{$(\delta, h)$-hitting}, where $\delta\in (0, 1)$ and $h$ is a positive integer, if for every $X\subset A$ of size at least $2^{-h}|A|$ and for every $Y\subset B$ of size at least $2^{-h}|B|$ we have that $$\Pr_{U\times V \sim \mu}[U\times V \cap X\times Y \neq \varnothing] \ge 1- \delta.$$ It turns out that if for every $b\in\{0, 1\}$ there is a $(\delta, h)$-hitting distribution over $b$-monochromatic rectangles of $g$, then there is a simulation theorem for $g$. The smaller $\delta$ and the bigger $h$, the better the simulation theorem. More precisely, Chattopadhyay et al. proved the following theorem. \begin{theorem} \label{simulation_theorem} Assume that $\varepsilon \in (0, 1)$ and an integer $h$ are such that $h \ge 6/\varepsilon$. Then the following holds. For every (possibly partial) Boolean function $g:A\times B \to \{0, 1\}$ that has two $(\frac{1}{10}, h)$-hitting distributions, the one over 0-monochromatic rectangles and the other over 1-monochromatic rectangles, for every $n \le 2^{h(1 - \varepsilon)}$ and $f:\{0, 1\}^n\to\{0, 1\}$ it holds that $$D^{cc}(f\circ g^n) \ge \frac{\varepsilon h}{4} \cdot D^{dt}(f).$$ \end{theorem} Further, they showed that the Inner Product and Gap Hamming Distance gadgets on $k$ bits have $(o(1), \Omega(k))$-hitting distributions for both kinds of monochromatic rectangles. More precisely, for every constant $\gamma > 0$ and for all large enough $k$ they constructed $(o(1), (1/2 - \gamma)k)$-hitting distributions for $k$-bit Inner Product (denoted below by $\mathsf{IP}_k$). Due to Theorem \ref{simulation_theorem} this yields the following simulation theorem for $\mathsf{IP}_k$: for every constant $\gamma > 0$ and for all $k$ large enough $$D^{cc}(f\circ \mathsf{IP}_k) = \Omega(D^{dt}(f) \cdot k),$$ where $f$ is any Boolean function depending on at most $2^{(1/2 - \gamma) k}$ variables.
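To make the definition of hitting distributions concrete, the following brute-force verifier (a toy sketch, feasible only for very small gadgets; all helper names are ours) computes the worst-case hitting probability of a given distribution over rectangles. Run on $\mathsf{IND}_2$ with the uniform distribution over its two maximal 1-monochromatic rectangles, it already returns $0$ for $h=1$, illustrating why constant-size gadgets cannot offer useful hitting parameters:

```python
from itertools import combinations, product

def worst_hit_prob(dist, A, B, h):
    # Minimum over all X (|X| >= |A| * 2^-h) and Y (|Y| >= |B| * 2^-h) of
    # Pr_{U x V ~ dist}[U x V intersects X x Y].  dist is a list of
    # triples (prob, U, V); |A| and |B| are assumed to be powers of two.
    amin, bmin = max(1, len(A) >> h), max(1, len(B) >> h)
    worst = 1.0
    for xs in range(amin, len(A) + 1):
        for X in combinations(A, xs):
            for ys in range(bmin, len(B) + 1):
                for Y in combinations(B, ys):
                    hit = sum(p for p, U, V in dist
                              if set(U) & set(X) and set(V) & set(Y))
                    worst = min(worst, hit)
    return worst

# IND_2: Alice's side is the index set {0, 1}, Bob's side is {0,1}^2;
# the maximal 1-monochromatic rectangles are {x} x {y : y_x = 1}.
A = [0, 1]
B = list(product([0, 1], repeat=2))
dist = [(0.5, [x], [y for y in B if y[x] == 1]) for x in A]
assert worst_hit_prob(dist, A, B, 1) == 0   # some X x Y is never hit
```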
Other gadgets studied prior to this work do not achieve the same trade-off between the size of outer functions and the size of the gadget. Namely, for $k$-bit Gap Hamming Distance the lower bound $D^{cc}(f\circ \mathsf{GHD}) = \Omega(D^{dt}(f) \cdot k)$ is shown in \cite{chattopadhyay2017simulation} only for $f$ depending on roughly $2^{0.45 k}$ variables or less. For the Indexing gadget, as we saw, this trade-off is exponentially worse. It is also interesting to study how fast one can obtain a full description of hitting distributions for these gadgets. This might be useful in the following situation: assume that we are given a family of subrectangles (not necessarily monochromatic) of a gadget's matrix and we want to find a single monochromatic rectangle which intersects most of them (actually, this is how hitting distributions are used in Theorem \ref{simulation_theorem}). The existence of such a monochromatic rectangle is guaranteed by the definition of a hitting distribution. However, to find such a rectangle efficiently we must at least be able to list all the rectangles from the support of our hitting distribution. If this can be done in time polynomial in the size of the gadget's matrix, we call the corresponding family of hitting distributions \emph{polynomial-time listable.} In particular, this applies to $k$-bit Gap Hamming Distance: the hitting distributions from \cite{chattopadhyay2017simulation} for this gadget are polynomial-time listable (roughly speaking, we just have to list all Hamming balls of a certain radius). At the same time, the hitting distributions for $k$-bit Inner Product from \cite{chattopadhyay2017simulation} are not polynomial-time listable. Namely, their supports are of size $2^{\Omega(k^2)}$ (this number corresponds to the number of $k/2$-dimensional subspaces of $\mathbb{F}_2^k$).
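The $2^{\Omega(k^2)}$ support size can be made exact: the number of $d$-dimensional subspaces of $\mathbb{F}_q^k$ is the Gaussian binomial coefficient, computed by the following small self-contained sketch:

```python
def num_subspaces(k, d, q=2):
    # Gaussian binomial coefficient [k choose d]_q, i.e. the number of
    # d-dimensional linear subspaces of F_q^k; the division is exact.
    num = den = 1
    for i in range(d):
        num *= q ** (k - i) - 1
        den *= q ** (d - i) - 1
    return num // den

assert num_subspaces(4, 2) == 35      # 2-dimensional subspaces of F_2^4
```

For $d = k/2$ over $\mathbb{F}_2$, $\log_2$ of this count grows like $k^2/4$, matching the $2^{\Omega(k^2)}$ bound quoted above.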
Though, due to the Chernoff bound, it is possible to transform any $(0.1, h)$-hitting distribution into, say, a $(0.2, h)$-hitting distribution with support size $2^{O(k)}$ (see Proposition \ref{size_proposition} below), this does not give an explicit construction. \subsection{Our results} We show how to transform any explicit expander satisfying one additional restriction into a gadget with polynomial-time listable hitting distributions. The transformation is as follows. Assume that we have a graph $G = (V, E)$ and a coloring $c:V \to\{0, 1\}$. For $v\in V$ let $\Gamma(v)$ denote the set of all $u\in V$ such that $u$ and $v$ are connected by an edge in $G$. Assume further that for any two distinct $u, v\in V$ it holds that $|\Gamma(u)\cap \Gamma(v)| \le 1$. Then the following partial function is well defined: \begin{align*} &g(G, c): V\times V\to\{0, 1\},\\ &g(G, c)(u, v) = \begin{cases}1 & \mbox{$u\neq v$ and there is $w\in\Gamma(u)\cap \Gamma(v)$ s.t. $c(w) = 1$}, \\ 0 & \mbox{$u\neq v$ and there is $w\in\Gamma(u)\cap \Gamma(v)$ s.t. $c(w) = 0$}, \\ \mbox{undefined} & \mbox{otherwise.}\end{cases} \end{align*} Call $c$ \emph{balanced} if each color is used at least $|V|/3$ times in $c$. It turns out that if $G$ is a good expander and $c$ is balanced, then $g(G, c)$ possesses good hitting distributions: \begin{theorem} \label{from_expanders_to_hitting} Assume that $G = (V, E)$ is a $(m, d, \gamma)$-spectral expander in which for any two distinct $u, v\in V$ it holds that $|\Gamma(u)\cap \Gamma(v)| \le 1$, and that $c:V \to\{0, 1\}$ is a balanced coloring of $G$. Assume also that $m\ge 1/\gamma^2$. Then for any $b\in\{0, 1\}$ there is a $\left(\frac{1}{10}, \lfloor2\log_2(1/\gamma)\rfloor - 100\right)$-hitting distribution $\mu_b$ over $b$-monochromatic rectangles of $g(G, c)$. All the probabilities of $\mu_b$ are rational.
Moreover, there is a deterministic Turing machine which, having $G$ and $c$ on input, in time $m^{O(1)}$ lists all the rectangles from the support of $\mu_b$, together with the probabilities $\mu_b$ assigns to them. \end{theorem} Provided that $G$'s adjacency matrix and $c$'s truth table can be computed in time $m^{O(1)}$, from Theorem \ref{from_expanders_to_hitting} we obtain a polynomial-time listable family of hitting distributions. In particular, we apply Theorem \ref{from_expanders_to_hitting} to the following explicit family of expanders. If $q$ is a prime power, let $AP_q$ denote the graph in which vertices are pairs of elements of $\mathbb{F}_q$ and in which $(a, b), (x, y)\in \mathbb{F}_q^2$ are connected by an edge if and only if $ax = b + y$. It is known that $AP_q$ is a $(q^2, q, 1/\sqrt{q})$-spectral expander. It can be easily shown that for any two distinct vertices $u, v$ of $AP_q$ it holds that $|\Gamma(u)\cap \Gamma(v)|\le 1$. \begin{corollary} \label{main_corollary} Let $q$ be a prime power. Then in $AP_q$ for any two distinct vertices $u, v$ it holds that $|\Gamma(u)\cap \Gamma(v)| \le 1$. Moreover, for all $n \le 2^{\log_2 q - 200}$ and $f:\{0, 1\}^n\to\{0, 1\}$ the following holds: if $c$ is a balanced coloring of $AP_q$, then $$D^{cc}(f\circ g(AP_q, c)) \ge \frac{\log_2(q/n) - 200}{4} \cdot D^{dt}(f)$$ (in $g(AP_q, c)$ each party receives $2\log_2 q$ bits). \end{corollary} We also give an example of a natural-looking gadget for which Corollary \ref{main_corollary} implies a simulation theorem. Our gadget is the following one: Alice gets $a\in\mathbb{F}_{q^2}$ and Bob gets $b\in\mathbb{F}_{q^2}$. Here $q$ is a power of an odd prime. Their goal is to output 1 if $a - b$ is a square in $\mathbb{F}_{q^2}$ (by which we mean that there is $c\in\mathbb{F}_{q^2}$ such that $a - b = c^2$), and 0 otherwise. Let us denote this gadget by $\SQR^q$.
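Before turning to $\SQR^q$ in detail, we note that the graph $AP_q$ underlying Corollary \ref{main_corollary} is easy to realize in code when $q$ is prime (a sketch; genuine $\mathbb{F}_q$ arithmetic would be needed for prime powers):

```python
from itertools import combinations, product

def ap_graph(q):
    # AP_q for prime q: vertices are pairs over Z_q, and (a, b) ~ (x, y)
    # iff a*x = b + y (mod q); note the relation is symmetric under
    # swapping the two vertices.  Returns the neighborhood map Gamma.
    V = list(product(range(q), repeat=2))
    gamma = {v: set() for v in V}
    for (a, b) in V:
        for x in range(q):
            gamma[(a, b)].add((x, (a * x - b) % q))
    return gamma

gamma = ap_graph(5)
assert all(len(nb) == 5 for nb in gamma.values())          # q-regular
assert all(len(gamma[u] & gamma[v]) <= 1                   # key property
           for u, v in combinations(gamma, 2))
```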
Since $\mathbb{F}_{q^2}$ is a linear space over $\mathbb{F}_q$, we can naturally identify inputs to $\SQR^q$ with $\mathbb{F}_q^2$, i.e., $\SQR^q$ can be viewed as a function of the form $\SQR^q: \mathbb{F}_q^2\times \mathbb{F}_q^2 \to\{0, 1\}$. \begin{proposition} \label{square_proposition} For all large enough $q$ the following holds. If $q$ is a power of an odd prime, then there exists a balanced coloring $c$ of $AP_q$ such that $g(AP_q, c)$ is a sub-function of $\SQR^q$, i.e., whenever $g(AP_q, c)(a,b)$ is defined, we have $g(AP_q, c)(a, b) = \SQR^q(a, b)$. A truth table of $c$ can be computed in time $q^{O(1)}$. \end{proposition} This proposition implies a simulation theorem for $\SQR^q$, with the same parameters as in Corollary \ref{main_corollary} and with polynomial-time listable underlying hitting distributions. Next we observe that any spectral expander ``similar'' to $AP_q$ automatically satisfies the restrictions of Theorem \ref{from_expanders_to_hitting}. \begin{proposition} \label{affine_like_proposition} Assume that $G = (V, E)$ is a $(m, d, \gamma)$-spectral expander and $$2d + 4 > d^2\left(2\gamma^2 + \frac{4(1 - \gamma^2)}{m}\right).$$ Then for any two distinct vertices $u, v\in V$ it holds that $|\Gamma(u)\cap \Gamma(v)| \le 1$. \end{proposition} In particular, all $(m^2, m, 1/\sqrt{m})$-spectral expanders satisfy these restrictions. However, Proposition \ref{affine_like_proposition} is by no means a necessary condition. For example, Theorem \ref{from_expanders_to_hitting} can also be applied to the Lubotzky--Phillips--Sarnak construction of Ramanujan graphs (\cite{lubotzky1988ramanujan}). More specifically, if $p, q$ are unequal primes, $p, q \equiv 1 \pmod{4}$ and $p$ is a quadratic residue modulo $q$, the paper \cite{lubotzky1988ramanujan} constructs an explicit graph $X^{p, q}$ which, in particular, is a $(q(q^2 - 1)/2, p + 1, 2\sqrt{p}/(p + 1))$-spectral expander and in which the shortest cycle is of length at least $2\log_{p} q$.
It can also be easily shown that, provided $p < q^2$, there are no self-loops in $X^{p, q}$. Thus if $p < \sqrt{q}$, then any two distinct vertices of $X^{p, q}$ have at most one common neighbor, while the inequality from Proposition \ref{affine_like_proposition} is false for $X^{p, q}$. We then obtain some results related to the following question: what is the best possible trade-off between the arity of outer functions and the size of the gadget in deterministic simulation theorems? Once again, consider $\SQR^q$. Note that in $\SQR^q$ each party receives $k = 2\log_2 q$ bits. Corollary \ref{main_corollary} lower bounds $D^{cc}(f\circ \SQR^q)$ whenever the arity of $f$ is at most $2^{k/2 - O(1)}$. If the arity of $f$ is at most $2^{\left(1/2 - \Omega(1)\right) k}$, the lower bound becomes $\Omega(k \cdot D^{dt}(f))$. Thus $\SQR^q$ achieves the same trade-off between the arity of $f$ and the size of the gadget as $k$-bit Inner Product (while the underlying hitting distributions for $\SQR^q$, unlike those for Inner Product, are polynomial-time listable). Ramanujan graphs yield gadgets with a much worse trade-off. Namely, if $p$ is of order $\sqrt{q}$ and $c$ is a balanced coloring of $X^{p, q}$, then $g(X^{p, q}, c)$ is a gadget on $k \approx 3\log_2 q$ bits which admits a simulation theorem for all outer functions of arity roughly $2^{\log_2 p} = 2^{k/6}$. This raises the following question: for a given $k$, what is the maximal $h$ such that there is a gadget on $k$ bits having two $(\frac{1}{10}, h)$-hitting distributions, the one over $0$-monochromatic rectangles and the other over $1$-monochromatic rectangles? The above discussion shows that $h$ can be about $k/2$. In the following proposition we observe that it is impossible to do better. \begin{proposition} \label{hitting_distributions_lower_bound} For every $g:\{0, 1\}^k \times\{0, 1\}^k\to\{0, 1\}$ and for every integer $h\ge 1$ there exists $b\in\{0, 1\}$ such that the following holds.
For every probability distribution $\mu$ over $b$-monochromatic rectangles of $g$ there are $X, Y\subset\{0, 1\}^k$ of size at least $2^{k - h}$ such that $$\Pr_{R\sim \mu}[R \cap X\times Y \neq \varnothing] \le 2^{k - 2h + 1}.$$ \end{proposition} In addition we show the following simple proposition, studying the minimal possible support size of hitting distributions. \begin{proposition} \label{size_proposition} For every $g:\{0, 1\}^k \times\{0, 1\}^k\to\{0, 1\}$ the following holds: \begin{itemize} \item if there is a $\left(\frac{1}{20}, h\right)$-hitting distribution over $b$-monochromatic rectangles of $g$ for some $b\in\{0, 1\}$, then there is a $\left(\frac{1}{10}, h\right)$-hitting distribution over $b$-monochromatic rectangles of $g$ whose support is of size $2^{O(k)}$; \item assume that for some $\delta <1$ and $h\in\mathbb{N}$ there are two $(\delta, h)$-hitting distributions $\mu_0, \mu_1$, where $\mu_b$ is over $b$-monochromatic rectangles of $g$. Then the support of $\mu_b$ is of size at least $2^h$, for every $b\in\{0, 1\}$. \end{itemize} \end{proposition} So it is impossible to improve the trade-off between the size of outer functions and the size of gadgets simply by improving hitting distributions. However, until now we only spoke about improving gadgets. What about outer functions? What causes the restriction on the arity of $f$ in Theorem \ref{simulation_theorem}? It can be verified that the only place in which the arity of $f$ appears in the proof is the so-called Thickness Lemma. Let us state this lemma. Assume that $A$ is a finite set and $X$ is a subset of $A^n$. Here $n$ corresponds to the arity of $f$. Let $X_{[n]\setminus\{i\}}$ denote the projection of $X$ onto all the coordinates except the $i$-th one. Define the following auxiliary bipartite graph $G_i(X)$. Left-side vertices of $G_i(X)$ are taken from $A$, right-side vertices of $G_i(X)$ are taken from $X_{[n]\setminus\{i\}}$.
We connect $a\in A$ with $(x_1, \ldots, x_{i - 1}, x_{i + 1}, \ldots x_n) \in X_{[n]\setminus\{i\}}$ if and only if $$(x_1, \ldots, x_{i - 1}, a, x_{i + 1}, \ldots x_n)\in X.$$ Clearly, there are $|X|$ edges in $G_i(X)$. Let $MinDeg_i(X)$ denote the minimal possible degree of a right-side vertex of $G_i(X)$. Similarly, let $AvgDeg_i(X)$ denote the average degree of a right-side vertex of $G_i(X)$. There are $|X|$ edges and $|X_{[n]\setminus\{i\}}|$ right-side vertices, hence it is natural to define $AvgDeg_i(X)$ as $$AvgDeg_i(X) = \frac{|X|}{|X_{[n]\setminus\{i\}}|}.$$ The Thickness Lemma relates these two measures. Namely, it states that if for every $i$ the \emph{average} degree of $G_i(X)$ is big, then there is a large subset $X^\prime\subset X$ such that for every $i$ the \emph{minimal} degree of $G_i(X^\prime)$ is big. The precise bounds can be found in the following \begin{lemma}[\cite{raz1997separation}] \label{thickness_lemma} Consider any $\delta \in (0, 1)$. Assume that for every $i\in\{1, 2, \ldots, n\}$ we have that $AvgDeg_i(X) \ge d$. Then there is $X^\prime\subset X$ of size at least $(1 - \delta)|X|$ such that for every $i\in[n]$ it holds that $MinDeg_i(X^\prime) \ge \frac{\delta d}{n}$. \end{lemma} One possible way to improve the trade-off between the arity of $f$ and the size of the gadget is to improve the Thickness Lemma. For example, if we could replace $\frac{\delta d}{n}$ with $\frac{\delta d}{\sqrt{n}}$ in Lemma \ref{thickness_lemma}, this would mean that $k$-bit Inner Product and the $k$-bit $\SQR$-gadget admit simulation theorems for all outer functions of arity roughly $2^{k}$ (rather than $2^{k/2}$). However, such an improvement is impossible and the bounds given in Lemma \ref{thickness_lemma} are near-optimal. Note that the Thickness Lemma says nothing about whether there even exists a \emph{non-empty} subset $X^\prime \subset X$ such that for all $i\in[n]$ it holds that $MinDeg_i(X^\prime)$ is larger, say, by a constant than $\frac{d}{n}$.
And indeed, we show that for some $X$ there is no such $X^\prime$ at all. More precisely, we show the following \begin{theorem} \label{thickness_lemma_lower_bound} For every $\varepsilon > 0$ and for all $n\ge 2, s\ge 1$ there exist $m$ and a non-empty set $X\subset\{0, 1, \ldots, m - 1\}^n$ such that \begin{itemize} \item for all $i\in[n]$ it holds that $AvgDeg_i(X) \ge s(n - \varepsilon)$; \item there is no non-empty $Y\subset X$ such that for all $i\in[n]$ it holds that $MinDeg_i(Y) \ge s + 1$. \end{itemize} \end{theorem} Finally, we study hitting distributions for the Disjointness gadget. More specifically, let $\DISJ^m$ be the communication problem in which Alice receives $a\subset\{1, 2, \ldots, m\}$, Bob receives $b\subset\{1, 2, \ldots, m\}$ and the goal is to output 1 if $a\cap b = \varnothing$, and $0$ otherwise. Let $\DISJ^m_k$ be the restriction of $\DISJ^m$ to $k$-element subsets of $\{1, 2, \ldots, m\}$. We show the following propositions: \begin{proposition} \label{disj_0} For all large enough $m$ the following holds. Assume that $k < 0.99 m$. Then $\DISJ^m_k$ has a $\left(\frac{1}{10}, \Omega(k)\right)$-hitting distribution over $0$-monochromatic rectangles. \end{proposition} \begin{proposition} \label{disj_1} Assume that $k < m^{1/3}$. Then for all $m$ large enough $\DISJ^m_k$ has a $\left(\frac{1}{10},\, \Omega(\log m) \right)$-hitting distribution over 1-monochromatic rectangles. \end{proposition} In particular, these two propositions imply the following simulation theorem for $\DISJ^m_{\log_2 m}$: \begin{corollary} \label{disjointness_corollary} There exists a constant $c$ such that for all $n\le m^c$ and for all $f:\{0, 1\}^n \to \{0, 1\}$ it holds that $$D^{cc}(f \circ \DISJ^m_{\log_2 m}) = \Omega(D^{dt}(f) \log m).$$ \end{corollary} On the other hand, it is known that $D^{cc}(\DISJ^m_{\log_2 m}) = \Omega(\log^2 m)$. This leaves open the possibility that the $\Omega(\log m)$ factor in the last corollary can be improved.
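The two degree measures appearing in the Thickness Lemma discussion can be computed directly; the following sketch (our own helper, not from the paper) also exhibits a tiny $X$ whose average degree strictly exceeds its minimum degree:

```python
from collections import Counter

def avg_min_deg(X, i):
    # Project X (a set of n-tuples) onto all coordinates but the i-th and
    # count, for each projection, how many i-th coordinate values extend
    # it back into X: AvgDeg_i = |X| / |projection|, MinDeg_i = min count.
    deg = Counter(x[:i] + x[i + 1:] for x in X)
    return len(X) / len(deg), min(deg.values())

X = {(0, 0), (0, 1), (0, 2), (1, 0)}
assert avg_min_deg(X, 1) == (2.0, 1)   # average 2, minimum only 1
```

The Thickness Lemma buys a minimum-degree guarantee from an average-degree one at the cost of the $1/n$ factor, and Theorem \ref{thickness_lemma_lower_bound} shows that this loss is essentially unavoidable.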
\subsection{Organization of the paper} The rest of the paper is organized as follows. In Section 2 we give preliminaries. In Section 3 we prove Theorem \ref{from_expanders_to_hitting} and derive Corollary \ref{main_corollary}. In Section 4 we prove Proposition \ref{square_proposition}. In Section 5 we prove Theorem \ref{thickness_lemma_lower_bound}. In Section 6 we prove Proposition \ref{affine_like_proposition}. In Section 7 we prove Proposition \ref{hitting_distributions_lower_bound}. In Section 8 we prove Proposition \ref{size_proposition}. In Section 9 we prove Propositions \ref{disj_0} and \ref{disj_1}. \section{Preliminaries} \hspace{\parindent}\textbf{Set notation.} Let $[n]$ be the set $\{1, 2, \ldots, n\}$. Let $2^{[n]}$ denote the set of all subsets of $[n]$. Define $\binom{[n]}{k} = \{ s\in 2^{[n]} : |s| = k\}$. Assume that $A$ is a finite set, $X$ is a subset of $A^n$ and $S = \{i_1, \ldots, i_k\}$, where $i_1 < i_2 < \ldots < i_k$, is a subset of $[n]$. Let $X_S$ denote the following set: $$X_S = \{(x_{i_1}, \ldots, x_{i_k}) : (x_1, \ldots, x_n) \in X\} \subset A^{|S|}.$$ Given $X\subset A^n$ and $i\in [n]$, consider the following bipartite graph $G_i(X) = (A, X_{[n]\setminus \{i\}}, E)$, where $$ E = \left\{(x_i, (x_1, \ldots, x_{i - 1}, x_{i + 1}, \ldots, x_n)) : (x_1, \ldots, x_n) \in X\right\}.$$ Vertices of $G_i(X)$ which are from $A$ will be called left vertices. Similarly, vertices of $G_i(X)$ which are from $X_{[n]\setminus \{i\}}$ will be called right vertices. Define $MinDeg_i(X)$ as the minimal $d$ such that there is a right vertex of $G_i(X)$ with degree $d$. Define $AvgDeg_i(X) = |X|/|X_{[n]\setminus\{i\}}|$. \medskip \textbf{Communication and query complexity.} For an introduction to both query and communication complexity see, e.g., \cite{kushilevitz2006communication}. We will use the following notation. For a Boolean function $f:\{0,1\}^n\to\{0, 1\}$ let $D^{dt}(f)$ denote $f$'s deterministic query complexity, i.
e., the minimal $d$ such that there is a deterministic decision tree of depth $d$ computing $f$. For a (possibly partial) Boolean function $g:A\times B \to \{0, 1\}$, where $A, B$ are some finite sets, let $D^{cc}(g)$ denote $g$'s deterministic communication complexity, i.e., the minimal $d$ such that there is a deterministic communication protocol of depth $d$ computing $g$. Let us stress that in the case when $g$ is partial, by ``a deterministic communication protocol computes $g$'' we mean only that the protocol outputs 0 on $(a, b)$ whenever $g(a, b) = 0$ and outputs 1 on $(a, b)$ whenever $g(a, b) = 1$; on inputs on which $g$ is not defined the protocol may output anything. If $f, g$ are as above, let $f\circ g$ denote the following (possibly partial) function: \begin{align*} f\circ g:& A^n \times B^n \to \{0, 1\},\\ (f\circ g)&((a_1, \ldots, a_n), (b_1, \ldots, b_n)) = f(g(a_1, b_1), \ldots, g(a_n, b_n)). \end{align*} We can then measure $D^{cc}(f\circ g)$, the deterministic communication complexity of $f\circ g$, assuming that Alice's input is $(a_1, \ldots, a_n) \in A^n$ and Bob's input is $(b_1, \ldots, b_n)\in B^n$. \medskip \textbf{Hitting distributions}. Fix a (possibly partial) Boolean function $g:A\times B \to \{0, 1\}$. A set $R\subset A\times B$ is called a \emph{rectangle} if there are $U\subset A, V\subset B$ such that $R = U\times V$. If $b\in\{0, 1\}$, then we say that a rectangle $R$ is $b$-monochromatic for $g$ if $g(x, y) = b$ whenever $(x, y)\in R$. We stress that if $g$ is partial, then in the definition of a $b$-monochromatic rectangle we require that $g$ is everywhere defined on $R$. Let $\delta$ be a positive real and $h$ a positive integer.
A probability distribution $\mu$ over rectangles $R\subset A\times B$ is called $(\delta, h)$-hitting if for all $X\subset A, Y\subset B$ such that $|X| \ge 2^{-h}|A|, |Y| \ge 2^{-h}|B|$ it holds that $$\Pr_{R\sim\mu}[R\cap X\times Y \neq \varnothing] \ge 1 - \delta.$$ In this paper we focus only on those $\mu$ such that there exists $b\in\{0, 1\}$ for which all rectangles from the support of $\mu$ are $b$-monochromatic for $g$. In this case we simply say that $\mu$ is over $b$-monochromatic rectangles of $g$. Let $g_t:\{0, 1\}^{k_t}\times\{0, 1\}^{k_t} \to\{0, 1\}$ be a family of gadgets and $\mu_t$ be a family of probability distributions, where $\mu_t$ is over rectangles of $g_t$. We call the family $\mu_t$ polynomial-time listable if the following holds: \begin{itemize} \item the size of the support of $\mu_t$ is $2^{O(k_t)}$; \item all the probabilities of $\mu_t$ are rational; \item there is a deterministic Turing machine which, having $k_t$ on input, in time $2^{O(k_t)}$ computes $g_t$'s matrix and lists all the rectangles from the support of $\mu_t$, together with the probabilities $\mu_t$ assigns to them. \end{itemize} \medskip \textbf{Functions of interest.} Consider a finite field of size $q$, denoted below by $\mathbb{F}_q$. We call $a\in \mathbb{F}_q$ a \emph{square} if there is $b\in\mathbb{F}_q$ such that $a = b^2$ in $\mathbb{F}_q$.
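For a prime field this definition is easy to explore exhaustively (a sketch for intuition only; the $\SQR$ gadget itself lives in $\mathbb{F}_{q^2}$, which requires genuine field-extension arithmetic):

```python
def squares_mod(p):
    # Squares in F_p for an odd prime p: 0 together with the (p - 1)/2
    # quadratic residues, i.e. (p + 1)/2 elements in total.
    return {pow(b, 2, p) for b in range(p)}

assert squares_mod(7) == {0, 1, 2, 4}
assert all(len(squares_mod(p)) == (p + 1) // 2 for p in (3, 5, 7, 11, 13))
```

Roughly half of the field elements are squares, which is what makes a coloring by squareness (nearly) balanced.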
Let $\SQR^q$ denote the following Boolean function: $$\SQR^q : \mathbb{F}_{q^2} \times \mathbb{F}_{q^2} \to\{0, 1\},\qquad\SQR^q(a, b) = \begin{cases}1 & \mbox{if $a - b$ is a square in $\mathbb{F}_{q^2}$,}\\0 & \mbox{if $a - b$ is not a square in $\mathbb{F}_{q^2}$.} \end{cases} $$ Let $\DISJ^m_k$ denote the following Boolean function: $$\DISJ^m_k : \binom{[m]}{k}\times \binom{[m]}{k}\to \{0, 1\}, \qquad \DISJ^m_k(a, b) = \begin{cases}1 & \mbox{if $a\cap b = \varnothing$,} \\ 0 & \mbox{if $a\cap b \neq \varnothing$.} \end{cases}$$ \medskip \textbf{Expanders.} We consider undirected graphs which may have parallel edges and self-loops. We assume that a self-loop at vertex $v$ contributes 1 to the degree of $v$. A graph is called $d$-regular if each of its vertices has degree $d$. A coloring of a graph $G = (V, E)$ is a function $c: V\to\{0, 1\}$. It is called balanced if $|V|/3 \le |c^{-1}(1)| \le 2|V|/3$. For any $A\subset V$ let $\Gamma(A)$ denote the set of all $v\in V$ such that there is $u\in A$ connected with $v$ by an edge of $G$. If $v\in V$, define $\Gamma(v) = \Gamma(\{v\})$. Fix a graph $G = (V, E)$ and a coloring $c:V\to\{0, 1\}$. Assume that for any two distinct $u, v\in V$ it holds that $|\Gamma(u)\cap \Gamma(v)| \le 1$. Then the following partial function is well defined: \begin{align*} &g(G, c): V\times V\to\{0, 1\},\\ &g(G, c)(u, v) = \begin{cases}1 & \mbox{$u\neq v$ and there is $w\in\Gamma(u)\cap \Gamma(v)$ s.t. $c(w) = 1$}, \\ 0 & \mbox{$u\neq v$ and there is $w\in\Gamma(u)\cap \Gamma(v)$ s.t. $c(w) = 0$}, \\ \mbox{undefined} & \mbox{otherwise.}\end{cases} \end{align*} Let $M_G$ be the adjacency matrix of a $d$-regular graph $G = (V, E)$ with $|V| = m$. Note that $d$ is an eigenvalue of $M_G$. A graph $G$ is called a $(m, d, \gamma)$-spectral expander if $M_G$ satisfies the following conditions: \begin{itemize} \item the multiplicity of the eigenvalue $d$ is 1; \item the absolute value of any other eigenvalue of $M_G$ is at most $\gamma d$.
\end{itemize} \begin{proposition}[\cite{vadhan2012pseudorandomness}, Theorem 4.6] \label{spectral_expansion} Assume that a graph $G = (V, E)$ is an $(m, d, \gamma)$-spectral expander. Then for any $A\subset V$: $$\frac{|\Gamma(A)|}{|A|} \ge \frac{1}{\gamma^2 + (1 - \gamma^2) \frac{|A|}{m}}.$$ \end{proposition} Assume that $q$ is a prime power. Let $AP_q$ denote the following graph. Vertices of $AP_q$ are pairs of elements of $\mathbb{F}_q$, so the number of vertices is $q^2$. We connect $(x, y)$ with $(a, b)$ by an edge if and only if $ax = b + y$ in $\mathbb{F}_q$. It is easy to see that $AP_q$ is $q$-regular. \begin{proposition}[\cite{reingold2002entropy}, Lemma 5.1] \label{affine_plane_gap} $AP_q$ is a $(q^2, q, 1/\sqrt{q})$-spectral expander. \end{proposition} \medskip \textbf{$k$-wise independent hash functions.} We will need the following statement. \begin{proposition}[\cite{vadhan2012pseudorandomness}, Corollary 3.34] \label{k_wise_hashing} For every $n, k\in\mathbb{N}$ there exists a polynomial-time computable function $\psi:\{0, 1\}^{kn}\times \{0, 1\}^n \to \{0, 1\}$ such that for all distinct $x_1, \ldots, x_k\in\{0, 1\}^n$ and for all $b_1, \ldots, b_k\in\{0, 1\}$ the following holds: $$\Pr[\psi(s, x_1) = b_1, \ldots, \psi(s, x_k) = b_k] = 2^{-k},$$ where the probability is over a uniformly random $s\in\{0, 1\}^{kn}$. \end{proposition} \medskip \textbf{Some useful facts.} We will use the following inequalities involving binomial coefficients: \begin{lemma} \label{binomial_fraction} For every $k, m$ the following holds: if $k \le m/2$, then $\binom{m - k}{k}/ \binom{m}{k} \ge 1 - \frac{k^2}{m - k}$. \end{lemma} \begin{lemma} \label{second_binomial_lemma} If $k \le 0.99 m$, then $\log_2\left(\binom{m}{k}/\binom{0.99 m}{k}\right) \ge 0.01k$. \end{lemma} Note that $\mathbb{F}_{q^2}$ contains a subfield of size $q$. Namely, $\mathbb{F}_q = \{x \in\mathbb{F}_{q^2} : x^q = x\}$. \begin{lemma} \label{square_lemma} Assume that $q$ is a power of an odd prime.
Let $\alpha$ be a primitive root of $\mathbb{F}_{q^2}$. Then the following holds: \begin{itemize} \item $0, \alpha^2, \alpha^4, \ldots, \alpha^{q^2-1}$ are the only squares in $\mathbb{F}_{q^2}$; \item all the elements of $\mathbb{F}_q$ are squares in $\mathbb{F}_{q^2}$. \end{itemize} \end{lemma} Proofs of Lemmas \ref{binomial_fraction}, \ref{second_binomial_lemma} and \ref{square_lemma} can be found in the Appendix. \section{Transforming Expanders into Gadgets} In this section we prove Theorem \ref{from_expanders_to_hitting} and derive Corollary \ref{main_corollary}. \begin{proof}[Proof of Theorem \ref{from_expanders_to_hitting}] Fix $b\in\{0, 1\}$ and set $h = \lfloor 2\log_2(1/\gamma)\rfloor - 100$. Let us define a $(\frac{1}{10}, h)$-hitting distribution $\mu_b$ over $b$-monochromatic rectangles of $g(G, c)$. Take $v\in c^{-1}(b)$ uniformly at random. Split $\Gamma(v)$ into two disjoint subsets $A, B$ randomly, according to the 10-wise independent hash function $\psi:\{0, 1\}^{10 \cdot\lceil \log_2 m \rceil} \times \{0, 1\}^{\lceil\log_2 m\rceil} \to\{0, 1\}$ from Proposition \ref{k_wise_hashing}. Namely, take $s\in\{0, 1\}^{10\lceil\log_2 m\rceil}$ uniformly at random. An element $u\in \Gamma(v)$ goes into $A$ if $\psi(s, u) = 0$ and into $B$ if $\psi(s, u) = 1$. By definition $A\times B$ is a $b$-monochromatic rectangle of $g(G, c)$. Indeed, any two distinct vertices from $\Gamma(v)$ have a common neighbor colored in $b$, namely $v$ itself. It remains to show that for all $S, T\subset V$ of size at least $2^{-h} m$, with probability at least $0.9$ we have that $A\times B \cap S\times T \neq \varnothing$. It is enough to show that $\Pr[A\cap S \neq \varnothing] \ge 0.96$ and $\Pr[B\cap T \neq \varnothing] \ge 0.96$. Let us show that the first inequality holds; the proof of the second one is exactly the same. Actually we will show that $\Pr[|\Gamma(v) \cap S| \ge 10] \ge 0.97$.
This is enough for our purposes: conditioned on the event $|\Gamma(v) \cap S| \ge 10$, the probability that $A$ is disjoint with $S$ is at most $2^{-10}$ (due to Proposition \ref{k_wise_hashing} this is the probability that $\psi(s, \cdot)$ sends 10 fixed points of $\Gamma(v)$ into $B$). Therefore $\Pr[A\cap S \neq \varnothing] \ge (1 - 2^{-10}) \Pr[|\Gamma(v) \cap S| \ge 10] \ge 0.999 \cdot 0.97 > 0.96$. The size of $S$ is at least $2^{100}\gamma^2 m$. Partition $S$ into 10 disjoint subsets $S_1, \ldots, S_{10}$, each of size at least $2000 \lfloor\gamma^2 m\rfloor$. Since $m \ge 1/\gamma^2$, we also have $|S_1|, \ldots, |S_{10}| \ge 1000 \gamma^2 m$. If $|\Gamma(v) \cap S| < 10$, then $\Gamma(v)$ is disjoint with $S_i$ for some $i\in[10]$. Hence $$ \Pr[|\Gamma(v) \cap S| < 10] \le \sum\limits_{i = 1}^{10} \Pr[\Gamma(v) \cap S_i = \varnothing]. $$ If we show for all $i\in [10]$ that $\Pr[\Gamma(v) \cap S_i = \varnothing] \le 0.003$, we are done. Observe that $\Gamma(v)$ is disjoint with $S_i$ if and only if $v\notin \Gamma(S_i)$. This implies that \begin{equation} \label{gamma} \Pr[\Gamma(v) \cap S_i = \varnothing] = \frac{ |c^{-1}(b) \setminus \Gamma(S_i)|}{|c^{-1}(b)|} \le \frac{m - |\Gamma(S_i)|}{\frac{m}{3}}. \end{equation} In the last inequality we use the fact that $c$ is balanced. By Proposition \ref{spectral_expansion} we get \begin{align*} |\Gamma(S_i)| &\ge \frac{|S_i|}{\gamma^2 + \frac{|S_i|}{m}} \ge \frac{|S_i|}{\frac{|S_i|}{1000 \cdot m} + \frac{|S_i|}{m}} = \frac{1000 \cdot m}{1001} > 0.999m. \end{align*} Here in the second inequality we use the fact that $|S_i| \ge 1000 \gamma^2 m$, i.e., $\gamma^2 \le \frac{|S_i|}{1000 m}$. Due to \eqref{gamma} this means that $\Pr[\Gamma(v) \cap S_i = \varnothing] \le 0.003$ and thus the proof that $\mu_b$ is $\left(\frac{1}{10}, h\right)$-hitting is finished. Let us now show that $\mu_b$ can be ``written down'' in time $m^{O(1)}$ from $G$ and $c$. First of all, note that $g(G, c)$ is a gadget on $k = \lceil \log_2 m \rceil $ bits.
To specify a rectangle from the support of $\mu_b$ we need to specify a vertex of $G$ and a ``seed'' $s$ of length $10k$. This shows that the support of $\mu_b$ is of size $m^{O(1)} = 2^{O(k)}$. This observation also allows us to list all the rectangles from the support of $\mu_b$ in time $2^{O(k)}$ --- just go through all vertices from $c^{-1}(b)$ and all seeds. Further, the $\mu_b$-probability of $A\times B$ can be computed as follows: \begin{align*} \mu_b(A\times B) =& \frac{|\{v \in c^{-1}(b) : \Gamma(v) = A\cup B\}|}{|c^{-1}(b)|} \\ &\cdot \frac{\left|\{s\in \{0, 1\}^{10k} : \mbox{$\psi(s, \cdot)$ splits $A\cup B$ into $A$ and $B$} \}\right|}{2^{10 k}}. \end{align*} This probability is rational and can be computed in time $2^{O(k)}$, again by exhaustive search over all vertices and seeds. \end{proof} Now let us derive Corollary \ref{main_corollary}. Indeed, $AP_q$ is a $(q^2, q, 1/\sqrt{q})$-spectral expander by Proposition \ref{affine_plane_gap}. Thus Theorem \ref{from_expanders_to_hitting}, applied to $AP_q$, states that for any balanced coloring $c$ of $AP_q$ and for any $b\in\{0, 1\}$ there exists a $\left(\frac{1}{10}, \lfloor \log_2(q)\rfloor - 100\right)$-hitting distribution over $b$-monochromatic rectangles of $g(AP_q, c)$. Apply Theorem \ref{simulation_theorem} to these hitting distributions with $\varepsilon = 1 - \log_2(n)/(\lfloor \log_2(q)\rfloor - 100)$. We only need to check that in $AP_q$ for any two distinct vertices $u, v$ it holds that $|\Gamma(u)\cap \Gamma(v)| \le 1$. Assume that $(x, y)$ and $(u, v)$ are distinct vertices of $AP_q$. Take any $(a, b) \in \Gamma((x, y)) \cap \Gamma((u, v))$. Then \begin{equation} \label{system} \begin{pmatrix}x & -1 \\ u & -1\end{pmatrix} \cdot \begin{pmatrix} a \\ b\end{pmatrix} = \begin{pmatrix} y \\ v\end{pmatrix}. \end{equation} If $x\neq u$, then $\mathsf{det} \begin{pmatrix}x & -1 \\ u & -1\end{pmatrix} \neq 0$ and hence system \eqref{system} has exactly one solution.
If $x = u$, then $y\neq v$ and system \eqref{system} has no solution. Therefore $|\Gamma((x, y)) \cap \Gamma((u, v))|\le 1$. \section{$\SQR^q$ Gadget} In this section we prove Proposition \ref{square_proposition}. Fix $w\in\mathbb{F}_{q^2}$ such that $\{1, w\}$ is a basis of $\mathbb{F}_{q^2}$ over $\mathbb{F}_q$. Consider the following coloring of $AP_q$: set $c((a, b)) = 1$ if and only if $1 + wa$ is a square in $\mathbb{F}_{q^2}$; clearly, the truth table of such a $c$ can be computed in time $q^{O(1)}$. Note that $g(AP_q, c)((x, y), (u, v))$ is defined if and only if $(x, y), (u, v)$ are distinct and there is $(a, b) \in \Gamma((x, y)) \cap \Gamma((u, v))$. Let us show that for any such $(x, y), (u, v)$ it holds that \begin{equation} \label{subfunction} g(AP_q, c)((x, y), (u, v)) = c((a, b)) = \SQR^q(x + yw, u + v w). \end{equation} Indeed, we have that $ax = b + y$ and $au = b + v$. This means that $y - v = a(x - u)$. Moreover, due to the distinctness of $(x, y), (u, v)$ we have that $x\neq u$ (if $x = u$, then also $y = v$). Further, $$x + yw - (u + vw) = (x - u) + w(y - v) = (x - u) (1 + wa).$$ Note that $x - u$ is a non-zero element of $\mathbb{F}_q$. By the second item of Lemma \ref{square_lemma} this implies that $x + yw - (u + vw)$ is a square if and only if $1 + wa$ is a square. Hence \eqref{subfunction} is true for all $(x, y), (u, v)$ from the domain of $g(AP_q, c)$. It remains to show that $c$ is balanced. Take $(a, b, \lambda) \in \mathbb{F}_q \times \mathbb{F}_q \times (\mathbb{F}_q\setminus\{0\})$ uniformly at random. Note that $c((a, b)) = 1$ if and only if $1 + wa$ is a square. Thus $|c^{-1}(1)| = q^2\Pr[1 + wa \mbox{ is a square}]$. Due to the second item of Lemma \ref{square_lemma} we have that $1 + wa$ is a square if and only if $\lambda(1 + wa) $ is a square.
Note that $\lambda(1 + wa) = \lambda + \lambda a w$ is distributed uniformly in $\{i + wj : i, j\in\mathbb{F}_q, i \neq 0\}$ (this is because for any $\lambda_0 \neq 0$ the distribution of $\lambda a$ given $\lambda = \lambda_0$ is uniform in $\mathbb{F}_q$). Due to the first item of Lemma \ref{square_lemma}, for all large enough $q$ there are at least $0.4 q^2$ squares and at least $0.4q^2$ non-squares in $\{i + wj : i, j\in\mathbb{F}_q, i \neq 0\}$. This means that $1/3 \le \Pr[\lambda(1 + wa) \mbox{ is a square}] \le 2/3$ for all large enough $q$. Hence $q^2/3 \le |c^{-1}(1)| \le 2q^2/3$ and $c$ is balanced. \section{Unimprovability of the Thickness Lemma} Consider any set $X\subset \{0, 1, \ldots, m - 1\}^n$ and take any $i\in \{1, 2, \ldots, n\}$. Let us say that $x\in X$ is \emph{$i$-unique in $X$} if there is no other $x^\prime \in X$ such that $$x_1 = x^\prime_1, \ldots, x_{i - 1} = x_{i - 1}^\prime, x_{i + 1} = x_{i + 1}^\prime, \ldots, x_n = x^\prime_n.$$ Call a set $X\subset\{0, 1, \ldots, m - 1\}^n$ \emph{reducible} if for all non-empty $Y\subset X$ there is $i\in\{1, 2, \ldots, n\}$ such that $MinDeg_i(Y) = 1$. Note that $X$ is reducible if and only if for all non-empty $Y\subset X$ there is $y\in Y$ which is $i$-unique in $Y$ for some $i\in\{1, 2, \ldots, n\}$. \begin{lemma} \label{main_thickness_lemma} For every $\varepsilon > 0$ and for every $n\ge 2$ there exist $m> 0$ and a reducible set $X\subset\{0, 1, \ldots, m - 1\}^n$ such that for all $i\in\{1, 2, \ldots, n\}$ it holds that $AvgDeg_i(X) \ge n - \varepsilon$. \end{lemma} \begin{proof} Take any $m > 0$. Consider the following sequence of sets $X_2, X_3, \ldots$, where $X_n$ is a subset of $\{0, 1, \ldots, m - 1\}^n$: $$X_2 = \{(j, j) : j\in\{0, 1, \ldots, m - 1\}\} \cup \{(j, j + 1) : j \in \{0, 1,\ldots, m - 2\}\},$$ \begin{align*} X_{n + 1} &= \left\{(x, j) : x\in X_n,\, j \in\{0, 1, \ldots, m - 1\} \right\} \\ &\cup \left\{(y, 0) : y\in \{0, 1, \ldots, m - 1\}^n/ X_n\right\}.
\end{align*} We have the following relation between $|X_{n + 1}|$ and $|X_n|$: $$|X_{n + 1}| = m\cdot |X_n| + m^n - |X_n| = (m - 1) \cdot |X_n| + m^n.$$ Let us show by induction on $n$ that $|X_n| \ge n(m - 1)^{n - 1}$. Indeed, for $n = 2$ this inequality is true: $|X_2| = 2m - 1 > 2(m - 1)$. Now, assume that $|X_n| \ge n(m - 1)^{n - 1}$ is already proved. Then \begin{align*} |X_{n + 1}| &= (m - 1)\cdot |X_n| + m^n \\ &\ge (m - 1) \cdot n(m - 1)^{n - 1} + (m - 1)^n \\ &= (n + 1) (m - 1)^n. \end{align*} This means that for every $i\in[n]$ it holds that $$AvgDeg_i(X_n) = \frac{|X_n|}{|(X_n)_{[n]/\{i\}}|}\ge \frac{n (m - 1)^{n - 1}}{m^{n - 1}} = n \left(1 - \frac{1}{m}\right)^{n - 1},$$ and the latter tends to $n$ as $m\to \infty$. Thus, taking $m$ large enough, to prove the lemma it is sufficient to show that $X_n$ is reducible. Once again, we will show this by induction on $n$. Consider $n = 2$ and take any non-empty $Y\subset X_2$. Let $y\in Y$ be the smallest element of $Y$ in lexicographical order. If $y = (j, j)$, then $y$ is $1$-unique in $Y$ and hence $MinDeg_1(Y) = 1$. If $y = (j, j + 1)$, then $y$ is $2$-unique in $Y$ and hence $MinDeg_2(Y) = 1$. Further, assume that $X_n$ is reducible. Consider any non-empty $Y\subset X_{n + 1}$. Assume that $Y$ intersects $\left\{(y, 0) : y\in \{0, 1, \ldots, m - 1\}^n/ X_n\right\}$, i.e., for some $y\notin X_n$ it holds that $(y, 0)\in Y$. Then $MinDeg_{n + 1}(Y) = 1$. Indeed, in this case $(y, 0)$ is $(n + 1)$-unique in $Y$, because if $(y, j)\in Y \subset X_{n + 1}$ for some $j > 0$, then $y\in X_n$, a contradiction. Now assume that $Y$ is a subset of $\left\{(x, j) : x\in X_n,\, j \in\{0, 1, \ldots, m - 1\} \right\}$. Then for some $j\in\{0, 1, \ldots, m - 1\}$ the set $Y^\prime = \{x\in X_n : (x, j) \in Y\}$ is non-empty. Since by the induction hypothesis $X_n$ is reducible, there is $y \in Y^\prime$ which is $i$-unique in $Y^\prime$ for some $i\in [n]$.
Let us show that $(y, j)$ is $i$-unique in $Y$ (this would mean that $MinDeg_i(Y) = 1$). Indeed, assume that there is $(y^\prime, j^\prime) \in Y$ which coincides with $(y, j)$ on all the coordinates except the $i^{th}$ one. Then $j = j^\prime$ and $y^\prime \in Y^\prime$. Due to the $i$-uniqueness of $y$ in $Y^\prime$ we also have that $y = y^\prime$. \end{proof} \begin{definition} Let $s, m, n$ be positive integers and assume that $X$ is a subset of $\{0, 1, \ldots, m - 1\}^n$. Let $In(X, s)\subset \{0, 1, \ldots, sm - 1\}^n$ denote the following set: \begin{align*} In(X, s) = \{ (sx_1 + r_1, s x_2 + r_2, &\ldots, s x_n + r_n) :\\ &(x_1, \ldots, x_n)\in X,\, r_1, \ldots, r_n\in\{0, 1, \ldots, s - 1\} \}. \end{align*} \end{definition} Observe that for every $(y_1, \ldots, y_n) \in In(X, s)$ there is exactly one $(x_1, \ldots, x_n) \in X$ such that for some $r_1, \ldots, r_n\in\{0, 1, \ldots, s - 1\}$ it holds that $$y_1 = sx_1 + r_1, \ldots, y_n = sx_n + r_n.$$ \begin{lemma} \label{size_lemma} For every $i\in\{1, 2, \ldots, n\}$ it holds that $AvgDeg_i(In(X, s)) = s \cdot AvgDeg_i(X)$. \end{lemma} \begin{proof} The lemma follows from the following two equalities: $$|In(X, s)| = s^n \cdot |X|, \qquad |In(X, s)_{[n]/\{i\}}| = s^{n - 1} \cdot |X_{[n]/\{i\}}|.$$ \end{proof} \begin{lemma} \label{inflation_lemma} Assume that $X\subset \{0, 1, \ldots, m - 1\}^n$ is reducible. Then for all non-empty $Y\subset In(X, s)$ there is $i\in[n]$ such that $MinDeg_i(Y) \le s$. \end{lemma} \begin{proof} Let us prove this lemma by induction on $|X|$. \emph{Induction base}. Assume that $|X| = 1$ and $X = \{x\}$. Consider any $i\in[n]$. Each right vertex in $G_i(In(X, s)) $ is connected with exactly $s$ left vertices. Namely, these vertices are $sx_i, sx_i + 1, \ldots, sx_i + s - 1\in \{0, 1, \ldots, sm - 1\}$. This implies that for all non-empty $Y\subset In(X, s)$ and \emph{for all} $i\in [n]$ it holds that $MinDeg_i(Y) \le s$.
\emph{Induction step.} Assume that the lemma is proved for all reducible $X$ of size at most $t$. Take any reducible $X\subset \{0, 1, \ldots, m - 1\}^n$ of size $t + 1$. Since $X$ is reducible, there is $i\in[n]$ such that $MinDeg_i(X) = 1$. This means that there is $x = (x_1, \ldots, x_n)\in X$ which is $i$-unique in $X$. Assume for contradiction that there exists a non-empty $Y\subset In(X, s)$ such that for all $j\in[n]$ it holds that $MinDeg_j(Y) \ge s + 1$. There are two cases: \begin{itemize} \item \emph{The first case. There are $r_1, \ldots, r_n\in\{0, 1, \ldots, s - 1\}$ such that $\hat{x} = (sx_1 + r_1, \ldots, s x_n + r_n) \in Y$}. Let us show that $(\hat{x}_1, \ldots, \hat{x}_{i - 1}, \hat{x}_{i + 1}, \ldots, \hat{x}_n)$ is a right vertex of $G_i(Y)$ which is connected with at most $s$ left vertices (and thus $MinDeg_i(Y) \le s$, a contradiction). Namely, we will show that if $v\in \{0, 1, \ldots, sm - 1\}$ is connected with $(\hat{x}_1, \ldots, \hat{x}_{i - 1}, \hat{x}_{i + 1}, \ldots, \hat{x}_n)$, then $v = sx_i + r$ for some $r\in\{0, 1, \ldots, s - 1\}$. Indeed, if $(\hat{x}_1, \ldots, \hat{x}_{i - 1}, v, \hat{x}_{i + 1}, \ldots, \hat{x}_n)\in Y \subset In(X, s)$, then for some $x_i^\prime \in\{0, 1, \ldots, m - 1\}$ and $r\in\{0, 1, \ldots, s - 1\}$ it holds that $v = sx_i^\prime + r$ and $(x_1, \ldots, x_{i - 1}, x_i^\prime, x_{i + 1},\ldots, x_n)\in X$. Due to the $i$-uniqueness of $x$, the latter means that $x_i = x_i^\prime$. \item \emph{The second case. There are no $r_1, \ldots, r_n\in\{0, 1, \ldots, s - 1\}$ such that $(sx_1 + r_1, \ldots, s x_n + r_n) \in Y$}. Clearly, $X/\{x\}$ is also reducible. But in this case $Y\subset In(X/\{x\}, s)$, and the latter contradicts the induction hypothesis for $X/\{x\}$.
\end{itemize} \end{proof} \begin{proof}[Proof of Theorem \ref{thickness_lemma_lower_bound}] Due to Lemma \ref{main_thickness_lemma} there is a reducible $X^\prime\subset\{0, 1, \ldots, m - 1\}^n$ such that for every $i\in[n]$ we have $AvgDeg_i(X^\prime) \ge n - \varepsilon$. Set $X = In(X^\prime, s)$. By Lemma \ref{size_lemma}, applied to $X^\prime$, for every $i\in[n]$ we have $AvgDeg_i(X) \ge s(n - \varepsilon)$. Finally, due to Lemma \ref{inflation_lemma}, for all non-empty $Y\subset X$ there is $i\in[n]$ such that $MinDeg_i(Y) \le s$. \end{proof} \section{Expanders Similar to $AP_q$} In this section we prove Proposition \ref{affine_like_proposition}. Let us stress that this proposition is just a slight improvement of Proposition \ref{spectral_expansion} for sets of size 2. Proposition \ref{spectral_expansion} itself is not strong enough to conclude that in all $(m^2, m, 1/\sqrt{m})$-spectral expanders any two distinct vertices have at most 1 common neighbor. For $S\subset V$ let $\mathbb{I}_S\in\mathbb{R}^{|V|}$ denote the characteristic vector of a set $S$. Assume for contradiction that there are distinct $u, v\in V$ such that $|\Gamma(u)\cap \Gamma(v)| \ge 2$. Then the size of $\Gamma(\{u, v\})$ is at most $2d - 2$. Assume that $M$ is the adjacency matrix of $G$. Denote $w = \{u, v\}$. Let us show that \begin{equation} \label{affine_upper_bound} \| M \mathbb{I}_w\|^2 \le d^2\left(2\gamma^2 + \frac{4(1 - \gamma^2)}{m}\right). \end{equation} Indeed, observe that $\mathbb{I}_w = \frac{2}{m} \mathbb{I}_V + (\mathbb{I}_w - \frac{2}{m} \mathbb{I}_V)$ and $(\mathbb{I}_w - \frac{2}{m} \mathbb{I}_V)$ is perpendicular to $\mathbb{I}_V$.
Since $G$ is an $(m, d, \gamma)$-spectral expander and $M$ preserves orthogonality to its top eigenvector $\mathbb{I}_V$, this implies that \begin{align*} \| M \mathbb{I}_w\|^2 &= \|M\left(\frac{2}{m}\mathbb{I}_V\right) \|^2 + \|M\left(\mathbb{I}_w - \frac{2}{m} \mathbb{I}_V\right)\|^2\\ &\le \frac{4d^2}{m} + \gamma^2 d^2 \left\|\mathbb{I}_w - \frac{2}{m} \mathbb{I}_V\right\|^2 \\ &= \frac{4d^2}{m} + \gamma^2 d^2 \left(2\left(1 - \frac{2}{m}\right)^2 + (m - 2) \frac{4}{m^2}\right)\\ &= \frac{4d^2}{m} + \gamma^2 d^2 \left(2 - \frac{4}{m}\right) = d^2\left(2\gamma^2 + \frac{4(1 - \gamma^2)}{m}\right), \end{align*} and thus \eqref{affine_upper_bound} is proved. To obtain a contradiction it is enough to show the following inequality: \begin{equation} \label{affine_lower_bound} \| M\mathbb{I}_w\|^2 \ge 2d + 4. \end{equation} Indeed, for an $(m^2, m, 1/\sqrt{m})$-spectral expander (that is, with degree $d$, $\gamma^2 = 1/d$ and $d^2$ vertices) the right-hand side of \eqref{affine_upper_bound} equals $2d + 4 - \frac{4}{d} < 2d + 4$. Assume that there are $t \le 2d - 2$ non-zero coordinates in $M\mathbb{I}_w$. Let $\xi_1, \ldots, \xi_t$ be the values of these coordinates. Their sum is $2d$. We need to show that $\xi_1^2 + \ldots + \xi_t^2 \ge 2d + 4$. Observe that $\xi_1 - 1, \ldots, \xi_t - 1$ are non-negative integers and their sum is $2d - t \ge 2$. Clearly this implies that $(\xi_1 - 1)^2 + \ldots + (\xi_t - 1)^2 \ge 2$. Indeed, otherwise the sum of $\xi_1 - 1, \ldots, \xi_t - 1$ would be either 0 or 1. Hence \begin{align*} \xi_1^2 + \ldots + \xi_t^2 = (\xi_1 - 1)^2 + \ldots + (\xi_t - 1)^2 + 4d - t \ge 2 + 4d - t \ge 2d + 4. \end{align*} \section{Proof of Proposition \ref{hitting_distributions_lower_bound}} Denote $s = 2^{k - h}$. Assume first that there is a $0$-monochromatic rectangle $A\times B$ of $g$ such that $|A| \ge s$ and $|B| \ge s$. Then clearly the proposition is true for $b = 1$ and $X = A, Y = B$. Now assume that if $A\times B$ is a 0-monochromatic rectangle of $g$, then either $|A| < s$ or $|B| < s$. Take $\mathcal{X}, \mathcal{Y}$ independently and uniformly at random from the set of all $s$-element subsets of $\{0, 1\}^k$. Fix any 0-monochromatic rectangle $A\times B$ of $g$.
Let us show that $\mathcal{X}\times \mathcal{Y}$ intersects $A\times B$ with probability at most $2^{k - 2h + 1}$. Indeed, assume WLOG that $|A| < s$. Then \begin{align*} \Pr[\mathcal{X}\times \mathcal{Y} \cap A\times B \neq \varnothing] &\le \Pr[\mathcal{X}\cap A \neq \varnothing] = 1 - \frac{\binom{2^k - |A|}{s}}{\binom{2^k}{s}}\le 1 - \frac{\binom{2^k - s}{s}}{\binom{2^k}{s}}. \end{align*} Since $h\ge 1$, we have that $s\le 2^{k}/2$. Applying Lemma \ref{binomial_fraction} we obtain: $$\Pr[\mathcal{X}\times \mathcal{Y} \cap A\times B \neq \varnothing] \le \frac{s^2}{2^k - s} \le \frac{s^2}{2^{k}/2} = 2^{k - 2h + 1}.$$ Due to the standard averaging argument this means that for any probability distribution $\mu$ over $0$-monochromatic rectangles of $g$ it is possible to fix $\mathcal{X} = X, \mathcal{Y} = Y$ in such a way that $$\Pr_{R\sim \mu}[R \cap X\times Y \neq \varnothing] \le 2^{k - 2h + 1}.$$ \section{Proof of Proposition \ref{size_proposition}} \emph{Proof of the first item.} Let $\mu$ be a $\left(\frac{1}{20}, h\right)$-hitting distribution over $b$-monochromatic rectangles of $g$. Consider $c2^k$ independent random variables $$\mathcal{R}_1, \ldots, \mathcal{R}_{c2^k},$$ where each $\mathcal{R}_i$ is distributed according to $\mu$. For any fixed $X, Y\subset\{0, 1\}^k$ of size at least $2^{k - h}$, the probability that $\mathcal{R}_i$ intersects $X\times Y$ is at least $1 - 1/20$. Due to the standard Chernoff bound, if $c> 0$ is a large enough constant, then the probability that at least $c2^k/10$ rectangles among $\mathcal{R}_1, \ldots, \mathcal{R}_{c2^k}$ are disjoint with $X\times Y$ is smaller than $2^{-2 \cdot 2^{k}}$. Since there are at most $2^{2\cdot 2^k}$ pairs $(X, Y)$, by a union bound it is possible to fix $\mathcal{R}_1 = R_1, \ldots, \mathcal{R}_{c2^k} = R_{c2^k}$ in such a way that \emph{for all} $X, Y\subset\{0, 1\}^k$ of size at least $2^{k - h}$ there are at most $c2^k/10$ rectangles among $R_1, \ldots, R_{c2^k}$ which are disjoint with $X\times Y$.
Therefore the uniform distribution on the (multi)set $\{R_1, \ldots, R_{c2^k}\}$ is a $\left(\frac{1}{10}, h\right)$-hitting distribution over $b$-monochromatic rectangles of $g$. Its support is of size at most $c2^k = 2^{O(k)}$. \emph{Proof of the second item.} Take any $b\in\{0, 1\}$. Let the support of $\mu_b$ be $\{U_1\times V_1, \ldots, U_s\times V_s\}$. Note that $\mu_{1 - b}$ never ``hits'' $U_i\times V_i$. Indeed, $\mu_{1 - b}$ is over $(1 - b)$-monochromatic rectangles and $U_i\times V_i$ is $b$-monochromatic. Since $\mu_{1 - b}$ is $(\delta, h)$-hitting, this means that for every $i\in\{1, 2, \ldots, s\}$ either $U_i$ or $V_i$ is of size less than $2^{k - h}$. Therefore $X\times Y$ is disjoint with $U_i\times V_i$ for all $i\in\{1, 2, \ldots, s\}$, where $$X = \{0, 1\}^k \setminus \left( \bigcup\limits_{i : |U_i| < 2^{k - h}} U_i\right), \qquad Y = \{0, 1\}^k \setminus \left( \bigcup\limits_{i : |V_i| < 2^{k - h}} V_i\right).$$ Since $\mu_b$ is $(\delta, h)$-hitting and it never hits $X\times Y$, either $X$ or $Y$ is of size less than $2^{k - h}$. On the other hand, $$|X| \ge 2^{k} - s2^{k - h}, \qquad |Y| \ge 2^{k} - s2^{k - h}.$$ Hence $2^{k - h} > 2^{k} - s2^{k - h}$, which means that $s \ge 2^{h}$. \section{Hitting Distributions for $\DISJ^m_k$} \begin{proof}[Proof of Proposition \ref{disj_0}] Take $I\in[m]$ uniformly at random and define $$U_I = \left\{b\in \binom{[m]}{k} : I \in b\right\}.$$ Note that $U_I \times U_I$ is a 0-monochromatic rectangle for $\DISJ^m_k$. Assume that $X \subset \binom{[m]}{k}$ is such that $|X|\ge \binom{m}{k} \cdot 2^{- \left\lfloor 0.01 k \right\rfloor}$. By Lemma \ref{second_binomial_lemma} this means that $|X| \ge \binom{0.99 m}{k}$. Hence the union of all subsets from $X$ has size at least $0.99 m$. This means that the probability that $U_I$ is disjoint with $X$ is at most $0.01$. \end{proof} For the proof of Proposition \ref{disj_1} we need the notion of statistical distance. Let $\mu$ and $\nu$ be two probability distributions on a set $A$.
Define the statistical distance between $\mu, \nu$ as follows: $$\delta(\mu, \nu) = \max\limits_{B\subset A} |\mu\{B\} - \nu\{B\}|.$$ We will need the following feature of statistical distance: let $\mu$ be a probability distribution on $A$, let $B$ be a subset of $A$ and let $\mu | B$ denote the restriction of $\mu$ to $B$. In other words, if a random variable $X$ has distribution $\mu$, then $\mu|B$ is the distribution of $X$ conditioned on $X\in B$. One can easily see that $\delta( \mu,\, \mu | B) = 1- \mu\{B\}$. \begin{proof}[Proof of Proposition \ref{disj_1}] Let $h, t$ be as follows: $$h = \left\lceil (\log_2 m)/8 \right \rceil, \qquad t = \left\lceil m^{1/7}\right\rceil.$$ We will construct a $\left(\frac{1}{10}, h \right)$-hitting distribution over 1-monochromatic rectangles of $\DISJ^m_k$. Assume that $X\subset \binom{[m]}{k}$ is such that $|X| \ge \binom{m}{k} \cdot 2^{-h}$. Consider the following iterative random process. Take $J_1\in [m]$ uniformly at random, then take $J_2\in[m]/\{J_1\}$ uniformly at random, and so on. Set $$A = \{J_1, J_2, \ldots, J_{m/2}\}.$$ Note that $A$ is distributed uniformly in $\binom{[m]}{m/2}$. Define $$U_A = \left\{b\in \binom{[m]}{k} : b\subset A\right\},\qquad V_A = \left\{b\in \binom{[m]}{k} : b\subset [m]/A\right\}.$$ Clearly, $U_A \times V_A$ is a 1-monochromatic rectangle for $\DISJ^m_k$. Our goal is to show that $U_A$ intersects $X$ with probability at least $0.99$ (the same will be true for $V_A$, as $V_A$ is distributed exactly as $U_A$). For every $i\in[t]$ define $$S_i = \{J_{k(i - 1) + 1}, \ldots, J_{k (i - 1) + k}\}.$$ Note that $S_1, \ldots, S_t$ are disjoint and $S_1, \ldots, S_t \subset A$. We will show that with probability at least $0.99$ there is $i\in[t]$ such that $S_i\in X$. This will be done in two steps. First of all, consider $t$ auxiliary random variables $R_1, \ldots, R_t\in\binom{[m]}{k}$. They are mutually independent and every $R_j$ is uniformly distributed in $\binom{[m]}{k}$.
We shall show two things: \begin{itemize} \item the distribution of $(R_1, \ldots, R_t)$ is close in statistical distance to the distribution of $(S_1, \ldots, S_t)$; \item with high probability $\{R_1, \ldots, R_t\}$ contains an element from $X$. \end{itemize} The probability that $\{S_1, \ldots, S_t\}$ is disjoint with $X$ is at most the probability that $\{R_1, \ldots, R_t\}$ is disjoint with $X$ plus $\delta\left( (R_1, \ldots, R_t), (S_1, \ldots, S_t)\right)$. \begin{lemma} \label{statistical_distance_lemma} $\delta\left( (R_1, \ldots, R_t), (S_1, \ldots, S_t)\right) \le\frac{k^2 \cdot t^2}{m - k}$. \end{lemma} \begin{proof} Let $E$ denote the event that $R_1, \ldots, R_t$ are pairwise disjoint. Note that the distribution of $(S_1, \ldots, S_t)$ is equal to the conditional distribution $(R_1, \ldots, R_t)|E$ (this is due to the fact that the distribution of $(S_1, \ldots, S_t)$ is uniform on its support). Thus $\delta\left( (R_1, \ldots, R_t), (S_1, \ldots, S_t)\right) = \Pr[\lnot E]$. The probability that $R_1$ and $R_2$ are not disjoint is equal to $1 - \binom{m - k}{k} /\binom{m}{k}$, and the latter by Lemma \ref{binomial_fraction} is at most $\frac{k^2}{m - k}$. Hence from the union bound it follows that $\Pr[\lnot E] \le \frac{k^2 \cdot t^2}{m - k}$, as required. \end{proof} For every $i\in[t]$ we have that $R_i\in X$ with probability at least $|X|/\binom{m}{k} \ge 2^{-h}$. Hence \begin{align*} \Pr[X \cap \{S_1, \ldots, S_t\} = \varnothing] &\le \Pr[X\cap \{R_1, \ldots, R_t\} = \varnothing] \\ &\quad + \delta\left( (R_1, \ldots, R_t), (S_1, \ldots, S_t)\right)\\ &\le (1 - 2^{-h})^t + \frac{k^2 \cdot t^2}{m - k}\\ &\le \exp\{-2^{-h} \cdot t\} + \frac{k^2 \cdot t^2}{m - k}. \end{align*} With $h$ and $t$ as above, for all large enough $m$ the last expression is at most $0.01$. \end{proof} \vspace{0.4cm} \textbf{Acknowledgments.} I would like to thank Andrei Romashchenko and Nikolay Vereshchagin for help in writing this paper.
\section{Introduction} It is a known fact that one of the conceptual problems of quantum theory is the so-called ``measurement problem'': standard Quantum Mechanics (QM) crucially depends on the concept of measurement, even though this notion is not defined rigorously within the theory \cite{OS, NG, GF}. According to Okon and Sudarsky, the solution to the measurement problem may well lie in Quantum Gravity (QG), which is still lacking. Moreover, they suggest that it may be necessary to solve the measurement problem in order to build a quantum theory of gravity. Gisin \cite{NG} and Gisin and Frowis \cite{GF} argued that, without solving the measurement problem, quantum theory is not complete, as it does not tell us how one should - in principle - perform measurements. They consider that the time is ripe to pass from the study of quantum non-locality - a very fruitful subject of research - to the quantum measurement problem, another basic problem in the foundations of Quantum Mechanics. Connections between quantum-foundational issues and QG have also been pointed out by Penrose \cite{RP} (see also \cite{LD1, LD2, DB, GGB}), who studied the intrinsic spacetime instability that arises when macroscopic bodies are placed in a quantum superposition of different locations, an idea that led him to a link between the quantum collapse of the wave function and gravity. Diosi \cite{LD3} introduced a nonlocal, gravitational term in the time-dependent Schrodinger equation in order to find the quantum uncertainty in the position of a free pointlike macroscopic object, from the minimization of the energy. In addition, Pearle and Squires \cite{PS} suggested that the curvature scalar of the spacetime is responsible for the spontaneous quantum collapse. Motivated by the importance of the measurement process within QM and QG, we investigate in this paper the role played by the duration of a measurement on the spacetime structure of the physical system under consideration.
We know that Newtonian gravity has not been tested experimentally at distances below $0.1$ mm. We pass from short range distances to short range time intervals and suggest that the strength of the gravitational field may be modified when the measurement is performed in a time interval that is very short with respect to the gravitational radius of the object (in geometrical units a length is also a time). Throughout the paper we use geometrical units $G = c = \hbar = 1$, unless otherwise specified. \section{Painleve-Gullstrand geometry with time dependent mass} The Schwarzschild exact solution for the geometry outside a star or a BH is given by \begin{equation} ds^{2} = -(1- \frac{2m}{r}) dt_{S}^{2} + (1- \frac{2m}{r})^{-1} dr^{2} + r^{2} d \Omega^{2}. \label{2.1} \end{equation} In (2.1) $t_{S}$ is the Schwarzschild time and $d\Omega^{2}$ is the metric on the unit 2-sphere. To get rid of the coordinate singularity of the metric at the horizon $r = 2m$, Painleve and Gullstrand (P-G) used the following temporal transformation \cite{KW, TWZ, EP} \begin{equation} t = t_{S} + 2\sqrt{2mr}+ 2m \ln\frac{\sqrt{r} - \sqrt{2m}}{\sqrt{r} + \sqrt{2m}}. \label{2.2} \end{equation} Therefore, the line element appears as \begin{equation} ds^{2} = -(1- \frac{2m}{r}) dt^{2} + dr^{2} + 2\sqrt{\frac{2m}{r}} dt dr + r^{2} d \Omega^{2}, \label{2.3} \end{equation} where $t$ is the free-fall time, that is, the proper time experienced by an observer who free-falls from rest at infinity. We chose the ``$+$'' sign in front of the square root in order to deal only with the inward moving free particles (along a geodesic curve with $dr + \sqrt{2m/r}dt = 0$, the velocity $dr/dt = - \sqrt{2m/r} $ is negative). The geometry (2.3) is stationary, namely invariant under time translations (however, it is not invariant under time reversal because of the nondiagonal term). In addition, a constant time slice is simply flat space.
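As a consistency check (not part of the original derivation), one can verify numerically that the transformation (2.2) indeed brings the Schwarzschild line element (2.1) to the form (2.3): differentiating (2.2) gives $dt_{S} = dt - \sqrt{2m/r}\,(1 - 2m/r)^{-1} dr$, and the two quadratic forms in $(dt, dr)$ then coincide. A minimal sketch (the function names are ours):

```python
import math

def schwarzschild_ds2(m, r, dtS, dr):
    """Radial part of the Schwarzschild line element (2.1)."""
    f = 1.0 - 2.0 * m / r
    return -f * dtS**2 + dr**2 / f

def painleve_gullstrand_ds2(m, r, dt, dr):
    """Radial part of the Painleve-Gullstrand line element (2.3)."""
    f = 1.0 - 2.0 * m / r
    v = math.sqrt(2.0 * m / r)            # Newtonian escape velocity
    return -f * dt**2 + dr**2 + 2.0 * v * dt * dr

def dtS_from_pg(m, r, dt, dr):
    """Differential of (2.2): dt_S = dt - sqrt(2m/r)/(1 - 2m/r) dr."""
    f = 1.0 - 2.0 * m / r
    return dt - math.sqrt(2.0 * m / r) / f * dr

# Compare the two quadratic forms at an arbitrary event outside the horizon.
m, r = 1.0, 5.0
for (dt, dr) in [(1.0, 0.0), (0.0, 1.0), (0.3, -0.7), (1.0, 2.0)]:
    lhs = schwarzschild_ds2(m, r, dtS_from_pg(m, r, dt, dr), dr)
    rhs = painleve_gullstrand_ds2(m, r, dt, dr)
    assert abs(lhs - rhs) < 1e-12
```

The check also makes visible the statement in the text that the $dt$-coefficient is regular at $r = 2m$ in P-G form while the coordinate singularity resides entirely in $dt_S$.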
We also emphasize that (2.3) represents physical space freely falling radially into the BH at the Newtonian escape velocity $\sqrt{2m/r}$. The proper time of an observer at rest ($dr = d \theta = d \phi = 0$) is $d \tau = \sqrt{1 - (2m/r)}\,dt$. As we know, a time dependent source with spherical symmetry will no longer lead to a Ricci-flat geometry, i.e., to a vacuum solution of the Einstein equations. Therefore, Birkhoff's theorem does not apply in this case. A nonstatic Schwarzschild (S) spacetime with a time dependent mass, outside an object with spherical symmetry, was investigated in \cite{HC}. It was found there that the source of the geometry (an anisotropic fluid) has zero energy density and radial pressure, nonzero tangential pressures and radial energy flux. In this paper we intend to introduce a variable mass directly into the line-element (2.3), in order to explore whether simpler properties may be obtained. Therefore, we write the geometry (2.3) as \begin{equation} ds^{2} = -(1- \frac{2m e^{-\frac{k}{t}}}{r}) dt^{2} + dr^{2} + 2\sqrt{\frac{2m e^{-\frac{k}{t}}}{r}} dt dr + r^{2} d \Omega^{2}, \label{2.4} \end{equation} with $m(t) = me^{-\frac{k}{t}},~t>0$, where $k$ is a positive constant and $m$ is the constant mass of the particle. To find $k$, we make use of arguments from \cite{RP, LD1, LD2, LD3}: one looks for a link between the quantum collapse of the wave function and gravity, when macroscopic objects are placed in a quantum superposition at different locations. Diosi \cite{LD3} added a nonlocal gravitational term to the standard QM terms in the Schr\"{o}dinger equation \begin{equation} i\hbar \frac{\partial \Psi(\textbf{x},t)}{\partial t} = - \frac{\hbar^{2}}{2M} \Delta \Psi - GM^{2} \int \frac{|\Psi(\textbf{x'},t)|^{2}}{|\textbf{x} - \textbf{x'}|} d^{3}\textbf{x'}~ \Psi(\textbf{x},t), \label{2.5} \end{equation} for a macroscopic object of mass $M$ and radius $R$, with $\textbf{x}$ and $\textbf{x'}$ the locations of the two branches of the superposition.
Diosi showed that, when $\Delta \textbf{x}\equiv|\textbf{x} - \textbf{x'}| \ll R$, the Newtonian potential energy from (2.5) acquires the form \begin{equation} U(\textbf{x} - \textbf{x'}) \approx U(0) + \frac{1}{2} M\omega^{2} |\textbf{x} - \textbf{x'}|^{2}, \label{2.6} \end{equation} where $\omega^{2} = GM/R^{3} = 4\pi G\rho/3$ is the frequency of the Newtonian oscillator (which could be obtained from the geodesic deviation), $\rho$ is the constant density of the particle and $U(0) = GM^{2}/R$. The standard kinetic term in (2.5) tends to spread the wave function, competing with the Diosi-Penrose spontaneous collapse, which tends to shrink it. When the spreading rate $\hbar/M(\Delta \textbf{x})^{2}$ equals the collapse rate $1/\tau \equiv M\omega^{2} (\Delta \textbf{x})^{2}/\hbar$, an equilibrium is reached and one obtains $1/\tau = \omega$, where $\tau$ represents the decoherence time required to collapse the macroscopic superposition, or the quantum Zeno time \cite{LD4}. For our case of interest, we propose to take $\tau$ as the time that light needs to cross the Schwarzschild radius of the object. In this case we have $\tau = 2m$, which amounts to inserting $k = 2m$ in Eq. (2.4). Hence (2.4) becomes \begin{equation} ds^{2} = -(1- \frac{2m e^{-\frac{2m}{t}}}{r}) dt^{2} + dr^{2} + 2\sqrt{\frac{2m e^{-\frac{2m}{t}}}{r}} dt dr + r^{2} d \Omega^{2}, \label{2.7} \end{equation} with $m(t) = m e^{-\frac{2m}{t}}$. To avoid a signature switch of the metric coefficient $g_{tt}$, we impose the condition $f(r,t) \equiv 1- \frac{2m}{r} e^{-\frac{2m}{t}} >0$, namely $r > 2m e^{-\frac{2m}{t}}$, where $r_{AH} = 2m e^{-\frac{2m}{t}}$ is the location of the apparent horizon. That is necessary because otherwise the proper time and $t$ would not have the same sign for an observer located at some $r, \theta, \phi = const$.
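The properties of $f(r,t)$ and of the apparent horizon $r_{AH}(t) = 2m\,e^{-2m/t}$ (monotonic growth from $0$ at $t \rightarrow 0$ to $2m$ at infinity, with an inflection point at $t = m$) can be confirmed symbolically; the following is a sketch using the Python library \texttt{sympy}:

```python
import sympy as sp

t, m = sp.symbols('t m', positive=True)
r_AH = 2*m*sp.exp(-2*m/t)                   # apparent horizon radius

# limits: r_AH -> 0 as t -> 0+ and r_AH -> 2m as t -> infinity
assert sp.limit(r_AH, t, 0, '+') == 0
assert sp.limit(r_AH, t, sp.oo) == 2*m
# the derivative 4 m^2 e^{-2m/t}/t^2 is manifestly positive (monotonic growth)
assert sp.simplify(sp.diff(r_AH, t) - 4*m**2*sp.exp(-2*m/t)/t**2) == 0
# single inflection point at t = m
assert sp.solve(sp.diff(r_AH, t, 2), t) == [m]
```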
For constant $r$, $f(r,t)$ is a monotonically decreasing function of $t$: it tends to unity when $t \rightarrow 0$ and acquires the standard Schwarzschild value $(1 - 2m/r)$ at infinity (or when $t \gg 2m$). When $f(r,t)$ is considered as a function of $r$, it equals unity for $r \rightarrow \infty$. However, the limit $r \rightarrow 0$ has to be taken with $t \rightarrow 0$, in order to satisfy the condition $r > 2m(t)$. Consequently, $0 < f(r,t) <1$. We also notice that the apparent horizon is an increasing function of $t$, from $r_{AH} \rightarrow 0$ when $t \rightarrow 0$ to $r_{AH} \rightarrow 2m$ at infinity, with an inflection point at $t = m$. We take the timelike variable $t$ as the duration of the measurement, so that it results from (2.7) that gravity is weakened when a measurement is performed in a time interval of the order of $2m$ or less. This could be checked by measuring the trajectory of a high energy cosmic ray particle (a proton, for example) freely falling in the gravitational field of the Earth. If the duration of the measurement is of the order of $2m$ or less ($m$ being the Earth mass), the trajectory will be less curved. As we already remarked in \cite{HC}, we may now give a reasonable explanation of the fact that the zero point energy does not gravitate: the very fast quantum vacuum fluctuations reduce the strength of gravity so much that its influence is canceled. \section{Properties of the gravitational fluid} In order for the metric (2.7) (which is not Ricci flat) to be a solution of Einstein's equation $G_{ab} = 8\pi T_{ab}$, with $a,b = t, r, \theta, \phi$, we need a source stress tensor on its r.h.s. The source is an anisotropic fluid with the nonzero components \begin{equation} T^{r}_{~t} = \frac{m^{2}e^{-\frac{2m}{t}}}{2\pi r^{2}t^{2}},~~~ T^{r}_{~r} = \frac{m}{4\pi rt^{2}}\sqrt{\frac{2m}{r} e^{-\frac{2m}{t}}} = 4 T^{\theta}_{~\theta} =4 T^{\phi}_{~\phi}.
\label{3.1} \end{equation} Let us now take a congruence of observers with the velocity vector field \begin{equation} u^{a} = \left(1, - \sqrt{\frac{2m e^{-\frac{2m}{t}}}{r}}, 0, 0\right) ,~~~u^{a}u_{a} = - 1. \label{3.2} \end{equation} The above congruence of observers is geodesic, namely the acceleration $a^{b} = u^{a}\nabla_{a} u^{b}$ vanishes, and the inward radial velocity $ u^{r} = - \sqrt{\frac{2m}{r}e^{-\frac{2m}{t}}}$ is the Newtonian escape velocity. The spacetime (2.7) being nonstatic, the scalar expansion is nonzero, \begin{equation} \Theta \equiv \nabla_{a}u^{a} = -\frac{3}{2r} \sqrt{\frac{2m e^{-\frac{2m}{t}}}{r}} \label{3.3} \end{equation} which vanishes when $t \rightarrow 0$ and $r \rightarrow 0$ (the latter going to zero simultaneously with $t$). We also obtain a nonzero shear tensor, with nonzero components $\sigma^{r}_{~r} = -2\sigma^{\theta}_{~\theta} = -2\sigma^{\phi}_{~\phi} = -(2/3)\Theta$ and $\sigma^{r}_{~t} = 2m e^{-\frac{2m}{t}}/r^{2}$. Consider the general form of an anisotropic fluid with energy flux \begin{equation} T_{ab} = (p_{t} + \rho) u_{a} u_{b} + p_{t} g_{ab} + (p_{r} - p_{t}) n_{a}n_{b} + u_{a} q_{b} + u_{b} q_{a}, \label{3.4} \end{equation} where $\rho(r,t) = T_{ab}u^{a}u^{b}$ is the energy density of the fluid, $p_{r}(r,t)$ is the radial pressure, $p_{t}$ denotes the pressures in the transversal directions $\theta$ and $\phi$, $n^{a}$ is a spacelike vector orthogonal to $u^{a}$, with $n_{a}u^{a} = 0,~n_{a}n^{a} = 1$, and $q^{a}$ is the heat flux, with $q_{a}u^{a} = 0$, given by the expression $q^{a} = - T^{a}_{~b}u^{b} - \rho u^{a}$ obtained from (3.4). Using now (3.2) and (3.4), one finds that \begin{equation} u_{a} = (-1, 0, 0, 0),~~~ n^{a} = \left(0, 1, 0, 0 \right) ,~~~ n_{a} = \left(\sqrt{\frac{2m e^{-\frac{2m}{t}}}{r}}, 1, 0, 0\right). \label{3.5} \end{equation} In spite of the fact that $T^{r}_{~t} \neq 0$, we get from (3.4) a vanishing energy flux, $q^{a} = 0$.
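The expansion (3.3) can be checked from the divergence formula $\Theta = \frac{1}{\sqrt{-g}}\,\partial_{a}(\sqrt{-g}\,u^{a})$, using the fact that the determinant of the P-G form (2.7) is $\det g_{ab} = -r^{4}\sin^{2}\theta$, exactly as in the static case. A symbolic sketch using the Python library \texttt{sympy} (the $\sin\theta$ factor cancels and is omitted):

```python
import sympy as sp

t, r, m = sp.symbols('t r m', positive=True)
m_t = m*sp.exp(-2*m/t)                      # effective mass m(t)
v = sp.sqrt(2*m_t/r)
sqrtg = r**2                                # sqrt(-g) up to the sin(theta) factor
# u^a = (1, -v, 0, 0): only the t- and r-derivative terms contribute
Theta = (sp.diff(sqrtg*1, t) + sp.diff(sqrtg*(-v), r))/sqrtg
expected = -sp.Rational(3, 2)*v/r           # the right-hand side of (3.3)
assert sp.simplify(Theta - expected) == 0
```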
That is perhaps related to the geodesic character of the congruence (3.2). From (3.4) one further finds that $\rho = 0$, $p_{r} = T^{r}_{~r} = 4 p_{t}$. Having the expressions of the energy density and pressures, it is an easy task to see that the weak energy condition (WEC) ($\rho \geq 0,~\rho + p_{r} \geq 0,~\rho + p_{t} \geq 0$), the null energy condition (NEC) ($\rho + p_{r} \geq 0,~\rho + p_{t} \geq 0$) and the strong energy condition (SEC) ($\rho + p_{r} \geq 0,~\rho + p_{t} \geq 0,~\rho + p_{r} + 2p_{t} \geq 0$) are obeyed. However, the dominant energy condition (DEC) ($\rho > |p_{r}|,~ \rho > |p_{t}|$) is not satisfied, because $\rho$ vanishes. One observes that all components of $T^{a}_{~b}$ vanish when $t \rightarrow \infty$ (or when $t \gg 2m$), because the metric (2.7) becomes Ricci-flat. We recall that the limit $r \rightarrow 0$ is taken simultaneously with $t \rightarrow 0$, so that $T^{a}_{~b}$ tends to zero in this limit, too. This happens because of the exponential factor $e^{-\frac{2m}{t}}$, which is present in all expressions, including the scalar curvature $R^{a}_{~a} = -12\pi p_{r}$. Moreover, in the latter case ($t \rightarrow 0$), the geometry (2.7) becomes Minkowskian and the effective mass $m(t)$ goes to zero. Having the components of the stress tensor and the basic physical quantities associated to it, our next task is to compute the total energy flow measured by an observer sitting at $r = const.$ \cite{HC3} \begin{equation} E = \int{T^{a}_{~b}u^{b}n_{a}\sqrt{-\gamma}}dt~d\theta~d\phi, \label{3.6} \end{equation} where $\gamma$ is the determinant of the 3-metric at constant $r$, i.e. $\gamma = -(1 - 2m(t)/r)r^{4} \sin^{2}\theta$. With $T^{r}_{~t}$ from (3.1), Eq. (3.6) gives $E = 0$. The fact that $E = 0$ is not surprising if we remember that the P-G observers are in free fall (the acceleration vector $a^{b} = 0$) and the energy flux $q^{a}$ vanishes.
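The result $E = 0$ can be seen directly at the level of the integrand of (3.6): with $u^{b} = (1,-v,0,0)$, $n_{a} = (v,1,0,0)$ and $v = \sqrt{2m(t)/r}$, and since the only nonzero components of $T^{a}_{~b}$ are those listed in (3.1), the contraction $T^{a}_{~b}u^{b}n_{a}$ reduces to $T^{r}_{~t} - v\,T^{r}_{~r}$, which vanishes identically. A symbolic sketch using the Python library \texttt{sympy}:

```python
import sympy as sp

t, r, m = sp.symbols('t r m', positive=True)
v = sp.sqrt(2*m*sp.exp(-2*m/t)/r)           # Newtonian escape velocity
# nonzero mixed stress-tensor components from (3.1)
T_rt = m**2*sp.exp(-2*m/t)/(2*sp.pi*r**2*t**2)
T_rr = m/(4*sp.pi*r*t**2)*v
# with u^b = (1, -v, 0, 0) and n_a = (v, 1, 0, 0), the integrand of (3.6)
# is proportional to T^r_t - v T^r_r, which vanishes identically
assert sp.simplify(T_rt - v*T_rr) == 0
```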
\section{Conclusions} The role of the measurement process in gravitational physics is investigated in this paper. In the time dependent spacetime we have proposed, the time variable plays the role of the duration of a measurement performed upon some physical system. Time intervals that are very short compared with the gravitational radius lead to much weaker values of the gravitational field where our system is located. That may point us towards an explanation of the well-known fact that the vacuum energy does not gravitate: very fast quantum fluctuations cancel the influence of gravity. We also notice that some results of this paper are much simpler than the similar quantities obtained in \cite{HC}, and all the parameters derived are finite throughout.
\section{Introduction} Consider the following one-dimensional stochastic differential equation with jumps: \begin{equation} dX_t=\left( \sum_{l=1}^{p_\alpha} \alpha^{(l)} a^{(l)}(X_t) \right)^{1/2}dw_t+\sum_{k=1}^{p_\beta} \beta^{(k)} b^{(k)}(X_t)dt+c(X_{t-})dJ_t, \label{hm:sde} \end{equation} defined on a complete filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq0},P)$. The ingredients are as follows: \begin{itemize} \item The coefficients $\{a^{(l)}(x)\}_{l=1}^{p_\alpha}$ and $\{b^{(k)}(x)\}_{k=1}^{p_\beta}$ are known measurable functions; \item The statistical parameter \begin{equation} \theta:=(\alpha,\beta)\in \Theta_{\alpha} \times \Theta_{\beta}=\Theta \nonumber \end{equation} is unknown, where $\Theta_\alpha$ and $\Theta_\beta$ are bounded convex domains, subsets of $\mathbb{R}^{p_\alpha}$ and $\mathbb{R}^{p_\beta}$, respectively; \item $w$ is a standard {Wiener process} and $J$ a {compound Poisson process} with intensity parameter $\lambda\in[0,\infty)$ and i.i.d.\ jump-size random variables $\{\xi_i\}_{i\in\mathbb{N}}$, that is, \begin{equation*} J_t=\sum_{i=1}^{N_t} \xi_i; \end{equation*} \item $(w,J)$ is $\mathcal{F}_t$-adapted, and the initial variable $X_0$ is $\mathcal{F}_0$-measurable and independent of $(w,J)$. \end{itemize} Throughout this paper, we assume that there exists a true value $\theta_{0}:=(\alpha_0,\beta_0)\in\Theta$. We want to estimate $\theta_{0}$ based on a discrete-time but high-frequency observation $(X_{t^{n}_{j}})_{j=0}^{n}$ of a solution to \eqref{hm:sde}, where the sampling times are supposed to be equally spaced: \begin{equation} t^{n}_{j} =jh_{n} \n \end{equation} for a positive sequence $(h_n)$ such that $h_n \to 0$ and the terminal sampling time $T_{n}:=t^{n}_{n}=nh_n\to\infty$.
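For concreteness, a path of \eqref{hm:sde} can be generated by an Euler-type scheme; the following Python sketch uses hypothetical choices of $p_\alpha=2$, $p_\beta=1$, the coefficient functions, the parameter values, and a Gaussian jump-size law (none of which are prescribed by the model itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 10_000, 1e-3
alpha = np.array([0.5, 0.3])                     # true alpha (hypothetical)
beta = np.array([-1.0])                          # true beta  (hypothetical)
A = lambda x: np.array([1.0, x**2/(1 + x**2)])   # a^(1), a^(2)
B = lambda x: np.array([x])                      # b^(1)
c = lambda x: 1.0                                # jump coefficient
lam, xi_sd = 5.0, 1.0                            # jump intensity, jump-size sd

X = np.empty(n + 1)
X[0] = 0.0
for j in range(n):
    diff = np.sqrt(A(X[j]) @ alpha)              # A(x)@alpha stays positive here
    drift = B(X[j]) @ beta
    njumps = rng.poisson(lam*h)                  # jumps of J on (t_j, t_{j+1}]
    dJ = rng.normal(0.0, xi_sd, njumps).sum() if njumps else 0.0
    X[j+1] = X[j] + drift*h + diff*np.sqrt(h)*rng.standard_normal() + c(X[j])*dJ
```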
Throughout we suppose that $\lambda>0$; for diffusion models, many estimators of $\theta$ have been proposed, such as the Gaussian quasi-likelihood estimator \cite{Kes97}, the adaptive estimator \cite{UchYos12}, and the multi-step estimator \cite{KamUch15}, to mention just a few. The special forms of the coefficients of \eqref{hm:sde} may seem restrictive. However, we are particularly interested in models which can be estimated without heavy computational effort. As will be mentioned in Section \ref{Asymptotic Results}, we do not need any numerical search for a maximizer in order to estimate $\theta$ as well as in the virtual situation where we know every jump instant over $(0,T_n]$. In the presence of the jump component, eliminating the effect of $J$ is crucial for a reliable estimation of $\theta$. A well-known approach is the threshold-based method independently proposed in \cite{Man04} and \cite{ShiYos06}; see also \cite{OgiYos11} for subsequent developments. In this method, we look at the sizes of the increments \begin{equation} \Delta^n_{j}X=\Delta_{j}X:=X_{t^n_{j}}-X_{t^n_{j-1}} \nonumber \end{equation} for $j=1,\dots,n$ in absolute value: we assume that one jump has occurred over $(t^n_{j-1},t^n_j]$ if $|\Delta_{j}X|>r_{n}$ for a pre-specified \textit{jump-detection threshold} $r_{n}>0$, and then estimate $\theta$ after removing such increments. For a suitably chosen $r_{n}>0$, it is shown that the estimator of $\theta$ is asymptotically normally distributed at the same rate as for diffusion models, while the finite-sample performance of the threshold method strongly depends on the value of $r_{n}$. A data-adaptive quantitative choice of $r_n$ is a subtle and sensitive problem in practice; see \cite{Shi08} and \cite{Shi09}, as well as the references therein.
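In its simplest form, the local threshold filter can be sketched as follows (a toy Python example; the jump locations, sizes, and the constant in $r_n$ are all illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n, h, sigma = 5000, 1e-3, 0.8
# diffusion-like increments plus five compound-Poisson jumps (toy data)
dX = sigma*np.sqrt(h)*rng.standard_normal(n)
jump_idx = rng.choice(n, size=5, replace=False)
dX[jump_idx] += rng.normal(0.0, 1.0, size=5)
# threshold filter: keep only increments with |dX_j| <= r_n; the theory
# takes r_n of order h^varpi with varpi in (0, 1/2); the constant 5 is ad hoc
r_n = 5*np.sqrt(h)
kept = np.abs(dX) <= r_n
sigma2_hat = np.sum(dX[kept]**2)/(kept.sum()*h)   # filtered volatility estimate
```

Jump increments typically exceed $r_n$ and are discarded, so `sigma2_hat` comes out close to $\sigma^2 = 0.64$; the sensitivity of the result to the choice of $r_n$ is precisely the practical issue raised above.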
Obviously, if the model may have ``small'' jumps with positive probability, joint estimation of the diffusion and jump components can exhibit rather bad finite-sample performance; for example, some increments may simultaneously contain small jumps and a large fluctuation caused by the continuous component. This practical issue can also be seen in other jump detection methods such as \cite{AitJac09}. Recently, for estimating the volatility parameter in the non-ergodic framework, i.e., for a fixed $T>0$, $h_n=T/n$ and $T_n\equiv T$, \cite{InaYos18} proposed an alternative estimation procedure called a global jump-detection filter, based on the theory of order statistics constructed from the whole set of increments; there, it is shown that the global filtering can work both theoretically and numerically better than the previously studied local one (\cite{Man04}, \cite{ShiYos06}, and \cite{OgiYos11}). Nevertheless, as will be seen later, the required conditions on the distribution of jump sizes and on the decay rate of $h_n\to 0$ may be more stringent in the case where $T_n\to\infty$. Hence it is not quite clear whether or not, and how, the global filtering of \cite{InaYos18} is directly applicable to our ergodic setting. The primary objective of this paper is to formulate an intuitively easy-to-understand strategy which can simultaneously estimate $\theta$ and detect jumps without any precise calibration of a jump-detection threshold. For this purpose, we utilize the approximate self-normalized residuals of \cite{Mas13-2}, which adapt the classical Jarque-Bera test \cite{JarBer87} to our model. More specifically, the hypothesis test with significance level $\alpha\in(0,1)$ is constructed in the following manner: let the null hypothesis be that of ``no jump component'' against the alternative hypothesis of a ``non-trivial jump component'': \begin{equation} \mathcal{H}_0: {\lambda=0} \quad \text{vs} \quad \mathcal{H}_1: {\lambda>0}.
\n \end{equation} Then, if the Jarque-Bera type statistic introduced later is larger than a given percentile of the chi-square distribution with $2$ degrees of freedom, we reject the null hypothesis $\mathcal{H}_0$; otherwise, we accept $\mathcal{H}_0$. For such a test, we can intuitively regard the largest increment as containing at least one jump when the null hypothesis is rejected. Following this intuition, our proposed method goes as follows: we iteratively conduct the test, removing the largest increment among the retained samples, until rejection of $\mathcal{H}_0$ stops; after that, we construct the modified estimator of $\theta$ from the remaining samples. Our method enables us not only to make a ``pre-cleaning'' of the diffusion-like data sequence by removing large jumps, which break the approximate Gaussianity of the self-normalized residuals, but also to approximately quantify jumps relative to continuous fluctuations in a natural way; see Remark \ref{hm:rem_F.esti}. This paper is organized as follows: in Section \ref{Preliminaries}, we give a brief summary of the approximate self-normalized residuals and the Jarque-Bera type test for general jump diffusion models. Section \ref{Proposed strategy} provides our strategy and some remarks on its practical use. In Section \ref{Asymptotic Results}, we propose a least-squares type estimator and its one-step version for \eqref{hm:sde}. In the calculation of our estimator we can sidestep optimization, so it is numerically tractable, while retaining high representational power with respect to nonlinearity in the state variable. Moreover, we prove that our estimator is asymptotically equivalent to the ``oracle'' estimator constructed as if we observed the unobserved continuous part of $X$. We show some numerical experiment results in Section \ref{Numerical experiments}. Finally, Appendix \ref{hm:sec_proofs} presents the proofs of the results given in Section \ref{Asymptotic Results}.
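To fix ideas, the iterative test-and-trim loop just described can be mimicked in a few lines (a toy Python sketch with a constant diffusion coefficient, so that the residuals are proportional to the increments; the $O(\sqrt{h_n})$ skewness-correction term of the statistic reviewed in Section \ref{Preliminaries} is omitted, and the data and critical value are illustrative):

```python
import numpy as np

def jb_stat(eps):
    # classical Jarque-Bera statistic of a sample of residuals
    z = (eps - eps.mean())/eps.std()
    n = z.size
    return n/6*np.mean(z**3)**2 + n/24*(np.mean(z**4) - 3.0)**2

def iterative_trim(dX, crit=5.991):             # chi^2 upper 5% point, 2 d.o.f.
    keep = np.ones(dX.size, dtype=bool)
    removed = []
    while jb_stat(dX[keep]) > crit:
        j = int(np.argmax(np.where(keep, np.abs(dX), -np.inf)))
        keep[j] = False                         # drop the largest retained increment
        removed.append(j)
    return keep, removed

rng = np.random.default_rng(3)
dX = 0.5*np.sqrt(1e-3)*rng.standard_normal(4000)   # diffusion-like increments
jumps = rng.choice(4000, size=8, replace=False)
dX[jumps] += 0.3*rng.choice([-1.0, 1.0], size=8)   # eight clearly visible jumps
keep, removed = iterative_trim(dX)                 # removes the jumps, then stops
```

On such data, the eight contaminated increments dominate the kurtosis part of the statistic, so they are removed first, after which the test soon stops rejecting.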
\medskip Here are some notations and conventions used throughout this paper. We largely suppress ``$n$'' from the notation, writing, e.g., $t_{j}=t^{n}_{j}$ and $h=h_{n}$. For any vector variable $x=(x^{(i)})$, we write $\partial_x=\left(\frac{\partial}{\partial x^{(i)}}\right)_i$. For any process $Y$, $\Delta_j Y$ denotes the $j$-th increment $Y_{t_{j}}-Y_{t_{j-1}}$. $C$ denotes a universal positive constant which may vary at each appearance. $\top$ stands for the transpose operator, and $v^{\otimes2}:= vv^\top$ for any matrix $v$. Convergence in probability and in distribution are denoted by $\cip$ and $\cil$, respectively. All limits appearing below are taken for $n\to\infty$ unless otherwise mentioned. For two nonnegative real sequences $(a_n)$ and $(b_n)$, we write $a_n \lesssim b_n$ if $\limsup_n(a_n/b_n)<\infty$. For any $x\in\mathbb{R}$, $\lfloor x \rfloor$ denotes the largest integer which does not exceed $x$. \section{Preliminaries} \label{Preliminaries} To see whether a working model fits the data well, and/or whether the data at hand have outliers, diagnosis based on residual analysis is often carried out. For jump diffusion models, \cite{Mas13-2} formulated a Jarque-Bera normality test based on self-normalized residuals of the driving noise process. In this section, we briefly review the construction of the self-normalized residual and the Jarque-Bera statistic, together with its asymptotic behavior, for the general ergodic jump diffusion model described by \begin{equation} dX_{t} = a(X_{t},\alpha)dw_{t} + b(X_{t},\beta)dt + c(X_{t-})dJ_{t}. \label{yu:sde} \end{equation} Given any function $f$ on $\mathbb{R}\times\Theta$ and $s\geq0$, we hereafter write \begin{equation*} f_s(\theta)=f(X_s,\theta), \end{equation*} and in particular, for all $j\in\{0,\dots,n\}$ we denote \begin{equation} f_{j}(\theta) = f(X_{t_{j}},\theta).
\nonumber \end{equation} For each $j\in\{1,\dots,n\}$, let \begin{equation} \ep_j(\alpha)=\ep_{n,j}(\alpha):=\frac{\Delta_{j}X}{\sqrt{a^{2}_{j-1}(\alpha)h_n}}. \label{hm:ep_def} \end{equation} Then, following \cite{Mas13-2}, we introduce the self-normalized residuals and the Jarque-Bera type statistic: \begin{align*} &\hat{N}_j=\hat{S}_n^{-1/2}(\ep_j(\hat{\alpha}_{n})-\bar{\hat{\ep}}_n),\\ &\mathrm{JB}_n= \frac{1}{6n}\left(\sum_{j=1}^{n} (\hat{N}_j)^3-3\sqrt{h_n}\sum_{j=1}^{n} \partial_x a_{j-1}(\hat{\alpha}_{n})\right)^2+\frac{1}{24n}\left(\sum_{j=1}^{n}((\hat{N}_j)^4-3)\right)^2, \end{align*} where \begin{equation*} \bar{\hat{\ep}}_n:=\frac{1}{n}\sum_{j=1}^{n} \ep_j(\hat{\alpha}_{n}), \quad \hat{S}_n:=\frac{1}{n}\sum_{j=1}^{n}(\ep_j(\hat{\alpha}_{n})-\bar{\hat{\ep}}_n)^2. \end{equation*} The following theorem gives the asymptotic behavior of $\mathrm{JB}_n$, which ensures the theoretical validity of the Jarque-Bera type test based on $\mathrm{JB}_n$. \begin{Thm}(\cite[Theorems 3.1 and 4.1]{Mas13-2}) \label{Achi&p} \begin{enumerate} \item Under $\mathcal{H}_0: {\lambda=0}$ and suitable regularity conditions, for any estimator $\hat{\alpha}_{n}$ of $\alpha$ satisfying \begin{equation}\label{sqn} \sqrt{n}(\hat{\alpha}_{n}-\alpha_0)=O_p(1), \end{equation} we have \begin{equation*} \mathrm{JB}_{n}\cil \chi^2(2). \end{equation*} \item Under $\mathcal{H}_1: {\lambda>0}$ and suitable regularity conditions, we have \begin{equation*} \mathrm{JB}_{n}\overset{P}\rightarrow \infty, \end{equation*} that is, $P(\mathrm{JB}_{n}>K) \to 1$ for any $K>0$. \end{enumerate} \end{Thm} \begin{Rem} The residual defined by \eqref{hm:ep_def} is of the Euler type, ignoring the drift fluctuation; under the sampling conditions in Assumption \ref{Sampling} given later, we can ignore the presence of the drift term in the construction of the residuals.
Indeed, instead of \eqref{hm:ep_def} we could consider \begin{equation} \ep_j(\theta)=\ep_{n,j}(\theta):=\frac{\Delta_{j}X - h_n b_{j-1}(\beta) }{\sqrt{a^{2}_{j-1}(\alpha)h_n}}. \nonumber \end{equation} Also, we could define $\mathrm{JB}_{n}$ using only the skewness or the kurtosis part; this only changes the asymptotic degrees of freedom from $2$ in Theorem \ref{Achi&p}-(1) to $1$. See \cite{Mas11} for the technical details. This case may require more computation time, while we would then have a stabilized performance under $\mathcal{H}_0$ compared with the case of \eqref{hm:ep_def}. \end{Rem} \begin{Rem} The results of \cite{Mas13-2} can apply even when the jump component is driven not by a compound Poisson process but by a much broader class of finite-activity processes. It is therefore expected that we may relax the structural assumption, although the theoretical results in Section \ref{Asymptotic Results} would then require a large number of modifications. \end{Rem} In the rest of this section, suppose that the null hypothesis $\mathcal{H}_0$ is true, so that the underlying model is a diffusion process. Among the possible choices of $\hat{\alpha}_{n}$, the Gaussian quasi-maximum likelihood estimator (GQMLE) is one of the most important candidates because it has asymptotic efficiency in the H\'{a}jek-Le Cam sense (cf. \cite{Gob02}). The GQMLE is defined as any maximizer of the Gaussian quasi-likelihood (GQL) \begin{equation*} \mathbb{H}_{n}(\theta) := \sum_{j=1}^{n} \log\left\{\frac{1}{\sqrt{2\pi a^{2}_{j-1}(\alpha)h_n}}\phi\left(\frac{\Delta_{j}X - b_{j-1}(\beta)h_n}{\sqrt{a^{2}_{j-1}(\alpha)h_n}}\right)\right\}, \end{equation*} where $\phi$ denotes the standard normal density. This quasi-likelihood is based on the local-Gauss approximation of the transition law $\mathcal{L}(X_{t_{j}}|X_{t_{j-1}})$ by $N(b_{j-1}(\beta)h_n, a^{2}_{j-1}(\alpha)h_n)$.
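In code, the local-Gauss approximation leads to a compact evaluation of the Gaussian quasi-log-likelihood (a Python sketch, written up to the usual normalizing-constant convention; the coefficient functions and the toy Ornstein-Uhlenbeck data are illustrative choices of ours):

```python
import numpy as np

def gql(theta, X, h, a, b):
    # H_n(theta): each transition L(X_{t_j} | X_{t_{j-1}}) is approximated
    # by N(X_{t_{j-1}} + b*h, a^2*h)
    alpha, beta = theta
    dX, Xp = np.diff(X), X[:-1]
    var = a(Xp, alpha)**2*h
    mean = b(Xp, beta)*h
    return np.sum(-0.5*np.log(2*np.pi*var) - (dX - mean)**2/(2*var))

# toy data: Ornstein-Uhlenbeck path dX = -X dt + dW (alpha0 = beta0 = 1)
rng = np.random.default_rng(0)
n, h = 5000, 1e-3
X = np.empty(n + 1); X[0] = 0.0
for j in range(n):
    X[j+1] = X[j] - X[j]*h + np.sqrt(h)*rng.standard_normal()
a = lambda x, alpha: alpha*np.ones_like(x)
b = lambda x, beta: -beta*x
```

Evaluating `gql((1.0, 1.0), X, h, a, b)` at the true parameters yields a larger value than at strongly misspecified ones, which is what any maximizer of $\mathbb{H}_n$ exploits.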
It is well known that asymptotic normality holds under suitable regularity conditions \cite{Kes97}: for the GQMLE $\tilde{\theta}_n=(\tilde{\alpha}_n,\tilde{\beta}_n)$, we have \begin{equation} \left( \sqrt{n}(\tilde{\alpha}_n-\alpha_0),\, \sqrt{T_n}(\tilde{\beta}_n-\beta_0) \right) \cil N\left(0, \,\diag(I_{1}^{-1}(\theta_{0}), I_{2}^{-1}(\theta_{0}))\right), \nonumber \end{equation} where \begin{align*} &I_{1}(\theta_{0})=\frac{1}{2}\int \left(\frac{\partial_\alpha a^2}{a^2}(x,\alpha_0)\right)^{\otimes2}\pi_0(dx),\\ &I_{2}(\theta_{0})=\int\left(\frac{\partial_\beta b}{a}(x,\beta_0)\right)^{\otimes2}\pi_0(dx), \end{align*} both assumed to be positive definite. Here $\pi_0$ denotes the invariant measure of $X$. The strategy we will describe in Section \ref{Proposed strategy} is in principle valid even when the drift and diffusion coefficients are nonlinear in the parameters. However, if the coefficients $a$ and $b$ are highly nonlinear and/or the number of parameters is large, then the calculation of the GQMLE can be quite time-consuming. To deal with this problem, it is effective to separate the optimizations over $\alpha$ and $\beta$ by utilizing the difference between the small-time stochastic orders of the $dt$- and $dw_t$-terms. To be specific, we introduce the following stepwise version of the GQMLE, $\check{\theta}_n:=(\check{\alpha}_n,\check{\beta}_n)$: \begin{align*} &\check{\alpha}_n\in\mathop{\rm argmax}_{\alpha\in\bar{\Theta}_\alpha} \sum_{j=1}^{n} \log\left\{\frac{1}{\sqrt{2\pi a^{2}_{j-1}(\alpha)h_n}}\phi\left(\frac{\Delta_{j}X}{\sqrt{a^{2}_{j-1}(\alpha)h_n}}\right)\right\},\\ &\check{\beta}_n\in\mathop{\rm argmax}_{\beta\in\bar{\Theta}_\beta} \mathbb{H}_{n}(\check{\alpha}_n,\beta). \end{align*} Under suitable regularity conditions, it is shown that the stepwise GQMLE has the same asymptotic distribution as the original GQMLE $\tilde{\theta}_n$ (cf. \cite{UchYos12}).
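For a toy model with constant diffusion coefficient, $dX_t = \alpha\,dw_t - \beta X_t\,dt$, both stepwise maximizers are available in closed form, which makes the scheme easy to illustrate (a Python sketch on simulated data; general coefficients would require numerical optimization):

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 20_000, 5e-3
alpha0, beta0 = 1.0, 2.0                    # true parameters (toy choice)
X = np.empty(n + 1); X[0] = 0.0
for j in range(n):                          # Euler scheme for the data
    X[j+1] = X[j] - beta0*X[j]*h + alpha0*np.sqrt(h)*rng.standard_normal()
dX, Xp = np.diff(X), X[:-1]

# step 1: diffusion parameter from the drift-free GQL (closed form here)
alpha_hat = np.sqrt(np.sum(dX**2)/(n*h))
# step 2: drift parameter from the plug-in GQL (alpha_hat cancels for scalar a)
beta_hat = -np.sum(Xp*dX)/(h*np.sum(Xp**2))
```

With $T_n = nh = 100$, `alpha_hat` converges at rate $\sqrt{n}$ and `beta_hat` at the slower rate $\sqrt{T_n}$, in line with the asymptotic normality displayed above.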
Hence $\check{\theta}_n$ is asymptotically efficient, and the claims in Theorem \ref{Achi&p} hold true with $\hat{\alpha}_{n}$ replaced by $\check{\alpha}_n$. Although in general we have to conduct two optimizations in the stepwise estimation scheme, it lessens the number of parameters to be simultaneously optimized, thus reducing the computational time. \section{Proposed strategy}\label{Proposed strategy} In this section, still looking at \eqref{yu:sde}, we propose an iterative jump detection procedure based on the Jarque-Bera type test introduced in the previous section. Let $q\in(0,1)$ be a small number, which will later serve as the significance level. Suppose that we are given an estimator $\hat{\theta}_{n}$ of $\theta=(\alpha,\beta)$, defined to be any element $\hat{\theta}_{n} \in \mathop{\rm argmax} M_n$ for some contrast function $M_n$ of the form \begin{equation} M_n(\theta) := \sum_{j=1}^{n} m_{h_{n}}\left( X_{t_{j-1}},\Delta_j X;\,\theta \right). \nonumber \end{equation} Denote by $\chi^2_q(2)$ the upper $q$-percentile of the chi-squared distribution with $2$ degrees of freedom. Then, our procedure is as follows; we implicitly assume that there are no ties among the values $|\Delta_{1}X|,\dots,|\Delta_{n}X|$. \medskip \begin{itemize} \item[{\it Step 0.}] Set $k=k_n=0$, and let $\hat{\mathcal{J}}_{n}^0:=\emptyset$.
\medskip \item[{\it Step 1.}] Calculate the modified estimator $\hat{\theta}_{n}^k$ defined by \begin{equation} \hat{\theta}_{n}^k \in \mathop{\rm argmax}_{\theta\in\Theta} \sum_{j\notin\hat{\mathcal{J}}_n^k} m_{h_{n}}\left( X_{t_{j-1}},\Delta_j X;\,\theta \right), \nonumber \end{equation} then let \begin{equation*} \bar{\hat{\ep}}_n^k:=\frac{1}{n-k}\sum_{j\notin\hat{\mathcal{J}}_{n}^k} \ep_j(\hat{\alpha}_{n}^k), \qquad \hat{S}_n^k:=\frac{1}{n-k}\sum_{j\notin\hat{\mathcal{J}}_{n}^k}(\ep_j(\hat{\alpha}_{n}^k)-\bar{\hat{\ep}}_n^k)^2, \end{equation*} and (re-)construct the following modified self-normalized residuals $(\hat{N}_j^k)_{j=1}^n$ and Jarque-Bera type statistic $\mathrm{JB}_{n}^k$: \begin{align}\label{yu:msnr} \hat{N}_j^k &:=(\hat{S}_n^k)^{-1/2}(\ep_j(\hat{\alpha}_{n}^k)-\bar{\hat{\ep}}_n^k), \nonumber\\ \mathrm{JB}_{n}^k &:= \frac{1}{6(n-k)}\left(\sum_{j\notin\hat{\mathcal{J}}_{n}^k} (\hat{N}_j^k)^3-3\sqrt{h_n}\sum_{j\notin\hat{\mathcal{J}}_{n}^k}\partial_x a_{j-1}(\hat{\alpha}_{n}^k)\right)^2 \\ &{}\qquad +\frac{1}{24(n-k)}\left(\sum_{j\notin\hat{\mathcal{J}}_{n}^k}((\hat{N}_j^k)^4-3)\right)^2.\nonumber \end{align} \medskip \item[{\it Step 2.}] If $\mathrm{JB}_{n}^k>\chi^2_q(2)$, then pick out the interval number \begin{equation} j(k+1):=\mathop{\rm argmax}_{j\in\{1,\dots, n\}\setminus\hat{\mathcal{J}}_{n}^k} |\Delta_{j}X|, \nonumber \end{equation} add it to the set $\hat{\mathcal{J}}_{n}^k$: \begin{equation} \hat{\mathcal{J}}_{n}^{k+1} := \hat{\mathcal{J}}_{n}^{k} \cup \{j(k+1)\}, \nonumber \end{equation} and then return to {\it Step 1} with $k$ replaced by $k+1$. If $\mathrm{JB}_{n}^k \le \chi^2_q(2)$, then set the estimated number of jumps to be \begin{equation} k^\star=k_n^\star(\omega) := \min\left\{ k\le n;~ \mathrm{JB}_{n}^k \le \chi^2_q(2) \right\} \nonumber \end{equation} and go to {\it Step 3}. \medskip \item[{\it Step 3.}] If $k^\star=0$, we regard that there is no jump; otherwise, we regard that each of $\Delta_{j(1)}X, \dots, \Delta_{j(k^\star)} X$ contains one jump.
Finally, set $\hat{\theta}_{n}^{k^\star}$ to be the estimator of $\theta$. \end{itemize} \medskip In practice, the above-described method enables us to divide the set of all the increments $(\Delta_{j}X)_{j=1}^{n}$ into the following two categories: \begin{itemize} \item ``One-jump'' group $(\Delta_j X)_{j\in\hat{\mathcal{J}}_n^{k^{\star}}}=\{ \Delta_{j(1)}X,\dots,\Delta_{j(k^\star)}X\}$, and \item ``No-jump'' group $(\Delta_j X)_{j\notin\hat{\mathcal{J}}_n^{k^{\star}}}=(\Delta_{j}X)_{j=1}^{n} \setminus \{ \Delta_{j(1)}X,\dots,\Delta_{j(k^\star)}X\}$. \end{itemize} Just after the jump removals stop, we automatically obtain the estimator $\hat{\theta}_{n}^{k^{\star}}$ of the drift and diffusion parts of $X$, which is the maximizer of the \textit{modified Gaussian quasi-likelihood} defined by \begin{equation*} \theta \mapsto \sum_{j\notin\hat{\mathcal{J}}_{n}^{k^\star}} \log\left\{\frac{1}{\sqrt{2\pi a^{2}_{j-1}(\alpha)h_n}}\phi\left(\frac{\Delta_{j}X - b_{j-1}(\beta)h_n}{\sqrt{a^{2}_{j-1}(\alpha)h_n}}\right)\right\}. \end{equation*} As is demonstrated in Section \ref{Asymptotic Results}, our primary setting \eqref{hm:sde} is designed not to require any optimization via a numerical search such as the quasi-Newton method. We should note that, due to the nature of testing, there may remain a positive probability of spurious detection of jumps no matter how large the number of data is. Nevertheless, as long as the underlying model is correct, the number of removals is much smaller than the total sample size, so that spurious removals are not serious here. \medskip \begin{Rem}\label{shift} In the above-described procedure we simply remove the largest increments at each step, while keeping the positions of the remaining data.
Note that in the construction of the modified estimator $\hat{\theta}_{n}^k$ it is incorrect to use the ``shifted'' samples $(Y_{t_j})_{j\notin\hat{\mathcal{J}}_n^{k_n}}$ defined by \begin{equation*} Y_{t_j}=X_{t_j}-\sum_{i\in\hat{\mathcal{J}}_n^{k_n}\cap\{1,\dots,j\}}\Delta_i X. \end{equation*} This is because the one-step transition density of the original process $X$ is spatially different from that of $Y$, so that the estimation result would not suitably reflect the information in the data. \end{Rem} \begin{Rem}\label{ite} At the $k$-th iteration, it can be regarded that we conduct the Jarque-Bera type test for the trimmed data $(X_{t_{j-1}},\Delta_j X)_{j\notin\hat{\mathcal{J}}_n^k}$. Hence the null hypothesis $\mathcal{H}^k_0$ and the alternative hypothesis $\mathcal{H}^k_1$ of the test are formally written as follows: \begin{align*} &\mathcal{H}^k_0: \sharp\left\{j\in\{1,\dots,n\} \ \middle| \ \Delta_j N\geq 1 \right\}\leq k,\\ &\mathcal{H}^k_1: \sharp\left\{j\in\{1,\dots,n\} \ \middle| \ \Delta_j N\geq 1 \right\}>k, \end{align*} where $\sharp A$ denotes the cardinality of a set $A$. From this formulation, we have the inclusion relations \begin{equation*} \mathcal{H}_0\subset \mathcal{H}_0^1\subset \mathcal{H}_0^2 \subset \dots \subset \mathcal{H}_0^k\subset \cdots, \end{equation*} which implicitly suggests that we can extract more than one increment at \textit{Step 2} when several jumps seem to exist: indeed, in view of the expectation of Poisson processes, it seems reasonable to remove, at the first rejection of $\mathcal{H}_0$, not only $|\Delta_{j(1)}X|$ but the first $O(T_n)$ largest increments, resulting in faster termination of the procedure. \end{Rem} \begin{Rem} In practice, the size of the last-removed increment, \begin{equation} r_{n}(k^{\star}):=|\Delta_{j(k^{\star})}X|, \nonumber \end{equation} would be used as a threshold for detecting jumps in future increments.
\end{Rem} \begin{Rem} \label{hm:rem_F.esti} When the jump coefficient is parameterized as $c(x,\gamma)$ and a model of the common jump distribution, say $F_J$, of the compound Poisson process $J$ is given, we may consider estimation of $\gamma$ and $F_J$ based on the sequence $\{\Delta_{j(k)}X/c_{j(k)-1}(\gamma)\}_{k=1}^{k^\star}$, supposing that they are i.i.d. random variables with common jump distribution $F_J$; note that the number of jumps tends to increase for larger $T_n$. This is beyond the scope of this paper, and we leave it as a future study. \end{Rem} \section{Asymptotic results}\label{Asymptotic Results} We now return to the model \eqref{hm:sde}. As was mentioned in the previous section, we have some freedom in the choice of an estimator of $\theta$. As a matter of course, for each estimator $\hat{\theta}_{n}$, we need to study the asymptotic behavior of its modified version $\hat{\theta}_{n}^{k^\star}$. In this section, we derive asymptotic results for a numerically tractable least-squares type estimator and the corresponding one-step improved version. For simplicity, we write \begin{align*} &\mathbb{A}(x)=(a^{(1)}(x),\dots,a^{(p_\alpha)}(x))^\top, \quad \mathbb{B}(x)=(b^{(1)}(x),\dots,b^{(p_\beta)}(x))^\top. \end{align*} \begin{Assumption}[Regularity of coefficients]\label{Ascoef} The following conditions hold: \begin{enumerate} \item $\ds{0< \inf_{x,\alpha}\mathbb{A}(x)^{\top}\alpha \wedge \inf_x |c(x)|}$ \ and \ $\ds{\sup_{x,\alpha}\mathbb{A}(x)^{\top}\alpha \vee \sup_x |c(x)| <\infty}$; \item $\ds{\left|\sqrt{\mathbb{A}(x)^\top\alpha_0}-\sqrt{\mathbb{A}(y)^\top\alpha_0}\right|+\left|\mathbb{B}(x)-\mathbb{B}(y)\right|+\left|c(x)-c(y)\right|\lesssim |x-y|, \quad x,y\in\mathbb{R}}$; \item There exists a constant $C'\ge 0$ for which $\ds{|\partial_x \mathbb{A}(x)| + |\partial_x^{2} \mathbb{A}(x)| \lesssim 1+|x|^{C'}, \quad x\in\mathbb{R}}$. \end{enumerate} \end{Assumption} Here the supremum with respect to $\alpha$ is taken over the compact set $\bar{\Theta}_\alpha$.
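The iterative removal procedure of Section \ref{Proposed strategy} can be sketched as follows for the toy constant-coefficient model $dX_t=\sigma\,dw_t+dJ_t$, in which the Jarque-Bera statistic of the self-normalized residuals coincides with that of the raw kept increments (JB is location/scale invariant). All numerical choices (sample size, planted jump positions and sizes, seed) are illustrative assumptions of ours; for $2$ degrees of freedom the quantile $\chi^2_q(2)$ has the closed form $-2\log q$.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, h = 1000, 0.01
dX = math.sqrt(h) * rng.standard_normal(n)   # diffusion increments (sigma = 1)
jump_idx = [100, 400, 800]                   # planted jump positions (assumed)
dX[jump_idx] += 2.0                          # jumps of ~20 noise standard deviations

q = 1e-3
threshold = -2.0 * math.log(q)               # upper-q chi^2 quantile with 2 d.f.

def jarque_bera(e):
    """JB statistic of a sample (invariant to location and scale)."""
    m = e.size
    s = (e - e.mean()) / e.std()
    return m / 6.0 * (np.mean(s**3) ** 2 + (np.mean(s**4) - 3.0) ** 2 / 4.0)

removed = []                                 # the set J-hat, grown one index per step
keep = np.ones(n, dtype=bool)
while len(removed) < n // 10:                # safety cap on the number of removals
    if jarque_bera(dX[keep]) <= threshold:   # accept H_0^k: at most k jumps remain
        break
    j = int(np.argmax(np.abs(np.where(keep, dX, 0.0))))  # largest remaining increment
    removed.append(j)
    keep[j] = False

k_star = len(removed)
```

Since the planted jumps dominate every Gaussian increment, they are removed first, and the loop then stops once the kept increments pass the normality test.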
The basic scenario to construct an estimator of $\theta$ if $X$ had no jumps is as follows: \begin{itemize} \item We first estimate the diffusion parameter by the least-squares estimator (LSE): \begin{align} \tilde{\alpha}_n &:= \operatornamewithlimits {argmin}_\alpha \sum_{j=1}^{n} \left\{(\Delta_j X)^2-h_n \mathbb{A}_{j-1}^\top\alpha\right\}^2 \nonumber\\ &=\frac{1}{h_n} \left(\sum_{j=1}^{n} \mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)^{-1} \sum_{j=1}^{n} (\Delta_j X)^2\mathbb{A}_{j-1}. \nonumber \end{align} \item We then improve the LSE through the scoring based on the GQL: \begin{equation}\label{ose} \hat{\alpha}_{n}:= \tilde{\alpha}_n-\left(\sum_{j=1}^{n}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{(\mathbb{A}_{j-1}^\top\tilde{\alpha}_n)^2}\right)^{-1}\sum_{j=1}^{n} \left(\frac{1}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n}-\frac{(\Delta_j X)^2}{h_n(\mathbb{A}_{j-1}^\top \tilde{\alpha}_n)^2}\right)\mathbb{A}_{j-1}. \end{equation} \item Finally we estimate the drift parameter by the plug-in LSE: \begin{align} \hat{\beta}_{n}&:=\operatornamewithlimits {argmin}_\beta \sum_{j=1}^{n} \frac{(\Delta_j X-h_n \mathbb{B}_{j-1}^\top\beta)^2}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}} \nonumber\\ &=\frac{1}{h_n}\left(\sum_{j=1}^{n} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}}\right)^{-1} \sum_{j=1}^{n} \frac{\Delta_j X}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}}\mathbb{B}_{j-1}. \nonumber \end{align} \end{itemize} It is known that, when the underlying process is a diffusion, $\tilde{\alpha}_n$ is not asymptotically efficient while $\hat{\beta}_{n}$ is, which is why we additionally consider the improved version $\hat{\alpha}_{n}$ based on the stepwise GQL: \begin{equation*} \mathbb{H}_{1,n}(\alpha):=-\frac{1}{2}\sum_{j=1}^{n}\left\{\log\left( 2\pi h_n \mathbb{A}_{j-1}^\top\alpha \right)+\frac{(\Delta_j X)^2}{h_n\mathbb{A}_{j-1}^\top\alpha}\right\}.
\end{equation*} Then $\hat{\alpha}_{n}$ is asymptotically efficient under appropriate regularity conditions. The form of the second term on the right-hand side of \eqref{ose} comes from the quasi-score associated with $\mathbb{H}_{1,n}(\alpha)$ and the expression of the Fisher information matrix corresponding to $\alpha$, the latter being equal to the upper-left part of $\Sigma_0$ in Theorem \ref{osan}. Now, in the presence of jumps, in view of Section \ref{Proposed strategy} we introduce the modified estimators \begin{align} &\tilde{\alpha}_n^{k_n}=\frac{1}{h_n} \left(\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)^{-1} \sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} (\Delta_j X)^2\mathbb{A}_{j-1}, \nonumber\\ &\hat{\beta}_{n}^{k_n}=\frac{1}{h_n}\left(\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\right)^{-1} \sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \frac{\Delta_j X}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbb{B}_{j-1}, \nonumber \end{align} where $\hat{\alpha}_{n}^{k_n}$ is the improved estimator defined by \begin{equation} \hat{\alpha}_{n}^{k_n}=\tilde{\alpha}_n^{k_n}-\left(\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{(\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n})^2}\right)^{-1}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \left(\frac{1}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n}}-\frac{(\Delta_j X)^2}{h_n(\mathbb{A}_{j-1}^\top \tilde{\alpha}_n^{k_n})^2}\right)\mathbb{A}_{j-1}. \label{hm:ose-a} \end{equation} The inverse matrices appearing in the above definitions exist asymptotically under the forthcoming conditions; for brevity, their existence is implicitly assumed here.
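For the scalar case $p_\alpha=p_\beta=1$ with $\mathbb{A}(x)=1/(1+\sin^2 x)$ and $\mathbb{B}(x)=-x$, the LSE, its one-step GQL improvement, and the plug-in drift estimator above reduce to a few vectorized sums. The sketch below checks them on a jump-free Euler-simulated path, so that $\hat{\mathcal{J}}_n^{k_n}$ is empty and the modified estimators coincide with the original ones; the model and all numerical choices are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha0, beta0 = 3.0, 1.0
n, h = 20000, 0.005                          # so T_n = n*h = 100
A = lambda x: 1.0 / (1.0 + np.sin(x) ** 2)   # scalar A(x): a^2(x, alpha) = alpha*A(x)
B = lambda x: -x                             # scalar B(x): b(x, beta) = beta*B(x)

# Euler scheme for the jump-free model dX = sqrt(alpha0*A(X)) dw + beta0*B(X) dt
X = np.zeros(n + 1)
dw = np.sqrt(h) * rng.standard_normal(n)
for j in range(n):
    X[j + 1] = X[j] + beta0 * B(X[j]) * h + np.sqrt(alpha0 * A(X[j])) * dw[j]

dX, Aj, Bj = np.diff(X), A(X[:-1]), B(X[:-1])

# least-squares estimator of alpha (no indices removed here)
alpha_tilde = np.sum(dX**2 * Aj) / (h * np.sum(Aj**2))
# one-step Gaussian quasi-likelihood improvement, term by term as in the display
den = np.sum(Aj**2 / (Aj * alpha_tilde) ** 2)
num = np.sum((1.0 / (Aj * alpha_tilde) - dX**2 / (h * Aj**2 * alpha_tilde**2)) * Aj)
alpha_hat = alpha_tilde - num / den
# plug-in least-squares estimator of beta
beta_hat = np.sum(dX * Bj / (Aj * alpha_hat)) / (h * np.sum(Bj**2 / (Aj * alpha_hat)))
```

In this scalar case the one-step update has the closed form $\hat{\alpha}_n=(nh_n)^{-1}\sum_j(\Delta_jX)^2/\mathbb{A}_{j-1}$, which is no longer the LSE, illustrating that the scoring step genuinely changes the estimator.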
Importantly, these expressions show that we can compute the modified estimators $\tilde{\alpha}_n^{k_n}$, $\hat{\beta}_{n}^{k_n}$, and $\hat{\alpha}_{n}^{k_n}$ simply by removing the indices in $\hat{\mathcal{J}}_n^{k_n}$ from the sums, without repetitive numerical optimization, thus reducing the computational time to a large extent. Further, we may proceed with $\tilde{\alpha}_n^{k_n}$ alone, without the improved version $\hat{\alpha}_n^{k_n}$, if asymptotic efficiency is not the first priority and a quick-to-compute estimator is preferred. \medskip To state our main result, we introduce further assumptions below. \begin{Assumption}[Stability]\label{Moments}$\ $ \begin{enumerate} \item There exists a unique invariant probability measure $\pi_0$, and for any function $f\in L_1(\pi_0)$, we have \begin{equation*} \frac{1}{T}\int_0^T f(X_t)dt\cip \int_\mathbb{R} f(x)\pi_0(dx), \quad \mathrm{as} \ T\to\infty. \end{equation*} \item $\ds{\sup_{t\in\mathbb{R}^+} E[|X_t|^q]<\infty}$ for any $q>0$. \end{enumerate} \end{Assumption} \medskip \begin{Assumption}[Sampling design]\label{Sampling} There exist positive constants $\kappa',\kappa \in (1/2,1)$ such that \begin{equation} n^{-\kappa'} \lesssim h_n \lesssim n^{-\kappa}. \nonumber \end{equation} \end{Assumption} Recall that the driving noise $J$ can be expressed as \begin{equation*} J_t=\sum_{i=1}^{N_t} \xi_i, \end{equation*} with a Poisson process $N$ and i.i.d. random variables $(\xi_i)$ independent of $N$. \begin{Assumption}[Jump size]\label{Asjsize}$\ $ \begin{enumerate} \item $E[|\xi_1|^q]<\infty$ for any $q>0$. \item In addition to Assumption \ref{Sampling}, \begin{equation} \limsup_{x\downarrow 0} x^{-s}P\left( |\xi_1| \le x \right) < \infty, \label{hm:add-3} \end{equation} for some constant $s$ satisfying \begin{equation} s > \frac{4(1-\kappa)}{2\kappa -1}.
\nonumber \end{equation} \end{enumerate} \end{Assumption} \medskip Here are some technical remarks on each assumption. Assumption \ref{Ascoef} ensures the existence of a {c\`adl\`ag} solution of \eqref{hm:sde} and its Markov property (cf. \cite[Chapter 6]{App09}). Assumption \ref{Moments} is essential for deriving our theoretical results. In our Markovian framework, it suffices for Assumption \ref{Moments}-(1) to have \begin{equation} \left\| P_t(x,\cdot) -\pi_0(\cdot) \right\|_{TV} \to 0, \quad t\to\infty, \quad x\in\mathbb{R}, \label{hm:ergodicity} \end{equation} for some probability measure $\pi_0$, where $\| \mathfrak{m}(\cdot) \|_{TV}$ denotes the total variation norm of a signed measure $\mathfrak{m}$ and $\{P_t(x,dy)\}$ denotes the family of transition probabilities of $X$; then $\pi_0$ is the unique invariant measure of $X$, and Assumption \ref{Moments}-(1) holds for any $f\in L_1(\pi_0)$ and any initial distribution $\mathcal{L}(X_0)$; see \cite{Bha82} for details. Further, \eqref{hm:ergodicity} together with Assumption \ref{Moments}-(2) implies that \begin{equation} \int_\mathbb{R}|x|^q\pi_0(dx)<\infty \nonumber \end{equation} for any $q>0$; this can be seen in a standard manner using Fatou's lemma and the monotone convergence theorem through a smooth truncation of the mapping $x\mapsto|x|^q$ onto a compact set. We refer to \cite{Kul09}, \cite{Mas08}, and \cite{Mas13-1} for easy-to-check sufficient conditions for \eqref{hm:ergodicity} and Assumption \ref{Moments}-(2). Assumptions \ref{Sampling} and \ref{Asjsize} describe a tradeoff between the sampling frequency and the probability of small jump sizes (a quicker decay of $h_n$ allows for more frequent small jumps of $J$). We have formulated them giving preference to simplicity. See Section \ref{hm:sec_pre-rems} for the technical consequences actually required in the proofs. \begin{Rem} We are focusing on estimation of both drift and diffusion coefficients under ergodicity.
Nevertheless, we may consistently estimate the diffusion coefficient even when the terminal sampling time is fixed, such as $T_n\equiv 1$, without ergodicity; see \cite{GenJac93}, and also \cite{InaYos18} as well as the references therein. Since \cite{Mas13-2} can handle the non-ergodic case as well, it is expected that our estimation strategy in Section \ref{Proposed strategy} would carry over and that the theoretical results in this section would have non-ergodic counterparts valid under much weaker assumptions; in particular, we would only require \eqref{hm:add-3} for some $s>0$. \end{Rem} \medskip To investigate the asymptotic properties of our estimators, we introduce the unobserved continuous part of $X$ defined by \begin{align*} X^{\mathrm{cont}}_t=X_t-X_0-\int_0^t c(X_{s-})dJ_s=\int_0^t a(X_s,\alpha_0)dw_s+\int_0^t b(X_s,\beta_0)ds. \end{align*} Let $(\check{\alpha}_n)$ be any random sequence such that \begin{equation}\label{ines} \sqrt{n}(\check{\alpha}_n-\alpha_0)=O_p(1). \end{equation} As in \eqref{hm:ose-a}, we define the random sequence $\hat{\alpha}_{n}^{\mathrm{cont}}$ by \begin{equation*} \hat{\alpha}_{n}^{\mathrm{cont}}=\check{\alpha}_n-\left(\sum_{j=1}^{n}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{(\mathbb{A}_{j-1}^\top\check{\alpha}_n)^2}\right)^{-1}\sum_{j=1}^{n}\left(\frac{1}{\mathbb{A}_{j-1}^\top\check{\alpha}_n}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \check{\alpha}_n)^2}\right)\mathbb{A}_{j-1}. \end{equation*} Correspondingly, we also define \begin{equation*} \hat{\beta}_{n}^{\mathrm{cont}}:=\frac{1}{h_n}\left(\sum_{j=1}^{n}\frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{\mathrm{cont}}}\right)^{-1} \sum_{j=1}^{n} \frac{\Delta_j X^{\mathrm{cont}}}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{\mathrm{cont}}}\mathbb{B}_{j-1}.
\end{equation*} As expected, $(\hat{\alpha}_{n}^{\mathrm{cont}},\hat{\beta}_{n}^{\mathrm{cont}})$ would serve as a good estimator if it could be computed: \begin{Thm}\label{osan} Suppose that Assumptions \ref{Ascoef} to \ref{Sampling} and Assumption \ref{Asjsize}-(1) hold, and that both $\int \mathbb{A}(x)^{\otimes 2}\pi_0(dx)$ and $\int \mathbb{B}(x)^{\otimes 2}\pi_0(dx)$ are positive definite. Then we have \begin{align*} \left(\sqrt{n}(\hat{\alpha}_{n}^{\mathrm{cont}}-\alpha_0), \sqrt{T_n}(\hat{\beta}_{n}^{\mathrm{cont}}-\beta_0)\right)\cil N\left(0, \Sigma_0\right), \end{align*} where \begin{equation*} \Sigma_0:=\begin{pmatrix} \displaystyle2\left\{\int\left(\frac{\mathbb{A}(x)}{(\mathbb{A}(x))^\top\alpha_0}\right)^{\otimes2}\pi_0(dx)\right\}^{-1} & \displaystyle O\\ \displaystyle O &\displaystyle \left\{\int\frac{\mathbb{B}^{\otimes2}(x)}{\mathbb{A}(x)^\top\alpha_0}\pi_0(dx)\right\}^{-1} \end{pmatrix}. \end{equation*} \end{Thm} \begin{Rem} \label{hm:rem_asymp.eff} The asymptotic covariance matrix of $\hat{\beta}^{\mathrm{cont}}_n$ is formally the efficient one; see \cite[Theorem 2.2]{KohNuaTra17}. Moreover, that of $\hat{\alpha}^{\mathrm{cont}}_n$ is the same as that of the estimators in \cite{ShiYos06} and \cite{OgiYos11} based on a jump-detection filter. \end{Rem} The next theorem states that, asymptotically, on the set $ \left\{ \mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}$, the number of jumps is less than $k_n$, so that the modified LSE-type diffusion estimator $\tilde{\alpha}_n^{k_n}$ is built only from the (true) ``no-jump'' group and is $\sqrt{n}$-consistent. \begin{Thm}\label{Consistency} Suppose that Assumptions \ref{Ascoef} to \ref{Asjsize} hold, and that both $\int \mathbb{A}(x)^{\otimes 2}\pi_0(dx)$ and $\int \mathbb{B}(x)^{\otimes 2}\pi_0(dx)$ are positive definite.
Then, for any $\ep>0$, we can find a sufficiently large $M>0$ and $N\in\mathbb{N}$ such that \begin{equation} \sup_{n\ge N}P\left(\left\{|\sqrt{n}(\tilde{\alpha}_n^{k_n}-\alpha_0)|>M\right\}\cap\left\{\mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}\right)<\ep. \nonumber \end{equation} \label{thm_consis1} \end{Thm} By re-defining $(\tilde{\alpha}_n^{k_n})$ as \begin{equation} \tilde{\alpha}_n^{k_n}=\begin{cases}\tilde{\alpha}_n^{k_n} & \quad \mathrm{on} \ \ \left\{\mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}, \\ \alpha_0 & \quad \mathrm{on} \ \ \left\{\mathrm{JB}_n^{k_n}>\chi^2_q(2)\right\}, \end{cases} \end{equation} $(\tilde{\alpha}_n^{k_n})$ enjoys the property \eqref{ines}: $\sqrt{n}(\tilde{\alpha}_n^{k_n}-\alpha_0)=O_p(1)$ from Theorem \ref{thm_consis1}, so that by Theorem \ref{osan}, we have \begin{align*} \left(\sqrt{n}( \hat{\alpha}_{n}^{k_n, \mathrm{cont}}-\alpha_0), \sqrt{T_n}(\hat{\beta}_{n}^{k_n, \mathrm{cont}}-\beta_0)\right)\cil N\left(0, \Sigma_0\right), \end{align*} where \begin{align*} &\hat{\alpha}_{n}^{k_n, \mathrm{cont}}:=\tilde{\alpha}_n^{k_n}-\left(\sum_{j=1}^{n}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{(\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n})^2}\right)^{-1}\sum_{j=1}^{n}\left(\frac{1}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n}}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \tilde{\alpha}_n^{k_n})^2}\right)\mathbb{A}_{j-1},\\ &\hat{\beta}_{n}^{k_n, \mathrm{cont}}:=\frac{1}{h_n}\left(\sum_{j=1}^{n}\frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\right)^{-1} \sum_{j=1}^{n} \frac{\Delta_j X^{\mathrm{cont}}}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbb{B}_{j-1}. \end{align*} Recall that we finish our procedure once we have $\mathrm{JB}_n^{k_n}\leq\chi^2_q(2)$. The following theorem is the main claim of this section. \begin{Thm}\label{Ae} Suppose that Assumptions \ref{Ascoef} to \ref{Asjsize} hold and that $\Sigma_0$ in Theorem \ref{osan} is positive definite. 
Then, for any $\ep>0$ and $q\in(0,1)$, we have \begin{align}\label{ae} &P\left(\left\{\left|\sqrt{n}(\hat{\alpha}_{n}^{k_n}-\hat{\alpha}_{n}^{k_n,\mathrm{cont}})\right|\vee\left|\sqrt{T_n}(\hat{\beta}_{n}^{k_n}-\hat{\beta}_{n}^{k_n,\mathrm{cont}})\right|>\ep\right\}\cap \left\{ \mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}\right)\to0. \end{align} \end{Thm} \begin{Rem} Since each stage of our method is conducted under the null hypothesis, the true value $\alpha_0$ appearing in the re-defined $\tilde{\alpha}_n^{k_n}$ is never actually used in practice. \end{Rem} \begin{Rem} We should note that the number of jump removals is automatically determined by the iterative Jarque-Bera type test, and thus there is no need to choose $(k_n)$ in practice. \end{Rem} \section{Numerical experiments}\label{Numerical experiments} \begin{table}[t] \caption{The performance of our estimators in case (i). The mean is given with the standard deviation in parentheses. In this table, $k_n^\star$ denotes the number of jumps.} \label{res3-1} \begin{center} \begin{tabular}{cccccccccc} \hline &&&&&&&&&\\[-3.5mm] $T_n$ & $n$ & $h_n$ & $k_n^\star$ & \multicolumn{6}{c}{(i)$\text{Gamma distribution}$} \\ &&&& $\hat{\alpha}_{n}^{0}$ & $\hat{\beta}_{n}^{0}$ &$\hat{\alpha}_{n}^{k_n}$ &$\hat{\beta}_{n}^{k_n}$ & $\hat{\alpha}_{n}^{k_n^\star}$&$\hat{\beta}_{n}^{k_n^\star}$ \\ \hline 28.8& 1000 & 0.03&15&18.80&0.62&3.38&0.99&3.38&1.00 \\ &&&&(4.31)&(0.13)&(0.20)&(0.09)&(0.20)&(0.09)\\ 62.1&10000&0.006&30&17.7&0.63&3.07&1.00&3.08&1.00\\ &&&&(2.91)&(0.09)&(0.05)&(0.06)&(0.04)&(0.06)\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[t] \caption{The performance of our estimators in case (ii). The mean is given with the standard deviation in parentheses.
In this table, $k_n^\star$ denotes the number of jumps.} \label{res3-2} \begin{center} \begin{tabular}{cccccccccc} \hline &&&&&&&&&\\[-3.5mm] $T_n$ & $n$ & $h_n$ & $k_n^\star$ & \multicolumn{6}{c}{(ii)$\text{Bilateral inverse Gaussian distribution}$} \\ &&&& $\hat{\alpha}_{n}^{0}$ & $\hat{\beta}_{n}^{0}$ &$\hat{\alpha}_{n}^{k_n}$ &$\hat{\beta}_{n}^{k_n}$ & $\hat{\alpha}_{n}^{k_n^\star}$&$\hat{\beta}_{n}^{k_n^\star}$ \\ \hline 28.8& 1000 & 0.03&15&10.83&0.82&3.19&0.99&3.15&1.00 \\ &&&&(3.70)&(0.22)&(0.17)&(0.14)&(0.16)&(0.14)\\ 62.1&10000&0.006&30&10.22&0.82&3.04&1.01&3.04&1.01\\ &&&&(2.46)&(0.15)&(0.06)&(0.09)&(0.05)&(0.09)\\ \hline \end{tabular} \end{center} \end{table} In this section, we conduct Monte Carlo simulations in order to assess the performance of our method. First we consider the following statistical model: \begin{equation}\label{yu:simmodel} dX_t=\sqrt{\frac{\alpha}{1+\sin^2 X_t}}dw_t-\beta X_tdt+dJ_t,\quad X_0=0, \end{equation} with the true value $\theta_{0}:=(\alpha_0,\beta_0)=(3,1)$. As the jump size distributions, we set: \begin{itemize} \item[(i)] Gamma distribution $\Gamma(4,1)$ (one-sided positive jumps); \item[(ii)] Bilateral inverse Gaussian distribution $bIG(2,1,4,1)$ (two-sided jumps). \end{itemize} The bilateral inverse Gaussian random variable $X\sim bIG(\delta_1,\gamma_1,\delta_2,\gamma_2)$ is defined as the difference of two independent inverse Gaussian random variables $X_1\sim IG(\delta_1,\gamma_1)$ and $X_2\sim IG(\delta_2,\gamma_2)$. In the trials, we set the significance level $q=10^{-3}$, and fix the number of jumps just for purposes of comparison. Based on 1000 independently simulated sample paths, the mean and standard deviation of our estimator $(\hat{\alpha}_{n}^{k_n},\hat{\beta}_{n}^{k_n})$ are tabulated in Tables \ref{res3-1} and \ref{res3-2}, together with the estimators $(\hat{\alpha}_{n}^0,\hat{\beta}_{n}^0)$ and $(\hat{\alpha}_{n}^{k_n^\star},\hat{\beta}_{n}^{k_n^\star})$.
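A path from model \eqref{yu:simmodel} with case (i) jumps can be generated by an Euler scheme with compound-Poisson increments, as in the sketch below. The experiments fix the number of jumps; for simplicity the sketch instead uses Poisson arrivals whose intensity $\lambda$ (an assumption of ours) is chosen so that $\lambda T_n$ matches the order of the reported jump counts $k_n^\star$.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha0, beta0 = 3.0, 1.0
n, h = 1000, 0.03                     # T_n = 30, close to the T_n = 28.8 setting
lam = 0.5                             # Poisson intensity of J (assumed; lam*T_n ~ 15)

X = np.zeros(n + 1)
n_jumps = 0
for j in range(n):
    drift = -beta0 * X[j] * h
    diff = np.sqrt(alpha0 / (1.0 + np.sin(X[j]) ** 2) * h) * rng.standard_normal()
    m = rng.poisson(lam * h)          # number of jumps arriving in (t_{j-1}, t_j]
    jump = rng.gamma(4.0, 1.0, size=m).sum()   # case (i): Gamma(4,1) jump sizes
    n_jumps += m
    X[j + 1] = X[j] + drift + diff + jump
```

The pair $(X_{t_{j-1}},\Delta_jX)_{j=1}^n$ produced this way is exactly the kind of input fed to the removal-and-estimation procedure above.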
The first estimator $(\hat{\alpha}_{n}^0,\hat{\beta}_{n}^0)$ is constructed from the whole data set, and the latter estimator $(\hat{\alpha}_{n}^{k_n^\star},\hat{\beta}_{n}^{k_n^\star})$ is constructed from the true no-jump group. These tables indicate the following: \begin{itemize} \item In both cases, the modified estimators get closer and closer to the true value as the jump removals proceed. \item Since the performances of $(\hat{\alpha}_{n}^{k_n},\hat{\beta}_{n}^{k_n})$ and $(\hat{\alpha}_{n}^{k_n^\star},\hat{\beta}_{n}^{k_n^\star})$ are almost the same, the jump detection by our method works well. \item Concerning the drift estimator, the degree of improvement is not large for (ii) relative to (i). This may be due to the two-sided jump structure of $bIG(2,1,4,1)$; the amount of improvement is thus generally expected to be much more significant when the jump distribution is skewed. \item Within the estimator $(\hat{\alpha}_{n}^0,\hat{\beta}_{n}^0)$, the performance of $\hat{\alpha}_{n}^0$ is worse than that of $\hat{\beta}_{n}^0$. This is because the diffusion estimator is based on the squares of the increments $(\Delta_j X)_j$, and is thus heavily affected by jumps. \item Overall, the diffusion parameter is overestimated even by $\hat{\alpha}_{n}^{k_n^\star}$. Since the mean-reverting level of $X$ is $0$, the magnitude of the increments tends to be larger right after a jump occurs; hence such overestimation can arise even though the jumps themselves are correctly picked out. \end{itemize} \bigskip \noindent \textbf{Acknowledgement.} This work was supported by JST, CREST Grant Number JPMJCR14D7, Japan. \bigskip
\section{Introduction}\label{sec:1} The Convolutional Neural Network (CNN) has become one of the most successful computational models in machine learning and artificial intelligence. Remarkable progress has been achieved in the design of successful CNN {\it network structures}, such as the VGG-Net \cite{simonyan2014very}, ResNet \cite{he2016deep}, and DenseNet \cite{huang2016densely}. Less attention has been paid to the design of {\it filter structures} in CNNs. Filters, namely the weights in the convolutional layers, are one of the most important ingredients of a CNN model, as they contain the actual model parameters learned from enormous amounts of data. Filters in CNNs are typically randomly initialized, and then updated using variants and extensions of gradient descent (``back-propagation''). As a result, trained CNN filters have no specific structure, which often leads to significant redundancy in the learned model \cite{DentonZBLF14,han2015learning,SqueezeNet}. Filters with improved properties will have a direct impact on the accuracy and efficiency of CNNs, and the theoretical analysis of filters is also of central importance to the mathematical understanding of deep networks. \begin{figure}[t] \vskip 0.2in \begin{center} \includegraphics[ width = \linewidth, height=0.35\linewidth]{fig1_psi_a.png} \vskip -0.05in \caption{ In a DCFNet, an $L \times L \times M' \times M$ convolutional layer is decomposed into the product of $K$ bases of size $L \times L$ ($\Psi$) and $K M' \times M$ coefficients ($a$), where $\Psi$ is pre-fixed, and $a$ is learned from data. The basis can carry prior (explainable) structure if available. } \label{fig:fb1} \end{center} \vskip -0.2in \end{figure} This paper proposes decomposing the convolutional filters in a CNN into a truncated expansion with pre-fixed bases in the spatial domain, namely the Decomposed Convolutional Filters network (DCFNet), in which the expansion coefficients remain learned from data.
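A minimal numerical check of the proposed decomposition (a sketch with random bases and small, assumed sizes): by linearity, convolving with the reconstructed filters $W_{\lambda',\lambda}=\sum_k (a_{\lambda',\lambda})_k\psi_k$ gives the same output as first convolving with the fixed bases ($\Psi$-step) and then mixing channels with the learned coefficients ($a$-step).

```python
import numpy as np

rng = np.random.default_rng(0)
L, K, Mp, M, Wsz = 5, 8, 3, 4, 12     # patch size, #bases, in/out channels, input size
psi = rng.standard_normal((K, L, L))  # pre-fixed bases (random here, just for the check)
a = rng.standard_normal((Mp, M, K))   # trainable expansion coefficients
x = rng.standard_normal((Mp, Wsz, Wsz))

def conv2d_valid(img, ker):
    """Plain 'valid'-mode 2-D correlation."""
    H = img.shape[0] - ker.shape[0] + 1
    W = img.shape[1] - ker.shape[1] + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(img[i:i + ker.shape[0], j:j + ker.shape[1]] * ker)
    return out

# direct layer: reconstruct W = sum_k a_k psi_k, convolve, and sum over input channels
Wfil = np.einsum('pmk,kij->pmij', a, psi)
y_direct = np.array([sum(conv2d_valid(x[p], Wfil[p, m]) for p in range(Mp))
                     for m in range(M)])

# DCF two-step scheme: Psi-step (fixed bases), then a-step (learned channel mixing)
z = np.array([[conv2d_valid(x[p], psi[k]) for k in range(K)] for p in range(Mp)])
y_dcf = np.einsum('pmk,pkij->mij', a, z)
```

The layer now stores $M'MK$ coefficients instead of $L^2M'M$ weights, which is the $K/L^2$ reduction quantified later in the paper.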
By representing the filters in terms of functional bases, which can come from prior data or task knowledge, rather than as pixel values, the number of trainable parameters is reduced to that of the expansion coefficients; furthermore, regularity conditions can be imposed on the filters via the truncated expansion. For image classification tasks, we empirically observe that DCFNet is able to maintain accuracy with a significant reduction in the number of parameters. This observation holds even when random bases are used. In particular, we adopt in DCFNet the leading Fourier-Bessel (FB) bases \cite{abramowitz1964handbook}, which correspond to the low-frequency components in the input. We experimentally observe the superior performance of DCFNet with FB bases (DCF-FB) in both image classification and denoising tasks. The DCF-FB network reduces the response to the high-frequency components in the input, which are the least stable under image variations such as deformation and often do not affect recognition after being suppressed. This intuition is further supported by a mathematical analysis of the CNN representation, where we first develop a general result for the stability of the CNN representation when the input image undergoes a deformation, under proper boundedness conditions on the convolutional filters (Propositions \ref{prop:l2stable}, \ref{prop:deform1}, \ref{prop:deform2}). After imposing the DCF structure, we show that as long as the trainable expansion coefficients at each layer of a DCF-FB network satisfy a boundedness condition, the $L$-th-layer output is stable with respect to input deformation and the difference is bounded by the magnitude of the distortion (Theorems \ref{thm:deform3}, \ref{thm:deform4}). Apart from FB bases, the DCFNet structure studied in this paper is compatible with general choices of bases, such as standard Fourier bases, wavelet bases, random bases and PCA bases. We numerically test several options in Section \ref{sec:4}.
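To make the FB bases concrete: the radial parts of the zeroth-order Fourier-Bessel modes on the unit disk are $J_0(\lambda_q r)$, with $\lambda_q$ the positive zeros of $J_0$, and they are orthogonal under the weight $r\,dr$. The following self-contained sketch (computing $J_0$ from its integral representation, with the two leading zeros hard-coded) verifies this numerically; it is an illustration of ours, not code from the paper.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule along the last axis (avoids version-dependent np.trapz)."""
    return ((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0).sum(axis=-1)

# J_0(x) = (1/pi) * int_0^pi cos(x sin t) dt
t = np.linspace(0.0, np.pi, 2001)
def bessel_j0(x):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return trapz(np.cos(np.outer(x, np.sin(t))), t) / np.pi

lam1, lam2 = 2.404825557695773, 5.520078110286311   # first two positive zeros of J_0

r = np.linspace(0.0, 1.0, 4001)
f1, f2 = bessel_j0(lam1 * r), bessel_j0(lam2 * r)

inner = lambda u, v: trapz(u * v * r, r)   # radial inner product on the unit disk
cross, norm1, norm2 = inner(f1, f2), inner(f1, f1), inner(f2, f2)
```

The cross inner product vanishes up to quadrature error, while the squared norms equal $J_1(\lambda_q)^2/2$, consistent with the classical orthogonality relations for Bessel functions.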
The stability analysis for DCF-FB networks can be extended to other choices of bases as well, based upon the general theory developed for the CNN representation and using similar techniques. Our work is related to recent results on the usage of bases in deep networks, the model reduction of CNNs, as well as the stability analysis of deep representations. We review these connections in Section \ref{sec:1-1}. Finally, though the current paper focuses on supervised networks for classification and recognition applications on image data, the introduced DCF layers are a generic concept and can potentially be used in reconstruction and generative models as well. We discuss possible extensions in the last section. \subsection{Related works}\label{sec:1-1} {\bf Deep network with bases and representation stability}. The usage of bases in deep networks has been previously studied, including wavelet bases, PCA bases, learned dictionary atoms, etc. Wavelets are a powerful tool in signal processing \cite{mallat2008wavelet} and have been shown to be the optimal bases for data representation under generic settings \cite{donoho1994ideal}. As a pioneering mathematical model of CNNs, the {\it scattering transform} \cite{mallat2012group,Bruna2013,Sifre2013} used pre-fixed weights in the network, namely wavelet filters, and showed that the representation produced by a scattering network is stable with respect to certain variations in the input. Extensions of the scattering transform have been studied in \cite{wiatowski2015deep,wiatowski2017mathematical}, which include a larger class of bases used in the network. Apart from wavelets, deep networks with PCA bases have been studied in \cite{chan2015pcanet}. Making a connection to dictionary learning~\cite{aharon2006rm}, \cite{papyan2016convolutional} studied deep networks in the form of a cascade of convolutional sparse coding layers with theoretical analysis.
Deep networks with random weights have been studied in \cite{giryes2016deep}, with provable representation stability. The DCFNet studied in this paper incorporates structured pre-fixed bases combined via {\it adapted} expansion coefficients learned from data in a supervised way, and demonstrates comparable and even improved classification accuracy on image datasets. While the combination of fixed bases and learned coefficients has been studied in classical signal processing \cite{freeman1991design,mahalanobis1987minimum}, dictionary learning \cite{rubinstein2010double} and computer vision \cite{henriques2013beyond, bertinetto2016staple}, these works were not designed with deep architectures in mind. Meanwhile, the representation stability of DCFNet is inherited thanks to the filter regularity imposed by the truncated bases decomposition. {\bf Network redundancy}. Various approaches have been studied to suppress redundancy in the weights of trained CNNs, including model compression and sparse connections. In model compression, network pruning has been studied in \cite{han2015learning} and combined with quantization and Huffman encoding in \cite{han2015deep_compression}. \cite{chen2015compressing} used hash functions to reduce model size without sacrificing generalization performance. Low-rank compression of filters in CNNs has been studied in \cite{DentonZBLF14,ioannou2015training}. \cite{SqueezeNet, Lin2014} explored model compression with specific CNN architectures, e.g., replacing regular filters with $1 \times 1$ filters. Sparse connections in CNNs have been recently studied in \cite{ioannou2016deep, anwar2017structured, changpinyo2017power}. On the theoretical side, \cite{bolcskei2017optimal} showed that a sparsely-connected network can achieve certain asymptotic statistical optimality. The proposed DCFNet relates the compression of model redundancy to the regularity conditions imposed on the filters.
In the DCF-FB network, redundancy reduction is achieved by suppressing the network response to the high-frequency components of the inputs. \begin{figure*}[t] \vskip -0.05in \begin{center} \includegraphics[width= \linewidth, height=0.275\linewidth]{fig2_multi_fb.png} \vskip -0.025 in \caption{ (Left) Multi-scale convolutional filters and Fourier-Bessel bases at various scales, $j_0 \le \cdots \le j_l \cdots \le J$. (Right) $L\times L$ Gabor filters in 8 directions, of size $L=11$, and their approximation by the $K$ leading FB bases with a reduction rate of $\frac{K}{L^2} = \frac{1}{3}$. The truncation incurs almost no change to the filters. The leading FB bases are shown in the middle panel. Images are rescaled for illustration purposes. } \label{fig:fb2} \end{center} \vskip -0.1in \end{figure*} \section{Decomposed Convolutional Filters}\label{sec:2} \subsection{Notations of CNN} The output at the $l$-th layer of a convolutional neural network (CNN) can be written as $\{ x^{(l)}(u,\lambda) \}_{u \in \mathbb{R}^2, \lambda \in [M_l]}$, where $M_l$ is the number of channels in that layer and $[M] = \{1,\cdots, M\}$ for any integer $M$. A CNN with $L$ layers can be written as a mapping from $\{ x^{(0)}(u,\lambda) \}_{u \in \mathbb{R}^2, \lambda \in [M_0]}$ to $\{ x^{(L)}(u,\lambda) \}_{u \in \mathbb{R}^2, \lambda \in [M_L]}$, recursively defined via $x^{(l)}(u, \lambda) = \sigma( x^{(l)}_{\frac{1}{2}}(u, \lambda) + b^{(l)}(\lambda))$, $\sigma$ being the nonlinear mapping, e.g., ReLU, and \begin{equation}\label{eq:conv1} x^{(l)}_{\frac{1}{2}}(u, \lambda) = \sum_{\lambda' =1}^{M_{l-1}} \int W^{(l)}_{\lambda',\lambda}(v') x^{(l-1)}(u+v', \lambda') dv'. \end{equation} The filters $W^{(l)}_{\lambda',\lambda}(u)$ and the biases $b^{(l)}$ are the parameters of the CNN. In practice, both $x^{(l)}(u,\lambda)$ and $W^{(l)}_{\lambda',\lambda}(u)$ are discretized on a Cartesian grid, and the continuous convolution in \eqref{eq:conv1} is approximated by its discrete analogue.
Throughout the paper we use the continuous spatial variable $u$ for simplicity. Very importantly, the filters $W^{(l)}_{\lambda',\lambda}(u)$ are locally supported, e.g., on $3 \times 3$ or $5 \times 5$ image patches. \subsection{Decomposition of convolutional filters} CNNs typically represent and store filters as vectors of the size of the local patches, which is equivalent to expanding the filters under the {\it delta bases}. Delta bases are not optimal for representing smooth functions. For example, regular functions have fast decaying coefficients under Fourier bases, and natural images have sparse representations under wavelet bases. DCF layers represent the convolutional filters as a truncated expansion under basis functions which are {\it not adapted} through the training process; adaptation comes via the combination of such bases. Specifically, suppose that the convolutional filters $W_{\lambda',\lambda}(u)$ at a certain layer, after a proper rescaling of the spatial variable (detailed in Section \ref{sec:3}), are supported on the unit disk $D$ in $\mathbb{R}^2$. Given a basis $\{\psi_k\}_k$ of the space $L^2(D)$, the filters can be represented as \begin{equation}\label{eq:dcf1} W_{\lambda',\lambda}(u) = \sum_{k=1}^K (a_{\lambda', \lambda })_k \psi_k(u), \end{equation} where $K$ is the truncation level. The decomposition \eqref{eq:dcf1} is illustrated in Figure \ref{fig:fb1}, and conceptually, it can be viewed as a two-step scheme of a convolutional layer: \begin{enumerate} \item ($\Psi$-step) the input is convolved with each of the bases $\psi_k$, $k =1,\cdots, K$, which are {\it pre-fixed}. The convolution for each input channel is independent of the other channels, adding computational efficiency. \item ($a$-step) the intermediate output is linearly transformed by an effectively fully-connected weight matrix $(a_{\lambda',\lambda})_k$ mapping from index $(\lambda',k)$ to $\lambda$, which is {\it adapted} to data.
\end{enumerate} In \eqref{eq:dcf1}, $\{\psi_k\}_k$ can be any basis, and we numerically test different choices in Section \ref{sec:4}, including data-adapted bases and random bases. All experiments consistently show that the convolutional layers can be drastically decomposed and compressed with almost no reduction in classification accuracy, and sometimes even random bases give strong performance. In particular, motivated by classical results in harmonic analysis, we use FB bases in DCFNet, with which the regularity of the filters $ W_{\lambda',\lambda}$ can be imposed through constraining the magnitudes of the coefficients $\{(a_{\lambda', \lambda })_k\}_k$ (Proposition \ref{prop:fb2}). As an example, Gabor filters approximated using the leading FB bases are plotted on the right of Figure \ref{fig:fb2}. In experiments, DCFNet with FB bases shows superior performance in image classification and denoising tasks compared to the original CNN and the other bases tested (Section \ref{sec:4}). Theoretically, Section \ref{sec:3} analyzes the representation stability of DCFNet with respect to input variations, which provides a theoretical explanation of the advantage of FB bases. \subsection{Parameter and computation reduction} Suppose that the original convolutional layer is of size $L \times L \times M' \times M$, as shown in Figure \ref{fig:fb1}, where typically $L= 3$ or $5$ and usually at most 11, while $M'$ and $M$ grow from $3$ (the number of input channels) to a few hundred in the deeper layers of the CNN. After switching to the DCFNet as in \eqref{eq:dcf1}, there are $M' \times M \times K$ tunable parameters $(a_{\lambda', \lambda })_k$. Thus the number of parameters in that layer is a factor $\frac{K}{L^2}$ smaller, which can be significant if $K$ is allowed to be small, particularly when $M'$ and $M$ are large. The theoretical computational complexity can be calculated directly.
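The two-step scheme can be checked numerically on toy shapes: reconstructing the filters via \eqref{eq:dcf1} and convolving directly agrees with the $\Psi$-step followed by the $a$-step. The NumPy sketch below uses illustrative sizes and random bases; it is not the released implementation:

```python
# Toy verification that the Psi-step followed by the a-step equals a
# direct convolution with the filters reconstructed from the truncated
# expansion W_{l',l} = sum_k (a_{l',l})_k psi_k.
import numpy as np

def corr2d(x, w):
    """'valid' 2-D cross-correlation of x (H x H) with w (L x L)."""
    H, L = x.shape[0], w.shape[0]
    Ho = H - L + 1
    return np.array([[np.sum(x[i:i+L, j:j+L] * w)
                      for j in range(Ho)] for i in range(Ho)])

rng = np.random.default_rng(0)
Mp, M, K, L, H = 2, 3, 4, 3, 6           # toy channel/basis/spatial sizes
psi = rng.standard_normal((K, L, L))     # pre-fixed bases psi_k
a = rng.standard_normal((Mp, M, K))      # learned coefficients (a_{l',l})_k
x = rng.standard_normal((Mp, H, H))      # input with Mp channels

# direct path: reconstruct the filters, convolve, sum over input channels
W = np.einsum('pmk,kij->pmij', a, psi)
direct = np.stack([sum(corr2d(x[p], W[p, m]) for p in range(Mp))
                   for m in range(M)])

# two-step path: per-channel Psi-step, then the 1x1 a-step across (p, k)
psi_out = np.stack([[corr2d(x[p], psi[k]) for k in range(K)]
                    for p in range(Mp)])             # (Mp, K, Ho, Ho)
two_step = np.einsum('pmk,pkij->mij', a, psi_out)

assert np.allclose(direct, two_step)
```

For $5 \times 5$ filters truncated to $K=3$ bases, the factor $\frac{K}{L^2}$ above evaluates to $\frac{3}{25} = 12\%$ of the original tunable parameters.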
Suppose that the input and output activations are $W \times W$ in spatial size; the original convolutional layer needs $M' W^2 \cdot M(1+2 L^2)$ flops (the number of convolution operations is $M'M$, each taking $2L^2W^2$ flops, and the summation over channels takes an extra $W^2 M'M$). In contrast, a DCF layer takes $M'W^2\cdot 2K(L^2+M)$ flops ($M'K$ convolutions in the $\Psi$-step, and $2KM'MW^2$ flops in the $a$-step). Thus when $M \gg L^2$, the leading computation cost is $\frac{K}{L^2}$ of that of a regular CNN layer. The reduction rate of $\frac{K}{L^2}$ in both model complexity and theoretical computational flops is confirmed on the actual networks used in experiments, c.f. Table \ref{tab:dcf-acc}. \section{Analysis of Representation Stability}\label{sec:3} The analysis in this section is first carried out for a regular CNN, and then the conditions on the filters are reduced to generic conditions on the learned coefficients in a DCFNet. In the latter case, the proof is given for the Fourier-Bessel (FB) bases, and can be extended to other bases using similar techniques. \subsection{Stable representation by CNN} We consider the spatial deformation operator $D_{\tau}$, where $\tau:\mathbb{R}^{2}\to\mathbb{R}^{2}$ is $C^{2}$, $\rho(u)=u-\tau(u)$, and \[ D_{\tau}x(u,\lambda)=x(\rho(u),\lambda),\quad\forall u,\lambda. \] We assume that the distortion is controlled, and specifically, \begin{itemize} \item[ ] {\bf (A0) } $|\nabla\tau|_{\infty} = \sup_{u} \| \nabla \tau(u) \| <\frac{1}{5}$, $\|\cdot\|$ being the operator norm. \end{itemize} The choice of the constant $\frac{1}{5}$ is purely technical. Thus $\rho^{-1}$ exists, at least locally. Our goal is to control $\|x^{(L)}[D_\tau x^{(0)}] -x^{(L)}[x^{(0)}]\|$, namely that when the input undergoes a deformation the output at the $L$-th layer is not severely changed.
We achieve this in two steps: (1) We show that $ \| D_{\tau}x^{(L)}[x^{(0)}] - x^{(L)}[D_{\tau}x^{(0)}] \| $ is bounded by the magnitude of the deformation up to a constant proportional to the norm of the signal, c.f. Proposition \ref{prop:deform1}. (2) We show that $x^{(L)}$ is stable under $D_\tau$ when $L$ is large, c.f. Proposition \ref{prop:deform2}. To proceed, define the $L^2$ norm of $x(u,\lambda)$ to be \begin{equation} \label{eq:normxl2def} \|x\|^{2}= \frac{1}{M}\sum_{\lambda\in[M]} \frac{1}{|\Omega|} \int_{\mathbb{R}^2} |x(u,\lambda)|^{2}du, \end{equation} where $|\Omega| = (2\cdot 2^J)^2$ is the area of the image-support domain, c.f. Figure \ref{fig:fb2}. We assume that \begin{itemize} \item[ ] {\bf (A1)} $\sigma: \mathbb{R} \to \mathbb{R}$ is non-expansive, \end{itemize} which holds for ReLU. We also define the constants \begin{align} B_l & : = \max \{ \sup_{\lambda} \sum_{\lambda'=1}^{M_{l-1}} \| W^{(l)}_{\lambda', \lambda}\|_1, \sup_{\lambda'} \frac{M_{l-1}}{M_l} \sum_{\lambda=1}^{M_{l}} \| W^{(l)}_{\lambda', \lambda}\|_1 \}, \nonumber \\ C_l & : = \max \{ \sup_{\lambda} \sum_{\lambda'=1}^{M_{l-1}} \| |v || \nabla W^{(l)}_{\lambda', \lambda}(v) |\|_1, \nonumber \\ & ~~~~~~~~ \sup_{\lambda'} \frac{M_{l-1}}{M_l} \sum_{\lambda=1}^{M_{l}} \| |v || \nabla W^{(l)}_{\lambda', \lambda}(v) |\|_1 \}, \label{eq:BlCldef} \end{align} where $\| |v| |\nabla W(v)| \|_1$ denotes $\int_{\mathbb{R}^2} |v| |\nabla W(v)| dv$. First, the following proposition shows that the layer-wise mapping is non-expansive whenever $B_l \le 1$; the proof is left to the Supplementary Material (S.M.). \begin{proposition}\label{prop:l2stable} In a CNN, under (A1), if $B_l \le 1$ for all $l$, (a) The mapping of the $l$-th convolutional layer (including $\sigma$), denoted as $x^{(l)}[x^{(l-1)}]$, is non-expansive, i.e., $ \| x^{(l)}[ x_1 ] - x^{(l)}[ x_2 ] \| \le \| x_1 - x_2\| $ for arbitrary $x_1$ and $x_2$.
(b) $\| x_c^{(l)} \| \le \| x_c^{(l-1)} \|$ for all $l$, where $x_c^{(l)}(u,\lambda) = x^{(l)}(u,\lambda) - x_0^{(l)}(\lambda)$ is the centered version of $x^{(l)}$, $x_0^{(l)}$ being the output at the $l$-th layer from a zero input at the bottom layer. As a result, $\| x_c^{(l)} \| \le \|x_c^{(0)}\| = \|x^{(0)}\|$. \end{proposition} To switch the operator $D_\tau$ with the $L$-layer mapping $x^{(L)}[x^{(0)}]$, the idea is to control the residual of the switching at each layer, which is done in the following lemma, proved in S.M. \begin{lemma}\label{lemma:commuting} In a CNN, under (A0) (A1), $B_l, C_l$ as in \eqref{eq:BlCldef}, \begin{align*} \|D_{\tau} x^{(l)}[ x^{(l-1)} ] & - x^{(l)}[D_{\tau} x^{(l-1)}] \| \\ & \le 4 ( B_l + C_l ) \cdot|\nabla \tau|_{\infty} \| x_c^{(l-1)}\|, \end{align*} where $x_c^{(l)}$ is as in Proposition \ref{prop:l2stable}. \end{lemma} We thus impose the following assumption on the filters: \begin{itemize} \item[ ] {\bf (A2)} For all $l$, $B_l$ and $C_l$ as in \eqref{eq:BlCldef} are less than 1. \end{itemize} The assumption (A2) corresponds to a proper scaling of the convolutional filters so that the mapping in each convolutional layer is non-expansive (Proposition \ref{prop:l2stable}), and in practice, this can be qualitatively maintained by the standard normalization layers in a CNN. Now we can bound the residual of an $L$-layer switching to be additive as $L$ increases: \begin{proposition}\label{prop:deform1} In a CNN, under (A0), (A1), (A2), \begin{equation}\label{eq:thmdeform1} \| D_{\tau}x^{(L)}[x^{(0)}] - x^{(L)}[D_{\tau}x^{(0)}] \| \le 8 L | \nabla \tau |_\infty \|x^{(0)}\|. \end{equation} \end{proposition} The proof is left to S.M. We remark that it is possible to derive a more technical bound in terms of the constants $B_l$, $C_l$ without assuming (A2), using the same technique. We present the simplified result here. In the later analysis of DCFNet, (A2) will be implied by a single condition on the basis expansion coefficients, c.f. (A2').
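The role of the filter scaling in (A2) can be illustrated with a quick numerical spot-check of the non-expansiveness claim of Proposition \ref{prop:l2stable}(a): rescaling random filters so that $B_l \le 1$ makes the discrete layer map non-expansive. The sketch below is our toy discretization (with $M_{l-1} = M_l$, so the normalization factors in \eqref{eq:normxl2def} cancel on both sides), not a proof:

```python
# Spot-check: with a ReLU nonlinearity and filters rescaled so that
# B_l <= 1, the layer map is non-expansive in the discrete l2 norm.
import numpy as np

def corr2d(x, w):
    """'valid' 2-D cross-correlation of x (H x H) with w (L x L)."""
    H, L = x.shape[0], w.shape[0]
    Ho = H - L + 1
    return np.array([[np.sum(x[i:i+L, j:j+L] * w)
                      for j in range(Ho)] for i in range(Ho)])

def layer(x, W, b):
    """x: (M,H,H), W: (M,M,L,L), b: (M,); conv + bias + ReLU."""
    M = len(b)
    pre = np.stack([sum(corr2d(x[lp], W[lp, lam]) for lp in range(M))
                    for lam in range(M)])
    return np.maximum(pre + b[:, None, None], 0.0)

rng = np.random.default_rng(1)
M, L, H = 3, 3, 8
W = rng.standard_normal((M, M, L, L))
l1 = np.abs(W).sum(axis=(2, 3))           # L1 norms ||W_{l',l}||_1
Bl = max(l1.sum(axis=0).max(), l1.sum(axis=1).max())
W /= Bl                                   # enforce B_l <= 1 (here M'=M)
b = rng.standard_normal(M)

x1, x2 = rng.standard_normal((2, M, H, H))
d_in = np.linalg.norm(x1 - x2)
d_out = np.linalg.norm(layer(x1, W, b) - layer(x2, W, b))
assert d_out <= d_in                      # non-expansiveness holds
```

The bound holds because the rescaled linear part has Schur-test row and column sums at most $1$, and ReLU is pointwise $1$-Lipschitz.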
To be able to control $\|D_\tau x^{(L)} - x^{(L)}\|$, we have the following proposition, proved in S.M. \begin{proposition}\label{prop:deform2} In a CNN, under (A1), \[ \| D_\tau x^{(l)} - x^{(l)} \| \le 2 |\tau|_\infty D_l \|x_c^{(l-1)}\|, \] where $x_c^{(l)}$ is as in Proposition \ref{prop:l2stable}, and $D_l : = \max \{ \sup_{\lambda} \sum_{\lambda'=1}^{M_{l-1}} \| \nabla W^{(l)}_{\lambda',\lambda}\|_1, \sup_{\lambda'} \frac{M_{l-1}}{M_l} \sum_{\lambda=1}^{M_{l}} \| \nabla W^{(l)}_{\lambda',\lambda}\|_1 \}$. \end{proposition} One may notice that $|\tau|_\infty$ is not proportional to $|\nabla \tau|_\infty$ when the deformation happens on a large domain, e.g., a rotation. It turns out that the multi-scale architecture of a CNN induces a decrease of the quantity $D_l$ proportional to the inverse of the domain diameter, which compensates for the increase of $|\tau|_\infty$ as the scale grows, as long as the rescaled filters are properly bounded in integral. Thus a unified deformation theory can be derived for DCFNets; see the next section. \begin{figure*}[t] \begin{center} \includegraphics[width=0.85\linewidth, height = 0.275\linewidth]{fig3_cnn_dcf_2.png} \caption{ Example convolutional filters (upper) and network outputs (bottom) in the second layer of a Conv-2 net trained on MNIST (left) and the corresponding DCFNet using 3 FB bases (right). The filters in DCFNet are visibly smoother than those in the CNN, as are the network outputs. Classification accuracy of the two networks is comparable, c.f. Table \ref{tab:dcf-acc}. } \label{fig:filter1} \end{center} \vskip -0.125 in \end{figure*} \subsection{Multi-scale filters and Fourier Bessel (FB) bases} Due to the downsampling (``pooling") in a CNN, the support of the $l$-th layer filters $W^{(l)}_{\lambda',\lambda}$ enlarges as $l$ increases. Suppose that the input is supported on $\Omega$, which is a $ (2\cdot2^J)\times (2\cdot2^J)$ domain, and the CNN has $L$ layers.
In accordance with the $2\times 2$ pooling, we assume that $ W^{(l)}_{\lambda',\lambda}$ is supported on $D(j_l)$, vanishing on the boundary, where $D(j)$ is a disk of radius $2^{j}$, $j_0 \le \cdots \le j_L \le J$, and $D(j_0)$ is of the size of patches at the smallest scale. Let $\{\psi_k\}_k$ be a set of basis functions supported on the unit disk $D(0)$, and we introduce the rescaled bases \[ \psi_{j,k}(u) = 2^{-2j} \psi_k (2^{-j} u), \quad u\in D(j), \] where the normalization $2^{-2j}$ is introduced so that $\| \psi_{j,k}\|_1 = \| \psi_{k}\|_1$, where $\| f \|_1 : = \int_{\mathbb{R}^2} |f(u)| du$. The multiscale filters and bases are illustrated on the left of Figure \ref{fig:fb2}. By \eqref{eq:dcf1}, we have that \begin{equation} \label{eq:dcf2} W^{(l)}_{\lambda',\lambda}(u) = \sum_{k} (a^{(l)}_{\lambda', \lambda })_k \psi_{j_l,k}(u), \quad u\in D(j_l). \end{equation} While DCFNet is compatible with general choices of bases, we focus on the FB bases in this section as an example. FB bases $\psi_k$ are indexed by $k = (m,q)$ where $m$ and $q$ are the angular and radial frequencies respectively. They are supported on the unit disk $D =D(0)$, and in polar coordinates, \[ \psi_{m,q} (r, \theta) = c_{m,q} J_m (R_{m,q} r) e^{i m \theta}, \, r \in [0,1], \, \theta \in [0,2\pi], \] where $J_m$ is the Bessel function of the first kind, $m$ are integers, $q=1,2,\cdots$, $R_{m,q}$ is the $q$-th positive root of $J_m$, and $c_{m,q}$ is the normalizing constant s.t. $\langle \psi_{m,q}, \psi_{m',q'} \rangle = \int_D \psi_{m,q}(u) \psi_{m',q'}^* (u) du = \pi \delta_{m,m'}\delta_{q,q'}$. Furthermore, FB bases are eigenfunctions of the Dirichlet Laplacian on $D$, i.e., $-\triangle \psi_k = \mu_k \psi_k$, where $\mu_{m,q} = R_{m,q}^2$. The eigenvalue $\mu_k$ grows as $k$ increases (Weyl's law). Thus FB bases can be ordered by $k$ so that $\mu_k$ increases, of which the leading few are shown in Table \ref{tab:fb2} and illustrated in Fig. \ref{fig:fb2}.
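The FB basis functions and the eigenvalues $\mu_{m,q} = R_{m,q}^2$ are easy to reproduce numerically, e.g., with SciPy's Bessel routines. In the sketch below, the grid resolution and the use of the real part $\cos(m\theta)$ are our illustrative choices, and the normalization constant $c_{m,q}$ is omitted; the computed eigenvalues recover the values listed in Table \ref{tab:fb2}:

```python
# Sample psi_{m,q} on a grid over [-1,1]^2 (zero outside the unit disk)
# and tabulate the Dirichlet eigenvalues mu_{m,q} = R_{m,q}^2.
import numpy as np
from scipy.special import jv, jn_zeros

def fb_basis(m, q, n=64):
    """Real part of psi_{m,q} (normalization omitted) on an n x n grid."""
    g = np.linspace(-1, 1, n)
    xx, yy = np.meshgrid(g, g)
    r, th = np.hypot(xx, yy), np.arctan2(yy, xx)
    R = jn_zeros(m, q)[-1]            # q-th positive root of J_m
    return np.where(r <= 1, jv(m, R * r) * np.cos(m * th), 0.0)

# leading eigenvalues for (m, q) = (0,1), (1,1), (2,1), (0,2)
mus = [jn_zeros(m, q)[-1] ** 2 for m, q in [(0, 1), (1, 1), (2, 1), (0, 2)]]
print(np.round(mus, 2))               # 5.78, 14.68, 26.37, 30.47
```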
In principle, the frequencies $m$ and $q$ should be truncated according to the Nyquist sampling rate. This truncation turns out to be rarely needed in our setting, due to the significant basis truncation already performed in DCFNet. \begin{table}[t] \scriptsize \begin{centering} \begin{tabular}[t]{ c | c c c c c c c c } \hline $k$ & 1 & 2,3 & 4,5 & 6 & 7,8 & 9,10 & 11,12 & 13,14 \\ \hline $m$ & 0 & 1 & 2 & 0 & 3 & 1 & 4 & 2 \\ $q$ & 1 & 1 & 1 & 2 & 1 & 2 & 1 & 2 \\ $\mu_k$ & 5.78 & 14.68 & 26.37 & 30.47 & 40.71 & 49.22 & 57.58 & 70.85 \\ \hline \end{tabular} \caption{\label{tab:fb2} The angular frequency $m$, radial frequency $q$ and Dirichlet eigenvalue $\mu_k$ of the first $14$ Fourier-Bessel bases. Two values of $k$ correspond to one pair $(m,q)$ when $m\neq 0$, because both the real and imaginary parts of the bases are used as real-valued bases. } \vskip -0.1in \end{centering} \end{table} The key technical quantities in the stability analysis of a CNN are $\| W^{(l)}_{\lambda',\lambda} \|_1$ and $ \| |v| |\nabla W^{(l)}_{\lambda',\lambda} (v)| \|_1$, and with FB bases, these integrals are bounded by a $\mu_k$-weighted $L^2$-norm of $a^{(l)}_{\lambda',\lambda}$, defined as $ \| a \|_{FB} = (\sum_k \mu_k a_k^2 )^{1/2}$. The following lemma and proposition are proved in S.M. \begin{lemma}\label{lemma:fb1} Suppose that $\{ \psi_k \}$ are FB bases and the function $F(u) = \sum_k a_k \psi_k(u)$ is smooth on the unit disk. Then $\frac{1}{\sqrt{\pi}}\| \nabla F\|_2 = \| a \|_{FB} $, where $\mu_k$ are the eigenvalues of $\psi_k$ as eigenfunctions of the negative Dirichlet Laplacian on the unit disk. As a result, $\| \nabla F\|_1 \le \pi \| a \|_{FB}$. \end{lemma} \begin{proposition}\label{prop:fb2} Using FB bases, $\| |v| |\nabla W^{(l)}_{\lambda',\lambda} (v)| \|_1$ and $\| W^{(l)}_{\lambda',\lambda} \|_1$ are bounded by $\pi \|a^{(l)}_{\lambda',\lambda}\|_{FB}$ for all $\lambda', \lambda$ and $l$.
\end{proposition} Notice that the boundedness of $\| a \|_{FB}$ implies a decay of $|a_k|$ at least as fast as $\mu_k^{-1/2}$. This justifies the truncation of the FB expansion to the leading few bases, which correspond to the low-frequency modes. Proposition \ref{prop:fb2} implies that $B_l$ and $C_l$ are all bounded by $A_l$ defined as \[ \begin{split} A_l & : = \pi \max \{ \sup_{\lambda} \sum_{\lambda'=1}^{M_{l-1}} \| a^{(l)}_{\lambda', \lambda}\|_{FB} , \\ & ~~~~~~~~ \sup_{\lambda'} \frac{M_{l-1}}{M_l} \sum_{\lambda=1}^{M_{l}} \| a^{(l)}_{\lambda', \lambda}\|_{FB} \}. \end{split} \] Then we introduce \begin{itemize} \item[ ] {\bf (A2')} For all $l$, $A_l \le 1$, \end{itemize} and the result of Proposition \ref{prop:deform1} extends to DCFNet: \begin{theorem}\label{thm:deform3} In a DCFNet with FB bases, under (A0), (A1), (A2'), \begin{equation*} \|D_\tau x^{(L)}[x^{(0)}] - x^{(L)}[D_\tau x^{(0)}] \| \le 8 L | \nabla \tau |_\infty \|x^{(0)}\|. \end{equation*} \end{theorem} Combined with Proposition \ref{prop:deform2}, we have the following deformation stability bound, proved in S.M.: \begin{theorem}\label{thm:deform4} In a DCFNet with FB bases, under (A0), (A1), (A2'), \begin{equation} \label{eq:thm-deform4} \begin{split} \| x^{(L)}[x^{(0)}] & - x^{(L)}[D_\tau x^{(0)}] \| \\ & \le ( 8 L | \nabla \tau |_\infty + 2 \cdot 2^{- j_L} |\tau|_\infty ) \|x^{(0)}\|. \end{split} \end{equation} \end{theorem} \section{Experiments}\label{sec:4} \begin{figure*} [t] \vskip -0.1in \centering \subfloat[Original] {\label{fig:original} \includegraphics[angle=0, width=.26\textwidth]{original.pdf} \hspace{7pt}} \subfloat[Gaussian noise] {\label{fig:gaussian} \includegraphics[angle=0, width=.26\textwidth]{gaussian.pdf} \hspace{7pt}} \subfloat[Speckle noise] {\label{fig:speckle} \includegraphics[angle=0, width=.26\textwidth]{speckle2.pdf} \hspace{0pt}} \caption{ Examples (randomly selected) of image denoising on the SVHN dataset with PSNR values shown.
The average PSNR over the entire test set of 26,032 samples: with Gaussian noise, 30.01 for CNN, 31.24 for DCF-fb; with Speckle noise, 28.15 for CNN, 29.84 for DCF-fb. } \vskip -0.05in \label{fig:denoising} \end{figure*} \begin{table}[b] \vskip -0.1 in \small \begin{centering} \begin{tabular}[t]{ l | l } \hline ~~Conv-2 & ~~Conv-3 \\ \hline c5x5x1x16 ReLu mp3x3& c5x5x3x64 ReLu mp3x3\\ c5x5x16x64 ReLu mp3x3 & c5x5x64x128 ReLu mp3x3\\ fc128 ReLu fc10 & c5x5x128x256 ReLu mp3x3\\ & fc512 ReLu fc10 \\ \hline \end{tabular} \caption{\label{tab:network-arch-1} \small CNN network architectures used in the MNIST, SVHN, and CIFAR10 experiments. c$L$x$L$x$M'$x$M$ stands for a convolutional layer of patch size $L$x$L$ and input (output) channel $M'$ ($M$). mp$L$x$L$ stands for $L$x$L$ max-pooling. For the corresponding DCFNets, each $L$x$L$x$M'$x$M$ CNN conv layer is expanded over $K$ $L \times L$ bases, with trainable coefficients implemented as a $1 \times 1 \times M'K \times M$ conv layer. } \end{centering} \vskip 0.1 in \end{table} In this section, we experimentally demonstrate that convolutional filters in a CNN can be decomposed as a truncated expansion with pre-fixed bases, where the expansion coefficients remain learned from data. Though the number of trainable parameters is significantly reduced, the accuracy in tasks such as image classification and face verification is still maintained. Such empirical observations hold for data-independent Fourier-Bessel (FB) and random bases, and data-dependent PCA bases. \subsection{Datasets} We perform an experimental evaluation of DCFNets using the following public datasets: {\bf MNIST.} $28 \times 28$ grayscale images of digits from 0 to 9, with 60,000 training and 10,000 testing samples. {\bf SVHN.} The Street View House Numbers (SVHN) dataset \cite{svhn} contains $32 \times 32$ colored images of digits 0 to 9, with 73,257 training and 26,032 testing samples. The additional training images were not used.
{\bf CIFAR10.} The dataset \cite{cifar} contains $32 \times 32$ colored images from 10 object classes, with 50,000 training and 10,000 testing samples. {\bf VGG-Face.} A large-scale face dataset, which contains about 2.6M face images from over 2.6K people \cite{vgg-face}. \footnote{The software is publicly available at \url{https://github.com/xycheng/DCFNet}.} \subsection{Object classification} In our object classification experiments, we evaluate the DCFNet with three types of predefined bases: Fourier-Bessel bases (DCF-FB), random bases which are generated by Gaussian vectors (DCF-RB), and PCA bases which are principal components of the convolutional filters in a pre-trained corresponding CNN model (DCF-PCA). Three CNN network architectures are used for classification: Conv-2 and Conv-3, shown in Table~\ref{tab:network-arch-1}, and VGG-16 \cite{simonyan2014very}. To generate the corresponding DCFNet structure from a CNN, each CNN conv layer is expanded over a set of pre-defined bases, and the obtained trainable expansion coefficients are implemented as a $1 \times 1$ conv layer. For example, a $5 \times 5 \times M' \times M$ conv layer is expanded over $K$ $5 \times 5$ bases, with the trainable coefficients in a $1 \times 1 \times M'K \times M$ convolutional layer. $K$ denotes the number of bases used, and we evaluate multiple values of $K$ for different levels of parameter reduction. In order to be compatible with existing deep learning frameworks, pre-fixed bases are currently implemented as regular convolutional layers with zero learning rate. The additional memory cost incurred by such a convenient implementation can be eliminated with a more careful implementation, as the bases are pre-fixed and the addition across channels can be computed on the fly. The classification accuracy of DCFNets on various datasets is shown in Table \ref{tab:dcf-acc}.
We observe that, by using only 3 Fourier-Bessel (FB) bases, we already obtain accuracy comparable to that of the original full CNN models on all datasets, while using only $12\%$ of the parameters for $5 \times 5$ filters. When more FB bases are used, DCFNets outperform the corresponding CNN models, still with significantly fewer parameters. As FB bases correspond to the low-frequency components in the inputs, the DCF-FB network responds less to the high-frequency nuance details, which are often irrelevant for classification tasks. The superiority of the DCF-FB network is further shown with less training data. For SVHN with 500 training samples, the testing accuracy (on a 50,000 testing set) of the regular CNN and DCF-FB are 63.88\% and 66.79\% respectively. With 1000 training samples, the test accuracies are 73.53\% vs. 75.45\%. Surprisingly, we observe that DCF with random bases also reports acceptable performance. Both the FB and random bases are data independent. For comparison purposes, we also evaluate DCFNets with data-dependent PCA bases, which are principal components of the corresponding convolutional filters in pre-trained CNN models. When the CNN model is pre-trained with all training data, the PCA bases (pca-f) show comparable performance to the FB bases. However, the quality of the PCA bases (pca-s) degenerates when only a randomly selected subset of the training set is used for the pre-training. \begin{table}[h] \begin{centering} \scriptsize \begin{tabular}[t]{ l | c c | c c | c c } \hline \multicolumn{7}{c}{MNIST conv-2, 5x5 } \\ \hline & fb & rb & pca-s & pca-f & \# param.
& \# MFlops \\ \hline CNN & \multicolumn{4}{|c|}{99.40 } & 2.61$\times 10^4$ & 3.37 \\ \hline $K$=14 & 99.47 & 99.35 & 99.38 & 99.41 & 1.46$\times 10^4$ & 2.40 \\ $K$=8 & 99.48 & 99.26 & 99.28 & 99.45 & 8.40$\times 10^3$ & 1.37 \\ $K$=5 & 99.39 & 99.28 & 99.28 & 99.43 & 5.28$\times 10^3$ & 0.86 \\ $K$=3 & 99.40 & 98.69 & 99.19 & 99.35 & 3.20$\times 10^3$ & 0.51 \\ \hline \multicolumn{7}{c}{SVHN conv-3, 5x5 } \\ \hline & fb & rb & pca-s & pca-f & \# param. & \# MFlops \\ \hline CNN & \multicolumn{4}{|c|}{94.22} & 1.03$\times 10^6$ & 201.64 \\ \hline $K$=14 & 94.63 & 93.75 & 94.52 & 94.42 & 5.74$\times 10^5$ & 121.91 \\ $K$=8 & 94.39 & 92.05 & 93.85 & 94.30 & 3.30$\times 10^5$ & 69.67 \\ $K$=5 & 93.93 & 91.28 & 92.34 & 94.03 & 2.06$\times 10^5$ & 43.55 \\ $K$=3 & 92.84 & 88.47 & 91.88 & 93.10 & 1.24$\times 10^5$ & 26.13 \\ \hline \multicolumn{7}{c}{Cifar10 conv-3, 5x5 } \\ \hline & fb & rb & pca-s & pca-f & \# param. & \# MFlops \\ \hline CNN & \multicolumn{4}{|c|}{ 85.66} & & \\ \hline $K$=14 & 85.88 & 84.76 & 85.27 & 85.34 & & \\ $K$=8 & 85.30 & 81.27 & 84.70 & 85.09 & \multicolumn{2}{c}{(same as above)} \\ $K$=5 & 84.35 & 77.96 & 83.12 & 83.94 & & \\ $K$=3 & 83.12 & 74.05 & 80.94 & 82.91 & & \\ \hline \multicolumn{7}{c}{ Cifar10 vgg-16, 3x3} \\ \hline & fb & rb & pca-s & pca-f & \# param. & \# MFlops \\ \hline CNN & \multicolumn{4}{|c|}{87.02} & 1.47$\times 10^7$ & 547.20 \\ \hline $K$=5 & 87.79 & 84.16 & 87.98 & 87.60 & 8.18$\times 10^6$ & 311.68 \\ $K$=3 & 88.21 & 78.46 & 87.45 & 87.54 & 4.91$\times 10^6$ & 187.02 \\ \hline \end{tabular} \caption{\label{tab:dcf-acc} Classification accuracy using DCFNets on various image benchmarks with different numbers of bases $K$. ``fb" and ``rb" stand for Fourier-Bessel bases and random bases respectively. ``pca-s" and ``pca-f" stand for PCA bases computed from a network pre-trained on a small subset of training images (1,000 random samples) and the full training set respectively. ``\# param."
is the number of parameters in all convolutional layers, and MFlops is the number of flops in all convolutional layers (including ReLU). } \end{centering} \vskip -0.2in \end{table} \subsection{Image denoising} To gain intuition into the superior classification performance of DCFNet, we conduct a set of ``toy" image denoising experiments on the SVHN image dataset. We take the first three $5\times5$ convolution blocks from the Conv-3 CNN network in Table~\ref{tab:network-arch-1}, which is used in our SVHN object classification experiments. We remove all pooling layers, and append at the end an FC-256 followed by a Euclidean loss layer. We then decompose each $5\times5$ conv layer in this CNN network over 3 random bases and 3 FB bases respectively, to produce DCF-RB and DCF-FB networks. We use SVHN training images with their gray-scale versions as labels to train all three networks to simply reconstruct an input image (in gray-scale). Figure~\ref{fig:denoising} shows how the three trained networks behave when reconstructing examples from the SVHN testing images. Without noise added to the input images (Figure~\ref{fig:original}), all three networks report decent reconstructions, while DCF-RB is inferior to both CNN and DCF-FB. PSNR values indicate that CNN often produces more precise reconstructions; however, the missing high-frequency components in DCF-FB reconstructions are mostly nuance details. With noise added, as in Figures~\ref{fig:gaussian} and \ref{fig:speckle}, DCF-FB produces significantly superior reconstructions over both CNN and DCF-RB, with about one tenth the number of parameters of the CNN. The above empirical observations clearly indicate that Fourier-Bessel bases, which correspond to the low-frequency components in the inputs, enable DCF to ignore the high-frequency nuance details, which are often less stable under input variations, and mostly irrelevant for tasks such as classification.
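For reference, the PSNR values quoted in this subsection follow the standard definition $10\log_{10}(\mathrm{peak}^2/\mathrm{MSE})$; a minimal sketch, assuming images scaled to $[0,1]$ so that the peak value is $1.0$:

```python
# Standard peak signal-to-noise ratio in dB between two images.
import numpy as np

def psnr(x, y, peak=1.0):
    """PSNR in dB; assumes x, y share the same value range [0, peak]."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((32, 32))                    # stand-in image in [0,1]
noisy = np.clip(clean + 0.05 * rng.standard_normal((32, 32)), 0, 1)
print(round(psnr(clean, noisy), 1))             # roughly 26 dB here
```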
Such empirical observations provide good intuition behind the superior classification performance of DCF, and are also consistent with the theoretical analysis of representation stability in Section~\ref{sec:3}. \subsection{Face verification} We present a further evaluation of DCFNet on face verification tasks using ``very deep" network architectures, which comprise a long sequence of convolutional layers. In order to train such complex networks, we adopt the very large scale \emph{VGG-face} \cite{vgg-face} dataset, which contains about 2.6M face images from over 2.6K people. As shown in Table~\ref{tab:facenet}, we adopt the VGG-Very-Deep-16 CNN architecture as detailed in \cite{vgg-face}, modifying layers 32 and 35 to change the output features from 4,096 dimensions to 512. This CNN network comprises 16 weight layers, and all except the last Fully-Connected (FC) layer utilize $3\times3$ or $5\times5$ filters. The inputs to both the CNN and the DCFNet are face images of size $224 \times 224$ (with the average face image subtracted). As shown in Table~\ref{tab:vgg-face}, with FB bases, even when using only $\frac{1}{3}$ of the parameters at the weight layers ($K=3$ for $3 \times 3$, $K=8$ for $5 \times 5$), the DCFNet shows verification accuracy similar to the CNN structure on the challenging LFW benchmark. Note that our CNN model outperforms the \emph{VGG-face} model in \cite{vgg-face}, and such improvement is mostly due to the smaller output dimension we adopted, as both models share similar architectures and are trained on the same face dataset. \begin{table}[h!]
\centering \scriptsize \begin{tabular}{c|c|c} \hline Layer & CNN & DCFNet \\ \hline \multirow{2}{*}{1} &\multirow{2}{*}{conv $3\times 3 \times 3\times 64$} & 3 $3\times 3$ bases\\ & & conv $1\times 1 \times 9\times 64$\\ \hline 2 & \multicolumn{2}{|c}{ReLu}\\ \hline \multirow{2}{*}{3} &\multirow{2}{*}{conv $3\times 3 \times 64 \times 64$} & 3 $3\times 3$ bases\\ & & conv $1\times 1 \times 192 \times 64$\\ \hline 4-5 & \multicolumn{2}{|c}{ReLu, maxPool $2\times 2$}\\ \hline \multirow{2}{*}{6} &\multirow{2}{*}{conv $3\times 3 \times 64\times 128$} & 3 $3\times 3$ bases\\ & & conv $1\times 1 \times 192\times 128$\\ \hline 7 & \multicolumn{2}{|c}{ReLu}\\ \hline \multirow{2}{*}{8} &\multirow{2}{*}{conv $3\times 3 \times 128 \times 128$} & 3 $3\times 3$ bases\\ & & conv $1\times 1 \times 384 \times 128$\\ \hline 9-10 & \multicolumn{2}{|c}{ReLu, maxPool $2\times 2$}\\ \hline \multicolumn{3}{c}{(1-31 CNN layers are identical to \emph{vgg-face} model in \cite{vgg-face}.)} \\ \hline \multirow{2}{*}{32} &\multirow{2}{*}{conv $5\times 5 \times 512\times 512$} & 8 $5\times 5$ bases \\ & & conv $1\times 1 \times 4096\times 512$\\ \hline 33-34 & \multicolumn{2}{|c}{ReLu, dropout}\\ \hline \multirow{2}{*}{35} &\multirow{2}{*}{conv $3\times 3 \times 512 \times 512$} & 3 $3\times 3$ bases\\ & & conv $1\times 1 \times 1536 \times 512$\\ \hline 36-39 & \multicolumn{2}{|c}{ReLu, dropout, FC, softmax}\\ \hline \end{tabular} \caption{\label{tab:facenet} Network architecture for face experiments. For the corresponding DCFNet, each $L$x$L$x$M'$x$M$ CNN conv layer is expanded over $K$ $L \times L$ bases, with trainable coefficients implemented as a $1 \times 1 \times M'K \times M$ conv layer ($K=3$ for $3 \times 3$, $K=8$ for $5 \times 5$). } \end{table} \begin{table}[h] \begin{centering} \small \begin{tabular}[t]{ c | c c c} \hline & Accuracy & \# param.
& \# GFlops \\ \hline \emph{VGG-face} & 97.27 \% & - & - \\ \hline CNN & 97.65 \% & 21.26 $\times 10^6$ & 30.05 \\ DCFNet & 97.32 \% & 7.01 $\times 10^6$ & 10.09\\ \hline \end{tabular} \caption{\label{tab:vgg-face} Face verification accuracy on the LFW benchmark. } \end{centering} \end{table} \section{Conclusion and Discussion}\label{sec:5} The paper studies CNNs where the convolutional filters are represented as a truncated expansion under pre-fixed bases and the expansion coefficients are learned from labeled data. Experimentally, we observe that on various object recognition datasets the classification accuracy is maintained with a significant reduction of the number of parameters, and the performance of the Fourier-Bessel (FB) bases is consistently superior. The truncated FB expansion in DCFNet can be viewed as a regularization of the filters. In other words, DCF-FB is less susceptible to the high-frequency components in the input, which are the least stable under expected input variations and often do not affect recognition when suppressed. This interpretation is supported by the image denoising experiments, where DCF-FB performs favorably compared to the original CNN and other basis options on noisy inputs. The stability of the DCFNet representation is also proved theoretically, showing that the perturbation of the deep features with respect to input variations can be bounded under generic conditions on the decomposed filters. To extend the work, firstly, DCF layers can be incorporated in networks for unsupervised learning, for which the denoising experiment serves as a first step. Secondly, the stability analysis can be extended by testing the resilience to adversarial noise. Finally, more structures may be imposed across the channels, concurrently with the structures of the filters in space. \newpage \bibliographystyle{icml2018} \section{Proofs} In the proofs, some technical details are omitted for brevity and readability. The full proofs are left to the long version of the work.
\begin{proof}[Proof of Proposition 3.1] To prove (a), we omit $(l)$ in $W^{(l)}$ and let $M=M_l$, $M' = M_{l-1}$, $B_{\lambda', \lambda} = \| W_{\lambda', \lambda}\|_1$. By definition of $B_l$, we have that \begin{equation}\label{eq:Bl-bound} \begin{split} & \sum_{\lambda' \in [ M' ]} B_{\lambda', \lambda} \le B_l, \quad \forall \lambda \\ & \sum_{\lambda \in [ M ]} B_{\lambda', \lambda} \le B_l \frac{M}{M'}, \quad \forall \lambda'. \end{split} \end{equation} We essentially use Schur's test, being more careful with the summation over $\lambda'$. We derive, via Cauchy-Schwarz (which here is equivalent to Schur's test): \begin{align*} & \| x^{(l)}[ x_1 ] - x^{(l)}[ x_2 ] \|^{2} \cdot |\Omega| M\\ = & \sum_{\lambda\in[M ] }\int \left|\sigma ( \sum_{\lambda'\in[ M' ]}\int x_1 (u+v',\lambda')W_{\lambda',\lambda}(v')dv'+b(\lambda) ) - \sigma (\sum_{\lambda'\in[ M' ]}\int x_2(u+v', \lambda')W_{\lambda',\lambda}(v')dv'+b(\lambda) ) \right|^{2}du\\ \le & \sum_{\lambda\in[ M ]}\int\left| \sum_{\lambda'\in[ M' ]} \int x_1(u+v',\lambda')W_{\lambda',\lambda}(v')dv'- \sum_{\lambda'\in[ M' ]} \int x_2(u+v', \lambda')W_{\lambda',\lambda}(v')dv'\right|^{2}du\\ = & \sum_{\lambda\in[ M ]}\int\left| \sum_{\lambda'\in[ M' ]}\int( x_1 - x_2 )(\tilde{v},\lambda')W_{\lambda',\lambda}(\tilde{v}-u)d\tilde{v} \right|^{2}du\\ \le & \sum_{\lambda\in[ M ]} \int\left( \sum_{\lambda_{1}'\in[ M' ]} \int| ( x_1 - x_2 ) ( v_{1},\lambda_1' )|^{2}\left|W_{ \lambda_1' ,\lambda}(v_{1}-u)\right|dv_{1}\right) \cdot \left(\sum_{\lambda_{2}'\in[ M' ]} \| W_{\lambda_2', \lambda}\|_1 \right)du\\ \le & B_l \cdot\sum_{\lambda_{1}'\in[M']}\int|( x_1 - x_2 )(v_{1}, \lambda_1' )|^{2} \left( \sum_{\lambda\in[M]} \| W_{ \lambda_1' , \lambda}\|_1 \right)dv_{1}\\ \le & B_l \cdot B_l \frac{M}{M'} \cdot \| x_1 - x_2 \|^{2} |\Omega| M' = B_l^2 M \| x_1 - x_2 \|^{2} |\Omega|, \end{align*} which means that \[ \| x^{(l)}[ x_1 ] - x^{(l)}[ x_2 ] \| \le B_l \| x_1 - x_2 \|. \] Thus $B_l \le 1$ implies (a).
To prove (b), we first verify that $x_0^{(l)}(\lambda)$ is indeed constant over space for all $\lambda$ and $l$. When $l=0$, $x_0^{(0)}$ is all zero, so the claim is true. Suppose that the claim holds for $l-1$; then \[ x_0^{(l)}(u,\lambda) = \sigma \left( \sum_{\lambda'} \int x_0^{(l-1)}(\lambda') W^{(l)}_{\lambda',\lambda}(v')dv' + b^{(l)}(\lambda) \right) \] which again does not depend on $u$. So we can write $x_0^{(l)}$ as $x_0^{(l)}(\lambda)$. Now by (a), \[ \| x_c^{(l)} \| = \| x^{(l)}[ x^{(l-1)} ] - x^{(l)}[x_0^{(l-1)}] \| \le \| x^{(l-1)} - x_0^{(l-1)} \| = \| x_c^{(l-1)} \|, \] which proves (b). \end{proof} \begin{proof}[Proof of Lemma 3.2] To illustrate the idea, we first prove the lemma in the one-dimensional case, i.e. $u\in \mathbb{R}$ instead of $\mathbb{R}^2$. We then extend to the 2D case. In the 1D case, the constant $c_1$ can be improved to be 2, and we only need $|\tau'|_\infty <\frac{1}{2}$. In the 2D case, we need $c_1= 4$ as in the final claim. To simplify notation, we write $y[x]$ for the mapping $x^{(l)}[x^{(l-1)}]$, $x_c$ for $x_c^{(l-1)}$, $M' = M_{l-1}$, $M = M_l$, and $W$ for $W^{(l)}$. Let $C_{\lambda',\lambda} = \int |v| |\frac{d}{dv}W_{\lambda',\lambda}(v)| dv$, and $B_{\lambda',\lambda} = \int |W_{\lambda',\lambda}(v)|dv$; then \eqref{eq:Bl-bound} holds, and the same relation holds for $C_{\lambda', \lambda}$ and $C_l$. By definition, \begin{align*} D_{\tau}y[x](u,\lambda) & =\sigma\left(\sum_{\lambda'\in[M']}\int x(\rho(u)+v',\lambda')W_{\lambda',\lambda}(v')dv'+b(\lambda)\right),\\ y[D_{\tau}x](u,\lambda) & =\sigma\left(\sum_{\lambda'\in[M']}\int x(\rho(u+v'),\lambda')W_{\lambda',\lambda}(v')dv'+b(\lambda)\right).
\end{align*} Relaxing by removing $\sigma$ as in the proof of Proposition 3.1, one can derive that \[ \|D_{\tau}y[x]-y[D_{\tau}x]\|^{2} \cdot |\Omega| M \le \|E_{1}+E_{2}\|^{2}, \] where \begin{align*} E_{1}(u,\lambda) & =\sum_{\lambda'\in[M']}\int x_c(v,\lambda')(W_{\lambda',\lambda}(v-\rho(u))-W_{\lambda',\lambda}(\rho^{-1}(v)-u))dv,\\ E_{2}(u,\lambda) & =\sum_{\lambda'\in[M']}\int x_c(v,\lambda')W_{\lambda',\lambda}(\rho^{-1}(v)-u)(|(\rho^{-1})'(v)|-1)dv. \end{align*} Notice that $x$ is replaced by $x_c$, because $x$ and $x_c$ differ by a field that is constant over space in each channel $\lambda'$. We bound $\|E_{1}\|$ and $\|E_{2}\|$ respectively. For $E_{1}$, we introduce $k_{\lambda',\lambda}^{(1)}(v,u)=W_{\lambda',\lambda}(v-\rho(u))-W_{\lambda',\lambda}(\rho^{-1}(v)-u)$, and rewrite it as \[ E_{1}(u,\lambda)=\sum_{\lambda'\in[M']}\int x_c(v,\lambda')k_{\lambda',\lambda}^{(1)}(v,u)dv. \] Applying Schur's test as in the proof of Proposition 3.1, one can show that \[ \|E_{1}\| \le 2 |\tau'|_\infty C_l \sqrt{ M |\Omega| } \|x_c\| \] as long as for all $\lambda',\lambda$, \begin{equation} \sup_{u}\int\left|k_{\lambda',\lambda}^{(1)}(v,u)\right|dv,\,\sup_{v}\int\left|k_{\lambda',\lambda}^{(1)}(v,u)\right|du \le 2C_{\lambda',\lambda}|\tau'|_{\infty}. \label{eq:C-lambda'-lambda} \end{equation} \eqref{eq:C-lambda'-lambda} can be verified by a 1D change of variables; the details are omitted. For $E_{2}$, we introduce $k_{\lambda',\lambda}^{(2)}(v,u)=W_{\lambda',\lambda}(\rho^{-1}(v)-u)(|(\rho^{-1})'(v)|-1)$, and then we have that \[ \int |k_{\lambda',\lambda}^{(2)}(v,u)|du\le|(\rho^{-1})'(v)-1|\cdot\int|W_{\lambda',\lambda}(u)|du\le2|\tau'|_{\infty}B_{\lambda',\lambda}, \quad\forall v, \] where we use $1-(\rho^{-1})'(t)=\frac{-\tau'(\rho^{-1}(t))}{1-\tau'(\rho^{-1}(t))}$ and $|\tau'|<\frac{1}{2}$ to obtain the factor 2.
Meanwhile, \[ \int|k_{\lambda',\lambda}^{(2)}(v,u)|dv=\int|W_{\lambda',\lambda}(\tilde{v}-u)||1-|\rho'(\tilde{v})||d\tilde{v}\le|\tau'|_{\infty}B_{\lambda',\lambda}, \quad\forall u. \] This gives that \[ \|E_{2}\| \le 2 |\tau'|_\infty B_l \sqrt{ M |\Omega| }\|x_c\|. \] Putting these together, we have that \[ \sqrt{ M |\Omega| } \|D_{\tau}y[x]-y[D_{\tau}x]\| \le \| E_1 + E_2 \| \le \|E_{1}\| + \|E_{2}\| \le 2 |\tau'|_\infty (C_l + B_l) \sqrt{ M |\Omega| }\|x_c\| \] which proves the claim in the 1D case. The extension to the 2D case uses standard elementary techniques. The assumption $| \nabla \tau |_\infty < \frac{1}{5}$ is used to derive that $ | |J \rho| - 1 |$, $| |J \rho^{-1}| - 1 | \le 4 | \nabla \tau |_\infty$, and $|J \rho|$, $ |J \rho^{-1}| \le 2$. In all the formulas, $|(\rho^{-1})'(v)|$ is replaced by the Jacobian determinant $|J \rho^{-1}(v)|$, and the integration in 1D is replaced by integration along a segment in the 2D space. The details are omitted. \end{proof} \begin{proof}[Proof of Prop. 3.3] Under these conditions, Proposition 3.1 applies. Let $c_1 = 4$. Introduce the notation \[ y_{l}=x^{(L)}\circ\cdots\circ D_{\tau}x^{(l)}\circ\cdots\circ x^{(0)},\quad l=0,\cdots,L \] where $y_{0}=x^{(L)}[D_{\tau}x^{(0)}]$, and $y_{L}=D_{\tau}x^{(L)}[x^{(0)}]$. The l.h.s. equals $\|y_{0}-y_{L}\|$, and we will bound it by $\|y_{L}-y_{0}\|\le \sum_{l=1}^{L}\|y_{l}-y_{l-1}\|$.
For each $l = 1,\cdots, L$, \begin{align*} \|y_{l}-y_{l-1}\| = & \|x^{(L)}\circ\cdots\circ D_{\tau}x^{(l)}\circ x^{(l-1)} \\ & - x^{(L)}\circ\cdots\circ x^{(l)}\circ D_{\tau}x^{(l-1)}\| \\ \le & \|D_{\tau}x^{(l)}\circ x^{(l-1)}-x^{(l)}\circ D_{\tau}x^{(l-1)}\| \\ \le & c_1 (C_l + B_l ) |\nabla \tau|_{\infty}\|x_c^{(l-1)}\| \\ \le & 2 c_1 |\nabla \tau|_{\infty}\|x_c^{(l-1)}\| \\ \le & 2 c_1 |\nabla \tau|_{\infty}\|x^{(0)}\|, \end{align*} where the first inequality is by the nonexpansiveness of the \ensuremath{(l+1)}-th to \ensuremath{L}-th layers, the second by Lemma 3.2, the third by (A2), and the last by Proposition 3.1 (b). Thus, $\sum_{l=1}^{L}\|y_{l}-y_{l-1}\| \le 2 c_1 L |\nabla \tau|_{\infty}\|x^{(0)}\|$. \end{proof} \begin{proof}[Proof of Proposition 3.4] The technique is similar to that in the proof of Lemma 3.2. Let the constant on the r.h.s. be denoted by $c_2$. In the 1D case, the constant $c_2$ can be improved to be 1; in the 2D case, $c_2=2$ as in the final claim. The details are omitted. \end{proof} \begin{proof}[Proof of Lemma 3.5] The first claim is a classical result, and has a direct proof as $ \int_{D(0)} |\nabla F|^2 = -\int_{D(0)} F \Delta F = \langle \sum_k a_k \psi_k, \sum_k a_k \mu_k \psi_k \rangle = \pi \sum_k a_k^2 \mu_k$ by the orthogonality of $\psi_k$, as stated above in the text. By Cauchy--Schwarz, $\| \nabla F\|_1 \le \sqrt{\pi} \| \nabla F\|_2 $. Putting these together gives the second claim. \end{proof} \begin{proof}[Proof of Proposition 3.6] Omitting $\lambda', \lambda, l$, and letting $j_l = j$, we write $W(u) = \sum_k a_k \psi_{j,k}(u)$. Rescaled to $D(0)$, we consider $w(u) = \sum_k a_k \psi_k(u)$, and one can verify that $\| |v| |\nabla W(v)| \|_1 = \| |v| |\nabla w(v) | \|_1$, and $\| W\|_1 = \|w\|_1$. Meanwhile, $ \int_{D(0)} |v| |\nabla w(v)| dv \le \int_{D(0)} |\nabla w(v)| dv$ since $|v| \le 1$, and $\| w \|_1 \le \| \nabla w\|_1$ by the Poincar\'e inequality, using the fact that $w$ vanishes on the boundary of $D(0)$.
Thus $\| |v| |\nabla w | \|_1, \|w\|_1\le \|\nabla w\|_1$. The claim of the proposition follows by applying Lemma 3.5 to $w$. \end{proof} \begin{proof}[Proof of Theorem 3.8] Let $c_1 = 4$, $c_2 = 2$. The l.h.s. is bounded by $\| x^{(L)} - D_\tau x^{(L)} \| + \| D_\tau x^{(L)}[x^{(0)}] - x^{(L)}[D_\tau x^{(0)}] \| $. The second term is less than $2 c_1 L | \nabla \tau |_\infty \|x^{(0)}\|$ by Theorem 3.7. To bound the first term, we apply Proposition 3.4, and notice that for all $\lambda', \lambda$, $\| \nabla W^{(L)}_{\lambda' , \lambda}\|_1 \le 2^{-j_L} \pi \| a^{(L)}_{\lambda', \lambda}\|_{FB}$ (consider $W^{(L)}_{\lambda' , \lambda}(u) = W(u) = \sum_k a_k \psi_{J, k}(u) = 2^{-2 J} \sum_k a_k \psi_{k}(2^{- J}u)$, $J=j_L$, let $w(u) = \sum_k a_k \psi_{k}(u)$, then $W(u) = 2^{-2 J} w(2^{-J}u)$, and $ \| \nabla W\|_1 = 2^{-J} \| \nabla w\|_1$, where $\| \nabla w\|_1 \le \pi \|a\|_{FB}$ by Lemma 3.5), and thus $D_L \le 2^{-j_L} A_L$. By (A2'), this gives that $\| D_\tau x^{(L)} - x^{(L)} \| \le c_2 2^{-j_L} |\tau|_\infty \|x_c^{(L-1)}\|$, and $ \|x_c^{(L-1)}\| \le \|x^{(0)}\|$ by Proposition 3.1 (b). \end{proof} \section{Experimental Details} The training of a Conv-2 DCF-FB network (Table 2) on the MNIST dataset: the network is trained using standard Stochastic Gradient Descent (SGD) with momentum $0.9$ and batch size $100$ for 100 epochs. $L^2$ regularization (``weight decay'') of $10^{-4}$ is used on the trainable parameters $a$'s. The learning rate decreases from $10^{-2}$ to $10^{-4}$ over the 100 epochs. Batch normalization is used after each convolutional layer. The typical evolution of training and testing losses and errors over epochs is shown in Figure \ref{fig:train}.
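The text only states the endpoints of the learning-rate schedule ($10^{-2}$ down to $10^{-4}$ over 100 epochs), not the decay shape. The sketch below realizes one natural choice, a log-linear (geometric) decay; this interpolation is our assumption, not the authors' exact schedule.

```python
import math

def lr_at(epoch, n_epochs=100, lr_start=1e-2, lr_end=1e-4):
    """Log-linearly interpolated learning rate (assumed decay shape,
    matching only the stated endpoints of the schedule)."""
    t = epoch / (n_epochs - 1)
    return math.exp((1.0 - t) * math.log(lr_start) + t * math.log(lr_end))

# One learning rate per epoch: starts at 1e-2, ends at 1e-4, strictly decreasing.
schedule = [lr_at(e) for e in range(100)]
```

Any monotone interpolation with the same endpoints (e.g. step decay) would be equally consistent with the description in the text.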
\begin{figure}[h] \begin{center} \includegraphics[width = 0.45 \linewidth]{fig_loss.png} \includegraphics[width = 0.45 \linewidth]{fig_error.png} \caption{ The evolution of training and validation losses (left) and errors (right) over the epochs of a Conv-2 DCF-FB network trained on 50K MNIST using SGD. } \label{fig:train} \end{center} \end{figure} \end{document}
\section{Introduction} \begin{definition} Given $f$ a complex valued function defined on the unit disc in $\mathbb{R}^2$, we define $$Ef(x_1, x_2, x_3):= \int e^{i(\xi_1 x_1+\xi_2 x_2 +|\xi|^2 x_3)} f(\xi) d \xi .$$ \end{definition} Stein conjectured \cite{Stein} in the 1960s the following restriction estimate \begin{equation}\label{stein conjecture} \|Ef\|_{L^p(\mathbb{R}^3)}\leq C(p, S) \|f\|_{L^{\infty}} \end{equation} for all $p>3$. We refer to \cite{Tao4} for a survey of Stein's restriction conjecture. Tao obtained estimate~\eqref{stein conjecture} for $p> 3+1/3$ in \cite{Tao3} using the two-ends argument, which was introduced in \cite{Wolff}. Later on, Guth improved the range to $p> 3+1/4$ in \cite{Guth1} using the polynomial method, which was the previously best known estimate. In this paper, we give a small improvement on the restriction estimate~\eqref{stein conjecture}, obtaining $p>3+ 3/13$ based on these two methods. \begin{theorem}\label{restriction theorem} If $f$ is supported on the unit disc in $\mathbb{R}^2$, then inequality~\eqref{stein conjecture} holds for all $p>3+3/13$. \end{theorem} Theorem~\ref{restriction theorem} can be derived from Theorem~\ref{broad restriction} below; we refer to the introduction of \cite{Guth1} for a discussion. \begin{theorem}\label{broad restriction} If $f$ is supported on the unit disc in $\mathbb{R}^2$, then for any small $\epsilon>0$, there exists a large constant $C_{\epsilon}$ depending only on $\epsilon$ such that for any large enough radius $R$ and any $p>3+3/13$, $$\|Ef\|_{BL^p(B_R)}\leq C_{\epsilon}R^{\epsilon}\|f\|_{L^2}^{2/p} \|f\|_{L^{\infty}}^{1-2/p}.$$ \end{theorem} Here $\|Ef\|_{BL^p(B_R)}$ is the broad $L^p$--norm of $Ef$ defined in \cite{Guth1} and \cite{Guth2}. We give its full description in Section~\ref{preliminary}.
Roughly speaking, $\|Ef\|_{L^p(B_R)}$ can be split into a broad part and a narrow part; $\|Ef\|_{BL^p(B_R)}$ is a locally bilinear norm that captures the difficult part of $\|Ef\|_{L^p(B_R)}$, the part that spreads out on the Fourier side. The proof of Theorem~\ref{main induction theorem} is a mixed application of polynomial partitioning and the two-ends argument, two useful techniques in the study of restriction estimates. Around 2000, Wolff and Tao (\cite{Wolff} and \cite{Tao3}) introduced the two-ends argument to prove estimate~\eqref{stein conjecture} for $p> 10/3$. This argument enables us to look only at the interaction between objects that are far apart. In 2014, Guth \cite{Guth1} introduced the polynomial partitioning method to improve the range in estimate~\eqref{stein conjecture} to $p>3.25$. Polynomial partitioning helps us find where $Ef$ is large using a low degree polynomial, and the polynomial itself gives us some information about how different parts of $Ef$ are related. We usually decompose $Ef$ into a sum over wavepackets $Ef_{\theta,v}$. Each $Ef_{\theta,v}$ is essentially supported in a tube $T_{\theta,v}$ of length $R$ and radius $R^{1/2}$. One can visualize the absolute value $|Ef_{\theta,v}|$ as a constant depending on $Ef$ times the characteristic function of $T_{\theta,v}$. The function $Ef_{\theta,v}$ has some oscillation on $T_{\theta,v}$, which we later exploit through its $L^2$--norm. We apply polynomial partitioning iteratively to obtain a collection of algebraic surfaces such that $|Ef|$ is large in their thin neighborhoods. We observe that if $|Ef|$ is large in a thin neighborhood of an algebraic surface, then the large wavepackets $Ef_{\theta,v}$ must be organized into large brooms whose roots have large intersections with this surface. For simplicity, we think of algebraic surfaces as planes.
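To make the heuristic $|Ef_{\theta,v}| \approx c\,\chi_{T_{\theta,v}}$ quantitative, here is the standard normalization count (a schematic sketch added for orientation; the constants are heuristic): the local $L^2$ estimate $\int_{B_R}|Ef_{\theta,v}|^2 \lesssim R\,\|f_{\theta,v}\|_{L^2}^2$, together with $|T_{\theta,v}| \sim R\cdot R^{1/2}\cdot R^{1/2}=R^2$, forces

```latex
\[
  c^2\,|T_{\theta,v}| \;\approx\; \int_{B_R}|Ef_{\theta,v}|^2
  \;\lesssim\; R\,\|f_{\theta,v}\|_{L^2}^2
  \quad\Longrightarrow\quad
  c \;\lesssim\; R^{-1/2}\,\|f_{\theta,v}\|_{L^2}.
\]
```

This is the constant "depending on $Ef$" referred to above.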
Roughly speaking, a broom (rooted at a plane $\Sigma$) is a collection of wavepackets that \begin{itemize} \item intersect at a common place on $\Sigma$, which we call the root of the broom, \item point in a small range of directions, which we quantify later, \item span along the normal direction of $\Sigma$; in other words, we can find a plane $\Sigma'$, such that the $R^{1/2}$--neighborhood of $\Sigma'$ captures all the wavepackets in the broom and $\Sigma' \perp \Sigma$. \end{itemize} The main idea is that an algebraic surface on one end usually has a small intersection with a broom rooted on the other end. We refer to Figure~\ref{broom figure}. We apply the two-ends argument to reduce the problem and study the interaction of $Ef$ on far apart algebraic surfaces. Finally, we prove an improved $L^2$--estimate by counting wavepackets using their broom structure. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{broom} \caption{A broom.} \label{broom figure} \end{figure} \subsection{Idea of proof} The proof contains three steps. \textbf{Step one:} We apply polynomial partitioning iteratively to observe some structure of $Ef$ through algebraic surfaces. This part follows the framework of \cite{Guth1}. Instead of applying induction on scales directly as in \cite{Guth1}, we manually write out the induction process with some small changes. We partition the measure $\mu_{Ef}(U) = \|Ef\|_{BL^p(U)}^p$ with a polynomial of degree $d$, which we choose to be about $\log R$. The zero set of the polynomial, which we denote $Z$, decomposes $\mathbb{R}^3$ into a disjoint union of components. Each component has about the same measure under $\mu_{Ef}$. In order to understand how a tube $T_{\theta,v}$ intersects those components, we further decompose $\mathbb{R}^3$ into a thin neighborhood of $Z$ and a disjoint union of cells, where each cell is a subset of one previous component and essentially lies inside a ball of radius $R/d$.
When the cellular part dominates, we continue partitioning each cell with a polynomial of degree $d$ adapted to it. When the algebraic part dominates, we cover the thin neighborhood of $Z$ with balls of radius $R^{1-\delta}$ for some $\delta \ll \epsilon$. We record the tangential part and then apply polynomial partitioning to $\mu_{Ef}$ in each smaller ball. Here we apply polynomial partitioning to $\mu_{Ef}$ iteratively without changing the function $Ef$; this is slightly different from the proof in \cite{Guth1}. After one step, we have reduced $R$ to either $R/d$ or $R^{1-\delta}$. We stop the iteration process once the radius is smaller than $R^{\delta}$. Locally $Ef$ can be split, according to the previous partitioning, into one cellular-transversal part and a sum of tangential parts from the algebraic steps. There are at most $\delta^{-2}$ tangential parts, because each algebraic step reduces the radius from $r$ to $r^{1-\delta}$, and $(1-\delta)^{\delta^{-2}} \leq \delta$. If $Ef$ is dominated by the cellular-transversal part, then the information from polynomial partitioning is enough to prove Theorem~\ref{main induction theorem}; for this part we use only the method in \cite{Guth1}. If $Ef$ is dominated by the tangential parts, we need more information. The key observation in this paper is the following: if $|Ef|$ is large in a thin neighborhood $S$ of some low degree algebraic surface, then the large wavepackets $Ef_{\theta,v}$ must be organized into large brooms with roots concentrated on $S$. Let us consider the example in which there is only one wavepacket $Ef_{\theta,v}$ intersecting $S$. Since $|Ef_{\theta,v}|$ is roughly a constant times $\chi_{T_{\theta,v}}$, and $|T_{\theta,v}\cap S|$ is small compared to $|T_{\theta,v}|$, the $L^2$--norm of $Ef$ on $S$ is small compared to the whole $L^2$--norm. Here we provide a more detailed explanation of why the wavepackets must be organized into large brooms.
Assume that the low degree algebraic surface is a plane and $S$ is the $r^{1/2}$--neighborhood of the plane, with $R^{1/2}\leq r\leq R$. In the proof we need to estimate the $L^2$--norm of $Ef_{\tau}$ on $S$ for a cap $\tau$ of radius $r^{-1/2}$ with $G(\tau)$ parallel to $S$. One might assume for simplicity that all wavepackets in $Ef_{\tau}$ intersect at a common point on $S$. The dual of $S$ on the Fourier side is contained in a mini tube $s$ of length $r^{-1/2}$ and radius $R^{-1/2}$, with direction orthogonal to $S$. By $L^2$--orthogonality, $$\|Ef_{\tau}\|_{L^2(S)}^2 \approx \sum_{s} \|Ef_{s}\|_{L^2(S)}^2.$$ Here $Ef_{s}$ consists of wavepackets that span along the normal direction of $S$. The example at the end of the last paragraph shows that $Ef_s$ must contain many wavepackets; otherwise $S$ captures only a small proportion of the $L^2$--norm of $Ef_s$. Once we show that the wavepackets need to be organized into large brooms, we can apply the following geometric observation: if we have a large broom concentrated at one end, then it is difficult for the algebraic surfaces on the other end to intersect multiple of its wavepackets (Figure~\ref{broom figure}). We work on this case in Step two and Step three. \textbf{Step two:} We apply Wolff's two-ends argument to reduce the problem and count the wavepackets shared by distant algebraic surfaces. We cover $B_R$ with balls $B_k$ of radius $\rho$, where $\rho= R^{1-\epsilon_0}$ with $\delta \ll \epsilon_0\ll \epsilon$. We define a relation $T_{\theta,v} \sim B_k$ satisfying: \begin{equation}\label{two ends condition} \text{for a fixed } T_{\theta,v},\text{ the number of } B_k \text{ with } T_{\theta,v}\sim B_k\text{ is bounded by } O_{\delta}(1). \end{equation} It does not matter what the exact relation $\sim$ is at this step; all we need is condition~\eqref{two ends condition}. We give the full definition of $\sim$ in Step three, which is adapted to the polynomial partitioning process in Step one.
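The claim that $S$ captures only a small proportion of the $L^2$--norm of a single transversal wavepacket is elementary geometry. The following worked count (our sketch, under the simplifying assumption that the tube meets the plane at angle $\sim 1$) makes it explicit: the tube crosses the slab $S$ of thickness $r^{1/2}$ in a segment of length $\sim r^{1/2}$, so

```latex
\[
  \frac{\|Ef_{\theta,v}\|_{L^2(S)}^2}{\|Ef_{\theta,v}\|_{L^2}^2}
  \;\approx\; \frac{|T_{\theta,v}\cap S|}{|T_{\theta,v}|}
  \;\sim\; \frac{r^{1/2}\cdot R^{1/2}\cdot R^{1/2}}{R\cdot R^{1/2}\cdot R^{1/2}}
  \;=\; \frac{r^{1/2}}{R} \;\le\; R^{-1/2},
\]
```

since $r\le R$. At shallower angles the intersection is longer, which is exactly why many wavepackets concentrating near $S$ must arrange themselves into brooms.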
Inside each $B_k$, we decompose $Ef = Ef^{\sim} + Ef^{\nsim}$. If $Ef^{\sim}$ dominates for most $B_k$, then we apply induction on the scale $\rho < R$ and condition~\eqref{two ends condition} to sum over the balls $B_k$. Roughly speaking, because condition~\eqref{two ends condition} says that each wavepacket is related to only a few balls, we may treat the functions $Ef$ in different $B_k$ as if they were independent. So when we sum over the $B_k$'s, the induction goes through. \textbf{Step three:} When $Ef^{\nsim}$ dominates for most $B_k$, we give the explicit relation $\sim$ and count wavepackets in $Ef^{\nsim}$ using brooms. This step contains all the geometric ingredients, and can be further divided into three small steps. The difficult case is when $Ef^{\nsim}$ is concentrated in thin neighborhoods of many algebraic surfaces inside small balls. \begin{itemize} \item We prove Lemma~\ref{plane}, which says that for a fixed direction (adapted to the small ball), each algebraic surface can be viewed as several planes. \item We define the broom structure according to those planes, and define the $\sim$ relation. We define $T_{\theta,v}\sim B_k$ if $T_{\theta,v}$ belongs to many large brooms with roots inside $B_k$. \item The function $Ef^{\nsim}$ consists of large wavepackets $Ef_{\theta,v}$ that belong to many brooms with roots far apart. When each wavepacket has approximately the same weight, the large wavepackets in $Ef^{\nsim}$ hitting an algebraic surface $Z$ represent only a small proportion of the large wavepackets in $Ef$. On the one hand, the proportion of large wavepackets hitting $Z$ becomes smaller if the size of the brooms is larger. On the other hand, the $L^2$--norm near $Z$ becomes smaller if the size of the brooms is smaller. We use this information to obtain the improved restriction estimate. \end{itemize} \textbf{The rest of this paper is organized in the following way.
} We discuss some preliminaries, including a sketch of the polynomial partitioning proof of \eqref{stein conjecture} for $p>3.25$ from \cite{Guth1}, in Section~\ref{preliminary}. This sketch is the starting point of our discussion, and the polynomial structure lemma (Lemma~\ref{structure2018}) applies it iteratively. Section~\ref{white lie} contains a white lie version of the proof: we assume in that section that all the algebraic surfaces are planes and that if one case dominates, then the other cases vanish. The white lie proof contains the main idea and is close to my initial thoughts about this problem. Then we start the proof of Theorem~\ref{main induction theorem}. Section~\ref{structure} proves the polynomial structure lemma, which says that if $Ef$ has large $BL^p$--norm then we can find a collection of low degree algebraic surfaces such that the large wavepackets $Ef_{\theta,v}$ are tangential to them. Section~\ref{two ends} applies Wolff's two-ends argument to reduce the problem and study $Ef^{\nsim}$; this part of the argument is general and is the same as in \cite{Wolff} and \cite{Tao3}. We estimate $Ef^{\nsim}$ in Section~\ref{estimate}. Subsection~\ref{plane section} includes a geometric lemma (Lemma~\ref{plane}) saying that we can treat the algebraic surfaces as planes in our arguments. In Subsection~\ref{broom section} we define the relation $\sim$ according to the brooms and planes, and we count the wavepackets of $Ef^{\nsim}$ using brooms. In Section~\ref{proof} we summarize the proof of Theorem~\ref{main induction theorem}. \subsection{Notation} If $X$ is a finite set, we use $|X|$ to denote its cardinality; if $X$ is a measurable set, we use $|X|$ to denote its Lebesgue measure. We use $B_r$ to denote a ball of radius $r$. We use $A\lesssim B$ or $A=O(B)$ to denote the estimate $A\leq CB$, where $C$ is an absolute constant. \section*{Acknowledgments} I would like to thank my advisor Larry Guth for enormous help throughout this project.
I would also like to thank Ruixiang Zhang and Donghao Wang for helping me prove Lemma~\ref{plane}. \section{Preliminary}\label{preliminary} We recall the definition of the wavepacket decomposition and refer to Section 2.4 in \cite{Guth1} for a discussion. \begin{definition} If $B_R$ is a ball of radius $R$ in $\mathbb{R}^3$, then we can perform a wavepacket decomposition of $Ef$ in $B_R$: $$ Ef = \underset{\theta,v}{\sum}Ef_{\theta,v},$$ where $\theta$ are caps of radius $R^{-1/2}$. Each $Ef_{\theta,v}$ is a \emph{wavepacket} essentially supported on a tube $T_{\theta,v}$ of length $R$ and radius $R^{1/2}$, whose direction $G(\theta)$ is determined by $\theta$. \end{definition} We consider the broad $L^p$--norm $\|Ef\|_{BL^p(B_R)}$, which was defined in \cite{Guth1} and \cite{Guth2}. We recall the definition here. We decompose the unit disc into finitely overlapping small discs $\alpha$ of radius $K^{-1}$, where $K$ is at the scale $R^{\delta^2}$, with $\delta\ll \epsilon$ for the $\epsilon$ in Theorem~\ref{main induction theorem}. We write $f=\sum_{\alpha}f_{\alpha}$, where $f_{\alpha}$ is supported in $\alpha$. The wavepackets in $Ef_{\alpha}$ are those $Ef_{\theta,v}$ with $\theta\subset \alpha$. The set $G(\alpha)\subset S^2$ is a spherical cap with radius $\sim K^{-1}$, representing the possible directions of wavepackets in $Ef_{\alpha}$. We define \begin{equation}\label{broad norm} \mu_{Ef}(B_{K}) := \underset{V_1, \dots, V_A : \text{~lines of~} \mathbb{R}^3}{\min}\big( \underset{\alpha: \Angle(G(\alpha), V_a)\geq K^{-1} \text{~for all~} 1\leq a\leq A}{\max}\int_{B_{K}} |Ef_{\alpha}|^p \big). \end{equation} We write the broad $L^p$--norm as $\|Ef\|_{BL^p_A(B_R)}^p= \mu_{Ef}(B_R)$, where $\mu_{Ef}(B_R)$ is obtained by summing $\mu_{Ef}(B_K)$ over a finitely overlapping cover of $B_R$ by balls $B_K$. We often suppress $A$ for simplicity unless $A$ plays a role in the proof.
One can see that the broad $L^p$--norm is bounded by a sum of bilinear norms: $$\|Ef\|_{BL^p(B_R)}^p \leq \underset{\alpha_1, \alpha_2 \text{ nonadjacent}}{\sum} \|(Ef_{\alpha_1}Ef_{\alpha_2})^{1/2}\|_{L^p(B_R)}^p.$$ We can view the $BL^p$--norm as approximately an $L^p$--norm with broadness: if $f$ is supported inside a single small cap of radius $K^{-1}$, then $\|Ef\|_{BL^p(B_R)}=0$. We can also think of the $BL^p$--norm as a local bilinear norm. We assume that $\|f\|_{L^2}=1$, since Theorem~\ref{main induction theorem} is invariant under multiplication by a constant. We sort the wavepackets $Ef_{\theta,v}$ according to the size of $\|f_{\theta,v}\|_{L^2}$, which we denote $\lambda$. The sum of the wavepackets with $\|f_{\theta,v}\|_{L^2}\leq R^{-10}$ automatically satisfies the bound in Theorem~\ref{main induction theorem}, so it suffices to consider $R^{-10}\leq \lambda \leq 1$. For each $\lambda$, let $Ef_{\lambda}$ be the sum of the wavepackets with $\|f_{\theta,v}\|_{L^2}\sim \lambda$. Since there are $O(\log R)$ choices of $\lambda$, there exists a $\lambda_0$ such that $\|Ef_{\lambda_0}\|_{BL^p(B_R)}\gtrsim (\log R)^{-1} \|Ef\|_{BL^p(B_R)}$. From now on, we assume $Ef= Ef_{\lambda_0}$; in particular, each wavepacket in $Ef$ is either zero or satisfies $\|f_{\theta,v}\|_{L^2}\sim \lambda_0$. Let $\mathbb{T}_0$ denote the collection of tubes $T_{\theta,v}$ with nonzero $f_{\theta,v}$. \subsection{Polynomial partitioning in \cite{Guth1}}\label{polynomial partitioning} In this section, we sketch the proof of the following theorem in \cite{Guth1}.
\begin{theorem}\label{restriction larry} If $f$ is supported on the unit disc in $\mathbb{R}^2$, then for any small $\epsilon>0$, there exists a large constant $C_{\epsilon}$ depending only on $\epsilon$ such that for any large radius $R$ and any $p>13/4$, $$\|Ef\|_{BL^p(B_R)}\leq C_{\epsilon}R^{\epsilon}\|f\|_{L^2}^{12/13}\underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L_{avg}^{2}(\theta)}^{1/13},$$ where $|\theta|$ means the radius of $\theta$, $f_{\theta}=f\phi_{\theta}$ is $f$ multiplied by a bump function $\phi_{\theta}$ supported in $2\theta$, and $\|f_{\theta}\|_{L_{avg}^2(\theta)}^2 = \Vol(\theta)^{-1}\|f_{\theta}\|_{L^2(\theta)}^2$. \end{theorem} We apply Theorem 0.6 in \cite{Guth1}: for any degree $d\geq 1$, we can find a non-zero polynomial $P$ of degree at most $d$ so that $\mathbb{R}^3\setminus Z(P)$ is a union of $O(d^3)$ disjoint cells $U_i'$. In our case we take $d=\log R$. We have $$ \mathbb{R}^3\setminus Z(P) = \bigsqcup U_i',$$ with $\|Ef\|_{BL^p(U_i')}^p \sim d^{-3}\|Ef\|_{BL^p(B_R)}^p$. We would like to further decompose each $U_i'$ into subsets such that each subset lies inside a ball of radius $R/d$. Let $Q=P\cdot G$, where $G$ is the product of the planes forming a grid of cubes of side length $R/d$. We only need to count the planes intersecting $B_R$, so $\deg G \leq 3d$ and $\deg Q \leq 4d$. We have a new decomposition of $\mathbb{R}^3$: $$ \mathbb{R}^3\setminus Z(Q) = \bigsqcup O_j',$$ where each $O_j'$ is a subset of some $U_i' \cap B_{R/d}$. We define $W$ to be the $R^{1/2+\delta}$--neighborhood of $Z(Q)$. Let $U_i =U_i' \setminus W$ and $O_j =O_j' \setminus W$. By the Milnor--Thom theorem, the number of $O_j'$'s is bounded by $O(d^3)$. Since there are about $ d^3$ $U_i'$'s, for $99\%$ of the cells $U_i'$ each one contains at most $O(1)$ $O_j'$'s. Hence for $99\%$ of the $U_i$, each one contains at most $O(1)$ $O_j$'s.
By pigeonholing, there exists one $O_j \subseteq U_i$ such that $$\|Ef\|_{BL^p(O_j)}^p \gtrsim \|Ef\|_{BL^p(U_i)}^p.$$ We call both $O_j$ and $U_i$ cells, and we view $O_j$ as a replacement for $U_i$ with the additional information that $O_j \subseteq B_{R/d}$. To summarize, we find about $d^3$ cells $O_j$ and a thin neighborhood $W$ of a low degree algebraic surface, with $ W \sqcup \bigsqcup_{j}O_j \subset \mathbb{R}^3 $, satisfying \begin{equation}\label{shrinked sum} \|Ef\|_{BL^p(B_R)}^p \lesssim \|Ef\|_{BL^p(W)}^p +\sum_j \|Ef\|_{BL^p(O_j)}^p \end{equation} and \begin{equation}\label{shrinked cell} \|Ef\|_{BL^p(O_j)}^p \lesssim d^{-3} \|Ef\|_{BL^p(B_R)}^p. \end{equation} We cover $W$ with balls $B_k$ of radius $R^{1-\delta}$. We define as follows which tubes $T_{\theta,v}$ are tangential to $Z(P)$ in $B_k$ and which tubes are transversal to $Z(P)$ in $B_k$. \begin{definition}\label{tang} $\mathbb{T}_{k, \tang}$ is the set of all tubes $T$ obeying the following two conditions: \begin{itemize} \item $T\cap W\cap B_k\neq \emptyset$. \item If $z$ is any non-singular point of $Z(P)$ lying in $2B_k\cap 10 T$, then $$\Angle(v(T), T_zZ)\lesssim R^{-1/2+2\delta}.$$ \end{itemize} We write $\mathbb{T}_{\tang}=\cup_{k}\mathbb{T}_{k,\tang}$. \end{definition} \begin{definition} $\mathbb{T}_{k, \trans}$ is the set of all $T$ obeying the following two conditions: \begin{itemize} \item $T\cap W\cap B_k \neq \emptyset$. \item There exists a non-singular point $z$ of $Z(P)$ lying in $2B_k\cap 10 T$, so that $$\Angle(v(T), T_zZ)> R^{-1/2+2\delta}.$$ \end{itemize} We write $\mathbb{T}_{\trans}=\cup_{k} \mathbb{T}_{k, \trans}$.
\end{definition} Fix a ball $B_{k}$ of radius $R^{1-\delta}$, and let $$ f_{k, \tang}= \sum_{T_{\theta,v}\in \mathbb{T}_{k, \tang}} f_{\theta,v}, ~~~~ f_{k,\trans}= \sum_{T_{\theta,v}\in \mathbb{T}_{k, \trans}} f_{\theta,v}.$$ By the triangle inequality for the $BL^p$--norm (up to a change of $A$ in the broad norm definition), \begin{align*} \|Ef\|_{BL^p_A(B_R)}^p &\lesssim \sum_{j} \|Ef\|_{BL^p_A(O_j)}^p + \sum_k \|Ef_{k,\trans}\|_{BL_{A_1}^p(W\cap B_k)}^p +\sum_k \|Ef_{k, \tang}\|_{BL^p_{C_1}(W\cap B_k)}^p \\&+ \RapDec(R)\|f\|_{L^2}^p. \end{align*} Here $A= A_1+C_1$, and we choose $C_1 \ll A$. \subsubsection{Cellular case} The cellular case is when $\|Ef\|_{BL^p(B_R)}^p \lesssim \sum_{O_j} \|Ef\|_{BL^p(O_j)}^p$. By inequality~\eqref{shrinked cell} and pigeonholing, in the cellular case we have, for $\gtrsim d^3$ of the cells $O_j$, $$\|Ef\|_{BL^p(O_j)}^p \gtrsim d^{-3}\|Ef\|_{BL^p(B_R)}^p.$$ We define $$f_j=\sum_{T_{\theta,v}\cap O_j\neq \emptyset} f_{\theta,v}.$$ Each tube $T_{\theta,v}$ intersects at most $d+1$ components $U_i$. Since we pick only one $O_j$ in each $U_i$, each tube $T_{\theta,v}$ intersects at most $d+1$ cells $O_j$. Hence \begin{equation}\label{cellular inequality} \sum_{j=1}^{O(d^3)} \|f_j\|_{L^2}^2 \lesssim d \|f\|_{L^2}^2. \end{equation} For $99\%$ of the cells $O_j$, the corresponding $f_j$ satisfies $\|f_j\|_{L^2}\lesssim d^{-1} \|f\|_{L^2}$. By the induction assumption, $$\|Ef_j\|_{BL^p(O_j)} \lesssim (\frac{R}{d})^{\epsilon} \|f_j\|_{L^2}^{12/13}\underset{|\tau|=(R/d)^{-1/2}}{\max} \|f_{\tau}\|_{L^2_{avg}(\tau)}^{1/13} .$$ By $L^2$--orthogonality, $\underset{|\tau|=(R/d)^{-1/2}}{\max} \|f_{\tau}\|_{L^2_{avg}(\tau)} \leq \underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}$.
Hence for $90\%$ of the $O_j$, we have \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim d^{3} \|Ef_j\|_{BL^p(O_j)}^p\\ &\lesssim d^{3-12p/13}(\frac{R}{d})^{\epsilon p} \|f\|_{L^2}^{12p/13}\underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}^{p/13}. \end{align*} When $p>\frac{13}{4}$, the induction closes. \begin{remark}\label{cells} When we are in the cellular case, we will only consider the cells $O_j$ satisfying \begin{equation} \|Ef_j\|_{BL^p(O_j)}^p\gtrsim d^{-3}\|Ef\|_{BL^p(B_R)}^p \end{equation} and \begin{equation} \|f_j\|_{L^2}\lesssim d^{-1}\|f\|_{L^2}. \end{equation} There are $\gtrsim d^3$ such cells. \end{remark} \subsubsection{Tangential case} We are in the tangential case if $\sum_k \|Ef_{k,\tang}\|_{BL^p(W\cap B_k)}^p \gtrsim \|Ef\|_{BL^p(B_R)}^p$. We shall apply the following crucial geometric lemma (Lemma 4.9 in \cite{Guth1}). \begin{lemma}\label{geometric larry} If $\mathbb{T}\subseteq \mathbb{T}_{\tang}$ are tubes pointing in pairwise $R^{-1/2}$--different directions, then $|\mathbb{T}|\leq O(R^{1/2+O(\delta)})$. \end{lemma} In particular, it says that $\operatorname{supp} f_{k,\tang}$ is contained in a subset of area $ R^{-1/2 +O(\delta)}$. As in Section 3.4 of \cite{Guth1}, or by interpolating the bilinear restriction theorem \cite{Tao3} with the $L^2\rightarrow L^2$ bound, we obtain \begin{equation}\label{tangential 1} \|Ef_{k,\tang}\|_{BL^p(B_R)}^p\lesssim R^{\frac{5}{2}-\frac{3p}{4}} \|f_{k,\tang}\|_{L^2}^p. \end{equation} By Lemma~\ref{geometric larry}, \begin{equation}\label{tangential 2} \|f_{k,\tang}\|_{L^2}^2\lesssim R^{-1/2+O(\delta)} \underset{ |\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L_{avg}^2(\theta)}^2.
\end{equation} When $p>13/4$, \begin{align*} \|Ef_{k,\tang}\|_{BL^p(B_R)}^p &\lesssim R^{\frac{5}{2}-\frac{3p}{4} -\frac{p}{52}}\|f\|_{L^2}^{12p/13}\underset{ |\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L_{avg}^2(\theta)}^{p/13}\\ &\lesssim R^{\epsilon} \|f\|_{L^2}^{12p/13}\underset{ |\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L_{avg}^2(\theta)}^{p/13}. \end{align*} \subsubsection{Transverse case} We are in the transverse case when we are neither in the cellular case nor in the tangential case. We cover $W$ with balls $B_k$ of radius $\rho=R^{1-\delta}$. By induction on scales for Theorem~\ref{restriction larry}, we may assume that inside each $B_k$ we have $$\|Ef_{k, \trans}\|_{BL^p(B_k)}^p \leq C_{\epsilon} \rho^{\epsilon p} \|f_{k,\trans}\|_{L^2}^{12p/13}\underset{|\theta'|=\rho^{-1/2}}{\max}\|f_{\theta'}\|_{L^2_{avg}(\theta')}^{p/13}.$$ We shall apply Lemma 3.5 in \cite{Guth1}, which we recall here. \begin{lemma}\label{transversal tubes} Each tube $T\in \mathbb{T}$ belongs to at most $\Poly(d)$ different sets $\mathbb{T}_{k,\trans}$. \end{lemma} By Lemma~\ref{transversal tubes}, we have \begin{equation}\label{transversal inequality} \sum_k \|f_{k,\trans}\|_{L^2}^2 \lesssim \Poly(d) \|f\|_{L^2}^2. \end{equation} We also have $\|f_{\theta'}\|_{L^2_{avg}(\theta')} \leq \underset{\theta\subseteq \theta'}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}.$ Summing over the balls $B_k$, \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim \sum_k \|Ef_{k, \trans}\|_{BL^p(B_k)}^p\\ & \lesssim C_{\epsilon} \rho^{\epsilon p} \Big( \sum_k \|f_{k,\trans}\|_{L^2}^{12p/13}\Big) \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p/13}\\ &\leq C_{\epsilon} R^{\epsilon p} \|f\|_{L^2}^{12p/13}\underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p/13}. \end{align*} We used the embedding $\ell^{2}\subseteq \ell^{q}$ with $q=12p/13>2$ to sum up the $\|f_{k,\trans}\|_{L^2}^{12p/13}$, and then applied inequality~\ref{transversal inequality}.
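For the reader's convenience, the summation step can be spelled out (our bookkeeping): since $12p/13>2$, the embedding $\ell^{2}\subseteq \ell^{12p/13}$ gives \begin{align*} \sum_k \|f_{k,\trans}\|_{L^2}^{12p/13} \leq \Big( \sum_k \|f_{k,\trans}\|_{L^2}^{2} \Big)^{6p/13} \lesssim \Poly(d)^{6p/13}\, \|f\|_{L^2}^{12p/13}, \end{align*} where the last step is inequality~\ref{transversal inequality}. The factor $\Poly(d)^{6p/13}$ is then absorbed, since $\rho^{\epsilon p}=R^{\epsilon p(1-\delta)}$ and $R$ is chosen large.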
Since $d=\log R$, we choose $R$ large enough such that $R^{\epsilon \delta}\gg \Poly(d)$. \section{A white lie version of the proof}\label{white lie} We prove a slightly stronger version of Theorem~\ref{broad restriction} which works better for induction. \begin{theorem}\label{main induction theorem} If $f$ is supported on the unit disc in $\mathbb{R}^2$, then for any small $\epsilon>0$ there exists a large constant $C_{\epsilon}$, depending only on $\epsilon$, such that for any $p>3+3/13$ and any large enough radius $R$, \begin{equation}\label{goal} \|Ef\|_{BL^p(B_R)}\leq C_{\epsilon}R^{\epsilon}\|f\|_{L^2}^{2/p}\underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L_{avg}^{2}(\theta)}^{1-2/p}, \end{equation} where $|\theta|$ denotes the radius of $\theta$, $f_{\theta}=f|_{\theta}$ and $\|f_{\theta}\|_{L_{avg}^2(\theta)}^2 = \Vol(\theta)^{-1}\|f_{\theta}\|_{L^2(\theta)}^2$. \end{theorem} Theorem~\ref{broad restriction} is a direct corollary of Theorem~\ref{main induction theorem} because $$\underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L_{avg}^{2}(\theta)} \leq \|f\|_{L^{\infty}}.$$ We use the $L^2_{avg}$--norm on the right-hand side because it is more suitable for our induction than the $L^{\infty}$--norm. \subsection{Tangential case in the cells} Now that we have finished the sketch of the proof of Theorem~\ref{restriction larry}, let us look at the right-hand side of its inequality. The right-hand side is a mix of the $L^2$--norm and an approximation of the $L^{\infty}$--norm. In the tangential case, we have a very strong estimate using the $L^{\infty}$--norm. If we put more weight on the $L^{\infty}$ part, can we hope to get a better restriction estimate? From the cellular case, we observe that we need a lot of weight on the $L^2$--norm to close the induction. We start by writing out the induction iteration process and studying the interaction between different cells.
\textbf{The white lie} we assume here is that if we are in the transverse case, then the tangential part is zero; if we are in the tangential case, then the transverse part is zero. In reality, this might not happen, and we are going to treat it carefully in the following sections. We think of the polynomial partitioning iteration as an algorithm that stops at the tangential case. We run the algorithm as follows. \textbf{Initial step}. We run the first polynomial partitioning as in Subsection~\ref{polynomial partitioning}. If we are in the tangential case, we stop and estimate $\|Ef\|_{BL^p(B_R)}^p$. By inequality~\ref{tangential 1} and inequality~\ref{tangential 2}, when $p>3$, $\|Ef_{\tang}\|_{BL^p(B_R)}^p \lesssim R^{O(\delta)} \|f\|_{L^2}^2\underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}$. When we are in the cellular case, we keep those cells satisfying the criteria in Remark~\ref{cells}, and we denote them $O_1$. Each cell $O_1$ lies inside a ball of radius $r_1= R/d$. When we are in the transversal case, we write $O_1=B_k \cap W$, and we call them cells as well. We sort the cells $O_1$ according to the size of $\|Ef_{k,\trans}\|_{BL^p(O_1)}^p$, which we denote $\lambda (O_1)$. It suffices to consider $\lambda (O_1)$ between $R^{-1} \|Ef\|_{BL^{p}(B_R)}^p$ and $\|Ef\|_{BL^p(B_R)}^p$ because we have at most $(R/\rho)^{3}$ cells. There exists a dyadic number $\lambda_0$ such that the cells $O_1$ with $\lambda (O_1)\sim \lambda_0$ dominate a $(\log R)^{-1}$ fraction of $\|Ef\|_{BL^p(B_R)}^{p}$. We keep only those cells. In this case we write $r_1=R^{1-\delta}$. We write \begin{equation}\label{ideal definition} f_{O_1}= \underset{T_{\theta,v}\cap O_1\neq \emptyset}{\sum} f_{\theta,v}. \end{equation} In the cellular case, we rewrite inequality~\ref{cellular inequality} using $f_{O_1}$: \begin{equation}\label{ideal cellular bound} \sum_{O_1} \|f_{O_1}\|_{L^2}^2 \lesssim d \|f\|_{L^2}^2.
\end{equation} In the transversal case, we rewrite inequality~\ref{transversal inequality} using $f_{O_1}$: \begin{equation}\label{ideal transversal bound} \sum_{O_1} \|f_{O_1}\|_{L^2}^2 \lesssim \Poly(d) \|f\|_{L^2}^2. \end{equation} Under our white lie assumption, the tangential part is zero, so $f_{k,\trans} = f_{O_1}$ for $O_1=B_k\cap W$. In reality, when we are in the transversal case, the $f_{O_1}$ defined in equation~\ref{ideal definition} does not necessarily satisfy inequality~\ref{ideal transversal bound}. \textbf{Iteration step}. Assume that we have had $m$ cellular steps and $n$ transversal steps; we obtain more than $O(d^{3m})$ cells $O_{m+n}$. Each cell has approximately the same $BL^p$--norm and is contained in a ball of radius $r_{m+n}$. We perform the wavepacket decomposition in each $B_{r_{m+n}}$, $$f = \underset{|\tau|=r_{m+n}^{-1/2},w}{\sum}f_{\tau,w},$$ and we do polynomial partitioning inside each cell $O_{m+n}$. If for more than $1/3$ of the cells $O_{m+n}$ we are in the cellular case, then we keep only those $O_{m+n}$, we choose children cells $O_{m+n+1}$ inside parent cells $O_{m+n}$ as before, and we write $r_{m+n+1}= r_{m+n}/d$. If for more than $1/3$ of the cells $O_{m+n}$ we are in the transversal case, then we select the children cells as follows. \begin{itemize} \item Inside each $O_{m+n}$, we sort the children cells $O_{m+n+1}$ according to their $BL^p$--norm. There exists a collection of children cells with approximately the same $BL^p$--norm, which we denote $\lambda_0(O_{m+n})$, that dominates a $(\log R)^{-1}$ fraction of the $BL^p$--norm of the parent cell. \item We sort the parent cells $O_{m+n}$ according to $\lambda_0(O_{m+n})$; then by pigeonholing we can find a common $\lambda_0$ such that more than a $ (\log R)^{-1}$ fraction of the $\lambda_0(O_{m+n})$ have about the same size as $\lambda_0$. \item We keep only the parent cells with $\lambda_0(O_{m+n})\sim \lambda_0$ and their children cells with $BL^p$--norm roughly $\lambda_0$.
\end{itemize} We write $r_{m+n+1}= r_{m+n}^{1-\delta}$. Each $O_{m+n+1}$ lies inside a ball of radius $r_{m+n+1}$. Otherwise, for most of the cells we are in the tangential case; then we stop and write $O=O_{m+n}$, $r=r_{m+n}$. Under the white lie assumption, restricted on $O$ we have $$f_O =\underset{T_{\theta,v}\cap O\neq\emptyset}{\sum}f_{\theta,v} + \RapDec(R)\|f\|_{L^2}.$$ Writing $D=d^m$, we have \begin{equation}\label{number of cells} \# O \gtrsim R^{-\delta} D^3. \end{equation} From the induction process and the white lie assumption, for each cell $O$ we have \begin{equation}\label{average cell} \|Ef\|_{BL^p(B_R)}^p \lesssim R^{\delta} \#\{O\} \|Ef_{O}\|_{BL^p(O)}^p \lesssim R^{\delta} \#\{O\}\|Ef_{O,\tang}\|_{BL^p(O)}^p \end{equation} and \begin{equation}\label{precise transversal} \sum_O{\|f_O\|_{L^2}^2 \lesssim DR^{\delta}\|f\|_{L^2}^2}. \end{equation} Inequality~\ref{precise transversal} holds because we have at most $n\lesssim \delta^{-2}$ transversal steps and we choose $d^n \lesssim (\log R)^{\delta^{-2}} \lesssim R^{\delta}$. Now we have finished the polynomial partitioning iteration, and we know that the $BL^p$--norm of $Ef$ is concentrated in the neighborhoods of several low degree algebraic surfaces.
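The bound $n\lesssim \delta^{-2}$ on the number of transversal steps can be justified by a rough count (our sketch): each transversal step replaces the radius $r$ by $r^{1-\delta}$, so after $n$ such steps the exponent of $R$ has shrunk by the factor $(1-\delta)^n$. Since the iteration only runs while $r\geq R^{\delta}$, we need $(1-\delta)^n\geq \delta$, hence $$ n \leq \frac{\log \delta^{-1}}{\log (1-\delta)^{-1}} \lesssim \delta^{-1}\log\delta^{-1}\leq \delta^{-2}, $$ and consequently $d^n \lesssim (\log R)^{\delta^{-2}} \lesssim R^{\delta}$ for $R$ large enough.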
\begin{lemma}\label{large r} When $r\geq R^{13/16}$, for any $p> 42/13$, $$ \|Ef\|_{BL^p(B_R)}\leq C_{\epsilon}R^{\epsilon}\|f\|_{L^2}^{2/p}\underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L_{avg}^{2}(\theta)}^{1-2/p}.$$ \end{lemma} \begin{proof} We apply inequality~\ref{tangential 1} on each cell $O\subseteq B_r$, inequality~\ref{precise transversal} and Lemma~\ref{geometric larry} at scale $r$: \begin{align*} \|Ef\|_{BL^p(B_R)}^p & \lesssim R^{\delta} \sum_O \|Ef_{O,\tang}\|_{BL^p(O)}^p \\ &\lesssim R^{\delta} r^{\frac{5}{2}-\frac{3p}{4}} \sum_O \|f_{O,\tang}\|_{L^2}^p\\ &\lesssim R^{\delta} r^{\frac{5}{2}-\frac{3p}{4}-\frac{p-2}{4} } D\|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}. \end{align*} When $r\geq R^{13/16}$, we have $D\leq R/r \leq r^{3/13}$, so the constant term is bounded by $C_{\epsilon}R^{\epsilon}$. \end{proof} \begin{lemma}\label{small r} When $r\leq R^{O(\epsilon_0)}$ with $\epsilon_0\ll \epsilon$, for any $p>3$, $$ \|Ef\|_{BL^p(B_R)}\leq C_{\epsilon}R^{\epsilon}\|f\|_{L^2}.$$ \end{lemma} \begin{proof} By inequality~\ref{precise transversal}, for most of the cells $O$, $\|f_{O}\|_{L^2}^2\lesssim \frac{DR^{O(\delta)}}{\#\{O\}} \|f\|_{L^2}^2$. We apply inequality~\ref{tangential 1} on each cell $O\subseteq B_r$: \begin{align*} \|Ef\|_{BL^p(B_R)}^p& \lesssim R^{O(\epsilon_0)} \#\{O\} \|Ef_{O,\tang}\|_{BL^p(O)}^p \\ &\lesssim R^{O(\epsilon_0)} \# \{O\} r^{\frac{5}{2}-\frac{3p}{4}}\|f_{O,\tang}\|_{L^2}^p \\ &\lesssim R^{O(\epsilon_0)} \# \{O\}\|f_{O}\|_{L^2}^p\\ & \lesssim R^{O(\epsilon_0)} \# \{O\}^{1-p/2} \cdot D^{p/2}\|f\|_{L^2}^p. \end{align*} Since $\#\{O\}\gtrsim R^{-\delta} D^3$ and $p>3$, the constant term is bounded by $R^{O(\epsilon_0)}\ll R^{\epsilon}$. \end{proof} After Lemma~\ref{large r} and Lemma~\ref{small r}, it suffices to consider when $R^{O(\epsilon_0)} \leq r\leq R^{13/16}$.
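The exponent arithmetic in Lemma~\ref{large r} can be checked directly (our computation): since $$\frac{5}{2}-\frac{3p}{4}-\frac{p-2}{4} = 3-p,$$ the constant in the last display of its proof is $R^{\delta}\, r^{3-p} D$. When $r\geq R^{13/16}$ we have $R\leq r^{16/13}$, hence $D\leq R/r\leq r^{3/13}$ and $$ r^{3-p}\, D \leq r^{3+\frac{3}{13}-p} = r^{\frac{42}{13}-p}, $$ which decays as a power of $r$ exactly when $p>42/13$.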
We apply Wolff's two-ends argument to reduce to analysing the interaction between wavepackets near the algebraic surfaces in far apart cells. We cover the whole $B_R$ with balls $B_k$ of radius $\rho=R^{1-\epsilon_0}$, where $\delta\ll \epsilon_0 \ll \epsilon$. We define a relation $\sim$ between balls $B_k$ and large tubes $T_{\theta,v}$ in the next subsection. The relation $\sim$ satisfies that, for any $T_{\theta,v}$, the number of balls $B_k$ with $B_k\sim T_{\theta,v}$ is bounded by $O_{\delta}(1) $. We choose $R$ large enough such that $O_{\delta}(1) \leq R^{\delta^2}$. For each $B_k$, we define $Ef_k^{\sim} = \underset{T_{\theta,v}\sim B_k}{\sum} Ef_{\theta,v}$ and $Ef_{k}^{\nsim} = Ef-Ef_{k}^{\sim}$. By the triangle inequality, $$\|Ef\|_{BL^p(B_R)}^p\lesssim \underset{B_k}{\sum}\|Ef_{k}^{\sim}\|_{BL^p(B_k)}^p + \underset{B_k}{\sum}\|Ef_{k}^{\nsim}\|_{BL^p(B_k)}^p.$$ \begin{lemma}\label{two ends2018} If $\underset{B_k}{\sum}\|Ef_{k}^{\sim}\|_{BL^p(B_k)}^p\gtrsim R^{-\delta}\|Ef\|_{BL^p(B_R)}^p$, assuming Theorem~\ref{main induction theorem} is true for balls of radius $\rho=R^{1-\epsilon_0}$, then $$\|Ef\|_{BL^p(B_R)}^p \leq C_{\epsilon}R^{\epsilon}\|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L_{avg}^2(\theta)}^{p-2}.$$ \end{lemma} \begin{proof} We apply the induction on scales of Theorem~\ref{main induction theorem} at scale $\rho$: \begin{align*} \|Ef\|_{BL^p(B_R)}^p & \lesssim R^{\delta}\underset{B_k}{\sum}\|Ef_{k}^{\sim}\|_{BL^p(B_k)}^p \\ &\lesssim R^{\delta} C_{\epsilon} \rho^{\epsilon} \underset{B_k}{\sum}\|f_{k}^{\sim}\|_{L^2}^2 \underset{|\theta'|=\rho^{-1/2}}{\max}\|f_{\theta'}\|_{L_{avg}^2(\theta')}^{p-2}\\ &\lesssim C_{\epsilon} R^{\epsilon} R^{-\epsilon \epsilon_0 +O(\delta)} \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L_{avg}^2(\theta)}^{p-2}. \end{align*} In the last step we used the fact that each tube is related to at most $R^{\delta^2}$ balls $B_k$, so $\sum_{B_k}\|f_{k}^{\sim}\|_{L^2}^2\lesssim R^{\delta^2}\|f\|_{L^2}^2$. Since $\delta\ll \epsilon_0 \ll \epsilon$, the constant is bounded by $C_{\epsilon}R^{\epsilon}$. \end{proof} Otherwise the second term dominates; we discuss this case in the next subsection.
\subsection{Analyse the brooms}\label{broom analysis} In this subsection, we give the full definition of the relation $B_k\sim T_{\theta,v}$ and discuss the case when the $Ef_{k}^{\nsim}$s dominate. Recall that after the polynomial partitioning iteration, we obtain a collection of $Ef_{O}$ and its tangential part $Ef_{O,\tang}$. Each $Ef_{O}$ restricted to $O\subseteq B_r$ is a sum of large wavepackets $Ef_{\theta,v}$ intersecting $O$ (this is because of the white lie). Each $Ef_{O,\tang}$ is essentially supported on the $r^{1/2}$--neighborhood of a degree $d$ algebraic surface inside a ball of radius $r$. For simplicity, we may assume that the algebraic surface is a plane. Lemma~\ref{plane} says that we can actually do so for the purposes of this paper. From the polynomial partitioning iteration, we know that $$\|Ef_{O,\tang}\|_{BL^p(O)}^p \gtrsim \|Ef_{O}\|_{BL^p(O)}^p \gtrsim (\# O)^{-1}R^{-\delta}\|Ef\|_{BL^p(B_R)}^p.$$ The number of cells is greater than $O(D^{3}R^{-\delta})$. For a cell $O\subseteq B_k$, we define $Ef^{\sim}_{O}=\underset{T_{\theta,v}\sim B_k, T_{\theta,v}\cap O\neq \emptyset}{\sum} Ef_{\theta,v}$ and $Ef^{\nsim}_{O}=\underset{T_{\theta,v}\nsim B_k, T_{\theta,v}\cap O\neq \emptyset}{\sum} Ef_{\theta,v}$. We define $Ef^{\nsim}_{O,\tang}$ to be the tangential part of $Ef^{\nsim}_{O}$ with respect to the polynomial partitioning for $Ef_{O}$. Since the $Ef^{\sim}_{k}$ do not dominate, we know that for most of the cells, $\|Ef^{\sim}_{O}\|_{BL^p(O)}^p\ll \|Ef_{O}\|_{BL^p(O)}^p$. Under our white lie assumption, for most of the cells, $\|Ef^{\nsim}_{O,\tang}\|_{BL^p(O)}^p \gtrsim \|Ef_{O,\tang}\|_{BL^p(O)}^p$.
We apply inequality~\ref{tangential 1} on $O\subseteq B_r$: $$\|Ef^{\nsim}_{O,\tang}\|_{BL^p(O)}^p \lesssim r^{\frac{5}{2}-\frac{3p}{4}}\|f^{\nsim}_{O,\tang}\|_{L^2}^p.$$ We would like to show that \begin{equation}\label{L2ave4nsim} \|f^{\nsim}_{O,\tang}\|_{L^2}^2 \leq R^{-1/2+O(\epsilon_0)}\underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^2. \end{equation} From Lemma~\ref{geometric larry}, we know that $\|f_{O,\tang}\|_{L^2}^2 \leq r^{-1/2+O(\delta)} \underset{|\tau|=r^{-1/2}}{\max}\|f_{O,\tang, \tau}\|_{L^2_{avg}(\tau)}^2$. For a typical $O$ and a typical $\tau$ inside the support of $ f_{O,\tang}$, we would like to compare $\|f_{O,\tang,\tau}\|_{L^2_{avg}(\tau)}^2$ with $\underset{\theta\subseteq \tau}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^2$. We discuss separately the case when $r\geq R^{1/2}$ and the case when $r\leq R^{1/2}$. \subsubsection{ The case when $r\geq R^{1/2}$.} The function $Ef_{O,\tang,\tau}$ is tangential to the $r^{1/2}$--neighborhood of a plane $\Sigma$, which we call a fat plane and denote by $S$. Recall that each $Ef_{\theta,v}$ has about the same $L^2$--norm or is zero. Fix a cell $O$ and the corresponding fat plane $S$; we decompose $S$ into a disjoint union of planks $T_{{\mathcal B}}$ of length $r$, width $R^{1/2}$ and thickness $r^{1/2}$. The direction of $T_{{\mathcal B}}$ is parallel to $G(\tau)$ up to angle difference $r^{-1/2}$. We decompose the cap $\tau$ into a union of parallel strips $s$ of length $r^{-1/2}$ and width $R^{-1/2}$. Each $s$ is parallel to the normal direction of $S$ up to angle difference $(\frac{R}{r})^{-1/2}$. We have the $L^2$--orthogonality on each plank $T_{{\mathcal B}}$: \begin{equation}\label{orthogonality on plank} \int_{T_{{\mathcal B}}} |Ef_{O,\tang,\tau}|^2 \lesssim \sum_s \int |Ef_{O,\tang,s}|^2 w_{T_{{\mathcal B}}}. \end{equation} The weight function $w_{T_{{\mathcal B}}}$ is essentially supported on $T_{{\mathcal B}}$ and decays rapidly elsewhere.
One might think of $w_{T_{{\mathcal B}}}$ as the characteristic function of $T_{{\mathcal B}}$ in a simpler model. \begin{definition}\label{broom definition} For each $s$ and each plank $T_{{\mathcal B}}$ of length $r$, width $R^{1/2}$ and thickness $r^{1/2}$, we define a \emph{broom} ${\mathcal B}$ as the collection of large wavepackets $Ef_{\theta,v}$ with $\theta\subseteq s$ and essential support satisfying $T_{\theta,v}\cap T_{{\mathcal B}}\neq \emptyset$. We call $T_{{\mathcal B}}$ the \emph{root} of the broom ${\mathcal B}$. We say that a broom ${\mathcal B}$ is rooted at a plane $\Sigma$ if the plank $T_{{\mathcal B}}$ is a subset of the $r^{1/2}$--neighborhood of $\Sigma$. \end{definition} \begin{remark}\label{broom organization} Let $B_r\subseteq B_R$ and $\tau$ be a cap of radius $r^{-1/2}$, and let $\Sigma$ be a plane intersecting $B_r$, parallel to $G(\tau)$ up to angle difference $r^{-1/2}$. The large wavepackets $Ef_{\theta,v}\in \mathbb{T}$ with $\theta\subset \tau$ and $T_{\theta,v}\cap\Sigma\neq\emptyset$ are organized into brooms, and each of those $T_{\theta,v}$ belongs to a unique broom rooted at $\Sigma$. Different brooms ${\mathcal B}$ might share the same $T_{{\mathcal B}}$. \end{remark} We rewrite inequality~\ref{orthogonality on plank} as follows: \begin{equation}\label{orthogonality of the broom} \int_{T_{{\mathcal B}}} |Ef_{O,\tang,\tau}|^2 \lesssim \sum_{\mathcal{B}} \int |Ef_{O,\tang,{\mathcal B}}|^2 w_{T_{{\mathcal B}}}, \end{equation} where ${\mathcal B}$ is the broom determined by the plank $T_{{\mathcal B}}$ and the strip $s$ in Definition~\ref{broom definition}, and $Ef_{O,\tang, {\mathcal B}}:=Ef_{O,\tang, s}$.
\begin{lemma}\label{broom} For any $ R^{1/2}\leq r\leq R$, if ${\mathcal B}$ is a broom of size $b$ with root $T_{{\mathcal B}}$ and $Eg$ is the sum of large wavepackets $Ef_{\theta,v}$ with essential support $T_{\theta,v}$ in ${\mathcal B}$, then $$\|Eg\|_{L^2(w_{T_{{\mathcal B}}})}^2 \lesssim (\frac{R}{r})^{-1/2} b \|Eg\|_{L^2(B_r)}^2.$$ \end{lemma} \begin{proof} When $r\geq R^{1/2}$, since $|Ef_{\theta,v}|$ is essentially constant on $T_{\theta,v}$, by the Cauchy--Schwarz inequality, \begin{align*} \int |Eg|^2 w_{T_{{\mathcal B}}} & = \int| \sum_{T_{\theta,v}\in\mathcal{B}} Eg_{\theta,v}|^2 w_{T_{{\mathcal B}}}\\ &\leq b \sum_{T_{\theta,v}\in\mathcal{B}} \int |Eg_{\theta,v}|^2 w_{T_{{\mathcal B}}}\\ &\lesssim \frac{|T_{{\mathcal B}}|}{Rr} b \sum_{T_{\theta,v} \in \mathcal{B}} \int_{B_r} |Eg_{\theta,v}|^2 . \end{align*} Since $|T_{{\mathcal B}}|\sim r\cdot R^{1/2}\cdot r^{1/2}$, we have $\frac{|T_{{\mathcal B}}|}{Rr}\sim (\frac{R}{r})^{-1/2}$, which gives the claimed bound. \end{proof} If $Ef_{O,\tang, \tau}$ is dominated by large brooms, then it has small $L^2$--norm. We sort the brooms ${\mathcal B}$ according to their size $b$. Since $0\leq b \leq (\frac{R}{r})^{1/2}$, there exists some dyadic number $b$ such that the brooms of size about $b$ dominate a $(\log R)^{-1}$ fraction of $\|Ef_{O,\tang,\tau}\|_{L^2(B_r)}^2$. \begin{lemma}\label{large brooms} Let $B_r\subset B_R$ and $r\geq R^{1/2}$, let $\tau$ be a cap of radius $r^{-1/2}$ and $\Sigma$ be a plane parallel to $G(\tau)$ up to angle difference $r^{-1/2}$. Let $N\Sigma$ be the $r^{1/2}$--neighborhood of $\Sigma$.
If $Eh_{\tau}$ is the sum of large wavepackets $Ef_{\theta,v}$ with $\theta\subset \tau$ organized into brooms of uniform size about $b$, then $$ \int_{N\Sigma} |Eh_{\tau}|^2 \lesssim (\frac{R}{r})^{-1/2} b\int_{B_r}|Eh_{\tau}|^2.$$ \end{lemma} \begin{proof} By inequality~\ref{orthogonality of the broom} and then by Lemma~\ref{broom}, \begin{align*} \int_{N\Sigma} |Eh_{\tau}|^2 &\lesssim \sum_{T_{\mathcal{B}} }\int_{T_{{\mathcal B}}} |Eh_{\tau}|^2 \\ &\lesssim \sum_{\mathcal{B}} \int |Eh_{\tau, {\mathcal B}}|^2 w_{T_{{\mathcal B}}}\\ &\lesssim \sum_{{\mathcal B}} (\frac{R}{r})^{-1/2} b \int_{B_r}|Eh_{\tau, {\mathcal B}}|^2\\ &\lesssim (\frac{R}{r})^{-1/2} b \int_{B_r}|Eh_{\tau}|^2. \end{align*} \end{proof} \begin{remark} We observe that the size of $\|f_{O,\tang, \tau}\|_{L^2}$ is influenced by two independent factors: the number of large wavepackets $Ef_{\theta,v}$ intersecting $\Sigma$ tangentially, and the size of the brooms they form. \end{remark} In other words, the size of the brooms gives an upper bound for the ratio of $\|f_{O,\tang,\tau}\|_{L^2}^2$ to $\|f_{O,\tau}\|_{L^2}^2$. The heuristic for the rest of the proof is the following. We define $T_{\theta,v}\sim B_k$ if $T_{\theta,v}$ belongs to a lot of large brooms with root inside $B_k$. Lemma~\ref{large brooms} says that if $Ef^{\nsim}_{O,\tang}$ has large $L^2$--norm, then the large wavepackets in $Ef^{\nsim}_{O}$ are organized in the form of large brooms. In addition, the way we define the relation $\nsim$ says that each large wavepacket in $Ef^{\nsim}_{O}$ belongs to a lot of large brooms with roots far apart. Since each large wavepacket has about the same $L^2$--norm, it is difficult for the plane $\Sigma$ related to $O$ to capture a large proportion of those large wavepackets. We sort the planes $\Sigma$ into $O(1)$ collections according to their normal directions, such that any two planes in the same collection have normal directions within $1/100$ of each other.
There exists a collection containing a significant fraction of the planes. We consider only planes in this collection. In this white lie proof, we discuss the following special case: \begin{itemize} \item for each $\Sigma$, all the brooms $\mathcal{B}$ rooted at $\Sigma$ have about the same size $b$; \item each tube $T_{\theta,v}$ intersects about $\gamma$ planes $\Sigma$. \end{itemize} The main idea of counting wavepackets using brooms is included in this special case. We deal with the general case in Section~\ref{broom section}. Since each tube $T_{\theta,v}$ intersecting $\Sigma$ belongs to a unique broom $\mathcal{B}$ rooted at $\Sigma$, it belongs to a broom of size about $b$. We define the function $\chi(T_{\theta,v}, \Sigma)=1$ if $T_{\theta,v}$ intersects $\Sigma$, and $\chi(T_{\theta,v}, \Sigma)=0$ otherwise. Since we are in the special case described above, for any tube $T_{\theta,v}$, we have: \begin{itemize} \item $\underset{T_{\theta,v}'\in \mathcal{B}}{\sum} \chi(T_{\theta,v}',\Sigma)\sim b$ for the broom $\mathcal{B}$ containing $T_{\theta,v}$ rooted at $\Sigma$; \item $\underset{\Sigma}{\sum} \chi(T_{\theta,v},\Sigma)\sim \gamma$, where we sum over all the planes intersecting $T_{\theta,v}$. \end{itemize} \begin{definition} For each $T_{\theta,v}$, let $B_k^*$ be the ball that maximizes $\underset{\Sigma\cap O\subseteq B_k}{\sum}\chi(T_{\theta,v}, \Sigma)$. If there are multiple maximizers, we choose only one. We define $T_{\theta,v}\sim B_k$ if $B_k \subseteq 10 B_k^*$, and $T_{\theta,v}\nsim B_k$ otherwise. \end{definition} Each tube $T_{\theta,v}$ is related to at most $O(1)$ balls $B_k$. \begin{lemma}\label{kappa} For the special case described in the last three paragraphs, when $R^{1/2}\leq r\leq R^{1-\epsilon_0} $, \begin{equation} \|f^{\nsim}_{O,\tang}\|_{L^2}^2 \lesssim R^{-1/2+O(\epsilon_0)} \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^2. \end{equation} \end{lemma} Lemma~\ref{kappa} is a direct corollary of Lemma~\ref{kappa tau}.
\begin{proof} Since $Ef^{\nsim}_{O,\tang}$ is tangential to a plane $\Sigma$, the support of $f^{\nsim}_{O,\tang}$ lies inside the $r^{-1/2}$--neighborhood of a parabola. We then apply Lemma~\ref{kappa tau}: \begin{align*} \|f^{\nsim}_{O,\tang}\|_{L^2}^2 & \lesssim r^{-1/2} \underset{|\tau|=r^{-1/2}}{\max} \|f^{\nsim}_{O,\tang, \tau}\|_{L^2}^2\\ &\lesssim R^{-1/2+O(\epsilon_0)} \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^2. \end{align*} \end{proof} \begin{lemma}\label{kappa tau} Under the same assumptions as in Lemma~\ref{kappa}, $$\|f^{\nsim}_{O,\tang,\tau}\|_{L^2_{avg}(\tau)}^2 \lesssim (\frac{R}{r})^{-1/2}R^{O(\epsilon_0)} \underset{\theta\subseteq\tau}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^2.$$ \end{lemma} \begin{remark} The assumption $r\geq R^{1/2}$ is used in defining the brooms and in Lemma~\ref{broom}. We also need $r\leq R^{1-\epsilon_0}$ so that each cell $O$ lies inside a $B_k$ and $f^{\nsim}_{O}$ is well defined. The case when $r\geq R^{1-\epsilon_0}$ is included in Lemma~\ref{large r}. \end{remark} \begin{proof} We assume that there are about $(\frac{R}{r})^{\beta_0}$ nonzero wavepackets $Ef_{\theta,v}$ with $\theta\subseteq \tau$. By our assumption, a wavepacket is either zero or has about the same $L^2$--norm as the others. Let $B_k$ be the ball of radius $R^{1-\epsilon_0}$ containing $O$ and let $\Sigma_1=\Sigma$ be the plane associated to $O$. We say that $\Sigma_2\nsubseteq B_k$ if $\Sigma_2$ is associated to some cell $O_2$ outside of $5B_k$. The main idea is to double count the number of wavepackets shared by $\Sigma_1$ and far-apart planes $\Sigma_2$; specifically, the quantity \begin{equation}\label{double count2018} \underset{\Sigma_2\nsubseteq B_k}{\sum}\underset{\theta\subseteq \tau, v}{\sum} \chi (T_{\theta,v}, \Sigma_1)\chi (T_{\theta,v}, \Sigma_2).
\end{equation} For each $T_{\theta,v}\nsim B_k$, \begin{equation}\label{nsim 2018} \underset{\Sigma_2\nsubseteq B_k}{\sum} \chi (T_{\theta,v}, \Sigma_2)\gtrsim \underset{\Sigma'}{\sum} \chi (T_{\theta,v}, \Sigma'), \end{equation} otherwise the $B_k^*$ that maximizes $\underset{\Sigma'\cap O\subseteq B_k^*}{\sum} \chi(T_{\theta,v}, \Sigma')$ would belong to $5B_k$, and then $T_{\theta,v}\sim B_k$ by the definition of the relation $\sim$. This is the only step where we use the fact that $Ef^{\nsim}_O$ consists of the wavepackets with $T_{\theta,v}\nsim B_k$. By our special case assumption, \begin{equation}\label{special case} \underset{\Sigma'}{\sum} \chi(T_{\theta,v}, \Sigma')\gtrsim \gamma. \end{equation} Assume that there are $(\frac{R}{r})^{\beta_1}$ nonzero wavepackets $Ef_{\theta,v}$ with $\theta\subseteq \tau$ and $T_{\theta,v}\nsim B_k$ intersecting $\Sigma_1$. We have the following lower bound for the quantity in \ref{double count2018}, by combining inequality~\ref{nsim 2018} and inequality~\ref{special case}: \begin{equation}\label{lower bound 2018} \underset{\Sigma_2\nsubseteq B_k}{\sum}\underset{\theta\subseteq \tau, v}{\sum} \chi (T_{\theta,v}, \Sigma_1)\chi (T_{\theta,v}, \Sigma_2)\gtrsim \gamma (\frac{R}{r})^{\beta_1}. \end{equation} Next we are going to give an upper bound for the quantity in \ref{double count2018}. Here we need to apply the following geometric observation. When $O_1$ and $O_2$ are $R^{1-\epsilon_0}$ apart and the normals of the corresponding $\Sigma_1$ and $\Sigma_2$ have angle difference within $1/100$, a broom rooted at $\Sigma_2$ can intersect $\Sigma_1$ in at most $R^{O(\epsilon_0)}$ tubes $T_{\theta,v}$. This is because a broom rooted at $\Sigma_2$ spans in the normal direction of $\Sigma_2$. Since $O_1$ and $O_2$ have distance at least $R^{1-\epsilon_0}$, near $O_1$ the wavepackets in the broom rooted at $\Sigma_2$ are almost disjoint (up to $R^{O(\epsilon_0)}$ overlap).
Because $\Sigma_1$ and $\Sigma_2$ have angle difference within $1/100$, a broom rooted at $\Sigma_2$ intersects $\Sigma_1$ transversally. In our special case, the wavepackets intersecting $\Sigma_2$ are organized into brooms of about uniform size $b$. Hence in direction $G(\tau)$, the number of wavepackets $T_{\theta,v}$ shared by $\Sigma_1$ and $\Sigma_2$ is an $R^{O(\epsilon_0)} b^{-1}$ fraction of the number of wavepackets intersecting $\Sigma_2$. For each $\Sigma_2\nsubseteq B_k$, we have \begin{equation}\label{a pair of planes} \underset{\theta\subseteq \tau, v}{\sum} \chi (T_{\theta,v}, \Sigma_1)\chi (T_{\theta,v}, \Sigma_2)\lesssim R^{O(\epsilon_0)} b^{-1} \underset{\theta\subseteq \tau, v}{\sum} \chi(T_{\theta,v}, \Sigma_2). \end{equation} In our special case, each wavepacket $T_{\theta,v}$ satisfies $\underset{\Sigma_2}{\sum} \chi(T_{\theta,v}, \Sigma_2)\sim \gamma$. There are about $(\frac{R}{r})^{\beta_0}$ nonzero wavepackets $Ef_{\theta,v}$ with $\theta\subseteq \tau$, so \begin{equation}\label{one tube} \underset{\theta\subseteq \tau, v}{\sum} \underset{\Sigma_2}{\sum} \chi (T_{\theta,v}, \Sigma_2) \lesssim \gamma (\frac{R}{r})^{\beta_0}. \end{equation} Summing over $\Sigma_2\nsubseteq B_k$ in inequality~\ref{a pair of planes} and then applying inequality~\ref{one tube}, we obtain the following upper bound for the quantity in \ref{double count2018}: \begin{equation}\label{upper bound 2018} \underset{\Sigma_2\nsubseteq B_k}{\sum} \underset{\theta\subseteq \tau, v}{\sum} \chi (T_{\theta,v}, \Sigma_1)\chi(T_{\theta,v}, \Sigma_2)\lesssim R^{O(\epsilon_0)} (\frac{R}{r})^{\beta_0} \gamma b^{-1}. \end{equation} Comparing the lower bound~\ref{lower bound 2018} and the upper bound~\ref{upper bound 2018} for the quantity in \ref{double count2018}, we get \begin{equation}\label{counting2018} (\frac{R}{r})^{\beta_1}b\lesssim R^{O(\epsilon_0)} (\frac{R}{r})^{\beta_0}.
\end{equation} We apply Lemma~\ref{large brooms} with $Eh_{\tau}= Ef^{\nsim}_{O,\tau}$; since $Ef^{\nsim}_{O,\tang, \tau}$ is essentially supported in the $r^{1/2}$--neighborhood of $\Sigma$, $$\int_{B_r} |Ef^{\nsim}_{O,\tang, \tau}|^2 \lesssim (\frac{R}{r})^{-1/2} b \int_{B_r} |Ef^{\nsim}_{O,\tau}|^2.$$ Since $\|Ef^{\nsim}_{O,\tang, \tau}\|_{L^2(B_r)}^2 \sim r\|f^{\nsim}_{O,\tang, \tau}\|_{L^2}^2$, we have $$ \|f^{\nsim}_{O,\tang,\tau}\|_{L^2}^2 \lesssim (\frac{R}{r})^{-1/2} b \|f^{\nsim}_{O,\tau}\|_{L^2}^2.$$ There are $(\frac{R}{r})^{\beta_1}$ out of $(\frac{R}{r})^{\beta_0}$ nonzero large wavepackets $Ef_{\theta,v}$ with $\theta\subseteq \tau$ intersecting $\Sigma_1$ and satisfying $T_{\theta,v}\nsim B_k$, hence $$\|f^{\nsim}_{O,\tau}\|_{L^2}^2 \lesssim (\frac{R}{r})^{\beta_1-\beta_0} \|f_{\tau}\|_{L^2}^2.$$ Together with inequality~\ref{counting2018}, $$\|f^{\nsim}_{O,\tang, \tau}\|_{L^2}^2 \lesssim R^{O(\epsilon_0)} (\frac{R}{r})^{-1/2} \|f_{\tau}\|_{L^2}^2. $$ \end{proof} \begin{lemma}\label{large r end} When $R^{1/2}\leq r\leq R^{1-\epsilon_0}$ and $p>3+ \frac{3}{13}$, $$\|Ef\|_{BL^p(B_R)}^p \leq C_{\epsilon} R^{\epsilon}\|f\|_{L^2}^2\underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}.$$ \end{lemma} \begin{proof} When $Ef^{\sim}_{O}$ dominates $Ef_{O}$ for most of the cells $O$, by our white lie assumption and the definition of $f_{O}$ in \ref{ideal definition}, $$\|Ef^{\sim}_{k}\|_{BL^p(B_k)}^p \gtrsim \sum_{O\subseteq B_k}\|Ef^{\sim}_{O}\|_{BL^p(O)}^p-\RapDec(R)\|f\|_{L^2}^p.$$ Since $r\leq R^{1-\epsilon_0}$, each $O$ lies completely inside some $B_k$. We apply Lemma~\ref{two ends2018}: \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{O(\delta)} \sum_{O}\|Ef^{\sim }_{O}\|_{BL^p(O)}^p \\ &\lesssim R^{O(\delta)} \sum_{B_k} \|Ef^{\sim}_{k}\|_{BL^p(B_k)}^p+\RapDec(R)\|f\|_{L^2}^p \\ &\leq C_{\epsilon} R^{\epsilon} \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}.
\end{align*} When $Ef^{\nsim}_{O}$ dominates $Ef_{O}$ for most of the cells $O$, we have $Ef^{\nsim}_{O}=Ef^{\nsim}_{O,\tang}$ by the white lie assumption. We apply inequality~\ref{tangential 1}: \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{O(\delta)} \sum_O \|Ef^{\nsim}_{O,\tang}\|_{BL^p(O)}^p\\ & \lesssim R^{O(\delta)}\sum_O r^{\frac{5}{2}-\frac{3p}{4}} \|f^{\nsim}_{O,\tang}\|_{L^2}^p. \end{align*} We apply Lemma~\ref{L2 orthogonality for nsim}: \begin{equation}\label{cellulartrans L2} \sum_O \|f^{\nsim}_{O,\tang}\|_{L^2}^2 \leq \sum_O \|f^{\nsim}_{O}\|_{L^2}^2\lesssim DR^{\delta}\|f\|_{L^2}^2. \end{equation} Combining with Lemma~\ref{kappa}, we obtain one estimate: \begin{align} \| Ef\|_{BL^p(B_R)}^p & \lesssim R^{O(\epsilon_0)} r^{\frac{5}{2}-\frac{3p}{4}} R^{-\frac{p-2}{4}} \sum_{O}\|f^{\nsim}_{O,\tang}\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}\\ \label{tangetial bound big r} &\lesssim R^{O(\epsilon_0)} r^{\frac{5}{2}-\frac{3p}{4}} R^{-\frac{p-2}{4}} D \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}. \end{align} We have another estimate from inequality~\ref{cellulartrans L2} and inequality~\ref{average cell}: \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{O(\delta)} \#\{O\} \|Ef^{\nsim}_{O,\tang}\|_{BL^p(O)}^p\\ &\lesssim R^{O(\delta)} D^{\frac{p}{2}} \#\{O\}^{1-\frac{p}{2}} r^{\frac{5}{2}-\frac{3p}{4}}\|f\|_{L^2}^p. \end{align*} Recalling from inequality~\ref{number of cells} that $\#\{O\}\gtrsim R^{-O(\delta)} D^3$, we have \begin{equation}\label{cellulartrans bound big r} \|Ef\|_{BL^p(B_R)}^p \lesssim R^{O(\delta)} D^{3-p}r^{\frac{5}{2}-\frac{3p}{4}}\|f\|_{L^2}^p. \end{equation} Combining estimate~\ref{tangetial bound big r} with estimate~\ref{cellulartrans bound big r}, the worst case happens when $D^{3-p}=D R^{-\frac{p-2}{4}}$. In other words, $D^4=R$.
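The worst case can be solved explicitly (our arithmetic): equating the two bounds, $$ D^{3-p} = D\, R^{-\frac{p-2}{4}} \iff D^{\,p-2} = R^{\frac{p-2}{4}} \iff D = R^{1/4}, $$ which is the relation $D^4=R$.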
From the definition of $r$, we know that $r\leq \frac{R}{D}\leq D^3$, so \begin{align*} \| Ef\|_{BL^p(B_R)}^p&\lesssim R^{O(\epsilon_0)} r^{\frac{5}{2}-\frac{3p}{4}} R^{-\frac{p-2}{4}} D \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}\\ &\lesssim R^{O(\epsilon_0)} D^{3(\frac{5}{2}-\frac{3p}{4})} D^{-(p-2)+1} \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}\\ &\lesssim R^{O(\epsilon_0)} D^{\frac{21}{2}-\frac{13p}{4}} \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}. \end{align*} When $p>\frac{42}{13}$, the constant term is bounded by $R^{\epsilon}$. \begin{lemma}\label{L2 orthogonality for nsim} If $O\subseteq B_r$ are cells with $R^{\delta}\leq r\leq R^{1-\epsilon_0}$, then $\sum_{O} \|f^{\nsim}_{O}\|_{L^2}^2 \lesssim DR^{\delta}\|f\|_{L^2}^2.$ \end{lemma} \begin{proof} Recall that inequality~\ref{precise transversal} says that $$\sum_{O}\|f_{O}\|_{L^2}^2 \lesssim DR^{\delta}\|f\|_{L^2}^2.$$ When $r\geq R^{1/2}$, for each $O\subseteq B_k$ we have \begin{align*} \|f^{\nsim}_{O}\|_{L^2}^2 &=\underset{T_{\theta,v}\nsim B_k, T_{\theta,v}\cap O\neq \emptyset}{\sum}\|f_{\theta,v}\|_{L^2}^2 \\ &\lesssim \underset{T_{\theta,v}\cap O\neq \emptyset}{\sum} \|f_{\theta,v}\|_{L^2}^2\\ &=\|f_{O}\|_{L^2}^2. \end{align*} When $r\leq R^{1/2}$, it suffices to find an intermediate step cell $O_j$ such that the corresponding radius satisfies $R^{1/2}\leq r_j\leq R^{1-\epsilon_0}$. Assume that there are $n_1$ cellular steps and $m_1$ transversal steps between $O$ and $O_j$; then there are at most $(n-n_1)$ cellular steps and $(m-m_1)$ transversal steps before $O_j$. Since $O_j\subseteq B_k$, the $f^{\nsim}_{O}$ for all $O\subset O_j$ come from the same function $f^{\nsim}_{O_j}$.
We apply inequality~\ref{ideal cellular bound} and inequality~\ref{ideal transversal bound}, \begin{align*} \sum_{O}\|f^{\nsim}_{O}\|_{L^2}^2 & =\sum_{O_j} \sum_{O\subset O_j} \|f^{\nsim}_{O}\|_{L^2}^2\\ &\lesssim d^{n_1} \Poly(d)^{m_1} \sum_{O_j} \|f^{\nsim}_{O_j}\|_{L^2}^2\\ &\lesssim d^{n_1}\Poly(d)^{m_1}\sum_{O_j}\|f_{O_j}\|_{L^2}^2 \\ &\lesssim d^n \Poly(d)^m \|f\|_{L^2}^2 \leq D R^{\delta}\|f\|_{L^2}^2. \end{align*} \end{proof} \end{proof} \subsubsection{The case when $r\leq R^{1/2}$} Now we discuss the case when $R^{\delta}\leq r\leq R^{1/2}$. Notice that in this case, a cell can be completely contained in a large tube $T_{\theta,v}$. In particular, two cells at distance $R^{1-\epsilon_0}$ can share at most $R^{O(\epsilon_0)}$ wavepackets. Since $r\leq R^{1/2}$, we shall define a bush structure and use bushes to count wavepackets. The arguments are similar but simpler. \begin{definition}\label{bush} If $r\leq R^{1/2}$, fix a cell $O$ and a cap $\tau$ of radius $r^{-1/2}$; a \emph{bush} $\mathcal{U}$ is the collection of nonzero large wavepackets $Ef_{\theta,v}$ with $T_{\theta,v}\cap O\neq \emptyset$ and $\theta\subseteq \tau$. We say that $\mathcal{U}$ is a bush rooted at the cell $O$. \end{definition} As in Lemma~\ref{large brooms}, the size of a bush controls the $L^2$--norm near a plane. We note that Lemma~\ref{bush estimate} is only useful when the size of the bush is smaller than $r^{1/2}$. \begin{lemma}\label{bush estimate} If $r\leq R^{1/2}$ and $\mathcal{U}$ is a bush of size $u$ rooted at a cell $O\subseteq B_r$ in direction $\tau$, and $g_{\mathcal{U}}$ is the sum of the wavepackets in the bush, then for any plane $\Sigma$ intersecting $B_r$ and its $r^{1/2}$--neighborhood $N\Sigma$, $$ \|Eg_{\mathcal{U}}\|_{L^2(N\Sigma)}^2 \leq r^{-1/2} u \int_{B_r}|Eg_{\mathcal{U}}|^2.$$ \end{lemma} \begin{proof} We decompose $g_{\mathcal{U}}=\underset{|\theta'|=r^{-1}, v'}{\sum} g_{\theta',v'}$.
Since there are at most $u$ nonzero wavepackets $Ef_{\theta,v}$ in the bush $\mathcal{U}$, the number of $\theta'$ such that $g_{\theta',v'}\neq 0$ is at most $u$. We apply the Cauchy--Schwarz inequality, \begin{align*} \int_{N\Sigma} |Eg_{\mathcal{U}}|^2 &\leq \int_{N\Sigma} |\sum_{|\theta'|=r^{-1} } Eg_{\theta',v'}|^2 \\ &\leq u \int_{N\Sigma} \sum_{|\theta'|=r^{-1}} |Eg_{\theta',v'}|^2 \\ &\lesssim r^{-1/2} u\int_{B_r} \sum_{|\theta'|=r^{-1}} |Eg_{\theta',v'}|^2\\ &\lesssim r^{-1/2}u\int_{B_r}|Eg_{\mathcal{U}}|^2. \end{align*} Here we used the fact that each $|Eg_{\theta', v'}|$ is essentially constant on $B_r$. \end{proof} We would like to show that a typical bush in $Ef^{\nsim}_O$ is small. We discuss the following special case and leave the general case to Section~\ref{broom section}: \begin{itemize} \item for a fixed cap $\tau$ of radius $r^{-1/2}$, every bush in the direction $G(\tau)$ has size about $u$, \item every nonzero wavepacket $Ef_{\theta,v}$ with $\theta\subseteq \tau$ intersects about $\gamma$ cells, in other words, belongs to about $\gamma$ bushes. \end{itemize} We define $\chi(T_{\theta,v}, O)=1$ if $T_{\theta,v}$ intersects $O$, otherwise $\chi(T_{\theta,v}, O)=0$. In our special case, we have \begin{itemize} \item if the cap $\tau\subseteq \mathrm{supp}\, f^{\nsim}_{O,\tang}$, then $\underset{\theta\subseteq \tau, v}{\sum} \chi(T_{\theta,v}, O)\sim u$, \item for a fixed nonzero wavepacket $Ef_{\theta,v}$, $\underset{O}{\sum} \chi(T_{\theta,v}, O)\sim \gamma$. \end{itemize} \begin{definition} For a fixed nonzero wavepacket $Ef_{\theta,v}$, let $B_k^*$ be the ball of radius $R^{1-\epsilon_0}$ that maximizes the quantity $\underset{O\subseteq B_k}{\sum} \chi(T_{\theta,v}, O)$. We define $T_{\theta,v}\sim B_k$ if $B_k\subseteq 10B_k^*$; otherwise we say $T_{\theta,v}\nsim B_k$. \end{definition} A nonzero wavepacket is related to at most $O(1)$ balls $B_k$.
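As a heuristic consistency check in this special case (it is not used directly in the arguments below), the parameters $u$ and $\gamma$ are linked by double counting the incidences between cells and wavepackets in the direction of $\tau$: summing $\chi(T_{\theta,v},O)$ first over wavepackets and then over cells, and vice versa, gives
$$\underset{O}{\sum}\, \underset{\theta\subseteq \tau, v}{\sum} \chi(T_{\theta,v}, O)\ \sim\ \gamma\cdot \#\{(\theta,v): \theta\subseteq\tau,\ Ef_{\theta,v}\neq 0\}\ \sim\ u\cdot \#\{O: \tau\subseteq \mathrm{supp}\, f^{\nsim}_{O,\tang}\},$$
so $u$ and $\gamma$ cannot be prescribed independently.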
We prove the following Lemma~\ref{iota} in analogy with Lemma~\ref{kappa}; it is a direct corollary of Lemma~\ref{iota tau}. The proof of Lemma~\ref{iota} from Lemma~\ref{iota tau} is the same as the proof of Lemma~\ref{kappa}, so we omit it here. \begin{lemma}\label{iota} In the special case described above, when $r\leq R^{1/2}$, $$\|f^{\nsim}_{O,\tang}\|_{L^2}^2 \lesssim r^{-1} R^{O(\epsilon_0)} \underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}^2.$$ \end{lemma} \begin{lemma}\label{iota tau} Under the same assumptions as in Lemma~\ref{iota}, $$\|f^{\nsim}_{O,\tang,\tau}\|_{L^2_{avg}(\tau)}^2 \lesssim r^{-1/2}R^{O(\epsilon_0)} \underset{|\theta|=R^{-1/2}, \theta\subseteq \tau}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}^2.$$ \end{lemma} \begin{proof} We apply similar arguments as in Lemma~\ref{kappa tau}. We count the number of large wavepackets in the direction $G(\tau)$ shared by two far-apart cells $O_1=O$ and $O_2$, specifically \begin{equation}\label{double count for iota2018} \underset{O_2\nsubseteq 5B_k}{\sum} \underset{\theta\subseteq \tau, v}{\sum} \chi(T_{\theta,v}, O_1)\chi(T_{\theta,v}, O_2). \end{equation} For each tube $T_{\theta,v}\nsim B_k$, we have \begin{equation}\label{nsim tube iota} \underset{O_2\nsubseteq 5B_k}{\sum}\chi (T_{\theta,v}, O_2)\gtrsim \underset{O'}{\sum} \chi(T_{\theta,v}, O'). \end{equation} Otherwise, the ball $B_k^*$ that maximizes $\underset{O\subseteq B_k^*}{\sum}\chi(T_{\theta,v}, O)$ would be contained in $5B_k$, violating the assumption that $T_{\theta,v}\nsim B_k$. In our special case, we have \begin{equation}\label{one tube iota} \underset{O'}{\sum} \chi(T_{\theta,v}, O')\gtrsim \gamma. \end{equation} Assume that there are $(\frac{R}{r})^{\beta_1}$ nonzero wavepackets $Ef_{\theta,v}$ such that $\theta\subseteq \tau$ and $T_{\theta,v}\nsim B_k$ intersecting $O_1$.
Combining inequality~\ref{nsim tube iota} with inequality~\ref{one tube iota}, we obtain a lower bound for \ref{double count for iota2018}, \begin{equation}\label{lower bound for iota 2018} \underset{O_2\nsubseteq 5B_k}{\sum} \underset{\theta\subseteq \tau, v}{\sum} \chi(T_{\theta,v}, O_1)\chi(T_{\theta,v}, O_2)\gtrsim (\frac{R}{r})^{\beta_1}\gamma. \end{equation} We point out that $(\frac{R}{r})^{\beta_1}$ might be smaller than the size $u$ of the bush, since we added the extra condition $T_{\theta,v}\nsim B_k$. Next we give an upper bound for the quantity~\ref{double count for iota2018}. For a fixed pair of cells $O_1$ and $O_2$ at distance $R^{1-\epsilon_0}$, each one inside a ball of radius $r\leq R^{1/2}$, the number of large wavepackets shared by the two cells is at most $R^{O(\epsilon_0)}$. Since the bush in direction $G(\tau)$ rooted at $O_2$ has size about $u$, for a pair of far-apart cells $O_1$ and $O_2$, \begin{equation}\label{one pair of cells iota} \underset{\theta\subseteq\tau, v}{\sum} \chi(T_{\theta,v}, O_1)\chi(T_{\theta,v}, O_2)\lesssim R^{O(\epsilon_0)} u^{-1} \underset{\theta\subseteq\tau, v}{\sum} \chi(T_{\theta,v}, O_2). \end{equation} In our special case, for each nonzero wavepacket $Ef_{\theta,v}$, we have $\underset{O'}{\sum} \chi(T_{\theta,v}, O')\sim \gamma$. Assume that there are $(\frac{R}{r})^{\beta_0}$ nonzero wavepackets $Ef_{\theta,v}$ with $\theta\subseteq \tau$; then \begin{equation}\label{sum over O2} \underset{\theta\subseteq \tau, v}{\sum}\underset{O'}{\sum} \chi(T_{\theta,v}, O') \lesssim \gamma (\frac{R}{r})^{\beta_0}.
\end{equation} We sum inequality~\ref{one pair of cells iota} over all the cells $O_2\nsubseteq 5B_k$ and apply inequality~\ref{sum over O2} to obtain the following upper bound for \ref{double count for iota2018}, \begin{equation}\label{upper bound for iota 2018} \underset{O_2\nsubseteq 5B_k}{\sum} \underset{\theta\subseteq \tau, v}{\sum}\chi(T_{\theta,v}, O_1)\chi(T_{\theta,v}, O_2)\lesssim R^{O(\epsilon_0)} u^{-1} (\frac{R}{r})^{\beta_0} \gamma. \end{equation} Comparing inequality~\ref{lower bound for iota 2018} with inequality~\ref{upper bound for iota 2018}, we obtain \begin{equation}\label{counting for iota} (\frac{R}{r})^{\beta_1-\beta_0} u \lesssim R^{O(\epsilon_0)}. \end{equation} We apply Lemma~\ref{bush estimate} with $g_{\mathcal{U}}= f^{\nsim}_{O,\tau}$, $$\|f^{\nsim}_{O,\tang,\tau}\|_{L^2}^2 \lesssim r^{-1/2} u\|f^{\nsim}_{O, \tau}\|_{L^2}^2.$$ For the fixed $\tau$, there are $(\frac{R}{r})^{\beta_1}$ nonzero wavepackets $Ef_{\theta,v}$ with $T_{\theta,v}\nsim B_k$ intersecting $O$, hence $$\|f^{\nsim}_{O,\tau}\|_{L^2}^2 \lesssim (\frac{R}{r})^{\beta_1-\beta_0}\|f_{\tau}\|_{L^2}^2.$$ We apply inequality~\ref{counting for iota}, $$\|f^{\nsim}_{O,\tang,\tau}\|_{L^2}^2 \lesssim r^{-1/2}R^{O(\epsilon_0)} \|f_{\tau}\|_{L^2}^2.$$ \end{proof} After Lemma~\ref{two ends2018}, it suffices to consider the case when $Ef^{\nsim}_O$ dominates. Again by the white lie assumption, $Ef_{O, \trans}$ is zero, so we may assume that $Ef^{\nsim}_{O,\tang} = Ef^{\nsim}_O$. This is not true in general, and we will treat it carefully in the following sections. \begin{lemma}\label{small r2018allD} If $r\leq R^{1/2}$ and $Ef^{\nsim}_{O,\tang}$ dominates for most of the cells $O$, then for any $p> 3+1/5$ and any small $\epsilon >0$, $$ \|Ef\|_{BL^p(B_R)}\leq C_{\epsilon}R^{\epsilon}\|f\|_{L^2}^{2/p}\underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L_{avg}^{2}(\theta)}^{1-2/p}.$$ \end{lemma} \begin{proof} The proof splits into two cases. The first case is $D\geq r^{1/2}$.
We need only the information from the polynomial partitioning. We apply inequality~\ref{average cell} and inequality~\ref{tangential 1}: for any cell $O$, \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{O(\delta)} \#\{O\}\|Ef_{O,\tang}\|_{BL^p(O)}^p \\ &\lesssim R^{O(\delta)}r^{\frac{5}{2}-\frac{3p}{4}}\#\{O\}\|f_{O,\tang}\|_{L^2}^p. \end{align*} By inequality~\ref{cellulartrans L2}, $\sum_{O}\|f_{O,\tang}\|_{L^2}^2\lesssim D R^{\delta} \|f\|_{L^2}^2$. There exists a cell $O$ such that $$\|f_{O,\tang}\|_{L^2}^2\lesssim DR^{\delta} \#\{O\}^{-1} \|f\|_{L^2}^2.$$ We plug in the $L^2$--estimate for $\|f_{O,\tang}\|_{L^2}^2$, \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{O(\delta)}r^{\frac{5}{2}-\frac{3p}{4}}\#\{O\}^{1-p/2} D^{p/2}\|f\|_{L^2}^p\\ &\lesssim D^{3-p} R^{O(\delta)}r^{\frac{5}{2}-\frac{3p}{4}} \|f\|_{L^2}^p. \end{align*} We used the property that there are more than $ D^3 R^{-\delta}$ cells $O$. Since $D\geq r^{1/2}$ and $p>3$, \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{O(\delta)} r^{\frac{3-p}{2}+\frac{5}{2}-\frac{3p}{4} } \|f\|_{L^2}^p\\ &\lesssim R^{O(\delta)} r^{4-\frac{5p}{4}}\|f\|_{L^2}^p. \end{align*} When $p> 16/5$, the constant term is bounded by $R^{\epsilon}$. The second case is $D\leq r^{1/2}$, where we apply the extra information we have from the bush structure. By assumption, $\|Ef_{O}\|_{BL^p(O)}$ is dominated by $\|Ef^{\nsim}_{O,\tang}\|_{BL^p(O)}$ for most of the cells. We apply inequality~\ref{tangential 1}, \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{\delta} \sum_{O}\|Ef^{\nsim}_{O,\tang}\|_{BL^p(O)}^p \\ &\lesssim R^{\delta} r^{\frac{5}{2}-\frac{3p}{4}} \sum_{O}\|f^{\nsim}_{O,\tang}\|_{L^2}^p.
\end{align*} By Lemma~\ref{geometric larry} and Lemma~\ref{iota}, \begin{align*} \|f^{\nsim}_{O,\tang}\|_{L^2}^2 &\lesssim r^{-1/2+O(\delta)} \underset{|\tau|=r^{-1/2}}{\max}\|f^{\nsim}_{O,\tang, \tau}\|_{L^2_{avg}(\tau)}^2 \\ &\lesssim r^{-1}R^{O(\epsilon_0)} \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^2. \end{align*} We plug the estimate for $\|f^{\nsim}_{O,\tang}\|_{L^2}$ into the previous inequality, $$\|Ef\|_{BL^p(B_R)}^p \lesssim R^{O(\epsilon_0)} r^{\frac{5}{2}-\frac{3p}{4} -\frac{p-2}{2}} \sum_{O}\|f^{\nsim}_{O,\tang}\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}.$$ By Lemma~\ref{L2 orthogonality for nsim}, $$\|Ef\|_{BL^p(B_R)}^p \lesssim R^{O(\epsilon_0)} D r^{\frac{5}{2}-\frac{3p}{4} -\frac{p-2}{2}} \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}.$$ Since we only consider the case $D\leq r^{1/2}$, $$\|Ef\|_{BL^p(B_R)}^p \lesssim R^{O(\epsilon_0)} r^{4-\frac{5p}{4}} \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}. $$ When $p> 16/5$, the constant term is bounded by $R^{\epsilon}$. \end{proof} \section{Polynomial structure lemma}\label{structure} In the white lie version of the proof, the main properties of $f_{O,\tang}$ that we need in order to bound its $L^2$--norm are the following: \begin{itemize} \item[(a)] $\sum_O \|f_{O,\tang}\|_{L^2}^2 \lesssim DR^{\delta} \|f\|_{L^2}^2$, \item[(b)] $f_O$ is the sum of some large wavepackets $f_{\theta,v}$, and $f_{O,\tang}$ is obtained by redoing the wavepacket decomposition in $B_r$ and restricting $f_{O}$ to the wavepackets tangential to some low degree algebraic surface, \begin{equation}\label{white lie equation} f_{O,\tang} = \sum_{T_{\tau,w}\in \mathbb{T}_{O,\tang}}(f_{O})_{\tau,w}. \end{equation} \end{itemize} In general it is difficult for a function to satisfy both properties.
In this section, we state a structure lemma that decomposes the function $Ef$ into functions satisfying the above properties separately. \begin{definition} Fix a large constant $d$ of size about $\log R$ and some $1\leq r\leq R$. A \emph{fat $r$--surface} $S$ is the $r^{1/2+\delta}$--neighborhood of a degree $d$ algebraic surface, which we denote $S_0$, inside a ball $B_r$ of radius $r$. \end{definition} \begin{definition}\label{tangential} Let $T_{\tau,w}$ be a tube of length $r$ and radius $r^{1/2}$. We say that $T_{\tau,w}$ is tangential to $S$ if $2T_{\tau,w}\cap S_0\neq \emptyset$ and $$ \Angle(G(\tau),T_x(S_0))\leq r^{-1/2+2\delta} $$ for any nonsingular point $x\in 10T_{\tau,w}\cap 2B_r\cap S_0$. Recall that $G(\tau)$ is the direction of the tube $T_{\tau,w}$. We define $\mathbb{T}_{S,\tang}$ as the collection of tubes $T_{\tau,w}$ tangential to $S$. We define $\mathbb{T}_{S,\trans}$ as the collection of tubes $T_{\tau,w}$ such that $2T_{\tau,w}\cap S_0\neq \emptyset$ and $T_{\tau,w}\notin \mathbb{T}_{S,\tang}$. \end{definition} \begin{lemma}\label{structure2018} If $f$ is supported in the unit disc and we consider $Ef$ inside $B_R$, then there exist a collection of disjoint cells $O$ and $n$ collections $\mathcal{S}_t$ of fat $r_t$--surfaces $S_t$, with $1 \leq t\leq n\leq \delta^{-2}$ and $r_n < \cdots< r_1$, such that \begin{equation} \|Ef\|_{BL^p(B_R)}^p \lesssim R^{O(\delta)}\sum_O \|Ef\|_{BL^p(O)}^p \end{equation} and the following properties hold: \begin{itemize} \item[(1)] Each $O$ is contained in a ball of radius $r_0\leq R^{\delta}< r_n$ and $\|Ef\|_{BL^p(O)}^p$ has about the same size for all $O$; the number of cells is greater than $D^3 R^{-O(\delta)}$. \item[(2)] Each collection $\mathcal{S}_t$, $1\leq t \leq n$, consists of more than $D_t^3 R^{-O(\delta)}$ disjoint fat $r_t$--surfaces $S_t$ with $D_t\leq R/r_t$. Every $S_t$ contains about the same number of $S_{t+1}$ for $1\leq t\leq n-1$, and every $S_n$ contains about the same number of $O$.
\item[(3)] For each $O$, there exists a containing chain $O\subseteq S_n \subseteq \cdots \subseteq S_1$ with $$Ef= Ef_O+ \sum_{t=1}^n Ef_{S_t}+ \RapDec(R)\|f\|_{L^2}$$ restricted to $O$, where the function $Ef_{S_t}$ is a sum of wavepackets tangential to $S_t$ inside $B_{r_t}$. The functions $f_O$ and $f_{S_t}$ are defined in the proof: \ref{O1}, \ref{Oj} and \ref{S1}, \ref{Sj}. \item[(4)] For each $S_t$, there exists a containing chain $ S_{t} \subseteq S_{t-1} \subseteq \cdots \subseteq S_1$ with $Ef_{\Pi_{S_t}} = Ef_{S_t} + \sum_{l=1}^{t-1} Ef_{S_l, S_t} + \RapDec(R)\|f\|_{L^2}$, where $Ef_{S_l ,S_t} := \underset{T_{\tau_t', w_t'}\in \mathbb{T}_{S_t,\tang}}{\sum} Ef_{S_l, \tau_t', w_t'}.$ \item[(5)] We have the $L^2$--bounds $\sum_{S_t} \|f_{S_t}\|_{L^2}^2 \lesssim D_t R^{\delta}\|f\|_{L^2}^2$ and $\sum_O \|f_{O}\|_{L^2}^2 \leq D R^{\delta}\|f\|_{L^2}^2$. \end{itemize} \end{lemma} \begin{proof} We apply polynomial partitioning on $Ef$ iteratively until the diameter of the cells is reduced to $R^{\delta}$. We record the tangential parts along the iteration process. \textbf{Initial Step}. We apply polynomial partitioning on $Ef$ in $B_R$ as in Subsection~\ref{polynomial partitioning}. Let $Z_1$ be the zero set of the degree $d$ partitioning polynomial and $W_1$ be the $R^{1/2}$--neighborhood of $Z_1$. If we are in the cellular case, then each cell $O_1$ lies inside a ball of radius $R_1= R/d$ and $$\sum_{O_1}\|Ef\|_{BL^p(O_1)}^p \gtrsim \|Ef\|_{BL^p(B_R)}^p,$$ $$\|Ef\|_{BL^p(O_1)}^p \lesssim d^{-3} \|Ef\|_{BL^p(B_R)}^p.$$ We define \begin{equation}\label{O1} Ef_{O_1}= \underset{T_{\theta,v} \cap O_1\neq \emptyset}{\sum} Ef_{\theta,v}, \end{equation} and we have $\sum_{O_1}\|f_{O_1}\|_{L^2}^2 \lesssim d \|f\|_{L^2}^2$ and $\|Ef\|_{BL^p(O_1)}^p \leq \|Ef_{O_1}\|_{BL^p(O_1)}^p +\RapDec(R)\|f\|_{L^2}^p$. Otherwise we are in the algebraic case, and $\|Ef\|_{BL^p(W_1)}^p\gtrsim \|Ef\|_{BL^p(B_R)}^p$.
We cover $W_1$ with balls $B_k$ of radius $R_1= R^{1-\delta}$ and we define $r_1=R_1$, $S_1= B_{k}\cap W_1$ and \begin{equation}\label{S1} Ef_{S_1} = Ef_{k,\tang}. \end{equation} One can see that $S_1$ is a fat $r_1$--surface in $B_k$. The function $Ef_{S_1}$ is a sum of wavepackets tangential to $S_1$, and $Ef_{S_1}=Ef_{\Pi_{S_1}}+\RapDec(R)\|f\|_{L^2}$. We also define $D_1=1$, $O_1=S_1$ and $Ef_{O_1}= Ef_{k, \trans}$. The function $Ef_{O_1}$ satisfies $\sum_{O_1}\|f_{O_1}\|_{L^2}^2 \lesssim \Poly(d) \|f\|_{L^2}^2$. Restricted to $O_1 \subseteq S_1$, we have $Ef= Ef_{O_1}+ Ef_{S_1} + \RapDec(R)\|f\|_{L^2}$. \textbf{Iteration Step}. Assume that we have run the polynomial partitioning for $j$ steps and have defined $O_j \subseteq B_{R_j}$ and $Ef_{O_j}, Ef_{S_1}, \dots, Ef_{S_t}$ satisfying: \begin{itemize} \item there exists a containing chain $O_j \subseteq S_t \subseteq \cdots \subseteq S_1$; restricted to each $O_j$, we have $Ef = Ef_{O_j} +Ef_{S_1}+\cdots + Ef_{S_t} +\RapDec(R)\|f\|_{L^2}$ for the containing chain; \item we have the following $L^2$--estimates: $\sum_{O_j} \|f_{O_j}\|_{L^2}^2 \lesssim d^j \Poly(d)^{t}\|f\|_{L^2}^2$ and $\sum_{S_l}\|f_{S_l}\|_{L^2}^2 \lesssim R^{\delta} D_l \|f\|_{L^2}^2$ for $1\leq l\leq t$; \item $Ef_{\Pi_{S_{l}}} =Ef_{S_l} + Ef_{S_1, S_l} +\cdots +Ef_{S_{l-1}, S_l} +\RapDec(R)\|f\|_{L^2}$ for all $1\leq l\leq t$; \item $\sum_{O_j}\|Ef\|_{BL^p(O_j)}^p \gtrsim \|Ef\|_{BL^p(B_R)}^p$ and $\|Ef\|_{BL^p(O_j)}^p \lesssim d^{-3(j-t)} \|Ef\|_{BL^p(B_R)}^p$. \end{itemize} We apply polynomial partitioning on $\|Ef\|_{BL^p(O_j)}^p$ in each $O_j$. Let $Z_{j+1}$ be the zero set of the partitioning polynomial and $W_{j+1}$ be the $R_j^{1/2}$--neighborhood of $Z_{j+1}$. We do the wavepacket decomposition of $Ef_{O_j}$ inside $B_{R_j}$: $Ef_{O_j}=\sum_{\tau_j, w_j} Ef_{O_j, \tau_j, w_j}$.
If for more than $1/2$ of the cells $O_j$ we are in the cellular case, then we keep only those $O_j$, define $Ef_{O_{j+1}}= \underset{T_{\tau_j, w_j}\cap O_{j+1}\neq \emptyset}{\sum}Ef_{O_j, \tau_j, w_j}$, and write $R_{j+1}= R_j/d$. We have the following $L^2$--estimate, $$ \sum_{O_{j+1}} \|f_{O_{j+1}}\|_{L^2}^2 \leq \sum_{O_j} d\|f_{O_j}\|_{L^2}^2 \leq d^{j+1}\Poly(d)^t \|f\|_{L^2}^2, $$ and the following $BL^p$--estimates, $$\sum_{O_{j+1}} \|Ef\|_{BL^p(O_{j+1})}^p \gtrsim \|Ef\|_{BL^p(B_R)}^p$$ and $$\|Ef\|_{BL^p(O_{j+1})}^p \lesssim d^{-3}\|Ef\|_{BL^p(O_j)}^p \lesssim d^{-3(j+1-t)}\|Ef\|_{BL^p(B_R)}^p.$$ Otherwise, for more than $1/2$ of the cells we are in the algebraic case; we keep only those $O_j$, define $R_{j+1}= R_j^{1-\delta}$, cover $W_{j+1}$ with balls of radius $R_{j+1}$, and denote $r_{t+1}=R_{j+1}$ and $S_{t+1}= O_{j+1}= W_{j+1}\cap B_{R_{j+1}}$. Here $S_{t+1}$ is a fat $r_{t+1}$--surface inside $B_{r_{t+1}}$. We define \begin{equation}\label{Oj} Ef_{O_{j+1}} =\underset{T_{\tau_j, w_j} \in \mathbb{T}_{S_{t+1}, \trans}}{\sum} Ef_{O_j, \tau_j, w_j} \end{equation} and \begin{equation}\label{Sj} Ef_{S_{t+1}}= \underset{T_{\tau_j, w_j} \in \mathbb{T}_{S_{t+1}, \tang }}{\sum} Ef_{O_j, \tau_j, w_j}. \end{equation} Restricted to each $O_{j+1}= S_{t+1}$, we have $Ef_{O_j} = Ef_{O_{j+1}} + Ef_{S_{t+1}} $, so \begin{equation}\label{decomposition on O} Ef= Ef_{O_{j+1}}+ Ef_{S_{t+1}}+\cdots +Ef_{S_1}+ \RapDec(R)\|f\|_{L^2}. \end{equation} We do the wavepacket decomposition of all the functions in equation~\ref{decomposition on O} inside $B_{R_j}$ and take only the wavepackets tangential to $S_{t+1}$; then $$Ef_{\Pi_{S_{t+1}}}= Ef_{S_{t+1}} + Ef_{S_t, S_{t+1}} +\cdots + Ef_{S_1, S_{t+1}}+\RapDec(R)\|f\|_{L^2}.$$ We define $D_{t+1}= d^{j-t}$. From the definition of $r_{t+1}$, we know that $D_{t+1}\leq R/r_{t+1}$.
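For completeness, the bound $D_{t+1}\leq R/r_{t+1}$ can be verified by counting steps: among the first $j+1$ steps there are $t+1$ algebraic steps and hence $j-t$ cellular steps; each cellular step shrinks the radius by a factor of $d$, and each algebraic step shrinks it further, so
$$r_{t+1}= R_{j+1}\ \leq\ R\, d^{-(j-t)}, \qquad \text{hence} \qquad D_{t+1}= d^{\,j-t}\ \leq\ \frac{R}{r_{t+1}}.$$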
We have the following $L^2$--estimates for $f_{O_{j+1}}$ and $f_{S_{t+1}}$: $$ \sum_{O_{j+1}} \|f_{O_{j+1}}\|_{L^2}^2 \lesssim \sum_{O_j} d\|f_{O_j}\|_{L^2}^2 \leq d^{j} \Poly(d)^{t+1} \|f\|_{L^2}^2, $$ $$\sum_{S_{t+1}}\|f_{S_{t+1}}\|_{L^2}^2 \lesssim \sum_{O_j} \sum_{S_{t+1}\subseteq O_j} \|f_{S_{t+1}}\|_{L^2}^2 \lesssim R^{\delta} \sum_{O_j}\|f_{O_j}\|_{L^2}^2 \leq R^{\delta}d^{j} \Poly(d)^t \|f\|_{L^2}^2.$$ We have as well the $BL^p$--estimates: $$\sum_{O_{j+1}}\|Ef\|_{BL^p(O_{j+1})}^p \gtrsim \|Ef\|_{BL^p(B_R)}^p$$ and $$\|Ef\|_{BL^p(O_{j+1})}^p \lesssim \|Ef\|_{BL^p(O_{j})}^p \lesssim d^{-3(j+1-(t+1))}\|Ef\|_{BL^p(B_R)}^p .$$ When $R_{j+1}\leq R^{\delta}$, we stop and define $r_0= R_{j+1}$, $O= O_{j+1}$ and $D= d^{j+1-t}$. Each algebraic step reduces the radius by a power of $1-\delta$, so the number of algebraic steps is bounded by the $n$ satisfying $R^{(1-\delta)^{n}}\leq R^{\delta}$. In particular, $n\leq \delta^{-2}$. Recall that we choose $d= \log R$, and we can choose $R$ large enough such that $\Poly(d)^{n} \leq R^{\delta^2}$. Now we have more than $D^{3}$ cells $O$ with \begin{equation}\label{lower bound of cell} \sum_{O} \|Ef\|_{BL^p(O)}^p \gtrsim \|Ef\|_{BL^p(B_R)}^p, \end{equation} and \begin{equation}\label{upper bound of cell} \|Ef\|_{BL^p(O)}^p \lesssim D^{-3}R^{\delta}\|Ef\|_{BL^p(B_R)}^p. \end{equation} So far we have verified properties (3), (4) and (5) of the lemma. In order to satisfy properties (1) and (2), we need to sort the cells $O$ and the fat $r_t$--surfaces $S_t$. We sort the cells $O$ according to $\|Ef\|_{BL^p(O)}^p$, which we denote by $\lambda$. There exists a dyadic $\lambda_0$ with about $Y_0$ cells $O$ such that $\|Ef\|_{BL^p(O)}^p \sim \lambda_0$ and $Y_0 \lambda_0 \gtrsim (\log R)^{-1} \|Ef\|_{BL^p(B_R)}^p$. We keep only those $O$. By inequality~\ref{upper bound of cell}, $Y_0 \gtrsim R^{-2\delta} D^3$.
Now that we have fixed our choice of $O$, we sort the $S_n$ and select a collection $\mathcal{S}_n$ of $S_n$ such that each $S_n\in \mathcal{S}_n$ contains about the same number of $O$, and the number of cells $O$ contained in the $S_n\in \mathcal{S}_n$ is $\gtrsim (\log R)^{-1} R^{-2\delta} D^3$. By the iteration process, each $S_n$ contains at most $(D/D_n)^3 $ cells $O$, so the number of $S_n\in \mathcal{S}_n$ is at least $$\frac{ (\log R)^{-1} R^{-2\delta} D^3 }{(D/D_n)^3} \gtrsim (\log R)^{-1}R^{-2\delta} D_n^3.$$ We sort the $S_{n-1}$ according to the number of $S_n$ contained in $S_{n-1}$, which we denote by $\lambda$. Since $1\leq \lambda\leq R^3$, there exists a dyadic number $\lambda$ such that $$|\mathcal{S}_n| \lesssim \lambda \log R \cdot \#\{S_{n-1}: S_{n-1} \text{~contains ~ about ~} \lambda \text{~fat~} r_{n}\text{--surfaces~} S_n\}.$$ We consider only those $S_{n-1}$, and we denote the collection by $\mathcal{S}_{n-1}$. By the iteration process, $\|Ef\|_{BL^p(S_{n-1})}^p\lesssim D_{n-1}^{-3} \|Ef\|_{BL^p(B_R)}^p$. The way we choose $S_{n-1}$ shows that $$\underset{S_{n-1}\in \mathcal{S}_{n-1}}{\sum} \sum_{O\subset S_{n-1}} \|Ef\|_{BL^p(O)}^p \gtrsim R^{-O(\delta)} \|Ef\|_{BL^p(B_R)}^p, $$ so the number of $S_{n-1}$ in $\mathcal{S}_{n-1}$ is greater than $R^{-O(\delta)}D_{n-1}^3$. We sort $S_{n-2}, \dots, S_1$ in the same way. In the end, we have a collection of $O$ and collections $\mathcal{S}_t$, $1\leq t\leq n$, satisfying properties (1) and (2). \end{proof} \begin{corollary}\label{linear structure2018} If $f= f_1+ f_2$ are supported in the unit disc, then for the $S_t$ and $O$ defined in Lemma~\ref{structure2018} we can define $Ef_{i, S_t}$, $Ef_{i, \Pi_{S_t}}$ and $Ef_{i,O}$, $i=1,2$, satisfying properties (3), (4), (5) in Lemma~\ref{structure2018} with $Ef_{S_t}= Ef_{1,S_t}+Ef_{2, S_t}$, $Ef_{\Pi_{S_t}}=Ef_{1, \Pi_{S_t}}+Ef_{2, \Pi_{S_t}}$ and $Ef_{O}=Ef_{1,O}+Ef_{2,O}$.
\end{corollary} \begin{proof} By the proof of Lemma~\ref{structure2018}, we can see that the construction of $Ef_O$, $Ef_{S_t}$ and $Ef_{\Pi_{S_t}}$ for $1\leq t\leq n$ is linear and depends only on $O$ and the $S_t$, $1\leq t\leq n$. \end{proof} \section{Two ends argument and some easy cases}\label{two ends} We cover $B_R$ with balls $B_k$ of radius $R^{1-\epsilon_0}$, where $\delta\ll \epsilon_0 \ll \epsilon$. For each $B_k$, we define in the next section some relation $\sim$ between each large tube $T_{\theta,v}$ and $B_k$ such that the number of balls $B_k\sim T_{\theta,v}$ for a fixed $T_{\theta,v}$ is bounded by $ O_{\delta}(1)$. For each ball $B_k$, we define $Ef_{k}^{\sim}= \underset{T_{\theta,v}\sim B_k}{\sum} Ef_{\theta,v}$ and $Ef_{k}^{\nsim}=\underset{T_{\theta,v}\nsim B_k}{\sum} Ef_{\theta,v}$. Restricted to each cell $O\subseteq B_k$, $Ef=Ef_k^{\sim}+ Ef_k^{\nsim}+ \RapDec(R)\|f\|_{L^2}$. If for most cells $O$, $\|Ef\|_{BL^p(O)}\lesssim \|Ef^{\sim}\|_{BL^p(O)}$, then we apply Lemma~\ref{two ends2018} to conclude the proof of Theorem~\ref{main induction theorem}. Otherwise, for most cells $O$, $\|Ef\|_{BL^p(O)} \lesssim \|Ef^{\nsim}\|_{BL^p(O)}$. By Corollary~\ref{linear structure2018}, restricted to $O$, we have $Ef^{\nsim} = Ef^{\nsim}_O +\sum_{t=1}^{n} Ef^{\nsim}_{S_t}.$ If for an $R^{-\delta}$--fraction of the cells $O$ the norm $\|Ef^{\nsim}\|_{BL^p(O)}$ is dominated by $\|Ef^{\nsim}_O\|_{BL^p(O)}$, then we apply the following Lemma~\ref{small r2018}. \begin{lemma}\label{small r2018} If for an $R^{-\delta}$--fraction of the cells $O$ we have $$\|Ef\|_{BL^p(O)}^p \lesssim \|Ef^{\nsim}\|_{BL^p(O)}^p \lesssim R^{\delta} \|Ef^{\nsim}_O\|_{BL^p(O)}^p,$$ then Theorem~\ref{main induction theorem} holds for $Ef$ and for all $p>3$.
\end{lemma} \begin{proof} By Lemma~\ref{structure2018} and the assumption of this lemma, for an $R^{-\delta}$--fraction of the cells $O$, \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{O(\delta)}\#\{O\} \|Ef\|_{BL^p(O)}^p \\ &\lesssim R^{O(\delta)}\#\{O\} \|Ef^{\nsim}_{O}\|_{BL^p(O)}^p \\ &\lesssim R^{O(\delta)} r_0^{O(1)} \#\{O\} \|f^{\nsim}_O\|_{L^2}^p. \end{align*} Since each cell $O$ lies inside a ball of radius $r_0\leq R^{\delta}$, $r_0^{O(1)}\lesssim R^{O(\delta)}$. By Corollary~\ref{linear structure2018}, $Ef^{\nsim}_{O}$ satisfies property (5) of Lemma~\ref{structure2018}, $\sum_{O}\|f^{\nsim}_{O}\|_{L^2}^2 \lesssim DR^{\delta}\|f\|_{L^2}^2.$ There exists a cell $O$ such that $\|f^{\nsim}_{O}\|_{L^2}^2 \lesssim D R^{2\delta} (\#\{O\})^{-1}\|f\|_{L^2}^2$ and $$\|Ef^{\nsim}_{O}\|_{BL^p(O)}^p \gtrsim R^{-\delta}\|Ef^{\nsim}\|_{BL^p(O)}^p \gtrsim R^{-\delta} \|Ef\|_{BL^p(O)}^p.$$ By Lemma~\ref{structure2018}, $\#\{O\}$ is greater than $D^3 R^{-\delta}$, so \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{O(\delta)}\#\{O\} \|f^{\nsim}_{O}\|_{L^2}^p \\ &\lesssim R^{O(\delta)} \#\{O\}^{1-p/2} D^{p/2}\|f\|_{L^2}^p\\ &\lesssim R^{O(\delta)} D^{3-p} \|f\|_{L^2}^p . \end{align*} When $p>3$, the constant term is bounded by $R^{\epsilon}$. \end{proof} If for an $r_t^{-\delta}$--fraction of the cells $O$ the norm $\|Ef\|_{BL^p(O)}^p$ is dominated by $r_t^{\delta}\|Ef^{\nsim}_{S_t}\|_{BL^p(O)}^p$ with $r_t\geq R^{13/16}$, then we apply the following Lemma~\ref{large r2018}. \begin{lemma}\label{large r2018} If for an $r_t^{-\delta}$--fraction of the cells $O$ we have $$\|Ef\|_{BL^p(O)}^p\lesssim \|Ef^{\nsim}\|_{BL^p(O)}^p \lesssim r_t^{\delta}\|Ef^{\nsim}_{S_t}\|_{BL^p(O)}^p $$ for some $S_t$ with $r_t\geq R^{13/16}$, then Theorem~\ref{main induction theorem} holds for $Ef$ for any $p> 3+ 3/13$.
\end{lemma} \begin{proof} We apply Lemma~\ref{structure2018} and the assumption of this lemma, \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{O(\delta)}\sum_O \|Ef\|_{BL^p(O)}^p \lesssim R^{O(\delta)} \sum_O \|Ef^{\nsim}_{S_t}\|_{BL^p(O)}^p\\ &\lesssim R^{O(\delta)} \sum_{S_t\in \mathcal{S}_t} \|Ef^{\nsim}_{S_t}\|_{BL^p(S_t)}^p \\ &\lesssim R^{O(\delta)}r_t^{\frac{5}{2}-\frac{3p}{4}}\sum_{S_t\in \mathcal{S}_t}\|f^{\nsim}_{S_t}\|_{L^2}^p. \end{align*} Since $f^{\nsim}_{S_t}$ is tangential to $S_t$, by Lemma~\ref{geometric larry}, \begin{equation}\label{L2estimate1} \|f^{\nsim}_{S_t}\|_{L^2}^2\lesssim r_t^{-1/2+O(\delta)} \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^2. \end{equation} We apply Lemma~\ref{L2 orthogonality for nsim}, \begin{equation}\label{L2estimate2} \sum_{S_t}\|f^{\nsim}_{S_t}\|_{L^2}^2\lesssim D_tR^{\delta}\|f\|_{L^2}^2. \end{equation} We apply inequality~\ref{L2estimate1} and inequality~\ref{L2estimate2}, \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{O(\delta)}r_t^{\frac{5}{2}-\frac{3p}{4}}\sum_{S_t\in \mathcal{S}_t}\|f^{\nsim}_{S_t}\|_{L^2}^p \\ &\lesssim R^{O(\delta)}r_t^{\frac{5}{2}-\frac{3p}{4}-\frac{p-2}{4}}\sum_{S_t\in \mathcal{S}_t}\|f^{\nsim}_{S_t}\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}. \end{align*} Since $D_t\leq R/r_t \leq r_t^{3/13}$ by the assumption that $r_t\geq R^{13/16}$, when $p> 3+ 3/13$ the constant is bounded by $R^{O(\delta)}$. \end{proof} Otherwise, we choose the smallest $t$ such that for an $R^{-\delta}$--fraction of the cells $O$, \begin{equation}\label{assumption12018} \|Ef^{\nsim}\|_{BL^p(O)} \leq r_t^{\delta}\|Ef^{\nsim}_{S_t}\|_{BL^p(O)}, \end{equation} and \begin{equation}\label{assumption22018} \|Ef^{\nsim}\|_{BL^p(O)}\geq r_l^{\delta} \|Ef^{\nsim}_{S_l}\|_{BL^p(O)}, \text{~for ~all~} l<t.
\end{equation} We observe that $Ef^{\nsim}_{\Pi_{S_t}}$ satisfies property (b) at the beginning of Section~\ref{structure}, which enables us to use the broom structure and the bush structure. Using inequalities~\ref{assumption12018} and \ref{assumption22018}, we can show that $Ef^{\nsim}_{\Pi_{S_t}}$ and $Ef^{\nsim}_{S_t}$ have about the same $BL^p$--norm on most of the cells $O$. \begin{lemma}\label{twin function2018} For the $t$ satisfying inequality~\ref{assumption12018} and inequality~\ref{assumption22018}, we have $\|Ef^{\nsim}_{\Pi_{S_t}}\|_{BL^p(O)}\sim \|Ef^{\nsim}_{S_t}\|_{BL^p(O)}$ for most of the cells $O$. \end{lemma} \begin{proof} By property (4) of Lemma~\ref{structure2018} and Corollary~\ref{linear structure2018}, $Ef^{\nsim}_{\Pi_{S_t}} = Ef^{\nsim}_{S_t} + \sum_{l=1}^{t-1} Ef^{\nsim}_{S_l ,S_t}$. From assumption~\ref{assumption12018} and assumption~\ref{assumption22018}, we know that for a typical cell $O$, $\|Ef^{\nsim}_{S_l}\|_{BL^p(O)}^p \leq r_l^{-\delta}r_t^{\delta} \|Ef^{\nsim}_{S_t}\|_{BL^p(O)}^p$ for all $l<t$. We show in Lemma~\ref{two tangential} that $$\|Ef^{\nsim}_{S_l, S_t}\|_{BL^p(O)} \leq \|Ef^{\nsim}_{S_l}\|_{BL^p(O)}.$$ Since $l<t$, $r_l^{1-\delta}> r_t >R^{\delta}$ and $t< \delta^{-2}$, all the $Ef^{\nsim}_{S_l, S_t}$ are negligible compared to $Ef^{\nsim}_{S_t}$, and the triangle inequality finishes the proof. \end{proof} \begin{lemma}\label{two tangential} Let $S_1$ be a fat $r_1$--surface in $B_{r_1}$ and $S_2$ a fat $r_2$--surface in $B_{r_2}$, with $B_{r_1}\subseteq B_{r_2}$ and $r_2^{1-\delta}\geq r_1$. Assume that $Ef$ is tangential to $S_2$ in $B_{r_2}$. We decompose $Ef|_{S_1}=Ef_{\tang}+Ef_{\trans}$ into components tangential and transverse to $S_1$.
The following estimates hold for any ball $B\subseteq S_1$ of radius $K$, where $K$ is the same as in the definition of the $BL^p$--norm~\ref{broad norm}, \begin{itemize} \item$\|Ef_{\tang}\|_{BL^p_A(B)}^p \leq \|Ef\|_{BL^p_A(B)}^p$, \item$\|Ef_{\trans}\|_{BL^p_A(B)}^p \leq \|Ef\|_{BL^p_A(B)}^p$. \end{itemize} \end{lemma} \begin{proof} From the definition of fat $r$--surface, we know that $S_j$ is the $r_j^{1/2+\delta}$--neighborhood of a degree $d$ algebraic surface $S_{j,0}$, $j=1, 2$. For any smooth point $z_j\in (B+B_{r_j^{1/2}}(0))\cap S_{j,0}$, let $\Sigma_j$ be the tangent plane of $S_{j,0}$ at $z_j$, $j=1,2$. By the definition of $Ef_{\tang}$, for any $T_{\theta_j, v_j}\in \mathbb{T}_{S_j, \tang}$ with $T_{\theta_j, v_j}\cap B\neq \emptyset$, we have $$\Angle(G(\theta_j), \Sigma_j)\lesssim r_j^{-1/2+2\delta} \leq r_1^{-1/2+2\delta}.$$ When $\Angle(\Sigma_1, \Sigma_2)\geq K r_1^{-1/2+2\delta}$, the directions of wavepackets in $Ef_{\tang}$ in $B$ can be covered by two caps of radius $K^{-1}$. Hence $\|Ef_{\tang}\|_{BL_A^p(B)}^p=0$, and $\|Ef\|_{BL^p_A(B)}^p\geq \|Ef_{\trans}\|_{BL^p_A(B)}^p$. When $\Angle(\Sigma_1, \Sigma_2)\leq r_1^{-1/2+2\delta}$, we have $Ef=Ef_{\tang}$ in $B$, so $\|Ef\|_{BL^p_A(B)}^p=\|Ef_{\tang}\|_{BL^p_A(B)}^p$. The directions parallel to $\Sigma_j$ can be represented as circles $\mathcal{C}_j$ in the unit sphere in $\mathbb{R}^3$. We decompose $\mathcal{C}_2$ into a tangential part, $\mathcal{C}_2 \cap N_{r_1^{-1/2}}\mathcal{C}_1$, and its complement in $\mathcal{C}_2$, the transversal part. The tangential part contains the directions of wavepackets tangential to $S_1$ and passing through $B$. When $r_1^{-1/2+2\delta}\leq \Angle(\Sigma_1, \Sigma_2)\leq K r_1^{-1/2+2\delta}$, we cover $\mathcal{C}_2$ with caps $\{\alpha\}$ of radius $K^{-1}$. A cap $\alpha$ lies in either the tangential part or the transversal part of $\mathcal{C}_2$.
By the definition of the $BL^p$--norm, $$\mu_{Ef}(B_{K}) := \underset{V_1, \dots, V_A : \text{~lines of~} \mathbb{R}^3}{\min}\big( \underset{\tau: \Angle(G(\tau), V_a)\geq K^{-1} \text{~for all~} a}{\max}\int_{B_{K}} |Ef_{\tau}|^p \big).$$ The quantity $\mu_{Ef}(B)=\|Ef\|_{BL^p_A(B)}^p$ takes the $(A+1)$-th largest value of $\int_{B} |Ef_{\alpha}|^p$. Since a cap $\alpha$ belongs either to the tangential part or to the transversal part, \begin{equation}\label{change of A} \|Ef\|_{BL^p_A(B)}^p\geq \max (\|Ef_{\tang}\|_{BL^p_A(B)}^p, \|Ef_{\trans}\|_{BL^p_A(B)}^p). \end{equation} \end{proof} Recall that in the definition of the broad $L^p$--norm, we have an underlying $A$: $$\mu_{Ef}(B_{K}) := \underset{V_1, \dots, V_A : \text{~lines of~} \mathbb{R}^3}{\min}\big( \underset{\tau: \Angle(G(\tau), V_a)\geq K^{-1} \text{~for all~} a}{\max}\int_{B_{K}} |Ef_{\tau}|^p \big).$$ The $A$ changes from line to line because of the subadditivity property: $$\|Ef+Eg\|_{BL^p_{A_1+A_2}(B_K)} \leq \|Ef\|_{BL^p_{A_1}(B_K)}+\|Eg\|_{BL^p_{A_2}(B_K)}.$$ Every time we use the triangle inequality for the broad $L^p$--norm, we need to reduce $A$. In inequality~\ref{change of A}, both sides have the same $A$ in the $BL^p$--norm: $\|Ef\|_{BL^p_A(B)}^p \geq \|Ef_{\tang}\|_{BL^p_A(B)}^p$. In order to deal with the change of $A$, it suffices to take assumptions~\ref{assumption12018} and \ref{assumption22018} as $$ \|Ef^{\nsim}\|_{BL_A^p(O)} \leq r_t^{\delta}\|Ef^{\nsim}_{S_t}\|_{BL^p_{A_t}(O)}$$ and $$ \|Ef^{\nsim}\|_{BL^p_A(O)}\geq r_l^{\delta} \|Ef^{\nsim}_{S_l}\|_{BL^p_{A_l}(O)}, \text{~for ~all~} l<t$$ with $A_l \geq 2^l A_{l-1}$ and $A\gg A_t$.
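Since $t<\delta^{-2}$, the growth condition $A_l\geq 2^lA_{l-1}$ only ever multiplies $A$ by a constant depending on $\delta$: iterating it gives
\[
A_t \;\geq\; 2^{t}A_{t-1}\;\geq\;\cdots\;\geq\; 2^{t+(t-1)+\dots+1}A_0 \;=\; 2^{t(t+1)/2}A_0\,,\qquad t<\delta^{-2},
\]
so choosing the initial $A\gg 2^{\delta^{-4}}A_0$ accommodates every use of the subadditivity property, uniformly in $R$.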
\section{Estimate about $L^2$--norm}\label{estimate} In this section, we discuss the case when, for most of the cells $O$, $\|Ef\|_{BL^p(O)}^p\lesssim \|Ef^{\nsim}\|_{BL^p(O)}^p$ and there exists a $t$ satisfying inequalities~\ref{assumption12018} and \ref{assumption22018}, so that \begin{equation}\label{last assumption} \|Ef\|_{BL^p(O)} \lesssim \|Ef^{\nsim}\|_{BL^p(O)} \lesssim\|Ef^{\nsim}_{S_t}\|_{BL^p(O)}\sim \|Ef^{\nsim}_{\Pi_{S_t}}\|_{BL^p(O)}. \end{equation} The main lemmas we prove in this section are the following. \begin{lemma}\label{last case for large r2018} If for most of the cells $O$, $$\|Ef\|_{BL^p(O)} \lesssim \|Ef^{\nsim}\|_{BL^p(O)} \lesssim \|Ef^{\nsim}_{S_t}\|_{BL^p(O)}\sim \|Ef^{\nsim}_{\Pi_{S_t}}\|_{BL^p(O)}$$ and $r_t\geq R^{1/2}$, then for $p> 3+3/13$, $$\|Ef\|_{BL^p(B_R)}^p \lesssim R^{O(\epsilon_0)} \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}.$$ \end{lemma} \begin{lemma}\label{small r small D2018} If for most of the cells $O$, $$\|Ef\|_{BL^p(O)} \lesssim \|Ef^{\nsim}\|_{BL^p(O)} \lesssim \|Ef^{\nsim}_{S_t}\|_{BL^p(O)}\sim \|Ef^{\nsim}_{\Pi_{S_t}}\|_{BL^p(O)}$$ and $r_t\leq R^{1/2}$, then for $p > 3+ 1/5$, $$\sum_{S_t\in \mathcal{S}_t}\sum_{O\subseteq S_t} \|Ef^{\nsim}_{S_t}\|_{BL^p(O)}^p \lesssim R^{O(\epsilon_0)} \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}.$$ \end{lemma} Recall that in the white lie proof, the key fact is that the $L^2$--norm of $Ef_{\tang}$ near a plane $\Sigma$ is small unless the large wavepackets are organized into large brooms rooted at $\Sigma$. After the polynomial partitioning iteration, we obtain collections of fat $r_t$--surfaces $S_t$. Lemma~\ref{plane} in the next subsection says that we can treat $S_t$ as a thin neighborhood of at most $O(r_t^{O(\delta)})$ planes if we fix a direction of wavepackets.
\subsection{Planes}\label{plane section} Let $Z$ be a smooth degree $d$ algebraic surface and let $S$ be the $R^{1/2+\delta}$--neighborhood of $Z$ in $B_R$, which is by definition a fat $R$--surface. Let $Ef=Ef_{\tang}$ be a function tangential to $S$ in $B_R$. \begin{definition} We define $\mathbb{T}_{S, \tang,ess}$ as the subcollection of $\mathbb{T}_{S, \tang}$ containing the large tubes $T_{\theta,v}$ satisfying: there exists another $T_{\theta', v'} \in \mathbb{T}_{S, \tang}$ such that \begin{itemize} \item $T_{\theta,v}\cap T_{\theta',v'}\cap S\neq \emptyset$ and \item $\Angle(G(\theta), G(\theta'))\geq K^{-1}$. \end{itemize} We denote such a $T_{\theta',v'}$ by $T_{\theta,v}^*$. For a fixed $T_{\theta,v}$, there might be multiple choices of $T_{\theta,v}^*$. \end{definition} We define $Ef_{\tang, ess}=\underset{T_{\theta,v}\in \mathbb{T}_{S, \tang, ess}}{\sum} Ef_{\theta,v}$. By the definition of the broad $L^p$--norm, $Ef_{\tang, ess}$ essentially represents $Ef_{\tang}$ in the sense that $$\|Ef_{\tang}\|_{BL^p(B_{\rho}\cap W)}^p \leq \|Ef_{\tang, ess}\|_{BL^p(B_{\rho}\cap W)}^p + \RapDec(R)\|f\|_{L^2}^p.$$ From now on, we write $Ef_{\tang}= Ef_{\tang,ess}$ and $\mathbb{T}_{S, \tang}=\mathbb{T}_{S, \tang, ess}$. Fixing a direction $\theta$, we show that the tubes $T_{\theta,v}$ carrying the wavepackets $Ef_{\theta,v}$ from $\theta$ are tangential to at most $O(R^{O(\delta)})$ planes. \begin{lemma}\label{plane} Let $Ef_{\tang}$ be as in the previous three paragraphs. Fix a cap $\theta\subseteq \text{supp}f_{\tang}$. There exist at most $O_d(R^{O(\delta)})$ planes such that every $T_{\theta,v}\in \mathbb{T}_{S,\tang}$ is $R^{-1/2+\delta}$--tangential to one of the planes. \end{lemma} \begin{proof} We choose two unit vectors $\mathbf{v}_1$ and $\mathbf{v}_2$, such that $\mathbf{v}_1\perp \mathbf{v}_2$ and they are both orthogonal to $G(\theta)$ (up to an $R^{-1/2}$ angle difference). Let $\mathbb{T}_{\theta}$ denote the collection of $T_{\theta,v}$ in $\mathbb{T}_{S, \tang}$.
We define $\mathbb{T}_{\theta,i}$, $i=1,2$, as the collection of tubes $T_{\theta,v}$ such that there exists another $T_{\theta',v'}\in \mathbb{T}_{S,\tang}$ with $|G(\theta')\wedge G(\theta)\wedge \mathbf{v}_i|\geq K^{-1}$ and $2T_{\theta,v}\cap 2T_{\theta',v'}\cap S\neq \emptyset$. Since $\mathbf{v}_1$, $\mathbf{v}_2$ and $G(\theta)$ are pairwise orthogonal, $$\mathbb{T}_{\theta}\subseteq \mathbb{T}_{\theta, 1}\cup \mathbb{T}_{\theta,2}.$$ We show that tubes in $\mathbb{T}_{\theta,1}$ can be covered by the $R^{1/2+\delta}$--neighborhood of at most $R^{O(\delta)}$ planes. The same argument applies to $\mathbb{T}_{\theta,2}$. We consider the projection $\Pi_{\mathbf{v}_1}$ along $\mathbf{v}_1$ to the plane $\Sigma_{\mathbf{v}_1}$ that is perpendicular to $\mathbf{v}_1$. We would like to decompose $Z$ into at most $\Poly(d)$ pieces $Z_j$, such that the projection $\Pi_{\mathbf{v}_1}$ on $Z_j$ is injective. We consider the set of planes $\Sigma_{\mathbf{v}_2,t}$ perpendicular to $\mathbf{v}_2$, parametrized by the coordinate $t$ along the $\mathbf{v}_2$ direction. By our choice of $\mathbf{v}_1$ and $\mathbf{v}_2$, the planes $\Sigma_{\mathbf{v}_2,t}$ are parallel to $G(\theta)$ and $\mathbf{v}_1$. We define $Z_{\mathbf{v}_1}$ as the set of points $p$ in $Z$ such that $T_pZ$ contains the direction $\mathbf{v}_1$. $Z_{\mathbf{v}_1}$ is an algebraic curve of degree at most $\Poly(d)$. We color $Z_{\mathbf{v}_1}$ in red and blue. For any point $p\in Z_{\mathbf{v}_1}$, we color it red if $p$ is singular or if the tangent direction $T_{p} Z_{\mathbf{v}_1}$ satisfies $\Angle( T_{p} Z_{\mathbf{v}_1}, \Sigma_{\mathbf{v}_2, t})\leq R^{-1/2+10\delta}$; otherwise we color it blue. There are at most $\Poly(d)$ singular points on $Z_{\mathbf{v}_1}$ for a generic $\mathbf{v}_1$. We decompose $Z_{\mathbf{v}_1}$ into connected components, such that the points on each component have the same color. We claim that there are at most $\Poly(d)$ red components.
The end points of red components are the points $p$ such that $\Angle( T_{p} Z_{\mathbf{v}_1}, \Sigma_{\mathbf{v}_2, t})= R^{-1/2+10\delta}$. In fact, we can even choose the number $R^{-1/2+10\delta}$ generically; for example, we can choose $2R^{-1/2+10\delta}$ instead of $R^{-1/2+10\delta}$. There are at most $\Poly(d)$ points with $\Angle( T_{p} Z_{\mathbf{v}_1}, \Sigma_{\mathbf{v}_2, t})= R^{-1/2+10\delta}$. For each red component $Z_{\mathbf{v}_1, red}$ of $Z_{\mathbf{v}_1}$, we can cover it with the $R^{1/2+O(\delta)}$--neighborhood of a plane $\Sigma_{\mathbf{v}_2, t}$ for some $t$. Indeed, everything takes place inside $B_R$, and the red component makes an angle of at most $R^{-1/2+10\delta}$ with $\Sigma_{\mathbf{v}_2,t}$, so the red component is trapped inside the $R^{1/2+O(\delta)}$--neighborhood of some plane $\Sigma_{\mathbf{v}_2,t}$. We also add two planes $\Sigma_{\mathbf{v}_2, t_A}$ and $\Sigma_{\mathbf{v}_2, t_B}$ that bound the region containing $B_R$. There exist $\Poly(d)$ planes $\Sigma_{\mathbf{v}_2, t_1}, \dots , \Sigma_{\mathbf{v}_2, t_M}$, $M\leq \Poly(d)$, such that their $R^{1/2+O(\delta)}$--neighborhoods cover all red components $Z_{\mathbf{v}_1, red}$. We remove those $R^{1/2+O(\delta)}$--neighborhoods of planes from $Z$, and let $Z_0$ denote the remaining part. We also remove all tubes $T_{\theta,v}$ intersecting those $R^{1/2+O(\delta)}$--neighborhoods from $\mathbb{T}_{\theta,1}$. Since $\Sigma_{\mathbf{v}_2, t}$ is parallel to $G(\theta)$, the removed tubes $T_{\theta,v}$ are $R^{-1/2+O(\delta)}$--tangential to one of the planes $\Sigma_{\mathbf{v}_2, t_1}, \dots , \Sigma_{\mathbf{v}_2, t_M}$. We observe that all the local extremum points of $Z_{\mathbf{v}_1}$ in the $\mathbf{v}_2$ direction are covered by those neighborhoods. The blue components $Z_{\mathbf{v}_1, blue}$ of $Z_{\mathbf{v}_1}$ cut $Z_0$ into $\Poly(d)$ connected components $Z_j$. We claim that $\Pi_{\mathbf{v}_1}$ restricted to each $Z_j$ is injective.
Each component $Z_j$ is bounded by two planes $\Sigma_{\mathbf{v}_2, t_a}$ and $\Sigma_{\mathbf{v}_2, t_b}$ and some $Z_{\mathbf{v}_1, blue}$. The curve $Z_{\mathbf{v}_1}$ intersects $\Sigma_{\mathbf{v}_2, t_a}$ in at most $\Poly(d)$ points: $p_1, \dots, p_m$. Furthermore, we know that $\Angle( T_{p_l} Z_{\mathbf{v}_1}, \Sigma_{\mathbf{v}_2, t})\geq R^{-1/2+10\delta}$ for $1\leq l\leq m$. Let $Z_{t}$ denote the intersection of $Z$ and $\Sigma_{\mathbf{v}_2,t}$; $Z_t$ is a smooth curve of degree $d$. The points $\{p_l\}$ decompose $Z_{t_a}$ into $\Poly(d)$ components $Z_{t,k}$, each bounded by some points $p_{l_1}$ and $p_{l_2}$. The projection $\Pi_{\mathbf{v}_1}$ restricted to each $Z_{t, k}$ is injective. When we move the plane $\Sigma_{\mathbf{v}_2, t}$ from $t_a$ to $t_b$, each point $p_l$ has a unique trajectory. In particular, the number of points in $Z_{\mathbf{v}_1}\cap \Sigma_{\mathbf{v}_2, t}$ for $t$ between $t_a$ and $t_b$ stays the same. The component $Z_j$ is bounded by the trajectories of some $p_{l_1}$ and $p_{l_2}$. In particular, the projection $\Pi_{\mathbf{v}_1}$ restricted to $Z_j$ is injective, since $\Pi_{\mathbf{v}_1}$ is injective on each $Z_j\cap \Sigma_{\mathbf{v}_2,t}$ for $t$ between $t_a$ and $t_b$. Now we project $Z_j$ to $\Sigma_{\mathbf{v}_1}$, the plane perpendicular to $\mathbf{v}_1$. We consider the set of tubes \begin{equation}\label{Tj} \mathbb{T}_j :=\{T_{\theta,v}: \exists T_{\theta,v}^* \text{~such ~that~} T_{\theta,v} \cap T_{\theta,v}^*\cap Z_j\neq \emptyset\}. \end{equation} We claim that for any $T_{\theta,v}\in \mathbb{T}_j$, $T_{\theta,v}\cap \partial Z_j=\emptyset$. In other words, the projection image $\Pi_{\mathbf{v}_1}(T_{\theta,v})$ is entirely contained in $\Pi_{\mathbf{v}_1}(Z_j)$.
If $T_{\theta,v}\cap \partial Z_j\neq\emptyset$, then $T_{\theta,v}\cap Z_{\mathbf{v}_1, blue}\neq \emptyset$, because we have removed all tubes $T_{\theta,v}$ intersecting the $R^{1/2+O(\delta)}$--neighborhoods of $\Sigma_{\mathbf{v}_2, t_a}$ and $ \Sigma_{\mathbf{v}_2, t_b}$. Let $p\in T_{\theta,v}\cap Z_{\mathbf{v}_1, blue}$; then the tangent plane $T_{p}Z$ contains the direction $\mathbf{v}_1$ and $T_{p} Z_{\mathbf{v}_1, blue}$. From the definition of tangential tube, we know that $\Angle(T_{p}Z, G(\theta))\leq R^{-1/2+2\delta}$. We have a contradiction because $\Angle( T_{p} Z_{\mathbf{v}_1, blue}, \Sigma_{\mathbf{v}_2, t})\geq R^{-1/2+10\delta}$. We apply Lemma~\ref{tubes in T} to conclude the proof. \end{proof} \begin{lemma}\label{tubes in T} Tubes in $\mathbb{T}_j$ defined in~\ref{Tj} are $R^{-1/2+\delta}$--tangential to at most $O(R^{O(\delta)})$ planes. \end{lemma} \begin{proof} Fix a ball $B_{\rho}$ of radius $\rho= R^{1-\delta}$ with $B_{\rho}\cap S\neq \emptyset$. Let $\mathbb{T}_{B_{\rho}, \theta}$ be the collection of tubes satisfying $T_{\theta,v}\in \mathbb{T}_{j}$ and there exists $T_{\theta,v}^*$ such that $T_{\theta,v}\cap T_{\theta,v}^* \cap B_{\rho}\neq \emptyset$. Since there are $R^{O(\delta)}$ balls $B_{\rho}$, it suffices to prove the lemma for each $B_{\rho}$. From now on, we write $\mathbb{T}_j = \mathbb{T}_{B_{\rho}, \theta}$. On $\Sigma_{\mathbf{v}_1}$, take a line segment $I$ of length $R^{1-\delta}$ centered at the center of $\Pi_{\mathbf{v}_1}(B_{\rho})$, with direction orthogonal to $G(\theta)$. For any $T_{\theta,v}\in \mathbb{T}_{B_{\rho},\theta}$, $\Pi_{\mathbf{v}_1}(T_{\theta,v}) \cap I \neq \emptyset$. To simplify the notation, let $T=T_{\theta,v}^*$. Let $I_{T}$ be the projection of $\Pi_{\mathbf{v}_1}(T)$ along $G(\theta)$ to the line containing $I$. The line segment $I_{T}$ has length at least $R^{1-2\delta}$.
For any $T_{\theta,v'}\in \mathbb{T}_{j}$, if the projection image $\Pi_{\mathbf{v}_1}(T_{\theta,v'})\cap I_T\neq \emptyset$, then $T_{\theta,v'}\cap T\neq \emptyset$. Since $I_{T}$ has length at least $R^{1-2\delta}$, we can choose at most $O(R^{\delta})$ tubes $T$ such that the union of the $I_T$ covers $\underset{T_{\theta,v}\in \mathbb{T}_{B_{\rho},\theta}}{\bigcup} \Pi_{\mathbf{v}_1}(T_{\theta,v})\cap I$. Since $\Pi_{\mathbf{v}_1}$ is injective on $Z_j$, $T_{\theta,v}\cap T\neq \emptyset$ and $\Angle(T, T_{\theta,v})>K^{-1}$, all tubes $T_{\theta,v}\in \mathbb{T}_j$ intersecting $T$ are $R^{-1/2+\delta}$--tangential to the plane spanned by $T$ and the direction $G(\theta)$. We have at most $O(R^{\delta})$ such planes. \end{proof} \begin{corollary}\label{plane for tau} Let $\mathbb{T}_{S, \tang}$ be the collection of tubes tangential to a fat $r$--surface $S$ as defined in Definition~\ref{tangential}. Then for each cap $\tau$ of radius $r^{-1/2}$, there exist at most $O_d(r^{O(\delta)})$ planes whose $r^{1/2}$--neighborhoods contain all $T_{\tau,w}\in \mathbb{T}_{S, \tang}$ in the direction $G(\tau)$. \end{corollary} \subsection{Brooms}\label{broom section} We define brooms when $r_t\geq R^{1/2}$. Fix a fat $r_t$--surface $S_t$ and a cap $\tau_t$ of radius $r_t^{-1/2}$. By Lemma~\ref{plane} and Corollary~\ref{plane for tau}, there are at most $r_t^{O(\delta)}$ planes $\Sigma_t$ such that all tubes $T_{\tau_t,w_t}\in \mathbb{T}_{S_t,\tang}$ in the direction of $\tau_t$ are tangential to one of the $\Sigma_t$. Let $\Omega_{S_t, \tau_t}$ denote the collection of those planes $\Sigma_t$; we have $|\Omega_{S_t,\tau_t}|\lesssim r_t^{O(\delta)}$. We also use $N\Sigma_t$ to denote the $r_t^{1/2}$--neighborhood of a plane $\Sigma_t$.
\begin{lemma}\label{surface to plane} For each $\tau_t$ of radius $r_t^{-1/2}$ and each fat $r_t$--surface $S_t$, \begin{equation} \|Ef^{\nsim}_{\Pi_{S_t},\tau_t}\|_{L^2(B_{r_t})}^2 \leq \sum_{ \Sigma_t \in \Omega_{S_t,\tau_t}} \|Ef^{\nsim}_{\Pi_{S_t},\tau_t}\|_{L^2(N\Sigma_t)}^2. \end{equation} \end{lemma} \begin{proof} This is a consequence of Lemma~\ref{plane}. \end{proof} \begin{definition}\label{broom definition for each t} Let $t$ be such that $R^{1/2}\leq r_t\leq R^{13/16}$ (the case when $r_t\geq R^{13/16}$ was treated in Lemma~\ref{large r2018}) and let $\tau_t$ be a cap of radius $r_t^{-1/2}$. For each $\Sigma_t\in \Omega_{S_t,\tau_t}$, we define a broom $\mathcal{B}_t$ rooted at $\Sigma_t$ as in Definition~\ref{broom definition} with $r=r_t$, $\Sigma=\Sigma_t$. \end{definition} Recall that in the special case in Subsection~\ref{broom analysis}, the brooms rooted at $\Sigma$ have about the same size, and each wavepacket from those brooms belongs to about the same number of brooms of about the same size. The function $\chi(T_{\theta,v},\Sigma)$ characterizes this uniform broom structure. In general, we obtain an approximation of this function $\chi$ through several steps of dyadic pigeonholing. Let $\Gamma_{t,b}$ denote the set of numbers $\{1, (\frac{R}{r_t})^{100\delta}, (\frac{R}{r_t})^{200\delta},\dots, (\frac{R}{r_t})^{1/2}\}$ and let $\Gamma_{t,\gamma}$ denote the set of numbers $\{1, (\frac{R}{r_t})^{100\delta}, (\frac{R}{r_t})^{200\delta}, \dots, (\frac{R}{r_t})^{100N\delta}= R\}$. Since we consider only the case when $r_t\leq R^{13/16}$, $N$ is bounded by $O(1/\delta)$, which is independent of $R$. We decompose the unit sphere in $\mathbb{R}^3$ into caps $\alpha$ of radius $1/100$ representing the normal directions of the planes $\Sigma$. Let $\Omega_{\tau_t}$ denote the collection of planes $\Sigma_t\in \Omega_{S_t, \tau_t}$ for all $S_t\in \mathcal{S}_t$ in Lemma~\ref{structure2018}.
For each $b_1\in \Gamma_{t,b}$ and each $\Sigma_t\in \Omega_{S_t,\tau_t}$, we define $\chi_{\alpha, t, b_1}(T_{\theta,v}, \Sigma_t)=1$ if $T_{\theta,v}$ belongs to a broom ${\mathcal B}_t$ satisfying: \begin{itemize} \item the normal direction of $\Sigma_t$ belongs to the cap $\alpha$, \item $b_1 \leq |{\mathcal B}_t| \leq (\frac{R}{r_t})^{100\delta }b_1$, \item $T_{{\mathcal B}_t}\subseteq B_{r_t} \cap N_{r_t^{1/2}}\Sigma_t$, where $N_{r_t^{1/2}}\Sigma_t$ means the $r_t^{1/2}$--neighborhood of $\Sigma_t$ and $B_{r_t}$ is the ball containing $S_t$; \end{itemize} otherwise $\chi_{\alpha, t,b_1}(T_{\theta,v}, \Sigma_t)=0$. For each $\gamma_1\in \Gamma_{t,\gamma}$, we define $\chi_{\alpha, t, b_1, \gamma_1}(T_{\theta,v}, \Sigma_t)=1$ if \begin{itemize} \item $\chi_{\alpha, t, b_1}(T_{\theta,v}, \Sigma_t)=1$, \item $\gamma_1 \leq \underset{\Sigma_t'\in \Omega_{\tau_t}}{\sum} \chi_{\alpha, t, b_1}(T_{\theta,v}, \Sigma_t') \leq (\frac{R}{r_t})^{100\delta }\gamma_1$; \end{itemize} otherwise $\chi_{\alpha, t, b_1, \gamma_1}(T_{\theta, v}, \Sigma_t)=0$. For each $b_2\leq b_1$, we define $\chi_{\alpha, t, b_1,\gamma_1,b_2}(T_{\theta,v},\Sigma_t)=1$ if $T_{\theta,v}$ belongs to a broom ${\mathcal B}_t$ satisfying: \begin{itemize} \item $\chi_{\alpha, t, b_1, \gamma_1}(T_{\theta,v}, \Sigma_t)=1$, \item $b_2\leq \underset{T_{\theta',v'}\in {\mathcal B}_t}{\sum}\chi_{\alpha, t, b_1, \gamma_1}(T_{\theta',v'},\Sigma_t) \leq (\frac{R}{r_t})^{100\delta} b_2$; \end{itemize} otherwise $\chi_{\alpha,t, b_1,\gamma_1, b_2}(T_{\theta,v}, \Sigma_t)=0$.
For each $\gamma_2\leq \gamma_1$, we define $\chi_{\alpha, t, b_1,\gamma_1,b_2,\gamma_2}(T_{\theta,v}, \Sigma_t)=1$ if \begin{itemize} \item $\chi_{\alpha, t, b_1,\gamma_1,b_2}(T_{\theta,v}, \Sigma_t)=1$, \item $\gamma_2\leq \underset{\Sigma_t'\in \Omega_{\tau_t}}{\sum}\chi_{\alpha, t, b_1,\gamma_1,b_2}(T_{\theta,v}, \Sigma_t')\leq (\frac{R}{r_t})^{100\delta }\gamma_2$; \end{itemize} otherwise $\chi_{\alpha, t, b_1,\gamma_1,b_2,\gamma_2}(T_{\theta,v}, \Sigma_t)=0$. We define $\kappa=(\alpha, t, b_1, \gamma_1, b_2, \gamma_2,\dots, b_l, \gamma_l)$ and $\kappa'=(\alpha, t, b_1, \gamma_1, b_2, \gamma_2,\dots, b_l)$ for $2\leq l\lesssim N$. We define $\chi_{\kappa}$ inductively as above, and we stop when $b_l\geq (\frac{R}{r_t})^{-100\delta}b_{l-1}$ and $\gamma_l \geq (\frac{R}{r_t})^{-100\delta}\gamma_{l-1}$. Since the sequences for $b$ and $\gamma$ are decreasing with finitely many choices, we stop after at most $O(N)$ steps. There are at most $O_{\delta}(1)$ vectors $\kappa$ and $\kappa'$. \begin{definition} For each vector $\kappa=(\alpha, t, b_1, \gamma_1, b_2, \gamma_2,\dots, b_l, \gamma_l)$, we say that $\kappa$ is \emph{admissible} if $b_l\geq (\frac{R}{r_t})^{-100\delta}b_{l-1}$ and $\gamma_l \geq (\frac{R}{r_t})^{-100\delta}\gamma_{l-1}$. \end{definition} For each admissible $\kappa$, the functions $\chi_{\kappa}$ and $\chi_{\kappa'}$ together will play the role of $\chi$ in Subsection~\ref{broom analysis}. For any $T_{\theta,v}$ with $T_{\theta,v} \cap \Sigma_t \neq \emptyset$, either $T_{\theta,v}$ intersects $\Sigma_t$ transversally, in other words, $$\Angle(G(\theta), \Sigma_t)\geq r_t^{-1/2}, $$ or there exists some admissible $\kappa$ such that $\chi_{\kappa}(T_{\theta,v}, \Sigma_t)=1$. \begin{remark}\label{chizero} Let $\mathbb{T}_t^*$ denote the collection of $T_{\theta,v}$ with $\chi_{ \kappa}(T_{\theta,v}, \Sigma_t)=0$ for all $ \kappa$ and $\Sigma_t$.
Let $Ef_t^*$ be the sum of those large wavepackets with essential support in tubes in $\mathbb{T}_t^*$: $Ef_t^* =\underset{T_{\theta,v}\in \mathbb{T}_t^*}{\sum}Ef_{\theta,v}$. Each such $T_{\theta,v}$ either has support disjoint from a plane $\Sigma_t$ or intersects $\Sigma_t$ transversally, hence $\|Ef^*_{t,O,\tang}\|_{BL^p(O)} = \RapDec(R)\|f\|_{L^2}$. In other words, the contribution of wavepackets from $\mathbb{T}_t^*$ is negligible in $Ef_{O,\tang}$. \end{remark} Now we give the definition of $T_{\theta,v}\sim_{\kappa} B_k$ according to the broom structure and the function $\chi_{\kappa}$. \begin{definition}\label{sim for large r} Fix $\kappa$ and a tube $T_{\theta,v}$ with $\theta\subseteq \tau_t$, and let $B_k^*$ be the ball that maximizes the quantity $$\underset{S_t\subseteq B_k }{\sum} \underset{\Sigma_t\in \Omega_{S_t, \tau_t}}{\sum}\chi_{\kappa}(T_{\theta,v}, \Sigma_t).$$ If there are multiple maximizer balls $B_k^*$, then we choose only one. We say that $T_{\theta,v}\sim_{ \kappa} B_k$ if $B_k$ lies inside $10B_k^*$. We define $T_{\theta,v}\sim_{\kappa'} B_k$ according to the same rule with the function $\chi_{\kappa'}$ instead. \end{definition} There are at most $O_{\delta}(1)$ choices for $\kappa$ and $\kappa'$. We define bushes when $r_t\leq R^{1/2}$. \begin{definition} For a fat $r_t$--surface $S_t$ with $r_t\leq R^{1/2}$ and a cap $\tau_t$ of radius $r_t^{-1/2}$, a bush $\mathcal{U}_t$ rooted at $S_t$ in the direction $G(\tau_t)$ is defined as the collection of large tubes $T_{\theta,v}$ passing through $S_t$ with $\theta\subseteq \tau_t$ and with corresponding wavepacket $Ef_{\theta,v}$ nonzero. \end{definition} Let $\Gamma_{t,u}$ denote the collection of numbers $\{1, (\frac{R}{r_t})^{100\delta}, (\frac{R}{r_t})^{200\delta}, \dots, \frac{R}{r_t}\}$ and let $\Gamma_{t, \gamma}$ denote the collection of numbers $\{1, (\frac{R}{r_t})^{100\delta}, (\frac{R}{r_t})^{200\delta}, \dots, R\}$.
For any fat $r_t$--surface $S_t$ with $r_t\leq R^{1/2}$, any $T_{\theta,v}$ and $u_1 \in \Gamma_{t, u}$, we define $\chi_{t, u_1}(T_{\theta,v}, S_t)=1$ if $Ef_{\theta,v}$ belongs to a bush $\mathcal{U}_t$ rooted at $S_t$ of size between $u_1$ and $u_1 (\frac{R}{r_t})^{100\delta}$; otherwise $\chi_{t, u_1}(T_{\theta,v}, S_t)=0$. For each large tube $T_{\theta,v}$ and each $\gamma_1\in \Gamma_{t, \gamma}$, we define $\chi_{t, u_1,\gamma_1}(T_{\theta,v}, S_t) = 1$ if $\chi_{t, u_1}(T_{\theta,v}, S_t)=1$ and if the number $\underset{S_t'}{\sum} \chi_{t,u_1}(T_{\theta,v}, S_t')$ is between $\gamma_1$ and $\gamma_1 (\frac{R}{r_t})^{100\delta}$. Let $\iota$ denote the vector $(t, u_1, \gamma_1, u_2, \gamma_2, \dots, u_l, \gamma_l)$. We define $\chi_{\iota}$ inductively as in the case when $r_t\geq R^{1/2}$. We stop if $u_l \geq u_{l-1}(\frac{R}{r_t})^{-100\delta} $ and $\gamma_l \geq \gamma_{l-1} (\frac{R}{r_t})^{-100\delta}$. We say that the vector $\iota=(t, u_1, \gamma_1, u_2, \gamma_2, \dots, u_l, \gamma_l)$ is \emph{admissible} if $u_l \geq u_{l-1}(\frac{R}{r_t})^{-100\delta} $ and $\gamma_l \geq \gamma_{l-1} (\frac{R}{r_t})^{-100\delta}$. One can see that if $T_{\theta,v} \cap S_t\neq \emptyset$, then $\chi_{\iota}(T_{\theta,v}, S_t)=1$ for some admissible $\iota$. \begin{definition} Fix a $T_{\theta,v}$. For each $\iota$, let $B_k^*$ denote the ball that attains the maximum of the quantity $\underset{S_t\subseteq B_k}{\sum} \chi_{\iota}(T_{\theta,v}, S_t)$. If there are multiple choices for $B_k^*$, we choose only one. We define $T_{\theta,v}\sim_{\iota} B_k$ if $B_k$ lies inside $10B_k^*$. When we define $\chi_{\iota}$ inductively, we have also defined $\chi_{\iota'}$ for $\iota'= (t, u_1, \gamma_1, \dots, u_l)$. We define $T_{\theta,v}\sim_{\iota'} B_k$ according to the same rule with the function $\chi_{\iota'}$ instead. \end{definition} Finally we define $T_{\theta,v}\sim B_k$.
\begin{definition} We say that $T_{\theta,v}\sim B_k$ if there exists $\kappa$, $\kappa'$, $\iota$ or $\iota'$ such that one of the following is true: $T_{\theta,v}\sim_{\kappa} B_k$, $T_{\theta,v}\sim_{\kappa'} B_k$, $T_{\theta,v}\sim_{\iota}B_k$ or $T_{\theta,v}\sim_{\iota'}B_k$. \end{definition} \begin{lemma}\label{sim} For each $T_{\theta,v}$, the number of $B_k$ such that $T_{\theta,v}\sim B_k$ is bounded by $O_{\delta}(1)$. \end{lemma} \begin{proof} By the definition of $\sim_{\kappa}$, for a fixed $T_{\theta,v}$, the number of $B_k$ such that $T_{\theta,v}\sim_{\kappa} B_k$ is bounded by $O(1)$. The number of $B_k$ such that $T_{\theta,v}\sim_{ \iota} B_k$ is also bounded by $O(1)$ for a fixed $T_{\theta,v}$. There are at most $O_{\delta}(1)$ choices for $\kappa$ and $\iota$. The same argument applies to $\kappa'$ and $\iota'$. \end{proof} Fix an $S_t\subseteq B_k$ and $\Sigma_t\in \Omega_{S_t,\tau_t}$. We may assume that each small tube $T_{\tau_t, w_t}$ of length $r_t$ and width $r_t^{1/2}$ is $r_t^{-1/2}$--tangential to only one $\Sigma_t$: if $T_{\tau_t, w_t}$ is $r_t^{-1/2}$--tangential to more than one $\Sigma_t$, we assign it to an arbitrary one. Let $\mathbb{T}_{\Sigma_t}$ be the subset of $\mathbb{T}_{S_t, \tang}$ consisting of tubes in the direction $G(\tau_t)$ which are $r_t^{-1/2}$--tangential to $\Sigma_t$.
When $r_t\geq R^{1/2}$, for each $\kappa=(\alpha, t, b_1,\gamma_1, \dots, b_l, \gamma_l)$ and each $\Sigma_t \in \Omega_{S_t, \tau_t}$ with $S_t\subseteq B_k$, we define $$Ef^{\nsim}_{\kappa,\Sigma_t} = \underset{\chi_{\kappa}(T_{\theta,v}, \Sigma_t)=1, T_{\theta,v}\nsim B_k}{\sum} Ef_{\theta,v}$$ and $$Ef^{\nsim}_{\kappa, \Sigma_t, S_t, \tau_t} = \underset{T_{\tau_t, w_t}\in \mathbb{T}_{\Sigma_t}}{\sum} Ef^{\nsim}_{\kappa, \Sigma_t, \tau_t, w_t}.$$ From our construction and Remark~\ref{chizero}, we have the decomposition \begin{equation}\label{decomposition for kappa} Ef^{\nsim}_{\Pi_{S_t}, \tau_t}= \underset{\Sigma_t\in \Omega_{S_t, \tau_t}}{\sum}\underset{\kappa}{\sum} Ef^{\nsim}_{\kappa, \Sigma_t, S_t, \tau_t} + \RapDec(R)\|f\|_{L^2}. \end{equation} When we sum over $\kappa$, we mean the sum over all admissible $\kappa$. When $r_t\leq R^{1/2}$, for each $\iota=(t, u_1, \gamma_1, \dots, u_l, \gamma_l)$ and each $\Sigma_t \in \Omega_{S_t, \tau_t}$ with $S_t\subseteq B_k$, we define $$Ef^{\nsim}_{\iota, \Sigma_t} = \underset{\chi_{\iota}(T_{\theta,v}, S_t)=1, T_{\theta,v}\nsim B_k}{\sum} Ef_{\theta,v} $$ and $$Ef^{\nsim}_{\iota, \Sigma_t, S_t,\tau_t} = \underset{T_{\tau_t, w_t}\in \mathbb{T}_{\Sigma_t}}{\sum} Ef^{\nsim}_{\iota, \Sigma_t, \tau_t, w_t}.$$ Here $Ef^{\nsim}_{\iota, \Sigma_t}$ is the same for all $\Sigma_t\in \Omega_{S_t, \tau_t}$; we use this notation in parallel with the large $r_t$ case. As with decomposition~\ref{decomposition for kappa} for large $r_t$, we have \begin{equation}\label{decomposition for iota} Ef^{\nsim}_{\Pi_{S_t}, \tau_t}= \underset{\Sigma_t\in \Omega_{S_t, \tau_t}}{\sum}\underset{\iota}{\sum} Ef^{\nsim}_{\iota, \Sigma_t, S_t, \tau_t} + \RapDec(R)\|f\|_{L^2}. \end{equation} Here we sum over all the admissible $\iota$.
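For bookkeeping, one can check that the number of admissible vectors is harmless: each index set $\Gamma_{t,b}$, $\Gamma_{t,\gamma}$, $\Gamma_{t,u}$ has $O(\delta^{-1})$ elements, an admissible vector has length $l\lesssim N=O(\delta^{-1})$, and $t<\delta^{-2}$, so
\[
\#\{\kappa \text{~admissible}\}+\#\{\iota \text{~admissible}\} \;\lesssim\; \delta^{-2}\cdot O(\delta^{-1})^{O(\delta^{-1})} \;=\; O_{\delta}(1),
\]
independently of $R$. In particular, the sums over $\kappa$ and $\iota$ in decompositions~\ref{decomposition for kappa} and \ref{decomposition for iota} have $O_{\delta}(1)$ terms.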
Recall that at the beginning of this section, there exists a $t$ such that $S_t\in \mathcal{S}_t$ satisfies inequality~\ref{assumption12018} and inequality~\ref{assumption22018}, which results in \ref{last assumption} for most of the cells. We discuss the cases separately according to the size of $r_t$. \begin{lemma}\label{real kappa} If $S_t$ is a fat $r_t$--surface with $r_t\geq R^{1/2}$, then $$\|f^{\nsim}_{\Pi_{S_t}}\|_{L^2}^2 \lesssim R^{-1/2+O(\epsilon_0)}\underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}^2.$$ \end{lemma} Lemma~\ref{real kappa} corresponds to Lemma~\ref{kappa} in the white lie proof. In order to prove Lemma~\ref{real kappa}, we need the following Lemma~\ref{real kappa tau}, which corresponds to Lemma~\ref{kappa tau} in the white lie proof. \begin{lemma}\label{real kappa tau} If $S_t$ is a fat $r_t$--surface with $r_t\geq R^{1/2}$ and $\tau_t$ is a cap of radius $r_t^{-1/2}$, $\Sigma_t\in \Omega_{S_t, \tau_t}$, then for each admissible $\kappa = (\alpha, t, b_1, \gamma_1, \dots, b_l, \gamma_l)$, $$\|f^{\nsim}_{\kappa, \Sigma_t, S_t, \tau_t}\|_{L^2_{avg}(\tau_t)}^2 \lesssim (\frac{R}{r_t})^{-1/2} R^{O(\epsilon_0)} \underset{|\theta|=R^{-1/2}, \theta\subseteq \tau_t}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}^2.$$ \end{lemma} \begin{proof} Since the lemma is about a fixed $t$, in the proof we drop the dependence on $t$ and write $\tau_t=\tau$, $r_t=r$, $\Sigma_t=\Sigma$ and $S_t=S$. We assume that there are about $(\frac{R}{r})^{\beta_0}$ nonzero wavepackets $Ef_{\theta,v}$ with $\theta\subseteq \tau$. Let $B_k$ be the ball of radius $R^{1-\epsilon_0}$ containing $B_r$ and let $\Sigma_1 = \Sigma$. We define $\kappa'= (\alpha, t, b_1, \gamma_1, \dots, b_{l-1}, \gamma_{l-1},b_l)$. We say that $\Sigma_2\nsubseteq B_k$ if $\Sigma_2$ is associated to some fat $r$--surface $S_2$ outside of $5B_k$.
The main idea is to double count the number of wavepackets shared by $\Sigma_1$ and those far apart $\Sigma_2$, specifically the quantity \begin{equation}\label{double count} \underset{\Sigma_2\nsubseteq B_k}{\sum} \underset{\theta\subseteq \tau, v}{\sum}\chi_{\kappa}(T_{\theta,v}, \Sigma_1)\chi_{\kappa'}(T_{\theta,v}, \Sigma_2). \end{equation} By the definition of $T_{\theta,v}\nsim B_k$, for each $T_{\theta,v}$, \begin{equation}\label{each tube2018} \underset{\Sigma_2\nsubseteq B_k}{\sum} \chi_{\kappa'}(T_{\theta,v}, \Sigma_2)\gtrsim \sum_{\Sigma'}\chi_{\kappa'}(T_{\theta,v}, \Sigma'). \end{equation} If inequality~\ref{each tube2018} is not true, then the $B_k^*$ that maximizes $\underset{\Sigma\cap S \subseteq B_k^*}{\sum}\chi_{\kappa'}(T_{\theta,v}, \Sigma)$ belongs to $5B_k$ and $T_{\theta,v} \sim_{\kappa'} B_k$, which violates the assumption $T_{\theta,v}\nsim B_k$. This is the only place where we use the information that $T_{\theta,v}\nsim B_k$. For each $T_{\theta,v}\nsim B_k$ with $\chi_{\kappa}(T_{\theta,v},\Sigma_1)=1$, we have \begin{equation}\label{gammal2018} \sum_{\Sigma'}\chi_{\kappa'}(T_{\theta,v}, \Sigma') \gtrsim \gamma_l. \end{equation} Assume that there are $(\frac{R}{r})^{\beta_1}$ nonzero wavepackets $Ef_{\theta,v}$ such that $\theta \subseteq \tau$, $T_{\theta,v}\nsim B_k$ and $\chi_{\kappa}(T_{\theta,v}, \Sigma_1)=1$. We have the following lower bound for \ref{double count} by combining inequality~\ref{each tube2018} and inequality~\ref{gammal2018}, \begin{equation}\label{incidence lower bound} \underset{\Sigma_2\nsubseteq B_k}{\sum} \underset{\theta\subseteq \tau,v}{\sum}\chi_{\kappa}(T_{\theta,v}, \Sigma_1)\chi_{\kappa'}(T_{\theta,v}, \Sigma_2)\gtrsim \gamma_l (\frac{R}{r})^{\beta_1}. \end{equation} Next we are going to give an upper bound for \ref{double count}. Here we need the following geometric observation.
When $S_1=S$ and $S_2$ are $R^{1-\epsilon_0}$ apart and the normals of $\Sigma_1$ and $\Sigma_2$ both lie in $\alpha$, a broom rooted at $\Sigma_2$ can intersect $\Sigma_1$ in at most $R^{O(\epsilon_0)}$ large tubes $T_{\theta,v}$. This is because a broom rooted at $\Sigma_2$ spans in the normal direction of $\Sigma_2$. Since $S_1$ and $S_2$ have distance at least $R^{1-\epsilon_0}$, near $S_1$ the wavepackets in the broom are almost disjoint (up to $R^{O(\epsilon_0)}$ overlap). Since the normals of $\Sigma_1$ and $\Sigma_2$ belong to the same cap $\alpha$, the angle between them is at most $1/100$, so a broom rooted at $\Sigma_2$ intersects $\Sigma_1$ transversally. Recall from Remark~\ref{broom organization} that for a fixed $\Sigma_2$, each tube $T_{\theta,v}$ belongs to at most one broom rooted at $\Sigma_2$. The function $\chi_{\kappa'}$ counts brooms of size about $b_{l}$. Hence for each $\Sigma_2\nsubseteq B_k$, we have \begin{equation}\label{one pair} \underset{\theta\subseteq \tau,v}{\sum} \chi_{\kappa}(T_{\theta,v}, \Sigma_1)\chi_{\kappa'}(T_{\theta,v}, \Sigma_2) \lesssim R^{O(\epsilon_0)} b_l^{-1} \underset{\theta\subseteq \tau, v}{\sum} \chi_{\kappa'}(T_{\theta,v}, \Sigma_2). \end{equation} By the definition of $\chi_{\kappa'}$, each tube $T_{\theta,v}$ satisfies $\underset{\Sigma_2}{\sum} \chi_{\kappa'}(T_{\theta,v}, \Sigma_2)\leq \gamma_{l-1}(\frac{R}{r})^{100\delta}.$ There are at most $(\frac{R}{r})^{\beta_0}$ nonzero wavepackets $Ef_{\theta,v}$ with $\theta\subseteq \tau$, so \begin{equation}\label{one sum} \underset{\theta\subseteq \tau, v}{\sum}\underset{\Sigma_2}{\sum} \chi_{\kappa'}(T_{\theta,v}, \Sigma_2)\leq \gamma_{l-1}(\frac{R}{r})^{\beta_0+100\delta}.
\end{equation} Summing inequality~\ref{one pair} over $\Sigma_2\nsubseteq B_k$ and applying inequality~\ref{one sum}, we obtain the following upper bound for the quantity~\ref{double count}: \begin{equation}\label{incidence upper bound} \underset{\Sigma_2\nsubseteq B_k}{\sum}\underset{\theta\subseteq \tau,v}{\sum} \chi_{\kappa}(T_{\theta,v}, \Sigma_1)\chi_{\kappa'}(T_{\theta,v}, \Sigma_2) \lesssim R^{O(\epsilon_0)} (\frac{R}{r})^{\beta_0}\gamma_{l-1} b_l^{-1}. \end{equation} Since $\kappa$ is admissible, we have $\gamma_{l}\geq \gamma_{l-1} (\frac{R}{r})^{-100\delta}$. Comparing inequality~\ref{incidence lower bound} with inequality~\ref{incidence upper bound}, we get \begin{equation}\label{total} (\frac{R}{r})^{\beta_1}b_l \leq R^{O(\epsilon_0)} (\frac{R}{r})^{\beta_0}. \end{equation} We apply Lemma~\ref{large brooms} with $Eh_{\tau}= Ef^{\nsim}_{\kappa, \Sigma,\tau}$ and $b= b_l$: \begin{equation}\label{larger L2} \int_{N\Sigma} |Ef^{\nsim}_{\kappa, \Sigma,\tau}|^2 \lesssim (\frac{R}{r})^{-1/2} b_l \int_{B_r}|Ef^{\nsim}_{\kappa, \Sigma,\tau}|^2. \end{equation} By the definition of $Ef^{\nsim}_{\kappa, \Sigma,S, \tau}= Ef^{\nsim}_{\kappa, \Sigma_t, S_t,\tau_t}$ and inequality~\ref{larger L2}, \begin{align*} \int_{B_r} |Ef^{\nsim}_{\kappa, \Sigma,S, \tau}|^2 &\leq \int_{N\Sigma} |Ef^{\nsim}_{\kappa, \Sigma,\tau}|^2 \\ &\lesssim (\frac{R}{r})^{-1/2} b_l \int_{B_r}|Ef^{\nsim}_{\kappa, \Sigma,\tau}|^2.
\end{align*} Since $\|Ef^{\nsim}_{\kappa,\Sigma, S,\tau}\|_{L^2(B_r)}^2\sim r\|f^{\nsim}_{\kappa, \Sigma, S, \tau}\|_{L^2}^2$, we have \begin{align*} \|f^{\nsim}_{\kappa, \Sigma,S, \tau}\|_{L^2}^2 &\lesssim r^{-1}\int_{B_r}|Ef^{\nsim}_{\kappa, \Sigma, S ,\tau}|^2 \\ &\lesssim (\frac{R}{r})^{-1/2}b_l r^{-1}\int_{B_r}|Ef^{\nsim}_{\kappa, \Sigma,\tau}|^2 \\ &\lesssim (\frac{R}{r})^{-1/2}b_l\|f^{\nsim}_{\kappa, \Sigma,\tau}\|_{L^2}^2. \end{align*} Since only $(\frac{R}{r})^{\beta_1}$ of the $(\frac{R}{r})^{\beta_0}$ nonzero large wavepackets $Ef_{\theta,v}$ with $\theta\subseteq \tau$ intersect $\Sigma$ and satisfy $T_{\theta,v}\nsim B_k$, we have $$ \|f^{\nsim}_{\kappa, \Sigma,\tau}\|_{L^2}^2\lesssim (\frac{R}{r})^{\beta_1-\beta_0} \|f_{\tau}\|_{L^2}^2.$$ By inequality~\ref{total}, \begin{equation} \|f^{\nsim}_{\kappa,\Sigma, S, \tau}\|_{L^2}^2 \lesssim R^{O(\epsilon_0)}(\frac{R}{r})^{-1/2} \|f_{\tau}\|_{L^2}^2. \end{equation} \end{proof} We now prove Lemma~\ref{real kappa} using Lemma~\ref{real kappa tau}.
\begin{proof} By Lemma~\ref{surface to plane} and the decomposition of $Ef^{\nsim}_{\Pi_{S_t}, \tau_t}$ in equality~\ref{decomposition for kappa}, \begin{align*} \|Ef^{\nsim}_{\Pi_{S_t}}\|_{L^2(B_{r_t})}^2& \lesssim \sum_{\tau_t} \|Ef^{\nsim}_{\Pi_{S_t}, \tau_t}\|_{L^2(B_{r_t})}^2\\ &\lesssim_{\delta} \sum_{\tau_t} \sum_{\Sigma_t\in \Omega_{S_t,\tau_t}}\sum_{\kappa} \|Ef^{\nsim}_{ \kappa, \Sigma_t, S_t, \tau_t}\|_{L^2(B_{r_t})}^2. \end{align*} Since $\|f^{\nsim}_{\Pi_{S_t}} \|_{L^2}^2\sim r_t^{-1} \|Ef^{\nsim}_{\Pi_{S_t}}\|_{L^2(B_{r_t})}^2$ and $\|f^{\nsim}_{\kappa, \Sigma_t, S_t, \tau_t}\|_{L^2}^2 \sim r_t^{-1} \|Ef^{\nsim}_{ \kappa, \Sigma_t, S_t, \tau_t}\|_{L^2(B_{r_t})}^2$, we apply Lemma~\ref{real kappa tau} and Lemma~\ref{geometric larry}: \begin{align*} \|f^{\nsim}_{\Pi_{S_t}}\|_{L^2}^2&\lesssim _{\delta}\sum_{\tau_t} \sum_{\Sigma_t\in \Omega_{S_t, \tau_t}}\sum_{\kappa} \|f^{\nsim}_{\kappa, \Sigma_t, S_t, \tau_t}\|_{L^2}^2\\ &\lesssim R^{O(\epsilon_0)} r_t^{-1/2+O(\delta)} \underset{|\tau_t|=r_t^{-1/2}}{\max} \|f^{\nsim}_{ \kappa, \Sigma_t, S_t, \tau_t}\|_{L^2_{avg}(\tau_t)}^2\\ &\lesssim R^{O(\epsilon_0)} r_t^{-1/2+O(\delta)} (\frac{R}{r_t})^{-1/2} \underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}^2. \end{align*} \end{proof} With Lemma~\ref{real kappa} we prove Lemma~\ref{last case for large r2018}.
\begin{proof} We apply inequality~\ref{tangential 1} to $\|Ef^{\nsim}_{S_t}\|_{BL^p(S_t)}^p$ and $\|Ef^{\nsim}_{\Pi_{S_t}}\|_{BL^p(S_t)}^p$: \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim \underset{S_t\in \mathcal{S}_t}{\sum} \sum_{O\subseteq S_t} \|Ef^{\nsim}_{S_t}\|_{BL^p(O)}^p\\ &\lesssim \underset{S_t\in \mathcal{S}_t}{\sum} \sum_{O\subseteq S_t}\|Ef^{\nsim}_{S_t}\|_{BL^p(O)}^2 \|Ef^{\nsim}_{\Pi_{S_t}}\|_{BL^p(O)}^{p-2}\\ &\lesssim \underset{S_t\in \mathcal{S}_t}{\sum} \|Ef^{\nsim}_{S_t}\|_{BL^p(S_t)}^2 \|Ef^{\nsim}_{\Pi_{S_t}}\|_{BL^p(S_t)}^{p-2}\\ &\lesssim r_t^{\frac{5}{2}-\frac{3p}{4}} \underset{S_t\in \mathcal{S}_t}{\sum}\|f^{\nsim}_{S_t}\|_{L^2}^2 \|f^{\nsim}_{\Pi_{S_t}}\|_{L^2}^{p-2}. \end{align*} We apply Lemma~\ref{real kappa} to estimate $\|f^{\nsim}_{\Pi_{S_t}}\|_{L^2}$: $$ \|Ef\|_{BL^p(B_R)}^p \lesssim r_t^{\frac{5}{2}-\frac{3p}{4}} R^{-\frac{p-2}{4}+O(\epsilon_0)}\underset{S_t\in \mathcal{S}_t}{\sum}\|f^{\nsim}_{S_t}\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}.$$ By Corollary~\ref{linear structure2018}, $f^{\nsim}_{S_t}$ for all $S_t\in \mathcal{S}_t$ satisfies property (5) of Lemma~\ref{structure2018}, hence \begin{equation}\label{estimate1for large r} \|Ef\|_{BL^p(B_R)}^p \lesssim r_t^{\frac{5}{2}-\frac{3p}{4}} R^{-\frac{p-2}{4}+O(\epsilon_0)}D_t\|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}. \end{equation} Estimate~\ref{estimate1for large r} is useful when $D_t$ is small. When $D_t$ is large, we apply Corollary~\ref{linear structure2018} and Lemma~\ref{structure2018} instead.
Since each $S_t\in \mathcal{S}_t$ contains about the same number of $O$, for more than $R^{-\delta}|\mathcal{S}_t|\gtrsim D_t^3 R^{-O(\delta)}$ of the fat $r_t$--surfaces $S_t$, \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{\delta} \#\{O\} \|Ef\|_{BL^p(O)}^p \\ & \lesssim R^{O(\delta)}|\mathcal{S}_t| \|Ef^{\nsim}_{S_t}\|_{BL^p(S_t)}^p \\ &\lesssim R^{O(\delta)} r_t^{\frac{5}{2}-\frac{3p}{4} }|\mathcal{S}_t| \cdot \|f^{\nsim}_{S_t}\|_{L^2}^p. \end{align*} By Lemma~\ref{L2 orthogonality for nsim}, $\underset{S_t\in \mathcal{S}_t}{\sum} \|f^{\nsim}_{S_t}\|_{L^2}^2 \lesssim R^{O(\delta)}D_t \|f\|_{L^2}^2$. Hence there exists an $S_t$ such that $\|f^{\nsim}_{S_t}\|_{L^2}^2 \lesssim R^{O(\delta)} |\mathcal{S}_t|^{-1} D_t \|f\|_{L^2}^2$. We use this $S_t$ to estimate $\|Ef\|_{BL^p(B_R)}^p$: $$ \|Ef\|_{BL^p(B_R)}^p\lesssim R^{O(\delta)} r_t^{\frac{5}{2}-\frac{3p}{4} }|\mathcal{S}_t|^{1-\frac{p}{2}} D_t^{\frac{p}{2}} \|f\|_{L^2}^p. $$ Since $|\mathcal{S}_t|\gtrsim D_t^3R^{-O(\delta)}$, \begin{equation}\label{estimate2for large r} \|Ef\|_{BL^p(B_R)}^p\lesssim R^{O(\delta)} r_t^{\frac{5}{2}-\frac{3p}{4} } D_t^{3-p} \|f\|_{L^2}^p. \end{equation} We compare estimate~\ref{estimate1for large r} and estimate~\ref{estimate2for large r}: \begin{equation} \|Ef\|_{BL^p(B_R)}^p \lesssim R^{O(\epsilon_0)}r_{t}^{\frac{5}{2}-\frac{3p}{4}} \min\big( D_t R^{\frac{-(p-2)}{4}}, D_t^{3-p} \big) \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}. \end{equation} The worst case occurs when the two terms in the minimum balance, that is, when $R= D_t^4$. Since $r_t\leq R/D_t$, the constant factor is then at most $(R/D_t)^{\frac{5}{2}-\frac{3p}{4}}D_t^{3-p} = D_t^{\frac{21}{2}-\frac{13p}{4}}$, which is bounded by $O(R^{O(\epsilon_0)})$ when $p\geq 3+3/13$. \end{proof} Now we discuss the case when $r_t\leq R^{1/2}$. Our main ingredient is Lemma~\ref{real iota}, which corresponds to Lemma~\ref{iota}. In order to prove Lemma~\ref{real iota}, we need the following Lemma~\ref{real iota tau}, which corresponds to Lemma~\ref{iota tau}.
\begin{lemma}\label{real iota} If $S_t$ is a fat $r_t$--surface with $r_t\leq R^{1/2}$, then $$\|f^{\nsim}_{\Pi_{S_t}}\|_{L^2}^2 \lesssim r_t^{-1} R^{O(\epsilon_0)}\underset{|\theta|=R^{-1/2}}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}^2.$$ \end{lemma} \begin{proof} The proof is the same as that of Lemma~\ref{real kappa}. We use equality~\ref{decomposition for iota} instead of equality~\ref{decomposition for kappa}, and Lemma~\ref{real iota tau} instead of Lemma~\ref{real kappa tau}. \end{proof} \begin{lemma}\label{real iota tau} If $S_t$ is a fat $r_t$--surface with $r_t\leq R^{1/2}$, $\tau_t$ is a cap of radius $r_t^{-1/2}$ and $\Sigma_t\in \Omega_{S_t, \tau_t}$, then for each admissible $\iota = (t, u_1, \gamma_1, \dots, u_l, \gamma_l)$, $$\|f^{\nsim}_{\iota, \Sigma_t, S_t, \tau_t}\|_{L^2_{avg}(\tau_t)}^2 \lesssim r_t^{-1/2} R^{O(\epsilon_0)} \underset{|\theta|=R^{-1/2}, \theta\subseteq \tau_t}{\max} \|f_{\theta}\|_{L^2_{avg}(\theta)}^2.$$ \end{lemma} \begin{proof} Since the lemma concerns a fixed $t$, to simplify notation we write $r_t=r, \tau_t=\tau, S_t=S, \Sigma_t=\Sigma$ in this proof. By the decomposition~\ref{decomposition for iota}, it suffices to prove that for each admissible $\iota$, $$\|f^{\nsim}_{\iota, \Sigma, S,\tau}\|_{L^2_{avg}(\tau)}^2\lesssim r^{-1/2}R^{O(\epsilon_0)} \underset{|\theta|=R^{-1/2}, \theta\subseteq \tau}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^2.$$ Let $\iota' = (t, u_1, \gamma_1, \dots, u_{l})$. We apply arguments similar to those in Lemma~\ref{kappa}. We count the number of large wavepackets shared by two far-apart fat $r$--surfaces $S_1=S$ and $S_2$: \begin{equation}\label{iota2018} \sum_{S_2\nsubseteq 5B_k} \sum_{\theta\subset \tau,v} \chi_{\iota}(T_{\theta,v}, S_1)\chi_{\iota'}(T_{\theta,v}, S_2). \end{equation} For each tube $T_{\theta,v}\nsim B_k$ we have \begin{equation}\label{non sim condition} \sum_{S_2\nsubseteq 5 B_k} \chi_{\iota'}(T_{\theta,v}, S_2) \gtrsim \sum_{S'}\chi_{\iota'}(T_{\theta,v}, S').
\end{equation} Otherwise, the ball $B_k^*$ that maximizes $\underset{S'\subseteq B_k^*}{\sum}\chi_{\iota'}(T_{\theta,v}, S')$ would belong to $5B_k$, which violates the assumption $T_{\theta,v}\nsim B_k$. For each tube $T_{\theta,v}$ satisfying $\chi_{\iota}(T_{\theta,v}, S_1)=1$, by the definition of $\chi_{\iota}$ we know \begin{equation}\label{iota condition} \sum_{S'}\chi_{\iota'}(T_{\theta,v}, S')\gtrsim \gamma_l. \end{equation} Assume that there are $(\frac{R}{r})^{\beta_1}$ nonzero wavepackets $Ef_{\theta,v}$ such that $\theta\subset \tau$, $T_{\theta,v}\nsim B_k$, and $\chi_{\iota}(T_{\theta,v}, S_1)=1$. Combining inequality~\ref{non sim condition} and inequality~\ref{iota condition}, we obtain a lower bound for the quantity~\ref{iota2018}: \begin{equation}\label{lower bound for iota} \sum_{S_2\nsubseteq 5B_k} \sum_{\theta\subset \tau,v} \chi_{\iota}(T_{\theta,v}, S_1)\chi_{\iota'}(T_{\theta,v}, S_2)\gtrsim (\frac{R}{r})^{\beta_1} \gamma_l. \end{equation} We point out that $(\frac{R}{r})^{\beta_1}$ might be smaller than $u_l$, since we added the extra condition $T_{\theta,v}\nsim B_k$. Next we give an upper bound for the quantity~\ref{iota2018}. Fix a pair of fat $r$--surfaces $S_1$ and $S_2$ at distance at least $R^{1-\epsilon_0}$, each lying inside a ball of radius $r\leq R^{1/2}$. Then the number of large wavepackets shared by the two fat $r$--surfaces is at most $R^{O(\epsilon_0)}$. Specifically, \begin{equation}\label{geometric bush} \underset{\theta\subseteq \tau,v}{\sum} \chi_{\iota} (T_{\theta,v}, S_1)\chi_{\iota'}(T_{\theta,v}, S_2) \lesssim R^{O(\epsilon_0)}.
\end{equation} Since $\chi_{\iota'}$ counts bushes of size about $u_l$, inequality~\ref{geometric bush} can be rewritten as \begin{equation}\label{rewrite geometric bush} \underset{\theta\subseteq \tau,v}{\sum} \chi_{\iota} (T_{\theta,v}, S_1)\chi_{\iota'}(T_{\theta,v}, S_2) \lesssim R^{O(\epsilon_0)} u_l^{-1} \underset{\theta\subseteq \tau, v}{\sum} \chi_{\iota'}(T_{\theta,v}, S_2). \end{equation} Assume that there are $(\frac{R}{r})^{\beta_0}$ nonzero wavepackets $Ef_{\theta,v}$ with $\theta\subseteq \tau$. By the definition of $\chi_{\iota'}$, $$\sum_{S'}\chi_{\iota'}(T_{\theta,v}, S')\lesssim \gamma_{l-1} (\frac{R}{r})^{100\delta}.$$ We sum over all the nonzero wavepackets $Ef_{\theta,v}$ with $\theta\subseteq \tau$: \begin{equation}\label{upper bound bush} \sum_{\theta\subseteq \tau,v} \sum_{S'}\chi_{\iota'}(T_{\theta,v}, S')\lesssim \gamma_{l-1} (\frac{R}{r})^{\beta_0+ 100\delta}. \end{equation} We sum inequality~\ref{rewrite geometric bush} over all the fat $r$--surfaces $S_2\nsubseteq 5B_k$ and apply inequality~\ref{upper bound bush} to obtain the following upper bound for the quantity~\ref{iota2018}: \begin{equation}\label{upper bound for iota} \sum_{S_2\nsubseteq 5B_k} \sum_{\theta\subseteq \tau,v} \chi_{\iota}(T_{\theta,v}, S_1)\chi_{\iota'}(T_{\theta,v}, S_2)\lesssim R^{O(\epsilon_0)} u_l^{-1}(\frac{R}{r})^{\beta_0} \gamma_{l-1}. \end{equation} Since $\iota$ is admissible, $\gamma_{l} \geq \gamma_{l-1}(\frac{R}{r})^{-100\delta}$. Comparing inequality~\ref{lower bound for iota} with inequality~\ref{upper bound for iota}, we get \begin{equation}\label{count iota} (\frac{R}{r})^{\beta_1-\beta_0} u_l \lesssim R^{O(\epsilon_0)}. \end{equation} Now we are ready to estimate $\|f^{\nsim}_{\iota, \Sigma, S,\tau}\|_{L^2}$. Recall that $S=S_1$ and $\Sigma=\Sigma_1$. We apply Lemma~\ref{bush estimate} with $f^{\nsim}_{\iota, \Sigma,\tau}=g_{\mathcal{U}}$ and $u=u_l$.
By the definition of $f^{\nsim}_{\iota, \Sigma, S, \tau}$, which is the part of $f^{\nsim}_{\iota, \Sigma, \tau}$ tangential to $S$ and $\Sigma$, we have $$\|f^{\nsim}_{\iota, \Sigma, S, \tau}\|_{L^2}^2 \lesssim r^{-1/2} u_l \|f^{\nsim}_{\iota, \Sigma, \tau}\|_{L^2}^2.$$ Since there are $(\frac{R}{r})^{\beta_1}$ nonzero wavepackets $Ef_{\theta,v}$ such that $\theta\subset \tau$, $T_{\theta,v}\nsim B_k$ and $\chi_{\iota}(T_{\theta,v}, S)=1$, $$ \|f^{\nsim}_{\iota, \Sigma, \tau}\|_{L^2}^2 \leq (\frac{R}{r})^{\beta_1-\beta_0} \|f_{\tau}\|_{L^2}^2.$$ We apply inequality~\ref{count iota}: $$\|f^{\nsim}_{\iota, \Sigma,S, \tau}\|_{L^2}^2\lesssim r^{-1/2} R^{O(\epsilon_0)} \|f_{\tau}\|_{L^2}^2.$$ \end{proof} With Lemma~\ref{real iota} we prove Lemma~\ref{small r small D2018}, which corresponds to Lemma~\ref{small r2018allD}. \begin{proof} When $D_t\geq r_t^{1/2}$, by Lemma~\ref{structure2018} and the assumptions of this lemma, there exist more than $R^{-\delta}|\mathcal{S}_t| \gtrsim D_t^3R^{-O(\delta)}$ fat $r_t$--surfaces $S_t$ such that $$ \|Ef\|_{BL^p(B_R)}^p \lesssim |\mathcal{S}_t| R^{O(\delta)} \|Ef^{\nsim}_{S_t}\|_{BL^p(S_t)}^p . $$ We apply inequality~\ref{tangential 1} at scale $r_t$: $$\|Ef\|_{BL^p(B_R)}^p \lesssim |\mathcal{S}_t| R^{O(\delta)} r_t^{\frac{5}{2}-\frac{3p}{4}} \|f^{\nsim}_{S_t}\|_{L^2}^p.$$ By Lemma~\ref{L2 orthogonality for nsim}, $$\sum_{S_t\in \mathcal{S}_t}\|f^{\nsim}_{S_t}\|_{L^2}^2 \lesssim D_t R^{O(\delta)}\|f\|_{L^2}^2.$$ Hence there exists an $S_t$ such that $\|f^{\nsim}_{S_t}\|_{L^2}^2 \lesssim D_t R^{O(\delta)} |\mathcal{S}_t|^{-1} \|f\|_{L^2}^2$. We use this $S_t$ to estimate $\|Ef\|_{BL^p(B_R)}^p$: \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim |\mathcal{S}_t| R^{O(\delta)} r_t^{\frac{5}{2}-\frac{3p}{4}} \|f^{\nsim}_{S_t}\|_{L^2}^p\\ &\lesssim |\mathcal{S}_t|^{1-\frac{p}{2}} D_t^{\frac{p}{2}} R^{O(\delta)} r_t^{\frac{5}{2}-\frac{3p}{4}} \|f\|_{L^2}^p \\ &\lesssim D_t^{3-p}r_t^{\frac{5}{2}-\frac{3p}{4}} \|f\|_{L^2}^p \\ &\lesssim r_t^{\frac{5}{2}-\frac{3p}{4}+\frac{3-p}{2}} \|f\|_{L^2}^p.
\end{align*} We used $|\mathcal{S}_t|\gtrsim D_t^3 R^{-O(\delta)}$ and the assumption that $D_t\geq r_t^{1/2}$; since $\frac{5}{2}-\frac{3p}{4}+\frac{3-p}{2} = 4-\frac{5p}{4}$, the constant factor is bounded by $R^{\epsilon}$ when $p> \frac{16}{5}$. When $D_t\leq r_t^{1/2}$, we use the improved estimate in Lemma~\ref{real iota}. By the assumptions of this lemma, \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{\delta} \sum_{O}\|Ef\|_{BL^p(O)}^p \\ &\lesssim R^{\delta} \underset{S_t\in \mathcal{S}_t}{\sum} \sum_{O\subseteq S_t}\|Ef^{\nsim}_{S_t}\|_{BL^p(O)}^2\|Ef^{\nsim}_{\Pi_{S_t}}\|_{BL^p(O)}^{p-2}\\ &\lesssim R^{\delta} \underset{S_t\in \mathcal{S}_t}{\sum} \|Ef^{\nsim}_{S_t}\|_{BL^p(S_t)}^2 \|Ef^{\nsim}_{\Pi_{S_t}}\|_{BL^p(S_t)}^{p-2}. \end{align*} We apply inequality~\ref{tangential 1} at scale $r_t$, together with Lemma~\ref{real iota} and Lemma~\ref{L2 orthogonality for nsim}: \begin{align*} \|Ef\|_{BL^p(B_R)}^p &\lesssim R^{\delta}r_t^{\frac{5}{2}-\frac{3p}{4}} \underset{S_t\in \mathcal{S}_t}{\sum} \|f^{\nsim}_{S_t}\|_{L^2}^2 \|f^{\nsim}_{\Pi_{S_t}}\|_{L^2}^{p-2}\\ &\lesssim R^{O(\epsilon_0)} r_t^{\frac{5}{2}-\frac{3p}{4} -\frac{p-2}{2}} \underset{S_t\in \mathcal{S}_t}{\sum} \|f^{\nsim}_{S_t}\|_{L^2}^2\underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}\\ &\lesssim R^{O(\epsilon_0)} r_t^{\frac{5}{2}-\frac{3p}{4} -\frac{p-2}{2}} D_t \|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}\\ &\lesssim R^{O(\epsilon_0)} r_t^{\frac{5}{2}-\frac{3p}{4} -\frac{p-2}{2}+\frac{1}{2}}\|f\|_{L^2}^2 \underset{|\theta|=R^{-1/2}}{\max}\|f_{\theta}\|_{L^2_{avg}(\theta)}^{p-2}. \end{align*} We used the assumption that $D_t\leq r_t^{1/2}$; since $\frac{5}{2}-\frac{3p}{4}-\frac{p-2}{2}+\frac{1}{2} = 4-\frac{5p}{4}$, the constant factor is again bounded by $R^{\epsilon}$ when $p>\frac{16}{5}$. \end{proof} \section{Proof of Theorem~\ref{main induction theorem}}\label{proof} The proof of Theorem~\ref{main induction theorem} is a combination of the lemmas proved in the previous sections.
We summarize as follows. First we apply Lemma~\ref{structure2018} to obtain $\|Ef\|_{BL^p(B_R)}^p \lesssim \sum_O \|Ef\|_{BL^p(O)}^p $ and $Ef|_O = Ef_{O}+\sum_{t=1}^n Ef_{S_t} + \RapDec(R)\|f\|_{L^2}$. Then we apply the two-ends argument and decompose $Ef= Ef^{\sim}+Ef^{\nsim}$. If $Ef^{\sim}$ dominates, then Lemma~\ref{two ends2018} gives the answer. Otherwise $Ef^{\nsim}$ dominates. By Corollary~\ref{linear structure2018}, $Ef^{\nsim}=Ef^{\nsim}_{O}+ \sum_{t=1}^n Ef^{\nsim}_{S_t}+ \RapDec(R)\|f\|_{L^2}$. If $Ef^{\nsim}_{O}$ dominates, then we apply Lemma~\ref{small r2018}. Otherwise there exists a $t$ such that $Ef^{\nsim}_{S_t}$ dominates for most of the $O$ and $\|Ef^{\nsim}_{S_t}\|_{BL^p(O)}\sim \|Ef^{\nsim}_{\Pi_{S_t}}\|_{BL^p(O)}$. We apply \begin{itemize} \item Lemma~\ref{large r2018} when $r_t\geq R^{13/16}$, \item Lemma~\ref{last case for large r2018} when $R^{1/2} \leq r_t\leq R^{13/16}$, \item and Lemma~\ref{small r small D2018} when $r_t\leq R^{1/2}$. \end{itemize}
\title[A question of Frohardt]{A question of Frohardt on $2$-groups, and skew translation quadrangles of even order} \subjclass[2000]{05B25, 20B10, 20B25, 20E42, 51E12.} \author{Koen Thas} \address{{Ghent University}, {Department of Mathematics}, {Krijgslaan 281, S25, B-9000 Ghent, Belgium}} \email{koen.thas@gmail.com} \thanks{} \date{} \begin{document} \maketitle \begin{abstract} We solve a fundamental question posed in Frohardt's 1988 paper \cite{Fro} on finite $2$-groups with Kantor families, by showing that finite groups $H$ with a Kantor family $(\mathcal{F},\mathcal{F}^*)$ having distinct members $A, B \in \mathcal{F}$ such that $A^* \cap B^*$ is a central subgroup of $H$ and the quotient $H/(A^* \cap B^*)$ is abelian cannot exist if the center of $H$ has exponent $4$ and the members of $\mathcal{F}$ are elementary abelian. In a similar way, we solve another old problem dating back to the 1970s by showing that finite skew translation quadrangles of even order $(t,t)$ are always translation generalized quadrangles. \end{abstract} \setcounter{tocdepth}{1} \tableofcontents \bigskip \section{Introduction} In a stunning and seminal paper of 1988 \cite{Fro}, Frohardt solved a large part of a famous question of Kantor, which asks whether a finite group admitting a Kantor family is necessarily a $p$-group. Such groups are very important since they produce generalized quadrangles, the central Tits buildings of rank $2$.
In fact, groups with Kantor families yield the most important tool to construct generalized quadrangles, and a large literature exists on these objects -- see for instance \cite{PT,TGQBook,LEGQ,HB,TiWe}. Generalized quadrangles constructed from Kantor families -- by a procedure which is explained in section \ref{KF} -- carry an interesting automorphism group which locally fixes some point linewise, and are called ``elation generalized quadrangles'' (EGQs). The theory of EGQs was initially devised through the 1970s and 1980s (see \cite[chapters 8--10]{PT} and \cite{LEGQ}) in order to set up a framework for studying automorphisms, and especially Moufang conditions, of generalized quadrangles from a local point of view, much as translation planes were studied in the theory of projective planes. Together with a deep combinatorial theory, this enabled an almost entirely synthetic-geometric proof of the celebrated results of Fong and Seitz \cite{BN1,BN2} on split BN-pairs of rank $2$, in the case of BN-pairs of type $\texttt{B}_2$ -- see \cite[chapters 8 and 9]{PT}, \cite[chapters 10 and 11]{TGQBook}, and also the related recent note \cite{localhalf}. \\ In \cite{Fro}, Frohardt also studied groups $H$ with Kantor families $(\mathcal{F},\mathcal{F}^*)$ which have distinct members $A, B \in \mathcal{F}$ such that $A^* \cap B^* \leq Z(H)$ (where $Z(H)$ denotes the center of $H$) and $H/(A^* \cap B^*)$ is abelian. Such Kantor families are fundamental, since almost all known examples of Kantor families have this natural property -- see, e.g., the monograph \cite{LEGQ} or Payne's famous paper \cite{9}. We also refer to section \ref{STGQover} for more background information on this matter. Only very few classes of finite groups are known to admit Kantor families; the most frequently used ones are elementary abelian groups, and Heisenberg groups of dimension $3$ or $5$ defined over finite fields of odd characteristic (see e.g. the monograph \cite{LEGQ}). They all satisfy Frohardt's condition.
\begin{theorem}[D. Frohardt \cite{Fro}, 1988] \label{Fro88} Let $H$ be a finite group with a Kantor family $(\mathcal{F},\mathcal{F}^*)$ having distinct members $A, B \in \mathcal{F}$ such that $A^* \cap B^* \leq Z(H)$ and $H/(A^* \cap B^*)$ is abelian. Then one of the following three cases occurs: \begin{itemize} \item[(1)] $Z(H)$ and the elements of $\mathcal{F}$ are elementary abelian. \item[(2)] $Z(H)$ is an elementary abelian $2$-group and the elements of $\mathcal{F}$ have exponent $4$. \item[(3)] $Z(H)$ has exponent $4$ and the elements of $\mathcal{F}$ are elementary abelian $2$-groups. \end{itemize} \end{theorem} In {\em loc. cit.}, Frohardt asked whether cases (2) and (3) actually occur. \\ In 2006, Rostermundt \cite{Roster} and, independently, Thas \cite{Basic} constructed an infinite class of examples (related to Hermitian varieties in projective dimension $3$) which fit into class (2), so class (3) became the final challenge. Frohardt's problem resisted many years of attempts (see for instance the work of Hachenberger \cite{H}), and the problem also occurs in the large literature on translation nets. Moreover, the existence of examples in class (3) presents one of the main obstacles in classifying so-called ``skew translation generalized quadrangles'' (STGQs) -- see \cite{noteann,STGQ}, and also \cite{STGQ2,Leug}. We also refer to section \ref{STGQover} for some milestone results in that classification theory. \\ In this paper, we resolve the question by showing that class (3) is empty. The proof consists of a mixture of group-theoretic and synthetic-geometric reasoning. Most of the proof was obtained about four years ago, but the key lemma, Lemma \ref{subGQ}, which constructs certain subquadrangles from local data and can be seen as a group-theoretical generalization of a result in \cite{notenet}, was only found very recently, in January 2018.
It uses the finer geometry of the projective representation (through the Andr\'{e}-Bruck-Bose construction) of the local translation plane at the regular elation point, once we have shown that in case (3) the parameters of hypothetical examples are necessarily of type $(t,t)$. The details can be found in section \ref{secsubGQ}. \\ As a bonus of this new technique, we are able to solve another long-standing open problem on which virtually no progress has been made since Payne's first paper on skew translation generalized quadrangles \cite{STGQbirth} from 1975, marking the birth of the theory. In its long history, no examples were found of STGQs of even order $(t,t)$ for which the elation group is {\em not} abelian -- when $t$ is odd, the theory is entirely different, since then it is easy to prove that such a group cannot be abelian at all. Since one expects that such STGQs would have to meet the requirements of Frohardt's theorem, and since for STGQs of order $(t,t)$ the elements of $\mathcal{F}$ are always elementary abelian (see \cite{STGQ} for those details), such examples would conjecturally live in class (3) of Theorem \ref{Fro88}. And indeed they do, if they exist. Previously, the best result was the same statement for $(t,t) = (8,8)$ \cite{Leug}, with a very long and computer-aided technical proof. Our proof of the general solution is very short (see sections \ref{secsubGQ} and \ref{sol}); it states that any finite STGQ of even order $(t,t)$ is necessarily a {\em translation generalized quadrangle} -- that is, the associated elation group is an elementary abelian $2$-group, and hence the quadrangle has a projective representation in some projective space. This essentially means that there is no ``proper'' theory of STGQs in this case. On the combinatorial level, there is another interesting aspect to this result.
If every line incident with some point in a finite generalized quadrangle of even order $(t,t)$ is {\em regular} (see section \ref{reg} for a formal definition of this very important notion), then it can be shown that the point itself is also regular (\cite[section 1.5.2]{PT}). Our second main result provides a group-theoretical converse: if a finite EGQ of even order $(t,t)$ has a regular elation point $x$ (this is another way of defining STGQs in this specific case), then it is a translation quadrangle, so all the lines incident with $x$ are regular. \subsection*{Organization} In sections \ref{intro} and \ref{sett}, we introduce the basic notions that we will need. In section \ref{prep}, we make a number of synthetic observations; we essentially show that in case (3) of Frohardt's theorem, it is sufficient to work with Kantor families of type $(t,t)$. This is done in the geometric language of generalized quadrangles. In section \ref{secsubGQ}, we obtain the new subquadrangle lemma. Then, in section \ref{first}, we show that Frohardt's class (3) is empty. Finally, in the last section, we show that STGQs of even order $(t,t)$ are always translation quadrangles. \bigskip \section{Synopsis of definitions} \label{intro} Let $\Gamma$ be a thick generalized quadrangle (GQ). It is a rank $2$ geometry $\Gamma = (\mathcal{P},\mathcal{B},\mathbf{I})$ (where we call the elements of $\mathcal{P}$ ``points'' and those of $\mathcal{B}$ ``lines'') such that the following axioms are satisfied: \begin{itemize} \item[(a)] there are no ordinary digons and triangles contained in $\Gamma$; \item[(b)] each two elements of $\mathcal{P} \cup \mathcal{B}$ are contained in an ordinary quadrangle; \item[(c)] there exists an ordinary pentagon. \end{itemize} It can be shown that there exist constants $s$ and $t$ such that each point is incident with $t + 1$ lines and each line is incident with $s + 1$ points. We say that $(s,t)$ is the {\em order} of $\Gamma$.
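For concreteness, we recall a standard count (see chapter 1 of \cite{PT}): a finite thick GQ of order $(s,t)$ has $$\vert \mathcal{P} \vert = (1+s)(1+st), \qquad \vert \mathcal{B} \vert = (1+t)(1+st)$$ points and lines, respectively. For instance, the smallest thick GQ is the symplectic quadrangle $W(2)$, of order $(2,2)$, with $15$ points and $15$ lines.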
Note that an ordinary quadrangle is just a (necessarily thin) GQ of order $(1,1)$ -- such a subgeometry is also called an ``apartment'' (of $\Gamma$). \subsection{Subquadrangles} A {\em subquadrangle} (subGQ) $\Gamma' = (\mathcal{P}',\mathcal{B}',\mathbf{I}')$ of a generalized quadrangle $\Gamma = (\mathcal{P},\mathcal{B},\mathbf{I})$ is a GQ for which $\mathcal{P}' \subseteq \mathcal{P}$, $\mathcal{B}' \subseteq \mathcal{B}$, and $\mathbf{I}'$ is the incidence relation which is induced by $\mathbf{I}$ on $(\mathcal{P}' \times \mathcal{B}') \cup (\mathcal{B}' \times \mathcal{P}')$. \subsection{Regularity} \label{reg} Let $\Gamma$ be a thick GQ of order $(s,t)$. If $X$ and $Y$ are (not necessarily different) points, or lines, $X \sim Y$ denotes the fact that there is a line incident with both, or a point incident with both. Then $X^{\perp} := \{ Y \ \vert\ Y \sim X \}$, and if $S$ is a point set, or a line set, $S^{\perp} := \cap_{s \in S}s^{\perp}$ and $S^{\perp\perp} := {(S^{\perp})}^{\perp}$. A particularly important example is the case where $S = \{X,Y\}$ is a set of distinct noncollinear points (or nonconcurrent lines, but this is merely the dual situation, which we leave to the reader); then each line incident with $X$ is incident with precisely one point of $\{ X,Y\}^{\perp}$ (so if $\Gamma$ is finite, $\vert \{ X,Y\}^{\perp} \vert = t + 1$). The set $\{ X,Y\}^{\perp\perp}$ consists of all points which are collinear with every point of $\{X,Y\}^{\perp}$, so \begin{equation} \{ X, Y\} \subseteq \{X,Y\}^{\perp\perp}. \end{equation} Let $Z$ be any point of $\{X,Y\}^{\perp}$; if each line incident with $Z$ is incident with exactly one point of $\{X,Y\}^{\perp\perp}$, then this property is independent of the choice of $Z$, and we say that $\{ X,Y\}$ is {\em regular}. In the finite case, we could equivalently have asked that $\vert \{ X,Y\}^{\perp\perp} \vert = t + 1$. We call a point/line $X$ {\em regular} if $\{ X,Y \}$ is regular for all $Y \not\sim X$.
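As a standard illustration of these notions (see \cite{PT}): in the symplectic quadrangle $W(q)$, for noncollinear points $X, Y$ one indeed has $\vert \{X,Y\}^{\perp\perp} \vert = t+1 = q+1$, so every point of $W(q)$ is regular, while the lines of $W(q)$ are regular if and only if $q$ is even. The even case, in which every point and every line is regular, is closely related to the combinatorial setting of our second main result.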
\subsection{Symmetry} Isomorphisms and automorphisms of generalized quadrangles are defined in the usual manner. See chapter 1 of \cite{PT}. The automorphism group of a GQ $\Gamma$ will be denoted by $\mathrm{Aut}(\Gamma)$. Let $X$ be a point or a line in a thick GQ $\Gamma$. A {\em symmetry} with {\em center} $X$ (in the case of a point) or {\em axis} $X$ (in the case of a line) is an element of $\mathrm{Aut}(\Gamma)$ that fixes each element of $X^{\perp}$. We say that $X$ is a {\em center of symmetry} (point case) or an {\em axis of symmetry} (line case) if there exist $Y$ and $Z$ in $X^{\perp}$ such that $Y \not\sim Z$, for which the group of all symmetries $\mathcal{S}(X)$ with center/axis $X$ acts transitively on $\{ Y,Z\}^{\perp} \setminus \{X\}$. This definition does not depend on the choice of $(Y,Z)$, and one easily shows that ``transitive'' implies ``sharply transitive.'' In the finite case, we could also have asked that \begin{equation} \vert \mathcal{S}(X) \vert = t \end{equation} if the order of $\Gamma$ is $(s,t)$ and $X$ is a point (for an axis of symmetry, the corresponding count is $s$). Note that a center/axis of symmetry is necessarily regular. \subsection{Kantor families} \label{KF} In this section, we will recall the very important Kantor family construction. Groups with Kantor families produce generalized quadrangles, and conversely, a certain type of generalized quadrangle gives rise to Kantor families. We will only define {\em finite} Kantor families, as that is the case we will need in due course. So suppose $K$ is a finite group of order $u^2v$ for positive integers $u$ and $v$, both at least $2$.
A {\em Kantor family} (of type $(u,v)$) in $K$ is a pair $(\mathcal{F},\mathcal{F}^*)$ of sets of subgroups of $K$ for which the properties below are satisfied: \begin{itemize} \item[(a)] $\vert \mathcal{F} \vert = \vert \mathcal{F}^* \vert = v + 1$, and there is a bijection $*: \mathcal{F} \to \mathcal{F}^*$ which maps each $A \in \mathcal{F}$ to $A^* \in \mathcal{F}^*$ such that $A \leq A^*$; \item[(b)] for each $A \in \mathcal{F}$, $\vert A \vert = u$ and $\vert A^* \vert = uv$; \item[(c)] if $A, B, C$ are different elements in $\mathcal{F}$, then $AB \cap C = \{ \mathrm{id} \}$; \item[(d)] if $A$ and $B$ are different elements in $\mathcal{F}$, then $A \cap B^* = \{ \mathrm{id} \}$. \end{itemize} From the data $\Big(K,(\mathcal{F},\mathcal{F}^*)\Big)$ one constructs a thick GQ $\Gamma = \Gamma\Big(K,(\mathcal{F},\mathcal{F}^*)\Big)$ of order $(u,v)$ as follows. Its points are a symbol $(\infty)$, left cosets of type $gA^*$ ($A \in \mathcal{F}$ and $g \in K$), and the elements of $K$. Its lines are symbols $[ A ]$ ($A \in \mathcal{F}$) and left cosets of type $gA$ ($A \in \mathcal{F}$ and $g \in K$). The point $(\infty)$ is incident with all lines of the first type, and no other lines. A line $[A]$ is also incident with all points $gA^*$. All other incidences are (reversed) containment. The group $K$ acts naturally as an automorphism group on $\Gamma$, by left multiplication on the cosets, while fixing the symbolic point and symbolic lines. Then $K$ fixes all the lines incident with $(\infty)$, and acts sharply transitively on the points of $\Gamma$ which are not collinear with $(\infty)$. Conversely, let $\Omega$ be a thick GQ of order $(u,v)$, with an automorphism group $L$ which fixes some point $x$ linewise while acting sharply transitively on the points not collinear with $x$. Then $\vert L \vert = u^2v$. Now take one arbitrary point $y$ not collinear with $x$ (due to the transitivity of $L$ on these points, this choice is not important).
For each line $U \mathbf{I} y$, let $w$ be the point which is incident with $U$ and collinear with $x$; let $L_U^* := L_w$. Then, with $\mathcal{F} := \{ L_U\ \vert\ U \mathbf{I} y \}$ and $\mathcal{F}^* := \{ L_U^*\ \vert\ U \mathbf{I} y \}$ (where $L_U$ and $L_w$ denote the stabilizers in $L$ of $U$ and $w$), $(\mathcal{F},\mathcal{F}^*)$ is a Kantor family of type $(u,v)$ in $L$. Also, we have a natural isomorphism \begin{equation} \Gamma\Big(L,(\mathcal{F},\mathcal{F}^*)\Big)\ \to\ \Omega \end{equation} which maps $L$ to itself and $(\infty)$ to $x$. \subsection{Skew translation quadrangles} \label{STGQover} If $(\Gamma,K)$ is an EGQ with elation point $x$, and $x$ is a center of symmetry such that the corresponding symmetry group $\mathcal{S}$ is a subgroup of $K$, then we call $(\Gamma,K)$ a {\em skew translation generalized quadrangle} (STGQ). In terms of Kantor families (using the notation of the previous subsection), a Kantor family $(\mathcal{F},\mathcal{F}^*)$ gives rise to an STGQ if and only if there is a normal subgroup $C$ of $K$ such that for each $A \in \mathcal{F}$, we have $A^* = AC$ (see \cite{STGQbirth} or \cite[chapter 10]{PT}); $C$ then corresponds to the group of symmetries with center the elation point. Note that for different $A, B$ in $\mathcal{F}$, we have $A^* \cap B^* = C$. This type of GQ is very general: each known finite generalized quadrangle which is not isomorphic to the Hermitian quadrangle $\mathcal{H}(4,q^2)$ for some finite field $\mathbb{F}_q$, is, modulo point-line duality, either an STGQ, or the Payne-derived quadrangle of an STGQ. More details on this statement can be found in \cite{STGQ}; for a more or less recent but rather detailed census on the known GQs, we also refer to \cite{Payneproc}. We recall some basic (``decisive'') classification results in the large theory of STGQs. Much more can be found in \cite{STGQ}. The first result was obtained by the author in \cite{STGQ} (see \cite{noteann} for chronological details in its conception).
It was also obtained in \cite{Leug} with virtually the same proof. \begin{theorem} \label{oddsq} An STGQ of odd order $(t,t)$ is isomorphic to the symplectic quadrangle $\mathcal{W}(t)$. \end{theorem} The symplectic quadrangle $\mathcal{W}(t)$ arises as the $\mathbb{F}_t$-rational points in a projective space $\mathbb{P}^3(\mathbb{F}_t)$, together with the absolute lines with respect to a symplectic polarity of $\mathbb{P}^3(\mathbb{F}_t)$ (see \cite[chapter 3]{PT}); it is one of the natural geometric modules of the group $\mathbf{PSp}_4(t)$.\\ The most investigated type of STGQ is arguably the class of ``flock quadrangles.'' All the known flock quadrangles arise through Kantor families in one and the same type of elation group: finite Heisenberg groups of dimension $5$ (over a finite field). The next result proves the converse, and is taken from \cite{Isomflock}. \begin{theorem} If $(\Gamma,K)$ is an EGQ of order $(s,t)$ and $K$ is isomorphic to a Heisenberg group of dimension $5$ over $\mathbb{F}_t$, then $\Gamma$ is a flock quadrangle. \end{theorem} The following result is one of the few known results on STGQs of even order $(t,t)$; its proof takes up more than half of the paper \cite{Leug}, and is computer-aided. \begin{theorem} An STGQ of order $(8,8)$ is a translation generalized quadrangle. \end{theorem} In the last section of the present paper, we will handle all STGQs of even order. \medskip \section{Notation} \label{sett} Let $K$ be a finite group with a Kantor family $(\mathcal{F},\mathcal{F}^*)$ of type $(s,t)$. For certain distinct elements $A, B \in \mathcal{F}$, we suppose that $K/(A^* \cap B^*)$ is abelian and that $A^* \cap B^* \leq Z(K)$, the center of $K$. Put $A^* \cap B^* =: \mathcal{S}$. We want to work solely in case (3) of Frohardt's theorem in sections \ref{prep} and \ref{secsubGQ}, so that the elements of $\mathcal{F}$ are elementary abelian, but $Z(K) =: Z$ has exponent $4$.
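To make the coset-geometry construction of section \ref{KF} concrete, here is a minimal computational sketch. The family below is a hypothetical type-$(2,2)$ Kantor family in the elementary abelian group $(\mathbb{Z}_2)^3$ (so the resulting GQ is the translation quadrangle $\mathcal{W}(2)$, and not a case-(3) example); the subgroups $A_i$ and $A_i^*$ are choices made purely for illustration:

```python
from itertools import product

# K = (Z_2)^3, written additively; a hypothetical Kantor family of type (2,2)
K = [tuple(v) for v in product((0, 1), repeat=3)]
add = lambda g, h: tuple((a + b) % 2 for a, b in zip(g, h))

O = (0, 0, 0)
F = [frozenset({O, (1, 0, 0)}),
     frozenset({O, (0, 1, 0)}),
     frozenset({O, (0, 0, 1)})]
Fstar = [frozenset({O, (1, 0, 0), (0, 1, 1), (1, 1, 1)}),
         frozenset({O, (0, 1, 0), (1, 0, 1), (1, 1, 1)}),
         frozenset({O, (0, 0, 1), (1, 1, 0), (1, 1, 1)})]

coset = lambda g, A: frozenset(add(g, a) for a in A)

# points: (infty), cosets g A_i*, and the elements of K
points = [("inf",)]
points += [("cs", i, C) for i in range(3) for C in {coset(g, Fstar[i]) for g in K}]
points += [("el", g) for g in K]
# lines: the symbols [A_i] and the cosets g A_i
lines = [("sym", i) for i in range(3)]
lines += [("ln", i, C) for i in range(3) for C in {coset(g, F[i]) for g in K}]

def incident(p, L):
    if p[0] == "inf":
        return L[0] == "sym"                  # (infty) lies on every [A_i]
    if p[0] == "cs":
        if L[0] == "sym":
            return p[1] == L[1]               # [A_i] carries all cosets of A_i*
        return p[1] == L[1] and L[2] <= p[2]  # reversed containment
    return L[0] == "ln" and p[1] in L[2]      # group element on a coset line

assert len(points) == 15 and len(lines) == 15
assert all(sum(incident(p, L) for p in points) == 3 for L in lines)
assert all(sum(incident(p, L) for L in lines) == 3 for p in points)

on = lambda L: [p for p in points if incident(p, L)]
col = lambda p, q: p != q and any(incident(p, M) and incident(q, M) for M in lines)
for p in points:                              # main GQ axiom: unique projection
    for L in lines:
        if not incident(p, L):
            assert sum(1 for q in on(L) if col(p, q)) == 1
```

Note that this family also satisfies the STGQ criterion of section \ref{STGQover}: with the central subgroup $C = \{(0,0,0),(1,1,1)\}$ one has $A_i^* = A_iC$ for each $i$.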
The associated generalized quadrangle with parameters $(s,t)$ is denoted $\Gamma^x$, or $\Gamma$. The Kantor family is defined relative to the point $z \not\sim x$. We say that a group $G \leq K$ satisfies (*) if for any element $\alpha$ of $G$, we have that if $\alpha$ fixes some point $y \sim x$ with $y \ne x$, then $\alpha$ also fixes the line $xy$ pointwise. We say that $G \leq K$ satisfies (*) {\em at} $U \mathbf{I} x$ if this property is locally fulfilled at $U$. \medskip \section{Preliminary properties} \label{prep} In this section we observe some initial properties which narrow down the possibilities for elements in Theorem \ref{Fro88}(3). In the final part of the proof of the main theorem, we will not need all these properties, but it is interesting to see how far one can go ``synthetically.'' \\ First note that for any line $U \mathbf{I} x$, $K$ acts transitively on $U \setminus \{x\}$. As $\mathcal{S} \leq Z(K)$, it hence follows that $\mathcal{S}$ fixes $[A]$ and $[B]$ pointwise (so $\mathcal{S}$ is in the kernel of the action of $K$ on the point set of $[A]$, and $[B]$). As $K/\mathcal{S}$ is abelian and transitive on both $[A] \setminus \{x\}$ and $[B] \setminus \{x\}$, we have that $K$ has (*) at $[A]$ and $[B]$. Let $z \in Z^\times$; if $z$ fixes an affine line, then $z$ lies in some conjugate of an element of $\mathcal{F}$, implying that $z^2 = 1$. So we may assume that $z$ does not have this property. Now suppose there is an affine line $U$ such that $U^z$ does not intersect $U$. Take any element $V \in \{U,U^z\}^{\perp}$ which is not incident with $x$; choose the element $\beta$ in $K_V$ which sends $V^z$ to $V$; then $z\beta$ fixes $U$, and $[z,\beta] = \mathrm{id}$, so $\mathrm{id} = (z\beta)^2 = z^2\beta^2 = z^2$. So we may suppose that $z$ fixes each point of $x^{\perp}$, i.e., $z$ is a symmetry with center $x$. In its turn, this implies that $z \in \mathcal{S}$.
\\ \subsection{Reduction of parameters} Now suppose that $z$ is a (nontrivial) symmetry with center $x$, and suppose that $U \sim [A]$ is an affine line. Let $\alpha \in A^*$ be an element which sends $U^z$ to $U$, such that $z\alpha \ne \mathrm{id}$ (such $\alpha$ exist, of course). Then $z\alpha$ fixes $U$. As $[z,\alpha] = \mathrm{id}$, $z$ is an involution if and only if $\alpha$ is an involution. As $(\mathcal{F},\mathcal{F}^*)$ is a Kantor family, we can write $A^* = A\mathcal{S}$ (with $A \cap \mathcal{S} = \{\mathrm{id}\}$). We can write $\alpha = z^{-1}a$, with $a \in K_U$, so that the fixed-point structure of $\alpha$ is that of $a$: precisely the points incident with $[A]$. Suppose $\alpha$ does not fix affine lines. Applying a theorem of Benson, we conclude \begin{equation} (t + 1)(s + 1) + st \equiv st + 1 \mod{s + t}, \end{equation} that is, $st \equiv 0 \mod{s + t}$. Since $s$ and $t$ are powers of $2$ here (the exponent-$4$ group $K$ has order $s^2t$), this congruence can only hold when $s = t$ with $t$ even, so we obtain a contradiction unless $s = t$ is even. In the other cases, $\alpha$ must fix affine lines, so that it is an involution, and hence $z$ also is. So if $s \ne t$, each element of $Z$ is an involution, showing that case (3) of Frohardt's theorem cannot occur under these assumptions. \subsection{The centralizers $C_B(\alpha)$} \label{cent} Now consider the case where $s = t$, and $t$ is even. Note that $\mathcal{S}$ only consists of symmetries and involutions. Suppose $\beta \in \mathcal{S}^\times$ is not an involution (it is then a symmetry); compose $\beta$ with a hypothetical involution $\iota$ in $Z$ which is not a symmetry. Then $\beta\iota \in Z$ is not a symmetry, so it is an involution, and $\mathrm{id} = (\beta\iota)^2 = \beta^2\iota^2 = \beta^2$, contradiction. So all involutions in $\mathcal{S}$ are symmetries, whence \ul{all elements of $\mathcal{S}$ are symmetries.} It follows easily that $x$ is a regular point.
If $K$ is abelian, the theory of translation generalized quadrangles gives us that $K$ is elementary abelian (see \cite[chapter 3]{TGQBook}), so we may assume that $K$ is not abelian. By Proposition 3.1 of Hachenberger \cite{H}, it follows that $Z(K) = \mathcal{S}$. \\ Write $H := K$, and let $C \ne D$ be in $\mathcal{F}$; then $[H,H] = [C,D]$ (as $H = CD\mathcal{S}$) $= \langle (cd)^2 \vert c \in C, d \in D \rangle$. As this group is a subgroup of $\mathcal{S}$, it follows that $[H,H]$ is elementary abelian. As the Frattini subgroup of the $2$-group $H$ is $\Phi = H^2[H,H]$, it easily follows that $\Phi$ is the elementary abelian subgroup of $\mathcal{S}$ of all squares. On the other hand, if $s = \gamma^2 \in \mathcal{S}$ is a square, there are $e \in E \in \mathcal{F}$ and $c \in \mathcal{S}$ such that $\gamma = ec$ (as $H = \cup_{A \in \mathcal{F}}A^*$), and $s = ecec = e^2c^2 = c^2$. So $\Phi = \mathcal{S}^2$. Also, for $A \in \mathcal{F} \setminus \{ E\}$, there is a unique $B \in \mathcal{F}$ such that $\gamma \in AB$; write $\gamma = ab$ with $a \in A$ and $b \in B$. Then $(ab)^2 = \gamma^2 = s$, so that $s \in [H,H]$. We have obtained that $[H,H] = \mathcal{S}^2 = \Phi$.\\ Let $A \ne B$ be arbitrary in $\mathcal{F}$. Now let $\alpha \in A^\times$ be arbitrary, and let $\beta \in B^\times$ be such that $[\alpha,\beta] = \mathrm{id}$. Put $\mathcal{F} \setminus \{A,B\} = \{C_1,\ldots,C_{t - 1}\}$. Then for each $i \in \{1,\ldots,t - 1\}$ there is precisely one triple $(c_i,\beta_i,s_i) \in C_i \times B \times \mathcal{S}$ such that $\alpha\beta = c_i\beta_is_i$. Note that the maps $\mu: \{ 1,\ldots,t - 1\} \longrightarrow \mathcal{S}^\times: j \longmapsto s_j$ and $\mu': \{ 1,\ldots,t - 1\} \longrightarrow B \setminus \{\beta\}: j \longmapsto \beta_j$ are surjective. Suppose that $s_e^2 = \mathrm{id}$; then as $(\alpha\beta)^2 = \mathrm{id}$, $[c_e,\beta_e] = \mathrm{id}$, and so $[\beta_e,c_e\beta_e s_e] = \mathrm{id} = [\beta_e,\alpha\beta]$, implying that $[\alpha,\beta_e] = \mathrm{id}$. The converse is also true.
So \begin{equation} \vert \{\mbox{involutions in}\ \mathcal{S}\} \vert\ (\mbox{including} \ \mathrm{id}) = \vert C_B(\alpha)\vert =: \ell. \end{equation} On the other hand we have $t = \vert \alpha^B \vert \times \vert C_B(\alpha) \vert$, so for any such $\alpha$, the size of $\alpha^K$ (which is the same as $\alpha^B$) is a constant ($\frac{t}{\ell}$) independent of the choice of $\alpha$. As $\alpha^K = \alpha^B$, it also follows that $\alpha$ fixes precisely $\vert C_B(\alpha) \vert = \ell$ points of $[A] \setminus \{x\}$ linewise. Note at this point that if we prove that $\mathcal{S}$ is an elementary abelian $2$-group, then each $A^*$ is also elementary abelian, and as $K$ is covered by the $A^*$s, it follows that $K$ itself is elementary abelian, contradicting the assumption that $K$ has exponent $4$. In the rest of this paper, we will denote the subgroup of $\mathcal{S}$ that consists of all its involutions, by $\mathcal{I}$. \subsection{Extra info on $\mathcal{I}$ and $\ell$} Keeping the same notation as before, we recall that $[H,H] = [C,D] = \langle (cd)^2 \ \vert\ c \in C, d \in D\rangle$. Each $(cd)^2 = cdcd = [c,d]$ is an element in $\mathcal{S}$ which obviously is an involution if it is nontrivial, so $[H,H] \leq \mathcal{I}$. It follows that \begin{equation} \ell = \vert \mathcal{I} \vert\ \geq\ \Big\vert [H,H] \Big\vert = t/\ell, \end{equation} so that $\ell \geq \sqrt{t}$. \medskip \section{A new subquadrangle lemma} \label{secsubGQ} In this section, we will prove an extremely useful lemma which produces subquadrangles from subplanes in a derived substructure. We use essentially the same notation as before, but the setting is more general. In \cite{notenet} such results were already obtained in less general circumstances, but without the presence of elation groups. Let us first define the basic object of our interest. Let $\Gamma$ be a thick finite GQ of order $(t,t)$. Suppose $\Gamma$ has some regular point $x$.
Then an affine plane $\pi(x)$ of order $t$ can be constructed as follows. Its points are sets $\{u,v\}^{\perp\perp}$, where $u$ and $v$ are noncollinear points in $x^{\perp}$. The lines are the points in $x^{\perp} \setminus \{x\}$. Incidence is natural. The parallel classes correspond to the lines incident with $x$ (so we can see $x$ as the line at infinity of $\pi(x)$). In \cite{notenet}, the same structure is considered in a quadrangle of order $(s,t)$ (one then gets a {\em net} instead of an affine plane), and subplanes of order $t$ are used to construct subquadrangles of order $(t,t)$. Such results are obviously highly applicable, but they are of no use in the present setting. On the other hand, if $\Gamma$ is as in the previous section, then $\pi(x)$ is a translation plane with translation group $K/\mathcal{S}$, and translation planes always have proper subplanes, if the order is not a prime. Those can be small (for instance, defined over the kernel in the projective representation of the translation plane). So in stark contrast with the setting of \cite{notenet}, such planes in general have a much smaller number of parallel classes than the translation plane itself. In terms of group theory, we want to consider the group extension \begin{equation} 1 \to \mathcal{S} \to K \to T \to 1, \end{equation} and a subplane $\pi'$ in $\pi(x)$ with translation group $T' \leq T$; the objective then is to define a subGQ $\underline{\Gamma}$ of $\Gamma$ which also has $x$ as a regular point, and which induces the translation plane $\pi'$. Of course, we know that $K$ contains a subgroup $\widehat{K}$ containing $\mathcal{S}$ such that $\widehat{K}/\mathcal{S} = T'$, but that group will not do in general (as such groups need not produce subquadrangles), as the reader will see below.
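The construction of $\pi(x)$ can be checked computationally. Below is a sketch of ours (using Sylvester's duad--syntheme model of $\mathcal{W}(2)$, in which every point is regular; this model is an illustration, not part of the paper): for $t = 2$ one indeed obtains the affine plane of order $2$, with one parallel class per line of $\Gamma$ through $x$:

```python
from itertools import combinations

points = [frozenset(d) for d in combinations(range(6), 2)]      # duads
lines = [set(L) for L in combinations(points, 3)
         if len(frozenset().union(*L)) == 6]                    # synthemes

def sim(p, q):                      # collinearity, with p ~ p
    return p == q or any(p in L and q in L for L in lines)

def perp(S):
    return frozenset(p for p in points if all(sim(p, y) for y in S))

x = frozenset({0, 1})               # a regular point of W(2)
plane_lines = [p for p in points if p != x and sim(p, x)]       # x^perp \ {x}
plane_points = {perp(perp({u, v}))                              # spans {u,v}^perpperp
                for u, v in combinations(plane_lines, 2) if not sim(u, v)}

# affine plane of order 2: 4 points, 6 lines, 3 lines per point,
# 2 points per line, two points joined by exactly one line
assert len(plane_points) == 4 and len(plane_lines) == 6
assert all(len(P) == 3 for P in plane_points)
assert all(sum(w in P for P in plane_points) == 2 for w in plane_lines)
for P, Q in combinations(list(plane_points), 2):
    assert len(P & Q) == 1
# parallel classes: each plane-point meets every GQ-line through x exactly once
for P in plane_points:
    joins = {frozenset(L) for w in P for L in lines if x in L and w in L}
    assert len(joins) == 3
```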
\begin{lemma}[Subquadrangle Lemma] \label{subGQ} Let $(\Gamma,K)$ be a thick finite elation generalized quadrangle of order $(t,t)$ with elation group $K$ and regular elation point $x$. If $\pi'$ is an affine subplane of the affine plane $\pi(x)$, which is a translation plane of order $r$ with translation group induced by the translation group of $\pi(x)$, then $\Gamma$ contains subGQs of order $(r,r)$ which are also elation quadrangles with as elation group a subgroup of $K$, and with regular elation point $x$. \end{lemma} {\em Proof}.\quad First of all, note that as $x$ is a regular point, $x$ is a center of symmetry (see \cite{STGQ} as a general reference on the structure of STGQs); denote the corresponding group of symmetries by $\mathcal{S}$ (and note that $\mathcal{S}$ is a subgroup of $K$). Then $K/\mathcal{S} =: T$ naturally is a translation group for $\pi(x)$. Suppose $\pi'$ is as in the statement, and let the corresponding translation group be $T' \leq T$. Let $z \not\sim x$ be arbitrary but fixed, and construct the Kantor family $(\mathcal{F},\mathcal{F}^*)$ with respect to this point. We denote the lines on $z$ by $U_i$ (indexed by a set $S$ of size $t + 1$), and put $K_{U_i} := A_i$. Put $\mathrm{proj}_{U_i}x = a_i$. We use the notation $[A_i]$ both for the line of $\Gamma$ incident with $x$ and $a_i$, and also for the corresponding point at infinity of $\pi(x)$. To $z$ corresponds a point $\{x,z\}^{\perp\perp}$ of $\pi(x)$. Without loss of generality, we suppose that this is also a point of $\pi'$. With respect to $\{x,z \}^{\perp\perp}$, the translation plane $\pi(x)$ is defined by the congruence partition (see \cite[chapter VII, section 3]{HP}) $\{ T_i := A_i/\mathcal{S}\ \vert\ i \in S \}$, where $A_i/\mathcal{S} \cong A_i\mathcal{S}/\mathcal{S}$ for each $i$. One notes that $A_i \cap \mathcal{S} = \{ \mathrm{id}\}$.
Now let $S' \subseteq S$ be such that the indexes in $S'$ correspond to the points at infinity of $\pi'$; then the congruence partition of $\pi'$ is given by \begin{equation} \{ T'_i := T_i \cap T' \ \vert\ i \in S' \}. \end{equation} For each $j \in S'$, define $A'_j$ as the subgroup of $A_j$ defined by \begin{equation} A'_j/\mathcal{S} = T'_j \cong A'_j\mathcal{S}/\mathcal{S}. \end{equation} Now consider the subgroup \begin{equation} K' := \Big\langle A'_j\ \vert\ j \in S' \Big\rangle \end{equation} of $K$, and let $\mathcal{S}'$ be the kernel of the action of $K'$ on the plane $\pi'$; then obviously $\mathcal{S}'$ is a subgroup of $\mathcal{S}$. For each $j \in S'$, define ${A_j'}^*$ as $A'_j\mathcal{S}'$. Define $\mathcal{F}' := \{ A'_j\ \vert\ j \in S'\}$ and ${\mathcal{F}'}^* := \{ {A'_j}^*\ \vert\ j \in S' \}$. \\ \ul{Claim: $(\mathcal{F}',{\mathcal{F}'}^*)$ is a Kantor family with parameters $(r,\vert \mathcal{S}' \vert)$}.\quad Put $\vert \mathcal{S}' \vert = \sigma$. First note that $\vert K' \vert = r^2\sigma$ by its mere definition. Also, note that $\sigma \ne 1$, as otherwise $\mathcal{F}'$ defines a congruence partition in $K'$, which would yield triangles in $\Gamma$. By definition of the elements in $\mathcal{F}'$ and ${\mathcal{F}'}^*$, we have, for each $j \in S'$, that $\vert A'_j \vert = r$ and $\vert {A'_j}^* \vert = r\sigma$. All the other required properties follow from the fact that for each such $j$, $A'_j \leq A_j$ and ${A'_j}^* \leq A_j^*$. The claim is proved. \\ It follows that the Kantor family $(\mathcal{F}',{\mathcal{F}'}^*)$ defines a subquadrangle $\Gamma'$ of order $(r,\sigma)$ of $\Gamma$ that contains $x$ and $z$, and that $K' \leq K$ is an elation group for the elation point $x$. Also, since for all $m \ne n$ in $S'$ we have that \begin{equation} {A'_m}^* \cap {A'_n}^* = \mathcal{S}', \end{equation} it easily follows that $x$ is a regular point of $\Gamma'$ with group of symmetries $\mathcal{S}'$ (cf. section \ref{STGQover}). 
\hspace*{\fill}{\footnotesize$\blacksquare$} \\ Note that in general, $\mathcal{S}$ is not a subgroup of $K'$ (cf. the discussion before the statement of the subGQ lemma). \\ Since $\pi(x)$ is a translation plane of order $t$, or since $\Gamma$ is an EGQ of order $(t,t)$, we know that $t$ is the power of a prime $p$, cf. \cite{Order}. The next corollary uses this fact. \begin{corollary}[Existence of classical subquadrangles] \label{classubGQ} Use the notation of Lemma \ref{subGQ}. Put $t = p^h$ with $p$ a prime. Then $\Gamma$ contains subquadrangles of order $(p,p)$ isomorphic to the symplectic quadrangle $\mathcal{W}(p)$, which enjoy all the properties of Lemma \ref{subGQ}. \end{corollary} {\em Proof}.\quad Any translation plane of order $t = p^h$ contains sub-translation planes of order $p$ (which is easily seen in the Andr\'{e}-Bruck-Bose representation of the plane in a projective space over $\mathbb{F}_p$; see \cite[Theorem 1.5 and consequent remark]{Knarr}). The corresponding subGQs of order $(p,\sigma)$ which arise from the construction of Lemma \ref{subGQ}, are EGQs with a regular point $x$. Since $p$ is prime, we have that such subGQs are classical by Bloemen, Thas and Van Maldeghem \cite{bloem}, and since $x$ is regular, we have that such subGQs are indeed isomorphic to $\mathcal{W}(p)$ by {\em loc. cit.} \hspace*{\fill}{\footnotesize$\blacksquare$} \\ \begin{remark} {\rm Lemma \ref{subGQ} can to some extent be generalized to elation generalized quadrangles of general order $(s,t)$ with a regular point, and can also be adapted to the infinite case. We will do this in the forthcoming paper \cite{STGQ2}. This is of no concern here. } \end{remark} \medskip \section{Proof of the first main result} \label{first} We keep using the notation of section \ref{sett} and section \ref{prep}. Let $a \in A \in \mathcal{F}$, and suppose $a \ne \mathrm{id}$. Then $a$ fixes precisely $\ell + 1$ points incident with $[A]$ linewise.
Let $v$ be a point incident with $[A]$ which is not such a point. Consider the translation plane $\pi(x)$. Applying the Andr\'{e}-Bruck-Bose construction (see \cite[Theorem 1.5 and consequent remark]{Knarr}) over $\mathbb{F}_2$, we represent $\pi(x)$ in a projective space $\mathbb{P}^{2h}(\mathbb{F}_2)$, where $t = 2^h$. So we have a set $S$ of $t + 1$ mutually disjoint $(h - 1)$-spaces in some hyperplane $\kappa$ of $\mathbb{P}^{2h}(\mathbb{F}_2)$, and the GQ-points in $x^{\perp} \setminus \{x\}$, which are lines of $\pi(x)$, correspond to the $h$-spaces in $\mathbb{P}^{2h}(\mathbb{F}_2)$ which intersect $\kappa$ in an element of $S$. The GQ-sets of type $\{u,v\}^{\perp\perp}$ with $u$ and $v$ noncollinear GQ-points in $x^{\perp}$, which are the points of $\pi(x)$ not incident with the translation line at infinity, correspond to the $\mathbb{F}_2$-points of $\mathbb{P}^{2h}(\mathbb{F}_2) \setminus \kappa$. The GQ-point $x$, which is the translation line of $\pi(x)$, corresponds to the set $S$, and the GQ-lines incident with $x$, which are the points of the translation line of $\pi(x)$, correspond to the elements of $S$. The translation group $T = K/\mathcal{S}$ corresponds to the group of translations of $\mathbb{P}^{2h}(\mathbb{F}_2)$ with axis $\kappa$; it is an elementary abelian $2$-group of size $t^2 = 2^{2h}$. Suppose $U \mathbf{I} z$ is the line of $\Gamma$ for which $K_U = A$. Let $U \cap [A]$ be $u$. Let $\delta \in S$ be the element corresponding to $[A]$. Let $\alpha$ be the $h$-space in $\mathbb{P}^{2h}(\mathbb{F}_2)$ corresponding with $u$, and let $\beta$ be the $h$-space corresponding with $v$; we have that $\alpha \cap \beta = \delta$. Consider the translation of $\mathbb{P}^{2h}(\mathbb{F}_2)$ which $a$ induces on $\mathbb{P}^{2h}(\mathbb{F}_2)$; it is an involution with center a point $c \in \delta$ (we also denote it by $a$). 
Let $L$ be any line (a $\mathbb{P}^1(\mathbb{F}_2)$) in $\alpha$ which is not contained in $\kappa$, and which is incident with $c$; then $L^a = L$. Now consider a line $L'$ in $\beta$ but not contained in $\kappa$, which contains $c$ as well, and such that the projective plane $\Big\langle L,L' \Big\rangle$ meets $\kappa$ in a line which is not contained in $\delta$. Note that such a line $L'$ exists: just let it vary in the $h$-space $\beta$: then \begin{equation} f: L' \ \mapsto\ \Big\langle L,L' \Big\rangle \cap \kappa \end{equation} is an injection, and as $\mathrm{dim}(\delta) = \mathrm{dim}(\beta) - 1$, the existence follows. Note that $a$ fixes $\Big\langle L,L' \Big\rangle$. Applying Lemma \ref{subGQ} and Corollary \ref{classubGQ}, we conclude that $\Gamma$ has a subquadrangle $\underline{\Gamma}$ isomorphic to $\mathcal{W}(2)$ which contains $x$, $u$ and $v$ (a GQ of order $(2,2)$ is isomorphic to $\mathcal{W}(2)$; see \cite[chapter 6]{PT}), and for which $a$ is an element in its elation group. Now $a$ fixes these points, and also the $\mathcal{W}(2)$-lines on $x$, as well as the $\mathcal{W}(2)$-lines on $u$ (since $a$ fixes $u$ linewise). Let $V \mathbf{I} v$ in $\underline{\Gamma}$, $V \ne [A]$. Let $U' \mathbf{I} u$ also be in $\underline{\Gamma}$ with $U' \ne [A]$; then $\{ V,U' \}$ is regular in $\underline{\Gamma}$ (by \cite[\S 3.2.1 and \S 3.3]{PT}). Let $W$ and $X$ be the lines of $\underline{\Gamma}$ in $\{ V,U' \}^{\perp} \setminus \{ [A]\}$; it is clear that $W^a = X$ and $X^a = W$, as $a$ fixes the lines in $\underline{\Gamma}$ on $u$ and $x$, and as $a$ fixes $\Big\langle L, L' \Big\rangle$. Since $V$ meets both $W$ and $X$, and since $v^a = v$, we conclude that $V^a = V$, contradiction, as $a$ does not fix any line on $v$ different from $[A]$, by assumption. It follows that $\ell = t$, that is, $a$ is a symmetry with axis $[A]$.
Since the choice of $A$ and $a \in A^{\times}$ was arbitrary, each line incident with $x$ is an axis of symmetry, and hence $\Gamma$ is indeed a TGQ with translation group $K$. But then $K$ is an elementary abelian $2$-group by \cite[Theorem 3.4.2]{TGQBook}, contradicting the assumption that $\Gamma$ was a member of class (3) in Theorem \ref{Fro88}. (Alternatively: since $a$ is a symmetry with axis $[A]$, one observes that $[a,K] = \{\mathrm{id}\}$, so that $a \in Z(K) \setminus \mathcal{S}$, and this contradicts the fact that $Z(K) = \mathcal{S}$; see \S \ref{cent}.) \\ This concludes the proof of the main result. \hspace*{\fill}{\footnotesize$\blacksquare$} \\ \begin{remark} {\rm We could also have directly proved, relying on the fact that $s = t$ and using Lemma \ref{subGQ}, that each $a \in A$, with $A \in \mathcal{F}$ arbitrary, is a symmetry with axis $[A]$. (On the other hand, once the work in section \ref{prep} is done, the ``indirect way'' is very fast.) We will illustrate this idea in detail in the next section. } \end{remark} \medskip \section{STGQs of order $(t,t)$ with $t$ even are TGQs} \label{sol} We now turn to the case of STGQs of order $(t,t)$ with $t$ even. The main difference with the setting of Frohardt's question is that we know a priori now that the parameters of the quadrangle are of the form $(t,t)$ (with $t$ even), but we do not know that $\mathcal{S}$ is a subgroup of the center of $K$. So a lot of information is lost in comparison with our analysis in Frohardt's setting. The main theorem of this final section tells us that there is no ``proper'' theory of STGQs if the order is $(t,t)$, $t$ even. \begin{theorem} \label{evensq} Let $(\Gamma,K)$ be a skew translation quadrangle with elation point $x$, of order $(t,t)$, where $t$ is even. Then $(\Gamma,K)$ is a translation generalized quadrangle. In particular, $K$ is an elementary abelian $2$-group.
\end{theorem} {\em Proof}.\quad Since $(\Gamma,K)$ is an STGQ, we know that $x$ is regular, and that $\pi(x)$ is a translation plane with translation group $T := K/\mathcal{S}$, where $\mathcal{S}$ is the group of symmetries with center $x$. We use the exact same notation as in the previous section. So $A = K_U$, $U$ is a line meeting $[A]$ in a point $u$ different from $x$. Now $v$ is {\em any} point incident with $[A]$ and different from $x$ and $u$, and $a \in A^\times$; also, $c$ denotes the center of $a$ as before. Etc. The $(h + 1)$-space $\Big\langle \alpha,\beta \Big\rangle$ meets $\kappa$ in an $h$-space which contains $\delta$, so it meets any other element of $S$ in precisely one point. For $W \in S \setminus \{ \delta \}$, we will denote this point by $w$. We fix the line $L$ in $\alpha$, and consider the plane $\Big\langle cw,L \Big\rangle$; this plane is contained in $\Big\langle \alpha,\beta \Big\rangle$ and meets $\beta$ in a line $\omega(W)$ which is not contained in $\delta$. Applying Lemma \ref{subGQ} and Corollary \ref{classubGQ}, we conclude that $\Gamma$ has a subquadrangle $\underline{\Gamma}$ isomorphic to $\mathcal{W}(2)$ which contains $x$, $u$ and $v$, and also the GQ-line $\widehat{W}$ on $x$ that corresponds to $W$. Also, the elation group $K'$ as defined in Lemma \ref{subGQ} contains the element $a$ by construction, and the line $U$ is also a line of $\underline{\Gamma}$. Let $\widetilde{V} \mathbf{I} v$ be arbitrary, but different from $[A]$. Since the line $\widehat{W} \mathbf{I} x$ is arbitrary, it follows that we can find such a subGQ with the same properties as before, and containing $\widetilde{V}$. Since $a$ fixes $U$, we now conclude as in the previous section that $\widetilde{V}$ is also fixed by $a$. So $a$ fixes every line in $[A]^\perp$ which is not incident with $u$. Now we can take $U$ to be some line in this set to conclude that $a$ is a symmetry with axis $[A]$. 
So as in the previous section, we finally conclude that $(\Gamma,K)$ is a TGQ, and $K$ is an elementary abelian $2$-group. \hspace*{\fill}{\footnotesize$\blacksquare$} \\ The following corollary combines Theorem \ref{evensq} and Theorem \ref{oddsq}. \begin{corollary}[Classification of STGQs of order $(t,t)$] A finite STGQ of order $(t,t)$ is either isomorphic to the symplectic quadrangle $\mathcal{W}(t)$, or is a translation generalized quadrangle. \hspace*{\fill}{\footnotesize$\blacksquare$} \end{corollary} \newpage
\section{Introduction} This paper can be considered as a natural continuation of the investigations started in \cite{lamis} and \cite{lamisii}, where we started to build the theory of means on infinite sets. An ordinary mean calculates the mean of two (or finitely many) numbers. This can be extended in many ways in order to get a more general concept where we have a mean on some infinite bounded subsets of $\mathbb{R}$. The various general properties of such means, and the relations among those means, were studied thoroughly in \cite{lamis} and \cite{lamisii}. In this paper our main aim is to study means whose domain may contain unbounded sets as well. First we investigate the general properties of such means. We also study already known properties for this type of mean, and present new attributes as well. Then we examine how a mean defined on bounded sets can be extended to a mean that is defined also on some unbounded sets. We check which properties of the original mean are inherited by the extension. We also present many new examples for means defined on unbounded sets, and we find natural generalizations for some classic means in order to get a mean defined on some unbounded sets as well. Finally we analyse the behavior of one of the most important generic means ${\cal{M}}^{\mu}\ (Avg)$ regarding unbounded measurable sets. \subsection{Basic notions and notations} Let us recall some very basic notions from \cite{lamis} and \cite{lamisii}. \medskip We call ${\cal{K}}$ an \textbf{ordinary mean} if it calculates the mean ${\cal{K}}(a_1,\dots,a_n)$ of finitely many numbers $a_1,\dots,a_n\in\mathbb{R}$. \smallskip A \textbf{generalized mean} is a function ${\cal{K}}:C\to \mathbb{R}$ where $C\subset P(\mathbb{R})$ consists of some (finite or infinite) subsets of $\mathbb{R}$ and $\inf H\leq {\cal{K}}(H)\leq\sup H$ holds for all $H\in C$.
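For instance (a toy example of ours, not one of the means studied in \cite{lamis}), the midpoint rule ${\cal{K}}(H)=(\inf H+\sup H)/2$ is a generalized mean on the family of bounded sets; restricted to finite sets:

```python
# midpoint mean K(H) = (inf H + sup H) / 2, here on finite (hence bounded) sets
def midpoint(H):
    return (min(H) + max(H)) / 2

H = {1.0, 2.0, 7.0}
m = midpoint(H)
assert min(H) <= m <= max(H)    # internality: inf H <= K(H) <= sup H
print(m)                        # 4.0
```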
\smallskip A mean $\cal{K}$ is called \textbf{monotone} if $\sup H_1\leq\inf H_2$ implies that ${\cal{K}}(H_1)\leq {\cal{K}}(H_1\cup H_2)\leq {\cal{K}}(H_2)$. ${\cal{K}}$ is \textbf{base-monotone} if $H_1,H_2\in Dom({\cal{K}})$ and $H_1\cap H_2=\emptyset$ imply that $\min\{{\cal{K}}(H_1),{\cal{K}}(H_2)\}\leq{\cal{K}}(H_1\cup H_2)\leq\max\{{\cal{K}}(H_1),{\cal{K}}(H_2)\}.$ ${\cal{K}}$ is \textbf{part-slice-continuous} if $H_1,H_2\in Dom({\cal{K}})$ implies that $H_2^{\varliminf H+},H_2^{\varlimsup H-}\in Dom({\cal{K}})$ and that $f(x)={\cal{K}}(H_1\cup H_2^{x-})$ and $g(x)={\cal{K}}(H_1\cup H_2^{x+})$ are continuous, where $Dom(f)=\{x:H_1\cup H_2^{x-}\in Dom({\cal{K}})\}$ and $Dom(g)=\{x:H_1\cup H_2^{x+}\in Dom({\cal{K}})\}$. $\cal{K}$ is \textbf{finite-independent} if $H$ being infinite implies that ${\cal{K}}(H)={\cal{K}}(H\cup V)={\cal{K}}(H-V)$ where $V$ is any finite set. \medskip Throughout this paper $\lambda$ will denote the Lebesgue measure. \begin{df}(cf. \cite{lambm} Def 2.1) Let $\mu$ be a Borel measure on $\mathbb{R}$ and let $H\subset\mathbb{R}$ be a $\mu$-measurable set such that $0<\mu(H)<+\infty$. Then $${\cal{M}}^{\mu}(H)=\frac{\int\limits_Hx\, d\mu}{\mu(H)}$$ is a mean defined on unbounded subsets as well. \end{df} We get a special case for Hausdorff measures. \begin{df}\label{davg}Let $\mu^s$ denote the $s$-dimensional Hausdorff measure ($0\leq s\leq 1$). If $0<\mu^s(H)<+\infty$ (i.e. $H$ is an $s$-set) and $H$ is $\mu^s$-measurable then $$Avg(H)=\frac{\int\limits_H x\ d\mu^s}{\mu^s(H)}.$$ For $0\leq s\leq 1$ set $Avg^s=Avg|_{\{\text{measurable s-sets}\}}$. E.g. $Avg^1$ is $Avg$ on all Lebesgue measurable sets with positive finite measure. \end{df} If $H\subset\mathbb{R},x\in\mathbb{R}$ then set $H+x=\{h+x:h\in H\}$. Similarly $\alpha H=\{\alpha h:h\in H\}\ (\alpha\in\mathbb{R})$. We use the convention that the operation $+$ is applied prior to the set-theoretical operations, e.g. $H\cup K\cup L+x=H\cup K\cup (L+x)$.
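As a concrete illustration of Definition \ref{davg} in the simplest case $s=1$: for a finite disjoint union of intervals, $Avg^1$ reduces to elementary arithmetic, since $\int_{[a,b]}x\,d\lambda=\frac{b^2-a^2}{2}$. The following minimal sketch (our own illustrative code, not part of the theory) computes it:

```python
# Illustrative sketch: Avg^1 on a finite disjoint union of intervals
# [a_i, b_i], using  integral over [a,b] of x dlambda = (b^2 - a^2)/2.
def avg1(intervals):
    numerator = sum((b * b - a * a) / 2.0 for a, b in intervals)
    measure = sum(b - a for a, b in intervals)  # total Lebesgue measure
    return numerator / measure
```

For instance $Avg^1([0,1])=\frac12$ and $Avg^1([0,1]\cup[2,3])=\frac{1/2+5/2}{2}=\frac32$; shifting the set by $x$ shifts the value by $x$, in line with shift-invariance.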
\smallskip The extended real line is $\bar{\mathbb{R}}=\mathbb{R}\cup\{-\infty,+\infty\}$ equipped with the usual topology: the neighbourhoods of $+\infty$ are $\{(k,+\infty]:k\in\mathbb{R}\}$ and similarly $\{[-\infty,k):k\in\mathbb{R}\}$ for $-\infty$. For $K\subset\mathbb{R},\ y\in\bar{\mathbb{R}}$ let us use the notations $$K^{y-}=K\cap(-\infty,y],\quad K^{y+}=K\cap[y,+\infty),\quad K^{+\infty-}=K^{-\infty+}=K.$$ If $x>y$ then let $[x,y]$ denote the interval $[y,x]$. \smallskip Let us define some usual operations and relations with $\pm\infty$: $(+\infty)+(+\infty)=+\infty,\ (-\infty)+(-\infty)=-\infty$, and if $r\in\mathbb{R}$ then $r+(+\infty)=+\infty,\ r+(-\infty)=-\infty,\ -\infty<r<+\infty$. \smallskip $cl(H)$ and $H'$ will denote the closure and the set of accumulation points of $H\subset\mathbb{R}$ respectively. Let $\varliminf H=\inf H',\ \varlimsup H=\sup H'$ for $H\subset\mathbb{R},\ H'\ne\emptyset$. \smallskip Usually ${\cal{K}},{\cal{M}}$ will denote means, and $Dom({\cal{K}})$ denotes the domain of ${\cal{K}}$. \section{Properties of means defined on unbounded sets} In the sequel ${\cal{K}}$ is always a mean defined on some unbounded sets as well, rather than on bounded sets only. \medskip If a mean is defined on some unbounded sets then we require the usual basic property, internality, that is $$\inf H\leq {\cal{K}}(H)\leq\sup H.$$ However, almost always we require the stronger condition, strong internality: $\varliminf H\leq {\cal{K}}(H)\leq\varlimsup H$ when $H'\ne\emptyset$. \medskip The properties of $Dom\ {\cal{K}}$ that we require are: $Dom\ {\cal{K}}$ must be closed under finite union and intersection, and if $H\in Dom\ {\cal{K}}$ and $I$ is an interval (finite or infinite) then $H\cap I\in Dom({\cal{K}})$ whenever $H\cap I\ne\emptyset$.
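The slice notation $K^{y-}=K\cap(-\infty,y]$ and $K^{y+}=K\cap[y,+\infty)$ introduced above is used constantly in what follows. As a minimal sketch (our own model, not part of the theory, representing a set as a finite list of closed intervals), the two operations can be written as:

```python
# Hypothetical helpers mirroring the slice notation on a finite union
# of closed intervals: K^{y-} = K cap (-inf, y], K^{y+} = K cap [y, +inf).
def slice_minus(intervals, y):
    # keep the part of each interval lying at or below y
    return [(a, min(b, y)) for a, b in intervals if a <= y]

def slice_plus(intervals, y):
    # keep the part of each interval lying at or above y
    return [(max(a, y), b) for a, b in intervals if b >= y]
```

E.g. slicing $[0,1]\cup[2,3]$ at $y=2.5$ gives $[0,1]\cup[2,2.5]$ and $[2.5,3]$. A cut point falling exactly on an endpoint leaves a degenerate one-point interval, which has Lebesgue measure zero and is harmless for $Avg^1$.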
\medskip For means on bounded sets we are used to the fact that it can happen that $\forall h\in H\ h<{\cal{K}}(H)$, or the other way around, $\forall h\in H\ h>{\cal{K}}(H)$ (when ${\cal{K}}(H)=\sup H\not\in H$ or ${\cal{K}}(H)=\inf H\not\in H$ respectively). The same scenario can occur for means on unbounded sets too, i.e. ${\cal{K}}(H)$ can be either $+\infty$ or $-\infty$ even though $H\subset\mathbb{R}$ (i.e. $\pm\infty\not\in H$). \begin{df}Let $H\in Dom\ {\cal{K}}$ be unbounded. We call $H$ essentially unbounded above regarding ${\cal{K}}$ if $\forall x\in\mathbb{R}\ \exists y\in\mathbb{R}$ such that ${\cal{K}}(H^{x-})<{\cal{K}}(H^{y-})$. We call $H$ essentially unbounded below regarding ${\cal{K}}$ if $\forall x\in\mathbb{R}\ \exists y\in\mathbb{R}$ such that ${\cal{K}}(H^{x+})>{\cal{K}}(H^{y+})$. We call $H$ essentially unbounded regarding ${\cal{K}}$ if it is essentially unbounded both above and below. \end{df} Now we enumerate some properties of means defined on unbounded sets as well, which we refer to and analyze later. \begin{itemize} \item ${\cal{K}}$ is \textbf{i-strong-internal} if $H\in Dom({\cal{K}})$ and $H'\ne\emptyset$ imply that $\inf (H'-\{-\infty\})\leq {\cal{K}}(H)\leq\sup (H'-\{+\infty\})$. \item ${\cal{K}}$ is \textbf{slice-continuous} if $H\in Dom({\cal{K}})$ implies that $\forall y\in\mathbb{R}\ f_y(x)={\cal{K}}(H\cap[x,y])$ is continuous on the extended real line $\bar{\mathbb{R}}$, where $Dom( f_y)=\{x:H\cap[x,y]\ne\emptyset\}$ (cf. \cite{lamisii} 2.5). \item ${\cal{K}}$ is \textbf{i-slice-continuous} if $H\in Dom({\cal{K}})$ implies that $f(x)={\cal{K}}(H^{x-})$ and $g(x)={\cal{K}}(H^{x+})$ are continuous on the extended real line $\bar{\mathbb{R}}$, where $Dom( f)=\{x:H^{x-}\ne\emptyset\},\ Dom(g)=\{x:H^{x+}\ne\emptyset\}$. \item \textbf{The bounded sets are small for sets with infinite mean} if the following holds: if $K\in Dom\ {\cal{K}}$ is bounded and ${\cal{K}}(H)=+\infty$ then ${\cal{K}}(H\cup K)=+\infty$, and similarly if ${\cal{K}}(H)=-\infty$ then ${\cal{K}}(H\cup K)=-\infty$.
\item $K\in Dom\ {\cal{K}}$ is said to be t-infinite regarding ${\cal{K}}$ if, whenever $H\subset\mathbb{R}$ is bounded and $H\cup K+x\in Dom\ {\cal{K}}$, we have $\lim\limits_{x\to+\infty}{\cal{K}}(H\cup K+x)=+\infty,\ \lim\limits_{x\to-\infty}{\cal{K}}(H\cup K+x)=-\infty$. ${\cal{K}}$ is \textbf{interval-infinite} if every non-degenerate finite interval $I$ is t-infinite regarding ${\cal{K}}$. \item $K\in Dom\ {\cal{K}}$ is said to be t-continuous regarding ${\cal{K}}$ if, whenever $H,H\cup K+x\in Dom\ {\cal{K}}$, the function $x\mapsto{\cal{K}}(H\cup K+x)$ is continuous. ${\cal{K}}$ is called \textbf{interval-continuous} if every non-degenerate finite interval $I$ is t-continuous regarding ${\cal{K}}$. \item ${\cal{K}}$ is called \textbf{finite} if ${\cal{K}}(H)$ is finite for all $H\in Dom\ {\cal{K}}$. \item ${\cal{K}}$ is called \textbf{subset-finite} if $H,K\in Dom\ {\cal{K}},\ |{\cal{K}}(H)|<+\infty$ and $K\subset H$ imply that $|{\cal{K}}(K)|<+\infty$. \item ${\cal{K}}$ is called \textbf{bounded-finite} if $|{\cal{K}}(H)|<+\infty$ and $K\in Dom\ {\cal{K}}$ being bounded imply that $|{\cal{K}}(H\cup K)|<+\infty$. \item $H\in Dom\ {\cal{K}}$ is called limit-finite for ${\cal{K}}$ if $\lim\limits_{x\to+\infty}\big({\cal{K}}(H^{x+})-\inf H^{x+}\big)=\lim\limits_{x\to-\infty}\big({\cal{K}}(H^{x-})-\sup H^{x-}\big)=0$. ${\cal{K}}$ is called \textbf{limit-finite} if all $H\in Dom\ {\cal{K}}$ with $|{\cal{K}}(H)|<+\infty$ are limit-finite. \item ${\cal{K}}$ is called \textbf{strong-base-monotone} if it is base-monotone and the following holds. Let $H_1,H_2,K\in Dom\ {\cal{K}}$ be bounded sets such that $H_1\subset H_2,\ H_2\cap K=\emptyset$. By base-monotonicity there are constants $0\leq c,d,c',d'\leq 1$ such that $c+d=c'+d'=1$ and ${\cal{K}}(H_1\cup K)=c\cdot {\cal{K}}(H_1)+d\cdot {\cal{K}}(K),\ {\cal{K}}(H_2\cup K)=c'\cdot {\cal{K}}(H_2)+d'\cdot {\cal{K}}(K).$ If ${\cal{K}}(H_1\cup K)={\cal{K}}(H_1)={\cal{K}}(K)$ then let $c=0$. Strong-base-monotonicity requires that $c\leq c'$ holds.
\end{itemize} \begin{prp}Let ${\cal{K}}$ be i-slice-continuous and let the bounded sets be small for sets with infinite mean. Let $H$ be unbounded such that $H^{0-},H^{0+}\in Dom\ {\cal{K}}$ and ${\cal{K}}(H^{0-})=-\infty,\ {\cal{K}}(H^{0+})=+\infty$. Then $H\not\in Dom\ {\cal{K}}$. \end{prp} \P If, to the contrary, $H$ were in $Dom\ {\cal{K}}$, then ${\cal{K}}(H)={\cal{K}}(H^{-\infty+})=\lim\limits_{x\to-\infty}{\cal{K}}(H^{x+})$ by i-slice-continuity. But $x<0$ implies that ${\cal{K}}(H^{x+})={\cal{K}}(H^{0+})$ since $H^{x+}=\big(H\cap[x,0)\big)\cup H^{0+}$ and the bounded set $H\cap[x,0)$ does not change the infinite mean. Therefore ${\cal{K}}(H)={\cal{K}}(H^{0+})=+\infty$. In exactly the same way we would get that ${\cal{K}}(H)={\cal{K}}(H^{0-})=-\infty$, which is a contradiction. $\hfill{\Box}$ \begin{prp}\label{pafaiui}Let ${\cal{K}}$ be i-slice-continuous and let the bounded sets be small for sets with infinite mean. Let $H\in Dom\ {\cal{K}}$ be unbounded such that ${\cal{K}}(H^{a-})>-\infty,\ {\cal{K}}(H^{a+})=+\infty\ (a\in\mathbb{R})$. Then ${\cal{K}}(H)=+\infty$. \end{prp} \P ${\cal{K}}(H)={\cal{K}}(H^{-\infty+})=\lim\limits_{x\to-\infty}{\cal{K}}(H^{x+})$ by i-slice-continuity. But $x<a$ implies that ${\cal{K}}(H^{x+})={\cal{K}}(H^{a+})$ since $H^{x+}=\big(H\cap[x,a)\big)\cup H^{a+}$ and the bounded set $H\cap[x,a)$ does not change the infinite mean. Therefore ${\cal{K}}(H)={\cal{K}}(H^{a+})=+\infty$. $\hfill{\Box}$ \begin{prp}Let ${\cal{K}}$ be i-slice-continuous and let the bounded sets be small for sets with infinite mean. If $|{\cal{K}}(H)|<+\infty$ then $\forall x\in\mathbb{R}\ {\cal{K}}(H^{x+})<+\infty,\ {\cal{K}}(H^{x-})>-\infty$. \end{prp} \P Let us show the first statement; let $x\in\mathbb{R}$. Suppose indirectly that ${\cal{K}}(H^{x+})=+\infty$. Then by an argument similar to that of the previous propositions one gets that ${\cal{K}}(H)=+\infty$, which is a contradiction. $\hfill{\Box}$ \begin{prp}Let ${\cal{K}}$ be slice-continuous, monotone and let the bounded sets be small for sets with infinite mean.
Let $H\in Dom\ {\cal{K}}$ be unbounded such that ${\cal{K}}(H^{0-})=-\infty,\ {\cal{K}}(H^{0+})=+\infty$. Then for every $d\in\mathbb{R}$ there are sequences $(x_n),\ (y_n)$ such that $x_n\to-\infty,\ y_n\to+\infty$ and ${\cal{K}}(H\cap[x_n,y_n])=d$. \end{prp} \P Suppose $d>0$ (the remaining cases can be handled similarly). Take $x_1<0$ such that $H\cap[x_1,0]\in Dom\ {\cal{K}}$. Then by slice-continuity one can find $y_1>0$ such that ${\cal{K}}(H\cap[x_1,y_1])=d$. If $x_{n-1},y_{n-1}$ are already given then take $x_n<x_{n-1}-1$ such that ${\cal{K}}(H\cap[x_n,y_{n-1}])<d$. Then by slice-continuity find $y_n>y_{n-1}$ such that ${\cal{K}}(H\cap[x_n,y_n])=d$. Clearly $x_n\to-\infty$, and $(y_n)$ is increasing, hence it has a limit $y_n\to\beta\in\bar{\mathbb{R}}$. If $\beta<+\infty$ then we get ${\cal{K}}(H\cap[x_n,\beta])\to{\cal{K}}(H^{\beta-})$, but ${\cal{K}}(H^{\beta-})={\cal{K}}(H^{0-})=-\infty$. By monotonicity we have ${\cal{K}}(H\cap[x_n,y_n])\leq{\cal{K}}(H\cap[x_n,\beta])$, hence ${\cal{K}}(H\cap[x_n,y_n])\to-\infty$ -- a contradiction. $\hfill{\Box}$ \begin{prp}Let ${\cal{K}}$ be i-slice-continuous. Then for every unbounded set $H$ with finite mean there is a bounded set that is not small for $H$. \end{prp} \P Assume the contrary and let $H$ be unbounded with ${\cal{K}}(H)=h\in\mathbb{R}$ such that all bounded sets are small for $H$. Then either $H^{h-}$ or $H^{h+}$ is unbounded. Say $H^{h+}$ is unbounded. Then ${\cal{K}}(H^{(h+1)+})>h$. Moreover ${\cal{K}}(H)={\cal{K}}(H^{-\infty+})=\lim\limits_{x\to-\infty}{\cal{K}}(H^{x+})$ by i-slice-continuity. But $x<h+1$ implies that ${\cal{K}}(H^{x+})={\cal{K}}(H^{(h+1)+})$ since $H^{x+}=\big(H\cap[x,h+1)\big)\cup H^{(h+1)+}$ and the bounded set $H\cap[x,h+1)$ does not change the mean. Therefore $h={\cal{K}}(H)={\cal{K}}(H^{(h+1)+})>h$, which is a contradiction.
$\hfill{\Box}$ \begin{prp}\label{piscifhi}If ${\cal{K}}$ is i-slice-continuous, interval-infinite and the finite intervals are in $Dom\ {\cal{K}}$ then $\forall \epsilon>0$ there is $H\subset\mathbb{R}$ such that $\lambda(H)<\epsilon$ and ${\cal{K}}(H)=+\infty$. \end{prp} \P Let $I_1=[0,\frac{\epsilon}{3}]$. Then choose an interval $I_2$ such that $\lambda(I_2)<\frac{\epsilon}{2^2}$ and ${\cal{K}}(I_1\cup I_2)>2$. If we have chosen $I_1,\dots,I_{n-1}$ already then choose an interval $I_n$ such that $\lambda(I_n)<\frac{\epsilon}{2^n}$ and ${\cal{K}}(I_1\cup\dots\cup I_n)>n$. Let $H=\bigcup\limits_1^{\infty}I_i$. Then obviously $\lambda(H)<\epsilon$ and by i-slice-continuity we get that ${\cal{K}}(H)=+\infty$. $\hfill{\Box}$ \begin{prp}If ${\cal{K}}$ is interval-infinite and the finite intervals are in $Dom\ {\cal{K}}$ then $\forall \epsilon>0$ there are $H\subset\mathbb{R}$ and sequences $(x_n),\ (y_n)$ such that $\lambda(H)<\epsilon,\ x_n\to-\infty,\ y_n\to+\infty$ and ${\cal{K}}(H\cap[x_n,y_n])$ is divergent. \end{prp} \P Let $I_1=[0,\frac{\epsilon}{3}]$. Then choose an interval $I_2=[a_{2},b_{2}]$ such that $\lambda(I_2)<\frac{\epsilon}{2^2}$ and ${\cal{K}}(I_1\cup I_2)>1$. Let $x_2=0$ and $y_2=b_2$. Then choose an interval $I_3=[a_{3},b_{3}]$ such that $\lambda(I_3)<\frac{\epsilon}{2^3}$ and ${\cal{K}}(I_1\cup I_2\cup I_3)<-1$. Let $y_3=y_2$ and $x_3=a_3$. If we have chosen $I_1,\dots,I_{2n-1}$ already then choose an interval $I_{2n}=[a_{2n},b_{2n}]$ such that $a_{2n}>b_{2n-2}+1,\ \lambda(I_{2n})<\frac{\epsilon}{2^{2n}}$ and ${\cal{K}}(I_1\cup\dots\cup I_{2n})>1$. Let $x_{2n}=x_{2n-1}$ and $y_{2n}=b_{2n}$. Then choose an interval $I_{2n+1}=[a_{2n+1},b_{2n+1}]$ such that $b_{2n+1}<a_{2n-1}-1,\ \lambda(I_{2n+1})<\frac{\epsilon}{2^{2n+1}}$ and ${\cal{K}}(I_1\cup\dots\cup I_{2n+1})<-1$. Let $y_{2n+1}=y_{2n}$ and $x_{2n+1}=a_{2n+1}$. Let $H=\bigcup\limits_1^{\infty}I_i$.
Then obviously $x_n\to-\infty,\ y_n\to+\infty,\ \lambda(H)<\epsilon$ and ${\cal{K}}(H\cap[x_n,y_n])$ is divergent. $\hfill{\Box}$ \begin{prp}Let ${\cal{K}}$ be base-monotone and i-slice-continuous. If $H_1,H_2\subset(0,+\infty),\ H_1\cap H_2=\emptyset,\ {\cal{K}}(H_1)={\cal{K}}(H_2)=+\infty$ then ${\cal{K}}(H_1\cup H_2)=+\infty$. \end{prp} \P By i-slice-continuity we know that $\lim\limits_{x\to+\infty}{\cal{K}}(H_1^{x-})={\cal{K}}(H_1)$ and similarly for $H_2$ and $H_1\cup H_2$. Base-monotonicity gives that $$\min\{{\cal{K}}(H_1^{x-}),{\cal{K}}(H_2^{x-})\}\leq{\cal{K}}(H_1^{x-}\cup H_2^{x-})={\cal{K}}((H_1\cup H_2)^{x-}).$$ The limit of the left hand side is infinite, hence so is the limit of the right hand side, which gives the statement. $\hfill{\Box}$ \begin{prp}\label{pufmm}Let ${\cal{K}}$ be base-monotone, i-slice-continuous and subset-finite. If $H_1,H_2\subset(0,+\infty),\ {\cal{K}}(H_1)<\infty,\ {\cal{K}}(H_2)<\infty$ then ${\cal{K}}(H_1\cup H_2)<\infty$. \end{prp} \P By subset-finiteness we can assume that $H_1,H_2$ are disjoint. By i-slice-continuity we know that $\lim\limits_{x\to+\infty}{\cal{K}}(H_1^{x-})={\cal{K}}(H_1)$ and similarly for $H_2$ and $H_1\cup H_2$. Base-monotonicity gives that $${\cal{K}}((H_1\cup H_2)^{x-})={\cal{K}}(H_1^{x-}\cup H_2^{x-})\leq\max\{{\cal{K}}(H_1^{x-}),{\cal{K}}(H_2^{x-})\}.$$ The limit of the right hand side is finite, hence so is the limit of the left hand side, which gives the statement. $\hfill{\Box}$ \begin{prp}\label{peulekf}Let ${\cal{K}}$ be part-slice-continuous, i-slice-continuous and finite-independent. Moreover let the finite intervals be in $Dom\ {\cal{K}}$. Then $\forall\epsilon>0$ there is an unbounded $H\in Dom\ {\cal{K}}$ such that $\lambda(H)\leq\epsilon$ and ${\cal{K}}(H)<\infty$. \end{prp} \P Let $I_0=[0,\frac{\epsilon}{2}]$. If we have chosen $I_0,\dots,I_{n-1}$ already then find an interval $I_n$ such that $I_n\subset[n,n+\frac{\epsilon}{2^{n+1}}]$ and ${\cal{K}}(\bigcup\limits_{i=0}^n I_i)<1$.
Such an interval can be found; indeed, consider $H_n=\bigcup\limits_{i=0}^n I_i\cup[n,n+\frac{\epsilon}{2^{n+1}}]$. If ${\cal{K}}(H_n)\geq 1$ then let $f(x)={\cal{K}}\big(\bigcup\limits_{i=0}^n I_i\cup[n,x]\big)\ (n\leq x\leq n+\frac{\epsilon}{2^{n+1}})$. By part-slice-continuity $f$ is continuous and by finite-independence $f(n)={\cal{K}}(\bigcup\limits_{i=0}^n I_i)<1$. Hence there is $x>n$ such that still $f(x)<1$. Let $I_n=[n,x]$. Let $H=\bigcup\limits_{i=0}^{\infty} I_i$. Clearly $H$ is unbounded, $\lambda(H)\leq\epsilon$ and by i-slice-continuity ${\cal{K}}(H)\leq 1$. $\hfill{\Box}$ \begin{prp}Let ${\cal{K}}$ be part-slice-continuous, i-slice-continuous, finite-independent, base-monotone, subset-finite and ${\cal{K}}([0,+\infty))=+\infty,\ {\cal{K}}((-\infty,0])=-\infty$. Moreover let the bounded sets be small for sets with infinite mean and let the finite intervals be in $Dom\ {\cal{K}}$. Then $\forall h\in\mathbb{R}$ there is $H\in Dom\ {\cal{K}}$ such that both $H^{0-},H^{0+}$ are unbounded and ${\cal{K}}(H)=h$. \end{prp} \P Let $h\in\mathbb{R}$. First let us observe that base-monotonicity gives that $\forall a\in\mathbb{R}^+\ {\cal{K}}([a,+\infty))=+\infty,\ {\cal{K}}((-\infty,-a])=-\infty$: indeed, base-monotonicity yields that $+\infty={\cal{K}}([0,+\infty))={\cal{K}}([0,a)\cup[a,+\infty))\leq\max\{{\cal{K}}([0,a)),{\cal{K}}([a,+\infty))\}$, and ${\cal{K}}([0,a))$ is finite by internality, hence ${\cal{K}}([a,+\infty))=+\infty$. According to \ref{peulekf} we can construct unbounded $H_1,H_2\subset\mathbb{R}$ such that $H_1\subset(0,+\infty),\ H_2\subset(-\infty,0)$ and ${\cal{K}}(H_1)<\infty,\ {\cal{K}}(H_2)<\infty$. By \ref{pufmm} $k={\cal{K}}(H_1\cup H_2)<\infty$. If $k=h$ then we are done. Say $k<h$ (the other inequality is similar). Let $$f(x)={\cal{K}}\big(H_1\cup H_2\cup [h,x)\big)\ \ (x\in[h,+\infty)).$$ By part-slice-continuity $f$ is continuous. By \ref{pafaiui}, our first observation and i-slice-continuity we get that $\lim\limits_{x\to+\infty}f(x)=+\infty$. Hence there is $x\in[h,+\infty)$ such that ${\cal{K}}(H_1\cup H_2\cup [h,x))=h$.
$\hfill{\Box}$ \begin{prp}If ${\cal{K}}$ is monotone and ${\cal{K}}(H)=+\infty,\ x\in\mathbb{R}$ then ${\cal{K}}(H^{x+})=+\infty$. \end{prp} \P Suppose that ${\cal{K}}(H^{x+})$ were finite for some $x\in\mathbb{R}$. Clearly $\sup (H\cap(-\infty,x))\leq\inf H^{x+}$ holds, which gives that ${\cal{K}}(H)={\cal{K}}\big((H\cap(-\infty,x))\cup H^{x+}\big)\leq{\cal{K}}(H^{x+})<\infty$, which is a contradiction. $\hfill{\Box}$ \begin{prp}If ${\cal{K}}$ is monotone, bounded-finite and not finite then ${\cal{K}}$ is not Cantor-continuous. \end{prp} \P If ${\cal{K}}$ is not finite then there is $H\in Dom\ {\cal{K}}$ such that ${\cal{K}}(H)=\pm\infty$, say ${\cal{K}}(H)=+\infty$. Find an $x_0\in\mathbb{R}$ such that $H^{x_0-}\ne\emptyset$. Let $C_0=H^{x_0-},\ C_n=C_0\cup H^{(x_0+n)+}$. Evidently $\bigcap\limits_{i=1}^{\infty}C_i=C_0$. Then $\forall i>0\ {\cal{K}}(C_i)=+\infty$ because otherwise by bounded-finiteness we would get that ${\cal{K}}(H)={\cal{K}}\big(C_i\cup (H\cap(x_0,x_0+i))\big)<+\infty$. But ${\cal{K}}(C_0)<+\infty$, showing that ${\cal{K}}$ is not Cantor-continuous. $\hfill{\Box}$ \begin{lem}\label{lsfs}Let ${\cal{K}}$ be strong-base-monotone, i-slice-continuous. If $H,K\in Dom\ {\cal{K}},\ H\cap K=\emptyset,\ {\cal{K}}(H)=+\infty,\ K$ is bounded and ${\cal{K}}(H\cup K)\ne{\cal{K}}(K)$ then $K$ is small for $H$. \end{lem} \P By i-slice-continuity $\lim\limits_{x\to+\infty}{\cal{K}}(H^{x-})=+\infty$. By base-monotonicity there are constants $0\leq c_x,d_x\leq 1$ such that $c_x+d_x=1$ and $${\cal{K}}(H^{x-}\cup K)=c_x{\cal{K}}(H^{x-})+d_x{\cal{K}}(K).$$ By assumption there is $x\in\mathbb{R}$ such that $c_x>0$, since otherwise we would get that ${\cal{K}}(H\cup K)={\cal{K}}(K)$. Strong-base-monotonicity implies that if $x<y$ then $c_x\leq c_y$. But then we get that $\lim\limits_{x\to+\infty}{\cal{K}}(H^{x-}\cup K)=+\infty$, hence by i-slice-continuity ${\cal{K}}(H\cup K)=+\infty$, i.e. $K$ is small for $H$.
$\hfill{\Box}$ \begin{prp}\label{pbss}Let ${\cal{K}}$ be strong-base-monotone and i-slice-continuous, and suppose that $H,K\in Dom\ {\cal{K}},\ H\cap K=\emptyset,\ {\cal{K}}(H)=+\infty$ and $K$ being bounded imply that ${\cal{K}}(H\cup K)\ne{\cal{K}}(K)$. Then the bounded sets are small for sets with infinite mean. \end{prp} \P Let $H,K\in Dom\ {\cal{K}}$ such that $K$ is bounded and ${\cal{K}}(H)=+\infty$. We can assume that $H,K$ are disjoint. Then by \ref{lsfs} we get the statement. $\hfill{\Box}$ \begin{ex}Similarly to the proof of \ref{piscifhi} one can show that for every $0\leq s\leq 1,\ \epsilon>0$ there is $H\subset\mathbb{R}$ such that $\mu^s(H)<\epsilon$ and $Avg^s(H)=+\infty$. \end{ex} \begin{ex}${\cal{M}}^{\mu}$ is strong-base-monotone. \end{ex} \P We have to show the ``strong'' part only. Set ${\cal{K}}={\cal{M}}^{\mu}$. Let $H_1,H_2,K\in Dom\ {\cal{K}}$ be bounded sets such that $H_1\subset H_2,\ H_2\cap K=\emptyset$. Then $${\cal{K}}(H_1\cup K)=\frac{\mu(H_1){\cal{K}}(H_1)+\mu(K){\cal{K}}(K)}{\mu(H_1)+\mu(K)}$$ and similarly for $H_2$. We have to show that $$\frac{\mu(H_1)}{\mu(H_1)+\mu(K)}\leq\frac{\mu(H_2)}{\mu(H_2)+\mu(K)},$$ which is straightforward since $\mu(H_1)\leq\mu(H_2)$. $\hfill{\Box}$ \begin{ex}The bounded sets are small for sets with infinite mean with respect to ${\cal{M}}^{\mu}$. \end{ex} \P Apply \ref{pbss}. $\hfill{\Box}$ \begin{ex}For $Avg$ there are $H,K\in Dom\ Avg$ such that $K$ is bounded, $Avg(H)=+\infty$ and $Avg(H\cup K)<\infty$. Let $K=[0,1]$ and let $H$ be a $0.9$-set such that $Avg(H)=+\infty$. Then, since $\mu^1(H)=0$, we get $Avg(H\cup K)=Avg(K)=0.5$. \end{ex} \begin{prp}${\cal{K}}$ is subset-finite iff $|{\cal{K}}(K)|=+\infty$ and $K\subset H$ imply that either $H\not\in Dom\ {\cal{K}}$ or $|{\cal{K}}(H)|=+\infty$. $\hfill{\Box}$ \end{prp} \begin{cor}If ${\cal{K}}$ is subset-finite then the bounded sets are small for sets with infinite mean. $\hfill{\Box}$ \end{cor} \begin{ex}${\cal{M}}^{\mu}$ is subset-finite and bounded-finite.
\end{ex} \P Both follow from \begin{equation}\label{eq1} {\cal{M}}^{\mu}(A\cup^* B)=\frac{\mu(A){\cal{M}}^{\mu}(A)+\mu(B){\cal{M}}^{\mu}(B)}{\mu(A)+\mu(B)} \end{equation} when $A,B\in Dom\ {\cal{M}}^{\mu}$ (see \cite{lambm} Proposition 2.8). If $|{\cal{M}}^{\mu}(H)|<\infty,\ K\subset H$ then set $A=K,\ B=H-K$; then (\ref{eq1}) gives that ${\cal{M}}^{\mu}$ is subset-finite. If $|{\cal{M}}^{\mu}(H)|<\infty$ and $K\in Dom\ {\cal{M}}^{\mu}$ is bounded then set $A=H,\ B=K-H$; then (\ref{eq1}) gives that ${\cal{M}}^{\mu}$ is bounded-finite. $\hfill{\Box}$ \begin{ex}${\cal{M}}^{\mu}$ is interval-continuous. \end{ex} \P It simply follows from (\ref{eq1}) if we substitute $B$ with $I+x$ and use the fact that $\mu$ is $\epsilon$--$\delta$ absolutely continuous with respect to $\lambda$. $\hfill{\Box}$ \begin{ex}$Avg^1$ is interval-infinite. \end{ex} \P It can be derived from (\ref{eq1}) if we substitute $B$ with $I+x$ and remark that $\lambda(I+x)=\lambda(I)$ and $Avg^1(I+x)=Avg^1(I)+x$. $\hfill{\Box}$ \begin{prp}If ${\cal{K}}$ is monotone then $x\mapsto{\cal{K}}(H^{x+})$ is increasing ($H\in Dom\ {\cal{K}}$). \end{prp} \P Let $x<y\ (x,y\in\mathbb{R})$. Let $H_1=H\cap[x,y],\ H_2=H\cap[y,+\infty)$. Then $\sup H_1\leq\inf H_2$, which implies that ${\cal{K}}(H^{x+})={\cal{K}}(H_1\cup H_2)\leq{\cal{K}}(H_2)={\cal{K}}(H^{y+})$. $\hfill{\Box}$ \begin{cor}If ${\cal{K}}$ is monotone, $x<y$ and ${\cal{K}}(H^{x+})=+\infty$ then ${\cal{K}}(H^{y+})=+\infty$. If $x<y$ and ${\cal{K}}(H^{y+})<+\infty$ then ${\cal{K}}(H^{x+})<+\infty.$ $\hfill{\Box}$ \end{cor} \begin{prp}Let ${\cal{K}}$ be shift-invariant and $H\subset\mathbb{R},\ x\in\mathbb{R},\ x\ne 0$ such that $H+x=H$. Then $H\not\in Dom\ {\cal{K}}$. \end{prp} \P Clearly $H\in Dom\ {\cal{K}}$ would mean that ${\cal{K}}(H)+x={\cal{K}}(H)$. $\hfill{\Box}$\bigskip The following can be shown similarly. \begin{prp}Let ${\cal{K}}$ be homogeneous and $H\subset\mathbb{R},\ \alpha\in\mathbb{R}$ such that $\alpha H=H$. Then ${\cal{K}}(H)=0$ or $\alpha=1$.
$\hfill{\Box}$ \end{prp} We close this section with some examples. \begin{ex}Let ${\cal{K}}=Avg$. Let $H_1\subset[0,1]$ be a set such that $H_1^{(1-\frac{1}{n})+}$ has Hausdorff dimension $\frac{1}{2}$ and $\mu^{\frac{1}{2}}(H_1^{(1-\frac{1}{n})+})>0$ for all $n\in\mathbb{N}$. Let $H_2\subset[2,+\infty)$ be a set with Hausdorff dimension $\frac{1}{3}$ such that $Avg(H_2)=+\infty$, and set $H=H_1\cup H_2$. Then ${\cal{K}}(H^{x+})<1$ when $x<1$, while $x\geq 1$ implies that ${\cal{K}}(H^{x+})=+\infty$. \end{ex} \begin{ex}We present an example that is not interval-infinite. Let us take the Borel measure associated to the harmonic mean, $\mu([a,b])=\mu_f([a,b])=\frac{1}{a^2}-\frac{1}{b^2}\ (a,b>0)$, i.e. $d\mu=\frac{2}{x^3}\,d\lambda$, and the mean given by that measure for $H$ with $0<\mu(H)<+\infty$: $${\cal{M}}^{\mu}(H)=\frac{\int\limits_Hx\, d\mu}{\mu(H)}$$ (see 3.7 in \cite{lambm}). Take $H=[1,2],I=[1,2]$. Then $${\cal{M}}^{\mu}(H\cup I+x)=\frac{2\big(\frac{1}{1}-\frac{1}{2}\big)+2\big(\frac{1}{1+x}-\frac{1}{2+x}\big)}{\big(\frac{1}{1^2}-\frac{1}{2^2}\big)+\big(\frac{1}{(1+x)^2}-\frac{1}{(2+x)^2}\big)}\to\frac{2\big(\frac{1}{1}-\frac{1}{2}\big)}{\frac{1}{1^2}-\frac{1}{2^2}}=\frac{4}{3}$$ when $x$ tends to infinity. \smallskip It is also an example of a finite mean. For simplicity let us restrict ${\cal{M}}^{\mu}$ to measurable subsets of $[1,+\infty)$. First let us show that ${\cal{M}}^{\mu}([1,+\infty))=2$. An easy calculation shows that $${\cal{M}}^{\mu}([1,+\infty))=\frac{\int\limits_1^{\infty}\frac{2}{x^2}\, d\lambda}{1}=2\bigg[-\frac{1}{x}\bigg]_1^{\infty}=2.$$ Now if $K\subset[1,+\infty),\ 0<\mu(K)<+\infty$ then $\int\limits_K x\,d\mu\leq\int\limits_{[1,+\infty)} x\,d\mu=2$, therefore ${\cal{M}}^{\mu}(K)$ is finite. \smallskip For a similar reason this mean is not limit-finite: ${\cal{M}}^{\mu}([a,+\infty))=2a\ (a>0)$. \end{ex} \begin{ex}$Avg^1$ is not limit-finite. Let $H=\bigcup\limits_{i=1}^{\infty}[i,i+\frac{1}{2^i}]$.
Then $$Avg^1(H)=\frac{1}{2}\,\frac{\sum\limits_{i=1}^{\infty}\big((i+\frac{1}{2^i})^2-i^2\big)}{\sum\limits_{i=1}^{\infty}\frac{1}{2^i}}= \frac{1}{2}\,\frac{\sum\limits_{i=1}^{\infty}\big(2\frac{i}{2^i}+\frac{1}{2^{2i}}\big)}{1},$$ which is clearly finite. For $n\in\mathbb{N}$ we get $$Avg^1(H^{n+})=\frac{1}{2}\,\frac{\sum\limits_{i=n}^{\infty}\big(2\frac{i}{2^i}+\frac{1}{2^{2i}}\big)}{\sum\limits_{i=n}^{\infty}\frac{1}{2^i}}=\frac{1}{2}\,\frac{2\frac{n+1}{2^{n-1}}+\frac{1}{3\cdot 4^{n-1}}}{\frac{1}{2^{n-1}}}=n+1+\frac{1}{3\cdot 2^{n}},$$ showing that $Avg^1(H^{n+})-\inf H^{n+}=Avg^1(H^{n+})-n\to 1\ne 0$. \end{ex} \begin{ex}Let $\mu$ be a Borel measure on $(0,+\infty)$ such that $\mu(\{x\})=0\ (x\in\mathbb{R})$. Then ${\cal{M}}^{\mu}$ is i-strong-internal. \end{ex} \P If ${\cal{M}}^{\mu}$ were not i-strong-internal then there would be a set $H$ with $\sup H=+\infty$ such that $H^{n+}$ consists of isolated points for some $n\in\mathbb{N}$ and ${\cal{M}}^{\mu}(H)=+\infty$. But for such an $H$ the set $H^{n+}$ is countable, hence $\mu(H^{n+})=0$ and we get that ${\cal{M}}^{\mu}(H)\leq n$. $\hfill{\Box}$ \section{Examples} In this section we present some examples of means that are defined on some unbounded sets as well. \medskip \begin{ex}Let $H\subset[0,+\infty),\ H\ne\emptyset$. Let ${\cal{K}}^0(H)=\inf H$. Let $\beta<\omega_1$ be an ordinal number and suppose ${\cal{K}}^{\alpha}$ has already been defined for $\alpha<\beta$. If $\beta$ is a successor ordinal, $\beta=\alpha+1$, then let $${\cal{K}}^{\beta}(H)= \begin{cases} \inf\big([{\cal{K}}^{\alpha}(H),+\infty)-H\big)&\text{if }[{\cal{K}}^{\alpha}(H),+\infty)-H\ne\emptyset\\ {\cal{K}}^{\alpha}(H)&\text{otherwise.} \end{cases} $$ If $\beta<\omega_1$ is a limit ordinal then set ${\cal{K}}^{\beta}(H)=\sup\{{\cal{K}}^{\alpha}(H):\alpha<\beta\}$. \end{ex} \begin{ex}Let $H\subset[1,+\infty)$ be unbounded. Let $${\cal{K}}(H)=\inf H+\sum\limits_{i=1}^{\infty}\frac{f(i)}{i}$$ where $$f(i)= \begin{cases} 1&\text{if }[i,i+1)\cap H\ne\emptyset\\ 0&\text{otherwise.} \end{cases} $$ \end{ex} \begin{ex}Let $(a_n)$ be an increasing sequence such that $a_n\to+\infty$.
Let $H\subset[0,+\infty)$. Let $${\cal{K}}(H)= \begin{cases} \sup\{a_n:a_n\in H\}&\text{if }\exists n\ a_n\in H\\ \inf H&\text{otherwise.} \end{cases} $$\end{ex} \begin{ex}Let ${\cal{K}}$ be a mean defined on bounded subsets. Let us extend ${\cal{K}}$ in the simplest way: $$\tilde{{\cal{K}}}(H)= \begin{cases} +\infty&\text{if }H\text{ is unbounded}\\ {\cal{K}}(H)&\text{otherwise.} \end{cases} $$ \end{ex} \begin{ex}If $H\subset(0,+\infty)$ then set $\frac{1}{H}=\{\frac{1}{h}:h\in H\}$. Let ${\cal{K}}$ be a mean defined on bounded subsets of $(0,+\infty)$ such that $H\in Dom\ {\cal{K}}$ implies that $\frac{1}{H}\in Dom\ {\cal{K}}$. If $\sup H=+\infty,\ \inf H>0$ then we can extend ${\cal{K}}$ to $H$ in the following way: $${\cal{K}}(H)=\frac{1}{{\cal{K}}\big(\frac{1}{H}\big)}.$$ \end{ex} \begin{ex}We can slightly generalize the definition of the mean-set ${\cal{MS}}^{hf}(H)$ given in \cite{lamis} Definition 15. For $0<\lambda(H)<+\infty$ let ${\cal{MS}}^{hf}(H)=\{x:\lambda(H^{x-})=\lambda(H^{x+})\}$. Clearly ${\cal{MS}}^{hf}(H)$ is a closed finite interval. \end{ex} \section{Extending means} In this section we are going to investigate how the domain of a mean can be extended to unbounded sets as well. \begin{prp}\label{pembb}Let ${\cal{K}}$ be a monotone mean whose domain contains bounded sets only. If $H\subset\mathbb{R}$ is such that $\forall x\in\mathbb{R}\ H^{x+},H^{x-}\in Dom\ {\cal{K}}$ and $\inf H>-\infty$, then $\lim\limits_{x\to+\infty}{\cal{K}}(H^{x-})$ exists. \end{prp} \P By monotonicity ${\cal{K}}(H^{x-})$ is increasing in $x$, hence the limit exists in $\bar{\mathbb{R}}$. $\hfill{\Box}$\bigskip One could formulate a similar statement for $\sup H<+\infty$ and $x\to-\infty$. \smallskip Now we are going to define a way of extending means. \begin{df}Let ${\cal{K}}$ be a mean whose domain contains bounded sets only. Let $H\subset\mathbb{R}$ be such that $\forall x\in\mathbb{R}\ H^{x+},H^{x-}\in Dom\ {\cal{K}}$. Set $$\hat{\cal{K}}(H)=\mathop{\lim\limits_{x\to-\infty}}_{y\to+\infty}{\cal{K}}(H\cap[x,y])$$ if the limit exists.
\end{df} \begin{prp}Evidently $Dom\ {\cal{K}}\subset Dom\ \hat{\cal{K}}$; moreover if $H\in Dom\ {\cal{K}}$ then $\hat{\cal{K}}(H)={\cal{K}}(H)$, i.e. $\hat{\cal{K}}$ is an extension of ${\cal{K}}$. $\hfill{\Box}$ \end{prp} \begin{rem}By \ref{pembb}, if $H\subset\mathbb{R}$ is unbounded such that $\forall x\in\mathbb{R}\ H^{x+},H^{x-}\in Dom\ {\cal{K}}$ and $\inf H>-\infty$ or $\sup H<+\infty$, then ${\cal{K}}$ can be extended to $H$. \end{rem} \begin{prp}\label{pmls}$\hat{\cal{K}}(H)=h\in\bar{\mathbb{R}}$ iff for all sequences $(x_n),(y_n)$ such that $x_n\to-\infty,y_n\to+\infty$, $\lim\limits_{n\to\infty}{\cal{K}}(H\cap[x_n,y_n])=h$ holds. $\hfill{\Box}$ \end{prp} \begin{prp}Let $H\subset\mathbb{R}$ be such that $\forall x\in\mathbb{R}\ H^{x+},H^{x-}\in Dom\ {\cal{K}}$. Then ${\cal{K}}$ can be extended to $H$ iff for all sequences $(x_n),(y_n)$ such that $x_n\to-\infty,y_n\to+\infty$ the limit $\lim\limits_{n\to\infty}{\cal{K}}(H\cap[x_n,y_n])$ always exists. \end{prp} \P Merging two such sequences shows that all of them have to provide the same limit, hence \ref{pmls} is applicable. $\hfill{\Box}$ \begin{prp}Let ${\cal{K}}$ be a mean whose domain contains bounded sets only. Let $H\subset\mathbb{R}$ be such that $\forall x\in\mathbb{R}\ H^{x+},H^{x-}\in Dom\ {\cal{K}}$. Let $f(x,y)={\cal{K}}(H\cap[x,y])$ be integrable. If $\hat{\cal{K}}(H)$ exists then $$\hat{\cal{K}}(H)=\lim\limits_{p\to+\infty}\frac{1}{(2p)^2}\int\limits_{-p}^p\int\limits_{-p}^p{\cal{K}}(H\cap[x,y])\,dxdy.$$ \end{prp} \P Denote $A=\hat{\cal{K}}(H)$ and let $A\in\mathbb{R}$ (the cases $A=\pm\infty$ can be handled similarly). Let $\epsilon>0$ be given. Choose $N\in\mathbb{R}^+$ such that $x<-N<N<y$ implies $|{\cal{K}}(H\cap[x,y])-A|<\frac{\epsilon}{3}$. Choose $M\in\mathbb{R}^+,\ M>N$ such that $$\frac{N(2N)^2}{(2M)^2}<\frac{\epsilon}{2}\text{ and }A-\frac{\epsilon}{2}<\frac{(A-\frac{\epsilon}{3})((2M)^2-(2N)^2)}{(2M)^2}.$$ Let $p>M$.
Let us use the notations $$L=[-N,N]\times[-N,N],\ K=[-p,p]\times[-p,p]-L.$$ Then $$\frac{1}{(2p)^2}\int\limits_{-p}^p\int\limits_{-p}^p{\cal{K}}(H\cap[x,y])\,dxdy= \frac{1}{(2p)^2}\int\limits_{L}{\cal{K}}(H\cap[x,y])\,dxdy+\frac{1}{(2p)^2}\int\limits_{K}{\cal{K}}(H\cap[x,y])\,dxdy.$$ Clearly $(x,y)\in L$ implies that $|{\cal{K}}(H\cap[x,y])|\leq N$, hence $$\left\vert\frac{1}{(2p)^2}\int\limits_{L}{\cal{K}}(H\cap[x,y])\,dxdy\right\vert\leq\frac{N(2N)^2}{(2M)^2}<\frac{\epsilon}{2}.$$ Moreover $$A-\frac{\epsilon}{2}<\frac{(A-\frac{\epsilon}{3})((2p)^2-(2N)^2)}{(2p)^2}=\frac{1}{(2p)^2}\int\limits_{K}\Big(A-\frac{\epsilon}{3}\Big)dxdy<\frac{1}{(2p)^2}\int\limits_{K}{\cal{K}}(H\cap[x,y])\,dxdy$$ $$<\frac{1}{(2p)^2}\int\limits_{K}\Big(A+\frac{\epsilon}{3}\Big)dxdy=\frac{(A+\frac{\epsilon}{3})((2p)^2-(2N)^2)}{(2p)^2}<A+\frac{\epsilon}{2}.$$ Therefore if $p>M$ then $$A-\epsilon<\frac{1}{(2p)^2}\int\limits_{-p}^p\int\limits_{-p}^p{\cal{K}}(H\cap[x,y])\,dxdy<A+\epsilon.$$ $\hfill{\Box}$ \begin{prp}If ${\cal{K}}_1,{\cal{K}}_2$ are two means whose domains contain bounded sets only, $Dom\ {\cal{K}}_1=Dom\ {\cal{K}}_2$ and ${\cal{K}}_1\leq{\cal{K}}_2$ then $\hat{\cal{K}}_1\leq\hat{\cal{K}}_2$. $\hfill{\Box}$ \end{prp} For some means one can ask whether the straightforward (algebraic) generalization of the mean to unbounded sets equals the extension that we have just defined. We investigate two means in this respect. \begin{prp}Let ${\cal{K}}={\cal{M}}^{\mu}$ restricted to the bounded sets. If $0<\mu(H)<+\infty$ and $\hat{\cal{K}}(H)$ exists then $\hat{\cal{K}}(H)={\cal{M}}^{\mu}(H)$. \end{prp} \P Clearly $$\hat{\cal{K}}(H)=\mathop{\lim\limits_{x\to-\infty}}_{y\to+\infty}\frac{\int\limits_{H\cap[x,y]} z\,d\mu(z)}{\mu(H\cap[x,y])}=\frac{\int\limits_{H} z\,d\mu(z)}{\mu(H)}={\cal{M}}^{\mu}(H).$$ $\hfill{\Box}$ \begin{ex}Let ${\cal{K}}={\cal{M}}^{lis}$ (i.e. ${\cal{K}}(H)=\frac{\varliminf H+\varlimsup H}{2}$ for a bounded $H$).
Then $\hat{\cal{K}}(H)$ is finite iff there is $n\in\mathbb{R}^+$ such that there is no finite accumulation point of $H-[-n,n]$. We can conclude that $\hat{\cal{K}}(H)\ne\frac{\varliminf H+\varlimsup H}{2}$ in general (e.g. for $H=\mathbb{N}\cup\{\frac{1}{n}:n\in\mathbb{N}\}$ we have $\hat{\cal{K}}(H)=0$ while $\frac{\varliminf H+\varlimsup H}{2}=+\infty$). It is also clear that $H\in Dom\ \hat{\cal{K}} \iff |H''\cap\{-\infty,+\infty\}|<2$. $\hfill{\Box}$ \end{ex} \subsection{Inherited properties} In this section we investigate some properties which the extension inherits from the original mean. \begin{prp}If ${\cal{K}}$ is monotone then so is $\hat{\cal{K}}$. \end{prp} \P Let $H_1,H_2,H_1\cup H_2\in Dom\ \hat{\cal{K}}$ and let $\sup H_1\leq\inf H_2$. If $x<\sup H_1$ and $\inf H_2<y$ then $\sup (H_1\cap[x,y])\leq\inf (H_2\cap[x,y])$ which gives that $${\cal{K}}(H_1\cap[x,y])\leq{\cal{K}}((H_1\cup H_2)\cap[x,y])\leq{\cal{K}}(H_2\cap[x,y])$$ which gives the statement when $x\to-\infty,y\to+\infty$. $\hfill{\Box}$ \begin{prp}If ${\cal{K}}$ is base-monotone then so is $\hat{\cal{K}}$. \end{prp} \P Let $H_1,H_2,H_1\cup H_2\in Dom\ \hat{\cal{K}},\ H_1\cap H_2=\emptyset$. If $x,y\in\mathbb{R}$ then $(H_1\cap[x,y])\cap (H_2\cap[x,y])=\emptyset$ which gives that $$\min\{{\cal{K}}(H_1\cap[x,y]),{\cal{K}}(H_2\cap[x,y])\}\leq{\cal{K}}((H_1\cup H_2)\cap[x,y])\leq\max\{{\cal{K}}(H_1\cap[x,y]),{\cal{K}}(H_2\cap[x,y])\}$$ which gives the statement when $x\to-\infty,y\to+\infty$. $\hfill{\Box}$ \begin{prp}If ${\cal{K}}$ is monotone and symmetric then $\hat{\cal{K}}$ is symmetric as well. \end{prp} \P Let $H\in Dom\ \hat{\cal{K}}$ be symmetric, i.e. $\exists s\in\mathbb{R}\ T_s(H)=H$, where $T_s$ denotes the reflection through the point $s\in\mathbb{R}$, that is $T_s(x)=2s-x\ (x\in\mathbb{R})$. Let $\hat{\cal{K}}(H)=A\in\bar{\mathbb{R}}$. Suppose that $s\ne A$. Take a neighbourhood $K$ of $A$ such that $s\not\in K$. We know that there are numbers $N,M$ such that $x\leq N<M\leq y$ implies that ${\cal{K}}(H\cap[x,y])\in K$.
We can assume that $N<s,\ M=T_s(N)$. Then clearly ${\cal{K}}(H\cap[N,M])=s$, which is a contradiction since ${\cal{K}}(H\cap[N,M])\in K$ while $s\not\in K$. $\hfill{\Box}$\bigskip By the definition it is clear that \begin{prp}If ${\cal{K}}$ is slice-continuous then $\hat{\cal{K}}$ is i-slice-continuous. $\hfill{\Box}$ \end{prp} \begin{prp}If ${\cal{K}}$ is shift-invariant then so is $\hat{\cal{K}}$. \end{prp} \P The obvious facts that $H^{x+}+y=(H+y)^{(x+y)+},\ (H+y)^{x+}=H^{(x-y)+}+y$ and similarly for $H^{x-}$ give the statement. $\hfill{\Box}$ \begin{prp}If ${\cal{K}}$ is disjoint-monotone then so is $\hat{\cal{K}}$. \end{prp} \P Let $H_1,H_2,H_1\cup H_2\in Dom\ \hat{\cal{K}},\ H_1\cap H_2=\emptyset$. Let $\hat{\cal{K}}(H_1)\leq\hat{\cal{K}}(H_2)$. First suppose that $\hat{\cal{K}}(H_1)<\hat{\cal{K}}(H_2)$. Then there is $N\in\mathbb{R}$ such that if $x<-N<N<y$ then ${\cal{K}}(H_1\cap[x,y])\leq{\cal{K}}(H_2\cap[x,y])$. Then by the disjoint-monotonicity of ${\cal{K}}$ we get that ${\cal{K}}(H_1\cap[x,y])\leq{\cal{K}}((H_1\cup H_2)\cap[x,y])\leq{\cal{K}}(H_2\cap[x,y])$. If we take the limit we get the statement. Now let $\hat{\cal{K}}(H_1)=\hat{\cal{K}}(H_2)\in\mathbb{R}$. Take two sequences $(x_n),(y_n)$ such that $x_n\to-\infty,\ y_n\to+\infty$. Let $(x_n'),(y_n')$ be the subsequences of $(x_n),(y_n)$ for which ${\cal{K}}(H_1\cap[x_n',y_n'])\leq{\cal{K}}(H_2\cap[x_n',y_n'])$ holds. Let us denote the remainder of the original sequences by $(x_n''),(y_n'')$, for which ${\cal{K}}(H_1\cap[x_n'',y_n''])>{\cal{K}}(H_2\cap[x_n'',y_n''])$ holds. For the first pair of sequences we have ${\cal{K}}(H_1\cap[x_n',y_n'])\leq{\cal{K}}((H_1\cup H_2)\cap[x_n',y_n'])\leq{\cal{K}}(H_2\cap[x_n',y_n'])$. If $n\to\infty$ then we get that $\hat{\cal{K}}(H_1)=\hat{\cal{K}}(H_1\cup H_2)=\hat{\cal{K}}(H_2)$. Similarly, using the second pair of sequences we get exactly the same $\hat{\cal{K}}(H_1)=\hat{\cal{K}}(H_1\cup H_2)=\hat{\cal{K}}(H_2)$, which completes the proof.
$\hfill{\Box}$ \section{Some properties of ${\cal{M}}^{\mu}$} Here we examine ${\cal{M}}^{\mu}\ (Avg^1)$ more closely with regard to its behavior on unbounded sets. \begin{prp}\label{paii}If $H\subset\mathbb{R},\ \inf H>-\infty,\ \mu(H)=+\infty$ then $\hat{\cal{K}}(H)=+\infty$ for ${\cal{K}}={\cal{M}}^{\mu}$. \end{prp} \P Let $n>\inf H$. Then $$\hat{\cal{K}}(H)=\lim\limits_{m>n,m\to+\infty}{\cal{M}}^{\mu}(H^{m-})=$$ $$\lim\limits_{m>n,m\to+\infty}\frac{\mu(H^{n-}){\cal{M}}^{\mu}(H^{n-})+\mu(H\cap[n,m]){\cal{M}}^{\mu}(H\cap[n,m])}{\mu(H^{n-})+\mu(H\cap[n,m])}\geq$$ $$\lim\limits_{m>n,m\to+\infty}\frac{\mu(H\cap[n,m])\cdot n}{\mu(H^{n-})+\mu(H\cap[n,m])}=$$ $$\lim\limits_{m>n,m\to+\infty}\frac{n}{\frac{\mu(H^{n-})}{\mu(H\cap[n,m])}+1}=n$$ because $\mu(H\cap[n,m])\to+\infty$, which gives that $\frac{\mu(H^{n-})}{\mu(H\cap[n,m])}\to 0$. Since $n>\inf H$ was arbitrary, we get $\hat{\cal{K}}(H)=+\infty$. $\hfill{\Box}$ \begin{prp}Let ${\cal{K}}={\cal{M}}^{\mu}$. If $\hat{\cal{K}}(H^{0-})>-\infty,\ \hat{\cal{K}}(H^{0+})<+\infty$ then $\hat{\cal{K}}(H)$ exists and $|\hat{\cal{K}}(H)|<\infty$. \end{prp} \P We could simply refer to \ref{pufmm} but we give a direct proof too. By \ref{paii} we get that $\mu(H^{0-})<\infty,\mu(H^{0+})<\infty$. Clearly $$\hat{\cal{K}}(H)=\mathop{\lim\limits_{x\to-\infty}}_{y\to+\infty}\frac{\int\limits_{H\cap[x,y]} zd\mu(z)}{\mu(H\cap[x,y])}= \mathop{\lim\limits_{x\to-\infty}}_{y\to+\infty}\frac{\int\limits_{H\cap[x,0]} zd\mu(z)+\int\limits_{H\cap[0,y]} zd\mu(z)}{\mu(H\cap[x,y])}=$$ $$\mathop{\lim\limits_{x\to-\infty}}_{y\to+\infty}\frac{\mu(H\cap[x,0]){\cal{M}}^{\mu}(H\cap[x,0])+\mu(H\cap[0,y]){\cal{M}}^{\mu}(H\cap[0,y])}{\mu(H\cap[x,y])}=$$ $$\frac{\mu(H^{0-}){\cal{M}}^{\mu}(H^{0-})+\mu(H^{0+}){\cal{M}}^{\mu}(H^{0+})}{\mu(H)}.$$ $\hfill{\Box}$ \begin{ex}Let two sequences $(b_n),(c_n)$ be given such that $0<c_n<1,\ \sum\limits_{n=1}^{\infty}c_n<+\infty,\ b_n+c_n\leq b_{n+1}$. Let $I_n=[b_n,b_n+c_n],\ H=\bigcup\limits_{n=1}^{\infty}I_n$. Then $Avg^1(H)$ is finite iff $\sum\limits_{n=1}^{\infty}b_n\cdot c_n<+\infty$.
\end{ex} \P Clearly $$Avg^1(H)=\frac{1}{2}\frac{\sum\limits_{n=1}^{\infty}(b_n+c_n)^2-b_n^2}{\sum\limits_{n=1}^{\infty}c_n}= \frac{\sum\limits_{n=1}^{\infty}c_n(b_n+\frac{1}{2}c_n)}{\sum\limits_{n=1}^{\infty}c_n}=\frac{\sum\limits_{n=1}^{\infty}b_n\cdot c_n+\frac{1}{2}\sum\limits_{n=1}^{\infty}c_n^2}{\sum\limits_{n=1}^{\infty}c_n}$$ which is finite iff $\sum\limits_{n=1}^{\infty}b_n\cdot c_n<+\infty$. $\hfill{\Box}$
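As a numerical illustration of this criterion (a sketch of ours; the concrete choice $b_n=2^n$, $c_n=4^{-n}$ is not part of the example above), the truncated averages converge to the closed-form value $\left(\sum b_nc_n+\tfrac12\sum c_n^2\right)/\sum c_n=(1+\tfrac{1}{30})/\tfrac13=3.1$:

```python
# Numerical check of the example above (the concrete choice b_n = 2**n,
# c_n = 4**(-n), n >= 1, is our own): it satisfies 0 < c_n < 1,
# sum c_n = 1/3 < oo, b_n + c_n <= b_{n+1}, and sum b_n c_n = 1 < oo,
# so Avg^1(H) is finite, equal to (1 + 1/30)/(1/3) = 3.1.

def avg1_partial(N):
    """Avg^1 of the union of the first N intervals [b_n, b_n + c_n]."""
    num = den = 0.0
    for n in range(1, N + 1):
        b, c = 2.0**n, 4.0**(-n)
        num += c * (b + 0.5 * c)   # integral of z over [b_n, b_n + c_n]
        den += c                   # Lebesgue measure of [b_n, b_n + c_n]
    return num / den

print(avg1_partial(40))  # converges to 3.1
```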
\section{Introduction} The conventional description of black hole evaporation is based on quantum field theory on curved spacetime, with the back-reaction on the geometry taken into account via a mean-field approximation \cite{Hawking1974}. The approximation breaks down before evaporation brings the black hole mass down to the Planck mass ($m_{{Pl}}\!=\!\sqrt{\hbar c/G} \sim$ the mass of a $\frac12$-centimeter hair). To figure out what happens next we need quantum gravity. A quantum-gravitational process that disrupts a black hole was studied in \cite{Rovelli2014,Haggard2014,DeLorenzo2016,Christodoulou2016,Christodoulou2018}. It is a conventional quantum tunneling process, in which the classical equations (here the Einstein equations) are violated for a brief interval. This alters the causal structure predicted by classical general relativity \cite{Frolov:1979tu,Frolov:1981mz,Stephens1994,modesto2004disappearance,Modesto2006a,Mazur:2004, Ashtekar:2005cj,Balasubramanian:2006,Hayward2006, Hossenfelder:2009fc,Hossenfelder:2010a, frolov:BHclosed, GambiniPullin2014a,GambiniPullin2014b, Bardeen2014,Giddings1992b}, by modifying the dynamics of the local apparent horizon. As a result, the \emph{apparent} horizon fails to evolve into an \emph{event} horizon. Crucially, the black hole does not just `disappear': it tunnels into a white hole \cite{Narlikar1974,HAJICEK2001,Ambrus2005,Olmedo:2017lvt} (from the outside, an object very similar to a black hole), which can then leak out the information trapped inside. The likely end of a black hole is therefore not to suddenly pop out of existence, but to tunnel into a white hole, which can then slowly emit whatever is inside and disappear, possibly only after a long time \cite{Aharonov1987,Giddings1992a,Callan1992,Giddings1993,Preskill1993,Banks1993,Banks1995, Ashtekar:2008jd,Ashtekar:2010qz,Ashtekar:2010hx,Rama2012,Almheiri:2013wka, Chen:2015,Malafarina:2017}.
The tunneling probability may be small for a macroscopic black hole, but becomes large toward the end of the evaporation. This is because it increases as the mass decreases. Specifically, it will be suppressed at most by the standard tunneling factor \begin{equation} p\sim e^{-{S_E}/{\hbar}}\label{suppression} \end{equation} where $S_E$ is the Euclidean action for the process. This can be estimated on dimensional grounds for a stationary black hole of mass $m$ to be $S_E\sim G m^2/c$, giving \begin{equation} p\sim e^{-({m}/{m_{{Pl}}})^2},\label{suppression2} \end{equation} which becomes of order unity towards the end of the evaporation, when $m \to m_{{Pl}}$. A more detailed derivation is in \cite{Christodoulou2016,Christodoulou2018}. As the black hole shrinks towards the end of its evaporation, the probability to tunnel into a white hole is no longer suppressed. The transition gives rise to a long-lived white hole with a Planck-size horizon and a very large but finite interior. Remnants in the form of geometries with a small throat and a long tail were called ``cornucopions'' in \cite{Banks1992} by Banks \emph{et al.} and studied in \cite{Giddings1992c, Banks1993b,Giddings1994,Banks1995}. As far as we are aware, the connection to the conventional white holes of general relativity was never made. This scenario offers a resolution of the information-loss paradox. Since there is an {\em apparent} horizon but no {\em event} horizon, a black hole can trap information for a long time, releasing it after the transition to a white hole.
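The two forms of the suppression factor can be checked against each other numerically; the following sketch (our own illustration, in SI units; the 1-gram mass is an arbitrary choice) confirms that $S_E/\hbar = Gm^2/(c\hbar)$ coincides with $(m/m_{Pl})^2$:

```python
import math

# Dimensional check (our numerical illustration, not from the text):
# S_E ~ G m^2 / c has units of action, and S_E/hbar = (m/m_Pl)^2 exactly,
# since m_Pl = sqrt(hbar c / G).
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI values
m_Pl = math.sqrt(hbar * c / G)                # ~ 2.18e-8 kg

def exponent_from_action(m):
    return (G * m**2 / c) / hbar              # S_E / hbar

def exponent_from_planck_mass(m):
    return (m / m_Pl)**2

m = 1e-3                                      # a 1-gram black hole (illustrative)
print(exponent_from_action(m), exponent_from_planck_mass(m))  # equal, ~2.1e9
print(exponent_from_planck_mass(m_Pl))        # 1.0: suppression disappears at m ~ m_Pl
```

So even for a tiny, 1-gram hole the suppression exponent is enormous, and only as $m\to m_{Pl}$ does it reach order unity.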
If we have a quantum field evolving on a black hole background metric and we call $S$ its (renormalized) entanglement entropy across the horizon, then consistency requires the metric to satisfy non-trivial conditions: (a) The remnant has to store information with entropy $S\sim m_o^2/\hbar$ (we adopt units $G\!\!=\!\!c\!\!=\!\!1$, while keeping $\hbar$ explicit), where $m_o$ is the \emph{initial} mass of the hole, before evaporation \cite{Marolf2017}. This is needed to purify Hawking radiation. (b) Because of its small mass, the remnant can release the inside information only slowly---hence it must be long-lived. Unitarity and energy considerations impose that its lifetime be equal to or larger than $\tau_R\sim m_o^4/\hbar^{3/2}$ \cite{Preskill1993,Bianchi:2014bma}. (c) The metric has to be stable under perturbations, so as to guarantee that information can be released \cite{Frolov2012,Barrabes:1993,Poisson:1994, DeLorenzo2016}. In this paper we show that under simple assumptions the effective metric that describes standard black hole evaporation followed by a transition to a Planck-mass white hole satisfies precisely these conditions. This result shows that this scenario is consistent with known physics and does not violate unitarity. One reason this scenario may not have been recognised earlier is a number of prejudices (including against white holes), which we discuss below. But the scenario presented here turns out to be consistent with general expectations held \emph{both} in the AdS/CFT community (see for instance \cite{Engelhardt2016, Fitzpatrick2016}) and in the quantum gravity community (see for instance the `paradigm' \cite{Ashtekar:2005cj}). \section{The internal geometry before quantum gravity becomes relevant} We begin by studying the geometry \emph{before} any quantum gravitational effect becomes relevant.
The standard classical conformal diagram of a black hole formed by collapsing matter is depicted in Figure \ref{uno}, for the case of spherical symmetry. \begin{figure}[b] \includegraphics[height=4.5cm]{evaporation4.pdf} \caption{\em Conformal diagram of a classical black hole. The dashed line is the horizon. The dotted line is a Cauchy surface $\Sigma$. In regions $A$ and $B$ we expect (distinct) quantum gravitational effects and classical GR is unreliable.} \label{uno} \end{figure} Classical general relativity becomes insufficient when either ($a$) curvature becomes sufficiently large, or ($b$) sufficient time has elapsed. The two corresponding regions, $A$ and $B$, where we expect classical general relativity to fail are depicted in the figure. Consider the geometry \emph{before} these regions, namely on a Cauchy surface $\Sigma$ that crosses the horizon at some (advanced) time $v$ after the collapse. See Figure \ref{uno}. We are interested in particular in the geometry of the portion $\Sigma_i$ of $\Sigma$ which is \emph{inside} the horizon. Lack of focus on this interior geometry is, in our opinion, one of the sources of the current confusion. Notice that we are here fully in the expected domain of validity of established physics. The interior Cauchy surface can be conveniently fixed as follows. First, observe that a (2d, spacelike) sphere $\mathcal{S}$ in (4d) Minkowski space determines a preferred (3d) ball $\Sigma_i$ bounded by $\mathcal{S}$: the one sitting on the same linear subspace---simultaneity surface---as $\mathcal{S}$; or, equivalently, the one with maximum volume. (Deformations from linearity in Minkowski space \emph{decrease} the volume.) The first characterisation---linearity---makes no sense on a curved space, but the second---extremized volume---does. Following \cite{Christodoulou2015}, we use this characterization to fix $\Sigma_i$, which, incidentally, provides an invariant definition of the ``Volume inside $\mathcal{S}$''.
Large interior volumes and their possible role in the information paradox have also been considered in \cite{Stanford2014,Perez2015,Ori:2016, AshtekarILQGS:2015,Susskind:2018fmx}. The interior is essentially a very long tube. As time passes, the radius of the tube shrinks, while its length increases, see Figure \ref{due}. \begin{figure}[h] \includegraphics[height=3.5cm]{LongBH.pdf} \caption{\em The interior geometry of an old black hole: a very long thin tube, whose length increases and whose radius decreases with time. Notice that it is finite, unlike the Einstein-Rosen bridge.} \label{due} \end{figure} It is shown in \cite{Christodoulou2015,Bengtsson2015,Ong2015,Wang2017} that for large time $v$ the volume of $\Sigma_i$ is proportional to the time from collapse: \begin{equation} V \sim 3\sqrt{3}\ m_o^2\, v. \label{tre} \end{equation} Christodoulou and De Lorenzo have shown \cite{Christodoulou2016a} that this picture is not changed by Hawking evaporation: toward the end of the evaporation the area of the (apparent) horizon of the black hole has shrunk substantially, but the length of the interior tube keeps growing linearly with the time elapsed from the collapse. This can be huge for a black hole that started out as macroscopic ($m_o\gg m_{Pl}$), even if the horizon area and mass have become small. The key point is that \eqref{tre} still holds, with $m_o$ being the \emph{initial} mass of the hole \cite{Christodoulou2016a}, see also \cite{Ong:2015}. The essential fact that is often neglected, generating confusion, is that an old black hole that has evaporated down to mass $m$ has the same exterior geometry as a young black hole with the same mass, {\em but not the same interior}: an old, largely evaporated hole has an interior vastly bigger than that of a young black hole with the same mass. This is conventional physics.
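To get a feeling for the magnitudes in \eqref{tre}, here is a rough numerical sketch (the solar-mass and age-of-the-universe inputs are our own illustrative choices, in geometric units):

```python
# Order-of-magnitude illustration of V ~ 3*sqrt(3) m_o^2 v (geometric units,
# G = c = 1, lengths in km). The solar-mass / age-of-universe numbers are
# our own example, not taken from the text.
import math

m_o = 1.48                 # one solar mass in km (G M_sun / c^2)
c_km = 2.998e5             # km/s, to convert time to length
v = 4.35e17 * c_km         # ~ age of the universe as an advanced time, in km

V = 3 * math.sqrt(3) * m_o**2 * v     # interior volume, km^3
V_sun = 4/3 * math.pi * 6.96e5**3     # volume of the Sun, km^3
print(f"V ~ {V:.1e} km^3, i.e. ~ {V / V_sun:.1e} solar volumes")
```

A hole whose horizon is a few kilometres across thus hides, after a Hubble time, an interior of roughly a million solar volumes.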
\\ To understand the end of a black hole's evaporation, it is important to distinguish the phenomena concerning the two regions $A$ and $B$ where classical general relativity becomes unreliable. Region $A$ is characterised by large curvature and covers the singularity. According to classical general relativity the singularity never reaches the horizon. (N.B.: The fact that two lines meet at the boundary of a conformal diagram does \emph{not} mean that they meet in the physical spacetime.) Region $B$, instead, surrounds the end of the evaporation, which involves the horizon, and affects what happens outside the hole. Taking evaporation into account, the area of the horizon shrinks progressively until reaching region $B$. The quantum gravitational effects in regions $A$ and $B$ are distinct, and confusing them is a source of misunderstanding. Notice that a generic spacetime region in $A$ is \emph{spacelike separated} from, and in general \emph{very distant} from, region $B$. By locality, there is no reason to expect these two regions to influence one another. The quantum-gravitational processes happening in these two regions must be considered separately. \section{The $A$ region: Transitioning Across the Singularity} To study the $A$ region, let us focus on an arbitrary finite portion of the collapsing interior tube. As we approach the singularity, the Schwarzschild radius $r_s$, which is a temporal coordinate inside the hole, decreases and the curvature increases. When the curvature approaches Planckian values, the classical approximation becomes unreliable. Quantum gravity effects are expected to bound the curvature \cite{Narlikar1974,Frolov:1979tu,Frolov:1981mz,Stephens1994,modesto2004disappearance,Mazur:2004, Ashtekar:2005cj, Balasubramanian:2006,Hayward2006, Hossenfelder:2009fc,Hossenfelder:2010a,frolov:BHclosed, Rovelli2013d, Bardeen2014,Giddings1992a,Giddings1992b,Yonika:2017qgo,Olmedo:2017lvt}. Let us see what a bound on the curvature can yield.
Following \cite{DAmbrosio}, consider the line element \begin{equation} ds^2=-\frac{4(\tau^2+l)^2}{2m-\tau^2}d\tau^2+\frac{2m-\tau^2}{\tau^2+l}dx^2+(\tau^2+l)^2d\Omega^2, \label{me2} \end{equation} where $l\!\ll\!m$. This line element defines a genuine Lorentzian spacetime, with no divergences and no singularities. Curvature is bounded. For instance, the Kretschmann invariant $K\equiv R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ is easily computed to be \begin{eqnarray} K(\tau)&\approx& \frac{9\, l^2-24\, l \tau^2+ 48\, \tau^4}{ (l + \tau^2)^8}m^2 \label{boundedDive} \end{eqnarray} in the large mass limit, which has the \emph{finite} maximum \begin{equation} K(0)\approx\frac{9\, m^2}{ l^6}. \label{K} \end{equation} For all the values of $\tau$ where $ l\ll \tau^2 < 2m$ the line element is well approximated by taking $l=0$, which gives \begin{equation} ds^2=-\frac{4\tau^4}{2m-\tau^2}d\tau^2+\frac{2m-\tau^2}{\tau^2}dx^2+\tau^4d\Omega^2. \label{me} \end{equation} For $\tau<0$, this is the Schwarzschild metric inside the black hole, as can be readily seen by going to Schwarzschild coordinates \begin{equation} t_s=x, \ \quad \text{and} \quad \ r_s=\tau^2. \end{equation} For $\tau>0$, this is the Schwarzschild metric inside a \emph{white} hole. Thus the metric \eqref{me2} represents a continuous transition of the geometry of a black hole into the geometry of a white hole, across a region of Planckian, but bounded, curvature. Geometrically, the $\tau=\mathrm{const}$ (spacelike) surfaces foliate the interior of a black hole. Each of these surfaces has the topology $\mathcal{S}^2 \times \mathbb{R}$, namely it is a long cylinder. As time passes, the radial size of the cylinder shrinks while the axis of the cylinder gets stretched. Around $\tau=0$ the cylinder reaches a minimal size, and then smoothly bounces back and starts increasing its radial size and shrinking its length. The cylinder never reaches zero size but bounces at a small finite radius $l$.
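The stated reduction to Schwarzschild can be verified symbolically; the following sympy sketch (ours) checks that the substitution $t_s=x$, $r_s=\tau^2$ maps the interior Schwarzschild metric onto the $l=0$ line element \eqref{me}:

```python
# Symbolic check (sympy) that with t_s = x, r_s = tau^2 the l = 0 line element
#   ds^2 = -4 tau^4/(2m - tau^2) dtau^2 + (2m - tau^2)/tau^2 dx^2 + tau^4 dOmega^2
# is the Schwarzschild metric inside the horizon,
#   ds^2 = -(2m/r - 1)^(-1) dr^2 + (2m/r - 1) dt^2 + r^2 dOmega^2.
import sympy as sp

tau, m = sp.symbols('tau m', positive=True)
r = tau**2                       # r_s = tau^2, hence dr = 2 tau dtau

# Schwarzschild interior components, pulled back with dr^2 -> (2 tau)^2 dtau^2
g_tautau = -(2*m/r - 1)**(-1) * (2*tau)**2
g_xx     =  (2*m/r - 1)          # dt_s = dx
g_ang    =  r**2

assert sp.simplify(g_tautau - (-4*tau**4/(2*m - tau**2))) == 0
assert sp.simplify(g_xx - (2*m - tau**2)/tau**2) == 0
assert sp.simplify(g_ang - tau**4) == 0
print("metric components match")
```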
The Ricci tensor vanishes up to terms $O(l/m)$. The resulting geometry is depicted in Figure \ref{4}. The region around $\tau=0$ is the smoothing of the central black hole singularity at $r_s=0$. \begin{figure}[t] \includegraphics[height=4cm]{BHinteriorv2.pdf} \caption{\small \em The transition across the $A$ region.} \label{4} \end{figure} This geometry can be given a simple physical interpretation. General relativity is not reliable at high curvature, because of quantum gravity. Therefore the ``prediction'' of the singularity by the classical theory has no grounds. High curvature induces quantum particle creation, including gravitons, and these can have an effective energy momentum tensor that back-reacts on the classical geometry, modifying its evolution. Since the energy momentum tensor of these quantum particles can violate energy conditions (Hawking radiation does), the evolution is not constrained by Penrose's singularity theorem. Equivalently, we can assume that the expectation value of the gravitational field satisfies \emph{modified} effective equations that incorporate the quantum corrections and alter the classical evolution. The expected scale of the corrections is the Planck scale. As long as $l\ll m$ the correction to the classical theory is negligible in all regions of small curvature; as we approach the high-curvature region the growth of the curvature is suppressed with respect to the classical evolution, and the geometry continues smoothly past $\tau=0$. One may be tempted to take $l$ to be Planckian, $l_{Pl}=\sqrt{\hbar G/c^3}\sim\sqrt{\hbar}$, but this would be wrong. The value of $l$ can be estimated from the requirement that the curvature is bounded at the Planck scale, $K(0) \sim 1/\hbar^2$.
Using this in \eqref{K} gives \begin{equation} \label{lscale} l\sim (m\,\hbar)^{\frac13}, \end{equation} or, restoring for a moment physical units, \begin{equation} {l}\sim l_{Pl} \left(\frac{m}{m_{Pl}}\right)^{\frac13}, \end{equation} which is much larger than the Planck length when $m\gg m_{Pl}$ \cite{Rovelli2014}. The three-geometry inside the hole at the transition time is \begin{equation} ds_3^2=\frac{2m}{l}dx^2+l^2d\Omega^2. \label{me3} \end{equation} The volume of the ``Planck star'' \cite{Rovelli2014}, namely of the minimal-radius surface, is \begin{equation} V=4\pi l^2\, \sqrt{\frac{2m}{l}}\,(x_{max}-x_{min}). \end{equation} The range of $x$ is determined by the lifetime of the hole from the collapse to the onset of region $B$, as $x=t_s$. If region $B$ is at the end of the Hawking evaporation, then $(x_{max}-x_{min})\sim m^3/\hbar$ and, from Eq. \eqref{lscale}, $l \sim (m \hbar)^{1/3}$, leading to an internal volume at crossover that scales as \begin{equation} V \sim m^4/\sqrt{\hbar}. \label{V} \end{equation} We observe that in the classical limit the interior volume diverges, but quantum effects make it finite. \\ The $l\to 0$ limit of the line element \eqref{me2} defines a metric space which is a Riemannian manifold almost everywhere and which can be taken as a solution of the Einstein equations that is not everywhere a Riemannian manifold \cite{DAmbrosio}. Geodesics of this solution crossing the singularity are studied in \cite{DAmbrosio}: they are well behaved at $\tau=0$ and they cross the singularity in a \emph{finite} proper time. The possibility of this natural continuation of the Einstein equations across the central singularity of the Schwarzschild metric has been noticed repeatedly by many authors. To the best of our knowledge it was first observed by Synge in the fifties \cite{Synge1950} and rediscovered by Peeters, Schweigert and van Holten in the nineties~\cite{Peeters:1994jz}.
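The scalings \eqref{lscale} and \eqref{V} follow mechanically from the curvature bound; a short symbolic sketch (ours, using sympy) makes the bookkeeping explicit:

```python
# Scaling check (sympy): bounding K(0) = 9 m^2 / l^6 at ~ 1/hbar^2 gives
# l ~ (m hbar)^(1/3); inserting this and x_max - x_min ~ m^3/hbar into
# V = 4 pi l^2 sqrt(2m/l) (x_max - x_min) yields V ~ m^4 / sqrt(hbar).
import sympy as sp

m, hbar = sp.symbols('m hbar', positive=True)

# Solve 9 m^2 / l^6 = 1/hbar^2 for the positive real l
l = sp.root(9 * m**2 * hbar**2, 6)          # = 9^(1/6) (m hbar)^(1/3)
assert sp.simplify(l**6 - 9 * m**2 * hbar**2) == 0

V = 4 * sp.pi * l**2 * sp.sqrt(2*m/l) * (m**3 / hbar)
ratio = sp.simplify(V / (m**4 / sp.sqrt(hbar)))
print(ratio)                 # a pure number: no m or hbar left over
assert not (ratio.free_symbols & {m, hbar})
```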
A similar observation has recently been made in the context of cosmology in~\cite{Koslowski:2016hds}. As we shall see in the next section, what the $\hbar \to 0$ limit does is to confine the transition inside an event horizon, making it invisible from the exterior. Reciprocally, the effect of turning $\hbar$ on is to de-confine the interior of the hole. \section{The transition and the global structure} The physics of the $B$ region concerns gravitational quantum phenomena that can happen around the horizon after a sufficiently long time. The Hawking radiation provides the upper bound $\sim m_o^3/\hbar$ for this time. After this time the classical theory does not work anymore. Before studying the details of the $B$ region, let us consider what we have so far. \begin{figure}[h] \includegraphics[height=3.8cm]{ecaporationold2.pdf}\hspace{.3cm} \raisebox{1.6cm}{$\Rightarrow$}\hspace{.7cm} \raisebox{-2mm}{\includegraphics[height=4.1cm]{bounce2.pdf}} \caption{\em Left: A commonly drawn diagram for black hole evaporation that we argue against. Right: A black-to-white hole transition. The dashed lines are the horizons.} \label{7} \end{figure} The spacetime diagram utilized to discuss the black hole evaporation is often drawn as in the left panel of Figure \ref{7}. What happens in the circular shaded region? What physics determines it? This diagram rests on an unphysical assumption: that the Hawking process proceeds beyond the Planck curvature at the horizon and pinches off the large interior of the black hole from the rest of spacetime. This assumption uses quantum field theory on curved spacetimes beyond its regime of validity. Without a physical mechanism for the pinching off, this scenario is unrealistic. 
Spacetime diagrams representing the possible formation and full evaporation of a black hole more realistically abound in the literature \cite{Narlikar1974,Frolov:1979tu,Frolov:1981mz,Stephens1994,modesto2004disappearance,Mazur:2004, Ashtekar:2005cj, Balasubramanian:2006,Hayward2006, Hossenfelder:2009fc,Hossenfelder:2010a,frolov:BHclosed, Bardeen2014,Giddings1992a,Giddings1992b} and they are all similar. In particular, it is shown in \cite{Haggard2014,DeLorenzo2016} that the spacetime represented in the right panel of Figure \ref{7} can be an {\em exact solution of the Einstein equations}, except for the two regions $A$ and $B$, but including regions within the horizons. If the quantum effects in the region $A$ are simply the crossing described in the previous section, this determines the geometry of the region past it, and shows that the entire problem of the end of a black hole reduces to the quantum transition in the region $B$. The important point is that there are \emph{two} regions inside horizons: one below and one above the central singularity. That is, the black hole does not simply pop out of existence: it tunnels into a region that is screened inside an (anti-trapping) horizon. Since it is anti-trapped, this region is actually the interior of a \emph{white} hole. Thus, black holes die by tunneling into white holes. Unlike in the case of the left panel of Figure \ref{7}, running the time evolution backwards now makes sense: the central singularity is screened by a horizon (`time reversed cosmic censorship') and the overall backward evolution behaves qualitatively (not necessarily quantitatively, as initial conditions may differ) like the time-forward one. Since we have the explicit metric across the central singularity, we know the features of the resulting white hole.
The main consequence is that its interior is what results from the transition described in the previous section: namely a white hole born possibly with a small horizon area, but in any case with {\em a very large interior volume}, inherited from the black hole that generated it. If the original black hole is an old hole that started out with a large mass $m_o$, then its interior is a very long tube. Continuity of the size of the tube in the transition across the singularity results in a white hole formed by the bounce which initially also consists of a very long interior tube, as in Figure \ref{9}. Subsequent evolution shortens it (because the time evolution of a white hole is the time reversal of that of a black hole), but this process can take a long time. Remarkably, this process results in a white hole that has a small Planckian mass and a long life determined by how old the parent black hole was. In other words, the outcome of the end of a black hole evaporation is a long-lived remnant. \begin{figure}[h] \includegraphics[height=7cm]{bounce4.pdf} \vspace{-1em} \caption{\em Black hole bounce, with a sketch of the inside geometries, before and after the quantum-gravitational transition.} \label{9} \end{figure} The time scales of the process can be labelled as in Figure \ref{9}. We call $v_o$ the advanced time of the collapse, $v_-$ and $v_+$ the advanced times of the onset and end of the quantum transition, $u_o$ the retarded time of the final disappearance of the white hole, and $u_-$ and $u_+$ the retarded times of the onset and end of the quantum transition. The black hole lifetime is \begin{equation} \tau_{bh}=v_--v_o. \end{equation} The white hole lifetime is \begin{equation} \tau_{wh}=u_o-u_+. \end{equation} We assume that the duration of the quantum transition of the $B$ region satisfies $u_+-u_- = v_+-v_-\equiv \Delta \tau$.
Disregarding Hawking evaporation, a metric describing this process outside the $B$ region can be written explicitly by cutting and pasting the extended Schwarzschild solution, following \cite{Haggard2014}. This is illustrated in Figure \ref{12}: two Kruskal spacetimes are glued across the singularity as described in the previous section, and the shaded region is the metric of the portion of spacetime outside a collapsing shell (here chosen to be null). \begin{figure}[h] \includegraphics[height=3cm]{BH-to-WH-Kruskal.pdf}\hspace{1.6cm} \includegraphics[height=3cm]{BH-to-WH-glued.pdf} \vspace{-1em} \caption{\em Left: Two Kruskal spacetimes are glued at the singularity. The grey region is the metric of a black-to-white hole transition outside the collapsing and the exploding null shells. Right: The corresponding regions in the physical spacetime.} \label{12} \end{figure} While the location of the $A$ region is determined by the classical theory, the location of the $B$ region, instead, is determined by quantum theory. The $B$ process is indeed a typical quantum tunneling process: it has a long lifetime. A priori, the value of $\tau_{bh}$ is determined probabilistically by quantum theory. As in conventional tunneling, in a stationary situation (when the horizon area varies slowly), we expect the probability $p$ per unit time for the tunneling to happen to be time independent. This implies that the normalised probability density $P(t)$ for the tunneling to happen at time $t$ is governed by $dP(t)/dt=-p P(t)$, namely is \begin{equation} P(t)=\frac{1}{\tau_{bh}}e^{-\frac{t}{\tau_{bh}}}, \end{equation} which is normalised ($\int_0^\infty P(t) dt =1$) and where $\tau_{bh}$ satisfies \begin{equation} \tau_{bh}=1/p. \end{equation} We note parenthetically that the quantum spread in the lifetime can be a source of apparent unitarity violation, for the following reason.
In conventional nuclear decay, a tunneling phenomenon, the quantum indetermination in the decay time is of the same order as the lifetime. The unitary evolution of the state of a particle trapped in the nucleus is such that the state slowly leaks out, spreading over a vast region. A Geiger counter has a small probability of detecting the particle at any given time. Once the detection happens, there is an apparent violation of unitarity. (In the Copenhagen language the Geiger counter measures the state, causing it to collapse, losing information. In the Many Worlds language, the state splits into a continuum of branches that decohere, and the information of a single branch is less than the initial total information.) In either case, the evolution of the quantum state from the nucleus to a \emph{given} Geiger counter detection is not unitary; unitarity is recovered by taking into account the full spread of different detection times. The same must be true for the tunneling that disrupts the black hole. If tunneling happens at a time $t$, unitarity can only be recovered by taking into account the full quantum spread of the tunneling time, which is to say: over different future geometries. The quantum state is actually given by a quantum superposition of a continuum of spacetimes as in Figure \ref{9}, each with a different value of $v_-$ and $v_+$. We shall not further pursue here the analysis of this apparent source of unitarity violation, but we indicate it for future reference. \section{The $B$ region: the Horizon at the Transition} The geometry surrounding the transition in the $B$ region is depicted in detail in Figure \ref{11}. \begin{figure}[h] \includegraphics[height=3.25cm]{BRegionSmall.pdf} \hspace{.5cm} \includegraphics[height=3cm]{BRegionsigns.pdf} \vspace{-1em} \caption{\em The $B$ region. Left: Surfaces of equal Schwarzschild radius are depicted.
Right: The signs of the null Kruskal coordinates around $B$.} \label{11} \end{figure} The metric of the entire neighbourhood of the $B$ region is an extended Schwarzschild metric. It can therefore be written in null Kruskal coordinates \begin{equation} ds^2=-\frac{32m^3}{r}e^{-\frac{r}{2m}} du dv + r^2 d\Omega^2,\label{kr} \end{equation} where \begin{equation} \left(1-\frac{r}{2m}\right)e^{\frac{r}{2m}}=uv. \label{r} \end{equation} On the two horizons we have respectively $v=0$ and $u=0$, and separate regions where $u$ and $v$ have different signs as in the right panel of Figure \ref{11}. Notice the rapid change of the value of the radius across the $B$ region, which yields a rapid variation of the metric components in \eqref{kr}. To fix the region $B$, we need to specify more precisely its boundary, which we have not done so far. It is possible to do so by identifying it with the diamond (in the 2d diagram) defined by two points $P_+$ and $P_{-}$ with coordinates $v_\pm, u_\pm$ both outside the horizon, at the same radius $r_P$, and at opposite timelike distance from the bounce time, see Figure \ref{PPP}. \begin{figure}[h] \includegraphics[height=4cm]{PPP.pdf} \vspace{-2em} \caption{\em The $B$ transition region.} \label{PPP} \end{figure} The same radius $r_P$ implies \begin{equation} v_+u_+=v_-u_-\equiv \left(1-\frac{r_P}{2m}\right)e^{\frac{r_P}{2m}}. \end{equation} The same time from the horizon implies that the light lines $u=u_-$ and $v=v_+$ cross at $t_s=0$, or $u+v=0$, hence \begin{equation} u_-=-v_+. \end{equation} This crossing point is the outermost reach of the quantum region, with radius $r_m$ determined by \begin{equation} v_+u_- \equiv \left(1-\frac{r_m}{2m}\right)e^{\frac{r_m}{2m}}. \end{equation} The region is then entirely specified by two parameters. We can take them to be $r_P$ and $\Delta\tau = v_+-v_-\sim u_+-u_-$. The first characterizes the radius at which the quantum transition starts; the second, its duration.
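The implicit relation \eqref{r} between the radius $r$ and the Kruskal product $uv$ is easily inverted numerically. A minimal sketch (in units $m=1$, using a simple Newton iteration of our own; any root finder would do) recovers $r$ from $uv$ and checks the round trip, e.g. at $r_P = 7m/3$:

```python
import math

def r_of_uv(uv, m=1.0, tol=1e-12):
    """Solve (1 - r/2m) exp(r/2m) = uv for r by Newton's method.

    With x = r/2m:  f(x) = (1 - x) e^x - uv  and  f'(x) = -x e^x.
    uv < 0 outside the horizons (r > 2m); 0 < uv < 1 inside (0 < r < 2m).
    """
    x = 1.5 if uv < 0 else 0.5   # start on the correct side of the horizon
    for _ in range(100):
        ex = math.exp(x)
        step = ((1.0 - x) * ex - uv) / (-x * ex)
        x -= step
        if abs(step) < tol:
            break
    return 2.0 * m * x

# Round trip at r_P = 7m/3 (outside the horizon, so uv < 0):
r_P = 7.0 / 3.0
uv_P = (1.0 - r_P / 2.0) * math.exp(r_P / 2.0)
print(uv_P, r_of_uv(uv_P))
```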
(Strictly speaking, we could also have $ v_+-v_-$ and $u_+-u_-$ of different orders of magnitude, but we do not explore this possibility here.) There are indications about both metric scales in the literature. In \cite{Haggard2014,Haggard2016}, arguments were given for $r_P\sim 7/3 \ m$. Following \cite{Christodoulou2016}, the duration of the transition has been called ``crossing time" and computed by Christodoulou and D'Ambrosio in \cite{Christodoulou2018,Marios:2018} using Loop Quantum Gravity: the result is $\Delta \tau \sim m$, which can be taken as a confirmation of earlier results \cite{Ambrus2005,Barcelo2014b,Barcelo2016} obtained with other methods. The two crucial remaining parameters are the black hole and the white hole lifetimes, $\tau_{bh}$ and $\tau_{wh}$. The result in \cite{Christodoulou2018} indicates also that $p$, the probability of tunneling per unit time, is suppressed exponentially by a factor $e^{-m^2/\hbar}$. Here $m$ is not the initial mass $m_o$ of the black hole at the time of its formation; rather, it is the mass of the black hole at the decay time. This is in accord with the semiclassical estimate that tunneling is suppressed as in (\ref{suppression}) and (\ref{suppression2}). As mentioned in the introduction, because of Hawking evaporation, the mass of the black hole shrinks to Planckian values in a time of order $m_o^3/\hbar$, where the tunneling probability becomes of order unity, giving \begin{equation} \tau_{{bh}}\sim m_o^3/\hbar \end{equation} and \begin{equation} \Delta\tau\sim \sqrt{\hbar}. \end{equation} We conclude that region $B$ has a Planckian size. We notice parenthetically that the value of $p$ above is at odds with the arguments given in \cite{Haggard2014} for a shorter lifetime $\tau_{{bh}}\sim m_o^2/\sqrt{\hbar}$. This might be because the analysis in \cite{Christodoulou2018} captures the dynamics of only a few of the relevant degrees of freedom, but we do not consider this possibility here.
The entire range of possibilities for the black to white transition lifetime, $ m_o^2/\sqrt{\hbar}\le\tau_{{bh}}\le m_o^3/\hbar$, may have phenomenological consequences, which have been explored in \cite{Barrau2014c,Barrau2014b,Barrau2015,Barrau2016,Rovelli2017}. (On hypothetical white hole observations see also \cite{Retter2012}). \section{Interior Volume and Purification Time} Consider a quantum field living on the background geometry described above. Near the black hole horizon there is production of Hawking radiation. Its back-reaction on the geometry gradually decreases the area of the horizon. This, in turn, increases the transition probability to a white hole. After a time $\tau_{{bh}}\sim m_o^3/\hbar$, the area of the black hole reaches the Planckian scale $A_{{bh}}(\text{final})\sim \hbar$, and the transition probability becomes of order unity. The volume of the transition surface is huge. To compute it with precision, we should compute the back-reaction of the inside component of the Hawking radiation, which gradually decreases the value of $m$ as the coordinate $x$ increases. Intuitively, the inside components of the Hawking pairs fall toward the singularity, decreasing $m$. Since most of the decrease is at the end of the process, we may approximate the full interior of the hole with that of a Schwarzschild solution of mass $m_o$, and the first-order estimate of the inside volume should not be affected by this process. Thus we may assume that the volume at the transition has the same order as the one derived in Eq. \eqref{V}, namely \begin{equation} V_{{bh}}(\text{final})\sim \sqrt{\hbar} \, m_o \, \tau_{bh} \sim m_o^4/\sqrt{\hbar}.\label{finalV} \end{equation} Using the same logic in the future of the transition, we approximate the inside metric of the white hole with that of a Schwarzschild solution of Planckian mass, since in the future of the singularity, the metric is again of Kruskal type, but now for a white hole of Planckian mass.
The last parameter to estimate is the lifetime $\tau_{{wh}}=u_0-u_+$ of the white hole produced by the transition. To do so, we can assume that the internal volume is conserved in the quantum transition. The volume of the region of Planckian curvature inside the white hole horizon is then \begin{equation} V_{{wh}}(u)\sim l^2\sqrt{\frac{m}{l}}\; \tau_{wh}, \label{Vw} \end{equation} where now $l\sim m \sim \sqrt{\hbar},$ and therefore \begin{equation} V_{wh}(\text{initial}) \sim \hbar \, \tau_{wh}. \end{equation} Gluing the geometry on the past side of the singularity to the geometry on the future side requires that the two volumes match, namely that \eqref{Vw} matches \eqref{finalV}, and this gives \begin{equation} \tau_{{wh}}\sim m_o^4/\hbar^{3/2}. \label{tauwh} \end{equation} This shows that the Planck-mass white hole is a long-lived remnant \cite{Christodoulou2016a}. With these results, we can address the black hole information paradox. The Hawking radiation reaches future infinity before $u_-$, and is described by a \emph{mixed} state with an entropy of order $m_o^2/\hbar$. This must be purified by correlations with field excitations inside the hole. Despite the small mass of the hole, the large internal volume \eqref{finalV} is sufficient to host these excitations \cite{Rovelli2017a}. This addresses the requirement (a) of the introduction, namely that there is a large information capacity. To release this entropy, the remnant must be long-lived. During this time, any internal information that was trapped by the black hole horizon can leak out. Intuitively, the interior member of a Hawking pair can now escape and purify the exterior quantum state. The long lifetime of the white hole allows this information to escape in the form of very low frequency particles, thus respecting bounds on the maximal entropy contained in a given volume with given energy.
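The chain of estimates leading to \eqref{tauwh} is pure power counting in $m_o$ and $\hbar$ (in units $G=c=1$). A minimal bookkeeping sketch, tracking each quantity as a pair of exponents $(a,b)$ standing for $m_o^a\,\hbar^b$:

```python
from fractions import Fraction as F

def mul(*qs):
    # Multiply scaling quantities, each a pair (power of m_o, power of hbar).
    return (sum(q[0] for q in qs), sum(q[1] for q in qs))

m0        = (F(1), F(0))
sqrt_hbar = (F(0), F(1, 2))
tau_bh    = (F(3), F(-1))                    # tau_bh ~ m_o^3 / hbar

V_final = mul(sqrt_hbar, m0, tau_bh)         # V_bh(final) ~ sqrt(hbar) m_o tau_bh
print(V_final)                               # expect exponents (4, -1/2): m_o^4 / sqrt(hbar)

# Matching V_wh(initial) ~ hbar * tau_wh to V_bh(final):
tau_wh = (V_final[0], V_final[1] - 1)
print(tau_wh)                                # expect exponents (4, -3/2): m_o^4 / hbar^(3/2)
```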
The lower bound imposed by unitarity and energy considerations is $\tau_R\sim m_o^4/\hbar^{3/2}$ \cite{Preskill1993,Bianchi:2014bma,Marolf2017} and this is precisely the white hole lifetime \eqref{tauwh} deduced above; hence the scenario satisfies the requirement (b) of the introduction. Therefore white holes realize precisely the long-lived remnant scenario for the end of the black hole evaporation that was conjectured and discussed mostly in the 1990's \cite{Banks1992,Giddings1992a,Giddings1992c,Giddings1993,Giddings1994,Banks1993,Banks1993b,Banks1995}. The last issue we should discuss is stability. Generically, white holes are known to be unstable under perturbations (see for instance Chapter 15 in \cite{Frolov2012} and references therein). The instability arises because short-wavelength modes are exponentially blue-shifted along the white hole horizon. In the present case, however, we have a Planck-size white hole. To run this instability argument for a Planckian white hole, it is necessary to invoke transplanckian perturbations; assuming no transplanckian perturbations exist, there is no instability to consider. This addresses the requirement (c). Alternatively: a white hole is unstable because it may re-collapse into a black hole with similar mass; therefore a Planck-size white hole can at most re-collapse into a Planck-size black hole; but this has probability of order unity to tunnel back into a white hole in a Planck time. Therefore the proposed scenario addresses the consistency requirements (a), (b), and (c) for the solution of the information-loss paradox and provides an effective geometry for the end-point of black hole evaporation: a long-lived Planck-mass white hole. \section{On White Holes} Notice that from the outside, a white hole is indistinguishable from a black hole.
This is obvious from the existence of the Kruskal spacetime, where the same region of spacetime (region I) describes both the exterior of a black hole and the exterior of a white hole. For $r_s\!>\!2m$, the conventional Schwarzschild line element describes equally well a black hole exterior and a white hole exterior. The difference lies only in what happens at $r=2m$. The only locally salient difference between a white and a black hole is that if we add some \emph{generic} perturbation or matter on a given constant $t_s$ surface, in (the Schwarzschild coordinate description of) a black hole we see matter falling towards the center and accumulating around the horizon, while in (the Schwarzschild coordinate description of) a white hole we see matter that accumulated around the horizon in the past moving away from the center. Therefore the distinction is only one of ``naturalness" of initial conditions: a black hole has ``special" boundary conditions in the future, a white hole has ``special" boundary conditions in the past. This difference can be described physically also as follows: if we look at a black hole (for instance when the Event Horizon Telescope \cite{Doeleman2009} examines Sagittarius A*), we see a black disk. This means that generic initial conditions on past null infinity give rise on future null infinity to a black spot with minimal incoming radiation: a ``special" configuration in the future sky. By time reversal symmetry, the opposite is true for a white hole: generic final conditions on future null infinity require a black spot with minimal outgoing radiation from past null infinity: a ``special" configuration in the past sky. We close this section by briefly discussing the ``no transition principle" considered by Engelhardt and Horowitz in \cite{Engelhardt2016}.
By assuming ``holographic" unitarity at infinity and observing that consequently information cannot leak out from the spacetime enclosed by a single asymptotic region, these authors rule out a number of potential scenarios, including the possibility of resolving generic singularities inside black holes. Remarkably, the scenario described here circumvents the no transition principle and permits singularity resolution in the bulk: the reason is that this singularity is confined to a finite spacetime region and does not alter the global causal structure. \section{On remnants} The long-lived remnant scenario provides a satisfactory solution to the black-hole information paradox. The main reason it was largely discarded is that remnants appeared to be exotic objects, extraneous to known physics. Here we have shown that they are not: white holes are well known solutions of the Einstein equations and they provide a concrete model for long-lived remnants. Two other arguments made long-lived remnants unpopular: Page's version of the information paradox, and the fact that if remnants existed they would be easily produced in accelerators. Neither of these arguments applies to the long-lived remnant scenario of this paper. We discuss them below. In its interactions with its surroundings, a black hole with horizon area $A$ behaves thermally as a system with entropy $S_{bh}=A/4\hbar$. This is a fact supported by a large number of convincing arguments, and it continues to hold for the dynamical horizons we consider here. The Bekenstein-Hawking entropy provides a good notion of entropy that satisfies Bekenstein's generalized second law, {\em in the approximation in which we can treat the horizon as an event horizon}. In the white hole remnant scenario this is a good approximation for a long time, but it fails at the Planck scale, when the black hole transitions to a white hole.
Let us assume for the moment that these facts imply the following hypothesis (see for instance \cite{Marolf2017}) \begin{quote} (H) The total number of available states for a quantum system living on the internal spatial slice $\Sigma_i$ of Figure \ref{uno} is $N_{bh}=e^{S_{bh}}=e^{A/4\hbar}$. \end{quote} Then, as noticed by Page \cite{Page1993a}, we immediately have an information paradox regardless of what happens at the end of the evaporation. The reason is that the entropy of the Hawking radiation grows with time. It is natural to interpret this entropy as correlation entropy with the Hawking quanta that have fallen inside the hole, but for this to happen there must be a sufficient number of available states inside the hole. If hypothesis (H) above is true, then this cannot be, because as the area of the horizon decreases with time, the number of available internal states decreases and becomes insufficient to purify the Hawking radiation. The time at which the radiation entropy surpasses the Bekenstein-Hawking entropy is known as the Page time. This has led many to hypothesize that the Hawking radiation is already purifying itself by the Page time: a consequence of this idea is the firewall scenario \cite{Almheiri2013}. The hypothesis (H) does not apply to white-hole remnants. As argued in \cite{Rovelli2017a}, growing interior volumes, together with the existence of local observables, imply that the number of internal states grows with time instead of decreasing as stated in (H). This is not in contradiction with the fact that a black hole behaves thermally in its interactions with its surroundings as a system with entropy $S=A/4\hbar$. The reason is that ``entropy" is not an absolute concept and the notion of entropy must be qualified.
Any definition of ``entropy" relies on a coarse graining, namely on ignoring some variables: these could be microscopic variables, as in the statistical mechanical notion of entropy, or the variables of a subsystem over which we trace, as in the von Neumann entropy. The Bekenstein-Hawking entropy correctly describes the thermal interactions of the hole with its surroundings, because the boundary is an outgoing null surface and $S_{bh}$ counts the number of states {\em that can be distinguished from the exterior}; but this is not the number of states that can be distinguished by local quantum field operators on $\Sigma_i$ \cite{Rovelli2017a}. See also \cite{Giddings2013}. Therefore there is no reason for the Hawking radiation to purify itself by the Page time. This point has been stressed by Unruh and Wald in their discussion of the evaporation process on the spacetime pictured in the left panel of Figure \ref{7}, see e.g. \cite{Unruh2017}. Our scenario differs from Unruh and Wald's in that the white hole transition allows the Hawking partners that fell into the black hole to emerge later and purify the state. They emerge slowly, over a time of order $m_o^4/\hbar^{3/2}$, in a manner consistent with the long life of the white hole established here. The second standard argument against remnants is that, if they existed, it would be easy to produce them. This argument assumes that a remnant has a small boundary area and little energy, but can have a very large number of states. The large number of states would contribute a large phase-space volume factor in any scattering process, making the production of these objects in scattering processes highly probable. Actually, since in principle these remnants could have an \emph{arbitrarily} large number of states, their phase-space volume factor would be infinite, and hence they would be produced spontaneously everywhere. This argument does not apply to white holes. 
The reason is that a white hole is screened by an anti-trapping horizon: the only way to produce it is through quantum gravity tunneling from a black hole! Moreover, to produce a Planck mass white hole with a large interior volume, we must first produce a \emph{large} black hole and let it evaporate for a long time. Therefore the threshold to access the full phase-space volume of white holes is high. A related argument is in \cite{Banks1993}, based on the fact that an infinite production rate is prevented by locality. In \cite{Giddings1994} Giddings questions this point by treating remnants as particles of an effective field theory; the field theory, however, may be a good approximation of such a highly non-local structure as a large white hole only in the regime where the large number of internal states is not seen. See also \cite{Banks1995}. \section{Conclusion} As a black hole evaporates, the probability to tunnel into a white hole increases. The suppression factor for this tunneling process is of order $e^{-{m^2}/{m^2_{Pl}}}$. Before reaching sub-Planckian size, the probability ceases to be suppressed and the black hole tunnels into a white hole. Old black holes have a large volume. Quantum gravitational tunneling results in a Planck-mass white hole that also has a large interior volume. The white hole is long-lived because it takes a while for its finite, but large, interior to become visible from infinity. The geometry outside the black to white hole transition is described by a single asymptotically-flat spacetime. The Einstein equations are violated in two regions: the Planck-curvature region A, for which we have given an effective metric that smooths out the singularity; and the tunneling region B, whose size and decay probability can be computed \cite{Christodoulou2018}. These ingredients combine to give a white hole remnant scenario. This scenario provides a way to address the information problem.
We distinguish two ways of encoding information, the first associated with the small area of the horizon and the second with the remnant's interior. The Bekenstein-Hawking entropy $S_{bh}=A/4\hbar$ is encoded on the horizon and counts states that can be distinguished only from outside. On the other hand, a white hole resulting from a quantum gravity transition has a large volume that is available to encode substantial information even when the horizon area is small. In the white hole scenario the horizon is an apparent horizon which, in contrast to an event horizon, allows information to be released. The long-lived white hole releases this information slowly and purifies the Hawking radiation emitted during evaporation. Quantum gravity resolves the information problem. \centerline{---} CR thanks Ted Jacobson, Steve Giddings, Gary Horowitz, Steve Carlip, and Claus Kiefer for very useful exchanges during the preparation of this work. EB and HMH thank Tommaso De Lorenzo for discussion of time scales. EB thanks Abhay Ashtekar for discussion of remnants. HMH thanks the CPT for warm hospitality and support, Bard College for extended support to visit the CPT with students, and the Perimeter Institute for Theoretical Physics for generous sabbatical support. MC acknowledges support from the SM Center for Space, Time and the Quantum and the Leventis Educational Grants Scheme. This work is supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. \bibliographystyle{utcaps}
\section{Introduction} One of the main tasks of condensed matter physics is the understanding of phases of matter. Traditionally, phase transitions were characterised following Landau's prescription, in terms of an order parameter. Then, the discovery of new phases of matter that did not break any symmetry, nor could be characterised by the usual order parameters, led to the introduction of topology in condensed matter systems. This new scenario emerged from the merging of physics and topology, and rests on a more subtle order that lies in the mathematical properties of the electronic wavefunctions. Experimentally, the first developments happened in the study of phase transitions in 2D electronic systems, which displayed a quantised Hall conductance \cite{qhe1982}. Then, the discovery of the fractional QHE \cite{fqhe1983} and of the quantum spin Hall insulator in HgTe quantum wells \cite{sphi2007} led to the large variety of systems displaying topological properties that have been discovered so far. Systems with non-trivial topological properties are changing the way electronics is developing. In particular, the discovery of materials with insulating bulk and metallic edges, which are also robust under a wide range of perturbations, will allow for important advances in spintronics \cite{spintopology2014}, magnetism \cite{magtopology2013} or, even further, the development of topological quantum computers \cite{qckitaev2003,majoranabox2017,firstthirdneighbours,generalizedSSH}. Understanding how these materials behave in realistic situations is crucial, and the study of canonical toy models with new terms is of the utmost relevance. This work is framed within this context. The starting point of this study is a canonical model of a 1D topological insulator: the Su-Schrieffer-Heeger (SSH) model \cite{ssh1980}. It is a tight-binding model for non-interacting, spinless electrons confined in a dimer chain.
It has been extensively studied both theoretically and experimentally \cite{continuumSSH1980,hubbardvspeierls}. In this work, we analyze the effect of adding arbitrarily long-range hoppings to the SSH model, which we hereafter call the extended SSH model. By examining the symmetries that are preserved or broken in the resulting system, we can conclude that the presence of even and odd hopping terms has different implications for the topological properties. Hoppings to even neighbours break particle-hole and chiral (also known as sublattice) symmetries, but under certain constraints we are able to find gapped configurations with edge states. On the other hand, odd neighbours do not break any fundamental symmetries of the chain, allowing for the appearance of larger values of the topological invariant. More concretely, we study in detail the case with first- and second-neighbour hoppings, as well as first-, second- and third-neighbour hoppings. We also discuss the feasibility of larger winding number configurations by including an AC driving field. This allows one to tune the hopping amplitudes into unconventional configurations. Furthermore, we examine the effect of diagonal and off-diagonal disorder on the previous results. From the topological point of view, diagonal disorder breaks sublattice symmetry, and therefore affects the topological protection, while off-diagonal disorder maintains this symmetry. The paper is organized as follows: In section \ref{sec:model} we introduce the extended SSH model; in section \ref{sec:topology} we include a characterization of its topological properties, considering some relevant concrete examples. Section \ref{sec:driving} presents an analysis of the effect of an AC driving field on the system, studying several drives with different shapes; in section \ref{sec:disorder} we study different types of disorder and check their effect on the edge states of the system. Finally, in section \ref{sec:conclusions} we present our conclusions.
\section{Extended SSH model}\label{sec:model} The Hamiltonian of the dimer chain with hoppings up to $N^{th}$-neighbours is given by: \begin{equation} H_N = \sum_{|i-j|\leq N} J_{ij}c^\dagger_i c_j+ \mathrm{H.c.} \,, \quad J_{ij} = J^*_{ij} = J(|x_i - x_j|) \,, \label{eq:hdimer} \end{equation} where $c^\dagger_i$ creates a fermion in the $i^{th}$ site of the chain, and $J_{ij}=J_{ji}$ is the hopping amplitude connecting the $i^{th}$ and the $j^{th}$ sites. We can group all the sites in two sublattices $A$ and $B$. All the sites with odd indices belong to sublattice $A$ and all the sites with even indices belong to sublattice $B$ (see \fref{SSHschematic} for a schematic). If we restrict the model to nearest-neighbours only ($N=1$), we recover the original SSH model. \begin{figure} \includegraphics[scale=0.40]{cadenaesquemaOK.eps} \caption{Dimer chain with arbitrarily long-ranged hoppings. For clarity, only hoppings to first, second and third neighbour atoms have been depicted. The unit cell length will be set to $a=1$ hereafter without loss of generality. The intracell parameter is $b$. } \label{SSHschematic} \end{figure} We assume hopping amplitudes are decaying functions of the distance between sites and define $n = |i-j|$ as the range of the corresponding hopping $J_{ij}$. Hoppings are denoted as odd or even according to their range. It is important to note that in the case of \textit{odd hoppings}, for any $n\in \mathbb{N}_\mathrm{odd}$ and site $i$, the $(i+n)^{th}$ and $(i-n)^{th}$ sites are located at different distances. On the contrary, for \textit{even hoppings}, the $(i+n)^{th}$ and $(i-n)^{th}$ sites are located at the same distance for any $n\in \mathbb{N}_\mathrm{even}$.
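The real-space Hamiltonian \eref{eq:hdimer} is easy to study numerically. The sketch below (NumPy; amplitudes are illustrative) builds a finite open chain in the nearest-neighbour case $N=1$ and counts midgap states. Since the assignment of the labels $J_1$, $J'_1$ to the bonds depends on how the chain terminates, the sketch is phrased in terms of weak and strong bonds: two near-zero edge modes appear when the chain terminates on the weaker bonds, and none when it terminates on the stronger ones.

```python
import numpy as np

def ssh_chain(t_edge, t_other, n_sites=40):
    # Open dimer chain: bond i carries t_edge for even i and t_other for odd i,
    # so both ends of the chain terminate on a t_edge bond.
    hop = [t_edge if i % 2 == 0 else t_other for i in range(n_sites - 1)]
    return np.diag(hop, 1) + np.diag(hop, -1)

def midgap_states(h, window=0.1):
    # Count eigenvalues well inside the bulk gap |t_strong - t_weak|.
    return int(np.sum(np.abs(np.linalg.eigvalsh(h)) < window))

print(midgap_states(ssh_chain(0.5, 1.0)))  # weak bonds at the ends: 2 edge modes
print(midgap_states(ssh_chain(1.0, 0.5)))  # strong bonds at the ends: 0
```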
For the sake of simplicity, we will use the following notation from now on \begin{equation} \eqalign{ J_{2i-n,2i} \equiv J_{n} \,, \quad J_{2i,2i+n} \equiv J'_{n} \,, \quad n\in \mathbb{N}_\mathrm{odd} \\ J_{i,i\pm n} \equiv J_{n} \,, \quad n\in \mathbb{N}_\mathrm{even}} \end{equation} For a translationally-invariant system, the Hamiltonian is block-diagonal in the momentum-space basis. Transforming $c_{2j-1} = \frac{1}{\sqrt{M}}\sum_k e^{ikj}a_k$ and $c_{2j} = \frac{1}{\sqrt{M}}\sum_k e^{ikj}b_k$, for $j=1,\dots,M$ ($M$ is the number of unit cells in the chain), we can express the Hamiltonian in \Eref{eq:hdimer} with periodic boundary conditions as $H_N =\sum_k\Psi_{k}^{\dagger}\mathcal{H}_N\left(k\right)\Psi_{k}$, where we have defined $\Psi_{k}=\left(a_{k},b_{k}\right)^{T}$. The bulk momentum-space Hamiltonian $\mathcal{H}_{N}\left(k\right)$ is a $2\times2$ matrix with the following structure: even hoppings contribute to diagonal elements, whereas odd hoppings appear in off-diagonal ones, \begin{equation} \mathcal{H}_N = \sum_{p} \left(\begin{array}{cc} 2J_{2p}\cos\left(pk\right) & J'_{2p-1}e^{ik(p-1)}+J_{2p-1}e^{-ikp}\\ J'_{2p-1}e^{-ik(p-1)}+J_{2p-1}e^{ikp} & 2J_{2p}\cos\left(pk\right) \end{array}\right) \,, \label{eq:H_k} \end{equation} with $p$ ranging from $1$ to $N/2$ if $N$ is even, or $(N+1)/2$ if $N$ is odd. $\mathcal{H}_N$ can be written in the basis of the Pauli matrices $\vec{\sigma}=\{\sigma_{x},\sigma_{y},\sigma_{z}\}$ and the identity $\mathbf{1}$ as $\mathcal{H}_N=d_{0}(k)\mathbf{1}+\vec{d}(k)\cdot\vec{\sigma}$. The vector $\vec{d}(k)$ is called the Bloch vector, and its components are \begin{eqnarray} d_0(k) = \sum_p 2J_{2p}\cos(pk) \,, \\ d_x(k) = \sum_p \left[J'_{2p-1}\cos\big((p-1)k\big) + J_{2p-1}\cos(pk)\right] \,, \\ d_y(k) = \sum_p \left[J_{2p-1}\sin(pk) - J'_{2p-1}\sin\big((p-1)k\big)\right] \,, \\ d_z(k) = 0 \,. 
\end{eqnarray} The dispersion relation takes the form $E_\pm (k) = d_0(k) \pm |\vec{d}(k)|$, where ``$+$'' and ``$-$'' correspond to the conduction and valence band, respectively. Importantly, the fact that even hoppings of a given range $n$ have the same value in both sublattices makes $d_z(k)=0$. \section{Topology in extended SSH models}\label{sec:topology} For one-dimensional topological insulators, the topological invariant that characterizes different topological phases is the Zak phase $\mathcal{Z}$. Equivalently, they can be characterized by the winding of the Bloch vector around the origin as $k$ varies across the first Brillouin zone. This quantity $\mathcal{W}$ is well-defined only when the Bloch vector lies in a plane containing the origin. The two are related by $\mathcal{Z}=\pi \mathcal{W} \ \mathrm{mod} \ 2\pi$. Owing to the bulk-edge correspondence, the bulk topology manifests itself in the presence or absence of edge states in a finite system. The number of pairs of edge states a system supports corresponds to $|\mathcal{W}|$. The winding number can be calculated in terms of the Bloch vector components (see \ref{appendixZ}). The Zak phase is a gauge invariant quantity and as such can be measured \cite{measurementZak}. Apart from the SSH model of polyacetylene \cite{ssh1980}, the Zak phase has also been used to characterize linearly conjugated diatomic polymers \cite{conjugatedpolymers}, photonic systems \cite{photonic1,photonic2}, acoustic systems \cite{acousticsystems}, and recently, water wave states \cite{topologicalwater}. In the standard SSH model, the winding number can only take two values depending on the ratio between first-neighbour hopping amplitudes: $\mathcal{W} = 0$ (trivial phase) if $J'_{1}/J_{1}>1$, and $\mathcal{W} = 1$ (non-trivial phase) if $J'_{1}/ J_{1}<1$.
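The winding of the Bloch vector can be evaluated numerically by unwrapping the phase of $d_x + i\,d_y$ across the Brillouin zone. A minimal sketch for the standard SSH model (NumPy; hopping values illustrative), using the components $d_x = J'_1 + J_1\cos k$, $d_y = J_1 \sin k$ given above:

```python
import numpy as np

def winding_ssh(J1p, J1, nk=2001):
    # d_x = J'_1 + J_1 cos k,  d_y = J_1 sin k  (standard SSH, N = 1)
    k = np.linspace(0.0, 2.0 * np.pi, nk)
    f = (J1p + J1 * np.cos(k)) + 1j * J1 * np.sin(k)
    theta = np.unwrap(np.angle(f))          # accumulated phase of d_x + i d_y
    return int(round((theta[-1] - theta[0]) / (2.0 * np.pi)))

print(winding_ssh(2.0, 1.0))  # J'_1/J_1 > 1: trivial phase, W = 0
print(winding_ssh(1.0, 2.0))  # J'_1/J_1 < 1: non-trivial phase, W = 1
```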
Furthermore, since there are only first-neighbour hoppings in the model, it possesses particle-hole symmetry along with time-reversal symmetry and chiral (sublattice) symmetry. Therefore, it belongs to the one-dimensional BDI class of the Altland-Zirnbauer classification of topological insulators and superconductors \cite{tenfoldway}, which admits an infinite number of distinct topological phases. In the extended SSH model the presence of even hoppings breaks particle-hole as well as chiral symmetry, changing the system Hamiltonian from the BDI class to the AI class, which is trivial in 1D. Two clarifications must be made to this statement. First, for sufficiently small even hoppings this model supports edge states in the band gap, despite the absence of the aforementioned symmetries. Second, even hoppings preserve space-inversion symmetry when chosen as detailed in equation \eref{eq:hdimer}, which ensures that the winding number is still well-defined. Mathematically, terms proportional to the identity matrix (included in $d_{0}$) do not change the eigenstates, and therefore the parallel transport, i.e. the Berry connection, is unaffected. However, the presence of even hoppings does affect the energy bands and the energy levels of a finite system. They may lead to the disappearance of the edge states into the bulk bands without the corresponding change in the winding number, contrary to the expectation for a true topological phase transition. Thus, in general, there is not a one-to-one correspondence between the topological invariant and the number of edge-state pairs supported by the chain as long as even hoppings are present, as shown in \fref{energy13J2}. \begin{figure} \centering \includegraphics[scale=0.4]{energiat3con123.eps} \caption{\label{energy13J2} Spectrum of a finite system with $M=30$ unit cells and $N=3$, with $J_1=2J'_1$, $J'_3=0.5J'_1$, and $J_3=2J'_1$, as a function of $J_2$.
The blue and fuchsia lines represent the maximum and minimum value for the conduction and valence band, respectively. First- and third-neighbour hoppings are chosen such that the system has $\mathcal{W}=2$, i.e., two pairs of edge states. Second-neighbour hoppings alter the energy spectrum, taking the system to a metallic phase because of the overlapping of the two bands. In the gapped phase, note the different behaviour of each pair of edge states. However, the winding number is the same regardless of the value of the hopping amplitude $J_2$. This means that the one-to-one correspondence between $\mathcal{W}$ and the number of edge states is broken.} \end{figure} Regarding long-range odd hoppings, they preserve all the symmetries of the standard SSH model and permit larger values of the topological invariant. For a given $N$, the maximum winding number possible is $\mathcal{W}_\mathrm{max} = \lfloor (N+1)/2\rfloor$, which is also the maximum number of pairs of edge states supported by the chain. However, one difficulty in obtaining these phases with larger invariant is that the long-range hopping amplitudes must be chosen in a specific way. We will show in the next section how this can be achieved by applying ac driving fields. In the following we examine in detail two different configurations. \subsection{First and second neighbour hoppings} We now study in more detail the effect of even hoppings by considering the case of first- and second-neighbour hoppings. As explained before, the study of the topology of the system requires the analysis of both bulk and edge properties.
\subsubsection{Bulk physics \label{bulkphysics}} The momentum-space Hamiltonian in \eref{eq:H_k} takes the form \begin{equation} \mathcal{H}_2=\left(\begin{array}{cc} 2J_{2}\cos\left(k\right) & J'_{1}+J_{1}e^{-ik}\\ J'_{1}+J_{1}e^{ik} & 2J_{2}\cos\left(k\right) \end{array}\right), \end{equation} whereas the Bloch vector changes to: \begin{eqnarray} d_{0}\left(k\right) & = &2J_{2}\cos\left(k\right)\,, \\ d_{x}\left(k\right) & = & J'_{1}+J_{1}\cos\left(k\right)\,, \quad d_{y}\left(k\right) = J_{1}\sin\left(k\right)\,, \quad d_{z}\left(k\right) = 0 \, \end{eqnarray} and the energy dispersion is given by $E_{\pm}\left(k\right)=2J_{2}\cos\left(k\right)\pm\sqrt{J_{1}^{'2}+J_{1}^{2}+2J'_{1}J_{1}\cos\left(k\right)}$. This expression makes clear that second-neighbour hoppings break particle-hole symmetry, which translates into a band structure that is asymmetric about $E=0$. Still, the specific value of $J_2$ is of utmost importance, as the system properties change drastically with it. We can distinguish two regimes (see \fref{bulkandedge2}): \begin{enumerate} \item When $J_{2}<J'_{1}/2$ and $J'_1/J_1>1$ ($\mathcal{W}=0$) or $J_{2}<J_{1}/2$ and $J'_1/J_1<1$ ($\mathcal{W}=1$), the system has insulating properties. This regime corresponds to a gapped phase in which the winding number is still defined by the ratio $J'_1/J_1$ and has a one-to-one correspondence with the number of edge states. It is also significant that the direct gap turns into an indirect gap at $J_{2}=J_{1}/2$ (trivial phase), or $J_{2}=J'_{1}/2$ (topological phase), which means that the minimum energy in the conduction band and the maximum energy in the valence band occur at different values of $k$. \item When $J_{2}\geq J'_{1}/2$ and $J'_1/J_1>1$ or $J_{2}\geq J{}_{1}/2$ and $J'_1/J_1<1$, the behaviour is expected to be metallic. In this regime the gap is indirect, and the maximum of the valence band (at $k=0$) is equal to or greater than the minimum of the conduction band (at $k=\pi$).
This means that the energy bands overlap without crossing, which signals the absence of a topological phase transition. \end{enumerate} \begin{figure} \centering \includegraphics[scale=0.4]{energiat2N20.eps}\\ \vspace{0.7cm} \includegraphics[scale=0.4]{bandas2trest2.eps} \caption{Effect of second-neighbour hoppings on the band structure and energy levels of a finite system. Top: Spectrum for a finite system of $M=20$ and $N=2$, with $J_{1}=2J'_1$, as a function of $J_{2}$. The blue and fuchsia lines represent the maximum and minimum value for the conduction and valence band, respectively. For $J_2<J'_1$, the system has two edge states within the band gap. Their energy decreases as $J_2$ is increased until they penetrate the bulk bands for $J_2\geq J'_1$. Also, the gap goes from direct to indirect at $J_2=0.5J'_1$ (see main text, section \ref{bulkphysics}). In the metallic phase, the bands overlap without crossing. Bottom: Bulk band structure for different values of $J_{2}$, given the previous SSH hoppings. Note how particle-hole symmetry is gradually lost as $J_2$ is increased, which is reflected in the loss of symmetry about $E=0$ in the energy spectrum. It is also important to notice that the case with $J_2=0.3J'_1$ has a direct gap, with both the minimum of the conduction band and the maximum of the valence band occurring at $k=\pi$. However, for $J_2=0.7J'_1$, the gap is indirect (the minimum of the conduction band occurs at $k=\pm \pi$ and the maximum of the valence band at $k=0$). This corresponds to case (i) discussed in the previous section. On the other hand, the maximum of the valence band is greater than the minimum of the conduction band for the configuration with $J_2=1.7J'_1$, although the bands never touch.
This is an example of the band structure of a system in regime (ii), when the system is expected to have metallic properties.} \label{bulkandedge2} \end{figure} \subsubsection{Edge physics \label{edgephysics12}} The topological phase of the SSH chain is characterized by the appearance of two edge modes. In the thermodynamic limit ($M \rightarrow \infty$), the edge states are degenerate at $E=0$, each of them being exponentially localized at either the right or left end of the chain. Otherwise, a small splitting of the order of $(J'_{1}/J_{1})^{M}$ is expected \cite{asboth2016book}; the edge states hybridize and become an even and odd superposition of the states localized at either end. The presence of chiral symmetry, represented by the operator $\mathcal{C}$, ensures that these hybridized edge states have symmetric energies about $E=0$, since they are chiral partners of each other: $|\mathrm{edge}_{\mathrm{o}}\rangle=\mathcal{C}|\mathrm{edge}_{\mathrm{e}}\rangle\rightarrow E_{\mathrm{o}}=-E_{\mathrm{e}}$, where e and o stand for even and odd parity, respectively. By solving the dispersion relation we find that the edge states are associated with a complex momentum $k=\pi+i\zeta_{\mathrm{SSH}}$, where $\zeta_{\mathrm{SSH}}$ is the inverse of the localization length. The value of $\zeta_{\mathrm{SSH}}$ is a function of the ratio $J'_{1}/J_{1}$ \cite{delplace2011} \begin{equation} \frac{J'_{1}}{J_{1}}\approx e^{-\zeta_{\mathrm{SSH}}}\,. \label{sshdecay} \end{equation} When second-neighbour hoppings are added, we find the following changes in the behaviour of the edge states: \begin{enumerate} \item The absence of chiral symmetry implies that $|\mathrm{edge}_{\mathrm{o}}\rangle=\mathcal{C}|\mathrm{edge}_{\mathrm{e}}\rangle$ does not hold anymore (see \fref{sublaticces}).
\begin{figure} \centering \includegraphics[scale=0.6]{sublatticesv2} \caption{Wave functions of the hybridized edge states (even parity) of a chain with $M=10$ unit cells, with: \\ Left: only first-neighbour hoppings, $J_{1}=2J'_1$. Edge states in the SSH chain fulfill $|\langle\mathrm{edge}_{\mathrm{o}}|\mathcal{C}|\mathrm{edge}_{\mathrm{e}}\rangle|=1$. \\ Right: first- and second-neighbour hoppings, $J_{1}=2J'_1$ and $J_2=0.6J'_1$. The presence of $J_2$ breaks chiral symmetry, and hence $|\langle\mathrm{edge}_{\mathrm{o}}|\mathcal{C}|\mathrm{edge}_{\mathrm{e}}\rangle|=0.8<1$. This quantity becomes smaller as $J_2$ is increased.} \label{sublaticces} \end{figure} \item The edge-state energies move away from zero as $J_{2}$ increases. Using numerical analysis, we find that the energy of both edge states varies linearly with $J_{2}$ according to \begin{equation} E=E_{\mathrm{edge}}-2J_{2}\frac{J'_{1}}{J_{1}}\,, \label{energy2neighbours} \end{equation} where $E_{\mathrm{edge}}$ is the energy of the edge states in the SSH chain ($J_2=0$). This expression holds until the energy bands overlap. \item Interestingly, we find that the addition of $J_{2}$ modifies the localization length of the edge states, which become less localized as $J_{2}$ is increased. First, knowing that the energy of the edge states depends linearly on $J_{2}$ as shown in equation \eref{energy2neighbours}, we can solve the dispersion relation, obtaining an expression for the $k$ associated with the edge states in terms of the hopping amplitudes \begin{equation} k=\pm\arccos(\alpha),\,\alpha=-\frac{J'_{1}}{J_{1}}+\frac{J_{1}J'_{1}}{4J_{2}^{2}}-\frac{1}{4J_{2}^{2}}\sqrt{4J_{2}^{2}(J_{1}^{2}-J_{1}^{'2})+J_{1}^{2}J_{1}^{'2}}\,. \end{equation} In order for the state to be localized, we search for a solution $k$ of the form $k=\pi\pm i\zeta$, where $\zeta=1/\lambda_{\mathrm{loc}}$.
If we rewrite the previous equation as $\cos(k)=\cos(\pi+i\zeta)=-\cosh(\zeta)=\alpha$, we can give an analytic expression for $\zeta$ \begin{equation} \zeta=\frac{1}{\lambda_{\mathrm{loc}}}=\mathrm{arccosh}\left(-\alpha\right)\,. \label{zeta} \end{equation} In the limit $J_{2}\rightarrow J_{1}/2$ (when the bands overlap and the edge states penetrate the energy bands), $\zeta\rightarrow0$. In the limit $J_{1}\rightarrow J'_{1}$, i.e. a one-dimensional atomic chain (when the band gap closes and the system has metallic behaviour), $\zeta\rightarrow0$ independently of the value of $J_{2}$. In both cases, localized behaviour is lost, which agrees with the analytic and numerical results previously obtained. As can be seen in \fref{loclength}, $\zeta$ is affected differently by $J_2$ depending on the value of the first-neighbour hopping amplitudes. As $J'_1/J_1$ gets closer to one, that is, as we approach the metallic limit, the presence of $J_2$ has less impact on $\zeta$. \end{enumerate} \begin{figure} \centering \includegraphics[scale=0.5]{decaysecondneighbours.eps} \caption{\label{loclength} a) Inverse localization length $\zeta$ of the edge states in a chain with first- and second-neighbour hoppings, given fixed $J'_1$, as a function of $J_2$, in units of $J'_1$, see equation \eref{zeta}. For each curve, $\zeta$ goes to zero when $J_2=J_1/2$, which corresponds to the overlapping of the bands. Colored dots in the $J_1=2$ curve correspond to the numerical data obtained by fitting the envelope of the edge states in a finite chain with $M=20$ to an exponential function of the form $\sim e^{-\lambda_{\mathrm{loc}}x}$, for different values of $J_2$ (see legend). \\ b) Probability amplitude of the edge-state wavefunction on a logarithmic scale for $J_1=2J'_1$, and $M=20$. Each color corresponds to the values of $J_2$ shown in the legend and in figure \ref{loclength}a.
Plotmarkers represent the peak values of the edge-state wavefunction, whereas continuous lines depict the numerical fit to an exponential function. On a logarithmic scale, they are represented as lines with slope $-\lambda_{\mathrm{loc}}$. As can be seen, the edge states do decay exponentially into the bulk when second-neighbour hoppings are added.} \end{figure} \subsection{First- and third-neighbour hoppings} When first- and third-neighbour hoppings are considered, the system preserves time-reversal, particle-hole and chiral symmetry, and thus it belongs to the BDI class, just as the standard SSH model. Therefore, the topological invariant is well-defined and there is a one-to-one correspondence between its value and the number of edge states supported by the system. The Bloch vector has the following non-zero components, $d_x(k)=J'_1+(J_1+J'_3)\cos(k)+J_3\cos(2k)$, and $d_y(k)=(J_1-J'_3)\sin(k)+J_3\sin(2k)$, in terms of which the winding number can be calculated. A topological phase diagram is obtained as a function of $J'_3$ and $J_3$ for different first-neighbour hoppings (see \fref{phasemap}), setting second-neighbour hoppings to zero in order to preserve chiral symmetry. The presence of long-range hoppings enriches the phase map, making possible the existence of configurations with $\mathcal{W}=2$ and $\mathcal{W}=-1$. \begin{figure} \centering \includegraphics[width=\linewidth]{phasemap_miguel.eps} \caption{\label{phasemap} Topological phase diagram as a function of the third-neighbour hoppings $J'_3$ and $J_3$ (expressed in units of $J'_1$), for different first-neighbour hoppings. Second-neighbour hoppings are set to zero ($J_2=0$) in all of them: a) $J_1=2J'_1$, b) $J_1=1.5J'_1 $, c) $J_1=J'_1$. Figures a) and b) fulfill $J'_1/J_1<1$, which corresponds to an SSH topological insulator. Figure c) corresponds to a homogeneous chain ($J'_1=J_1$) which is gapped due to third-neighbour hoppings.} \end{figure} Interestingly, dimer chains with $\mathcal{W}=2$ support two pairs of edge states.
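The larger windings reachable with third-neighbour hoppings can be checked numerically from the Bloch-vector components above; a minimal sketch (the $\mathcal{W}=2$ hopping values mirror those quoted in figure \ref{edgestates13}, while the $\mathcal{W}=-1$ point is an illustrative choice with a dominant $J'_3$):

```python
import numpy as np

def winding_number(J1p, J1, J3p, J3, n_k=4001):
    """Winding of d_x + i*d_y for first- plus third-neighbour hoppings
    (J2 = 0, so chiral symmetry holds and W is well-defined)."""
    k = np.linspace(0.0, 2.0 * np.pi, n_k)
    # d_x + i*d_y = J1p + J1 e^{ik} + J3p e^{-ik} + J3 e^{2ik}
    h = (J1p + J1 * np.exp(1j * k)
         + J3p * np.exp(-1j * k) + J3 * np.exp(2j * k))
    phase = np.unwrap(np.angle(h))
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))

# Hoppings of figure `edgestates13` (in units of J'_1): W = 2.
print(winding_number(1.0, 4.0 / 3.0, 0.2, 1.0))   # -> 2
# A dominant J'_3 reverses the circulation of the Bloch vector: W = -1.
print(winding_number(1.0, 1.0, 3.0, 0.0))         # -> -1
```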
Owing to the presence of chiral symmetry, each pair consists of two chiral partners, whose energies are related by $E_{\mathrm{e}}=-E_{\mathrm{o}}$. In the thermodynamic limit, when $M\rightarrow \infty$, these zero modes are located at either the right or left edge of the chain and can be chosen to have support on one of the sublattices, just as those in the SSH model. However, one remarkable distinction from the latter is the fact that each pair has a different spatial dependence, which in turn differs from that of the SSH model edge states. First, the peak of maximum probability amplitude is located at a different site for each pair. Depending on how the hopping amplitudes are tuned, the pairs can be maximally localized at either the first, third, or fifth site of the chain. Moreover, the envelope of the edge-state wavefunction decays exponentially into the bulk, but the probability amplitude on each site does not decrease monotonically. The larger the system, the better the envelope fits an exponential decay (see figure \ref{edgestates13}). \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{edgestates13fitting.eps} \caption{\label{edgestates13} Absolute value of the edge-state wavefunctions of a chain with $M=40$ unit cells, and hopping amplitudes $J_1=4J'_1/3$, $J_2=0$, $J'_3=J'_1/5$, and $J_3=J'_1$ ($\mathcal{W}=2$). The continuous, blue line represents the fit of the envelope to an exponential function. Each edge state depicted belongs to a different pair, and thus the peak of maximum probability occurs at a different site of the chain.} \end{figure} \section{Periodic driving}\label{sec:driving} As we have shown, phases with more than a single pair of edge states are possible, although they require unconventional hopping parameters, such that hopping amplitudes to farther neighbours are larger than those to closer neighbours. In a realistic system, however, one may expect hopping amplitudes to decrease with increasing distance.
One way to overcome this consists in using a periodic driving, which in the high-frequency regime makes the system behave as if it were governed by an effective static Hamiltonian, with the possibility of changing the effective hopping amplitudes by tuning the driving parameters \cite{drivenchain2013}. With this purpose in mind, we include in the system Hamiltonian $H_N$ a time-dependent term $H_{AC}(t) = E(t)\sum_{j}x_j n_j$, corresponding to a homogeneous ac field $E(t)$ that couples to the charge (or mass) of the particles. $E(t)$ is a periodic function of time with period $T=2\pi/\omega$. Using a high-frequency expansion, we can derive an effective Hamiltonian $H_\mathrm{eff}$ expressed as a power series in $1/\omega$, see appendix \ref{app:Floquet}. To lowest order, $H_\mathrm{eff}$ is simply the time average of the total Hamiltonian over one period. Thus, the structure of the hoppings is maintained, but the hopping amplitudes become renormalized as \begin{equation} J_{ij}\rightarrow J_{ij}\frac{1}{T}\int_0^T dt\, e^{i A(t) d_{ij}} \equiv J_{ij}f(E_0d_{ij}/\omega)\,. \label{eq:Jeff} \end{equation} Here $A(t)$ is the vector potential corresponding to the ac field, $E(t)=-\partial_t A(t)$, and $d_{ij}=x_i-x_j$ is the distance between the $i^{th}$ and $j^{th}$ sites. We will assume that the hopping amplitudes decay exponentially with distance, $J_{ij}=J_0 e^{-d_{ij}/\lambda}$. In table \ref{tab:renormalizations} we specify the three different driving protocols studied in this work, together with the hopping renormalization they produce.
\begin{table} \begin{tabular}{c c c} \br Driving & Vector potential, $A$ & Hopping renormalization, $f$ \tabularnewline \hline simple sinusoidal & $-\frac{E_0}{\omega}\sin(\omega t)$ & $\mathcal{J}_0\left(\frac{E_0 d_{ij}}{\omega}\right)$ \tabularnewline double sinusoidal & $-\frac{E_0}{\omega} \left[\sin(\omega t) + \sin(3\omega t)\right]$ & $\sum_n \mathcal{J}_{-3n}\left(\frac{E_0 d_{ij}}{\omega}\right) \mathcal{J}_{n}\left(\frac{E_0 d_{ij}}{\omega}\right)$ \tabularnewline square-wave & $\left\{ \begin{array}{l l l} -E_0 t & \mathrm{if} & 0<t<T/2 \\ E_0(t-T) & \mathrm{if} & T/2<t<T \end{array}\right.$ & $2i \left(e^{-iE_0Td_{ij}/2} - 1\right)/E_0 T d_{ij}$ \tabularnewline \br \end{tabular} \caption{Different driving protocols with the corresponding hopping renormalization.} \label{tab:renormalizations} \end{table} \begin{figure}[!htb] \centering \includegraphics[scale=1.2]{renormalization.eps} \caption{Comparison between the hopping renormalization functions of the different drivings studied. Note how the zeros of $f$ for the square-wave driving are equally spaced, while its envelope (grey line) decays faster than for the sinusoidal drivings.} \label{fig:renormalizations} \end{figure} For a simple sinusoidal drive with amplitude $E_0$ and frequency $\omega$, the hopping renormalization is given by the zeroth-order Bessel function of the first kind, $\mathcal{J}_0(E_0 d_{ij}/\omega)$ \cite{Hanggi1998}. This makes it possible to cancel the hoppings to next-nearest neighbours by tuning $E_0 a/\omega$ to one of the zeros of $\mathcal{J}_0$. In this manner, it is possible to recover chiral symmetry in chains with hoppings up to third neighbours. Nonetheless, it is impossible to zero out all even hoppings with this driving. Interestingly, we obtain winding numbers up to $\mathcal{W}=2$, but only for metallic phases, see figure \ref{fig:PhaseDiagram_driven}.
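The renormalization functions listed in table \ref{tab:renormalizations} can be verified by time-averaging $\mathrm{e}^{\mathrm{i}A(t)d}$ directly; a minimal numerical sketch (units with $a=\omega=1$; the drive amplitudes are illustrative):

```python
import numpy as np

def f_numeric(A, d, T, n_t=100000):
    """f = (1/T) * int_0^T exp(i A(t) d) dt, evaluated with the
    rectangle rule on a periodic grid (A(0) = A(T) for these drives)."""
    t = np.arange(n_t) * (T / n_t)
    return np.mean(np.exp(1j * A(t) * d))

omega = a = 1.0
T = 2.0 * np.pi / omega

# Sinusoidal drive tuned to the first zero of J_0 (x ~ 2.40482556):
# second-neighbour hoppings (d = a) are cancelled, but d = 2a is not.
E0 = 2.404825557695773 * omega / a
A_sin = lambda t: -(E0 / omega) * np.sin(omega * t)
print(abs(f_numeric(A_sin, a, T)))       # ~ 0
print(abs(f_numeric(A_sin, 2 * a, T)))   # clearly non-zero

# Square-wave drive with E0/omega = 2/a kills every even hopping d = n*a.
E0_sq = 2.0 * omega / a
A_sq = lambda t: np.where(t < T / 2, -E0_sq * t, E0_sq * (t - T))
print([abs(f_numeric(A_sq, n * a, T)) for n in (1, 2, 3)])   # all ~ 0
```

The same routine applied to the double-sinusoidal vector potential reproduces the Bessel-product sum quoted in the table.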
\begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{PhaseDiagram_driven.jpg} \caption{Phase diagram of the extended SSH model with a sinusoidal driving $A(t)=-\frac{E_0}{\omega}\sin(\omega t)$. Hopping amplitudes decay exponentially with distance. Choosing $\lambda=a$, only hoppings with range up to $N=10$ have a significant contribution. The gap is expressed in units of $J_0$. In the plot of the winding number, black curves show the contour level where the gap vanishes.} \label{fig:PhaseDiagram_driven} \end{figure} We can also consider more complicated drivings, such as a combination of two sinusoids with commensurate frequencies, corresponding to the vector potential $A(t) = -\frac{E_0}{\omega}\left[\sin(\omega t) + \sin(3\omega t) \right]$. As can be seen in figure \ref{fig:PhaseDiagram_driven_super}, with this driving we are able to produce gapped phases with winding numbers larger than 1, although the gap is smaller than in phases with smaller winding number. \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{PhaseDiagram_driven_super.jpg} \caption{Phase diagram of the extended SSH model with a double sinusoidal driving $A(t)=-\frac{E_0}{\omega}[\sin(\omega t)+\sin(3\omega t)]$. Hopping amplitudes decay exponentially with distance. Choosing $\lambda=a$, only hoppings with range up to $N=10$ have a significant contribution. The energy gap is expressed in units of $J_0$.} \label{fig:PhaseDiagram_driven_super} \end{figure} An appealing option is to use a square-wave drive. As we show below, with this kind of driving it is possible to zero out all even hoppings simultaneously. Let us consider \begin{equation} E(t) = \left\{ \begin{array}{l l l} E_0 & \mathrm{if} & 0<t<T/2 \\ -E_0 & \mathrm{if} & T/2<t<T \end{array}\right. \,, \end{equation} which leads to a renormalization function $f$ whose zeros are evenly spaced on the positive real axis, see figure \ref{fig:renormalizations} and table \ref{tab:renormalizations}.
Since the distances for all even hoppings are multiples of the lattice parameter $a$, it is now possible to cancel all of them by tuning $E_0 /\omega = 2 a^{-1}$. In this way, we can enforce chiral symmetry on a system with arbitrarily long-range hopping terms. Despite this, with this kind of driving it is not possible to obtain winding numbers larger than 1 if the bare hopping amplitudes decay exponentially with distance. \section{Disorder}\label{sec:disorder} The effect of disorder in electronic systems has been an important subject since Anderson's work on localisation \cite{AndersonLocalization}. Originally, he studied the propagation of a particle in a random potential, and showed that above certain critical values of the disorder, the wavepackets become localised. Strikingly, localisation depends crucially on the spatial dimension of the system: in 1D, states are expected to localise for infinitesimal disorder \cite{DisorderRG}. Further studies have shown that there are exceptions to localisation in low dimensions, the random dimer model being one of the best-known cases, where inversion symmetry leads to $\sqrt{N}$ states which are delocalised and contribute to the conductivity (i.e., they do not have zero measure in the thermodynamic limit \cite{RandomDimerModel}). More recent studies have addressed the effect of disorder on topological phases \cite{DisorderedTI1,DisorderedTI2,TopologicalAndersonInsulator} and the role played by off-diagonal disorder \cite{Off-diagonal-Disorder1D,Off-diagonal-Disorder,DisorderBipartiteLattices}. \\ In this section we numerically study the effect of diagonal and off-diagonal uncorrelated disorder (in the onsite energies and hopping amplitudes, respectively) on the spectrum of both the standard and the extended SSH model.
It must be stressed that this is different from the previously mentioned random dimer model, where disorder forms a random bipartite lattice with homogeneous hoppings.\\ First, we study diagonal disorder by considering the following Hamiltonian \begin{equation} H'=H_\mathrm{N}+H_{\mathrm{diag}}=H_\mathrm{N}+\sum_{j=1}^{2M}\epsilon_j c^{\dagger}_jc_j\,, \end{equation} where the second term shifts the onsite energy of each site by a different amount $\epsilon_j$. The $\epsilon_j$ are random numbers drawn from a Gaussian distribution centered at zero, so it is necessary to average the results over several disorder realizations. The figures included in this section have been obtained by averaging over 100 realizations. Diagonal disorder breaks sublattice symmetry and eliminates the zero-energy modes, therefore destroying the topological protection of the edge modes, both in the standard and the extended SSH model (see figure \ref{diagonaldisorder4ES}). \begin{figure} \centering \includegraphics[scale=0.7]{diagonaldisorder4ESok.eps} \caption{\label{diagonaldisorder4ES} Effect of diagonal disorder on the edge states in the extended SSH model with first- and third-neighbour hoppings, as a function of the diagonal disorder strength $\sigma$. Each pair of edge states has been depicted in a different color from the states in the bands (light purple). The dimer chain has $M=20$ unit cells and hopping amplitudes $J_1=1.2J'_1$, $J_2=0$, $J'_3=0.3J'_1$, $J_3=0.9J'_1$, which are chosen such that the system has $\mathcal{W}=2$ and preserves sublattice symmetry initially ($\sigma=0$).
As can be seen, the absence of sublattice symmetry separates the edge states, destroying the topological phase and leading to the usual exponential localization for arbitrary disorder strength.} \end{figure} On the other hand, off-diagonal disorder refers to random hopping amplitudes, \begin{equation} H'' =H_{\mathrm{N}}+H_{\mathrm{off-diag}}=H_{\mathrm{N}}+ \sum_{|i-j|\leq N}\epsilon_{ij}c^\dagger_i c_j+ \mathrm{H.c.} \,. \end{equation} As shown in \cite{DisorderBipartiteLattices}, systems with bipartite lattices display anomalous behaviour when off-diagonal disorder is considered. One reason for this is the presence of zero-energy modes at the band centre. These states appear when sites in one sublattice couple only to sites of the other one, which is related to the differences observed in the previous section between the effect of adding even- and odd-neighbour hoppings. Importantly, it was shown there that this type of disorder produces, at large distances and for states at $E = 0$, a slowly decaying localisation of the form $\propto e^{-\lambda\sqrt{r}}$ (characteristic of a random-walk behaviour, decaying more slowly than the usual exponential $e^{-\lambda r}$). Whether these states are truly localised or not is still debated in the literature \cite{Off-diagonal-Disorder}. Figure \ref{fig:offdiagonal} shows how off-diagonal disorder affects the edge states in both the standard and the extended SSH model for a configuration with $\mathcal{W}=2$ and first- to third-neighbour hoppings. As expected, the pair of zero-energy modes in a SSH chain is robust under this type of perturbation until the disorder is of the order of the gap, $\sigma \sim \Delta$. Then, the intra- and inter-dimer hoppings cannot be differentiated, the bands mix and eventually the edge modes separate. However, it is interesting to see how each pair of edge states behaves differently when disorder is increased in the extended-SSH configuration under consideration.
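The destruction of the zero modes by diagonal disorder can be reproduced with a few lines of exact diagonalization; a minimal sketch for the plain SSH chain (first-neighbour hoppings only; the parameters and disorder strength are illustrative, not those of figure \ref{diagonaldisorder4ES}):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def ssh_hamiltonian(M, J1p, J1, sigma=0.0):
    """Finite SSH chain with M unit cells (2M sites): intra-cell hopping
    J1p, inter-cell hopping J1, plus Gaussian on-site (diagonal)
    disorder of standard deviation sigma."""
    n = 2 * M
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = J1p if i % 2 == 0 else J1
    H += np.diag(sigma * rng.standard_normal(n))
    return H

# Clean topological chain (J1p < J1): two near-zero edge modes,
# split only by an amount of order (J1p/J1)^M.
E_clean = np.linalg.eigvalsh(ssh_hamiltonian(20, 1.0, 2.0))
print(np.sum(np.abs(E_clean) < 1e-3))   # -> 2

# Diagonal disorder breaks sublattice symmetry and shifts the pair off E = 0.
E_dirty = np.linalg.eigvalsh(ssh_hamiltonian(20, 1.0, 2.0, sigma=0.3))
print(np.sort(np.abs(E_dirty))[:2])     # generically no longer pinned to zero
```

Averaging such spectra over many disorder realizations gives the kind of disorder-strength sweep shown in this section's figures.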
\begin{figure}[h] \centering \includegraphics[scale=0.5]{offdiagonal.eps} \caption{\label{fig:offdiagonal}Effect of off-diagonal disorder on the edge states in the standard and extended SSH model with first- and third-neighbour hoppings. Each pair of edge states has been depicted in a different color from the states in the bands (light purple).\\ (a) SSH model: finite chain with $M=20$ and $J_1=1.5J'_1$, as a function of the off-diagonal disorder strength $\sigma$.\\ (b) Extended SSH model: finite chain with $M=20$ and hoppings $J_1=1.5J'_1$, $J_2=0$, $J'_3=0.3J'_1$, $J_3=0.9J'_1$, as a function of the off-diagonal disorder strength $\sigma$. Hoppings are chosen such that the system has $\mathcal{W}=2$ initially.} \end{figure} \section{Conclusions\label{sec:conclusions}} In this work, we have studied a generalized model for a dimer chain including long-range hoppings, which naturally occur in physical systems. Although seemingly similar, the effects of hopping processes connecting the same sublattice (even hoppings) and of processes connecting different sublattices (odd hoppings) are very different. The former break particle-hole symmetry and change the topological class from BDI to AI. Nevertheless, the presence of space-inversion symmetry forces the topological invariant to take quantized values, and allows for edge states protected only by this symmetry. As a consequence, the number of edge states now changes independently of the topological invariant, as the edge states can enter the bulk bands if the hopping amplitudes connecting the same sublattice are large enough. On the contrary, hopping between different sublattices preserves the fundamental symmetries, and allows for phases with larger values of the topological invariant and larger numbers of edge-state pairs. We propose the use of an ac driving to tune the topological properties of the system. Three different drivings are analyzed.
Interestingly, we show that with a square-wave driving it is possible to cancel all even hoppings simultaneously, restoring the symmetries of the standard SSH model. Finally, we have investigated the effect of disorder. In the case of a chain with only odd hoppings, the edge states are robust against off-diagonal disorder, while they lose their protection once we introduce even hoppings. We also show that in phases with more than a single pair of edge states, the edge-state energies depart from zero at different rates as the strength of diagonal disorder is increased. \section*{Acknowledgements} This work was supported by the Spanish Ministry of Economy and Competitiveness through Grant No. MAT2014-58241-P. M. Bello acknowledges the FPI program (BES-2015-071573), and A. G\'omez-Le\'on acknowledges the Juan de la Cierva program.
\chapter{Bitcoin Point of Sale Terminals: Evaluation and Deployment} \section{Introductory Remarks} Bitcoin is a cryptographic currency publicly proposed in 2008~\cite{Nak08}. It has reached a level of adoption unrealized by decades of previously proposed digital currencies (from 1982~\cite{Cha82} onward). Unlike most previous proposals, Bitcoin does not distribute digital monetary units to users. Instead, a public ledger (called the blockchain) maintains a list of every transaction\footnote{Technically, a transaction specifies a short script that encodes how the balance can be claimed as the input to some future transaction.} made by all Bitcoin users since the deployment of the currency in January 2009. While Bitcoin was originally envisioned for online payments, a number of businesses have begun to accept Bitcoin in person. To our knowledge, the academic community has not given any attention to Bitcoin point-of-sale (PoS) terminals and their unique requirements in terms of security, usability, and deployability. In this paper, we develop a framework for evaluating competing approaches. We then provide a case study detailing the design and implementation of our open-source PoS terminal, \textsf{Aunja PoS}\xspace, which was made for a caf\'{e} in Montreal\footnote{Cafe Aunja \url{http://aunja.com}} following the SCRAM requirements engineering approach\footnote{Scenario-based Requirements Analysis Method} (see Section~\ref{SCRAM}) and has been in operation since 2014. \section{Evaluation Framework} We propose a framework for comparing Bitcoin PoS solutions, scoring the competing systems on usability, deployability, and security (following~\cite{BHOS12}). These are not a full set of requirements for a general-purpose PoS system but are tailored to the in-person, low-volume transactions one might find in a small business.
The requirements are adapted from our previous framework for Bitcoin wallets~\cite{eskandari2015first}, supplemented with new requirements based on our expertise. The requirements are used to score each system in Table~\ref{tab:method-comp}. To simplify the table, we use three score indicators: ($\bullet$) for a complete score on the requirement, ($\circ$) if the requirement is only partially met, and an empty space if it is not met at all. For some of the requirements the scoring system might not be intuitive (\textit{e.g.,}\xspace low cost to run); however, we justify each score later in the paper. \subsection{Usability} We consider the following aspects of usability. \begin{itemize} \item \textbf{User-Friendly:} This is a general category to note any usability violations that would result in a payment process being too technical or complex for the employee or customer. A single training session for the employee should suffice and the system should be intuitive to a Bitcoin user. There should be a clear and mutual understanding of when the payment is finalized. A PoS that has all of these features would score ($\bullet$); having some would result in ($\circ$). \item \textbf{Time-Efficient: }Processing payments should not take significantly more time than common payment systems such as credit card payments. If the process takes the same time as a credit card payment it would score ($\bullet$); anything more than that would be ($\circ$) or none. \item \textbf{Fair Exchange Rate: } There should be an easy and verifiable approach for the payer and payee to come to a consensus on the fiat currency to Bitcoin exchange rate. If the price is retrieved from commonly accepted sources it would score ($\bullet$). \item \textbf{Availability: }All employees should be able to carry out the Bitcoin payment process without the need to know any credentials.
If the payment interface is publicly accessible it scores ($\bullet$), if it requires some private information it scores ($\circ$), and if it requires credentials it scores none. \end{itemize} \subsection{Deployability} This category covers requirements concerning implementation. \begin{itemize} \item \textbf{Low Cost to Run: }The PoS should run on a device the caf\'{e} already owns, such as the cashier's computer, the existing PoS terminal\footnote{The common PoS that accepts Visa/debit cards} or a mobile device. There should be no need to buy new hardware or expensive software. For this requirement, we score ($\bullet$) for a system free of monetary cost, and ($\circ$) for a moderate amount of spending. \item \textbf{Enables Branching: }The ability to install the point of sale at multiple branches of the business. Some configuration might be needed to differentiate two branches in the system. If the PoS is packaged and easy to install at a second branch it scores ($\bullet$), if it needs some modification ($\circ$), and if the full installation procedure must be repeated from scratch it scores none. \end{itemize} \subsection{Privacy} As Bitcoin transactions are published to the blockchain, it is important to consider both payer and payee privacy. \begin{itemize} \item \textbf{No Information leakage: }No sensitive information should be visible to the customer when she pays with Bitcoin. Such information might include the infrastructure of the business's network or a private domain used for accounting purposes. If the system leaks any sensitive information it scores none; if it leaks some non-sensitive information it scores ($\circ$). \item \textbf{Maintains Payee's Privacy: }The payer should not be able to see how much the payee has received before or after her payment, but only her own payment amount.
If there is no link between payments visible to the payer, the PoS scores ($\bullet$). \item \textbf{Maintains Payer's Privacy: }The payee should not be able to see how much the payer owns. Note that this challenge has not been fully solved (\textit{cf.}~\cite{androulaki2013evaluating}). All the PoS systems in this evaluation scored ($\circ$), including our own, but we include this property to have a complete framework for evaluating future software that may utilize privacy-preserving add-ons~\cite{BNMC+14} or cryptocurrencies~\cite{MGGR13}. \item \textbf{Confidential Payments List: }The list of payments should be visible only to the manager, behind an authentication method such as a password-protected panel. If the PoS offers a report page for the manager it scores ($\circ$); if the report page additionally supports hierarchical authentication giving employees limited access, it scores ($\bullet$). \end{itemize} \subsection{Security} Security is one of the most important aspects of any financial payment system. The security of the system encompasses more than the PoS code itself: it includes the environment in which the PoS is used, the people using the software, and the software's operating environment. \begin{itemize} \item \textbf{No 3rd-Party Trust: }There should be as little third-party trust as possible to accept and hold Bitcoin. Full trust in a third party scores none, some trust for the main functionality of the PoS scores ($\circ$), and no trust scores ($\bullet$). \item \textbf{Data Encryption: }In the case of an attack on the service, there should be security measures ensuring the attacker cannot access the private keys and transfer the Bitcoin funds. Only if all sensitive data is encrypted does the PoS score ($\bullet$). \item \textbf{No Software Dependency: }The system should use as few dependencies as possible to minimize the attack surface on the server.
If the PoS needs a complex set of software or hardware to work, it scores none; if it can be executed in a browser\footnote{To use a software PoS, a mobile device or a computer is needed, and we assume a web browser is installed by default on these devices.} without the need to run any other software, it scores ($\bullet$). \end{itemize} \section{Evolution of PoS proposals} \input{sections/newtable} Most existing payment systems suit online markets (\textit{e.g.,}\xspace e-commerce) rather than physical points of sale.\footnote{\url{https://en.Bitcoin.it/wiki/How_to_accept_Bitcoin,_for_small_businesses}} We list all the available approaches to accepting Bitcoin payments that can at least be adapted for in-person transactions. \subsection{Single Bitcoin address displayed} A simple way for small businesses to accept Bitcoin is to generate one Bitcoin address and display it, say as a QR code. Customers can scan the QR code, input the dollar value into their Bitcoin wallet, and pay the business the equivalent amount of Bitcoin. \\ \textbf{Usability:} This approach puts the employee in a position to prepare, receive, and check Bitcoin payments manually (User friendly: none). This makes the time spent on the payment longer than with an integrated payment system (Time-efficient: none). Price conversion from the local currency to BTC is also a manual lookup (Fair exchange rate: none). Technical training is required for each employee responsible for handling Bitcoin payments. As long as the printed QR code is visible to the payer, it is available for payment (Availability: $\bullet$). \textbf{Deployability:} The cost to implement this method is almost zero (Low cost to run: $\bullet$). In case there are multiple branches, additional printouts suffice to create multiple points of sale (Enables branching: $\circ$). \textbf{Privacy:} This method provides no privacy for the seller (Payee's privacy: none).
As all Bitcoin transactions are publicly available in the blockchain, anyone who knows the Bitcoin address can see all received payments; in effect, anyone has access to the reporting page (Confidential Payments list: none). \textbf{Security:} Other than the system holding the private key, security does not factor into this approach (No 3rd-party trust: $\bullet$). The private key should be kept in a secure place, preferably in cold storage, unless the funds need to be transferred to another address (\textit{e.g.,}\xspace to exchange for cash). No software or data is involved, so there is no software dependency (Data Encryption: none, No software dependency: $\bullet$). \subsection{Hardware terminals} Multiple hardware terminals have been proposed for accepting Bitcoin\footnote{Bitstraat \url{bitstraat.nl}, Xbterminal \url{xbterminal.com}, Coinkite \url{coinkite.com}}; however, due to the high cost to run (\textit{e.g.,}\xspace Coinkite\footnote{\url{https://coinkite.com/store/products/all}} PoS terminals sell at a starting price of 970 USD), they have seen little use in small businesses and have not been reviewed before. At the time of writing, all the proposed hardware terminals are unavailable for purchase, and the future of Bitcoin hardware terminals is uncertain. \textbf{Usability:} The interfaces of the terminals differ. The most popular ones mimic the look and feel of a normal point-of-sale terminal used by credit card companies. However, adding a new device to the payment routine makes it less user friendly and creates a need to train the employees (User friendly: $\circ$). The time and availability of a payment through a hardware terminal should match credit card payments, if not better (Time-efficient: $\bullet$). Neither the customer nor the payee has any control over the exchange rate, which is set by the PoS terminal operator (Fair exchange rate: $\circ$).
The device is accessible to anyone who has access to the other payment terminals (Availability: $\bullet$). \textbf{Deployability:} Due to the high costs, they score low in our framework (Low cost to run: none). Moreover, if the business has multiple branches, one device must be bought for each branch, making the costs even higher (Enables branching: none). \textbf{Privacy:} Accepting Bitcoin with a hardware terminal should preserve privacy to the same degree as regular credit card terminals; the payee's privacy, however, depends on the implementation of the Bitcoin payment system (Payee's privacy: $\bullet$). The terminal providers also offer an interface similar to credit card terminals for listing the payments (Confidential Payments list: $\bullet$). \textbf{Security:} The payee has no control over his private keys and does not hold the funds (No 3rd-party trust: none); thus he needs to trust the third-party company that provided the terminals to keep the funds safe, and will receive the payments within the agreed time frame, likely minus small transaction fees. As for other aspects of security, we assume the back-end implementation keeps the private keys encrypted and secure (Data encryption: $\bullet$). There are security risks involved in adding new hardware or software to the cashier's computer, but these fall outside the scope of this paper (No software dependency: none). \subsection{Online Merchant Services} Most of these services do not have an explicit implementation for a physical payment system. Two popular ones, at the time of writing, are Bitpay\footnote{\url{https://bitpay.com}} (0\% fees) and Coinbase\footnote{\url{http://coinbase.com}} (1\% on exchanging Bitcoin for fiat currency). \textbf{Usability:} Implementing a Bitpay payment is straightforward, with little jargon and few technical options for the employee (User friendly: $\bullet$).
They use their own exchange rate (Fair exchange rate: $\circ$), and the business owner can opt to convert to cash as soon as payments are received, which removes the effect that Bitcoin price volatility could have on the payments. Some credentials are required to access the PoS page (Availability: $\circ$). \textbf{Deployability:} The only thing required by this approach is a smartphone or a small computer with which users can browse to the Bitpay payment page, preferably with a touchscreen for easier price input and user interaction (Low cost to run: $\bullet$). It is easy to add more branches to the original account, or even to create a new account for a second branch (Enables branching: $\bullet$). \textbf{Privacy:} Bitpay takes a different approach to preserving privacy: as they generate a new address for each transaction, the payee's privacy is protected (Payee's privacy: $\bullet$). However, there have been reports of account suspensions because payments were coming from flagged Bitcoin addresses (\textit{e.g.,}\xspace black markets\footnote{Darknet black markets \url{https://en.wikipedia.org/wiki/Darknet_market}} or LocalBitcoins\footnote{Peer-to-peer Bitcoin trading site \url{http://localBitcoins.com}}). In this case, privacy in the sense that we evaluate is maintained, but perhaps not in every aspect needed in a payment system. To view the payments, the business owner must log in to his account; other employees cannot see the list from any other account (Confidential Payments list: $\circ$). \textbf{Security:} Every aspect of the payment system is implemented by Bitpay; they offer one of the most secure payment systems so far, and no major hacks have been reported (Data encryption: $\circ$). However, the user has no control over his private keys, which are all stored on Bitpay's servers (No 3rd-party trust: none), implying complete trust in a third party.
As they are a web-based solution, a device with a browser is enough to use their PoS (No software dependency: $\bullet$). \subsection{Mycelium Gear} Mycelium Gear\footnote{\url{https://gear.mycelium.com/}} is a service from the Mycelium group that provides a widget as the user interface and a back end that uses the BIP32\footnote{Hierarchical Deterministic Wallets}~\cite{bip32proposal} public key provided in the admin panel to generate new addresses securely. This means that they hold no private keys, yet use the same derivation paths for address generation as the Mycelium mobile wallet. \begin{figure}[htb!p] \centering \includegraphics[scale=0.4]{fig/Mycelium_gear.png} \caption{Mycelium Gear Widget} \label{fig:mycelium-widget} \end{figure} \textbf{Usability:} Mycelium Gear is designed for e-commerce businesses and must be customized to suit a physical business PoS (User friendly: $\circ$). There are no fees for using this service, they offer fast verification of 0-confirmation transactions (Time-efficient: $\bullet$), and it is possible to choose from a list of supported exchanges to retrieve the Bitcoin exchange rate (Fair exchange rate: $\circ$). A unique URL is needed to access the payment page, and the employees must be aware of this link (Availability: $\circ$). \textbf{Deployability:} This method is simple to implement but somewhat harder to customize, as there is little access to the code to adapt it to business needs. The cost to run, depending on the implementation, can be almost zero (Low cost to run: $\circ$). The only deployability downside is that the payee is forced to use the Mycelium mobile wallet to manage his payments; on the other hand, doing so makes it easy to use the PoS in other branches and dedicate a separate account to each branch (Enables branching: $\bullet$).
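The per-payment address generation that Mycelium Gear relies on rests on BIP32 hierarchical deterministic key derivation~\cite{bip32proposal}. As an illustration only (not Gear's actual code, which derives addresses from an extended \emph{public} key and therefore additionally needs elliptic-curve point arithmetic), the following Python sketch shows the simpler hardened private-key child derivation defined by BIP32; the seed material is a made-up placeholder, and the edge case where the derived value falls outside the valid key range is ignored.

```python
import hmac
import hashlib

# secp256k1 group order, as fixed by the Bitcoin/BIP32 specifications.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def ckd_priv_hardened(parent_key: int, chain_code: bytes, index: int):
    """BIP32 hardened child derivation:
    I = HMAC-SHA512(chain_code, 0x00 || ser256(k_par) || ser32(i)),
    child key = (I_L + k_par) mod n, child chain code = I_R.
    (The rare invalid-child edge case in BIP32 is omitted in this sketch.)"""
    assert 0 <= index < 2**31
    i = index + 0x80000000  # hardened indices start at 2^31
    data = b"\x00" + parent_key.to_bytes(32, "big") + i.to_bytes(4, "big")
    digest = hmac.new(chain_code, data, hashlib.sha512).digest()
    il, child_chain = digest[:32], digest[32:]
    child_key = (int.from_bytes(il, "big") + parent_key) % N
    return child_key, child_chain

# Hypothetical master key material, for illustration only -- never reuse.
master_key = int.from_bytes(hashlib.sha256(b"demo seed").digest(), "big") % N
master_chain = hashlib.sha256(b"demo chain code").digest()

k0, c0 = ckd_priv_hardened(master_key, master_chain, 0)
k1, c1 = ckd_priv_hardened(master_key, master_chain, 1)
assert k0 != k1  # each payment request gets its own key, hence its own address
```

Because derivation is deterministic, the wallet can later recompute every per-payment key from the master material alone, which is what makes address-per-invoice schemes practical.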
\textbf{Privacy:} As Mycelium Gear uses BIP32 to generate a new address for each transaction request, the payee's privacy is preserved (Payee's privacy: $\bullet$). However, there is no user management for the report page: if the customer closes the successful-payment page, the employee cannot check whether the payment was received unless he has the administrator password to consult the transaction list (Confidential Payments list: $\circ$). \textbf{Security:} Nothing related to the PoS holds any private information or keys that might be in danger of exposure; however, all other aspects of the system run on Mycelium's infrastructure (No 3rd-party trust: $\circ$). All the private keys reside in the Mycelium mobile wallet (No software dependency: $\circ$), which is not immune to mobile malware or hardware failure (Data Encryption: $\circ$). \subsection{Discussion} As seen in Table~\ref{tab:method-comp}, there is no perfect out-of-the-box solution for a small business to start accepting Bitcoin. After further discussions with the business owner, we decided to implement our own custom PoS using available open source software. This way it would be easy to incrementally change the PoS system with customer and employee feedback to meet the needs of the business. In the following sections, we describe \textsf{Aunja PoS}\xspace. \section{Requirements Engineering} Requirements engineering (RE) is a subfield of software engineering devoted to the pre-implementation process of software design, focusing on eliciting requirements from stakeholders, negotiating a balanced approach, and producing a system specification. RE originated in 1979~\cite{alford1979software} and was popularized about a decade later~\cite{dorfman1990system}.
\subsection{SCRAM} \label{SCRAM} We adapted SCRAM (Scenario-based Requirements Analysis Method)~\cite{REScenario} as a framework to gather the requirements of this system, as there is a finite set of payment scenarios in a small business. Scenarios are examples of real-world experiences that we use to model what is required from the system. SCRAM defines the following four phases of requirements engineering: \begin{itemize} \item \textbf{Initial requirements capture and domain familiarisation: } This is done by interviewing and fact-finding to gain a full understanding of how the business works. \item \textbf{Storyboarding and design visioning: } This is done by creating walkthroughs to show to the business and get feedback on feasibility. \item \textbf{Requirement exploration: } This uses early prototypes and designs to get feedback from the business and validate the requirements. \item \textbf{Prototyping and requirement validation: } This is done by developing fully functional prototypes and continuing to refine the requirements until the product is acceptable to the business. \end{itemize} \textit{Phase 1:} We asked the caf\'{e} owner, two employees, and two customers for a scenario involving a Bitcoin payment to construct the common ``normal use case.'' An exception to the normal use case could be, for example, a power failure; however, current methods such as credit cards would also fail in that case. As the caf\'{e} already has other payment systems in place, there proved to be no need to go through the caf\'{e}'s business plan or any other specifications to check for conflicts. Our only change is to add a payment system at the cashier's desk. There are, however, Bitcoin-specific requirements, such as real-time Bitcoin exchange rates and a clear alert on successful or failed payments. \textit{Phase 2:} Based on the information gathered in Phase 1 and further analysis, such as a user survey on the design, a storyboard was developed.
\textit{Phase 3:} We developed a ``concept demonstrator''~\cite{REScenario} capable of performing a simple Bitcoin payment. The Bitcoin exchange rate and transaction amount were hard-coded, and the transaction was executed manually. We asked the employees to run a mock purchase with the demonstrator to see how they would interact with the system. Since Bitcoin concepts may be confusing to a new user, the interface should not expose Bitcoin concepts or terminology. After the transaction was done, the owner pointed out the need for a central logging system that could be checked from time to time for accounting purposes. \textit{Phase 4:} We used the feedback gathered in Phase 3 to build the first prototype. The prototype retrieved the Bitcoin exchange rate in real time, and the employee only had to input the dollar amount in \textsf{Aunja PoS}\xspace. This made it possible to keep Bitcoin terminology out of the scope of employee training. However, to show successful payments, the first prototype displayed the transaction on Blockchain Explorer\footnote{\url{http://blockchain.info}} using web-based APIs. This was not clear enough for a novice user to determine the state of the transaction. In the second round of prototyping, we designed an interface to show that the transaction has been broadcast to the Bitcoin network (called a 0-conf transaction; security is discussed below). \section{Design and Implementation} \label{Design and Implementation} Multiple approaches to implementing \textsf{Aunja PoS}\xspace were apparent. One of the lower-cost methods would be to use a computer on the caf\'{e}'s network as the web server; however, maintenance and support could be difficult, and the network might be overwhelmed by the high number of connected devices and fail to function properly. Uptime is one of the most important properties of a payment system.
The next low-cost solution is to use shared hosting to host the wallet server and design a web-based payment interface for the employees, including a secure reporting page. We opted for this approach and naturally chose to implement \textsf{Aunja PoS}\xspace in PHP, a popular language for shared hosting. \subsection{Implementation measurements} \label{Implementation measurements} After multiple rounds of surveying employees and customers to understand their needs, and after researching the subject, the results break down as follows: \subsubsection{Usability} \begin{itemize} \item \textbf{User Friendly ($\bullet$): } The employee-facing interface should be minimal and simple, showing the Bitcoin-to-fiat exchange rate, an input box for the price in dollars, an estimate of the equivalent Bitcoin amount, and a note section. The customer-facing interface should likewise be simple, showing all the required information: the Bitcoin amount, the exchange rate, and the QR code for the deposit Bitcoin address. Both interfaces should indicate when the transaction is complete. \item \textbf{Time-Efficient ($\bullet$): } Initiating a payment should take no longer than with a conventional payment system. A web-based interface has the advantage that it can be loaded from any device with Internet access. Verifying the payment should likewise not take longer than necessary, so the system needs fast verification methods to indicate that the payment has propagated to the Bitcoin network. A propagated transaction is not the same as a confirmed transaction, but this is an accepted risk for low-volume transactions. \item \textbf{Fair Exchange Rate ($\bullet$): } After some research, we opted for an HTTPS-enabled web service called Bitcoinaverage\footnote{\url{https://bitcoinaverage.com}}, which offers a transparent aggregation of various exchange rates to produce a fair spot price.
\item \textbf{Availability ($\bullet$): } The payment interface should be open to the public and load on any device. \end{itemize} \subsubsection{Deployability} \begin{itemize} \item \textbf{Low Cost to Run ($\circ$): } The only costs associated with this implementation are the annual cost of the shared hosting, which is less than \$100 for an unlimited web host. For the purposes of this research, there were no other implementation or development costs. \item \textbf{Enables Branching ($\circ$): } For now there is no plan for additional branches of this business, but depending on the implementation, launching additional branches would only involve running additional instances of the application on the server. \end{itemize} \subsubsection{Privacy} \begin{itemize} \item \textbf{No Information leakage: } The payment interface does not reveal any information about the back end or the business's internal infrastructure. \item \textbf{Maintains Payee's Privacy ($\bullet$): } A new address is generated for each transaction request, so no one can see how much the business has received in Bitcoin before or after each transaction. \item \textbf{Maintains Payer's Privacy ($\circ$): } This is the responsibility of the payer's Bitcoin wallet client and is out of the scope of this PoS system. \item \textbf{Confidential Payments List ($\bullet$): } A reporting and administration interface is made accessible only to the business owner or designated personnel. \end{itemize} \subsubsection{Security} \begin{itemize} \item \textbf{No 3rd-Party Trust ($\bullet$): } There should be no sensitive reliance on third parties; the system should work standalone. Note that while a trusted party is referenced for the exchange rate, the received value is treated as an assertion to be verified. \item \textbf{Data Encryption ($\bullet$): } All the private keys should be encrypted before being stored on the server.
\item \textbf{No Software Dependency ($\circ$): } There should be no software dependency on the payment page for the business. The software dependencies on the server side should all be included in the package as open source software. \end{itemize} \subsection{Open source libraries and software applications} After the requirements engineering phase, we looked for PHP components to form the following base. \begin{itemize} \item \textbf{PHP Elliptic Curve library\footnote{\url{http://matejdanter.com}}: } Used as a dependency of Bitcoin SCI to generate Bitcoin addresses. \item \textbf{Bitcoin-prices\footnote{\url{https://github.com/miohtama/Bitcoin-prices}}: } Displays Bitcoin prices in a human-friendly manner in fiat currency, using Bitcoinaverage.com market data. \item \textbf{Bitcoin SCI} (Bitcoin Shopping Cart Interface\footnote{\url{http://bitfreak.info/?page=tools&t=bitsci}}): A set of libraries and tools that enables the user to process Bitcoin transactions with PHP alone. It was originally designed to be integrated into e-commerce websites, but can be modified to meet our needs. \end{itemize} \begin{figure}[htb!p] \centering \includegraphics[scale=0.5]{fig/Payment_btsci_screen.png} \caption{Bitcoin SCI (Bitcoin Shopping Cart Interface)} \label{fig:Bitcoin-sci} \end{figure} Bitcoin SCI is not by itself a complete payment-processing project. Our initial decision was to build the prototype on this package and, if we failed to adapt it to our needs, to fall back to another approach; in the end we did make it fit our needs, and Bitcoin SCI was used in the final product.
The tools Bitcoin SCI provides break down as follows: \begin{itemize} \item \textbf{Bitcoin Address generation: } Bitcoin SCI uses the PHP Elliptic Curve library to generate new secure Bitcoin addresses (public/private key pairs). \item \textbf{Private key encryption: } Using phpseclib\footnote{\url{http://phpseclib.sourceforge.net}}, all the private information (Bitcoin private keys, transaction details) is stored encrypted. \item \textbf{Payment Confirmation: } It uses APIs from a web tool\footnote{blockexplorer.com} to confirm received payments. \item \textbf{Input Interface: } Even though this package was meant to be used as an e-commerce payment system, it has the basic tools and methods to build the price-input page. \end{itemize} However, it lacks some features that had to be added: \begin{itemize} \item \textbf{Database: }The management and report page requires saving the transaction details to a database. \item \textbf{Fair Bitcoin Exchange Rate: } It uses a predefined source to obtain the Bitcoin exchange rate and does not allow setting different fiat currencies. \item \textbf{User-Friendly Interface: } All the interfaces are poorly designed and need to be modified to suit the PoS system. \item \textbf{Report Page: } The report page requires authentication. \item \textbf{Input Validation: } Beyond the security benefits of input validation, this is needed for the intended workflow: the PoS should alert the employee to a mistake before proceeding to the next page. \item \textbf{Cash-out Option: } As all the private keys are stored encrypted on the server, we need a way to cash out the available Bitcoin and send it to another Bitcoin address. It is possible to retrieve the private key of each Bitcoin address separately from the tool, but that does not scale to multiple weekly transactions.
\end{itemize} We use Bitcoin-prices to set Bitcoinaverage.com prices as our main source of price conversion; it also provides useful interface tools, such as the ability to switch between currencies by clicking on the price. This allows anyone deploying the system to obtain a fair exchange rate in many different currencies. We use Sweet Alert\footnote{\url{http://t4t5.github.io/sweetalert}} for user-friendly JavaScript alert messages. For data validation, we needed a simple way to inform the employee that there is a mistake to fix; browser-based JavaScript validation saves a round trip to the server. \subsection{Prototyping} With full knowledge of the requirements and a few sketches of the interface, we started developing \textsf{Aunja PoS}\xspace. Although the first prototype was ready to launch within a week, we produced three prototypes in the following month, fixing bugs and adding features based on employee surveys and feedback in each round of prototyping. \subsubsection{PoS main functionalities} The PoS was hosted on a shared hosting service named Host Monster\footnote{\url{http://hostmonster.com}}. They offer low-cost annual plans with PHP and MySQL, which met our requirements. We then worked with Bitcoin SCI to add the database functionality and defined MySQL tables for transaction requests and payments. The remaining tasks involved integrating the open source projects mentioned above into a complete solution package. One of the features added in the second round of prototyping was the ability to show the Bitcoin price in USD in addition to the default CAD, implemented using the Bitcoin-prices library.
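The core of the price-input flow (convert the fiat amount at the retrieved rate, then encode a BIP21 payment request behind the QR code) can be sketched as follows. This is a Python sketch for illustration only; the production system is written in PHP, the exchange rate is hard-coded here in place of the live Bitcoinaverage lookup, and the address is a dummy placeholder.

```python
from decimal import Decimal, ROUND_UP
from urllib.parse import urlencode, quote

SATOSHI = Decimal("0.00000001")  # smallest Bitcoin unit

def fiat_to_btc(fiat_amount: str, btc_price_fiat: str) -> Decimal:
    """Convert a fiat price to BTC at the given rate, rounding up to a satoshi
    so the business never receives less than the sale price."""
    btc = Decimal(fiat_amount) / Decimal(btc_price_fiat)
    return btc.quantize(SATOSHI, rounding=ROUND_UP)

def payment_uri(address: str, btc_amount: Decimal, label: str) -> str:
    """BIP21 'bitcoin:' URI, the form customer wallets expect behind a QR code."""
    params = urlencode({"amount": f"{btc_amount:f}", "label": label},
                       quote_via=quote)
    return f"bitcoin:{address}?{params}"

# Hypothetical values: a fixed CAD rate instead of the live feed, dummy address.
amount = fiat_to_btc("4.50", "450.00")  # a 4.50 CAD coffee at 450 CAD/BTC
uri = payment_uri("1BitcoinEaterAddressDontSendf59kuE", amount, "Cafe Aunja")
```

The employee types only the dollar amount; everything after that (rate lookup, conversion, QR encoding) happens without exposing any Bitcoin terminology, matching the training requirement above.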
We could have implemented a drop-down menu with all the caf\'{e}'s menu items, but when we discussed this solution with the caf\'{e} owner, he noted that the items on the menu might not stay the same throughout the year and that prices might also change, so that approach was unsuitable for this business, although it might be a good option for an e-commerce site. \subsubsection{Private reporting page} Another aspect of the requirements was a reporting page, based on feedback from the caf\'{e}'s owner and his preferences. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{fig/report_page.png} \caption{Report Page} \label{fig:report_page} \end{figure*} One of the important fields added later to the report page was the ``Sale Dollar Amount.'' Bitcoin's price is volatile compared to other currencies, and the caf\'{e} owner did not want to risk losing money by accepting Bitcoin. So we agreed to lock the price of each sale at sale time, as if he were selling his products for cash.\footnote{This method is now one of the common methods used by Bitcoin payment processors.} Thus, in the second prototype of the report page, this field was added for accounting purposes. Another feature request was the ability to check each transaction on a blockchain explorer and to decrypt and export the private keys of addresses that hold a balance. This was implemented in the admin report page. \textsf{Aunja PoS}\xspace has been made open source and available to the public\footnote{\url{https://github.com/shayanb/Bitcoin-PoS-PHP}} under the GNU General Public License v2, and has already been used in some other small businesses. \subsection{Training} There is no jargon or technical knowledge required to use \textsf{Aunja PoS}\xspace, but some details specific to Bitcoin transactions have to be taught to the employees so that they can recover from human errors while a transaction is being processed.
In addition to in-person training with every employee, a manual was made (Figure~\ref{fig:payment_manual}) and attached to the cashier's counter for future reference by all caf\'{e} employees. \begin{figure}[htb!p] \centering \includegraphics[scale=0.1]{fig/Payment_manual.png} \caption{PoS - Step by step manual for Bitcoin payments} \label{fig:payment_manual} \end{figure} \section{Real-world Deployment} Caf\'{e} Aunja started accepting Bitcoin with \textsf{Aunja PoS}\xspace on Oct 23, 2014, and to our knowledge was the first caf\'{e} in eastern Canada to accept Bitcoin. \subsection{Lessons learned} One missing feature that should be implemented in such a system is a secure fast verification method. In early Bitcoin PoS designs, for each payment the customer needs to wait 10 minutes on average for the transaction to be confirmed and included in the blockchain. We sidestep this issue by flagging a transaction as successful as soon as it is broadcast to the Bitcoin network, also known as a 0-confirmation transaction. This works for a PoS in a caf\'{e}, as the amount of each transaction is small and accepting 0-confirmation transactions is not significantly riskier than risking a credit card chargeback or even a customer leaving the store without paying. Future work might consider PoS devices for the Bitcoin Lightning Network~\cite{poon2015bitcoin}. It remains an open problem to remedy the risk for higher-value transactions and prevent double-spend attacks~\cite{karame2012two,bamert2013have}. Bitcoin and Bitcoin transactions are still new concepts for most people. We fielded countless questions from customers about what Bitcoin is and how it works; many became more interested in learning about Bitcoin after observing a payment made with the Bitcoin PoS, largely because no personal information is revealed with each payment.
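In code, the 0-confirmation flagging described above reduces to watching an explorer's view of pending transactions for a payment to the sale's address. The Python sketch below illustrates the check only; the response schema is hypothetical (real explorer APIs differ), and a deployment would poll the explorer over HTTPS every few seconds until the check succeeds, then mark the sale paid on both the employee and customer screens.

```python
import json

def seen_unconfirmed(txs: list, address: str, min_satoshi: int) -> bool:
    """Return True once any transaction paying at least `min_satoshi` to
    `address` appears in the explorer response. The `confirmations` field is
    deliberately ignored: broadcast alone is enough at cafe volumes (0-conf).
    Assumed (hypothetical) schema per transaction:
    {"confirmations": int, "outputs": [{"address": str, "value": int}, ...]}"""
    for tx in txs:
        paid = sum(out["value"] for out in tx.get("outputs", [])
                   if out.get("address") == address)
        if paid >= min_satoshi:
            return True
    return False

# A canned explorer response standing in for a live poll: one unconfirmed
# transaction paying 0.01 BTC (1,000,000 satoshi) to a dummy address.
mempool = json.loads('[{"confirmations": 0, "outputs": '
                     '[{"address": "1DemoAddr", "value": 1000000}]}]')
assert seen_unconfirmed(mempool, "1DemoAddr", 1000000)
```

Accepting the payment at this point trades double-spend risk for speed, which is exactly the trade-off argued for above for small in-person sales.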
Another interesting lesson is the concept of a locked price: the bitcoin price of each sale is locked to the exact exchange rate at the time of the transaction. This makes accepting Bitcoin payments risk-free for the business, provided that bitcoin-to-fiat conversion is done at the locked price, either at monthly intervals or when a threshold is reached (\textit{e.g.,}\xspace 100 dollars).
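The locked-price bookkeeping can be sketched as below. All names and the threshold value are illustrative assumptions, not details of the deployed system; the point is that the fiat amount owed to the business is fixed at sale time and is independent of later exchange-rate moves.

```python
# Sketch of locked-price accounting: each sale records the fiat amount and
# the exchange rate at sale time; conversion to fiat is triggered monthly or
# when the accumulated fiat value passes a threshold.

THRESHOLD_USD = 100.0  # illustrative conversion threshold


class LockedPriceLedger:
    def __init__(self):
        self.sales = []  # list of (btc_amount, locked_usd) pairs

    def record_sale(self, usd_price, btc_usd_rate):
        btc_amount = usd_price / btc_usd_rate  # customer pays this many BTC
        self.sales.append((btc_amount, usd_price))
        return btc_amount

    def accrued_usd(self):
        # fiat value owed to the business, unaffected by later BTC volatility
        return sum(usd for _, usd in self.sales)

    def should_convert(self):
        return self.accrued_usd() >= THRESHOLD_USD
```

A \$5 sale at a \$500/BTC rate charges the customer 0.01 BTC, and the ledger carries \$5 regardless of what the exchange rate does afterwards.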
\subsection*{Effective Hamiltonian of the system} \label{subsec:formalism} \noindent Consider the circuit with four connected superconducting islands and the corresponding effective lumped-element circuit model in Figure \ref{fig:circuit}. The top-bottom symmetry between the capacitors and the Josephson junctions cancels the direct exchange interaction mediated by the capacitances and the leading-order term from the Josephson junctions. On the other hand, we choose an asymmetry in the inductive couplings such that the difference between them controls the Heisenberg-like exchange coupling, which can be made arbitrarily small if so desired. Furthermore, while the exchange interaction mediated by the leading-order Josephson term ($E_{J1}/2$ in Supplementary Note 1) is canceled by symmetry, the dispersive ($ZZ$) coupling survives. The couplings are defined in Eq. (A61) and, as seen in Supplementary Note 2, a $ZZ$ coupling strength of $\sim \SI{30}{\mega\hertz} $ is realistic. Moreover, the $ZZ$ coupling can be tuned in situ by an external flux. \begin{figure*}[htb!] \centering \includegraphics[width=\textwidth]{realistic_circuit_plus_lumped.pdf} \caption{ \textbf{a} Sketch of a possible physical implementation of the proposed circuit. Each colored box is a superconducting island corresponding to a node in a lumped-element circuit model. Josephson junctions are shown schematically as yellow crosses. Bent black wires are inductors. The numbered colored lines are controls for readout and driving of the circuit: 1 and 3 are the flux lines for frequency tuning of the outer qubits, 2 and 4 are resonators capacitively coupled to the left and right qubits, and lines 5 and 6 are control and driving lines of the two middle islands forming the middle qutrit. \textbf{b} Effective lumped-element circuit scheme of the same circuit.
The four nodes in the system are shown as dots and Josephson junctions are shown as crosses.} \label{fig:circuit} \end{figure*} Using the effective lumped-element circuit of Figure \ref{fig:circuit}b and following the standard procedure for circuit quantization \cite{Devoret,transmon_original}, we derive the Hamiltonian of the system involving a suitable set of variables for the relevant dipole modes of the circuit, as detailed in Methods and in more detail in Supplementary Note 1. The Hamiltonian of the resulting qubit-qutrit-qubit system shown in Figure \ref{fig:diagram} takes the form: \begin{equation} \label{eq:H_full} \begin{split} H = & \frac{1}{2} \Delta_L \sigma^z_L + \Delta_M \ket{1} \bra{1} + \left( \Delta_M + \delta_M \right) \ketbra{2} + \frac{1}{2} \Delta_R \sigma^z_R \\ & + J_{LM_{01}}\left( \sigma_L^- \ketbra{1}{0} + \sigma_L^+ \ketbra{0}{1} \right) \\ & + J_{RM_{01}}\left( \sigma_R^-\ketbra{1}{0} + \sigma_R^+\ketbra{0}{1} \right) \\ & + J_{LM_{12}} \left( \sigma_L^-\ketbra{2}{1} + \sigma_L^+\ketbra{1}{2} \right) \\ & + J_{RM_{12}}\left( \sigma_R^-\ketbra{2}{1} + \sigma_R^+\ketbra{1}{2} \right) \\ & + J_{LM}^{(z)} \sigma^z_L\left( D_1 \ketbra{1} + D_2 \ketbra{2} \right) \\ & + J_{RM}^{(z)}\sigma^z_R \left( D_1\ketbra{1} + D_2\ketbra{2} \right) , \end{split} \end{equation} where $\sigma_{\alpha}^+$ and $\sigma_{\alpha}^-$ are the spin-1/2 raising and lowering operators for the left ($\alpha =L$) and right ($\alpha =R$) qubits, $\sigma_{\alpha}^z $ is the Pauli $Z$ operator, and $\Delta_{L,R}$ are the energy differences between the spin-up and spin-down states of the corresponding qubits. The states of the qutrit are denoted by $\ket{j}$ ($j =0,1,2$), $\Delta_{M}$ is the energy of state $\ket{1}$ and $\Delta_{M} + \delta_{M}$ is the energy of state $\ket{2}$, making the anharmonicity equal to $ \Delta_M - \delta_M $, with the energy of the ground state $\ket{0}$ set to zero.
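As an illustration, the Hamiltonian \eqref{eq:H_full} can be assembled numerically as a $2\otimes3\otimes2$ tensor-product matrix. The sketch below uses placeholder parameter values (not those of the actual device) and checks a structural property implied by the form of \eqref{eq:H_full}: the exchange and dispersive terms conserve the total excitation number when the qutrit state $\ket{j}$ is counted as $j$ excitations.

```python
import numpy as np

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Qubit operators in the basis (|down>, |up>); qutrit kets |0>, |1>, |2>.
I2, I3 = np.eye(2), np.eye(3)
sz = np.diag([-1.0, 1.0])
sp = np.array([[0, 0], [1, 0]], dtype=float)   # sigma^+ = |up><down|
sm = sp.T                                      # sigma^-

def ketbra(i, j):                              # qutrit |i><j|
    m = np.zeros((3, 3))
    m[i, j] = 1.0
    return m

# Placeholder parameters (arbitrary MHz-scale numbers for illustration only).
DL, DR, DM, dM = 5000.0, 5000.0, 5200.0, 4800.0
J01, J12, Jz, D1, D2 = 15.1, 19.4, 30.0, 2.2, 3.8

Pz = D1 * ketbra(1, 1) + D2 * ketbra(2, 2)
H = (0.5 * DL * kron3(sz, I3, I2) + 0.5 * DR * kron3(I2, I3, sz)
     + DM * kron3(I2, ketbra(1, 1), I2) + (DM + dM) * kron3(I2, ketbra(2, 2), I2)
     + J01 * (kron3(sm, ketbra(1, 0), I2) + kron3(sp, ketbra(0, 1), I2))
     + J01 * (kron3(I2, ketbra(1, 0), sm) + kron3(I2, ketbra(0, 1), sp))
     + J12 * (kron3(sm, ketbra(2, 1), I2) + kron3(sp, ketbra(1, 2), I2))
     + J12 * (kron3(I2, ketbra(2, 1), sm) + kron3(I2, ketbra(1, 2), sp))
     + Jz * kron3(sz, Pz, I2) + Jz * kron3(I2, Pz, sz))

# Total excitation number: qubits contribute 0/1, qutrit |j> contributes j.
nq = np.diag([0.0, 1.0])
N = kron3(nq, I3, I2) + kron3(I2, np.diag([0.0, 1.0, 2.0]), I2) + kron3(I2, I3, nq)

assert np.allclose(H, H.T)        # Hermitian (real symmetric here)
assert np.allclose(H @ N, N @ H)  # excitation number conserved
```

Each exchange term, e.g. $\sigma_L^-\ketbra{1}{0}$, removes one qubit excitation while adding one qutrit excitation, so $[H, N] = 0$ as verified above.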
$\Delta_{M}$ and $\delta_{M}$ can be tuned dynamically by an external flux if additional flux lines are added to the circuit, or by an AC-Stark shift stemming from off-resonant microwave driving \cite{Blais_cQED} using lines 5 and 6 in Figure \ref{fig:circuit}a (see Supplementary Note 1). We note that the AC-Stark shift is included here for additional dynamical tuning of the circuit but it is not essential for the gate implementations discussed below. \begin{figure}[htb!] \centering \includegraphics[width=0.99\columnwidth]{diagram.pdf} \caption{ Energy diagram of the system of two qubits (left, $L$, and right, $R$) and a qutrit (middle, $M$) described by the Hamiltonian in Equation \eqref{eq:H_full}. Also shown are the exchange couplings $J_{\alpha M}$ and state-dependent energy shifts $J_{\alpha M}^{(z)}$ of \eqref{eq:H_full}. The $D_i$ depend on the state of the qutrit $\ket{i}$, with $D_0=0$ and, typically, $D_{1} \gtrsim 2$ and $D_{2} \lesssim 4$.} \label{fig:diagram} \end{figure} The exchange ($XY$) coupling strengths between the qubits and the qutrit are given by $J_{\alpha M}$. Typically, the coupling $J_{\alpha M_{12}}$ ($ \alpha = L,R$) to the $\ket{1} \leftrightarrow \ket{2}$ transition is stronger than the coupling $J_{\alpha M_{01}}$ to the $\ket{0} \leftrightarrow \ket{1}$ transition by a factor $\sim \sqrt{2}$. The coefficients $J_{\alpha M}^{(z)}$ determine the dispersive ($ZZ$) interaction between the qubits and the qutrit, with $D_1 \gtrsim 2$ and $D_2 \lesssim 4$, the two parameters converging to 3 as the mixing is increased via the off-resonant driving. For clarity, these parameters are shown in Figure \ref{fig:diagram}. In a perfectly left-right symmetric circuit, we have $\Delta_L = \Delta_R$ and $J_{L M} = J_{R M}$ for both qutrit transitions, which is assumed below unless otherwise stated.
For typical experimental parameters, the coupling strengths $J$ are in the range of a few to some tens of MHz, while the energies $\Delta$ and $\delta$ are in the $\sim\SI{10}{\giga\hertz}$ range. We choose realistic values of the experimental parameters so as to stay within the transmon regime \cite{transmon_original}. Apart from the intrinsic dynamics of the system, we will employ an external microwave (mw) field of (variable) frequency $\omega_{\mathrm{mw}}$ to drive the system. Physically, the driving can be applied through resonators 2 and 4 capacitively coupled to the outer qubits, and control lines 5 and 6 capacitively coupled to the qutrit, as shown in Figure \ref{fig:circuit}a. The microwave field induces transitions between the qubit and qutrit states as described by the Hamiltonian \begin{equation} \label{eq:mw_driving} \begin{split} H_{\mathrm{mw}} =& \cos(\omega_{\mathrm{mw}}t) (\Omega_L \sigma^+_L + \Omega_R \sigma^+_R \\ &+ \Omega_{1} \ketbra{0}{1} + \Omega_{2} \ketbra{1}{2} + \mathrm{H.c.} ), \end{split} \end{equation} where the $\Omega$'s are the corresponding Rabi frequencies. Moreover, multifrequency pulses generated by an appropriate microwave source and directed to the qutrit via the control lines can be used to dynamically tune the qutrit transitions (see Supplementary Note 1 for the general treatment of transitions of a capacitively coupled qutrit). We note that, unlike for our qubits and qutrit, in flux qubits the optical selection rules may depend on the magnetic flux \cite{PhysRevLett.95.087001_selection_rules_flux_qubit}. Our system can be used to achieve many quantum information tasks, examples of which are described below. The qutrit can encode a qubit in either states $\left(\ket{0},\ket{1}\right)$ or states $\left(\ket{0},\ket{2}\right)$. This is solely a matter of convenience, and it is straightforward to toggle between these two encodings by applying a $\pi$-pulse on the $\ket{1}\leftrightarrow\ket{2}$ transition.
\subsection*{Qutrit dissociation and entangled state preparation} \label{subsec:dissociation} \noindent We now discuss how to deterministically prepare entangled states in the setup, which is of great importance for quantum computation and information tasks \cite{QC_implementation,entanglement_role}. In our system, we can employ the qutrit to deterministically prepare an entangled Bell state between the outer qubits $ \frac{1}{\sqrt{2}}(\ket{\downarrow \downarrow} + \ket{\uparrow \uparrow})$, as detailed below. First, we tune the energy levels of the qutrit to make its two transitions $ \ket{0} \leftrightarrow \ket{1} $ and $ \ket{1} \leftrightarrow \ket{2} $ non-resonant with the transitions $ \ket{\downarrow} \leftrightarrow \ket{\uparrow}$ of the qubits, i.e., we require that $ \abs{\Delta_M - \Delta_\alpha} \gg J_{\alpha M_{01}} $ and $ \abs{\delta_M - \Delta_\alpha} \gg J_{\alpha M_{12}} $. Starting from the ground state $\ket{0}$, we produce the superposition state $\frac{1}{\sqrt{2}}(\ket{0} + \ket{2}) $ of the qutrit by external driving, employing the STIRAP (STImulated Raman Adiabatic Passage) sequence of pulses \cite{STIRAP}, as has also been proposed \cite{PhysRevLett.100.113601} and implemented \cite{Kumar2016} in superconducting circuits before. Namely, we drive the transitions $ \ket{0} \leftrightarrow \ket{1} $ and $ \ket{1} \leftrightarrow \ket{2} $ with resonant mw-pulses of Rabi frequencies $\Omega_1$ and $\Omega_2$. The $\Omega_2$ pulse precedes the $\Omega_1$ pulse, and we adjust the overlap between the pulses so as to obtain the transfer to state $\ket{2}$ with minimal population of the intermediate state $\ket{1}$. The two pulses are suddenly turned off when their amplitudes are equal, resulting in the desired superposition state. The dynamics of the qutrit under the STIRAP driving is shown in Figure \ref{fig:dissociation} (a), with the inset showing the pulse sequence.
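The half-STIRAP sequence above can be reproduced with a minimal numerical sketch: a resonant three-level ladder driven by two counterintuitively ordered Gaussian pulses, truncated when the envelopes cross. The pulse widths, centers, and peak Rabi frequency below are illustrative choices, not the experimental values; adiabatic following of the dark state then leaves $\ket{0}$ and $\ket{2}$ roughly equally populated with little population in $\ket{1}$.

```python
import numpy as np

# Counterintuitive half-STIRAP on a resonant three-level ladder, stopped at
# the crossing of the two Gaussian envelopes (illustrative parameters).
Omega0 = 2 * np.pi * 20.0            # peak Rabi frequency, rad/us (20 MHz)
sigma, t1c, t2c = 0.10, 0.42, 0.30   # widths/centers in us; Omega2 comes first

def env(t, tc):
    return Omega0 * np.exp(-(t - tc) ** 2 / (2 * sigma ** 2))

t_stop = 0.5 * (t1c + t2c)           # the two envelopes are equal here
ts = np.linspace(0.0, t_stop, 2000)
dt = ts[1] - ts[0]

psi = np.array([1.0, 0.0, 0.0], dtype=complex)   # start in |0>
for t in ts:
    O1, O2 = env(t, t1c), env(t, t2c)
    H = 0.5 * np.array([[0, O1, 0], [O1, 0, O2], [0, O2, 0]], dtype=complex)
    w, V = np.linalg.eigh(H)                     # exact propagator per step
    psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))

P = np.abs(psi) ** 2
# Dark state ~ O2|0> - O1|2>: at the crossing it is an equal superposition.
assert abs(P.sum() - 1.0) < 1e-9
assert P[1] < 0.15 and P[0] > 0.3 and P[2] > 0.3
```

The dark state $\propto \Omega_2\ket{0} - \Omega_1\ket{2}$ is annihilated by the ladder Hamiltonian at all times, which is why the intermediate level stays essentially unpopulated.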
Next, the Bell state is obtained as $ \frac{1}{\sqrt{2}}\ket{\downarrow \downarrow} (\ket{0} + \ket{2}) \to \frac{1}{\sqrt{2}}(\ket{\downarrow \downarrow} + \ket{\uparrow \uparrow}) \ket{0}$ via ``dissociation'' of the qutrit excitation $\ket{2}$ into two qubit excitations $\ket{\uparrow \uparrow}$. To this end, we set $ \Delta_M + \delta_M = \Delta_L + \Delta_R$ and choose $\abs{\Delta_M - \Delta_\alpha} > J_{\alpha M_{01}}$ via tuning the frequencies of the outer qubits with flux control and the qutrit with the dynamical driving. Note that this condition applies when $J_{\alpha M_{01}} = J_{\alpha M_{12}}$. If the exchange coefficients are different, as is normally the case, the qutrit is moved out of the two-photon resonance by unequal second order level shifts $|J_{\alpha M_{01}}|^2/(\Delta_M - \Delta_\alpha) \neq |J_{\alpha M_{12}}|^2/(\delta_M - \Delta_\alpha)$, which can be compensated for by adjusting $\Delta_M$ or $\delta_M$. Making the intermediate state $\ket{1}$ non-resonant precludes its population but prolongs the dissociation, which results in a more pronounced effect of the noise and relaxations. The dissociation dynamics $ \ket{\downarrow 2 \downarrow} \rightarrow \ket{\uparrow 0 \uparrow} $ is shown in Figure \ref{fig:dissociation}. \begin{figure} \includegraphics[width=0.99\columnwidth]{dissociation_and_STIRAP.pdf} \caption{\label{fig:dissociation}\textbf{a)} Populations of states $\ket{0}$, $\ket{1}$ and $\ket{2}$ of the middle qutrit during the half STIRAP in the case of off-resonant qutrit levels and $\max(\Omega_{1,2})/2\pi = \SI{20}{\mega\hertz}$. The inset shows the envelopes of the mw pulses. \textbf{b)} Dissociation of the initial state $\ket{\downarrow 2 \downarrow} $ into the final state $\ket{\uparrow 0 \uparrow} $ with the two-photon resonance $\Delta_M + \delta_M = \Delta_L + \Delta_R $ while the intermediate states $\ket{\uparrow 1 \downarrow}$, $\ket{\downarrow 1 \uparrow}$ are off-resonant. 
We used the parameters $J_{\alpha M_{01}}/2\pi \simeq \SI{15.1}{\mega\hertz} $ and $J_{\alpha M_{12}}/2\pi \simeq \SI{19.4}{\mega\hertz}$. We also include finite coherence times as described in Methods.} \end{figure} Using the pairwise $ZZ$-interactions, the fully entangled three-particle GHZ (Greenberger--Horne--Zeilinger) state $\frac{1}{\sqrt{2}}(\ket{\downarrow 0 \downarrow} + \ket{\uparrow 1 \uparrow})$ can be obtained from the prepared Bell state $\frac{1}{\sqrt{2}}(\ket{\downarrow 0 \downarrow} + \ket{\uparrow 0 \uparrow})$ by the external driving of the middle qutrit. To this end, we apply to the circuit a weak $\pi$ pulse $\Omega_1$ which is resonant only for the $\ket{\uparrow 0 \uparrow} \leftrightarrow \ket{\uparrow 1 \uparrow}$ transition and non-resonant for the $\ket{\downarrow 0 \downarrow} \leftrightarrow \ket{\downarrow 1 \downarrow}$ transition, due to the $ZZ$ interactions with the strengths $J_{\alpha M}^{(z)} \gg \Omega_1$. Alternatively, we can encode a qubit in the $\left(\ket{0},\ket{2}\right)$ states of the qutrit and produce a different maximally entangled state $ \frac{1}{\sqrt{2}}(\ket{\downarrow 2 \downarrow} + \ket{\uparrow 0 \uparrow})$, equivalent to the GHZ state above. Starting from the simple initial state $\ket{\downarrow 2 \downarrow}$, we use only the intrinsic system dynamics by tuning the parameters until $\Delta_L = \Delta_R = \Delta_M = \delta_M$ and $D_2J_{\alpha M}^{(z)} = \frac{2\sqrt{6}J_{\alpha M_{01}}}{\sqrt{(n\pi/c_n)^2 - 1}}$ for $n=1,2,3,\dots$ and $c_n = \cos^{-1}\!\left(\frac{(-1)^{n+1}}{8}\right) $. Here, $n$ is a parameter controlling at which oscillation between the states $ \ket{\downarrow 2 \downarrow} $ and $\ket{\uparrow 0 \uparrow} $ their equal superposition is obtained (lower $n$ is quicker). The disadvantage of this scheme is that it requires very precise tuning of the interactions $J_{\alpha M}^{(z)}$.
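The tuning condition above is easy to evaluate numerically. The sketch below (in units of $J_{\alpha M_{01}}$) confirms that later oscillations (larger $n$) require a weaker $ZZ$ interaction, the trade-off being a longer preparation time.

```python
import numpy as np

# Required value of D2 * J^{(z)} from the condition
#   D2 * Jz = 2*sqrt(6)*J01 / sqrt((n*pi/c_n)**2 - 1),
#   c_n = arccos((-1)**(n+1) / 8),
# expressed in units of J01 (J01 = 1).

def required_coupling(n, J01=1.0):
    c_n = np.arccos((-1.0) ** (n + 1) / 8.0)
    return 2.0 * np.sqrt(6.0) * J01 / np.sqrt((n * np.pi / c_n) ** 2 - 1.0)

vals = [required_coupling(n) for n in (1, 2, 3)]
# Larger n relaxes the required ZZ strength but the preparation is slower.
assert vals[0] > vals[1] > vals[2] > 0
```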
In contrast, for the method above, only the frequencies have to be adjusted, which is easier using the dynamical tuning or an equivalent flux tuning. Further details are given in Supplementary Note 3. \subsection*{Toffoli and CCZ gates} \label{subsec:CCNOT} \noindent The controlled-controlled \textsc{not} (\textsc{ccnot}) gate, also called the Toffoli gate, is a reversible and universal 3-bit gate for classical computation \cite{Toffoli1980}. It performs a \textsc{not} (bit-flip) operation on the target bit if the two control bits are in state `1', and does nothing otherwise. The Toffoli gate is an important element in many quantum algorithms, such as quantum error correction \cite{PhysRevLett.81.2152} and Shor's algorithm \cite{doi:10.1137/S0097539795293172}. It has been implemented in systems ranging from trapped ions \cite{PhysRevLett.102.040501} to superconducting circuits \cite{implementation_toffoli}, including a proposal for an implementation with static control optimized with machine learning \cite{Banchi2016}. We can implement the \textsc{ccnot} gate with the left qubit and the middle qutrit acting as controls and the right qubit as the target. The state of the right qubit is then inverted only if the left qubit is in the spin up (excited) state $\ket{\uparrow}$ and the qutrit is in the ground state $\ket{0}$. The second control qubit is encoded in the qutrit states $\ket{0}$ and $\ket{2}$. The quantum \textsc{ccnot} gate is realized by first executing a double-controlled phase (\textsc{ccz}) gate that shifts the phase of the state $\ket{\uparrow0\uparrow}$ by $\pi$ (sign change), while nothing happens if either of the outer qubits is in the spin-down state or if the qutrit is in state $\ket{2}$. The \textsc{ccnot} can then be obtained by the transformation: $\operatorname{\textsc{ccnot}} = \mathcal{H} \cdot \mbox{\textsc{ccz}} \cdot \mathcal{H}$, where $\mathcal{H}$ is the Hadamard gate that acts on the target qubit $R$.
In practice, the Hadamard gate can be obtained by a $ \pi/2 $ rotation about the $y$ axis. \begin{figure} \centering \makebox[0.5\columnwidth][c]{\includegraphics[width=0.9905\columnwidth]{CCPHASE.pdf}} \caption{Numerical simulation of the implementation of the \textsc{ccz} gate in the rotating frame. The phase of the right qubit is flipped, $\ket{0_H} \to \ket{1_H}$, if the left qubit is in state $\ket{\uparrow}$ and the qutrit is in state $\ket{0}$, otherwise no change occurs as exemplified in a) for the state $\ket{\uparrow 2 \, 0_H} $ and in b) for the state $\ket{\downarrow 0 \, 0_H} $. A subsequent Hadamard gate on the right qubit will yield the desired \textsc{ccnot} gate. The standard circuit representation of the Toffoli gate is shown as an inset in the upper panel of the figure. See Supplementary Material for the parameters used in the simulation.} \label{fig:CCPHASE} \end{figure} The \textsc{ccz} gate is implemented by choosing suitable parameters such that the transitions between the qubit and qutrit states are non-resonant, while $J_{\alpha M}^{(z)}$ ($>\SI{10}{\mega\hertz}$) is large. We apply a weak microwave field on the qutrit transition $ \ket{\uparrow 0 \uparrow} \leftrightarrow \ket{\uparrow 1 \uparrow}$ with the Rabi frequency $\Omega_1 \ll J_{\alpha M}^{(z)}$. Because of the $ZZ$ interactions, which yield a state-dependent frequency shift of the qutrit, the microwave field frequency can be chosen such that it is resonant only when both outer qubits are in the spin-up state. The microwave $2\pi$-pulse then results in the transformation $ \ket{0} \to i \ket{1} \to - \ket{0}$ that leads to the double conditional $\pi$ phase change of (only) the state $\ket{\uparrow 0 \uparrow}$. For simplicity, we have here used a standard square-pulse control. 
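The Hadamard conjugation that turns \textsc{ccz} into \textsc{ccnot} can be checked directly. The sketch below uses a plain three-qubit abstraction, with the qutrit's two computational levels standing in for the middle control and the phase flip placed on the all-ones state (the labeling of the `on' states differs from the circuit's encoding, but the algebra is the same).

```python
import numpy as np

# Verify CCNOT = (I x I x H) . CCZ . (I x I x H) in a three-qubit abstraction.
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

CCZ = np.eye(8)
CCZ[7, 7] = -1                       # pi phase only on |1 1 1>

CCNOT = np.eye(8)
CCNOT[[6, 7], :] = CCNOT[[7, 6], :]  # flip target when both controls are 1

Hr = np.kron(np.kron(I2, I2), H)     # Hadamard on the target (rightmost) qubit
assert np.allclose(Hr @ CCZ @ Hr, CCNOT)
```

On the doubly-controlled subspace this is just the single-qubit identity $\mathcal{H} Z \mathcal{H} = X$; elsewhere the two Hadamards cancel.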
In a real-life implementation, a DRAG pulse \cite{PhysRevLett.103.110501_DRAG} or similar optimized pulses could be used, suppressing leakage to, and phase errors from, other levels and thus further improving the fidelity. In Figure \ref{fig:CCPHASE} we show the results of the numerical simulations of the \textsc{ccz} gate in the Hadamard basis for the right qubit (defined as $\ket{0_H} = \ket{+} = (\ket{\downarrow} + \ket{\uparrow})/\sqrt{2} $ and $ \ket{1_H} = \ket{-} = (\ket{\downarrow} - \ket{\uparrow})/\sqrt{2} $). Subsequent application of the Hadamard gate to the right qubit will complete the \textsc{ccnot} gate. We note that because of the symmetry of the driving, we could have also chosen the qutrit state $\ket{1}$ instead of $\ket{0}$ as the `open' state, but here we can view this merely as an ancillary state. \begin{figure*}[tb!] \includegraphics[width=0.9\textwidth]{CSWAP_all.pdf} \caption{\label{fig:CSWAP} \textbf{a)} Numerical simulations of the \textsc{acswap} (almost \textsc{cswap}) gate for different computational basis states, with the exchange interaction $J_{\alpha M_{12}}$ resonant for time $T = \pi/(\sqrt{2}\,J_{\alpha M_{12}})$. The standard circuit representation of the Fredkin gate is shown as an inset in the top part of the figure. \textbf{b)} Numerical simulation of the full \textsc{cswap} gate for the initial superposition state $ \left[\cos(\theta_1)\ket{\uparrow} + \text{e}^{i\phi_1}\sin(\theta_1)\ket{\downarrow}\right] \ket{1} \left[\cos(\theta_2)\ket{\uparrow} + \text{e}^{i\phi_2}\sin(\theta_2)\ket{\downarrow}\right] $ with $\theta_1 = \pi/4,\ \phi_1 = 3\pi/4,\ \theta_2 = 3\pi/4 \qq{and} \phi_2 = \phi_1 $. In part 1, we perform the \textsc{acswap} operation during time $T_1 = \pi/(2J_{\alpha M_{01}})$ with the parameters as in \textbf{a}.
In part 2 we perform the \textsc{ccz} gate during time $T_2 = 2\pi/ \Omega_{2}$ with the resonant mw field of frequency $ \omega_\mathrm{mw} = \delta_M - 2J_{\alpha M}^{(z)}$. The full \textsc{cswap} fidelity of this state is $>0.98$, with finite coherence times included. See Supplementary Material for the parameters used in the simulation.} \end{figure*} \subsection*{Fredkin gate} \label{subsec:CSWAP} \noindent Another classically universal 3-bit gate is the Fredkin gate, whose quantum analog is the controlled \textsc{swap} (\textsc{cswap}) gate. Its effect is to swap the states of the two qubits, $\ket{\uparrow \downarrow} \leftrightarrow \ket{\downarrow \uparrow}$, conditional upon the state of a control qubit, here encoded in the qutrit. We now use the two lowest states ($\ket{0}$ and $\ket{1}$) of the qutrit to encode the qubit such that the excited state $\ket{1}$ is `on' and the ground state $\ket{0}$ is `off'. To realize \textsc{cswap}, we tune the energy levels of the qutrit such that the transition $\ket{1} \leftrightarrow \ket{2}$ is resonant with the qubit transitions $\ket{\uparrow} \leftrightarrow \ket{\downarrow}$, i.e. $\Delta_L \simeq \delta_M \simeq \Delta_R$. Simultaneously, the qutrit transition $\ket{0} \leftrightarrow \ket{1}$ is strongly detuned, $\abs{\Delta_M - \Delta_{L,R}} \gg J_{\alpha M_{01}}$. We then keep the exchange interaction $J_{\alpha M_{12}} \gg J_{\alpha M}^{(z)}$ resonant for a time $T = \pi/(\sqrt{2}\,J_{\alpha M_{12}})$. If the qutrit is in state $\ket{0}$, the qubits remain in their initial states due to the absence of resonant transitions. But if the qutrit is in state $\ket{1}$, it induces a swap between the qubit states, $\ket{\uparrow 1 \downarrow} \leftrightarrow \ket{\downarrow 1 \uparrow}$, via the resonant intermediate state $\ket{\downarrow 2 \downarrow}$ involving the qutrit excitation.
(Resonant swap between the qubits would also occur for the qutrit initially in state $\ket{2}$, with the intermediate state being $\ket{\uparrow 1 \uparrow}$.) This is illustrated in Figure \ref{fig:CSWAP} (a). As can also be seen in Figure \ref{fig:CSWAP} (a), however, the initial state $\ket{\downarrow 1 \downarrow}$ has trivial dynamics, unlike the rest of the swapped states, which attain a $\pi$ phase shift during the interaction time $T$. This means that we have a \textsc{swap} operation only up to a conditional phase for an arbitrary superposition input state. This is related to the phase shift of the swapped terms arising in the i\textsc{swap} gate, obtained by directly coupling two resonant qubits, which has recently attracted great interest \cite{Blais_cQED,iSWAP_IBM}. In our case with the qutrit mediating the swap, only one state has a sign that needs correction, similarly to what Kivlichan et al. have recently called the ``fermionic simulation gate'' \cite{SWAP_google}. We can therefore easily mitigate this problem by using the \textsc{ccz} gate (see Subsection ``Toffoli and CCZ gates'') to attain the $\pi$ phase shift of state $\ket{\downarrow 1 \downarrow}$ and obtain the correct \textsc{cswap} gate. In Figure \ref{fig:CSWAP} (b) we show the results of our numerical simulations of the complete \textsc{cswap} protocol, including the conditional resonant \textsc{swap} followed by the \textsc{ccz} gate, with a total fidelity $>\SI{98}{\percent}$. A more detailed analysis is given in Supplementary Note 4. We note that we could have equivalently performed the \textsc{cswap} gate between the two qubits via the resonant qutrit transition $\ket{0} \leftrightarrow \ket{1}$, while the other transition $\ket{1} \leftrightarrow \ket{2}$ is non-resonant. In our scheme, the qutrit has to play the role of control and thus our Fredkin gate is not a universal multi-qubit gate in itself.
We could, however, imagine another qubit with controlled coupling to the qutrit as part of a larger universal circuit. \subsection*{Double-controlled holonomic gate} \label{subsec:holonomic} \begin{figure} \includegraphics[width=0.99\columnwidth]{CCHolonomic_pi_over_4.pdf} \caption{\textbf{a} and \textbf{b} Populations (left vertical axes) as a function of time during the operation of the controlled-controlled holonomic gate in the case of $ \theta = \pi/4 $ and $ \phi = 0 $. Panel \textbf{a} also shows the envelope of the external fields plotted with dotted lines with the corresponding vertical axis on the right of the plot. See Supplementary Material for the parameters used in the simulation.} \label{fig:holonomic_dynamics} \end{figure} \noindent Another concept of importance to quantum computation \cite{nielsen_chuang_2010} is the implementation of general (non-abelian) one-qubit gates of the form (neglecting overall phase factors) \begin{equation} \label{eq:universal_singlequbit} U = \begin{pmatrix} \text{e}^{i\phi_1}\cos(\theta) & \text{e}^{i\phi_2}\sin(\theta) \\ -\text{e}^{-i\phi_2}\sin(\theta) & \text{e}^{-i\phi_1}\cos(\theta) \end{pmatrix}. \end{equation} Together with a non-trivial (entangling) multi-qubit gate, they form a universal set of quantum gates \cite{nielsen_chuang_2010}. We can implement the non-adiabatic one-qubit holonomic gate \cite{Pachos:2012:ITQ:2331123,HQC} with our qutrit, choosing states $\left(\ket{0},\ket{2}\right)$ to encode the qubit. Such gates have the advantage of being robust to parameter fluctuations due to their geometric nature, without the long gate operation times required to satisfy the adiabaticity requirement \cite{PhysRevA.70.042316,doi:10.1142/S0217979201004836}. Holonomic gates have been implemented in a range of different systems \cite{holonomic_realization,PhysRevLett.110.190501}, and their stability has been well tested \cite{PhysRevLett.102.030404,PhysRevLett.112.143603}.
Choosing the same system parameters as for the \textsc{ccz} gate above, we use a driving scheme inspired by \cite{holonomic_realization} and thereby realize the single-qubit gate \begin{equation} \label{eq:holonomic} U(\phi,\theta) = \begin{pmatrix} \cos(\theta) & \text{e}^{i\phi}\sin(\theta) \\ \text{e}^{-i\phi}\sin(\theta) & -\cos(\theta) \end{pmatrix}, \end{equation} with the computational qubit states as the basis. This transformation is less general than \eqref{eq:universal_singlequbit}, but it is still universal for one-qubit rotations. We drive the two transitions $ \ket{0} \leftrightarrow \ket{1} $ and $ \ket{1} \leftrightarrow \ket{2} $ with the external fields having the same Gaussian envelope $\Omega(t)$ but different complex coupling amplitudes $a$ and $b$, i.e. $\Omega_1(t) = a\Omega(t)$ and $\Omega_2(t) = b\Omega(t)$ in Equation \eqref{eq:mw_driving}, satisfying $\abs{a}^2 + \abs{b}^2 = 1 $. The pulse $\Omega(t)$ is turned on at time $t=0$ and turned off at $t=\tau$, such that we get a $2\pi$-pulse, $ \int_0^\tau \Omega(t) \,\text{d}t = 2\pi$. Notice that this condition ensures that we end up with a closed path in parameter space and the gate is indeed holonomic. Starting with the qutrit in the ground state $\ket{0}$, we then obtain the final transformation $U(\phi,\theta)$ of Equation (\ref{eq:holonomic}) acting on the qutrit states $\ket{0}, \ket{2}$. Here $\theta$ and $\phi$ are defined via $\text{e}^{i\phi}\tan({\theta/2}) = a/b $. By using the $J_{\alpha M}^{(z)}$ couplings to shift the qutrit frequencies, we can make the external driving field resonant or not, depending on the states of the outer qubits. This results in a controlled-controlled holonomic gate transforming the state of the qutrit according to \eqref{eq:holonomic} only when the outer (control) qubits are in, e.g., the spin-up state.
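The relation $\text{e}^{i\phi}\tan(\theta/2) = a/b$ and the structure of \eqref{eq:holonomic} can be checked numerically; the drive amplitudes below are arbitrary examples satisfying the normalization constraint.

```python
import numpy as np

# Extract the gate angles from the drive amplitudes a, b (|a|^2 + |b|^2 = 1)
# via e^{i phi} tan(theta/2) = a/b, then build U(phi, theta) of the paper's
# Eq. (holonomic) and verify it is a Hermitian unitary (an involution).
a, b = 0.6 * np.exp(1j * 0.3), 0.8           # example amplitudes, |a|^2 + |b|^2 = 1

ratio = a / b
phi = np.angle(ratio)
theta = 2 * np.arctan(np.abs(ratio))

U = np.array([[np.cos(theta), np.exp(1j * phi) * np.sin(theta)],
              [np.exp(-1j * phi) * np.sin(theta), -np.cos(theta)]])

assert np.allclose(U, U.conj().T)     # Hermitian
assert np.allclose(U @ U, np.eye(2))  # unitary involution: U^2 = I
assert np.isclose(phi, 0.3)
```

Since $U(\phi,\theta)$ is both Hermitian and unitary, it squares to the identity, consistent with it being a rotation by $\pi$ about an axis set by $(\phi,\theta)$.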
We can show that this new gate is universal for quantum computing by first writing it in the three-qubit computational basis with $\ket{6} = \ket{\uparrow 0 \uparrow}$ and $ \ket{7} = \ket{\uparrow 2 \uparrow}$, and the remaining basis states numbered from 0 to 5: \begin{align} U^\text{c}(\phi,\theta) = \left(\begin{array}{c | c} \mathbb{I} & 0 \\ \hline 0 & \begin{smallmatrix} \cos(\theta) & \text{e}^{i\phi}\sin(\theta)^{\phantom{I^2}} \\ \text{e}^{-i\phi}\sin(\theta) & -\cos(\theta) \end{smallmatrix} \\ \end{array}\right), \end{align} where $\mathbb{I}$ is the $6 \times 6$ identity matrix and the superscript c indicates that this is the controlled version of the holonomic gate. We now apply this transformation twice: \begin{align} U^\text{c}(\pi/2,\theta)U^\text{c}(0,0) = \left(\begin{array}{c | c} \mathbb{I} & 0 \\ \hline 0 & \begin{smallmatrix} \cos(\theta) & -i\sin(\theta)^{\phantom{I}} \\ -i\sin(\theta) & \cos(\theta) \end{smallmatrix} \\ \end{array}\right). \end{align} This is equal to the famous Deutsch gate except for a factor $i$ on the $2\times2$ rotation matrix. The Deutsch gate is universal for quantum computation \cite{Deutsch73}, and thus our double-controlled holonomic gate is also universal. An implementation of this gate has previously been proposed using Rydberg atoms \cite{PhysRevApplied.9.051001}, albeit using three laser pulses instead of only two as in our case. In Figure \ref{fig:holonomic_dynamics} we show the evolution of different initial states under gate operation. Evidently, the qutrit rotation is blocked when the left qubit, or both qubits, are in the spin-down state, while the qutrit is rotated according to Equation \eqref{eq:holonomic} when both qubits are in the excited state.
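The composition $U^\text{c}(\pi/2,\theta)\,U^\text{c}(0,0)$ above can be verified directly:

```python
import numpy as np

# Two controlled holonomic gates compose to the Deutsch-like rotation:
# identity on the first six basis states, a rotation on |6>, |7>.
def Uc(phi, theta):
    M = np.eye(8, dtype=complex)
    M[6:8, 6:8] = [[np.cos(theta), np.exp(1j * phi) * np.sin(theta)],
                   [np.exp(-1j * phi) * np.sin(theta), -np.cos(theta)]]
    return M

theta = 0.7                                  # arbitrary rotation angle
prod = Uc(np.pi / 2, theta) @ Uc(0.0, 0.0)   # note: Uc(0, 0) is a Z on the block

expected = np.eye(8, dtype=complex)
expected[6:8, 6:8] = [[np.cos(theta), -1j * np.sin(theta)],
                      [-1j * np.sin(theta), np.cos(theta)]]
assert np.allclose(prod, expected)
```

The first factor contributes the $\pm i\sin\theta$ off-diagonal entries and the second, with $\theta = \phi = 0$, flips the sign of the last column, reproducing the quoted block.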
\begin{figure} \includegraphics[width=0.99\columnwidth]{CHolonomic.pdf} \caption{Populations of states versus $\theta$ ($\phi = 0$) after the application of the controlled-controlled holonomic gate to the initial state $\ket{\uparrow 0 \uparrow}$ (black and green points) and $\ket{\downarrow 0 \uparrow}$ (red points). See Supplementary Material for the parameters used in the simulation.} \label{fig:holonomic} \end{figure} In Figure \ref{fig:holonomic}, we show the populations of the final states for various values of $\theta$, while $\phi = 0$. The theoretical curves $\cos^2\!\theta$ and $\sin^2\!\theta $ from \eqref{eq:holonomic} are also shown, and we observe very good agreement. The final population of the blocked state $ \ket{\downarrow 0 \uparrow} $ is somewhat lower than expected, primarily due to leakage to the other levels via a weak interaction with the external field, even though it is far from resonance. This leakage is also apparent in Figure \ref{fig:holonomic_dynamics} and can potentially be reduced by employing pulse shaping techniques \cite{PhysRevLett.103.110501_DRAG}. \section*{Discussion} \label{sec:discussion} \noindent To summarize, we have proposed a realistic superconducting circuit, consisting of a qutrit and two qubits, for efficient implementations of multi-qubit quantum gates. By utilizing the second excited state of the qutrit in the middle position, we proposed simple schemes for generating a maximally entangled Bell state of the outer qubits and a GHZ state of the qubits and the qutrit. Furthermore, our construction can implement several important quantum gates, such as the \textsc{ccnot} (Toffoli) and \textsc{cswap} (Fredkin) gates. We note that with qubits only, the theoretically most efficient realizations of the Fredkin and Toffoli gates each require five two-qubit gates \cite{PhysRevA.88.010304_toffoli_minimum}.
State-of-the-art implementations of the Toffoli and CCZ gates using superconducting circuits have operation times ranging from $\SI{90}{\nano\second} $ (with poor fidelity) \cite{implementation_toffoli} to about $\SI{260}{\nano\second}$ \cite{PhysRevB.96.024504_toffoli_2017}. As for the Fredkin gate, we are not aware of an implementation with a superconducting circuit, but a hybrid scheme proposal has a gate execution time of $\SI{350}{\nano\second} $ \cite{Liu:18_hybrid_fredkin}. Using the current state-of-the-art superconducting systems requiring $\SI{40}{\nano\second} $ per two-qubit gate \cite{2019arXiv190302492R_fast_two_qubit,PhysRevLett.109.060501_IBM_two_qubit}, the total three-qubit gate time would be at least $ \SI{200}{\nano\second} $. For comparison, our proposed scheme can complete the three-qubit operations in $\SI{100}{\nano\second} $. Thus, our results exemplify the flexibility and usefulness of qutrits for very efficient realizations of three-qubit gates, and demonstrate the potential of our circuit to serve as a basis for more complicated superconducting circuits. Our scheme can in principle implement any controlled-controlled unitary operation on the qutrit. As an example, we have considered the double-controlled non-abelian holonomic quantum gate on a single qubit, which can be used to implement the three-qubit Deutsch gate in only two operations. This implementation is more efficient than current proposals with Rydberg atoms \cite{PhysRevApplied.9.051001}, while we are not aware of an implementation using superconducting circuits. We have implemented the holonomic gate as it is robust to parameter noise \cite{PhysRevA.70.042316}, owing to the geometric nature of this gate. The strategy of using such gates is known as holonomic quantum computation (HQC) \cite{HQC}, and the universal non-abelian HQC (NHQC) generalization has since been developed by Sjöqvist et al. \cite{Sjoqvist_NHQC}.
Here, three bare energy eigenstates are needed and are conveniently provided by the qutrit. A natural next step is to try to implement the two-qubit non-adiabatic holonomic quantum gate also suggested by Sjöqvist et al., requiring two nearest-neighbor qutrits. Such a gate could possibly be achieved by our circuit upon expanding the basis of one of the outer qubits. Together with the holonomic one-qubit gate, this would realize a universal set of holonomic gates. Realizing qutrit-qutrit interactions would also open the possibility of implementing higher-order effective spin chains, such as the spin-1 Haldane model \cite{PhysRevLett.50.1153_Haldane_original,Haldane_spin_gap_review}, especially if the coherence times of higher levels are further prolonged \cite{noise_higher_level}. Another possible use of qutrits and a circuit similar to the one proposed in this paper is the implementation of autonomous quantum error correction via engineered dissipation. By including three energy levels at the cost of a relatively small increase in circuit complexity, an impressive increase in transmon coherence time was predicted in Ref. \cite{PhysRevLett.116.150501_Kapit}. \section*{Methods} \label{subsec:methods} \noindent Consider the circuit of four connected superconducting islands with the lumped-element circuit shown in Figure \ref{fig:circuit}(b). After obtaining the Lagrangian of the corresponding effective lumped element model system in the node flux picture, we perform a suitable change of coordinates, primarily mixing the two central flux node coordinates: $ \psi_1 = \phi_a + \phi_b - 2\phi_c $, $ \psi_2 = \phi_a - \phi_b $ and $ \psi_3 = \phi_a + \phi_b - 2\phi_d $, where the $\phi$s are the flux node variables shown in the circuit (in natural units).
They represent the horizontal dipole mode between the left superconducting island and the two middle islands, the vertical dipole mode between the two middle islands, and the horizontal mode between the right island and the two middle islands, respectively. With this choice of coordinates, we obtain three effective nodes with the relevant degrees of freedom sequentially coupled via non-linear interactions. We truncate the outer nodes to the lowest two states, obtaining qubits, while for the middle node we instead choose to truncate its Hilbert space to the lowest three energy levels, obtaining a qutrit. All three degrees of freedom are in the transmon limit with the kinetic energy terms being much smaller than the potential energy terms. Finally, by transforming to a rotating frame and making a rotating wave approximation to eliminate the fast oscillating terms, we obtain an effective Hamiltonian for the system of two qubits each coupled to the qutrit (see Supplementary Note 1 for the full derivation). This Hamiltonian is given in \eqref{eq:H_full}. The drive line terms are added to the non-truncated Lagrangian as externally varied flux nodes, and a transformation to an appropriate frame rotating with the external field is performed. This transformation mixes the variables and, after additional rotating wave approximations, the desired external part of the Hamiltonian is obtained along with modifications of the energy parameters, i.e. the AC-Stark shifts, which can be used for tuning the qubits and qutrit in and out of resonance. We simulate the dissipative dynamics of the system numerically, with the relaxation and decoherence times set to $T_1 = \SI{31}{\micro\second}$ and $T_2 = \SI{35}{\micro\second}$, respectively, based on recent studies \cite{noise,noise_higher_level,PhysRevB.86.100506} (we use the Python QuTip package \cite{qutip}, and relaxations are implemented by the simple built-in collapse operator functionality).
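The Lindblad model behind such a dissipative simulation can be sketched without QuTiP. The following minimal Python example is our illustration, not the full circuit simulation: a single idle qubit (Hamiltonian set to zero in its rotating frame) with the quoted $T_1$ and $T_2$, whose Lindblad superoperator is built explicitly and propagated with a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Rates from the quoted coherence times (in microseconds); the pure-dephasing
# rate follows from 1/T2 = 1/(2 T1) + 1/T_phi.
T1, T2 = 31.0, 35.0
gamma1 = 1.0 / T1
gamma_phi = 1.0 / T2 - 1.0 / (2.0 * T1)

sm = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_minus: |1> -> |0>
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def liouvillian(H, c_ops):
    """Lindblad superoperator in the column-stacking convention:
    vec(A rho B) = kron(B.T, A) vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for c in c_ops:
        cdc = c.conj().T @ c
        L += np.kron(c.conj(), c) - 0.5 * (np.kron(I, cdc) + np.kron(cdc.T, I))
    return L

H = np.zeros((2, 2), dtype=complex)  # idle qubit in its rotating frame
L = liouvillian(H, [np.sqrt(gamma1) * sm, np.sqrt(gamma_phi / 2.0) * sz])

t = 10.0  # microseconds
rho0 = np.diag([0.0, 1.0]).astype(complex)  # qubit prepared in the excited state
rho_t = (expm(L * t) @ rho0.reshape(-1, order="F")).reshape(2, 2, order="F")
print(rho_t[1, 1].real)  # excited population, decays as exp(-t/T1)
```

QuTiP's master-equation solver implements this same Lindblad structure internally for the full multi-level system once the collapse operators are supplied.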
The parameters of the Hamiltonian used in the numerical simulations are all obtained from realistic experimental circuit parameters, as detailed in Supplementary Note 1, and are listed in Supplementary Note 2 for each implementation. \subsection*{Data availability} \noindent The data that support the findings of this study are available from N.T.Z. upon request. \begin{acknowledgments} We thank W. D. Oliver, S. Gustavsson, and M. Kj{\ae}rgaard from the Engineering Quantum Systems Group at MIT for their kind hospitality and for extended discussions on superconducting circuits. This work was supported by the Carlsberg Foundation and the Danish Council for Independent Research under the DFF Sapere Aude program. \end{acknowledgments} \subsection*{Author contributions} The circuit was designed by L.B.K. and N.J.S.L., and analyzed by L.B.K. and T.B. The numerical calculations were performed by T.B. with suggestions from D.P., C.K.A., N.J.S.L. and N.T.Z. The initial draft of the paper was written by T.B., D.P. and N.T.Z. All authors contributed to the revisions that led to the final version. \section*{Supplementary Note 1: Derivation of the Hamiltonian for the circuit} \begin{figure*}[htbp] \centering \includegraphics[width=0.8\textwidth]{Circuit_big.pdf} \caption{The superconducting circuit and the corresponding parameters describing the properties of the components. Also indicated are the nodes and the corresponding fluxes.} \label{fig:big_circuit} \end{figure*} We are considering the effective circuit in Supplementary Fig. \ref{fig:big_circuit} and want to show that the low-energy degrees of freedom of this circuit constitute three qubit (qutrit) degrees of freedom with a Heisenberg XXZ-type interaction. We start by writing down the Lagrangian of the system in the node flux picture, with the closure branches being the two lower horizontal branches (each splitting into three branches with different circuit elements).
The resulting Lagrangian is: \begin{equation} \label{eq:L_start} \begin{split} L &= \frac{C_1}{2}\left( \dot{\phi}_a - \dot{\phi}_c \right)^2 + \frac{C_1}{2}\left( \dot{\phi}_b - \dot{\phi}_c \right)^2 - \frac{1}{2L_1}\left( \phi_a - \phi_c \right)^2 - \frac{1}{2\tilde{L}_1}\left( \phi_b - \phi_c + \Phi_3 \right)^2 \\ &\quad + \frac{C_2}{2}\left( \dot{\phi}_a - \dot{\phi}_d \right)^2 + \frac{C_2}{2}\left( \dot{\phi}_b - \dot{\phi}_d \right)^2 - \frac{1}{2L_2}\left( \phi_a - \phi_d \right)^2 - \frac{1}{2\tilde{L}_2}\left( \phi_b - \phi_d + \Phi_6 \right)^2 \\ & \quad + E_{J1}\big[ \cos(\phi_a - \phi_c + \Phi_1) + \cos(\phi_b - \phi_c + \Phi_2) \big] \\ & \quad + E_{J2}\big[ \cos(\phi_a - \phi_d + \Phi_4) + \cos(\phi_b - \phi_d + \Phi_5) \big] \\ & \quad + E_{J_{q1}}\cos(\phi_c) + E_{J_{q2}}\cos(\phi_a - \phi_b) + E_{J_{q3}}\cos(\phi_d). \end{split} \end{equation} Defining $\Phi_{\Sigma1} = \Phi_1 + \Phi_2 $ and assuming $ \Phi_1 - \Phi_2 = 0 $, we can rewrite the third line using trigonometric identities: \begin{equation} \label{eq:using_trig_identities} 2E_{J1}\cos\left( \frac{\phi_a + \phi_b - 2\phi_c + \Phi_{\Sigma1}}{2} \right) \cos \left( \frac{\phi_a - \phi_b}{2} \right). \end{equation} We will stay in the transmon regime, where the potential terms are much larger than the kinetic terms. This means that we can assume the system to be close to the potential minimum, which is approximated to lowest order by a harmonic oscillator. Thus, we will later rewrite the Hamiltonian in terms of the bosonic step operators related to the harmonic part of the Hamiltonian. We thereafter employ a rotating wave approximation, removing all terms with odd dependence on the node flux variables since these will be energy non-conserving. Specifically, these terms will, after the truncation to the lowest energy levels, represent spontaneous excitation terms.
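Both trigonometric manipulations (the sum-to-product rewriting used above, and the angle-addition split whose odd sine piece is later discarded by the rotating wave approximation) can be verified symbolically. The following sympy check is ours, purely as a consistency test:

```python
import sympy as sp

x, y, Phi = sp.symbols('x y Phi', real=True)

# Sum-to-product, cos(2x) + cos(2y) = 2 cos(x+y) cos(x-y): with
# 2x = phi_a - phi_c + Phi_1 and 2y = phi_b - phi_c + Phi_2 this is
# exactly the rewriting of the third line of the Lagrangian.
sum_to_product = sp.cos(2*x) + sp.cos(2*y) - 2*sp.cos(x + y)*sp.cos(x - y)
expanded = sp.expand(sp.expand_trig(sum_to_product))
expanded = expanded.subs({sp.sin(x)**2: 1 - sp.cos(x)**2,
                          sp.sin(y)**2: 1 - sp.cos(y)**2})
assert sp.expand(expanded) == 0

# Angle addition on the half-angle cosine: the sine piece has odd
# dependence on the node fluxes and is the part the RWA removes.
split = sp.cos((x + Phi)/2) \
        - (sp.cos(Phi/2)*sp.cos(x/2) - sp.sin(Phi/2)*sp.sin(x/2))
assert sp.expand(sp.expand_trig(split)) == 0
print("trigonometric identities verified")
```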
Expanding the first cosine in \eqref{eq:using_trig_identities} with the angle-addition identity and dropping the resulting sine term, which has odd dependence on the node fluxes and is therefore removed by the rotating wave approximation, we can further simplify the expression to the following form: \begin{equation} 2E_{J1}\cos\left( \frac{\Phi_{\Sigma1}}{2} \right)\cos\left( \frac{\phi_a + \phi_b - 2\phi_c}{2} \right)\cos\left( \frac{\phi_a - \phi_b}{2} \right). \end{equation} Making the same kind of definition for the fourth term, i.e. $ \Phi_{\Sigma2} = \Phi_4 + \Phi_5 $ and assuming $ \Phi_4 - \Phi_5 = 0 $, we can make the same simplification there. We can also ignore the terms dependent on $ \Phi_3 $ and $\Phi_6$ as these will again only be irrelevant offset terms or have an odd dependence on the node fluxes. The Lagrangian becomes \begin{equation*} \begin{split} L &= \frac{C_1}{2}\left( \dot{\phi}_a - \dot{\phi}_c \right)^2 + \frac{C_1}{2}\left( \dot{\phi}_b - \dot{\phi}_c \right)^2 - \frac{1}{2L_1}\left( \phi_a - \phi_c \right)^2 - \frac{1}{2\tilde{L}_1}\left( \phi_b - \phi_c \right)^2 \\ &\quad + \frac{C_2}{2}\left( \dot{\phi}_a - \dot{\phi}_d \right)^2 + \frac{C_2}{2}\left( \dot{\phi}_b - \dot{\phi}_d \right)^2 - \frac{1}{2L_2}\left( \phi_a - \phi_d \right)^2 - \frac{1}{2\tilde{L}_2}\left( \phi_b - \phi_d \right)^2 \\ & \quad + 2E_{J1}\cos\left( \frac{\Phi_{\Sigma1}}{2} \right)\cos\left( \frac{\phi_a + \phi_b - 2\phi_c}{2} \right)\cos\left( \frac{\phi_a - \phi_b}{2} \right) \\ & \quad + 2E_{J2}\cos\left( \frac{\Phi_{\Sigma2}}{2} \right)\cos\left( \frac{\phi_a + \phi_b - 2\phi_d}{2} \right)\cos\left( \frac{\phi_a - \phi_b}{2} \right) \\ & \quad + E_{J_{q1}}\cos(\phi_c) + E_{J_{q2}}\cos(\phi_a - \phi_b) + E_{J_{q3}}\cos(\phi_d). \end{split} \end{equation*} The next step is defining a suitable set of variables.
Inspired by the way the $\phi_i$'s enter the cosines above, we choose the following: \begin{equation} \begin{split} \psi_1 &= \phi_a + \phi_b - 2\phi_c \\ \psi_2 &= \phi_a - \phi_b\\ \psi_3 &= \phi_a + \phi_b - 2\phi_d \\ \psi_\text{CM} &= \phi_a + \phi_b\ . \end{split} \end{equation} In terms of the new variables after expansion of the brackets and collection of terms, the Lagrangian is: \begin{align*} L &= \frac{C_1}{4} {\dot{\psi}_1}^{\ 2} - \left( \frac{1}{8L_1} + \frac{1}{8\tilde{L}_1} \right){\psi_1}^2 + E_{J_{q1}}\cos\left( \frac{\psi_1 - \psi_{CM}}{2} \right) \\ &\quad + \left( \frac{C_1}{4} + \frac{C_2}{4} \right)\dot{\psi}_2^{\ 2} - \left( \frac{1}{8L_1} + \frac{1}{8\tilde{L}_1} + \frac{1}{8L_2} + \frac{1}{8\tilde{L}_2} \right){\psi_2}^2 + E_{J_{q2}} \cos\left( \psi_2 \right)\\ &\quad + \frac{C_2}{4}{\dot{\psi}_3}^{\ 2} - \left( \frac{1}{8L_2} + \frac{1}{8\tilde{L}_2} \right){\psi_3}^2 + E_{J_{q3}}\cos\left( \frac{\psi_3 - \psi_{CM}}{2} \right) \\ &\quad - \left( \frac{1}{8L_1} - \frac{1}{8\tilde{L}_1} \right)\psi_1\psi_2 + 2E_{J1}\cos\left( \frac{\Phi_{\Sigma1}}{2} \right)\cos\left( \frac{\psi_1}{2} \right)\cos\left( \frac{\psi_2}{2} \right)\\ &\quad - \left( \frac{1}{8L_2} - \frac{1}{8\tilde{L}_2} \right)\psi_2\psi_3 + 2E_{J2}\cos\left( \frac{\Phi_{\Sigma2}}{2} \right)\cos\left( \frac{\psi_2}{2} \right)\cos\left( \frac{\psi_3}{2} \right). \end{align*} The conjugate momenta can be found from the usual definition $p_i = \frac{\partial L}{\partial \dot{\psi}_i} $: \begin{equation} \begin{split} p_1 &= \frac{C_1}{2}\dot{\psi}_1 \\ p_2 &= \left( \frac{C_1}{2} + \frac{C_2}{2} \right)\dot{\psi}_2 \\ p_3 &= \frac{C_2}{2}\dot{\psi}_3 \\ p_{_{CM}} &= 0 \ . \end{split} \end{equation} Notice that the conjugate momentum of $\psi_{CM}$ is zero, and thus the variable can be seen as purely a constraint variable without a kinetic term. We will thus ignore $\psi_{CM}$ from now on. 
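This can be checked symbolically: with the assumed equal capacitances on each side, the kinetic energy contains no $\dot{\psi}_{CM}$ at all, so its conjugate momentum vanishes identically. A short verification sketch (ours, not part of the original derivation):

```python
import sympy as sp

C1, C2 = sp.symbols('C_1 C_2', positive=True)
d1, d2, d3, dCM = sp.symbols('dpsi_1 dpsi_2 dpsi_3 dpsi_CM', real=True)

# Invert the coordinate change (the dotted variables obey the same relations):
# psi_1 = a+b-2c, psi_2 = a-b, psi_3 = a+b-2d, psi_CM = a+b
da = (dCM + d2) / 2
db = (dCM - d2) / 2
dc = (dCM - d1) / 2
dd = (dCM - d3) / 2

# kinetic part of the Lagrangian in the original node fluxes
T = (C1/2*(da - dc)**2 + C1/2*(db - dc)**2
     + C2/2*(da - dd)**2 + C2/2*(db - dd)**2)
T_expected = C1/4*d1**2 + (C1 + C2)/4*d2**2 + C2/4*d3**2

assert sp.expand(T - T_expected) == 0
assert not sp.expand(T).has(dCM)   # hence p_CM = dL/d(dpsi_CM) = 0
print("kinetic energy independent of psi_CM")
```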
Now performing a Legendre transformation to the Hamiltonian via the relation $ H = \sum_i \dot{\psi}_i p_i - L $, we obtain: \begin{align*} H &= \frac{1}{C_1}{p_1}^2 + \left( \frac{1}{8L_1} + \frac{1}{8\tilde{L}_1} \right){\psi_1}^2 - E_{J_{q1}}\cos\left( \frac{\psi_1}{2} \right) \\ &\quad + \frac{1}{C_1+C_2}{p_2}^2 + \left( \frac{1}{8L_1} + \frac{1}{8\tilde{L}_1} + \frac{1}{8L_2} + \frac{1}{8\tilde{L}_2} \right){\psi_2}^2 - E_{J_{q2}} \cos\left( \psi_2 \right)\\ &\quad + \frac{1}{C_2}{p_3}^2 + \left( \frac{1}{8L_2} + \frac{1}{8\tilde{L}_2} \right){\psi_3}^2 - E_{J_{q3}}\cos\left( \frac{\psi_3}{2} \right) \\ &\quad + \left( \frac{1}{8L_1} - \frac{1}{8\tilde{L}_1} \right)\psi_1\psi_2 - 2E_{J1}\cos\left( \frac{\Phi_{\Sigma1}}{2} \right)\cos\left( \frac{\psi_1}{2} \right)\cos\left( \frac{\psi_2}{2} \right)\\ &\quad + \left( \frac{1}{8L_2} - \frac{1}{8\tilde{L}_2} \right)\psi_2\psi_3 - 2E_{J2}\cos\left( \frac{\Phi_{\Sigma2}}{2} \right)\cos\left( \frac{\psi_2}{2} \right)\cos\left( \frac{\psi_3}{2} \right). \end{align*} The cosines are now expanded to fourth order in the flux variables: \begin{equation} \begin{split} \cos(x) &\simeq 1 - \frac{1}{2}x^2 + \frac{1}{24}x^4 \\ \cos\left( \frac{x}{2} \right) &\simeq 1 - \frac{1}{8}x^2 + \frac{1}{384}x^4 \\ \cos\left( \frac{x_1}{2} \right)\cos\left( \frac{x_2}{2} \right) &\simeq 1 - \frac{1}{8}{x_1}^2 + \frac{1}{384}{x_1}^4 - \frac{1}{8}{x_2}^2 + \frac{1}{384}{x_2}^4 + \frac{1}{64}{x_1}^2{x_2}^2\ . \end{split} \end{equation} This approximation neglects sixth-order terms and higher. Using first-order perturbation theory on the sixth-order term from $\cos\left( \psi_2 \right)$ above, we get a correction to the final ground state energy of the corresponding qubit of $ E^{(1)}/(2\pi) \simeq \SI{31}{\kilo\hertz} $. This can safely be ignored compared to the usual transmon energies on the order of $\SI{10}{\giga\hertz} $.
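The quoted expansions can be reproduced mechanically. A short sympy check (ours, not part of the derivation) confirming the coefficients, and showing that every discarded cross term is of sixth order or higher:

```python
import sympy as sp

x, x1, x2 = sp.symbols('x x_1 x_2', real=True)

# fourth-order expansions quoted in the text
assert sp.series(sp.cos(x), x, 0, 6).removeO() == 1 - x**2/2 + x**4/24
assert sp.series(sp.cos(x/2), x, 0, 6).removeO() == 1 - x**2/8 + x**4/384

# product of two half-angle cosines, keeping terms up to fourth combined order
prod = sp.expand((1 - x1**2/8 + x1**4/384) * (1 - x2**2/8 + x2**4/384))
kept = 1 - x1**2/8 + x1**4/384 - x2**2/8 + x2**4/384 + x1**2*x2**2/64
dropped = sp.expand(prod - kept)

# everything not kept is sixth order or higher
assert dropped == -x1**2*x2**4/3072 - x1**4*x2**2/3072 + x1**4*x2**4/147456
print("fourth-order expansions verified")
```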
Also, since we make sure to stay in the transmon limit, with the potential terms much larger than the kinetic terms, the system always stays near very small values of the transmon phase drops, which are the physical interpretation of the flux variables $\psi_i$. Up to irrelevant constant terms, this gives \begin{align*} H &= \frac{1}{C_1}{p_1}^2 + \alpha_1 {\psi_1}^2 - \beta_1 {\psi_1}^4 \\ &\quad +\frac{1}{C_1+C_2}{p_2}^2 + \alpha_2{\psi_2}^2 - \beta_2 {\psi_2}^4 \\ &\quad +\frac{1}{C_2}{p_3}^2 + \alpha_3{\psi_3}^2 - \beta_3{\psi_3}^4 \\ &\quad +\left( \frac{1}{8L_1} - \frac{1}{8\tilde{L}_1} \right)\psi_1\psi_2 - \frac{E_{J1}}{32}\cos\left( \frac{\Phi_{\Sigma1}}{2} \right){\psi_1}^2 {\psi_2}^2 \\ &\quad +\left( \frac{1}{8L_2} - \frac{1}{8\tilde{L}_2} \right)\psi_2\psi_3 - \frac{E_{J2}}{32}\cos\left( \frac{\Phi_{\Sigma2}}{2} \right){\psi_2}^2 {\psi_3}^2, \end{align*} where \begin{equation} \label{eq:alpha_beta} \begin{split} \alpha_1 &= \frac{1}{8L_1} + \frac{1}{8\tilde{L}_1} + \frac{E_{J_{q1}}}{8} + \frac{E_{J1}}{4}\cos\left( \frac{\Phi_{\Sigma1}}{2} \right) \\ \beta_1 &= \frac{E_{J_{q1}}}{384} + \frac{E_{J1}}{192}\cos\left( \frac{\Phi_{\Sigma1}}{2} \right)\\ \alpha_2 &= \frac{1}{8L_1} + \frac{1}{8\tilde{L}_1} + \frac{1}{8L_2} + \frac{1}{8\tilde{L}_2} + \frac{E_{J_{q2}}}{2} + \frac{E_{J1}}{4}\cos\left( \frac{\Phi_{\Sigma1}}{2} \right) + \frac{E_{J2}}{4}\cos\left( \frac{\Phi_{\Sigma2}}{2} \right) \\ \beta_2 &= \frac{E_{J_{q2}}}{24} + \frac{E_{J1}}{192}\cos\left( \frac{\Phi_{\Sigma1}}{2} \right) + \frac{E_{J2}}{192}\cos\left( \frac{\Phi_{\Sigma2}}{2} \right) \\ \alpha_3 &= \frac{1}{8L_2} + \frac{1}{8\tilde{L}_2} + \frac{E_{J_{q3}}}{8} + \frac{E_{J2}}{4}\cos\left( \frac{\Phi_{\Sigma2}}{2} \right) \\ \beta_3 &= \frac{E_{J_{q3}}}{384} + \frac{E_{J2}}{192}\cos\left( \frac{\Phi_{\Sigma2}}{2} \right).
\end{split} \end{equation} Taking the anharmonicity (the fourth-order terms) as a perturbation, we now choose the bosonic raising and lowering operators related to the harmonic parts of the Hamiltonian (the $p^2$ and $\psi^2$ terms) above as usual. This perturbation is essential for the later truncation, since the resulting anharmonicity of the energy spacing between the levels allows us to consider only the lowest states: \begin{equation} \begin{split} \psi_1 &= \frac{1}{(4\alpha_1C_1)^\frac{1}{4}}\left( b_1^\dagger + b_1 \right) \\ \psi_2 &= \frac{1}{(4\alpha_2(C_1+C_2))^\frac{1}{4}}\left( b_2^\dagger + b_2 \right) \\ \psi_3 &= \frac{1}{(4\alpha_3C_2)^\frac{1}{4}}\left( b_3^\dagger + b_3 \right). \end{split} \end{equation} We end up with the rewritten Hamiltonian \begin{equation} \begin{aligned} \label{eq:H_b} H &= \sqrt{\frac{4\alpha_1}{C_1}}\left( b_1^\dagger b_1 + \frac{1}{2} \right) - \frac{\beta_1}{4\alpha_1C_1}\left( b_1^\dagger + b_1 \right)^4 \\ &\quad + \sqrt{\frac{4\alpha_2}{C_1+C_2}}\left( b_2^\dagger b_2 + \frac{1}{2} \right) - \frac{\beta_2}{4\alpha_2(C_1+C_2)}\left( b_2^\dagger + b_2 \right)^4 \\ &\quad +\sqrt{\frac{4\alpha_3}{C_2}}\left( b_3^\dagger b_3 + \frac{1}{2} \right) - \frac{\beta_3}{4\alpha_3C_2}\left( b_3^\dagger + b_3 \right)^4 \\ &\quad + \left( \frac{1}{8L_1} - \frac{1}{8\tilde{L}_1} \right)\frac{1}{2(\alpha_1\alpha_2C_1(C_1+C_2))^\frac{1}{4}}\left( b_1^\dagger + b_1 \right)\left( b_2^\dagger + b_2 \right) \\ &\quad - \frac{E_{J1}}{128}\cos\left( \frac{\Phi_{\Sigma1}}{2} \right)\frac{1}{\sqrt{\alpha_1\alpha_2C_1(C_1+C_2)}}\left( b_1^\dagger + b_1 \right)^2 \left( b_2^\dagger + b_2 \right)^2\\ &\quad + \left( \frac{1}{8L_2} - \frac{1}{8\tilde{L}_2} \right)\frac{1}{2(\alpha_2\alpha_3C_2(C_1+C_2))^\frac{1}{4}}\left( b_2^\dagger + b_2 \right)\left( b_3^\dagger + b_3 \right) \\ &\quad - \frac{E_{J2}}{128}\cos\left( \frac{\Phi_{\Sigma2}}{2} \right)\frac{1}{\sqrt{\alpha_2\alpha_3C_2(C_1+C_2)}}\left( b_2^\dagger + b_2 \right)^2 \left(
b_3^\dagger + b_3 \right)^2. \end{aligned} \end{equation} We could now in principle map this to a spin model by using the anharmonicity to truncate to the lowest two/three energy eigenstates (e.g. $ b_i^\dagger + b_i \mapsto \sigma_i^x$ for a qubit) and after using the rotating wave approximation end up with a Hamiltonian with the desired form. This would have the drawback of being static, i.e. once the circuit is built to certain specifications the experimental parameters are fixed, and the energy levels and coupling strengths cannot be adjusted significantly afterwards. We will introduce a dynamical tuning by adding an external driving field, effectively mixing the first and second excited state of the middle ($\psi_2$) degree of freedom and thereby changing the energy levels and coupling strengths. \subsection{Adding an effective external energy level tuning} We now imagine control lines connecting to the nodes $a$ and $b$ as in the main text. These can then be used to drive the middle degree of freedom $\psi_2$ using external fields. Specifically, we let control line 5 connect to the node with flux $\phi_a$ in Supplementary Fig. \ref{fig:big_circuit} through the capacitance $C_\text{ext}$ and drive an external field $\phi_\text{ext}$. Similarly, we apply an external field $\phi_{-\text{ext}} $ through control line 6 connected to the node flux $\phi_b $ through the same capacitance $C_\text{ext} $. The following extra term will appear in the Lagrangian: \begin{equation} L_\text{ext} = \frac{C_\text{ext}}{2}\left( \dot{\phi}_a - \dot{\phi}_\text{ext} \right)^2 + \frac{C_\text{ext}}{2}\left( \dot{\phi}_b - \dot{\phi}_{-\text{ext}} \right)^2 . \end{equation} This can be rewritten to \begin{equation*} L_\text{ext} = \frac{C_\text{ext}}{4}\left[ \left( \dot{\phi}_a + \dot{\phi}_b - \dot{\phi}_{-\text{ext}} - \dot{\phi}_\text{ext} \right)^2 + \left( \dot{\phi}_a - \dot{\phi}_b + \dot{\phi}_{-\text{ext}} - \dot{\phi}_\text{ext} \right)^2 \right].
\end{equation*} Assuming $\phi_\text{ext} = - \phi_{-\text{ext}} = A_\text{ext} \sin(\omega_\text{ext} t)$ and transforming to the $\psi$-coordinates, this reduces to \begin{align} L_\text{ext} &= \frac{C_\text{ext}}{4}\left[\dot{\psi}_{CM}^{\quad 2} + \left( \dot{\psi}_2 - 2A_\text{ext}\omega_\text{ext}\cos(\omega_\text{ext} t) \right)^2\right] \nonumber\\ &= \frac{C_\text{ext}}{4}\dot{\psi}_{CM}^{\quad 2} + \frac{C_\text{ext}}{4}{\dot{\psi}_2}^2 + C_\text{ext} A_\text{ext}^2 \omega_\text{ext}^2\cos(\omega_\text{ext} t)^2 - C_\text{ext} \dot{\psi}_2 A_\text{ext} \omega_\text{ext} \cos(\omega_\text{ext} t). \end{align} The first term here is an apparently problematic kinetic term for what has so far been a constraint variable. However, this mode can be constructed to have an energy-level spacing very far from those of the other degrees of freedom, and $\psi_{CM}$ can thus still be ignored in spite of this term. The second term is another kinetic term for the $\psi_2$-variable, the third term is an offset term, and the fourth and last term is an interaction term between $\psi_2$ and the external field. It is this last term that will be useful for driving transitions between, and hence coupling of, the first and second excited levels of $\psi_2$. This will allow us to tune the positions of the two levels through an avoided crossing, depending on how much we mix the states. Including the above addition, the Lagrangian related purely to the $\psi_2$-degree of freedom is the following: \begin{equation*} L_2 = \frac{C_1 + C_2 + C_\text{ext}}{4}\dot{\psi}_2^{\ 2} - \left( \frac{1}{8L_1} + \frac{1}{8\tilde{L}_1} + \frac{1}{8L_2} + \frac{1}{8\tilde{L}_2} \right){\psi_2}^2 + E_{J_{q2}}\cos(\psi_2) - C_\text{ext} A_\text{ext} \omega_\text{ext} \cos(\omega_\text{ext} t)\dot{\psi}_2, \end{equation*} with all other terms being either purely related to $\psi_1$ or $\psi_3$ or interaction terms.
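The reduction of $L_\text{ext}$ to the $\psi$-coordinates is another identity that is quick to verify symbolically; the following short sympy check is ours:

```python
import sympy as sp

t = sp.symbols('t', real=True)
Cext, A, w = sp.symbols('C_ext A_ext omega_ext', positive=True)
da, db = sp.symbols('dphi_a dphi_b', real=True)  # node-flux velocities

# phi_ext = -phi_-ext = A_ext sin(w t), so the drive velocities are +-A_ext w cos(w t)
dext = sp.diff(A * sp.sin(w * t), t)

L_ext = Cext/2*(da - dext)**2 + Cext/2*(db + dext)**2

dCM, d2 = da + db, da - db  # psi_CM-dot and psi_2-dot
L_claimed = Cext/4*dCM**2 + Cext/4*(d2 - 2*A*w*sp.cos(w*t))**2

assert sp.expand(L_ext - L_claimed) == 0
print("external-drive Lagrangian reduction verified")
```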
The corresponding conjugate momentum is also altered slightly: \begin{equation} p_2 = \frac{C_1+C_2+C_\text{ext}}{2}\dot{\psi}_2 - C_\text{ext} A_\text{ext} \omega_\text{ext} \cos(\omega_\text{ext} t), \end{equation} thus \begin{equation} \dot{\psi}_2 = \frac{2}{C_1 + C_2+C_\text{ext}}\big( p_2 + C_\text{ext} A_\text{ext} \omega_\text{ext} \cos(\omega_\text{ext} t) \big). \end{equation} The related Hamiltonian is therefore, ignoring any offset terms: \begin{align*} H_2 &= \frac{{p_2}^2}{C_1+C_2+C_\text{ext}} + \frac{2C_\text{ext} A_\text{ext} \omega_\text{ext} \cos(\omega_\text{ext} t)}{C_1+C_2+C_\text{ext}}p_2 \\ &\quad + \left( \frac{1}{8L_1} + \frac{1}{8\tilde{L}_1} + \frac{1}{8L_2} + \frac{1}{8\tilde{L}_2} \right){\psi_2}^2 - E_{J_{q2}}\cos(\psi_2). \end{align*} Performing the same expansions as before, the addition of the extra capacitance and external fields gives a few extra $\psi_2$-terms from the interaction terms. Focusing only on the resulting terms containing $\psi_2$ or $p_2$, we get: \begin{equation} H_2 = \frac{1}{C_1 + C_2 + C_\text{ext}}{p_2}^2 + \alpha_2 {\psi_2}^2 - \beta_2 {\psi_2}^4 + \frac{2C_\text{ext} A_\text{ext} \omega_\text{ext} \cos(\omega_\text{ext} t)}{C_1 + C_2 + C_\text{ext}}p_2, \end{equation} where $\alpha_2$ and $\beta_2$ are defined in \eqref{eq:alpha_beta}.
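The Legendre transformation of the driven mode is quick to verify symbolically. In this sketch (ours), the potential is kept quadratic and the time-dependent drive factor is frozen into a constant $k = C_\text{ext} A_\text{ext} \omega_\text{ext}\cos(\omega_\text{ext} t)$:

```python
import sympy as sp

Csum, alpha2, k = sp.symbols('C_sum alpha_2 k', positive=True)
v, psi2, p2 = sp.symbols('v psi_2 p_2', real=True)

# L_2 with the drive term; k stands for C_ext A_ext w_ext cos(w_ext t),
# treated as a frozen coefficient, and C_sum = C_1 + C_2 + C_ext.
U = alpha2 * psi2**2                      # quadratic potential (quartic term omitted)
L2 = Csum/4 * v**2 - U - k*v              # v = dpsi_2/dt

p_def = sp.diff(L2, v)                    # p_2 = C_sum v / 2 - k
v_of_p = sp.solve(sp.Eq(p2, p_def), v)[0]
H2 = sp.expand(p2 * v_of_p - L2.subs(v, v_of_p))

# matches the text up to the constant offset k^2/C_sum, which is dropped there
H_claimed = p2**2/Csum + 2*k*p2/Csum + U + k**2/Csum
assert sp.expand(H2 - H_claimed) == 0
print("Legendre transform of the driven mode verified")
```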
We once again introduce the bosonic step-operators related to the harmonic part of $H_2$: \begin{equation} \begin{split} \psi_2 &= \frac{1}{(4\alpha_2(C_1+C_2+C_\text{ext}))^\frac{1}{4}}\left( b_2^\dagger + b_2 \right) \\ p_2 &= i\frac{(4\alpha_2(C_1+C_2+C_\text{ext}))^\frac{1}{4}}{2} \left( b_2^\dagger - b_2 \right), \end{split} \end{equation} which makes \begin{equation} \label{eq:H_2_plus_external} H_2 = T_1 b_2^\dagger b_2 - T_2\left( b_2^\dagger + b_2 \right)^4 + iT_\text{ext}\cos \left( \omega_\text{ext} t \right)\left( b_2^\dagger - b_2 \right), \end{equation} where \begin{equation} \begin{split} T_1 &= \sqrt{\frac{4\alpha_2}{C_1+C_2+C_\text{ext}}} \\ T_2 &= \frac{\beta_2}{4\alpha_2(C_1+C_2+C_\text{ext})} \\ T_\text{ext} &= C_\text{ext} A_\text{ext} \omega_\text{ext} \frac{(4\alpha_2(C_1 + C_2 + C_\text{ext}))^{\frac{1}{4}}}{C_1 + C_2 + C_\text{ext}}. \end{split} \end{equation} We wish to diagonalize this Hamiltonian to investigate the effect of $T_\text{ext}$ on the spectrum. \subsection{Truncating the middle degree of freedom} We will now investigate the dynamical tuning of the spectrum by first truncating the ``internal'' Hamiltonian for the $\psi_2$ degree of freedom, corresponding to setting $A_\text{ext} = 0$, to the three lowest energy levels, i.e. finding the qutrit eigenstates. We will then add the external field and transform to a frame rotating with the external field, wherein we can see the effective mixing of the qutrit eigenstates. Lastly, we shift basis and use these driven states in the rotating frame as our qutrit eigenstates. We start by diagonalizing the internal Hamiltonian of the middle degree of freedom ``$H_{2,0}$'', i.e.
the first two terms in \eqref{eq:H_2_plus_external}: In the basis of the three lowest simple harmonic oscillator states, which is chosen since we wish to end up with a qutrit in the end, we represent (up to irrelevant offset terms proportional to the identity) \begin{align}\label{eq:b_bd} b_2^\dagger \sim \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & \sqrt{2} & 0 \end{pmatrix}, \qquad b_2 \sim \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & \sqrt{2} \\ 0 & 0 & 0 \end{pmatrix}. \end{align} This truncation is also done for all terms in \eqref{eq:H_2_plus_external}, after using the canonical commutation relation $[b,b^\dagger]=1$ to transform to normal ordering form: \begin{align*} b_2^\dagger b_2 \sim \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \quad, \qquad \left( b_2^\dagger + b_2 \right)^4 \sim \begin{pmatrix} 0 & 0 & 6\sqrt{2} \\ 0 & 12 & 0 \\ 6\sqrt{2} & 0 & 36 \end{pmatrix}, \end{align*} and \begin{align*} b_2^\dagger - b_2 \sim \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & -\sqrt{2} \\ 0 & \sqrt{2} & 0 \end{pmatrix} \end{align*} Inserting this in the first two terms in equation \eqref{eq:H_2_plus_external}, we get the Hamiltonian \begin{align*} H_{2,0} \sim \begin{pmatrix} 0 & 0 & -6\sqrt{2}T_2 \\ 0 & T_1 -12T_2 & 0 \\ -6\sqrt{2}T_2 & 0 & 2T_1 - 36T_2 \end{pmatrix}, \end{align*} which has the eigenenergies: \begin{equation} \begin{split} E_0 &= T_1 - 18T_2 - \sqrt{(T_1 - 18T_2)^2 + 72 {T_2}^2} \\ E_1 &= T_1 - 12T_2 \\ E_2 &= T_1 - 18T_2 + \sqrt{(T_1 - 18T_2)^2 + 72 {T_2}^2} \end{split} \end{equation} with the corresponding eigenstates: \begin{equation} \begin{split} \ket{\tilde{0}} &= \frac{1}{\sqrt{72{T_2}^2 + {E_0}^2}} \begin{pmatrix} 6\sqrt{2}T_2 \\ 0 \\ -E_0 \end{pmatrix} \\ \ket{\tilde{1}} &= \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \\ \ket{\tilde{2}} &= \frac{1}{\sqrt{72{T_2}^2 + {E_2}^2}} \begin{pmatrix} -6\sqrt{2}T_2 \\ 0 \\ E_2 \end{pmatrix}. 
\end{split} \end{equation} Now turning on the full external field, the full Hamiltonian $H_2$ can be expressed as \begin{align}\label{eq:H_2_tilde_nonrotating} H_2 = E_0 \ketbra{\tilde{0}} + E_1 \ketbra{\tilde{1}} + E_2 \ketbra{\tilde{2}} + \frac{T_\text{ext}}{2}\left( \e{i\omega_\text{ext} t} + \e{-i\omega_\text{ext} t} \right) \begin{pmatrix} 0 & -i & 0 \\ i & 0 & -\sqrt{2}i \\ 0 & \sqrt{2}i & 0 \end{pmatrix}, \end{align} where the first three terms contain the bare qutrit eigenstates and the last matrix is written in the old basis $ (\ket{0},\ket{1},\ket{2}) $. The outer degrees of freedom ($1,3$) are truncated to the lowest two levels (i.e. effective qubits) as is standard, where $ \ket{\uparrow} $ and $ \ket{\downarrow} $ will denote the excited and ground state, respectively. We now switch to a rotating frame corresponding to the external field, which means performing a unitary transformation with the operator: \begin{equation} U_{\text{ext}} = U_{\text{ext},1}U_{\text{ext},2}U_{\text{ext},3}, \end{equation} where \begin{equation} U_{\text{ext},\alpha} = \text{e}^{i\omega_\text{ext}t/2} \ketbra{\downarrow} + \text{e}^{-i\omega_\text{ext}t/2} \ketbra{\uparrow}, \end{equation} for $ \alpha = 1,3 $, and \begin{equation} U_{\text{ext},2}(t) = \text{e}^{i3\omega_\text{ext}t/2} \ketbra{\tilde{0}} + \text{e}^{i\omega_\text{ext}t/2} \ketbra{\tilde{1}} + \text{e}^{-i\omega_\text{ext}t/2} \ketbra{\tilde{2}}. \end{equation} We are trying to obtain a mixing of the first and second energy levels and therefore assume that we can tune $\omega_\text{ext}$ so that it is close to $E_2 - E_1$ and far from $E_1-E_0$ and $E_2-E_0$, effectively enabling us to perform the two-level approximation. A transformation of the Hamiltonian can now be performed according to the standard transformation rule \begin{equation} \label{eq:H_transformation_rotating} H \rightarrow H^R = U_{\text{ext}}^\dagger H\, U_{\text{ext}} + i \dv{U_{\text{ext}}^\dagger}{t} U_{\text{ext}}.
\end{equation} Since our Hamiltonian is quite big, we take this one part at a time. Starting with the terms purely related to the $\psi_2$ degree of freedom, it is a good idea to look at the matrix elements from the last factor in \eqref{eq:H_2_tilde_nonrotating} when performing this transformation: \begin{align} \bra{\tilde{1}}\left( \ketbra{1}{0} -\sqrt{2} \ketbra{1}{2} \right)\ket{\tilde{2}} &= -\frac{\sqrt{2}(6T_2 + E_2)}{\sqrt{(6\sqrt{2}T_2)^2 + {E_2}^2}} \\ \bra{\tilde{2}}\left( -\ketbra{0}{1} +\sqrt{2} \ketbra{2}{1} \right)\ket{\tilde{1}} &= \frac{\sqrt{2}(6T_2 + E_2)}{\sqrt{(6\sqrt{2}T_2)^2 + {E_2}^2}}, \end{align} where the rest are either irrelevant or zero. So, we get in the rotating frame \begin{align} H_2 &= E_0 \ketbra{ \tilde{0} } + E_1 \ketbra{ \tilde{1} } + E_2 \ketbra{ \tilde{2} } \nonumber \\ &\quad+ i\frac{T_\text{ext}}{2} \frac{\sqrt{2}(6T_2 + E_2)}{\sqrt{(6\sqrt{2}T_2)^2 + {E_2}^2}} \left(\e{i\omega_\text{ext} t} + \e{-i\omega_\text{ext} t}\right)\left(-\ketbra{\tilde{1}}{\tilde{2}}\e{-i\omega_\text{ext} t} + \ketbra{\tilde{2}}{\tilde{1}}\e{i\omega_\text{ext} t} \right) \nonumber \\ &\quad + \frac{3\omega_\text{ext}}{2} \ketbra{\tilde{0}} + \frac{\omega_\text{ext}}{2} \ketbra{\tilde{1}} -\frac{\omega_\text{ext}}{2}\ketbra{\tilde{2}} \nonumber \\ &= \left(E_0+\frac{3\omega_\text{ext}}{2} \right) \ketbra{\tilde{0}} + \left(E_1+\frac{\omega_\text{ext}}{2} \right)\ketbra{ \tilde{1} } + \left(E_2 - \frac{\omega_\text{ext}}{2}\right) \ketbra{ \tilde{2} } \nonumber\\ & \quad- i\Delta \Big[ \ketbra{\tilde{1}}{\tilde{2}}\left(1+\e{-2i\omega_\text{ext} t}\right) - \ketbra{\tilde{2}}{\tilde{1}}\left(1+\e{2i\omega_\text{ext} t}\right) \Big] \nonumber \\ &\simeq \left(E_0+\frac{3\omega_\text{ext}}{2} \right) \ketbra{\tilde{0}} + \left(E_1+\frac{\omega_\text{ext}}{2} \right)\ketbra{ \tilde{1} } + \left(E_2 - \frac{\omega_\text{ext}}{2}\right) \ketbra{ \tilde{2} }+ i\Delta \Big( \ketbra{\tilde{2}}{\tilde{1}} - \ketbra{\tilde{1}}{\tilde{2}} \Big), \end{align} 
where we in the last equality have used the rotating wave approximation to remove the fast oscillating terms $\exp(\pm 2i\omega_\text{ext} t)$, and we have defined \begin{equation} \Delta = \frac{T_\text{ext}}{2} \frac{\sqrt{2}(E_2 + 6T_2)}{\sqrt{(6\sqrt{2}T_2)^2 + {E_2}^2}}. \end{equation} Writing this in terms of the detuning from resonance \begin{equation} \delta = E_2 - E_1 - \omega_\text{ext}, \end{equation} we can rewrite the expression above as \begin{equation} H_2^R = \left(\frac{E_1+E_2}{2} + \xi - \frac{3\delta}{2} \right) \ketbra{\tilde{0}} + \left(\frac{E_1+E_2}{2} - \frac{\delta}{2} \right)\ketbra{ \tilde{1} } + \left(\frac{E_1+E_2}{2} + \frac{\delta}{2} \right) \ketbra{ \tilde{2} }+ i\Delta \Big( \ketbra{\tilde{2}}{\tilde{1}} - \ketbra{\tilde{1}}{\tilde{2}} \Big) \end{equation} or, representing this as a matrix with the tilde states as basis states: \begin{align*} H_2^R \sim \begin{pmatrix} \xi - \frac{3\delta}{2} & 0 & 0 \\ 0 & - \frac{\delta}{2} & -i\Delta \\ 0 & i\Delta & \frac{\delta}{2} \end{pmatrix} + \frac{E_1+E_2}{2} \mathbb{I}_3, \end{align*} where $\mathbb{I}_3 $ is the $3\times3$ identity matrix of the qutrit, which can be safely ignored, and $ \xi = E_2 - E_1 - (E_1 - E_0) < 0 $ is the absolute anharmonicity between the first and second level in the qutrit. This can be diagonalized to find the energy spectrum \begin{equation} \begin{aligned} E_0' &= \xi - \frac{3\delta}{2} \\ E_1' &= - \gamma \\ E_2' &= + \gamma , \end{aligned} \end{equation} where \begin{align} \gamma = \frac{1}{2}\sqrt{\delta^2 + 4\Delta^2}.
\end{align} The (normalized) eigenstates of this Hamiltonian, expressed in the basis $\{ \ket{\tilde{0}},\ket{\tilde{1}},\ket{\tilde{2}} \}$, are \begin{equation} \begin{aligned} \ket{0'} &\sim \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \\ \ket{1'} &\sim \frac{1}{\sqrt{\Delta^2 + \left( \frac{\delta}{2} - \gamma \right)^2 }}\begin{pmatrix} 0 \\ i\Delta \\ -\frac{\delta}{2} + \gamma\end{pmatrix} \\ \ket{2'} &\sim \frac{1}{\sqrt{\Delta^2 + \left( \frac{\delta}{2} + \gamma \right)^2 }}\begin{pmatrix} 0 \\ -i\Delta \\ \frac{\delta}{2} + \gamma \end{pmatrix}. \end{aligned} \end{equation} In the limit $ \Delta \rightarrow 0 $, these reduce to the bare energy states $ \ket{i'} \rightarrow \ket{\tilde{i}} $ for $i=0,1,2$ when $\delta > 0$, meaning the driving frequency is below the undriven energy difference between the upper qutrit states. The Hamiltonian for the $\psi_2$ degree of freedom reduces to \begin{align} H_2^R = E_0' \ketbra{0'} + E_1' \ketbra{1'} + E_2'\ketbra{2'} = E_0'\mathbb{I}_3 + (E_1' - E_0') \ketbra{1'} + \left( E_2' - E_0' \right)\ketbra{2'}, \end{align} where the term proportional to the identity will again be thrown away from this point onwards. In conclusion, we see that by tuning $A_\text{ext}$ and/or $\omega_\text{ext}$, we can change $E_1'$ and $E_2'$, i.e. the contribution of the part of the Hamiltonian purely related to $\psi_2$ to the energy of the two highest qutrit states.\\ Next, we perform the transformation to the rotating picture for the parts of the Hamiltonian purely related to the outer fluxes $j = 1,3$. We choose the qubit spin-up state as the excited state, i.e. we associate $ b_j^\dagger b_j \mapsto \tfrac{1}{2} + \tfrac{1}{2}\sigma_j^z $.
Thus \begin{align} H_j \rightarrow H_j^R &= U_{\text{ext},j}^\dagger \left[ \sqrt{\frac{4\alpha_j}{C_j}}\left( b_j^\dagger b_j + \frac{1}{2} \right) - \frac{\beta_j}{4\alpha_jC_j}\left( b_j^\dagger + b_j \right)^4 \right] U_{\text{ext},j} \\ &= \frac{1}{2}\left( \sqrt{\frac{4\alpha_j}{C_j}} - \frac{3\beta_j}{\alpha_jC_j} \right)\sigma_j^z + \frac{\omega_\text{ext}}{2} \ketbra{\downarrow} - \frac{\omega_\text{ext}}{2} \ketbra{\uparrow} \\ &= \frac{1}{2}\left( \sqrt{\frac{4\alpha_j}{C_j}} - \frac{3\beta_j}{\alpha_jC_j} - \omega_\text{ext} \right) \sigma_j^z, \end{align} where we have made the usual truncation to a qubit. \\ We have yet to do the transformation to the rotating picture for the interaction terms. The factors involved are $( b_j^\dagger + b_j )$, which normally for a qubit maps to $\sigma^x_j$ before moving to the interaction picture, and $( b_j^\dagger + b_j )^2$, which maps to $2+\sigma_j^z$ for a qubit. We start by looking at the factor (for $j=1,3$): \begin{align*} (b_2^\dagger + b_2)(b_j^\dagger + b_j) &\rightarrow U_{\text{ext},2}^\dagger \left( b_2^\dagger + b_2\right)U_{\text{ext},2} \ U_{\text{ext},j}^\dagger\left(b_j^\dagger + b_j\right)U_{\text{ext},j}. \end{align*} The first factor is: \begin{align} U_{\text{ext},2}^\dagger \left( b_2^\dagger + b_2\right)U_{\text{ext},2} = k_0 \left( \e{-i\omega_\text{ext}t}\ketbra{\tilde{0}}{\tilde{1}} + \e{i\omega_\text{ext}t}\ketbra{\tilde{1}}{\tilde{0}} \right) + k_2 \left( \e{-i\omega_\text{ext}t}\ketbra{\tilde{1}}{\tilde{2}} + \e{i\omega_\text{ext}t}\ketbra{\tilde{2}}{\tilde{1}} \right), \end{align} where \begin{align} k_i = (-1)^{\frac{i}{2}}\frac{\sqrt{2}(6T_2 - E_i)}{\sqrt{(6\sqrt{2}T_2)^2 + {E_i}^2}} \end{align} for $i=0,2$. The second factor contributes with: \begin{equation} \label{eq:outer_to_R} U_{\text{ext},j}^\dagger(b_j^\dagger + b_j)U_{\text{ext},j} = \e{-i\omega_\text{ext}t}\ketbra{\downarrow}{\uparrow} + \e{i\omega_\text{ext}t}\ketbra{\uparrow}{\downarrow}.
\end{equation} So: \begin{align} (b_2^\dagger + b_2)(b_j^\dagger + b_j) &\rightarrow k_0 \left( \sigma_j^+ \ketbra{\tilde{0}}{\tilde{1}} + \sigma_j^- \ketbra{\tilde{1}}{\tilde{0}} + \e{-i2\omega_\text{ext}t}\sigma_j^- \ketbra{\tilde{0}}{\tilde{1}} + \e{i2\omega_\text{ext}t}\sigma_j^+ \ketbra{\tilde{1}}{\tilde{0}} \right) \\ &\quad + k_2 \left( \sigma_j^+ \ketbra{\tilde{1}}{\tilde{2}} + \sigma_j^- \ketbra{\tilde{2}}{\tilde{1}} + \e{-i2\omega_\text{ext}t}\sigma_j^- \ketbra{\tilde{1}}{\tilde{2}} + \e{i2\omega_\text{ext}t}\sigma_j^+ \ketbra{\tilde{2}}{\tilde{1}} \right) \\ &\simeq k_0 \left( \sigma_j^+ \ketbra{\tilde{0}}{\tilde{1}} + \sigma_j^- \ketbra{\tilde{1}}{\tilde{0}} \right) + k_2 \left( \sigma_j^+ \ketbra{\tilde{1}}{\tilde{2}} + \sigma_j^- \ketbra{\tilde{2}}{\tilde{1}} \right), \end{align} where we in the last equality have used the rotating wave approximation to remove the fast oscillating terms. Since we want to look at the primed mixed states as the new tunable qutrit, we transform this to the primed basis: \begin{align} (b_2^\dagger + b_2)(b_j^\dagger + b_j) &\rightarrow k_0 \sigma_j^+\braket{\tilde{1}}{1'}\ketbra{0'}{1'} + k_0\sigma_j^- \braket{1'}{\tilde{1}}\ketbra{1'}{0'} \\ &\quad + k_2 \Big( \sigma_j^+ \braket{1'}{\tilde{1}}\braket{\tilde{2}}{1'} + \sigma_j^- \braket{1'}{\tilde{2}}\braket{\tilde{1}}{1'} \Big) \ketbra{1'} \\ &\quad + k_2 \Big( \sigma_j^+ \braket{1'}{\tilde{1}}\braket{\tilde{2}}{2'} + \sigma_j^- \braket{1'}{\tilde{2}}\braket{\tilde{1}}{2'} \Big) \ketbra{1'}{2'} \\ &\quad + k_2 \Big( \sigma_j^+ \braket{2'}{\tilde{1}}\braket{\tilde{2}}{1'} + \sigma_j^- \braket{2'}{\tilde{2}}\braket{\tilde{1}}{1'} \Big) \ketbra{2'}{1'} \\ &\quad + k_2 \Big( \sigma_j^+ \braket{2'}{\tilde{1}}\braket{\tilde{2}}{2'} + \sigma_j^- \braket{2'}{\tilde{2}}\braket{\tilde{1}}{2'} \Big) \ketbra{2'} \\ &\quad + k_0 \sigma_j^+ \braket{\tilde{1}}{2'}\ketbra{0'}{2'} + k_0 \sigma_j^- \braket{2'}{\tilde{1}}\ketbra{2'}{0'}. 
\end{align} Some of these terms look troubling, but luckily, all terms proportional to an off-diagonal overlap between the primed and tilde states will be much smaller than the diagonal ones and can be ignored. Also, all these terms are in general energy non-conserving and thus could also be eliminated using a rotating wave approximation in the interaction picture. We note that the imaginary factor from the matrix elements can be eliminated by defining $ \ket{1'} \mapsto \ket{1'}_\text{new} = -i\ket{1'}_\text{old} $. We thus end up with: \begin{align} (b_2^\dagger + b_2)(b_j^\dagger + b_j) &\overset{\text{R}}{\rightarrow} \frac{k_0\Delta}{\sqrt{\Delta^2 + \left( \frac{\delta}{2} - \gamma \right)^2 }} \left( \sigma_j^+ \ketbra{0'}{1'} + \sigma_j^- \ketbra{1'}{0'} \right) \\ &\quad + \frac{k_2\left(\frac{\delta}{2}+\gamma\right)}{2\gamma} \left( \sigma_j^+ \ketbra{1'}{2'} + \sigma_j^- \ketbra{2'}{1'} \right), \end{align} where the ``R'' denotes the transformation to the rotating frame. We note that the couplings $ \ket{0'} \leftrightarrow \ket{1'} $ and $ \ket{1'} \leftrightarrow \ket{2'} $ are not equal. In fact, for small $\Delta$, the latter is a factor of $\sqrt{2}$ bigger, originating from the definition of the bosonic step operators. \\ We can now look at the transformation of the last kind of interaction term: \begin{align} \left( b_2^\dagger + b_2 \right)^2 \left( b_j^\dagger + b_j \right)^2 \rightarrow U_{\text{ext},2}^\dagger \left( b_2^\dagger + b_2\right)^2U_{\text{ext},2} \ U_{\text{ext},j}^\dagger\left(b_j^\dagger + b_j\right)^2 U_{\text{ext},j}. \end{align} For the second factor, the transformation acts as the identity in the standard qubit basis, i.e. \begin{equation} U_{\text{ext},j}^\dagger(b_j^\dagger + b_j)^2U_{\text{ext},j} \mapsto 2 + \sigma_j^z.
\end{equation} Performing the transformation for the first factor, we get: \begin{align} U_{\text{ext},2}^\dagger \left( b_2^\dagger + b_2\right)^2U_{\text{ext},2} &= C_{00} \ketbra{\tilde{0}} + 3 \ketbra{\tilde{1}} + C_{22} \ketbra{\tilde{2}} \nonumber\\ & \quad + C_{02} \Big( \e{-i2\omega_\text{ext}t} \ketbra{\tilde{0}}{\tilde{2}} + \e{i2\omega_\text{ext}t} \ketbra{\tilde{2}}{\tilde{0}} \Big). \end{align} Again, the last two terms are fast-rotating and can be removed. We have defined: \begin{equation} \begin{aligned} C_{00} &= 1 - \frac{4E_0\left(6T_2 - E_0\right)}{\left( 6\sqrt{2}T_2\right)^2 + {E_0}^2} \\ C_{02} &= \frac{ 4E_0E_2 - 12T_2(E_0 + E_2)}{\sqrt{\left( 6\sqrt{2}T_2 \right)^2 + {E_0}^2}\sqrt{\left( 6\sqrt{2}T_2 \right)^2 + {E_2}^2}}\\ C_{22} &= 1 - \frac{4E_2\left(6T_2 - E_2\right)}{\left( 6\sqrt{2}T_2\right)^2 + {E_2}^2} . \end{aligned} \end{equation} Moving to the primed basis, we find: \begin{align*} \left( b_2^\dagger + b_2\right)^2 &\rightarrow C_{00} \ketbra{0'} + \Big( 3 \abs{\braket{1'}{\tilde{1}}}^2 + C_{22}\abs{\braket{1'}{\tilde{2}}}^2 \Big) \ketbra{1'} \nonumber\\ &\quad + \Big( 3 \braket{1'}{\tilde{1}}\braket{\tilde{1}}{2'} + C_{22} \braket{1'}{\tilde{2}}\braket{\tilde{2}}{2'} \Big) \ketbra{1'}{2'} + \Big( 3 \braket{2'}{\tilde{1}}\braket{\tilde{1}}{1'} + C_{22} \braket{2'}{\tilde{2}}\braket{\tilde{2}}{1'} \Big) \ketbra{2'}{1'} \nonumber\\ &\quad + \Big( 3\abs{\braket{2'}{\tilde{1}}}^2 + C_{22} \abs{\braket{2'}{\tilde{2}}}^2 \Big) \ketbra{2'}. \end{align*} Again, we will ignore the energy non-conserving terms proportional to an off-diagonal overlap. We will, though, keep the off-diagonal overlaps that appear in the energy-conserving terms, for the accuracy of the final Hamiltonian.
Evaluating the matrix elements gives: \begin{align} \left( b_2^\dagger + b_2\right)^2 &\overset{\text{R}}{\rightarrow} C_{00} \ketbra{0'} + \frac{3\Delta^2 + C_{22} \left( -\frac{\delta}{2} + \gamma \right)^2}{\Delta^2 + \left( -\frac{\delta}{2} + \gamma \right)^2} \ketbra{1'} \nonumber\\ &\quad + \frac{3\Delta^2 + C_{22} \left( \frac{\delta}{2} + \gamma \right)^2}{\Delta^2 + \left( \frac{\delta}{2} + \gamma \right)^2} \ketbra{2'}. \end{align} We are now ready to look at the full transformed Hamiltonian. \subsection{Full Hamiltonian} We can now write down the full Hamiltonian for the system when coupled to an external field mixing the first and second excited state in the rotating frame of the Hamiltonian $H_\text{ext} = -\frac{3\omega_\text{ext}}{2 } \ketbra{\tilde{0}} -\frac{\omega_\text{ext}}{2} \ketbra{\tilde{1}} + \frac{\omega_\text{ext}}{2} \ketbra{\tilde{2}} $. We start from the Hamiltonian in equation \eqref{eq:H_b} and now insert how the factors containing the bosonic step operators transform under such a transformation, as calculated in the previous section. 
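As a quick numerical sanity check of the qutrit-sector diagonalization used above, the $3\times3$ rotating-frame block can be diagonalized directly. The sketch below uses plain NumPy with purely illustrative values of $\delta$, $\Delta$ and $\xi$ (our choice, not taken from the circuit):

```python
import numpy as np

# Illustrative values (not from the circuit): detuning, drive strength, anharmonicity
delta, Delta, xi = 0.3, 0.2, -1.0
gamma = 0.5 * np.sqrt(delta**2 + 4 * Delta**2)

# Rotating-frame qutrit block in the basis (|0~>, |1~>, |2~>)
H2R = np.array([[xi - 1.5 * delta, 0, 0],
                [0, -delta / 2, -1j * Delta],
                [0, 1j * Delta, delta / 2]])

evals = np.sort(np.linalg.eigvalsh(H2R))
expected = np.sort([xi - 1.5 * delta, -gamma, gamma])
assert np.allclose(evals, expected)  # spectrum {xi - 3 delta/2, -gamma, +gamma}
```

Any parameter values with $\delta>0$ reproduce the same spectral structure.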
To sum up, we found: \begin{equation} \begin{aligned} \label{eq:qutrit_mappings} T_1b_2^\dagger b_2 - T_2\left( b_2^\dagger + b_2 \right)^4 + iT_\text{ext} \cos(\omega_\text{ext} t)\left( b_2^\dagger - b_2 \right) &\mapsto (E_1' - E_0')\ketbra{1'} + (E_2' - E_0')\ketbra{2'} \\ \sqrt{\frac{4\alpha_j}{C_j}}\left( b_j^\dagger b_j + \frac{1}{2} \right) - \frac{\beta_j}{4\alpha_jC_j}\left( b_j^\dagger + b_j \right)^4 &\mapsto \frac{1}{2}\left( \sqrt{\frac{4\alpha_j}{C_j}} - \frac{3\beta_j}{\alpha_jC_j} - \omega_\text{ext} \right) \sigma_j^z\\ (b_2^\dagger + b_2)(b_j^\dagger + b_j) &\mapsto \frac{k_0\Delta}{\sqrt{\Delta^2 + \left( \frac{\delta}{2} - \gamma \right)^2 }} \left( \sigma_j^+ \ketbra{0'}{1'} + \sigma_j^- \ketbra{1'}{0'} \right) \\ &\phantom{\mapsto} + \frac{k_2\left(\frac{\delta}{2}+\gamma\right)}{2\gamma} \left( \sigma_j^+ \ketbra{1'}{2'} + \sigma_j^- \ketbra{2'}{1'} \right)\\ \left( b_2^\dagger + b_2 \right)^2\left( b_j^\dagger + b_j \right)^2 &\mapsto \Big( C_{00}\mathbb{I}_2 + D_1\ketbra{1'} + D_2\ketbra{2'} \Big)\Big( 2 + \sigma_j^z \Big) , \end{aligned} \end{equation} where \begin{equation} \label{eq:D_constants} \begin{split} D_1 &= \frac{3\Delta^2 + C_{22} \left( -\frac{\delta}{2} + \gamma \right)^2}{\Delta^2 + \left( -\frac{\delta}{2} + \gamma \right)^2} - C_{00} \\ D_2 &= \frac{3\Delta^2 + C_{22} \left( \frac{\delta}{2} + \gamma \right)^2}{\Delta^2 + \left( \frac{\delta}{2} + \gamma \right)^2} - C_{00}. \end{split} \end{equation} We remind the reader that we have defined the excited state as $ \ket{\uparrow} $ and the ground state as $ \ket{\downarrow} $, which explains the sign in front of the $ \sigma^z $s.
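As a consistency check on these definitions: in the weak-drive limit $\Delta\rightarrow 0$ (with $\delta>0$) the primed states reduce to the tilde states, so one expects $D_1 \rightarrow 3 - C_{00}$ and $D_2 \rightarrow C_{22} - C_{00}$. A small numerical sketch (the values of $C_{00}$, $C_{22}$ and $\delta$ below are ours, purely illustrative):

```python
import numpy as np

# Illustrative values (ours): diagonal coefficients and detuning
C00, C22, delta = 0.8, 1.4, 0.5

def D(Delta):
    # D constants as defined in the text, for drive strength Delta
    gamma = 0.5 * np.sqrt(delta**2 + 4 * Delta**2)
    D1 = (3 * Delta**2 + C22 * (gamma - delta / 2)**2) \
        / (Delta**2 + (gamma - delta / 2)**2) - C00
    D2 = (3 * Delta**2 + C22 * (gamma + delta / 2)**2) \
        / (Delta**2 + (gamma + delta / 2)**2) - C00
    return D1, D2

D1, D2 = D(1e-4)  # weak drive
assert abs(D1 - (3 - C00)) < 1e-6   # |1'> -> |1~>, diagonal element 3
assert abs(D2 - (C22 - C00)) < 1e-6  # |2'> -> |2~>, diagonal element C22
```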
\noindent We can now write out the full Hamiltonian in the rotating frame, changing the indices from $\{1,2,3\}$ to $\{L,M,R\}$ for the sake of visualizing the system as a chain of two qubits with a qutrit in the middle: \begin{equation} \label{eq:H_full} \begin{aligned} H_\text{full} &= \frac{\Delta_L}{2} \sigma_L^z + \Delta_M \ketbra{1'} + (\Delta_M + \delta_M)\ketbra{2'} + \frac{\Delta_R}{2} \sigma_R^z \\ &\hspace{1cm} + J_{LM_{01}}\left( \sigma_L^-\ketbra{1'}{0'} + \sigma_L^+\ketbra{0'}{1'} \right) +J_{RM_{01}}\left( \sigma_R^-\ketbra{1'}{0'} + \sigma_R^+\ketbra{0'}{1'} \right) \\ &\hspace{1cm}+ J_{LM_{12}}\left( \sigma_L^-\ketbra{2'}{1'} + \sigma_L^+\ketbra{1'}{2'} \right)+ J_{RM_{12}}\left( \sigma_R^-\ketbra{2'}{1'} + \sigma_R^+\ketbra{1'}{2'} \right) \\ &\hspace{1cm}+ J_{LM}^{(z)}\sigma_L^z\left( D_1\ketbra{1'} + D_2\ketbra{2'} \right) + J_{RM}^{(z)}\sigma_R^z\left( D_1\ketbra{1'} + D_2\ketbra{2'} \right). \end{aligned} \end{equation} Here, the diagonal constants are \begin{equation} \begin{aligned}\label{eq:diagonal_constants} \Delta_L &= -\frac{3\beta_1}{\alpha_1C_1} + \sqrt{\frac{4\alpha_1}{C_1}} - \frac{C_{00}}{64}\frac{E_{J1}\cos(\frac{\Phi_{\Sigma 1}}{2})}{\sqrt{\alpha_1\alpha_2C_1(C_1+C_2+C_\text{ext})}} - \omega_\text{ext} \\ \Delta_M &= E_1'-E_0' - \frac{D_1}{64}\frac{E_{J1}\cos(\frac{\Phi_{\Sigma 1}}{2})}{\sqrt{\alpha_1\alpha_2C_1(C_1+C_2+C_\text{ext})}} - \frac{D_1}{64}\frac{E_{J2}\cos(\frac{\Phi_{\Sigma 2}}{2})}{\sqrt{\alpha_3\alpha_2C_2(C_1+C_2+C_\text{ext})}} \\ \delta_M &= 2\gamma - \frac{D_2-D_1}{64}\frac{E_{J1}\cos(\frac{\Phi_{\Sigma 1}}{2})}{\sqrt{\alpha_1\alpha_2C_1(C_1+C_2+C_\text{ext})}} - \frac{D_2-D_1}{64}\frac{E_{J2}\cos(\frac{\Phi_{\Sigma 2}}{2})}{\sqrt{\alpha_3\alpha_2C_2(C_1+C_2+C_\text{ext})}} \\ \Delta_R &= - \frac{3\beta_3}{\alpha_3C_2} + \sqrt{\frac{4\alpha_3}{C_2}} - \frac{C_{00}}{64}\frac{E_{J2}\cos(\frac{\Phi_{\Sigma 2}}{2})}{\sqrt{\alpha_3\alpha_2C_2(C_1+C_2+C_\text{ext})}} - \omega_\text{ext} , \end{aligned} \end{equation}
while the $D_i$ constants are defined in \eqref{eq:D_constants} and the rest are \begin{equation}\label{eq:interaction_constants} \begin{aligned} J_{LM_{01}} &= \left( \frac{1}{8L_1} - \frac{1}{8\tilde{L}_1} \right)\frac{k_0}{2(\alpha_1\alpha_2C_1(C_1+C_2+C_\text{ext}))^{\frac{1}{4}}}\frac{\Delta}{\sqrt{\Delta^2 + \left( -\frac{\delta}{2}+\gamma \right)^2 }} \\ J_{RM_{01}} &= \left( \frac{1}{8L_2} - \frac{1}{8\tilde{L}_2} \right)\frac{k_0}{2(\alpha_3\alpha_2C_2(C_1+C_2+C_\text{ext}))^{\frac{1}{4}}}\frac{\Delta}{\sqrt{\Delta^2 + \left( -\frac{\delta}{2}+\gamma \right)^2 }} \\ J_{LM_{12}} &= \left( \frac{1}{8L_1} - \frac{1}{8\tilde{L}_1} \right)\frac{k_2}{4(\alpha_1\alpha_2C_1(C_1+C_2+C_\text{ext}))^{\frac{1}{4}}}\frac{ \frac{\delta}{2} + \gamma}{\gamma} \\ J_{RM_{12}} &= \left( \frac{1}{8L_2} - \frac{1}{8\tilde{L}_2} \right)\frac{k_2}{4(\alpha_3\alpha_2C_2(C_1+C_2+C_\text{ext}))^{\frac{1}{4}}}\frac{ \frac{\delta}{2} + \gamma}{\gamma} \\ J_{LM}^{(z)} &= -\frac{E_{J1}}{128}\frac{\cos(\frac{\Phi_{\Sigma 1}}{2})}{\sqrt{\alpha_1\alpha_2C_1(C_1+C_2+C_\text{ext})}} \\ J_{RM}^{(z)} &= -\frac{E_{J2}}{128}\frac{\cos(\frac{\Phi_{\Sigma 2}}{2})}{\sqrt{\alpha_3\alpha_2C_2(C_1+C_2+C_\text{ext})}}. \end{aligned} \end{equation} To sum up, we have now, starting from the circuit in Supplementary Fig. \ref{fig:big_circuit}, calculated the resulting Hamiltonian and added a dynamical tuning of the qutrit via driving of the first and second excited bare qutrit energy levels. In doing so, we first transformed to the rotating picture with respect to the driving field and then to the new mixed qutrit eigenstates, finally obtaining the Hamiltonian above. \section*{Supplementary Note 2: Circuit parameters used in the simulations} In Table \ref{tbl:realistic_parameters}, we have included a list of realistic circuit parameters and the corresponding spin-model parameters they yield. 
We can split this into three parts, a circuit with Hamiltonian parameters suited to implementing the dissociation procedure, a circuit implementing the \textsc{acswap} gate, and a circuit with all levels off-resonant, suitable to implementing the STIRAP procedure and \textsc{ccz} and holonomic gates. Note that for the dissociation and \textsc{acswap} procedures, we are working in the frame rotating with the AC Stark drive frequency, $\omega_\text{ext}$, and thus the energy terms are reduced accordingly. This of course does not change the (non-trivial) dynamics, since only the relative energy differences are important. \begin{table}[h!] \centering \begin{tabular}{| p{3.8cm} | p{6.5cm} | p{6.5cm} | } \hline & Circuit parameters & Spin-model parameters \\ \hline Dissociation, first part (STIRAP) &\vspace{-0.5cm}\begin{flushleft} $L_{1,2} = 19.999\:\si{\nano\henry},\ \linebreak \tilde{L}_{1,2} = 18.713\:\si{\nano\henry},\ \linebreak E_{J1,2}/(2\pi) = 185.91\:\si{\giga\hertz},\ \linebreak E_{Jq1,3}/(2\pi) = 79.515\:\si{\giga\hertz},\ \linebreak E_{Jq2}/(2\pi) = 27.441\:\si{\giga\hertz},\ \linebreak C_{1,2} = 55.816\:\si{\femto\farad},\ \linebreak\Phi_{\Sigma 1,2} = -0.43452\:\Phi_0,\ \linebreak C_\text{ext} = 2.3411\:\si{\femto\farad},\ \linebreak \text{Dynamical driving off}$ \end{flushleft} & \vspace{-0.5cm}\begin{flushleft} $\Delta_L/(2\pi) = 15.271\:\si{\giga\hertz},\ \linebreak \Delta_M/(2\pi) = 13.841\:\si{\giga\hertz},\ \linebreak \delta_M /(2\pi) = 13.671\:\si{\giga\hertz},\ \linebreak \Delta_R/(2\pi) = 15.271\:\si{\giga\hertz},\ \linebreak J_{\alpha M_{01}}/(2\pi) = 9.2737\:\si{\mega\hertz},\ \linebreak J_{\alpha M_{12}}/(2\pi) = 12.996\:\si{\mega\hertz},\ \linebreak J_{\alpha M}^{(z)}/(2\pi) = -20.433\:\si{\mega\hertz},\ \linebreak D_1 = 1.9877,\ \linebreak D_2 = 3.9754,\ \linebreak \max(\Omega_{1,2})/(2\pi) = 20\:\si{\mega\hertz}$ \end{flushleft} \\ \hline Dissociation, second part (two-photon resonance) &\vspace{-0.5cm}\begin{flushleft} $L_{1,2} = 
20.000\:\si{\nano\henry},\ \linebreak \tilde{L}_{1,2} = 16.949\:\si{\nano\henry},\ \linebreak E_{J1,2}/(2\pi) = 32.371\:\si{\giga\hertz},\ \linebreak E_{Jq1,3}/(2\pi) = 109.74\:\si{\giga\hertz},\ \linebreak E_{Jq2}/(2\pi) = 108.94\:\si{\giga\hertz},\ \linebreak C_{1,2} = 55.935\:\si{\femto\farad},\ \linebreak\Phi_{\Sigma 1,2} = 0.27790\:\Phi_0,\ \linebreak C_\text{ext} = 71.773\:\si{\femto\farad},\ \linebreak A_\text{ext}/(4\alpha_2 (C_1+C_2+C_\text{ext}))^{-1/4} = 0.1000,\ \linebreak \omega_{\text{ext}}/(2\pi) = 14.674\:\si{\giga\hertz}$ \end{flushleft} & \vspace{-0.5cm}\begin{flushleft} In the frame rotating with $\omega_\text{ext}$: \linebreak $\Delta_L/(2\pi) = 0.46529\:\si{\giga\hertz},\ \linebreak \Delta_M/(2\pi) = 0.088376\:\si{\giga\hertz},\ \linebreak \delta_M /(2\pi) = 0.84222\:\si{\giga\hertz},\ \linebreak \Delta_R/(2\pi) = 0.46529\:\si{\giga\hertz},\ \linebreak J_{\alpha M_{01}}/(2\pi) = 15.0203\:\si{\mega\hertz},\ \linebreak J_{\alpha M_{12}}/(2\pi) = 17.114\:\si{\mega\hertz},\ \linebreak J_{\alpha M}^{(z)}/(2\pi) = -6.4890\:\si{\mega\hertz},\ \linebreak D_1 = 2.6636,\ \linebreak D_2 = 3.3015$ \end{flushleft} \\ \hline \textsc{ccnot}/\textsc{ccz} &\vspace{-0.5cm}\begin{flushleft} Same as during STIRAP. 
\end{flushleft} & \vspace{-0.5cm}\begin{flushleft} Same as for STIRAP, except \linebreak $\Omega_1/(2\pi) = 6\:\si{\mega\hertz}$ \end{flushleft} \\ \hline \textsc{acswap} \newline(first part of \textsc{ccswap}) & \vspace{-0.5cm}\begin{flushleft} $L_{1,2} = 16.049\:\si{\nano\henry},\ \linebreak \tilde{L}_{1,2} = 18.988\:\si{\nano\henry},\ \linebreak E_{J1,2}/(2\pi) = 56.08\:\si{\giga\hertz},\ \linebreak E_{Jq1,3}/(2\pi) = 155.22\:\si{\giga\hertz},\ \linebreak E_{Jq2}/(2\pi) = 135.33\:\si{\giga\hertz},\ \linebreak C_{1,2} = 56.619\:\si{\femto\farad},\ \linebreak \Phi_{\Sigma 1,2} = 0.49999\:\Phi_0,\ \linebreak C_\text{ext} = 90.011\:\si{\femto\farad},\ \linebreak A_\text{ext}/(4\alpha_2 (C_1+C_2+C_\text{ext}))^{-1/4} = 0.10000,\ \linebreak \omega_{\text{ext}}/(2\pi) = 14.367\:\si{\giga\hertz}$ \end{flushleft} & \vspace{-0.5cm}\begin{flushleft} In the frame rotating with $\omega_\text{ext}$: \linebreak $\Delta_L/(2\pi) = 0.9113\:\si{\giga\hertz},\ \linebreak \Delta_M/(2\pi) = -0.0798\:\si{\giga\hertz},\ \linebreak \delta_M /(2\pi) = 0.9121\:\si{\giga\hertz},\ \linebreak \Delta_R/(2\pi) = 0.9113\:\si{\giga\hertz},\ \linebreak J_{\alpha M_{01}}/(2\pi) = 14.311\:\si{\mega\hertz},\ \linebreak J_{\alpha M_{12}}/(2\pi) = 15.173\:\si{\mega\hertz},\ \linebreak J_{\alpha M}^{(z)}/(2\pi) = -0.601\:\si{\kilo\hertz},\ \linebreak D_1 = 2.8377,\ \linebreak D_2 = 3.12544$ \end{flushleft} \\ \hline Holonomic gate \phantom{abc} \linebreak (double-controlled) & \vspace{-0.5cm}\begin{flushleft} \vspace{0.0pt} Same as during STIRAP \end{flushleft} & \vspace{-0.5cm}\begin{flushleft} Same as for STIRAP, except \linebreak $\Omega_{1,2}/(2\pi) = 15\:\si{\mega\hertz}$ \end{flushleft} \\ \hline \end{tabular} \caption{A table of realistic parameters and the corresponding Hamiltonian parameters used in each implementation. In all simulations, we have included finite relaxation and coherence times set to $T_1 = \SI{31}{\micro\second}$ and $T_2 = \SI{35}{\micro\second}$, respectively. 
The parameters of the second part of the \textsc{cswap} are not shown, but are obtained by varying only the external parameters relative to the first part. The resulting parameters are similar to what is shown for the \textsc{ccnot} gate implementation.} \label{tbl:realistic_parameters} \end{table} \section*{Supplementary Note 3: Direct Dissociation to Entanglement} We wish to obtain an entangled state between the state where qubits one and three are both in the excited state and the state where they are both in the ground state. Since the Hamiltonian conserves the total projection of spin, we start in the state $\ket{\downarrow 2 \downarrow}$. We look at the matrix representation of our Hamiltonian in the basis $ \left\{\, \ket{\downarrow 2 \downarrow}, \ket{\uparrow 0 \uparrow}, \ket{\downarrow 1 \uparrow}, \ket{\uparrow 1 \downarrow}\, \right\} $, the ordering in which the eigenvectors below are written. If we assume the system to be symmetric, i.e. we do not distinguish between $\alpha=L$ and $\alpha=R$, and further assume the states to be resonant, i.e. $\Delta_M = \Delta_R = \Delta_L = \delta_M$, the contribution from these energy terms is proportional to the identity and we are left with the matrix representation of the XX and ZZ-terms, which we can easily diagonalize to find the eigenvalues. In the simplified case $J_{\alpha M_{01}} = J_{\alpha M_{12}} = J_{\alpha M} $ and $ D_1 = D_2 = 1 $ (This is not essential, and is only chosen to ease the readability of the analysis. The equation for $D_2J_{\alpha M}^{(z)}$ in the main text is the result in the non-simplified case), the eigenvalues are: \begin{align} E_1 = 0 \quad , \quad E_2 = -2 J_{\alpha M}^{(z)} \quad , \quad E_3 = -J_{\alpha M}^{(z)} - \lambda \quad \text{and} \quad E_4 = -J_{\alpha M}^{(z)} + \lambda, \end{align} where \begin{equation}\label{eq:lambda} \lambda = \sqrt{4{J_{\alpha M}}^2 + {J_{\alpha M}^{(z)}}^2}.
\end{equation} The associated eigenvectors are \begingroup \renewcommand*{\arraystretch}{1.5} \begin{align} \ket{v_1} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ 0 \\ -1 \\ 1 \end{pmatrix} \quad , \quad \ket{v_2} &= \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \end{pmatrix} \quad , \quad \ket{v_3} = \sqrt{\frac{\lambda -J_{\alpha M}^{(z)}}{4\lambda}} \begin{pmatrix} \frac{2 J_{\alpha M}}{J_{\alpha M}^{(z)} - \lambda} \\ \frac{2 J_{\alpha M}}{J_{\alpha M}^{(z)} - \lambda} \\ 1 \\ 1 \end{pmatrix} \nonumber \\ \quad \text{and} \quad \ket{v_4} &= \sqrt{\frac{\lambda +J_{\alpha M}^{(z)}}{4\lambda}} \begin{pmatrix} \frac{2 J_{\alpha M}}{J_{\alpha M}^{(z)} + \lambda} \\ \frac{2 J_{\alpha M}}{J_{\alpha M}^{(z)} + \lambda} \\ 1 \\ 1\end{pmatrix}. \end{align} \endgroup We can now find \begin{align} \braket{f}{\text{e}^{-iHt} | \downarrow 2 \downarrow} &= \sum_{n,m=1}^4 \braket{f}{v_n}\braket{v_n | \text{e}^{-iHt}}{v_m} \braket{v_m}{\downarrow 2 \downarrow} \nonumber \\ &=\sum_{n=1}^4 \braket{f}{v_n} \text{e}^{-iE_nt}\braket{v_n}{\downarrow 2 \downarrow}, \end{align} where $\ket{f}$ is one of the four basis states mentioned earlier. From this, the complete time development of $\ket{\downarrow 2 \downarrow}$ is recovered analytically. Specifically, we have \begin{align*} \abs{\braket{\downarrow 2 \downarrow}{\text{e}^{-iHt} | \downarrow 2 \downarrow}}^2 = \frac{1}{4}\Big( \cos(J_{\alpha M}^{(z)}t) + \cos(\lambda t) \Big)^2 + \frac{1}{4}\Big( \sin(J_{\alpha M}^{(z)}t) + \frac{J_{\alpha M}^{(z)}}{\lambda}\sin(\lambda t) \Big)^2 \end{align*} and \begin{align*} \abs{\braket{\uparrow 0 \uparrow}{\text{e}^{-iHt} | \downarrow 2 \downarrow}}^2 = \frac{1}{4}\Big( \cos(J_{\alpha M}^{(z)}t) - \cos(\lambda t) \Big)^2 + \frac{1}{4}\Big( \sin(J_{\alpha M}^{(z)}t) - \frac{J_{\alpha M}^{(z)}}{\lambda}\sin(\lambda t) \Big)^2. 
\end{align*} Because $\lambda = \sqrt{4{J_{\alpha M}}^2 + {J_{\alpha M}^{(z)}}^2} $ is in general incommensurate with $J_{\alpha M}^{(z)}$, we expect the probabilities to display quasi-periodic behavior for a generic $J_{\alpha M}^{(z)} $. We ask whether there exists a value of $J_{\alpha M}^{(z)}>0 $ for which we can find a time $t$ at which \begin{align} \label{eq:condition_1} &\abs{\braket{\downarrow 2 \downarrow}{\text{e}^{-iHt} | \downarrow 2 \downarrow}}^2 = \abs{\braket{\uparrow 0 \uparrow}{\text{e}^{-iHt} | \downarrow 2 \downarrow}}^2 =\frac{1}{2} \end{align} and \begin{align} \label{eq:condition_2} \frac{\text{d}}{\text{d}t}\abs{\braket{\downarrow 2 \downarrow}{\text{e}^{-iHt} | \downarrow 2 \downarrow}}^2 = \frac{\text{d}}{\text{d}t}\abs{\braket{\uparrow 0 \uparrow}{\text{e}^{-iHt} | \downarrow 2 \downarrow}}^2 =0. \end{align} From the first equality in \eqref{eq:condition_1}, we find the condition \begin{align}\label{eq:con_1_2} \cos(\lambda t)\cos(J_{\alpha M}^{(z)}t) = - \frac{J_{\alpha M}^{(z)}}{\lambda}\sin(\lambda t)\sin(J_{\alpha M}^{(z)}t), \end{align} while from equation \eqref{eq:condition_2} we get \begin{align}\label{eq:con_2_2} &\sin(\lambda t)\left( \cos(J_{\alpha M}^{(z)}t) + \cos(\lambda t) \right) = 0 = \sin(\lambda t)\left( \cos(J_{\alpha M}^{(z)}t) - \cos(\lambda t) \right) \nonumber \\ &\Rightarrow \quad\sin(\lambda t) = 0 \quad \quad \text{or} \quad \quad \cos(J_{\alpha M}^{(z)}t)=\cos(\lambda t) = 0. \end{align} Here, the second option implies that $\sin(J_{\alpha M}^{(z)}t) = \pm \sin(\lambda t) = \pm 1 \neq 0$, which is not compatible with the condition \eqref{eq:con_1_2}. Therefore, we must have $ \sin(\lambda t) = 0 $ and from \eqref{eq:con_1_2} also $\cos(J_{\alpha M}^{(z)}t) = 0$.
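As an aside, the closed-form probabilities above can be cross-checked numerically against the spectral decomposition of the Hamiltonian. The sketch below (plain NumPy, with illustrative couplings of our choosing) reconstructs the time evolution from the eigenvalues and eigenvectors, with the basis ordered so that $\ket{\downarrow 2 \downarrow}$ and $\ket{\uparrow 0 \uparrow}$ are the first two components, which is the ordering in which the eigenvectors are written:

```python
import numpy as np

J, Jz = 1.0, 0.7  # illustrative couplings J_aM and J_aM^(z) (our choice)
lam = np.sqrt(4 * J**2 + Jz**2)

# Eigenvalues and eigenvectors as given in the text
E = np.array([0.0, -2 * Jz, -Jz - lam, -Jz + lam])
v1 = np.array([0, 0, -1, 1]) / np.sqrt(2)
v2 = np.array([-1, 1, 0, 0]) / np.sqrt(2)
v3 = np.sqrt((lam - Jz) / (4 * lam)) * np.array(
    [2 * J / (Jz - lam), 2 * J / (Jz - lam), 1, 1])
v4 = np.sqrt((lam + Jz) / (4 * lam)) * np.array(
    [2 * J / (Jz + lam), 2 * J / (Jz + lam), 1, 1])
V = [v1, v2, v3, v4]

t = np.linspace(0, 10, 501)
d2d = np.array([1.0, 0, 0, 0])  # |down 2 down>
u0u = np.array([0, 1.0, 0, 0])  # |up 0 up>

def amp(f):
    # <f| e^{-iHt} |d2d> = sum_n <f|v_n> e^{-i E_n t} <v_n|d2d>
    return sum((f @ vn) * np.exp(-1j * En * t) * (vn @ d2d)
               for En, vn in zip(E, V))

P_stay, P_swap = np.abs(amp(d2d))**2, np.abs(amp(u0u))**2

# Closed-form expressions from the text
P_stay_cf = 0.25 * (np.cos(Jz * t) + np.cos(lam * t))**2 \
          + 0.25 * (np.sin(Jz * t) + (Jz / lam) * np.sin(lam * t))**2
P_swap_cf = 0.25 * (np.cos(Jz * t) - np.cos(lam * t))**2 \
          + 0.25 * (np.sin(Jz * t) - (Jz / lam) * np.sin(lam * t))**2

assert np.allclose(P_stay, P_stay_cf)
assert np.allclose(P_swap, P_swap_cf)
```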
This means that we have two conditions on our time $t>0$: \begin{align} t=n\frac{\pi}{\lambda} \quad \text{and} \quad t=m\frac{\pi}{2J_{\alpha M}^{(z)}} \ ,\ n\in \mathbb{N}\,,\ m=1,3,5,\dots \end{align} We solve this for $J_{\alpha M}^{(z)}$: \begin{align} n\frac{\pi}{\lambda} &= m\frac{\pi}{2J_{\alpha M}^{(z)}} \Rightarrow 4n^2{J_{\alpha M}^{(z)}}^2 = (4{J_{\alpha M}}^2 + {J_{\alpha M}^{(z)}}^2)m^2\nonumber \\ &\Rightarrow J_{\alpha M}^{(z)} = \frac{2J_{\alpha M}}{\sqrt{4 n^2 / m^2 - 1}},\label{eq:J_ZZ} \end{align} where $n=1,2,3,\dots$ and $m = 1, 3, 5,\dots$ with the further condition $4 n^2 / m^2 > 1 \Rightarrow n > m/2 $ to keep the coupling real. This is plotted for $n=m=1$ in Supplementary Fig. \ref{fig:n1m1} as a proof of concept, using arbitrary parameter values. \begin{figure}[!htbp] \centering \includegraphics[width=0.75\textwidth]{direct_dissociation.pdf} \caption{Probability of occupation in the case $J_{\alpha M}^{(z)} = 2J_{\alpha M}/\sqrt{3} $. We notice that the desired entangled state is reached at $t = \pi /\lambda = \pi /(2J_{\alpha M}^{(z)}) \approx 1.36/J_{\alpha M} \approx \SI{0.028}{\micro\second} $.} \label{fig:n1m1} \end{figure} \section*{Supplementary Note 4: Analysis of the CSWAP}\label{sec:appendix_CSWAP} Assuming that the excited states of the outer qubits are in resonance with the second excited state of the qutrit ($\Delta_L \simeq \delta_M \simeq \Delta_R $) while the qutrit ground state is far detuned from the rest, we can move to the interaction picture with respect to the diagonal part of equation \eqref{eq:H_full}, ignoring these terms and also all terms involving the ground state ($\ket{0}$) of the qutrit. We also choose to assume symmetry between both ends of the chain, so e.g. $ J_{L M_{12}} = J_{R M_{12}} = J_{\alpha M_{12}} $, and that $J_{\alpha M}^{(z)} = 0$. A non-zero ZZ-coupling will make the states with the qubits in the same state slightly detuned, making the transition imperfect.
This can be partially remedied by choosing $\Delta_\alpha = \delta_M - 2\frac{D_1+D_2}{2}J_{\alpha M}^{(z)}$, but because in general $D_1 \neq D_2$, this detuning cannot be fixed completely. In the basis $\{\ket{\downarrow 1 \downarrow},\ket{\uparrow 1 \downarrow}, \ket{\downarrow 2 \downarrow}, \ket{\downarrow 1 \uparrow}, \ket{\uparrow 2 \downarrow}, \ket{\uparrow 1 \uparrow},\ket{\downarrow 2 \uparrow},\ket{\uparrow 2 \uparrow}\}$, the interaction part of the Hamiltonian can be written very simply: \begin{equation} H_I = \begin{pmatrix}[c | ccc | ccc | c] 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 0 & 0 & J_{\alpha M_{12}} & 0 & 0 & 0 & 0 & 0 \\ 0 & J_{\alpha M_{12}} & 0 & J_{\alpha M_{12}} & 0 & 0 & 0 & 0 \\ 0 & 0 & J_{\alpha M_{12}} & 0 & 0 & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & 0 & 0 & J_{\alpha M_{12}} & 0 & 0 \\ 0 & 0 & 0 & 0 & J_{\alpha M_{12}} & 0 & J_{\alpha M_{12}} & 0 \\ 0 & 0 & 0 & 0 & 0 & J_{\alpha M_{12}} & 0 & 0 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} . \end{equation} Notice the block matrix form where the two $3\times3$ matrices represent irreducible subspaces. Furthermore, they are equal in form, so when performing the time evolution $\e{-iH_I t} $ we only have to exponentiate one of these irreducible sub-matrices. It is easily found that: \begin{equation} \exp[-i \begin{pmatrix} 0 & J_{\alpha M_{12}} & 0 \\ J_{\alpha M_{12}} & 0 & J_{\alpha M_{12}} \\ 0 & J_{\alpha M_{12}} & 0 \end{pmatrix}t] = \frac{1}{2} \begin{pmatrix} \cos(\sqrt{2}J_{\alpha M_{12}} t) + 1 & -i\sqrt{2}\sin(\sqrt{2}J_{\alpha M_{12}} t) & \cos(\sqrt{2}J_{\alpha M_{12}} t) - 1 \\ -i\sqrt{2}\sin(\sqrt{2}J_{\alpha M_{12}} t) & 2\cos(\sqrt{2}J_{\alpha M_{12}} t) & -i\sqrt{2}\sin(\sqrt{2}J_{\alpha M_{12}} t)\\ \cos(\sqrt{2}J_{\alpha M_{12}} t) - 1 & -i\sqrt{2}\sin(\sqrt{2}J_{\alpha M_{12}} t) & \cos(\sqrt{2}J_{\alpha M_{12}} t) + 1 \end{pmatrix}.
\end{equation} Letting the qutrit start in the first excited state, we can now find \begin{align} \e{-iH_I t} \ket{\downarrow 1 \downarrow} &= \ket{\downarrow 1 \downarrow} \\ \e{-iH_I t} \ket{\uparrow 1 \downarrow} &= \cos[2](\frac{J_{\alpha M_{12}}}{\sqrt{2}}t)\ket{\uparrow 1 \downarrow} - \frac{i}{\sqrt{2}}\sin(\sqrt{2}J_{\alpha M_{12}}t)\ket{\downarrow 2 \downarrow} - \sin[2](\frac{J_{\alpha M_{12}}}{\sqrt{2}}t)\ket{\downarrow 1 \uparrow} \\ \e{-iH_I t} \ket{\uparrow 1 \uparrow} &= -\frac{i}{\sqrt{2}}\sin(\sqrt{2}J_{\alpha M_{12}}t)\Big( \ket{\uparrow 2 \downarrow} + \ket{\downarrow 2 \uparrow} \Big) + \cos(\sqrt{2}J_{\alpha M_{12}}t)\ket{\uparrow 1 \uparrow}. \end{align} We wish to obtain a \mbox{\textsc{-cswap}}, so we require $ \abs{\matrixel{\downarrow 1 \uparrow}{\e{-iH_It}}{\uparrow 1 \downarrow}}^2 = 1 $, thus we must choose the operation time to be $T = \frac{\pi}{\sqrt{2}J_{\alpha M_{12}}} $. Inserting this into the equations above, we get \begin{align} \matrixel{\downarrow 1 \uparrow}{\e{-iH_IT}}{\uparrow 1 \downarrow} = \matrixel{\uparrow 1 \uparrow}{\e{-iH_IT}}{\uparrow 1 \uparrow} = -1 = - \matrixel{\downarrow 1 \downarrow}{\e{-iH_IT}}{\downarrow 1 \downarrow}. \end{align} This last matrix element has the wrong sign, so if we tried to perform a \mbox{\textsc{-cswap}} between the outer qubits in different superpositions of the `up' and `down' state, we would obtain an unfortunate relative sign change. This can be seen by assuming the left qubit starts in the superposition \begin{equation} \ket{\psi_\text{L}} = a\ket{\uparrow} + b \ket{\downarrow}, \end{equation} while the right qubit starts in the superposition \begin{equation} \ket{\psi_\text{R}} = c\ket{\uparrow} + d \ket{\downarrow}. \end{equation} Here, $a,b,c$ and $d$ are complex coefficients.
The total starting state of the system is then the factorizable state \begin{equation} \left( a\ket{\uparrow} + b\ket{\downarrow} \right)\ket{1}\left( c\ket{\uparrow} + d\ket{\downarrow} \right) = ac\ket{\uparrow1\uparrow} + ad\ket{\uparrow1\downarrow} + bc\ket{\downarrow1\uparrow} + bd\ket{\downarrow1\downarrow}. \end{equation} Operating on this state with the unitary operator $ \e{-iH_I T} $ now gives: \begin{equation} - ac\ket{\uparrow 1 \uparrow} - ad\ket{\downarrow1\uparrow} - bc\ket{\uparrow1\downarrow} + bd\ket{\downarrow 1 \downarrow}. \end{equation} This state is not factorizable because of the sign difference between the first term and the rest, and therefore does not have a simple interpretation as the system where a \textsc{swap} operation has been performed. This can be fixed by first operating with a \textsc{ccz} gate on the system so that only this specific state ($ \ket{\downarrow 1 \downarrow} $) obtains a sign change. This of course requires that the states are first moved out of resonance, but this is already required in order to catch the system in the swapped state. We then obtain the state \begin{equation} - ac\ket{\uparrow 1 \uparrow} - ad\ket{\downarrow 1 \uparrow} - bc\ket{\uparrow 1 \downarrow} - bd\ket{\downarrow 1 \downarrow} = -\left( c\ket{\uparrow} + d\ket{\downarrow} \right)\ket{1}\left( a\ket{\uparrow} + b\ket{\downarrow} \right). \end{equation} It is clear that the states of the outer qubits have been swapped, just as we wanted! We thus end up with a \textsc{cswap} gate in the interaction picture of $H_0$ with a two-step operation scheme and with the control ``bit'' being comprised of the states $\ket{0}$ and $\ket{1}$ of the qutrit (or, equivalently, the states $\ket{1}$ and $\ket{2}$ if we instead apply the \textsc{ccz} so only the state $\ket{\uparrow 1 \uparrow}$ receives a sign change). 
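The phase structure derived above can be confirmed with a short numerical check: exponentiating one irreducible $3\times3$ block at the operation time $T=\pi/(\sqrt{2}J_{\alpha M_{12}})$ should swap the outer components with an overall minus sign while leaving the middle component with a minus sign as well. A sketch with an arbitrary, illustrative value of $J_{\alpha M_{12}}$:

```python
import numpy as np

J = 1.0  # illustrative coupling J_aM12 (our choice)
H = J * np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)

T = np.pi / (np.sqrt(2) * J)  # operation time

# e^{-iHT} via the spectral decomposition of the Hermitian block
w, P = np.linalg.eigh(H)
U = P @ np.diag(np.exp(-1j * w * T)) @ P.conj().T

# Minus a perfect swap of the outer components
expected = -np.array([[0, 0, 1],
                      [0, 1, 0],
                      [1, 0, 0]])
assert np.allclose(U, expected)
```

The off-diagonal $-1$ entries reproduce the swapped matrix elements, and the diagonal $+1$ in the uncoupled $\ket{\downarrow 1 \downarrow}$ sector (outside this block) is the sign mismatch that the \textsc{ccz} step corrects.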
By symmetry, the topmost excited qutrit state $\ket{2}$ will also allow transfer, but here the state $\ket{\uparrow 2 \uparrow}$ will need a sign change. A simulation of the \textsc{cswap}/Fredkin gate is shown in the main text, where we get a fidelity of around $0.95$ for a perfect \textsc{swap} operation when the qutrit starts in the first excited state, as predicted by the analytical investigation above. \end{document}
\section{INTRODUCTION} \label{intro} Quantum correlations (QCs), existing between two or more parties \cite{horodecki2009quantum, modi2012classical}, are bestowed with properties unique to the quantum world and are of pivotal importance in quantum information science. The study of QCs not only unveils the fundamental traits responsible for the distinction of the quantum mechanically correlated systems from those attributed with a joint classical probability distribution \cite{BellCHSH}, it also helps in devising efficient ways of carrying out the tasks of quantum communication and computation \cite{gisin2002quantum, raussendorf2001one, briegel2009measurement}. Among the most celebrated notions in quantum physics are nonlocality \cite{bell1964einstein}, entanglement \cite{horodecki2009quantum}, quantum discord \cite{ollivier2001quantum,hendersonvedral}, and teleportation fidelity \cite{barrett2004deterministic, sbdistel}. These spatial quantum correlations (SQCs) have enhanced our understanding of nature at the fundamental level and at the same time have provided efficacious solutions in the development of the theory of quantum information. The SQCs mentioned above have been studied in many systems, viz., optical systems \cite{aspect1981experimental, tittel1998experimental, tittel1998violation, weihs1998violation, indranilsb,lanyon2013experimental, naikoo2017probing}, NMR \cite{ kessel2000quantum, dorai2001quantum, laflamme2001nmr}, neutrino oscillation \cite{blasone2009entanglement, alok2016quantum, banerjee2015quantum, Naikoo:2017fos}, $B$ and $K$ meson systems \cite{banerjee2016quantum}. Of the above listed SQCs, Bell nonlocality is the strongest and Bell inequalities are considered to be the oldest tool for detecting entanglement \cite{guhne2009entanglement}. 
The temporal quantum correlations (TQCs) arising from sequential measurements on a system at different times have also been considered as promising candidates for discerning the quantum behavior from the classical. Leggett-Garg inequalities (LGIs) \cite{leggett1985quantum} are among the well-known TQCs, violation of which is a witness of quantum \textit{coherence} in the system. LGIs have been a topic of study in various theoretical works \cite{barbieri2009multiple, avis2010leggett, lambert2010distinguishing, lambert2011macrorealism, montina2012dynamics, emary2013leggett, kofler2013condition} including, in recent times, neutrino oscillations \cite{formaggio2016violation, Naikoo:2017fos, Fu:2017hky}, and have been studied experimentally in systems like superconducting qubits \cite{palacios2010experimental, groen2013partial}, photons \cite{goggin2011violation, xu2011experimental, dressel2011experimental, suzuki2012violation}, and NMR \cite{athalye2011investigation, souza2011scattering, katiyar2013violation}. Leggett-Garg inequalities are based on the concepts of \textit{macrorealism} (MR) and \textit{noninvasive measurability} (NIM). MR means that a system which has two or more macroscopically distinct states available to it, pertaining to an observable $\hat{Q}$, always exists in one of these states irrespective of any measurement performed on it. NIM states that, in principle, we can perform the measurement without disturbing the future dynamics of the system \cite{emary2013leggett}. MR and NIM put limits on certain combinations of the two-time correlation functions $C_{ij} = \langle \hat{Q}(t_i) \hat{Q}(t_j) \rangle $. Quantum systems, however, violate these limits. The simplest form of LGI is the one involving three measurements performed at $t_0$, $t_1$ and $t_2$ ($t_2 > t_1 > t_0$) \begin{equation}\label{K3defined} K_3 = C_{01} + C_{12} - C_{02}, \end{equation} such that $-3 \le K_{3} \le 1$.
The maximum quantum value of $K_3$ for a two-level system is $\frac{3}{2}$ \cite{leggett1985quantum}; this bound has been found to hold for any system, \textit{irrespective} of the number of levels, as long as the measurements are given by just two projectors $\Pi^{\pm}$ \cite{budroni2013bounding}, a fact revealed in several studies \cite{george2013opening, kofler2007classical, lambert2011macrorealism, wilde2010could}. It was shown in \cite{budroni2014temporal} that in the limit $N \rightarrow \infty$, the LGI can be violated up to its algebraic maximum. The autocorrelation $C_{12}$ turns out to contain a nonmeasurable quantity and hence reduces the efficacy of Eq. (\ref{K3defined}) from the experimental point of view. Such limitations of the two time correlations have been discussed in detail in \cite{huelga1995proposed, huelga1996temporal, huelga1997observation, waldherr2011violation}, and a different approach was developed which involves replacing the NIM by a weaker condition called \textit{stationarity}. This avoids the need to perform the measurement at the intermediate time $t_1$ by replacing $C_{12}$ by $C_{01}$, thus leading to an easily testable Leggett-Garg type inequality (LGtI) \begin{equation}\label{tildeK3defined} \tilde{K}_3 = 2C_{01} - C_{02} \le 1. \end{equation} The following set of assumptions \cite{emary2013leggett} is considered important for applying \textit{stationarity} to a system: (i) macroscopic realism, (ii) the conditional probability $P(\psi, t+t_0| \psi, t_0)$ of finding the system in state $\psi$ at time $t+t_0$, given that it was in state $\psi$ at time $t_0$, should be invariant under the time translation, $P(\psi, t+t_0|\psi,t_0) = P(\psi,t|\psi,0)$, (iii) Markovianity, and (iv) that the system is prepared in state $\psi$ at time $t=0$. In this work, we study the LG and LG-type inequalities in the $B$ and $K$ meson systems. The effect of decoherence is included by using the formalism of open quantum systems.
Decoherence, here, is modelled by a single phenomenological parameter \cite{Alok:2015iua} which represents the interaction between the one-particle system and its environment. The environment can be attributed to quantum gravity effects \cite{qg1,qg2,qg3,qg4,qg5,qg6,qg7,qg8} or to the detector background itself. Apart from decoherence, we also include the effects of $CP$ violation. We find that the LG inequality is violated for both $B$ and $K$ meson systems. Apart from the experimentally measurable meson transition probabilities, we show that the LG function depends upon an additional term which vanishes in the limit of zero decoherence. The LG-type parameter, on the other hand, can be directly expressed in terms of transition probabilities. The plan of this work is as follows. In the next section, we discuss the time evolution of $B$ and $K$ meson systems treated as open quantum systems. In Sec. III, we derive the LG and LG-type inequalities for these systems. In Sec. IV, we present our results. Finally, in Sec. V, we present our conclusions. \section{$B$ and $K$ mesons as open quantum systems}\label{dynamics} In this section, we introduce our formalism for the study of $B$ and $K$ mesons as open quantum systems. \subsection{Kraus representation} The Kraus representation \cite{kraus1983states} describes the time evolution of an \textit{open} quantum system, which, unlike the evolution of a \textit{closed} quantum system, is not necessarily unitary. Real physical systems are always entangled with their ambient environment, also referred to as the reservoir. Kraus representations are very convenient for handling a number of practical problems of open system dynamics \cite{ breuer2002theory, weiss2012quantum, nielsen2000quantum, banerjee2010entanglement, banerjee2010dynamics, omkar2012operator}. Consider a large system $S$ comprising two subsystems $S_a$ and $S_b$.
At a given time $t$, let the quantum states corresponding to $S$, $S_a$ and $S_b$ be represented by $\rho(t)$, $\rho_a(t)$ and $\rho_b(t)$, respectively. Then $\rho_a(t) = Tr_b \{\rho(t) \}$ and $\rho_b(t) = Tr_a \{\rho(t) \}$. Since the total system is closed, its evolution is unitary and is given by \begin{equation} \rho(t) = U(t) \rho(0) U^{\dagger}(t), \end{equation} where $U(t)$ is a unitary operator. The evolution of subsystem $S_a$ is then \begin{equation} \rho_a(t) = Tr_b\{U(t)\rho(0) U^{\dagger}(t)\}. \label{rho_a} \end{equation} If it is possible to recast Eq. (\ref{rho_a}) in the following form \begin{equation} \rho_a(t) = \sum_{i} E_i(t) \rho_a(0) E_i^{\dagger}(t), \end{equation} such that $\sum_{i} E_i^{\dagger}(t) E_i(t) = \mathbb{1} $, then the evolution of $\rho_a(t)$ has a Kraus representation and is completely positive. \subsection{Time evolution of $B$/$K$ mesons} We briefly describe the time evolution of the $B^0$ ($K^0$) meson system. Since both $B^0$ and $K^0$ share the same scheme of dynamics, we discuss only the $B^0$ system; the results, with appropriate notational changes, apply to the $K^0$ system. The states of the total system, including the meson and the vacuum $\ket{0}$, introduced in order to incorporate the effect of decay in the meson system, reside in the Hilbert space given by the direct sum $\mathcal{H}_{B^0} \oplus \mathcal{H}_{0}$ \cite{caban2005unstable, Alok:2015iua, Alok:2013sca} spanned by the orthonormal vectors $\ket{B^0}$, $\ket{\bar{B}^0}$ and $\ket{0}$ \begin{equation} \ket{B^0} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}; \quad \ket{\bar{B}^0} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}; \quad \ket{0} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}. \label{basis} \end{equation} Here $B^0$ stands for $B^0_d/B^0_s$ mesons.
The mass eigenstates $\{\ket{B_L}, \ket{B_H} \}$ are related to the flavor eigenstates $\{ \ket{B^0}, \ket{\bar{B}^0}\}$ by the equations \begin{equation}\label{MassFlavorStates} \ket{B_L} = p \ket{B^0} + q \ket{\bar{B}^0}, \qquad \ket{B_H} = p \ket{B^0} - q \ket{\bar{B}^0}, \end{equation} with $|p|^2 + |q|^2 = 1$. The time evolution is given by a family of completely positive trace-preserving maps forming a one-parameter dynamical semigroup. Complete positivity requires that the time evolution of a state of the system be represented by the operator-sum representation \cite{kraus1983states} \begin{equation} \rho(t) = \sum_{i=0}^{5} E_{i}(t) \rho(0) E^{\dagger}_{i}(t), \label{operator_sum_rep} \end{equation} where the Kraus operators have the following form \begin{align*} E_0 &= \ket{0}\bra{0},\\ E_1 &= \mathcal{E}_{1+}\big( \ket{B^0}\bra{B^0} + \ket{\bar{B}^0}\bra{\bar{B}^0}\big) + \mathcal{E}_{1-}\big( \frac{p}{q}\ket{B^0}\bra{\bar{B}^0} + \frac{q}{p}\ket{\bar{B}^0}\bra{B^0} \big),\\ E_2 &= \mathcal{E}_2 \big( \frac{p+q}{2p} \ket{0}\bra{B^0} + \frac{p+q}{2q} \ket{0}\bra{\bar{B}^0} \big), \\ E_3 &= \mathcal{E}_{3+} \frac{p+q}{2p} \ket{0}\bra{B^0} + \mathcal{E}_{3-} \frac{p+q}{2q} \ket{0}\bra{\bar{B}^0} , \\ E_4 &= \mathcal{E}_4 \big( \ket{B^0}\bra{B^0} + \ket{\bar{B}^0}\bra{\bar{B}^0} + \frac{p}{q}\ket{B^0}\bra{\bar{B}^0} + \frac{q}{p}\ket{\bar{B}^0}\bra{B^0} \big),\\ E_5 &= \mathcal{E}_5 \big( \ket{B^0}\bra{B^0} + \ket{\bar{B}^0}\bra{\bar{B}^0} - \frac{p}{q}\ket{B^0}\bra{\bar{B}^0} - \frac{q}{p}\ket{\bar{B}^0}\bra{B^0} \big).
\end{align*} Here the coefficients are \begin{widetext} \begin{subequations} \begin{align} \mathcal{E}_{1\pm} &= \frac{1}{2} \left[ e^{-(2 i m_L + \Gamma_L + \lambda) t/2} \pm e^{-(2 i m_H + \Gamma_H + \lambda) t/2} \right], \label{E1}\\ \mathcal{E}_2 &= \sqrt{\frac{Re[\frac{p-q}{p+q}]}{|p|^2 - |q|^2} \big( 1 - e^{- \Gamma_L t} - (|p|^2 - |q|^2)^2 \frac{|1 - e^{-(\Gamma + \lambda - i \Delta m )t}|^2}{1 - e^{-\Gamma_H t}}\big)}, \label{E2} \\ \mathcal{E}_{3\pm} &= \sqrt{\frac{Re[\frac{p-q}{p+q}]}{(|p|^2 - |q|^2)(1 - e^{-\Gamma_H t})}}\big[1 - e^{-\Gamma_H t} \pm (1 - e^{-(\Gamma + \lambda - i \Delta m)t})(|p|^2 - |q|^2)\big], \label{E3}\\ \mathcal{E}_{4} &= \frac{e^{-\Gamma_L t/2}}{2} \sqrt{1 - e^{-\lambda t}},\label{E4}\\ \mathcal{E}_{5} &= \frac{e^{-\Gamma_H t/2}}{2} \sqrt{1 - e^{-\lambda t}}. \label{E5} \end{align} \end{subequations} A meson initially in state $\rho_{B^0}(0) = \ket{B^0}\bra{B^0}$ or $\rho_{\bar{B}^0}(0) = \ket{\bar{B}^0}\bra{\bar{B}^0}$, after time $t$, evolves to \begin{equation} \rho_{B^0}(t) = \frac{1}{2}e^{-\Gamma t} \begin{pmatrix} a_{ch} + e^{-\lambda t} a_{c} & (\frac{q}{p})^* (-a_{sh} - i e^{- \lambda t} a_s) & 0 \\ (\frac{q}{p}) (-a_{sh} + i e^{-\lambda t} a_s) & |\frac{q}{p}|^2 (a_{ch} - e^{-\lambda t} a_{c}) & 0 \\ 0 & 0 & \rho_{33}(t) \end{pmatrix} \label{rhoBt}, \end{equation} and \begin{equation} \rho_{\bar{B}^0}(t) = \frac{1}{2} e^{-\Gamma t} \begin{pmatrix} |\frac{p}{q}|^2 (a_{ch} - e^{- \lambda t} a_c) & (\frac{p}{q}) (-a_{sh} + i e^{- \lambda t} a_{s}) & 0 \\ (\frac{p}{q})^* (-a_{sh} -i e^{- \lambda t} a_{s}) & a_{ch} + e^{- \lambda t} a_c & 0 \\ 0 & 0 & \tilde{\rho}_{33}(t) \\ \end{pmatrix} \label{rhoBbart}. \end{equation} \end{widetext} Here, $a_{ch}$ ($a_{sh}$) and $a_{c}$ ($a_{s}$) stand for the hyperbolic functions $\cosh{[\frac{\Delta \Gamma t}{2}]}$ ($\sinh{[\frac{\Delta \Gamma t}{2}]}$) and the trigonometric functions $\cos{[\Delta m t]}$ ($\sin{[\Delta m t]}$), respectively. $p$ and $q$ are defined in Eq.
(\ref{MassFlavorStates}). $\Delta\Gamma = \Gamma_L - \Gamma_H$ is the difference of the decay widths $\Gamma_L$ (for $B^0_L$) and $\Gamma_H$ (for $B^0_H$). $\Gamma = \frac{1}{2}(\Gamma_L + \Gamma_H)$ is the average decay width. The mass difference $\Delta m = m_H - m_L$, where $m_H$ and $m_L$ are the masses of the $B^0_H$ and $B^0_L$ states, respectively. The strength of the interaction between the one-particle system and its environment is quantified by $\lambda$, the \textit{decoherence} parameter \cite{ABUDecho}. The elements $\rho_{33}(t)$ and $\tilde{\rho}_{33}(t)$ are known functions of $B$-physics parameters and are not used in this work. In the following section, we use this formalism to develop the LGI and LGtI for the meson systems. \section{Temporal quantum correlations in B/K systems}\label{LGI_for_BKmesons} \subsection{Leggett-Garg inequality} Leggett-Garg inequalities, often referred to as the temporal Bell inequalities, place bounds on certain combinations of the two-time correlations $C_{ij}$, defined in terms of the joint probabilities as \cite{leggett1985quantum, kofler2008conditions, castillo2013enhanced} \begin{equation} C_{ij} = p(^{+}t_i)q(^{+}t_j|^{+}t_i) - p(^{+}t_i)q(^{-}t_j|^{+}t_i) - p(^{-}t_i)q(^{+}t_j|^{-}t_i) + p(^{-}t_i)q(^{-}t_j|^{-}t_i) \label{Cij_pq}, \end{equation} where $p(^{a}t_i)$ is the probability of obtaining the result $a = \pm1$ at $t_i$, and $q(^{b}t_j|^{a}t_i)$ is the conditional probability of getting result $b=\pm 1$ at time $t_j$, given that result $a = \pm 1$ was obtained at $t_i$. To find the probabilities involved in Eq. (\ref{Cij_pq}), we define the projectors $\Pi^{\pm}$ onto the eigenspaces of the dichotomic operator $\hat{Q}$, such that the probability of obtaining outcome $a$ at time $t_i$ is \begin{equation} p(^{a}t_i) = Tr\{\Pi^a \rho(t_i)\} = Tr\{\Pi^a \sum_\mu K_\mu(t_i) \rho(0) K^{\dagger}_\mu(t_i) \}.
\end{equation} The density matrix corresponding to the measurement result $a$ obtained at `$t_i$' is given by the von Neumann rule \begin{equation} \rho^a(t_i) = \frac{\Pi^a \rho(t_i) \Pi^a}{Tr\{\Pi^a \rho(t_i)\}} = \frac{\Pi^a \sum_\mu K_\mu(t_i) \rho(0) K^{\dagger}_\mu (t_i) \Pi^a}{p(^at_i)}. \end{equation} This state evolves until $t_j$, at which point the state of the system is $\sum_\nu K_\nu(t_j - t_i) \rho^a(t_i) K^{\dagger}_\nu(t_j - t_i)$, so that the probability of obtaining outcome $b$ at time $t_j$, given that $a$ was obtained at time $t_i$, is given by \begin{equation} q(^bt_j|^at_i) = \frac{Tr\{\Pi^b \sum_{\nu, \mu} K_\nu(t_j - t_i) \Pi^a K_\mu(t_i) \rho(0) K^{\dagger}_\mu (t_i) \Pi^a K^{\dagger}_\nu(t_j - t_i)\}}{p(^at_i)}. \end{equation} A generic term on the right-hand side of Eq. (\ref{Cij_pq}) becomes \begin{equation} p(^{a}t_i)q(^{b}t_j|^{a}t_i) = Tr\{\Pi^b \sum_{\nu, \mu} K_{\nu}(t_j - t_i) \Pi^a K_{\mu} (t_i) \rho(0) K_{\mu}^{\dagger}(t_i) \Pi^{a} K^{\dagger}_{\nu}(t_j - t_i)\}. \label{generic_term} \end{equation} With some algebra, we can show that the two-time correlations turn out to be \cite{castillo2013enhanced} \begin{equation} C_{ij} = 1 - 2p(^+t_i) - 2p(^+t_j) + 4Re[g(t_i, t_j)], \end{equation} where \begin{equation} g(t_i,t_j) = Tr\Big\{\Pi^+ \sum_{\nu} K_\nu(t_j - t_i) \Pi^+ \rho(t_i) K^{\dagger}_\nu(t_j - t_i)\Big\}. \end{equation} We consider a dichotomic quantity $Q=\pm1$ for our \textit{three}-level system, such that each level is associated with a definite value of $Q$. Assigning the same value of $Q$ to different states is irrelevant from the macrorealistic point of view and does not change the bounds of Eq. (\ref{K3defined}) \cite{budroni2014temporal}. Let us assume that at time $t=0$, the meson was in state $\rho_{\bar{B}^0}$. This state evolves to $\rho_{\bar{B}^0}(t_i)$ at time $t_i$ and is given by Eq. (\ref{rhoBbart}).
We define the dichotomic operator $\Pi = \Pi^+ - \Pi^- = \Pi_{B^0} - (\Pi_{\bar{B}^0} + \Pi_{0})$, where $\Pi_x = \ket{x}\bra{x}$. Now \begin{equation} p(^+t_i) = Tr\{\Pi^+ \rho_{\bar{B}^0}(t_i)\} = [\rho_{\bar{B}^0}(t_i)]_{11} = |p/q|^2 \frac{e^{-\Gamma t_i}}{2} \bigg[ \cosh(\frac{\Delta \Gamma t_i}{2}) - e^{-\lambda t_i} \cos(\Delta m t_i)\bigg] \label{Transition}. \end{equation} Thus, $p(^+t_i) = \mathcal{P}_{\bar{B}^0 B^0}(t_i)$ is the transition probability from state $\rho_{\bar{B}^0}$ to $\rho_{B^0}$ at time $t_i$. With the assumption of equally spaced measurements $t_2-t_1 = t_1 - 0 = \Delta t$, we have the following expression for $C_{12}$ \begin{equation} C_{12} = 1 - 4 \mathcal{P}_{\bar{B}^0 B^0}(\Delta t) + 4 Re[g(\Delta t)], \end{equation} with \begin{equation} g(\Delta t) = 2 \mathcal{P}_{\bar{B}^0 B^0}(\Delta t) \mathcal{P}_{\bar{B}^0 \bar{B}^0}(\Delta t) + |\frac{p}{q}|^2 \frac{e^{-2 \Gamma \Delta t}(e^{-2 \lambda \Delta t} - 1)}{4} . \end{equation} Here $\mathcal{P}_{\bar{B}^0 \bar{B}^0}(\Delta t)$ and $\mathcal{P}_{\bar{B}^0 B^0}(\Delta t)$ are the survival and transition probabilities, respectively, for the meson which started in state $\rho_{\bar{B}^0} = \ket{\bar{B}^0}\bra{\bar{B}^0}$ at time $t=0$. The survival probability of $\bar{B}^0$ has the following form: \begin{equation} \mathcal{P}_{\bar{B}^0 \bar{B}^0}(t) = \frac{e^{-\Gamma t}}{2} \bigg[ \cosh(\frac{\Delta \Gamma t}{2}) + e^{-\lambda t} \cos(\Delta m t)\bigg] \label{Survival}. \end{equation} The LG function finally becomes \begin{equation} K_3 = 1 - 4 \mathit{P}_{\bar{B}^0 B^0}( \Delta t) + 8 \mathit{P}_{\bar{B}^0 B^0}( \Delta t) \mathit{P}_{\bar{B}^0 \bar{B}^0}( \Delta t) + |p/q|^2 e^{-2 \Gamma \Delta t} \big( e^{-2\lambda \Delta t} - 1 \big). \label{K3} \end{equation} $CP$ violation implies that $|p/q| \ne 1$. The formalism developed above also applies to the $K$ meson case with some notational changes.
The $CP$ violating parameter for $K$ mesons $\epsilon$ can be expressed in terms of $p$ and $q$ by the following relation $\epsilon = \frac{p-q}{p+q} \label{epsilon}$. \subsection{Leggett-Garg type inequality} The assumption of noninvasive measurability makes it difficult to test the Leggett-Garg inequality experimentally. Different measurement strategies, like negative outcome measurement, delayed choice measurement, and weak measurements \cite{tesche1990can, paz1993proposed, palacios2010experimental, goggin2011violation, fedrizzi2011hardy}, have been devised to this end. Another formalism, developed in \cite{huelga1995proposed, huelga1996temporal}, replaces the assumption of noninvasive measurability by ``stationarity'' and leads to easily testable inequalities using projective (von Neumann) measurements. According to the stationarity assumption, the conditional probability $q(t_i, t_j)$ of finding a system in state $j$ at time $t_j$, given that it was in state $i$ at time $t_i$, depends only on the time difference $t_j - t_i$. This is expected to hold not only for idealized closed quantum systems, but also for open quantum systems subjected to purely Markovian noise at a rate $\gamma$, such that the two-time correlations are exponentially damped by a factor $e^{-\gamma (t_2 - t_1)}$ \cite{waldherr2011violation}. The full set of assumptions (i)-(iv), for the stationarity to hold for a system, as given in Sec. (\ref{intro}), turns out to be applicable in the context of the $K$ and $B$ meson systems. Given that the state of the meson at time $t=0$ is $\ket{\bar{B}^0}$, it can be shown that the Markovian dynamics described by the Kraus operators in Sec. (\ref{dynamics}) lead to the time translation invariance of the conditional probability, i.e., $P(\bar{B}^0, t+t_0|\bar{B}^0,t_0) = P(\bar{B}^0,t|\bar{B}^0,0)$. With the assumption of stationarity, the Leggett-Garg type inequality, Eq.
(\ref{tildeK3defined}), becomes \begin{align} \tilde{K}_3 &= 1 - 4 \mathit{P}_{\bar{B}^0 B^0}( \Delta t) + 2 \mathit{P}_{\bar{B}^0 B^0}(2 \Delta t). \label{lgtype} \end{align} Therefore, a knowledge of the transition probabilities at times $\Delta t$ and $2\Delta t$ would allow one to compute $\tilde{K}_3$ according to Eq. (\ref{lgtype}), such that $\tilde{K}_3 > 1$ shows the nonclassical nature of the neutral meson oscillations. It should be noted that Eq. (\ref{lgtype}) is expressed completely in terms of directly measurable quantities such as transition probabilities, unlike Eq. (\ref{K3}), which contains a term ($|p/q|^2 e^{-2 \Gamma \Delta t} \big( e^{-2\lambda \Delta t} - 1 \big)$), apart from the survival and transition probabilities. However, it can be seen that in the limit of neglecting decoherence effects, Eq. (\ref{K3}) can also be expressed directly in terms of survival and transition probabilities. \begin{figure*}[ht] \centering \begin{tabular}{ccc} \includegraphics[width=60mm]{K_meson} & \includegraphics[width=60mm]{Bd_meson}& \includegraphics[width=60mm]{Bs_meson} \end{tabular} \caption{The left, middle and right panels of the figure depict the LG function $K_3$ plotted with respect to the dimensionless quantity $\Delta t/\uptau$ for the $K$, $B_d$ and $B_s$ mesons, respectively. Here $\Delta t$ is the time between successive measurements and $\uptau$ is the mean lifetime of the respective meson. Dashed and solid curves correspond to the cases with and without decoherence, respectively. For the $K$ system, the mean lifetime is $ \uptau_K = 1.7889 \times 10^{-10} s $. Also, $\Gamma = 5.59 \times 10^{9}~ {\rm s^{-1}}$, $ \Delta \Gamma = 1.1174 \times 10^{10}~{\rm s^{-1}}$, $\lambda = 2.0 \times 10^{8}~ {\rm s^{-1}}$ and $\Delta m = 5.302\times 10^{9} ~ {\rm s^{-1}}$ \cite{Olive:2016xmw}. Here we used $ Re(\epsilon) = 1.596 \times 10^{-3} $ and $ |\epsilon| = 2.228 \times 10^{-3} $ \cite{d2006determination}.
For the $B_d$ system, $\uptau_{B_d} = 1.518 \times 10^{-12} s $, $ \Gamma = 6.58 \times 10^{11}~ {\rm s^{-1}}$, $\Delta \Gamma = 0$, $\lambda = 0.012 \times 10^{12} ~{\rm s^{-1}}$ and $\Delta m = 0.5064\times 10^{12} ~{\rm s^{-1}}$ \cite{Amhis:2016xyh}. The $CP$ violating parameter used here is $|\frac{q}{p}| = 1.010$ \cite{Amhis:2016xyh}. Finally, for the $B_s$ meson, $\uptau_{B_s} = 1.509 \times 10^{-12} s $, $ \Gamma = 0.6645 \times 10^{12}~ {\rm s^{-1}}$, $\Delta \Gamma = 0.086 \times 10^{12}~ {\rm s^{-1}}$, $\lambda = 0.012 \times 10^{12}~ {\rm s^{-1}}$ and $\Delta m = 17.757\times 10^{12}~ {\rm s^{-1}}$ \cite{Amhis:2016xyh}. The value of the $CP$ violating parameter here is $|\frac{q}{p}| = 1.003$ \cite{Amhis:2016xyh}. As we do not have any experimental bound on the decoherence parameter $\lambda$ for the $B_s$ system, we assume it to be the same as that of the $B_d$ system.} \label{LG-meson} \end{figure*} The experiments on the $B^0(K^0)$ meson systems involve determination of their flavor at the time of production or decay. This is done by analyzing the flavor-specific decays. For example, a $B^0_d$ meson can decay into a positron (or a $\mu^+$), a neutrino and a hadron with a branching ratio of $\sim 0.1$. This semileptonic decay is induced by the quark-level transition $\bar{b} \to \bar{c}\, l^+ \,\nu_l $, with $l=e,\,\mu$. On the other hand, the corresponding decay of a $\bar{B}^0_d$ meson results in an electron (or a $\mu^-$) in the final state. Thus, in general, the charge of the final state lepton is the same as the charge of the decaying quark. This is known as the $\Delta B =\Delta Q$ rule for the semileptonic decays of $B$ mesons and is assumed in most of the experimental analyses. Hence, the charge of the final state lepton in the semileptonic decays of a neutral meson usually determines the flavor of that meson at the time of decay. The process of determination of the initial flavor of a neutral meson is called tagging.
This is achieved by making use of the rule of associated production. The mesons are produced either by strong or electromagnetic interactions, and hence a quark is always produced in association with its anti-quark, as flavor is conserved in these interactions. Thus, if a quark $q$ is detected at one end of the detector, then the quark at the other end has to be $\bar{q}$. Now if a charged meson is produced in association with a neutral meson, then the decay of the charged meson determines the flavor of the neutral meson at production. This is so because the charged meson cannot oscillate. The survival and oscillation probabilities of the neutral meson can then be measured by identifying the charge of the lepton in its semileptonic decay. If two entangled neutral mesons are produced, as in the $e^+e^-$ colliders by the process $e^+e^- \to \Upsilon(4S) \to B^0_d \bar{B}^0_d$, then detecting the flavor-specific final state of one meson, say at time $t_1$, determines the flavor of that meson as well as of the other meson at that time. The oscillation probability of the tagged meson is then determined by identifying its final flavor-specific state. \section{Results and discussion} The left panel of Fig. (\ref{LG-meson}) shows the variation of the LG function $K_3$ as a function of the dimensionless quantity $\Delta t/\uptau_K$. It can be seen from the figure that the LG inequality is violated for $\Delta t$ up to about $\uptau_K$. The middle and right panels of Fig. (\ref{LG-meson}) depict the variation of the LG function for the $B_d$ and $B_s$ mesons, respectively. One can see that the violation in the $B_d$ meson system persists up to about $\Delta t = \uptau_{B_d}$, while for the $B_s$ meson system the violation persists only up to roughly $\Delta t \approx 0.5~ \uptau_{B_s}$.
The maximum violation of the LGI occurs around $\Delta t \approx 0.41 \uptau_{K} $, $\Delta t \approx 0.37 \uptau_{B_{d}}$ and $\Delta t \approx 0.037 \uptau_{B_{s}}$ for the $K$, $B_d$ and $B_s$ meson systems, respectively. The figures clearly bring out the point that right from the onset of the decay \cite{Alok:2013sca}, the meson systems violate the upper threshold value of $K_3 = 1$, indicative of quantum behavior, before quickly falling below one. The $K_3$ value for the $K$ meson remains above one for the longest time, while that for the $B_s$ meson does so for the shortest. The $B_s$ meson also exhibits a recurrence behavior. In order to have an understanding of this recurrence behavior, we re-write Eq. (\ref{K3}) as \begin{widetext} \begin{align} K_3 &= 1 + |p/q|^2\bigg[2 e^{-(\Gamma + \lambda) \Delta t} \cos(\Delta m \Delta t) - e^{-2(\Gamma + \lambda) \Delta t} \cos(2\Delta m \Delta t) \nonumber \\ &\quad - 2e^{-\Gamma \Delta t} \cosh(\Delta \Gamma \Delta t/2) + e^{-2\Gamma \Delta t} \cosh(\Delta \Gamma \Delta t)\bigg]. \label{K3_FullForm} \end{align} \end{widetext} One can then see that the oscillating behavior in the case of the $B_s$ meson system can be attributed to the mass difference $\Delta m$, Eq. (\ref{K3_FullForm}), which plays the role of a frequency and is more than 35 times the corresponding value for the $B_d$ meson system. From Eq. (\ref{lgtype}), we find that the LG-type inequality is in terms of the transition probabilities only. Fig. (\ref{K3minusK3}) shows the deviation of the LG-type function $\tilde{K}_3$, Eq. (\ref{lgtype}), from the LG function ($K_3$). It is clear from the figure that the deviation is very small. Thus, a study of the LG inequality in mesons, using $\tilde{K}_3$, Eq. (\ref{lgtype}), in terms of experimentally measurable quantities would be well justified. Eq. (\ref{lgtype}) demands the knowledge of the transition probabilities at $\Delta t$ and $2 \Delta t$, for example, ($0.5\uptau_{K}$, $\uptau_{K}$) for the $K$ meson system.
\begin{figure}[ht] \centering \begin{tabular}{cc} \includegraphics[width=70mm]{K3_minus_K3tilde_Kmeson} \end{tabular} \caption{ Plot of the difference of the LG function $K_3$ and the LG-type function $\tilde{K}_3$ for the $K$ meson system. The various parameters used are the same as in Fig. (\ref{LG-meson}).} \label{K3minusK3} \end{figure} Looking at the form of Eq. (\ref{K3}), it can be seen that the only nonmeasurable term in the equation is $|p/q|^2 e^{-2 \Gamma \Delta t} \big( e^{-2\lambda \Delta t} - 1 \big)$; we call this term $\mathcal{D}_B$ and $\mathcal{D}_K$ for the case of the $B$ meson and $K$ meson systems, respectively. In the limit of zero decoherence, $\lambda \rightarrow 0$, $\mathcal{D}_{B/K} \rightarrow 0$, expressing the LG function, Eq. (\ref{K3}), solely in terms of measurable survival and transition probabilities \begin{align} K_3(\lambda = 0) &= 1 - 4 \mathit{P}_{\bar{B}^0 B^0}( \Delta t) + 8 \mathit{P}_{\bar{B}^0 B^0}( \Delta t) \mathit{P}_{\bar{B}^0 \bar{B}^0}( \Delta t). \label{K3_lambda_zero} \end{align} The variation of $\mathcal{D}_B$ and $\mathcal{D}_K$ with $\Delta t/\uptau_{K/B_{d(s)}}$ is shown in Fig.~\ref{DKDM}. \begin{figure*}[h] \centering \begin{tabular}{ccc} \includegraphics[width=60mm]{D_K}& \includegraphics[width=60mm]{D_Bd}& \includegraphics[width=60mm]{D_Bs} \end{tabular} \caption{ The nonmeasurable term $ \mathcal{D}_K = |(1+\epsilon)/(1-\epsilon)|^2 e^{-2 \Gamma \Delta t} \big( e^{-2\lambda \Delta t} - 1 \big)$ for the $K$-meson system and $ \mathcal{D}_{Bd(s)} = |p/q|^2 e^{-2 \Gamma \Delta t} \big( e^{-2\lambda \Delta t} - 1 \big)$ for the $B_{d(s)}$-meson system, plotted against $\Delta t/\uptau_{K/B_{d(s)}}$. The various parameters used in the two cases are the same as mentioned in the caption of Fig.~(\ref{LG-meson}).} \label{DKDM} \end{figure*} It is obvious from the figure that these terms are small compared with the maximum value attained by the LG function $K_3$.
\section{Conclusion} In this work, we study the violation of LG and LG-type inequalities in $B$ and $K$ mesons within the framework of open quantum systems. It is found that the LGI is violated in both the $K$ and $B$ meson systems. This violation lasts for a longer time in the case of $K$ mesons as compared to that of $B$ mesons. In the case of the $B$ meson systems, the violation lasts longer for $B_d$ mesons as compared to the $B_s$ system. We show that the LG function $K_3$, apart from the measurable survival and transition probabilities, contains a nonmeasurable term which is small compared to the maximum value attained by it and vanishes in the approximation of zero decoherence. Since systems with no coherence do not violate the LGI, the effect of \textit{decoherence} should result in decreasing the extent of the violation, as observed in Fig. (\ref{LG-meson}). Further, it is highlighted in this work that the LG-type function, unlike the LG function, can be expressed completely in terms of experimentally measurable quantities. Hence, the LG-type inequality is seen to be more suitable for understanding the nature of temporal quantum correlations in meson systems.
\section{Introduction} This paper continues the study of the multipliers between two sub-Hardy Hilbert spaces. By the term ``multiplier'' between two Hilbert spaces $\mathscr{H}_{1}, \mathscr{H}_2$ of analytic functions on the open unit disk $\mathbb{D} = \{|z| < 1\}$ we mean the set $\{\varphi \in \mathscr{O}(\mathbb{D}): \forall f\in\mathscr{H}_1,\, \varphi f\in\mathscr{H}_2\}$, where $\mathscr{O}(\mathbb{D})$ denotes the analytic functions on $\mathbb{D}$. By `sub-Hardy Hilbert spaces' we mean the Hilbert spaces which can be contractively embedded into the Hardy space $H^2$ of the unit disk. Prominent examples of these types of spaces are the de Branges-Rovnyak spaces \cite{MR3617311, Sa}. The study of multipliers between two model spaces began with a paper of Crofoot \cite{Crofoot} and continued in \cite{FHR-Mult-Model}. This recent work was continued further in \cite{Camara-Part} for two Toeplitz kernels. Since model spaces are special examples of de Branges-Rovnyak spaces \cite{MR3617311, Sa}, it seems natural to expand this investigation to include the multipliers between two de Branges-Rovnyak spaces. The multipliers from a de Branges-Rovnyak space to {\em itself} were discussed in \cite{MR1098860, MR1254125, MR1614726}. In particular, B. Davis and J. McCarthy \cite{MR1098860} obtained a nice description, in terms of growth of Fourier coefficients, of functions $f\in H^\infty$ (the bounded analytic functions on $\mathbb D$) which multiply every de Branges--Rovnyak space into itself. This paper explores another type of sub-Hardy Hilbert space, closely connected to de Branges--Rovnyak spaces, namely the range space $\mathscr{M}(\overline{a}) = T_{\overline{a}} H^2$ of the co-analytic Toeplitz operator $T_{\overline{a}}$ on $H^2$ with symbol $\overline{a}$ where $a\in H^\infty$. In \cite{MR1254125} B. Lotto and D. 
Sarason obtained a characterization of the multipliers of $\mathscr{M}(\overline{a})$ into itself in terms of the boundedness of the product of two Hankel operators. This characterization is rather difficult to check, and one of the aims of this paper is to give a complete functional characterization in some particular situations. It was also observed by Sarason in \cite{MR847333} that any function analytic in a neighborhood of $\overline{\mathbb{D}}$, the closure of $\mathbb{D}$, is a multiplier of every $\mathscr{M}(\overline{a})$ into itself (see also \cite[Theorem 24.6]{MR3617311}). Since the constant functions belong to $\mathscr{M}(\overline{a})$, we see that every function analytic in a neighborhood of $\overline{\mathbb{D}}$ (in particular the polynomials) belongs to every $\mathscr{M}(\overline{a})$. Note that in \cite{MR3617311, MR847333}, it is assumed that $a$ is an outer function in the closed unit ball of $H^\infty$ that is non-extreme (meaning $\log(1-|a|)\in L^1(\mathbb T)$), but since for $a\in H^\infty$, the function $a_1=a/\lambda$ ($\lambda=2\|a\|_{\infty}$) is non-extreme and $\mathscr{M}(\overline{a}) = \mathscr{M}(\overline{a_1})$ (see Proposition \ref{ppppopOO}), the result of Sarason is true for every $a\in H^\infty$. In particular, the function $\varphi(z)=z$ multiplies $\mathscr{M}(\overline{a})$ into itself, which means that the shift operator acts boundedly on $\mathscr{M}(\overline{a})$ for every $a\in H^\infty$. To state our results, we set some basic terminology that will be discussed in more detail in the next section. In \cite{FHR-Ma} we described the range space $\mathscr{M}(\overline{a})$ for {\em rational} $a \in H^{\infty}$. 
Here one can show (see Proposition \ref{93848bbczvcvgvgvgvgv} below) that if $\zeta_1, \ldots, \zeta_N$ are the zeros of $a$ on the unit circle $\mathbb{T}$, repeated according to their multiplicity, and the polynomial $\check{a}$ is defined by \begin{equation}\label{classA} \check{a}(z) = \prod_{j = 1}^{N} (z - \zeta_j), \quad \zeta_j \in \mathbb{T}, \end{equation} then $$\mathscr{M}(\overline{a}) = \mathscr{M}(\overline{\check{a}}).$$ In \cite[Cor.~6.21]{FHR-Ma} we identified $\mathscr{M}(\overline{\check{a}})$ as \begin{equation}\label{4938oryehdfgfe} \mathscr{M}(\overline{\check{a}}) = \check{a} H^2 \dotplus \mathscr{P}_{N - 1}, \end{equation} where $\mathscr{P}_{N - 1}$ denotes the polynomials of degree at most $N - 1$ and $\dotplus$ denotes the algebraic direct sum (not necessarily an orthogonal sum). We denote the polynomials of the form \eqref{classA} by $\mathscr{A}$. For $a_1, a_2 \in H^\infty$, let $$\mathfrak{M}(\overline{a_1}, \overline{a_2}) := \{\varphi \in \mathscr{O}(\mathbb{D}): \forall f\in \mathscr{M}(\overline{a_1}),\, \varphi f\in \mathscr{M}(\overline{a_2})\}$$ denote the set of {\em multipliers} from $\mathscr{M}(\overline{a_1})$ to $\mathscr{M}(\overline{a_2})$. When $a_1 = a_2 = a$, we set $$\mathfrak{M}(\overline{a}) := \mathfrak{M}(\overline{a}, \overline{a})$$ to be the multipliers from $\mathscr{M}(\overline{a})$ to itself. From standard theory of reproducing kernel Hilbert spaces of analytic functions on $\mathbb D$, one can show that multipliers (from a space to itself) must be a subset of $H^\infty$. Moreover, since the constant functions belong to $\mathscr{M}(\overline{a})$, the multiplier space $\mathfrak{M}(\overline{a})$ is always contained in $\mathscr{M}(\overline{a}) \cap H^{\infty}$. In Proposition \ref{8sd2lsewh} we prove that when $a \in \mathscr{A}$, the two sets coincide, that is \begin{equation}\label{nnnnnnncc} \mathfrak{M}(\overline{a}) = \mathscr{M}(\overline{a}) \cap H^{\infty}. 
\end{equation} Since $\mathfrak{M}(\overline{a})$ is an algebra, \eqref{nnnnnnncc} shows that, at least for $a \in \mathscr{A}$, the set $\mathscr{M}(\overline{a}) \cap H^{\infty}$ is an algebra. It is worth mentioning here that, for general $a \in H^{\infty}$, Lotto and Sarason \cite{MR1614726} proved that $\mathscr{M}(\overline{a}) \cap H^{\infty}$ is not always an algebra, so equality in \eqref{nnnnnnncc} can fail unless additional conditions are imposed on $a$ (see Remark \ref{q98yrgeouiwergfh}). Two of the main theorems of this paper are complete descriptions of $\mathfrak{M}(\overline{a_1}, \overline{a_2})$ for certain $a_1, a_2 \in \mathscr{A}$. For instance, when $a_1/a_2 \in H^{\infty}$, in other words, when the zero set of $a_2$ is contained in the zero set of $a_1$ (counting multiplicity), then $\mathscr{M}(\overline{a_1}) \subset \mathscr{M}(\overline{a_2})$ (Proposition \ref{ppppopOO}) and we have the following: \begin{Theorem}\label{ydayd1818347} Suppose that $a_1, a_2 \in \mathscr{A}$ and $h = a_1/a_2 \in H^{\infty}$. Then $$\mathfrak{M}(\overline{a_1}, \overline{a_2}) = \{\varphi \in \mathscr{M}(\overline{a_2}): h \varphi \in H^{\infty}\}.$$ \end{Theorem} \begin{comment}Indeed, we show that if $a_1/a_2 \in H^{\infty}$ then $\mathscr{M}(\overline{a_1}) \subset \mathscr{M}(\overline{a_2})$ and $$\mathfrak{M}(\overline{a_1}, \overline{a_2}) = \left\{\varphi \in \mathscr{M}(\overline{a_2}): \frac{a_1}{a_2} \varphi \in H^{\infty}\right\}.$$ \end{comment} When the division is reversed, i.e., $a_2/a_1 \in H^{\infty}$, in other words, when the zero set of $a_1$ is contained in the zero set of $a_2$ (counting multiplicity), then $\mathscr{M}(\overline{a_2}) \subset \mathscr{M}(\overline{a_1})$ and we have the following: \begin{Theorem}\label{yyysatta6666} Suppose $a_1, a_2 \in \mathscr{A}$ with $k:= a_2/a_1 \in H^{\infty}$. 
Then $$\mathfrak{M}(\overline{a_1}, \overline{a_2}) = k (\mathscr{M}(\overline{a_1}) \cap H^{\infty}).$$ \end{Theorem} When $a \in \mathscr{A}$ (and $\|a\|_{\infty} \leq 1$), we have $\mathscr{M}(\overline{a}) = \mathcal{H}(b)$, where $\mathcal{H}(b)$ is the de Branges-Rovnyak space corresponding to $b$ and $b$ is the {\em Pythagorean mate} for $a$: the unique outer function in $H^{\infty}$ such that $b(0)>0$ and $|a|^2 + |b|^2 = 1$ on $\mathbb{T}$. Since $a$ is a rational function (in fact a polynomial), $b$ will also be a rational function. So, our description of $\mathfrak{M}(\overline{a_1}, \overline{a_2})$ for certain $a_1, a_2 \in \mathscr{A}$ yields a description of the multipliers between the de Branges-Rovnyak spaces $\mathcal{H}(b_1)$ and $\mathcal{H}(b_2)$ for the corresponding Pythagorean mates $b_1$ and $b_2$. We have already noticed that the special function $\varphi(z) = z$ multiplies $\mathscr{M}(\overline{a})$ into itself for every $a \in H^{\infty}$. Our next result computes the norm of the multiplication operator $f \mapsto z f$ on $\mathscr{M}(\overline{a})$. \begin{Theorem}\label{10w74hs-} If $S f = z f$ is the unilateral shift on $H^2$, then, for any outer function $a\in H^{\infty}$, $S \mathscr{M}(\overline{a}) \subset \mathscr{M}(\overline{a})$ and the norm of $S_{\overline{a}} := S|_{\mathscr{M}(\overline{a})}$ is equal to $$ \frac{1}{|a(0)|} \left(\int_{0}^{2 \pi} |a(e^{i \theta})|^2 \frac{d \theta}{2 \pi}\right)^{1/2}.$$ \end{Theorem} Finally, we explore, as was done in other multiplier space and range space settings \cite{MR1098860, MR1065054}, which functions belong to {\em all} of the multiplier spaces $\mathfrak{M}(\overline{a})$. 
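The Pythagorean mate admits a quick numerical illustration. The following sketch is entirely our own (not part of the paper's development): it uses the standard outer-function recipe $b = \exp(u + i\widetilde{u})$ with $u = \log\sqrt{1-|a|^2}$, for the illustrative choice $a(z) = (z-1)/2$, a rescaled element of $\mathscr{A}$.

```python
import numpy as np

# Our own numerical sketch (not from the paper): compute the Pythagorean
# mate b of a(z) = (z - 1)/2, i.e. the outer function with b(0) > 0 and
# |a|^2 + |b|^2 = 1 a.e. on the unit circle T.
N = 1 << 14
theta = 2 * np.pi * (np.arange(N) + 0.5) / N   # offset grid: avoids the zero of b at theta = pi
z = np.exp(1j * theta)

a_vals = (z - 1) / 2
w = np.sqrt(1.0 - np.abs(a_vals) ** 2)         # prescribed boundary modulus: |b| = w on T

# Outer function with modulus w: b = exp(u + i*Hu), where u = log w and Hu is
# its harmonic conjugate.  Discretely: keep the non-negative frequencies of u,
# doubling the strictly positive ones.
u = np.log(w)
C = np.fft.fft(u)
S = np.zeros_like(C)
S[0] = C[0]
S[1:N // 2] = 2 * C[1:N // 2]
S[N // 2] = C[N // 2]                          # Nyquist coefficient (real for real u)
B = np.fft.ifft(S)                             # analytic completion: Re B = u on the grid
b_vals = np.exp(B)

b0 = float(np.exp(C[0] / N).real)              # b(0) = exp(mean of log w) > 0
max_err = float(np.max(np.abs(np.abs(a_vals) ** 2 + np.abs(b_vals) ** 2 - 1)))
print(b0, max_err)
```

On the grid, the real part of the reconstructed exponent equals $u$ exactly, so $|a|^2 + |b|^2 = 1$ holds to machine precision; the value $b(0)$ approximates $\exp\big(\frac{1}{2\pi}\int_0^{2\pi}\log|\cos(\theta/2)|\,d\theta\big) = \tfrac12$, a classical integral.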
\begin{Theorem}\label{Thm:multiplier-everyMabar} $$\bigcap_{a \in H^{\infty} \setminus \{0\}} \mathfrak{M}(\overline{a}) = \mathscr{F},$$ where $\mathscr{F}$ is the set of $\psi\in H^\infty$ whose Fourier coefficients satisfy $$ \widehat{\psi}(n)=O(e^{-c \sqrt{n}}), \quad n \geqslant 0,$$ for some $c>0$. \end{Theorem} Note that if $\widehat{\psi}(n)=O(e^{-cn})$ for some $c>0$, then $\psi$ is analytic on a neighborhood of $\overline{\mathbb D}$ (Hadamard's formula for the radius of convergence of a power series) and so, by the result of Sarason mentioned above, $\psi$ is a multiplier of every $\mathscr{M}(\overline{a})$. Thus Theorem~\ref{Thm:multiplier-everyMabar} is an improvement of this fact. \section{Basic facts about range spaces} For $a \in H^{\infty}$ and outer, the co-analytic Toeplitz operator $T_{\overline{a}}$ \cite{Bottcher} on the Hardy space $H^2$ \cite{Duren} is injective (since $a$ is outer, the analytic Toeplitz operator $T_{a} = T_{\overline{a}}^{*}$ has dense range). One can define the range space as $$\mathscr{M}(\overline{a}) := T_{\overline{a}} H^2$$ and, since $T_{\overline{a}}$ is injective, endow $\mathscr{M}(\overline{a})$ with the range norm $\|\cdot\|_{\overline{a}}$ defined by \begin{equation}\label{Tzznorm} \|T_{\overline{a}} f\|_{\overline{a}} := \|f\|_{H^2} = \left(\int_{0}^{2 \pi} |f(e^{i \theta})|^2 \frac{d \theta}{2 \pi}\right)^{1/2}. \end{equation} In the above, we are norming, in the standard way, $H^2$ functions via their radial $L^2 = L^2(d \theta/2 \pi)$ boundary values on $\mathbb{T}$ \cite[p.~21]{Duren}. 
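As a small numerical sanity check (ours, with the hypothetical choice $a(z) = 1 - z$, an outer function), the integral in \eqref{Tzznorm} can be computed either by quadrature on $\mathbb{T}$ or via Parseval's identity from the Taylor coefficients; for this $a$ the shift norm of Theorem \ref{10w74hs-} evaluates to $\sqrt{2}$.

```python
import numpy as np

# Our own sanity check: for the illustrative polynomial a(z) = 1 - z (an
# outer function), compute the H^2 norm of a by quadrature on T and by
# Parseval's identity, then evaluate the shift-norm formula.
coeffs = np.array([1.0, -1.0])                  # Taylor coefficients of a(z) = 1 - z
a0 = coeffs[0]                                  # a(0)

theta = 2 * np.pi * np.arange(4096) / 4096
z = np.exp(1j * theta)
a_vals = np.polyval(coeffs[::-1], z)            # evaluate a on the unit circle

quad_norm = float(np.sqrt(np.mean(np.abs(a_vals) ** 2)))      # (int |a|^2 dtheta/2pi)^{1/2}
parseval_norm = float(np.sqrt(np.sum(np.abs(coeffs) ** 2)))   # (sum |a_hat(n)|^2)^{1/2}
shift_norm = quad_norm / abs(a0)                # norm of S restricted to M(a-bar)
print(quad_norm, parseval_norm, shift_norm)     # here shift_norm = sqrt(2)
```

The agreement of the two norms is just Parseval's identity for the boundary values; the point is that all quantities in Theorem \ref{10w74hs-} are readily computable for polynomial $a$.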
One can show \cite[Corollary 3.4]{FHR-Ma} that $\mathscr{M}(\overline{a})$ is a reproducing kernel Hilbert space with kernel function \begin{equation}\label{1snsdrgnnzz9} k_{\lambda}^{\overline{a}} = T_{\overline{a}} (a k_{\lambda}),\qquad \lambda\in\mathbb D, \end{equation} where \begin{equation}\label{CK} k_{\lambda}(z) = \frac{1}{1 - \overline{\lambda} z},\qquad z\in\mathbb D, \end{equation} is the standard reproducing kernel (the Cauchy kernel) for $H^2$. By the term ``reproducing kernel'' for $\mathscr{M}(\overline{a})$, we mean $$\langle f, k_{\lambda}^{\overline{a}}\rangle_{\overline{a}} = f(\lambda), \quad f \in \mathscr{M}(\overline{a}), \lambda \in \mathbb{D},$$ where $\langle \cdot, \cdot\rangle_{\overline{a}}$ is the inner product arising from the Hilbert space norm in \eqref{Tzznorm}. From time to time we will need the corresponding inner product on $H^2$ which we will denote by $\langle \cdot, \cdot\rangle_{H^2}$. Observe that when $a\in\mathscr{A}$, \eqref{4938oryehdfgfe} implies that $aH^2\subset\mathscr{M}(\overline{a})$. In fact, this is true for any function $a\in H^\infty$ since $$T_{a}f=T_{\overline{a}}T_{a/\overline{a}}f, \quad f\in H^2.$$ Let us complete this preliminary section by showing how to reduce the problem of describing the multiplier space $\mathfrak{M}(\overline{a_1}, \overline{a_2})$, for rational $a_1, a_2 \in H^{\infty}$, to that of $\check{a_1}, \check{a_2} \in \mathscr{A}$ described in \eqref{classA}. Basic theory of Hardy spaces \cite[p.~24]{Duren} says that every $a \in H^{\infty}$ can be factored as $a = u a_0$, where $u \in H^{\infty}$ is inner and $a_0 \in H^{\infty}$ is outer. Using the Douglas factorization lemma \cite[p.~2]{Sa}, one can prove the following two results. \begin{Proposition}{\cite[Lemma 17.3]{MR3617311}}\label{yyyyfgggfggf} If $a \in H^{\infty}$ and $a_0$ is its outer factor, then $\mathscr{M}(\overline{a}) = \mathscr{M}(\overline{a_0})$. 
\end{Proposition} \begin{comment} For the sake of completeness of our discussion, we include a statement of the Douglas factorization lemma \cite[p.~2]{Sa}. \begin{Lemma}[Douglas factorization lemma]\label{DFL} For two bounded linear operators $A, B$ on a Hilbert space $\mathcal{H}$ the following are equivalent: \begin{enumerate} \item $A \mathcal{H} \subset B \mathcal{H}$; \item $A A^{*} \leq \lambda B B^{*}$ for some $\lambda > 0$; \item There exists an operator $C$ on $\mathcal{H}$ such that $A = B C$. \end{enumerate} \end{Lemma} \begin{proof}[Proof of Proposition \ref{yyyyfgggfggf}] Observe that since, by the definition of an inner function, the inner part of $a$ is unimodular on $\mathbb{T}$, we have $$T_{\overline{a}} T_{\overline{a}}^{*} = T_{\overline{a}} T_{a} = T_{|a|^2} = T_{|a_0|^2} = T_{\overline{a_0}} T_{a_0} = T_{\overline{a_0}} T_{\overline{a_0}}^{*}.$$ Notice the use of the well-known identity $T_{\varphi} T_{\psi} = T_{\varphi \psi}$ when $\varphi$ or $\psi$ belong to $H^{\infty}$ \cite{Bottcher}. By Lemma \ref{DFL}, $\mathscr{M}(\overline{a}) = T_{\overline{a}} H^2 = T_{\overline{a_0}} H^2 = \mathscr{M}(\overline{a_0})$. In fact $\mathscr{M}(\overline{a}) = \mathscr{M}(\overline{a_0})$ as Hilbert spaces (equal vector spaces with equal norms). \end{proof} \end{comment} \begin{Proposition}{\cite[Lemma 17.5]{MR3617311}}\label{ppppopOO} If $a_1, a_2 \in H^{\infty}$ and outer then $$\frac{a_1}{a_2} \in H^{\infty} \iff \mathscr{M}(\overline{a_1}) \subset \mathscr{M}(\overline{a_2}).$$ \end{Proposition} \begin{comment} \begin{proof} Suppose $a_1 = a_2 h$ for some $h \in H^{\infty}$. Then $T_{\overline{a_1}} = T_{\overline{a_2}} T_{\overline{h}}$ and so $$\mathscr{M}(\overline{a_1}) = T_{\overline{a_1}} H^2 = T_{\overline{a_2}} T_{\overline{h}} H^2 \subset T_{\overline{a_2}} H^2 = \mathscr{M}(\overline{a_2}).$$ Conversely suppose $T_{\overline{a_1}} H^2 = \mathscr{M}(\overline{a_1}) \subset \mathscr{M}(\overline{a_2}) = T_{\overline{a_2}} H^2$. 
By Lemma \ref{DFL}, $T_{\overline{a_1}} T_{a_1} \leq c T_{\overline{a_2}} T_{a_2}$ for some $c > 0$. If $k_{\lambda}$ is the Cauchy kernel from \eqref{CK}, the above says that $$\|a_1 k_{\lambda}\|^{2}_{H^2} = \langle T_{\overline{a_1}} T_{a_1} k_{\lambda}, k_{\lambda}\rangle_{H^2} \leq c \langle T_{\overline{a_2}} T_{a_2} k_{\lambda}, k_{\lambda}\rangle_{H^2} = c \|a_2 k_{\lambda}\|^{2}_{H^2}.$$ We can write the inequality above in integral form to obtain $$\int_{0}^{2 \pi} \frac{|a_1(e^{i \theta})|^2}{|1 - e^{-i\theta} \lambda|^2} \frac{d \theta}{2 \pi} \leq c \int_{0}^{2 \pi} \frac{|a_2(e^{i \theta})|^2}{|1 - e^{-i\theta} \lambda|^2} \frac{d \theta}{2 \pi}, \quad \lambda \in \mathbb{D}.$$ Multiplying both sides of the previous equation by $1 - |\lambda|^2$ we obtain the inequality $$\mathscr{P}(|a_1|^2)(\lambda) \leq c \mathscr{P}(|a_2|^2)(\lambda), \quad \lambda \in \mathbb{D},$$ where $\mathscr{P}(|a_j|^2)$ denotes the standard Poisson integral of $|a_j|^2$. Letting $|\lambda| \to 1$ we obtain, via some boundary properties of Poisson integrals \cite[p.~4]{Duren}, $|a_1(\zeta)|^2 \leq |a_2(\zeta)|^2$ for $m$-a.e. $\zeta \in \mathbb{T}$. This says that the outer function $a_1/a_2$ is bounded on $\mathbb{T}$ and thus must be bounded on $\mathbb{D}$ (Smirnov's theorem \cite[p.~28]{Duren}), i.e., $a_1/a_2 \in H^{\infty}$. \end{proof} \end{comment} Our final reduction from rational $H^{\infty}$ functions to the class $\mathscr{A}$ comes from applying the previous two propositions. We leave the details to the reader. \begin{Proposition}\label{93848bbczvcvgvgvgvgv} Suppose $a \in H^{\infty}$ and rational and let $\check{a}$ be the polynomial defined by $$\check{a}(z) = \prod_{j = 1}^{N} (z - \zeta_j),$$ where $\zeta_1, \ldots, \zeta_N$ are the zeros of $a$ on $\mathbb{T}$, repeated according to multiplicity. Then $\mathscr{M}(\overline{a}) = \mathscr{M}(\overline{\check{a}})$. 
\end{Proposition} \begin{comment} \begin{proof} Since $a \in H^{\infty}$ is rational, its outer part $a_0$ also belongs to $H^{\infty}$ and is rational ($a_0$ is obtained from $a$ by dividing out the zeros of $a$ inside $\mathbb{D}$ by means of a finite Blaschke product -- which is a rational function). Proposition \ref{yyyyfgggfggf} says that $\mathscr{M}(\overline{a}) = \mathscr{M}(\overline{a_0})$. The outer, and rational, function $a_0$ may have some zeros or poles on the exterior disk $\{|z| > 1\}$. Divide out these zeros and poles to obtain a polynomial $a_1$ whose only zeros are on $\mathbb{T}$. Observe that both $a_0/a_1$ and $a_1/a_0$ belong to $H^{\infty}$ and thus two applications of Proposition \ref{ppppopOO} show that $\mathscr{M}(\overline{a_0}) = \mathscr{M}(\overline{a_1})$. \end{proof} \end{comment} \section{Multiplier spaces} As already noticed, from the general theory of reproducing kernel Hilbert spaces, a multiplier from such a space to {\em itself} must be a bounded function and thus $\mathfrak{M}(\overline{a}) \subset H^{\infty}$. Since the constant functions belong to $\mathscr{M}(\overline{a})$ we see that $\mathfrak{M}(\overline{a}) \subset H^{\infty} \cap \mathscr{M}(\overline{a})$. For functions $a\in \mathscr{A}$, we have equality. Note that this fact was already observed by Sarason \cite{MR847333} in the special case when $a(z)=(1-z)/2$ (see also \cite[Corollary 28.29]{MR3617311}). \begin{Proposition}\label{8sd2lsewh} Suppose $a \in \mathscr{A}$. Then $\mathfrak{M}(\overline{a}) = \mathscr{M}(\overline{a}) \cap H^{\infty}.$ \end{Proposition} \begin{proof} From the previous paragraph, we have the $\subset$ containment. Now suppose that $\varphi \in \mathscr{M}(\overline{a}) \cap H^{\infty}$. By \eqref{4938oryehdfgfe} $$\varphi = a \widetilde{\varphi} + p, \quad \widetilde{\varphi} \in H^2, p \in \mathscr{P}_{N - 1}, N = \mbox{deg}(a).$$ This implies that $a \widetilde{\varphi} = \varphi - p \in H^{\infty}$. 
If $f \in \mathscr{M}(\overline{a})$ then, again by \eqref{4938oryehdfgfe}, $$f = a \widetilde{f} + q, \quad \widetilde{f} \in H^2, q \in \mathscr{P}_{N - 1},$$ and so $$\varphi f = (a \widetilde{\varphi} + p) ( a \widetilde{f} + q) = a (a \widetilde{\varphi} \widetilde{f} + p \widetilde{f} + q \widetilde{\varphi}) + p q.$$ We have already shown that $a \widetilde{\varphi} \in H^{\infty}$ and thus $a \widetilde{\varphi} \widetilde{f} \in H^2$. Clearly the terms $p \widetilde{f}$ and $q \widetilde{\varphi}$ belong to $H^2$ and so, using \eqref{4938oryehdfgfe}, $$a (\widetilde{\varphi} \widetilde{f} + p \widetilde{f} + q \widetilde{\varphi}) \in a H^2 \subset \mathscr{M}(\overline{a}).$$ Since $\mathscr{M}(\overline{a})$ contains the polynomials, we have $p q \in \mathscr{M}(\overline{a})$ and thus $\varphi f \in \mathscr{M}(\overline{a})$. Hence $\varphi \in \mathfrak{M}(\overline{a})$ and the $\supseteq$ inclusion follows. \end{proof} \begin{Remark}\label{q98yrgeouiwergfh} \begin{enumerate} \item Proposition \ref{8sd2lsewh} says that for $a \in \mathscr{A}$, the set $\mathscr{M}(\overline{a})\cap H^\infty$ is an algebra (since it is equal to the multiplier algebra $\mathfrak{M}(\overline{a})$). For general $a \in H^{\infty}$ we do not always have $\mathfrak{M}(\overline{a}) = \mathscr{M}(\overline{a}) \cap H^{\infty}$ since there are $a \in H^{\infty}$ such that $\mathscr{M}(\overline{a})\cap H^{\infty}$ is {\em not} an algebra \cite{MR1614726}. \item For a general bounded outer function $a$ we have $H^{\infty} \subset \mathfrak{M}(\overline{a})$ if and only if the Toeplitz operator $T_{a/\bar a}$ is invertible \cite[Theorem 17.20]{MR3617311}. In this case we in fact have $\mathscr{M}(a)=\mathscr{M}(\overline{a})$ and $\mathfrak{M}(\overline{a})=H^\infty$. 
\item In \cite{MR1254125}, Lotto and Sarason obtained the following characterization of $\mathfrak{M}(\overline{a})$ for a general bounded outer function $a$: Let $\varphi\in \mathscr{M}(\overline{a})\cap H^\infty$ and let $\psi\in H^2$ be such that $\varphi=T_{\bar a}\psi$. Then the following are equivalent: (i) $\varphi \in \mathfrak{M}(\overline{a})$; (ii) the operator $H^*_{\bar\psi}H_{\bar a}$ is bounded on $H^2$ (here $H_{\overline{\psi}}$ and $H_{\overline{a}}$ are Hankel operators). This criterion is difficult to check, even when $a \in \mathscr{A}$, which makes Proposition \ref{8sd2lsewh} all the more useful. \end{enumerate} \end{Remark} The rest of this section contains the proofs of Theorem \ref{ydayd1818347} and Theorem \ref{yyysatta6666}. First observe that $\mathfrak{M}(\overline{a_1}, \overline{a_2})$ is never trivial. \begin{Proposition} For any $a_1, a_2 \in \mathscr{A}$, $a_2 H^{\infty} \subset \mathfrak{M}(\overline{a_1}, \overline{a_2}).$ \end{Proposition} \begin{proof} Let $\varphi \in H^{\infty}$. Then for any $f \in \mathscr{M}(\overline{a_1})$ we can use \eqref{4938oryehdfgfe} to see that $$f = a_1 \widetilde{f} + p, \quad \widetilde{f} \in H^2, p \in \mathscr{P}_{N_1 - 1}, N_1 = \mbox{deg}(a_1).$$ Thus $$(a_2 \varphi) f = a_2 \varphi (a_1 \widetilde{f} + p) = a_2 (a_1 \varphi \widetilde{f} + \varphi p) \in a_2 H^2 \subset \mathscr{M}(\overline{a_2}). \qedhere$$ \end{proof} \begin{proof}[Proof of Theorem \ref{ydayd1818347}] We first recall that $1 \in \mathscr{M}(\overline{a_1})$. This means that if $\varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$ then $\varphi \mathscr{M}(\overline{a_1}) \subset \mathscr{M}(\overline{a_2})$ and so $\varphi = \varphi \cdot 1 \in \mathscr{M}(\overline{a_2})$. By \eqref{4938oryehdfgfe}, \begin{equation}\label{cxncx8asmsd0dhyy} \varphi = a_2 \widetilde{\varphi} + p, \quad \widetilde{\varphi} \in H^2, p \in \mathscr{P}_{N_2 - 1}, N_2 = \mbox{deg}(a_2). 
\end{equation} With $h := a_1/a_2$, which we assume to belong to $H^{\infty}$, observe that $$a_1 \widetilde{\varphi} = \frac{\varphi - p}{a_2} a_1 = h (\varphi - p)$$ and so $$a_1 \widetilde{\varphi} \in H^{\infty} \iff h \varphi \in H^{\infty}.$$ Thus $$\{\varphi \in \mathscr{M}(\overline{a_2}): h \varphi \in H^{\infty}\} = \{\varphi \in \mathscr{M}(\overline{a_2}): \widetilde{\varphi} a_1 \in H^{\infty}\}.$$ To complete the proof, we will now prove that $$\mathfrak{M}(\overline{a_1}, \overline{a_2}) = \{\varphi \in \mathscr{M}(\overline{a_2}): \widetilde{\varphi} a_1 \in H^{\infty}\}.$$ ($\subset$): Let $\varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$ and recall from \eqref{cxncx8asmsd0dhyy} that $\varphi = a_2 \widetilde{\varphi} + p$. We will show that $a_1 \widetilde{\varphi} \in H^{\infty}$ by showing that $a_1 \widetilde{\varphi}$ is a multiplier from $H^2$ to itself. Indeed let $\widetilde{f} \in H^2$ and define $f = a_1 \widetilde{f}$. From \eqref{4938oryehdfgfe}, $f$ belongs to $\mathscr{M}(\overline{a_1})$. By our assumption that $\varphi$ is a multiplier, $\varphi f \in \mathscr{M}(\overline{a_2})$. Moreover, $$\varphi f = (a_2 \widetilde{\varphi} + p) (a_1 \widetilde{f}) = a_1 a_2 \widetilde{\varphi} \widetilde{f} + p a_1 \widetilde{f}.$$ For the second summand above, observe that $$p a_1 \widetilde{f} = p a_2 h \widetilde{f} \in a_2 H^2 \subset \mathscr{M}(\overline{a_2}).$$ Since $\varphi f \in \mathscr{M}(\overline{a_2})$ by assumption, it must be the case that the first summand, i.e., $a_1 a_2 \widetilde{\varphi} \widetilde{f}$ belongs to $\mathscr{M}(\overline{a_2})$ and so $$a_1 a_2 \widetilde{\varphi} \widetilde{f} = a_2 \widetilde{F} + R, \quad \widetilde{F} \in H^2, R \in \mathscr{P}_{N_2 - 1}.$$ This means that $$a_1 \widetilde{\varphi} \widetilde{f} = \widetilde{F} + \frac{R}{a_2}.$$ Clearly $\widetilde{F} \in H^2$ and $a_1 \widetilde{\varphi} \widetilde{f} \in H^1$. 
Thus, since $R/a_2 \in H^1$ and rational, it follows that $R/a_2 \in H^{\infty}$. In summary, $$a_1 \widetilde{\varphi} \widetilde{f} \in H^2, \quad \widetilde{f} \in H^2,$$ which makes $a_1 \widetilde{\varphi}$ a multiplier of $H^2$ and hence bounded. ($\supseteq$): Let $f \in \mathscr{M}(\overline{a_1})$. Then by \eqref{4938oryehdfgfe} $$f = a_1 \widetilde{f} + p, \quad \widetilde{f} \in H^2, p \in \mathscr{P}_{N_1 - 1}, N_1 = \mbox{deg}(a_1).$$ If $\varphi= a_2 \widetilde{\varphi} + q \in \mathscr{M}(\overline{a_2})$ with $a_1 \widetilde{\varphi} \in H^{\infty}$ then $$\varphi f = (a_2 \widetilde{\varphi} + q) (a_1 \widetilde{f} + p) = a_2 (a_1 \widetilde{\varphi} \widetilde{f} + \widetilde{\varphi} p + h \widetilde{f} q) + q p.$$ For the first summand above, observe that, by assumption $a_1 \widetilde{\varphi} \in H^{\infty}$, and also that $\widetilde{\varphi}p$ and $h \widetilde{f} q$ belong to $H^2$ and so the first summand belongs to $a_2 H^2 \subset \mathscr{M}(\overline{a_2})$. Since $\mathscr{M}(\overline{a_2})$ contains all the polynomials, $pq \in \mathscr{M}(\overline{a_2})$. Thus $\varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$. \end{proof} \begin{Corollary} For $a \in \mathscr{A}$, $\mathfrak{M}(\overline{a}, 1) = \{\varphi \in H^2: a \varphi \in H^{\infty}\}.$ \end{Corollary} Notice the above characterizes the multipliers from $\mathscr{M}(\overline{a})$ (the smaller space) to $H^2$ (the bigger space). From Proposition \ref{8sd2lsewh}, the multipliers from $\mathscr{M}(\overline{a})$ to {\em itself} must be bounded functions. However, as the following example shows, for {\em different} $a_1, a_2 \in \mathscr{A}$, it is possible for $\mathfrak{M}(\overline{a_1}, \overline{a_2})$ to contain unbounded functions. 
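The corollary can be checked numerically in a concrete case. The following sketch uses our own illustrative choices (not from the paper): $a(z) = 1 - z \in \mathscr{A}$ and $\varphi(z) = (1-z)^{-1/4}$. Then $\varphi \in H^2$ (its Taylor coefficients decay like $n^{-3/4}$), $a\varphi = (1-z)^{3/4} \in H^{\infty}$, yet $\varphi$ is unbounded, so $\mathfrak{M}(\overline{a}, 1)$ contains unbounded functions.

```python
import numpy as np

# Our own illustration of the corollary: a(z) = 1 - z and phi(z) = (1 - z)^{-1/4}.
# Taylor coefficients of (1 - z)^{-s}: c_0 = 1, c_{n+1} = c_n (n + s)/(n + 1),
# with c_n ~ n^{s-1}/Gamma(s); for s = 1/4 they are square-summable, so phi is in H^2.
s = 0.25
Nc = 200_000
c = np.empty(Nc)
c[0] = 1.0
n = np.arange(Nc - 1)
c[1:] = np.cumprod((n + s) / (n + 1))

total = float(np.sum(c ** 2))                 # partial sum of the squared H^2 norm of phi
tail_half = float(np.sum(c[Nc // 2:] ** 2))   # the tail is already tiny: the series converges

theta = np.linspace(1e-6, 2 * np.pi - 1e-6, 100_000)
z = np.exp(1j * theta)
sup_aphi = float(np.max(np.abs(1 - z) ** 0.75))   # |a phi| = |1 - z|^{3/4} <= 2^{3/4} on T

phi_near_1 = float((1 - 0.999999) ** -0.25)       # |phi(r)| blows up as r -> 1^-
print(total, tail_half, sup_aphi, phi_near_1)
```

So $\varphi$ passes the membership test of the corollary while being unbounded, in the same spirit as the example that follows.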
\begin{Example} If $a_1(z) = (1 + z) (1 - z)$ and $a_2(z) = (1 + z)$, then the unbounded function $$\varphi(z) = (1 + z)^{1/2 + \epsilon} (1 - z)^{-1/2 + \epsilon}, \quad \epsilon \in (0, \tfrac{1}{2}),$$ belongs to $\mathfrak{M}(\overline{a_1}, \overline{a_2})$. To see this, observe that $\varphi \in a_2 H^2 \subset \mathscr{M}(\overline{a_2})$ and $$\frac{a_1}{a_2} \varphi = (1 - z) \varphi = (1 + z)^{1/2 + \epsilon} (1 - z)^{1/2 + \epsilon} \in H^{\infty}.$$ Now apply Theorem \ref{ydayd1818347}. \begin{comment} To see this, let $f \in \mathscr{M}(\overline{a_1})$. Then $$f = (1 + z) (1 - z) u + p, \quad u \in H^2, p \in \mathscr{P}_{1}.$$ Then $$\varphi f = (1 + z) \left((1 + z)^{1/2 + \epsilon} (1 - z)^{1/2 + \epsilon} u + p (1 + z)^{-1/2 + \epsilon} (1 - z)^{-1/2 + \epsilon}\right).$$ Clearly $$(1 + z)^{1/2 + \epsilon} (1 - z)^{1/2 + \epsilon} u \in H^2.$$ Furthermore, $$p (1 + z)^{-1/2 + \epsilon} (1 - z)^{-1/2 + \epsilon}$$ is analytic on $\mathbb{D}$, belongs to the Smirnov class (since $p$ is a polynomial and $(1 + z)^{-1/2 + \epsilon} (1 - z)^{-1/2 + \epsilon}$ is outer) and has $L^2$ boundary values. Thus, by Smirnov's theorem, it belongs to $H^2$. Thus $\varphi f \in (1 + z) H^2 \subset \mathscr{M}(\overline{a_2})$ and thus $\varphi$ is an unbounded multiplier from $\mathscr{M}(\overline{a_1})$ to $\mathscr{M}(\overline{a_2})$. \end{comment} \end{Example} The proof of Theorem \ref{yyysatta6666} is more involved and needs a little bit more set-up. For $$a = \prod_{j = 1}^{N} (z - \zeta_j), \quad \zeta_j \in \mathbb{T}, $$ we have seen from \eqref{4938oryehdfgfe} the decomposition $$\mathscr{M}(\overline{a}) = a H^2 \dotplus \mathscr{P}_{N - 1}.$$ Since $\mathscr{P}_{N-1}$ is a finite dimensional space, it follows that $aH^2$ is a closed subspace of $\mathscr{M}(\overline{a})$ and standard functional analysis arguments (see for instance \cite[Theorem 5.16]{MR1157815}) show that the projection with range $aH^2$ and null space $\mathscr{P}_{N-1}$ is continuous and we have \begin{equation}\label{xxx22625qwrlkncv<<} \|a f\|_{\overline{a}} \asymp \|f\|_{H^2}, \quad f \in H^2. \end{equation} With this set-up we are now ready to prove Theorem \ref{yyysatta6666}. \begin{proof}[Proof of Theorem \ref{yyysatta6666}] By assumption, $k = a_2/a_1 \in H^{\infty}$. We first show that $k \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$. Indeed, if $f = a_1 \widetilde{f} + p \in \mathscr{M}(\overline{a_1})$, then $$k f = \frac{a_2}{a_1} (a_1 \widetilde{f} + p) = a_2 \widetilde{f} + p \frac{a_2}{a_1}.$$ The first term belongs to $a_2 H^2 \subset \mathscr{M}(\overline{a_2})$ while the second term is analytic in a neighborhood of $\overline{\mathbb{D}}$ (since it is a rational function in $H^2$) and hence belongs to $\mathscr{M}(\overline{a_2})$. Thus $k \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$. \begin{comment} Next we show that \begin{equation}\label{wewfjsdf77373} \mathfrak{M}(\overline{a_1}, \overline{a_2}) \subset \mathfrak{M}(\overline{a_1}, \overline{a_1}). \end{equation} Indeed, if $\varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$ then since $1 \in \mathscr{M}(\overline{a_1})$ we have $\varphi = \varphi \cdot 1 \in \mathscr{M}(\overline{a_2}) \subset \mathscr{M}(\overline{a_1})$ (Proposition \ref{ppppopOO}). Now we will show that $\varphi \in H^{\infty}$. Since $\varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$ we have the well-known identity $$M_{\varphi}^{*} k_{\lambda}^{\overline{a_2}} = \overline{\varphi(\lambda)} k_{\lambda}^{\overline{a_1}}, \quad \lambda \in \mathbb{D}.$$ From here we see that \begin{equation}\label{984543} |\varphi(\lambda)| \lesssim \frac{\|k_{\lambda}^{\overline{a_2}}\|_{\overline{a_2}}}{\|k_{\lambda}^{\overline{a_1}}\|_{\overline{a_1}}}. 
\end{equation} From \eqref{1snsdrgnnzz9} we have the identities $$k_{\lambda}^{\overline{a_j}} = T_{\overline{a_j}} (a_j k_{\lambda}), \quad k_{\lambda}(z) = \frac{1}{1 - \overline{\lambda} z}.$$ By the definition of the norm in $\mathscr{M}(\overline{a_j}) = T_{\overline{a_j}} H^2$ (the range norm), we have $$\|k_{\lambda}^{\overline{a_j}}\|_{\overline{a_j}}^{2} = \|T_{\overline{a_j}} (a_j k_{\lambda})\|_{H^2}^{2} = \int_{0}^{2 \pi} \frac{|a_j(e^{i \theta})|^2}{|e^{i \theta} - \lambda|^2} \frac{d \theta}{2 \pi}.$$ This, combined with \eqref{984543} shows that $$|\varphi(\lambda)|^2 \lesssim \frac{\mathscr{P}(|a_2|^2)(\lambda)}{\mathscr{P}(|a_1|^2)(\lambda)}, \quad \lambda \in \mathbb{D},$$ where $\mathscr{P}(|a_j|^2)$ is the Poisson integral of $|a_j|^2$. Taking radial limits almost everywhere on $\mathbb{T}$ and using standard facts about radial limits of Poisson integrals, we get $$|\varphi(\xi)|^2 \lesssim \frac{|a_2(\xi)|^2}{|a_1(\xi)|^2}$$ for almost every $\xi \in \mathbb{T}$. But since we are assuming that $k = a_2/a_1 \in H^{\infty}$ we conclude that the radial limit function for $\varphi$ is bounded on $\mathbb{T}$. Since $\varphi \in H^2$ we can use Smirnov's theorem \cite{Duren} to see that $\varphi \in H^{\infty}$. Now use Proposition \ref{8sd2lsewh} to get $\varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_1})$, which proves \eqref{wewfjsdf77373} as desired. \end{comment} Next observe, from the inclusion $\mathscr{M}(\overline{a_2}) \subset \mathscr{M}(\overline{a_1})$ that \begin{equation}\label{wewfjsdf77373} \mathfrak{M}(\overline{a_1}, \overline{a_2}) \subset \mathfrak{M}(\overline{a_1}). \end{equation} We are now ready to prove $$\mathfrak{M}(\overline{a_1}, \overline{a_2}) = k (\mathscr{M}(\overline{a_1}) \cap H^{\infty}).$$ ($\supseteq$): Let $\varphi \in \mathscr{M}(\overline{a_1}) \cap H^{\infty}$ and $f \in \mathscr{M}(\overline{a_1})$. 
By Proposition \ref{8sd2lsewh}, $\varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_1})$ and thus $\varphi f \in \mathscr{M}(\overline{a_1})$. We argued before that $k \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$ and so $k \varphi f \in \mathscr{M}(\overline{a_2})$. Hence $k \varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$. ($\subset$): Let $\varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$. Since $1 \in \mathscr{M}(\overline{a_1})$ we see that $\varphi \in \mathscr{M}(\overline{a_2})$ and hence \begin{equation}\label{oooiiiuwuwuuw} \varphi = a_2 \widetilde{\varphi} + p = k a_1 \widetilde{\varphi} + p, \quad \mbox{deg}(p) \leq N_2 - 1. \end{equation} Our first step is to show that $\varphi \in k \mathscr{M}(\overline{a_1})$ and to do this, we need to show that $p/k$ is a polynomial. To this end, let $h\in H^2$ and put $f=a_1h$. Then \begin{equation}\label{0009987} \varphi f = a_2 a_1 \widetilde{\varphi} h+ p a_1 h. \end{equation} However, $\varphi f \in \mathscr{M}(\overline{a_2})$ (since $\varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$ and $f\in a_1H^2\subset\mathscr{M}(\overline{a_1})$) and so \begin{equation}\label{886d6sd6dsf6} \varphi f = a_2 \widetilde{\varphi f} + t, \quad \mbox{deg} (t) \leq N_2 - 1 \end{equation} for some $\widetilde{\varphi f} \in H^2$. A calculation using \eqref{0009987} and \eqref{886d6sd6dsf6} yields $$ a_1 p h=a_2(\widetilde{\varphi f}-a_1\widetilde{\varphi}h)+t. $$ Let $g = \widetilde{\varphi f} - a_1 \widetilde{\varphi} h $ and observe that $g \in H^1$. Using the division algorithm for polynomials we get $$t = a_1 \gamma + \delta, \quad \mbox{deg}(\delta) \leq N_1 - 1.$$ This yields \begin{align*} a_1 h p = a_2 g + t = a_2 g + a_1 \gamma + \delta, \end{align*} that is $$ a_1hp=a_1(kg+\gamma)+\delta. 
$$ Now observe that \begin{align*} k g = k \widetilde{\varphi f} - a_1 k \widetilde{\varphi}h = k \widetilde{\varphi f} - a_2 \widetilde{\varphi}h = k \widetilde{\varphi f} - (\varphi - p) h \end{align*} \begin{comment} \begin{align*} kg & =ka_1\widetilde\varphi\widetilde f+ k\widetilde\varphi q\\ & =a_2\widetilde\varphi \widetilde f+k\widetilde\varphi q\\ & =(\varphi-p)\widetilde f+k\widetilde\varphi q \end{align*} \end{comment} which belongs to $H^2$ since $\varphi \in H^{\infty}$ (recall \eqref{wewfjsdf77373}). By the uniqueness of the representation \eqref{4938oryehdfgfe} and the equality $a_1hp=a_1(kg+\gamma)+\delta$, we conclude that $\delta = 0$ and $h p = k g + \gamma$. Finally recall that $g \in H^1$ and so $$|g(r \xi)| \lesssim \frac{1}{1 - r}.$$ Hence the function $k g + \gamma$ has a radial limit at each zero of $k$, along with its derivatives of order one less than the order of the zero of $k$. Thus the same must be true for $h p$. But $h$ was an arbitrary element of $H^2$. This means that $p$ must have a zero at every zero of $k$ of at least the multiplicity of the zero of $k$. Conclusion: $p/k$ is a polynomial $b$. From the above and the representation of $\varphi$ from \eqref{oooiiiuwuwuuw} we know that $$\varphi = k (a_1 \widetilde{\varphi} + b), \quad \mbox{deg}(b) \leq N_1 - 1.$$ Notice in the last step how we used that $\mbox{deg}(p) \leq N_2 - 1$ and $\mbox{deg}(k) = N_2 - N_1$. Note that $$\varphi_0 := a_1 \widetilde{\varphi} + b \in \mathscr{M}(\overline{a_1})$$ and so it remains to show that $\varphi_0 \in H^{\infty}$. Since $\varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$ we know that \begin{equation}\label{pppqq11455} \|\varphi(a_1 h)\|_{\overline{a_2}} \lesssim \|a_1 h\|_{\overline{a_1}}, \quad h \in H^{\infty}. 
\end{equation} However, $$\varphi a_1 h = k a_1 \varphi_0 h = a_2 h \varphi_0.$$ From \eqref{xxx22625qwrlkncv<<} we have $$\|a_1 h\|_{\overline{a_1}} \asymp \|h\|_{H^2}, \quad \|a_2 h \varphi_0\|_{\overline{a_2}} \asymp \|h \varphi_0\|_{H^2}.$$ Combining this with \eqref{pppqq11455} we get $$\|h \varphi_0\|_{H^2} \lesssim \|h\|_{H^2}, \quad h \in H^{\infty}.$$ This means that the operator $h \mapsto \varphi_0 h$, initially defined on $H^{\infty} \subset H^2$, extends to a bounded multiplication operator on $H^2$. It is well known that the multipliers of $H^2$ must be bounded and so $\varphi_0 \in H^{\infty}$ which completes the proof. \end{proof} \begin{Corollary} For $a \in \mathscr{A}$, $\mathfrak{M}(1, \overline{a}) = a H^{\infty}$. \end{Corollary} Notice how the above corollary characterizes the multipliers between $H^2$ (the bigger space) and $\mathscr{M}(\overline{a})$ (the smaller space). \section{Onto multipliers} Crofoot \cite{Crofoot} studied the onto multipliers between model spaces. Here we discuss the onto multipliers between $\mathscr{M}(\overline{a_1})$ and $\mathscr{M}(\overline{a_2})$, i.e., $\varphi \in \mathscr{O}(\mathbb{D})$ for which $\varphi \mathscr{M}(\overline{a_1}) = \mathscr{M}(\overline{a_2})$. \begin{Theorem}\label{ppsdfusd7sdfbbvxxz} Suppose $a_1, a_2 \in \mathscr{A}$ with $a_2/a_1 = h \in H^{\infty} \setminus \mathbb{C}$. Then there are no multipliers from $\mathscr{M}(\overline{a_1})$ onto $\mathscr{M}(\overline{a_2})$. \end{Theorem} \begin{proof} Suppose there is a $\varphi \in H^2$ with $\varphi \mathscr{M}(\overline{a_1}) = \mathscr{M}(\overline{a_2})$. By Theorem \ref{yyysatta6666} there is a $\psi \in H^{\infty}$ so that $\varphi = h \psi$. But since $1 \in \mathscr{M}(\overline{a_2})$ there is a $g \in \mathscr{M}(\overline{a_1})$ such that $1 = \varphi g$. Thus $1/\varphi \in H^2$ and hence $\psi/\varphi = 1/h \in H^2$. 
However, $1/h$ is a non-constant rational function with poles on $\mathbb{T}$ and thus cannot belong to $H^2$ -- which yields a contradiction. \end{proof} \begin{Corollary}\label{bsudfysdfiusyf} Suppose $a_1, a_2 \in \mathscr{A}$ with $a_1/a_2 \in H^{\infty} \setminus \mathbb{C}$. Then there are no multipliers from $\mathscr{M}(\overline{a_1})$ onto $\mathscr{M}(\overline{a_2})$. \end{Corollary} \begin{proof} Suppose there is a $\varphi \in H^2$ with $\varphi \mathscr{M}(\overline{a_1}) = \mathscr{M}(\overline{a_2})$. Then $\frac{1}{\varphi} \mathscr{M}(\overline{a_2}) = \mathscr{M}(\overline{a_1})$. Apply Theorem \ref{ppsdfusd7sdfbbvxxz} to obtain a contradiction. \end{proof} When $a_1 = a_2$, there are indeed plenty of onto multipliers. \begin{Proposition} If $a \in H^\infty$ and $\lambda \in \mathbb C$, $|\lambda|<\|a\|_{\infty}^{-1}$, then $$\frac{1}{1 - \overline{\lambda} a} \mathscr{M}(\overline{a}) = \mathscr{M}(\overline{a}).$$ \end{Proposition} \begin{proof} Fix $a \in H^\infty$ and $\lambda \in \mathbb{C}$ with $|\lambda|<\|a\|_{\infty}^{-1}$, and note that for any $f \in \mathscr{M}(\overline{a})$ we have the identity $$\frac{f}{1 - \overline{\lambda} a} = f + a \frac{\overline{\lambda} f}{1 - \overline{\lambda} a}$$ which belongs to $\mathscr{M}(\overline{a})$ since the first term belongs to $\mathscr{M}(\overline{a})$ and the second belongs to $a H^2$, which is contained in $\mathscr{M}(\overline{a})$. Thus $$\frac{1}{1 - \overline{\lambda} a} \in \mathfrak{M}(\overline{a}, \overline{a}).$$ On the other hand for $f \in \mathscr{M}(\overline{a})$, we have $$(1 - \overline{\lambda} a) f = f - a (\overline{\lambda} f) \in \mathscr{M}(\overline{a})$$ for similar reasons as before. Thus $(1 - \overline{\lambda} a) \in \mathfrak{M}(\overline{a}, \overline{a})$ which completes the proof. \end{proof} \begin{Question} Is there a tractable description of all of the onto multipliers from $\mathscr{M}(\overline{a})$ to itself?
\end{Question} \section{Intersections of multiplier spaces} In this section, we prove Theorem~\ref{Thm:multiplier-everyMabar}. Recall that $\mathscr{F}$ denotes the set of $\psi \in H^{\infty}$ whose Fourier coefficients $$\widehat{\psi}(n) = \int_{0}^{2 \pi} \psi(e^{i \theta}) e^{- i n \theta} \frac{d \theta}{2 \pi}$$ satisfy $$\widehat{\psi}(n) = O(e^{-c \sqrt{n}}), \quad n \geqslant 0,$$ for some $c > 0$, and let $$\mathscr{B} = \{\varphi \in H^{\infty} \setminus\{0\}: \|\varphi\|_{\infty} \leq 1, \log(1 - |\varphi|) \in L^1\}$$ denote the (non-zero) non-extreme points in the closed unit ball of $H^{\infty}$. Also recall that $$\mathfrak{M}(\mathscr{H}(b)) = \{\varphi \in \operatorname{Hol}(\mathbb{D}): \varphi \mathscr{H}(b) \subset \mathscr{H}(b)\}$$ is the set of multipliers of $\mathscr{H}(b)$ into itself. In \cite{MR1098860} it was shown that \begin{equation}\label{oosdpfisdf11} \bigcap_{b \in \mathscr{B}}\mathfrak{M}(\mathscr{H}(b) )= \mathscr{F} \end{equation} and in \cite{MR1065054} it was shown that \begin{equation}\label{pptttfffffzzz2} \bigcap_{\varphi \in H^{\infty}\setminus\{0\}} \mathscr{M}(\overline{\varphi}) = \mathscr{F}. \end{equation} \begin{proof}[Proof of Theorem \ref{Thm:multiplier-everyMabar}] From \cite[Thm.~20.17]{MR3617311} we know that if $b \in \mathscr{B}$ and $a$ is the so-called ``Pythagorean mate'' for $b$, meaning the unique $a \in \mathscr{B}$ with $a(0) > 0$ and with $|a|^2 + |b|^2 = 1$ almost everywhere on $\mathbb{T}$ (such mates exist by standard Hardy space theory \cite{Duren} and the assumption that $b$ is non-extreme), then \begin{equation}\label{11nsdve5rtyghjfb} \mathfrak{M}(\mathscr{H}(b)) \subset \mathfrak{M}(\overline{a}). \end{equation} Moreover, for any $a \in H^{\infty} \setminus \{0\}$ with $\|a\|_{\infty} \leq 1$ (not necessarily non-extreme) we have $$ \mathscr{M}(\overline{a}) = \mathscr{M}(\overline{a/2}) $$ and, more importantly, $a/2 \in \mathscr{B}$.
This all yields \begin{equation}\label{rye89wiofdp} \bigcap_{a \in H^{\infty} \setminus \{0\}} \mathscr{M}(\overline{a}) = \bigcap_{a \in \mathscr{B}} \mathscr{M}(\overline{a}). \end{equation} We also note that $\mathscr{M}(\overline{a})$ always contains the constant functions and thus \begin{equation}\label{7765432} \mathfrak{M}(\overline{a}) \subset \mathscr{M}(\overline{a}). \end{equation} Putting this all together we have \begin{align*} \mathscr{F} & = \bigcap_{b \in \mathscr{B}} \mathfrak{M}(\mathscr{H}(b)) && \mbox{(by \eqref{oosdpfisdf11})}\\ & \subset \bigcap_{a \in \mathscr{B}} \mathfrak{M}(\overline{a}) && \mbox{(by \eqref{11nsdve5rtyghjfb})}\\ & \subset \bigcap_{a \in \mathscr{B}} \mathscr{M}(\overline{a}) && \mbox{(by \eqref{7765432})}\\ & \subset \bigcap_{a \in H^{\infty} \setminus \{0\}} \mathscr{M}(\overline{a}) && \mbox{(by \eqref{rye89wiofdp})}\\ & = \mathscr{F} && \mbox{(by \eqref{pptttfffffzzz2})}. \end{align*} This completes the proof. \end{proof} \section{The commutant and the norm of the shift on $\mathscr{M}(\overline{a})$} We know that the identity function $\varphi(z) = z$ belongs to $\mathfrak{M}(\overline{a})$. This means that if $S f = z f$ is the standard unilateral shift on $H^2$, then the operator $$S_{\overline{a}} := S|_{\mathscr{M}(\overline{a})}$$ is a well defined bounded operator on $\mathscr{M}(\overline{a})$. The next result, which is quite standard for the shift operator on many Hilbert spaces of analytic functions, computes the commutant of $S_{\overline{a}}$.
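For orientation, we first note the simplest instance, which is classical: when $a \equiv 1$, the space $\mathscr{M}(\overline{a}) = T_{\overline{1}} H^2 = H^2$ carries its usual norm, $S_{\overline{a}} = S$, and the result below reduces to the well-known description of the commutant of the unilateral shift, $$\{S\}' = \{M_{\varphi}: \varphi \in H^{\infty}\}.$$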
If $\mathscr{B}(\mathscr{M}(\overline{a}))$ denotes the bounded operators on $\mathscr{M}(\overline{a})$, the commutant $\{S_{\overline{a}}\}'$ is defined to be $$\{S_{\overline{a}}\}' := \{A \in \mathscr{B}(\mathscr{M}(\overline{a})): A S_{\overline{a}} = S_{\overline{a}} A\}.$$ \begin{Proposition} For outer $a \in H^{\infty}$, $$\{S_{\overline{a}}\}' = \{M_{\varphi}: \varphi \in \mathfrak{M}(\overline{a})\},$$ where $M_{\varphi}$ is the multiplication operator $M_{\varphi} f = \varphi f$ on $\mathscr{M}(\overline{a})$. \end{Proposition} \begin{proof} Clearly we have $\supseteq$. To prove the other containment, let $A \in \mathscr{B}(\mathscr{M}(\overline{a}))$ with $A S_{\overline{a}} = S_{\overline{a}} A$. This implies that for any polynomial $p$ $$A (p(S_{\overline{a}}) 1) = p(S_{\overline{a}}) A(1),$$ equivalently, $A(p) = p A(1)$. Since the polynomials are dense in $\mathscr{M}(\overline{a})$ (see \cite[Theorem 17.4]{MR3617311}), for a given $f \in \mathscr{M}(\overline{a})$ we can find a sequence of polynomials $\{p_n\}_{n \geqslant 1}$ such that $p_{n} \to f$ in the norm of $\mathscr{M}(\overline{a})$. Since point evaluations on $\mathbb{D}$ are continuous in the norm of $\mathscr{M}(\overline{a})$ (indeed $\mathscr{M}(\overline{a})$ is a reproducing kernel Hilbert space), we see that $p_n \to f$ pointwise on $\mathbb{D}$. Since $A p_n \to A f$ in norm, and hence pointwise on $\mathbb{D}$, while $A p_n = p_n A(1) \to f A(1)$ pointwise on $\mathbb{D}$, we see that $A f = A(1) f$. Thus $A(1)$ is a multiplier of $\mathscr{M}(\overline{a})$ and $A = M_{A(1)}$. \end{proof} \begin{Remark} The above fact is quite standard for many spaces of analytic functions (see also \cite[Theorem 9.16]{FM} for a broader setting).
\end{Remark} Making adjustments to the above proof, we can show that $$\{A \in \mathscr{B}(\mathscr{M}(\overline{a_1}), \mathscr{M}(\overline{a_2})): A S_{\overline{a_1}} = S_{\overline{a_2}} A\} = \{M_{\varphi}: \varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_2})\}.$$ In the above, $M_{\varphi}: \mathscr{M}(\overline{a_1}) \to \mathscr{M}(\overline{a_2})$, $M_{\varphi} f = \varphi f$. One can also show that this set is (operator) norm closed. \begin{comment} \begin{proof} Suppose that $\{\varphi_{n}\}_{n \geqslant 1} \subset \mathfrak{M}(\overline{a_1}, \overline{a_2})$ is such that $\{M_{\varphi_n}\}_{n \geqslant 1}$ is a Cauchy sequence in $\mathscr{B}(\mathscr{M}(\overline{a_1}), \mathscr{M}(\overline{a_2}))$. Since $\mathscr{B}(\mathscr{M}(\overline{a_1}), \mathscr{M}(\overline{a_2}))$ is norm closed, there exists an $A \in \mathscr{B}(\mathscr{M}(\overline{a_1}), \mathscr{M}(\overline{a_2}))$ such that $M_{\varphi_n} \to A$ in norm (and thus $M_{\varphi_n}^{*} \to A^{*}$ in norm). 
If $$k_{\lambda}^{\overline{a_j}}, \quad \lambda \in \mathbb{D}, j = 1, 2,$$ denote the reproducing kernels for $\mathscr{M}(\overline{a_j})$, $j = 1, 2$, it is a standard fact of adjoints of multiplication operators on reproducing kernel Hilbert spaces that $$M_{\varphi_n}^{*} k_{\lambda}^{\overline{a_2}} = \overline{\varphi_{n}(\lambda)} k_{\lambda}^{\overline{a_1}}.$$ This means that $$\overline{\varphi_{n}(\lambda)} k_{\lambda}^{\overline{a_1}} = M_{\varphi_{n}}^{*} k_{\lambda}^{\overline{a_2}} \to A^{*} k_{\lambda}^{\overline{a_2}}, \quad \lambda \in \mathbb{D},$$ (where the convergence is in the norm of $\mathscr{M}(\overline{a_1}$) and hence $$\langle k_{\lambda}^{\overline{a_1}}, \overline{\varphi_{n}(\lambda)} k_{\lambda}^{\overline{a_1}}\rangle_{\overline{a_1}} \to \langle k_{\lambda}^{\overline{a_1}}, A^{*} k_{\lambda}^{\overline{a_2}}\rangle_{\overline{a_1}}.$$ Sorting all this out, we see that for each fixed $\lambda \in \mathbb{D}$ we get $$\varphi_{n}(\lambda) \to \varphi(\lambda) := \frac{\langle k_{\lambda}^{\overline{a_1}}, A^{*} k_{\lambda}^{\overline{a_2}}\rangle_{\overline{a_1}}}{\|k_{\lambda}^{\overline{a_1}}\|_{\overline{a_1}}^{2}}.$$ Finally, for each $f \in \mathscr{M}(\overline{a_1})$ and $\lambda \in \mathbb{D}$ we have \begin{align*} \varphi(\lambda) f(\lambda) & = \langle f, \overline{\varphi(\lambda)} k_{\lambda}^{\overline{a_1}}\rangle_{\overline{a_1}}\\ & = \lim_{n \to \infty} \langle f, \overline{\varphi_{n}(\lambda)} k_{\lambda}^{\overline{a_1}}\rangle_{\overline{a_1}}\\ & = \langle f, A^{*} k_{\lambda}^{\overline{a_2}}\rangle_{\overline{a_1}}\\ & = \langle A f, k_{\lambda}^{\overline{a_2}}\rangle_{\overline{a_2}}\\ & = (A f)(\lambda). \end{align*} Since $A f \in \mathscr{M}(\overline{a_2})$ we conclude from the above calculation that $\varphi f \in \mathscr{M}(\overline{a_2})$, i.e., $\varphi \in \mathfrak{M}(\overline{a_1}, \overline{a_2})$. \end{proof} \end{comment} In the rest of this section we prove Theorem \ref{10w74hs-}. 
For $f \in \mathscr{O}(\mathbb{D})$ let $$B f = \frac{f - f(0)}{z}$$ denote the backward shift of $f$. It is well known that $$T_{\overline{z}} f = Bf, \quad f \in H^2,$$ and that $B$ acts contractively on $H^2$. Furthermore, since $T_{\overline{z}} T_{\overline{a}} = T_{\overline{a}} T_{\overline{z}}$ we conclude that $B \mathscr{M}(\overline{a}) \subset \mathscr{M}(\overline{a})$. This allows us to define $$X_{\overline{a}} := B|_{\mathscr{M}(\overline{a})}.$$ Observe that $X_{\overline{a}}$ is also a contraction since for any $f = T_{\overline{a}} g \in \mathscr{M}(\overline{a})$ we have $$\|X_{\overline{a}} f\|_{\overline{a}} = \|T_{\overline{a}} T_{\overline{z}} g\|_{\overline{a}} = \|T_{\overline{z}} g\|_{H^2} \leq \|g\|_{H^2} = \|f\|_{\overline{a}}.$$ \begin{Proposition}\label{Thm:adjoint-Xbara} Let $a$ be a bounded outer function. Then $$ X_{\overline a}^*=S_{\overline{a}}+1\otimes_{\overline{a}}T_{\overline{a}}Ba. $$ \end{Proposition} \begin{proof} For $f=T_{\overline a}g\in\mathscr{M}(\overline a)$ and $\lambda\in\mathbb{D}$, we have \[ (X_{\overline a}^* f)(\lambda)=\langle X_{\overline a}^*f,k_{\lambda}^{\overline a}\rangle_{\overline a}=\langle f,X_{\overline a}k_\lambda^{\overline a}\rangle_{\overline a}. \] Let $P_{+}$ denote the orthogonal projection of $L^2$ onto $H^2$ and $P_{-} = \operatorname{Id} - P_{+}$. 
Using the definition of $X_{\overline{a}}$ along with the identities $$k_{\lambda}^{\overline{a}}=T_{\overline{a}}a k_{\lambda}=T_{|a|^2}k_{\lambda},$$ and $T_{\overline{a}} T_{\overline{z}} = T_{\overline{z}} T_{\overline{a}}$, we obtain $$X_{\overline a}k_\lambda^{\overline a}=BT_{\overline a}(ak_\lambda)=T_{\overline a}B(ak_\lambda).$$ From here we get \begin{align*} (X_{\overline a}^* f)(\lambda) &=\langle f,T_{\overline{a}} B(a k_{\lambda})\rangle_{\overline a}\\ &=\langle g,B(ak_\lambda)\rangle_{L^2}\\ &=\langle g,\overline zak_\lambda \rangle_{L^2}\\ &=\langle \overline ag,\overline z k_\lambda\rangle_{L^2}\\ &=\langle P_+(\overline ag),\overline z k_\lambda\rangle_{L^2}+\langle \overline ag,P_-(\overline z k_\lambda)\rangle_{L^2}\\ &=\langle zf,k_\lambda \rangle_{L^2}+\langle \overline ag,P_-(\overline z k_\lambda)\rangle_{L^2}\\ &=\lambda f(\lambda)+\langle \overline ag,P_-(\overline z k_\lambda)\rangle_{L^2}. \end{align*} A short computation with power series shows that $P_-(\overline zk_\lambda)=\overline z$, whence \[ (X_{\overline a}^* f)(\lambda)=\lambda f(\lambda)+\langle \overline ag,\overline z\rangle_{L^2}. \] Observe now \begin{align*} \langle \overline ag,\overline z\rangle_{L^2}&= \langle g,P_{+}(a\bar z)\rangle_{L^2}\\ &=\langle g,Ba\rangle_{L^2}\\ &=\langle T_{\overline{a}}g,T_{\overline{a}}Ba\rangle_{\overline{a}}\\ &=\langle f,T_{\overline{a}}Ba\rangle_{\overline{a}}\\ &=(1\otimes_{\overline{a}} T_{\overline{a}}Ba)f. \end{align*} Hence $X_{\overline a}^* f=S_{\overline{a}}f+(1\otimes_{\overline{a}} T_{\overline{a}}Ba)f$, which yields the result. \end{proof} \begin{proof}[Proof of Theorem \ref{10w74hs-}] First, observe from Proposition~\ref{Thm:adjoint-Xbara} that $S_{\overline{a}}=X_{\overline{a}}^*-1\otimes_{\overline{a}}T_{\overline{a}}Ba$ and so $S_{\overline{a}}^*=X_{\overline{a}}-T_{\overline{a}}Ba\otimes_{\overline{a}}1$.
Thus for every $f\in\mathscr{M}(\overline{a})$, we have \begin{align*} S_{\overline{a}}^{*} S_{\overline{a}} f & =X_{\overline{a}}S_{\overline{a}}f-\langle S_{\overline{a}}f,1 \rangle_{\overline{a}}T_{\overline{a}}Ba =f-\langle S_{\overline{a}}f,1 \rangle_{\overline{a}}T_{\overline{a}}Ba. \end{align*} Next we see that \begin{align*} \|S_{\overline{a}} f\|_{\overline{a}}^{2} & = \langle S_{\overline{a}}^{*} S_{\overline{a}}f,f \rangle_{\overline{a}}\\ &=\|f\|_{\overline{a}}^2-\langle S_{\overline{a}}f,1 \rangle_{\overline{a}} \langle T_{\overline{a}}Ba,f \rangle_{\overline{a}}\\ &=\|f\|_{\overline{a}}^2-\langle f,S_{\overline{a}}^*1 \rangle_{\overline{a}} \langle T_{\overline{a}}Ba,f \rangle_{\overline{a}}. \end{align*} But $S_{\overline{a}}^*1=X_{\overline{a}}1-\|1\|_{\overline{a}}^2T_{\overline{a}}Ba=-\|1\|_{\overline{a}}^2T_{\overline{a}}Ba$ (since $X_{\overline{a}}1=B1=0$), which yields \begin{align*} \|S_{\overline{a}} f\|_{\overline{a}}^{2}&=\|f\|^2_{\overline{a}}+\|1\|_{\overline{a}}^2\,|\langle f,T_{\overline{a}}Ba \rangle_{\overline{a}}|^2\\ &\leq (1 + \|1\|_{\overline{a}}^2 \|T_{\overline{a}} B a\|_{\overline{a}}^{2}) \|f\|_{\overline{a}}^{2}.
\end{align*} This proves the upper bound $$\|S_{\overline{a}}\|_{\overline{a}}^{2} \leq 1 + \|1\|_{\overline{a}}^2 \|T_{\overline{a}} B a\|_{\overline{a}}^{2}.$$ To obtain equality, observe that the previous computation shows that $$\|S_{\overline{a}} f\|_{\overline{a}}^{2} = \|f\|_{\overline{a}}^{2} + \|1\|_{\overline{a}}^{2} \left| \langle f, T_{\overline{a}} B a\rangle_{\overline{a}}\right|^2.$$ Applying this identity to $f = T_{\overline{a}} B a$ we get \begin{align*} \|S_{\overline{a}} (T_{\overline{a}} B a)\|_{\overline{a}}^{2} & = \|T_{\overline{a}} B a\|_{\overline{a}}^{2} + \|1\|_{\overline{a}}^{2} \|T_{\overline{a}} B a\|_{\overline{a}}^{4}\\ & = \|T_{\overline{a}} B a\|_{\overline{a}}^{2} (1 + \|1\|_{\overline{a}}^{2} \|T_{\overline{a}} B a\|_{\overline{a}}^{2}) \end{align*} and thus $$\|S_{\overline{a}}\|^{2} = 1 + \|1\|_{\overline{a}}^{2} \|T_{\overline{a}} B a\|_{\overline{a}}^{2}.$$ Also observe that $$\|1\|_{\overline{a}}^{2} = \Big\|T_{\overline{a}} \frac{1}{\overline{a(0)}}\Big\|_{\overline{a}}^{2} = \Big\|\frac{1}{\overline{a(0)}}\Big\|_{H^2}^{2} = \frac{1}{|a(0)|^2}$$ and \begin{align*} \|T_{\overline{a}} B a\|_{\overline{a}}^{2} & = \|B a\|_{H^2}^{2} = \langle S S^{*} a, a\rangle_{H^2}\\ & = \langle a - a(0), a\rangle_{H^2}\\ & = \|a\|_{H^2}^{2} - |a(0)|^2, \end{align*} to conclude \begin{align*} \|S_{\overline{a}}\|^{2} & = 1 + \|1\|_{\overline{a}}^{2} \|T_{\overline{a}} B a\|_{\overline{a}}^{2}\\ & = 1 + \frac{1}{|a(0)|^2} (\|a\|_{H^2}^{2} - |a(0)|^2) = \frac{\|a\|_{H^2}^{2}}{|a(0)|^2}. \qedhere \end{align*} \end{proof} \bibliographystyle{plain}
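As a concrete illustration of the formula just proved (this example is ours, included only as a sanity check), take the outer function $a(z) = 1 - z/2$. Then $|a(0)| = 1$ and $\|a\|_{H^2}^{2} = 1 + \tfrac{1}{4}$, so Theorem \ref{10w74hs-} gives $$\|S_{\overline{a}}\|^{2} = \frac{\|a\|_{H^2}^{2}}{|a(0)|^2} = \frac{5}{4}\,,$$ and thus, in contrast with the shift on $H^2$ itself, which is an isometry, the restricted shift $S_{\overline{a}}$ has norm $\sqrt{5}/2 > 1$.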
\section{Introduction} Integrability is a unique tool allowing one to obtain exact non-perturbative results in fully interacting field theories even when supersymmetry is of no use. The range of theories where integrability is known to be applicable includes supersymmetric theories such as planar ${\cal N}=4$ SYM and ABJM theory, which are important from a holographic perspective. Quite significantly, recently found examples of integrable theories include a particular class of scalar models in 4D possessing no supersymmetry at all \cite{Gurdogan:2015csr,Caetano:2016ydc,Gromov:2017cja,Grabner:2017pgm,Kazakov:2018qbr}. Integrability methods of the type used here started being developed in the seminal papers \cite{bfklint} in the QCD context and independently in \cite{Minahan:2002ve} for ${\cal N}=4$ SYM. After almost $20$ years of development it was shown that both approaches can be united by the Quantum Spectral Curve (QSC) formalism \cite{Gromov:2013pga,Gromov:2014caa}\footnote{The QSC formalism was also developed for the ABJM model in \cite{Cavaglia:2014exa,Bombardelli:2017vhk} } of which both are particular limits~\cite{Gromov:2014caa,Alfimov:2014bwa}. The QSC was initially developed with the primary goal of computing the spectrum of anomalous dimensions or, equivalently, two point correlators. The QSC is based on the Q-system, a system of functional equations on Q-functions (see \cite{Gromov:2017blm,Kazakov:2018ugh} for a recent review). At the same time, the Q-functions are known to play the role of the wave functions in the Separation of Variables (SoV) program initiated for quantum integrable models in \cite{Sklyanin:1989cg,Sklyanin:1991ss,Sklyanin:1992eu,Sklyanin:1995bm} and recently generalized to $SU(N)$ spin chains in \cite{Gromov:2016itr} leading to a new algebraic construction for the states (see also \cite{Smirnov2001,Chervov:2007bb}).
In all these models the Q-functions (Baxter polynomials in this case) give the wave functions in separated variables.\footnote{Some inspiring results were obtained in \cite{Lukyanov:2000jp,Negro:2013wga}.}\footnote{Moreover, even without use of the QSC, the standard SoV approach has already given a number of results for correlators in $\mathcal{N}=4$ SYM \cite{Sobko:2013ema,Jiang:2015lda,Kazama:2016cfl,Kazama:2015iua,Kazama:2014sxa,Kazama:2013qsa,Kazama:2013rya,Kazama:2012is,Kazama:2011cp} though without finite size wrapping effects or at the classical level. } From this perspective it is natural to expect that the Q-functions of the QSC construction in ${\cal N}=4$ SYM contain much more information than the spectrum and should also play an important role for more general observables. There are a few important lessons one can learn from simple spin chains. In particular one should introduce ``twists'' (quasi-periodic boundary conditions/external magnetic field) in order for the SoV construction to work nicely. One of the main reasons why the twists are important is that they break global symmetry and remove degeneracy in the spectrum. This makes the map between the Q-functions and the states bijective. Fortunately, one can rather easily introduce twists into the QSC construction \cite{Gromov:2013qga,Gromov:2015dfa,Kazakov:2015efa} (see also \cite{Klabbers:2017vtw}); however, the interpretation of these new parameters is not always clear from the QFT point of view.
The $\gamma$-deformation of $\mathcal{N}=4$ SYM \cite{Frolov:2005dj,Alday:2005ww,Frolov:2005iq,Beisert:2005if} is one of the cases which is rather well understood, but it only breaks the R-symmetry part (dual to the isometries of the $S^5$ factor in AdS/CFT) of the whole ${\rm PSU}(2,2|4)$ group.\footnote{Recently in \cite{Guica:2017mtd} it was understood how to study the spectrum for a more general deformation.} \begin{figure} \centering \includegraphics[scale=1.1]{triangleonsteroids.pdf} \caption{The Maldacena-Wilson loop with three cusps. The cusps are connected by circular arcs with $3$ different scalars $\vec \Phi\cdot\vec n_{ij}$ coupled to the three different arcs. The expectation value of this object behaves exactly in the same way as a three point correlation function of 3 local operators but provides $6$ additional parameters ($2$ for each cusp) $\phi_1,\;\phi_2,\;\phi_3$ and $\cos\theta_{1}=\vec n_{12}\cdot\vec n_{23},\;\cos\theta_{2}=\vec n_{23}\cdot\vec n_{31},\;\cos\theta_{3}=\vec n_{31}\cdot\vec n_{12}$, which are associated with twists in the QSC description.} \label{fig:triangleonsteroids} \end{figure} The situation where the twist in both $AdS_5$ and $S^5$ appears naturally is the cusped Maldacena-Wilson loop. In this paper we consider the correlation function of $3$ cusps for $3$ general angles (see Fig.~\ref{fig:triangleonsteroids}). We consider a ladders limit \cite{Erickson:1999qv, Erickson:2000af} where the calculation can be done to all loop orders starting from Feynman graphs. We observe that the result obtained as a resummation of the perturbation theory takes a stunningly simple form when expressed in terms of the Q-functions, which we produced from the QSC.
\paragraph{Set-up and the Main Results.} The Maldacena-Wilson lines we consider are defined as \begin{equation} W={\rm Pexp} \int\,d\tau \(i A_\mu \dot x^\mu +\Phi^a n^a | \dot x |\) , \end{equation} where $n^a$ is a constant unit 6-vector parameterizing the coupling to the scalars $\Phi^a$ of $\mathcal{N}=4$ SYM. The observable we study is the Wilson loop defined on a planar triangle made of three circular arcs\footnote{Each arc is the image of a straight line segment under a conformal transformation and thus is locally 1/2-BPS.}, see Fig.~\ref{fig:triangleonsteroids}. It is parameterized by three cusp angles $\phi_i$ at its vertices and also three angles $\theta_i$ between the couplings to scalars on the lines adjacent to each vertex. At each cusp we have a divergence controlled by the celebrated cusp anomalous dimension $\Gamma_{\rm cusp}(\phi_i,\theta_i)$ which can be efficiently studied via integrability \cite{Correa:2012hh,Drukker:2012de,Gromov:2015dfa} and is analogous to the local operator scaling dimensions in its mathematical description by the QSC. Due to this we will use the notation $\Delta$ for the cusp dimension. To regularize the divergence we cut an $\epsilon$-ball at each of the cusps. The whole Wilson loop has a conformally covariant dependence on the cusp positions and defines the structure constant $C_{123}$ for a 3-point correlator of three cusps. We focus on the ladders limit in which $\theta_i\to i \infty$ while the 't Hooft coupling $g=\sqrt{\lambda}/({4\pi})$ goes to zero with the finite combinations \begin{equation} \hat g_i=\frac{g}{2} e^{-i\theta_i/2}\la{lad} \end{equation} playing the role of three effective couplings. The perturbative expansion for $\Delta$ can then be resummed to all orders, leading to a stationary Schr\"odinger equation \cite{Erickson:1999qv, Erickson:2000af,Correa:2012nk}. However, the 3-cusp correlator is much more nontrivial and depends on three couplings $\hat g_i$ which we can vary separately.
We have studied the case when two of them are nonzero, corresponding to the structure constant we denote by $C^{\bullet \bullet \circ}_{123}$. The result may be written in terms of the Schr\"odinger wave-functions but it is a highly complicated integral which does not offer much structure. Yet once we rewrite it in terms of the QSC Q-functions $q(u)$, we observe miraculous cancellations leading to a surprisingly simple expression \begin{equation}\la{correlator} \boxed{ C^{\bullet \bullet \circ}_{123} = \, \frac{\, \br{ q_{1} \, q_{2}\, e^{-\phi_3 u} } }{\sqrt{ \br{ q_1^2}\br{ q_2^2}} } \ \ ,} \end{equation} where the bracket $\br{f(u)}$ is defined for the functions which behave as $\sim e^{u\beta}u^\alpha$ at large $u$ and are analytic for all ${\rm Re}\;u>0$ as \begin{equation} \la{eq:thebracket} \br{f(u) }\equiv \(2\sin\frac{\beta}{2}\)^\alpha\int_{c -i \infty}^{c+i\infty} f(u)\frac{du}{2\pi i u}\;\;,\;\;c>0\;. \end{equation} The functions $q_1(u),q_2(u)$ describe the first and the second cusp, while $e^{-\phi_3 u}$ is just the Q-function at zero coupling corresponding to the third cusp. Each of the Q-functions solves a simple finite difference equation \eq{Bax2p}. This is precisely the kind of result one expects for an integrable model treated in separated variables. Note that all the dependence on the angles and the couplings is coming solely through the Q-functions, which depend nontrivially on these parameters, in particular at large $u$ we have $q_i(u)\simeq u^{\Delta_i}e^{\phi_i u}$. We also found a very simple expression for the derivative of $\Delta$ w.r.t. 
the coupling $\hat g$ and the angle $\phi$ in terms of the bracket $\br{ \cdot}$ \begin{equation}\la{Cinsert} -\frac{1}{4}\frac{\partial \Delta}{\partial\hat g^2}=\frac{\br{q^2\frac{1}u}}{\br{q^2}}\;\;,\;\;-2\, \frac{\partial(\sin\phi\Delta)}{\partial\phi}=\frac{\br{q^2 u}}{\br{q^2}}\;, \end{equation} which has a form very similar to \eq{correlator} with $q_1=q_2=q$ and different insertions in the numerator! These quantities can be interpreted as structure constants of two cusps with a local BPS operator \cite{Costa:2010rz}. In the limit when the triangle collapses to a straight line, this configuration has recently attracted much attention as it defines a 1d CFT on the line \cite{Giombi:2017cqn,Beccaria:2017rbe,Kim:2017sju,Cooke:2017qgm,Kim:2017phs}. In particular the structure constants we consider were computed in \cite{Kim:2017sju} by resumming the diagrams using the exact solvability of the Schr\"odinger problem at $\phi=0$. Our results in the zero angle limit can be simplified further by noticing that for $\phi_i\to 0$ the integral is saturated by the leading large $u$ asymptotics of the integrand. This leads to $\br{ q_i q_j }\to {1/\Gamma(1-\Delta_i-\Delta_j)}$, reproducing the results of \cite{Kim:2017sju}. As a byproduct, we also resolved the question of how to use integrability to compute the anomalous dimension for the cusp with an insertion of the same scalar as that coupled to the Wilson lines. We propose that it simply corresponds to one of the excited states in the Schr\"odinger equation (and to a well-defined analytic continuation in the QSC outside the ladders limit). We verified this claim at weak coupling by comparing with the direct perturbation theory calculation of \cite{Alday:2007he}\footnote{The result in that paper is for $\theta=0$, whereas we consider $\theta=i\infty$; however, we expect the 1-loop result should not depend on $\theta$.}.
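The limiting value $\br{ q_i q_j }\to 1/\Gamma(1-\Delta_i-\Delta_j)$ can be seen directly from the definition \eq{eq:thebracket}; the short computation below is our consistency check, relying only on the Hankel (inverse Laplace) representation of $1/\Gamma$. For a pure power-exponential $f(u)=u^\alpha e^{\beta u}$ with $\beta>0$ one has \begin{equation} \br{ u^\alpha e^{\beta u} }=\(2\sin\frac{\beta}{2}\)^{\alpha}\int_{c-i\infty}^{c+i\infty} u^{\alpha-1}e^{\beta u}\frac{du}{2\pi i}=\(\frac{2\sin(\beta/2)}{\beta}\)^{\alpha}\frac{1}{\Gamma(1-\alpha)}\;, \end{equation} where we used $\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{\beta u}u^{-z}du=\beta^{z-1}/\Gamma(z)$ with $z=1-\alpha$. Taking $\alpha=\Delta_i+\Delta_j$ and $\beta=\phi_i+\phi_j$, as dictated by the large $u$ asymptotics $q_i(u)\simeq u^{\Delta_i}e^{\phi_i u}$ which saturate the integral for $\phi_i\to 0$, and noting that $2\sin(\beta/2)/\beta\to 1$ as $\beta\to 0$, one indeed recovers $\br{ q_i q_j }\to 1/\Gamma(1-\Delta_i-\Delta_j)$.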
Very recently the importance of cusps with such insertions was further motivated in \cite{Bruser:2018jnc}, where the $3$-loop result was extracted. We demonstrate some of our results in Fig.~\ref{fig:HHLnum2} where we show the plots of the spectrum and the structure constant for a range of the effective coupling $\hat g$. \begin{figure} \centering \includegraphics[scale=0.6]{spec1p0.pdf} \includegraphics[scale=0.6]{HHL1p0.pdf} \caption{The spectrum (left) and the diagonal Heavy-Heavy-Light correlator given by \eq{correlator} (right) for the first several states ($n=0,1,\dots,7$), with all angles equal to $\phi=1$. The solid blue line corresponds to the usual cusp, while the others correspond to excited states with scalar insertions discussed in section~\ref{sec:excited}.} \label{fig:HHLnum2} \end{figure} \paragraph{Structure of the paper.} The rest of the paper is organized as follows. In Sec. \ref{sec:qsclad} we briefly review the QSC and present the Baxter equation to which it reduces in the ladders limit. We also derive compact formulas for the variation of $\Delta$ with respect to the coupling and the angle $\phi$. In Sec. \ref{sec:BS} we write the regularized 2-pt function in terms of the Schr\"odinger equation wave functions, in particular deriving the pre-exponent normalization which is important for 3-pt correlators. We also relate the wave functions to the QSC Q-functions via a Mellin transform. In Sec. \ref{sec:3pt1} we study the 3-cusp correlator and derive our main result for the structure constant \eq{correlator}. In Sec. \ref{sec:excited} we describe the interpretation of excited states in the Schr\"odinger problem as insertions at the cusp. We generalize our results for 3-pt functions to the excited states and provide both perturbative and numerical data for their scaling dimensions. In Sec.
\ref{sec:smallPhi} we describe the limit when the 3-cusp configuration degenerates, in particular reproducing the results of \cite{Kim:2017sju} when all angles become zero. In Secs. \ref{sec:num} and \ref{sec:weak} we present numerical and perturbative results for the structure constants. Finally in Sec. \ref{sec:ope} we interpret the regularized 2-pt function as a 4-cusp correlator for which we write an OPE-type expansion in terms of the structure constants, perfectly matching our previous results. In Sec. \ref{sec:concl} we present conclusions. The appendices contain various technical details, in particular the detailed strong coupling expansion for the spectrum. \section{Quantum Spectral Curve in the ladders limit} \label{sec:qsclad} In this section we provide all the necessary background on the Quantum Spectral Curve (QSC). More technical details are given in Appendix~\ref{app:qsc}. The QSC provides a finite set of equations describing non-perturbatively the cusp anomalous dimension $\Delta$ at all values of the parameters $\phi,\theta$ and any coupling $g$. Let us briefly review this construction and then discuss the form it takes in the ladders limit. The QSC was originally developed in \cite{Gromov:2013pga,Gromov:2014caa} for the spectral problem of local operators in $\mathcal{N}=4$ SYM. It was extended in \cite{Gromov:2015dfa} to describe the cusp anomalous dimension, reformulating and greatly simplifying the TBA approach of \cite{Correa:2012hh, Drukker:2012de}. The QSC is a set of difference equations (QQ-relations) for the Q-functions which are central objects in the integrability framework. When supplemented with extra asymptotics and analyticity conditions, these relations fix the Q-functions and provide the exact anomalous dimension $\Delta$ (see \cite{Gromov:2017blm} for a pedagogical introduction and \cite{Kazakov:2018ugh} for a wider overview).
The QSC is based on 4+4 basic Q-functions denoted as ${\bf P}_a(u)$, $a=1,\dots,4$ and ${\bf Q}_i(u)$, $i=1,\dots,4$ which are related to the dynamics on $S^5$ and on $AdS_5$ respectively. The ${\bf P}$-functions are analytic functions of $u$ except for a cut on $[-2g,2g]$. They can be nicely parameterized in terms of an infinite set of coefficients that contain full information about the state, including $\Delta$. Details of this parameterization are given in Appendix \ref{app:qsc}. The other 4 basic Q-functions ${\bf Q}_i$ are indirectly determined by ${\bf P}_a$ via the 4th order Baxter equation \cite{Alfimov:2014bwa} \beqa\la{bax5} {\bf Q}^{[+4]}_iD_0 &-& {\bf Q}^{[+2]}_i \[ D_1-{\bf P}_a^{[+2]}{\bf P}^{a[+4]}D_0 \] + {\bf Q}_i \[ D_2-{\bf P}_a{\bf P}^{a[+2]}D_1+ {\bf P}_a{\bf P}^{a[+4]}D_0 \]\\ &-& {\bf Q}^{[-2]}_i \[ \bar D_1+{\bf P}_a^{[-2]}{\bf P}^{a[-4]}\bar D_0 \] +{\bf Q}^{[-4]}_i\bar D_0 =0\nonumber \ , \eeqa where the coefficients $D_n, \bar D_n$ are simple determinants built from ${\bf P}_a$ and are given explicitly in Appendix \ref{app:qsc}\footnote{The functions ${\bf P}^a$ appearing here are defined by ${\bf P}^a=\chi^{ab}{\bf P}_b$ with the only non-zero entries of $\chi^{ab}$ being $\chi^{14}=-\chi^{23}=\chi^{32}=-\chi^{41}=-1\ $.}. Here we used the shorthand notation \begin{equation} f^\pm=f(u\pm\tfrac{i}{2}), \ \ f^{[+a]}=f(u+\tfrac{ia}{2}) \ . \end{equation} Being of the 4th order, this Baxter equation has four independent solutions, which precisely correspond to the four Q-functions ${\bf Q}_i$. Different solutions can be identified by the four possible asymptotics $ {\bf Q}_i\sim u^{1/2\pm \Delta}e^{\pm u\phi}$, which uniquely fix the basis of four Q-functions up to a normalization if we also impose that the solutions ${\bf Q}_i(u)$ are analytic in the upper half-plane of $u$, which is always possible to do. Then they will have an infinite set of Zhukovsky cuts in the lower half-plane with branch points at $u=\pm 2g-in$ (with $n=0,1,\dots$).
Finally, in order to close the system of equations we need to specify what happens under the analytic continuation through the cut $[-2g,2g]$. It was shown in \cite{Gromov:2015dfa} that in order to close the equations one should impose the following ``gluing'' conditions \beqa\la{qtil} &&\tilde { q}_1(u)={q}_1(-u)\\ &&\tilde { q}_2(u)={ q}_2(-u)\\ &&\tilde {q}_3(u)=a_1\sinh(2\pi u){ q}_2(-u)+{ q}_3(-u)\\ &&\tilde { q}_4(u)=a_2\sinh(2\pi u){ q}_1(-u)+{ q}_4(-u)\;, \eeqa where $q_i(u)={\bf Q}_i(u)/\sqrt{u}$ and $\tilde q_i$ is its analytic continuation under the cut. These relations fix both ${\bf P}$- and ${\bf Q}$-functions and allow one to extract the exact cusp anomalous dimension $\Delta$ from the large $u$ asymptotics. The equations presented above are valid at any values of $g$ and the angles $\phi,\theta$. For the purposes of this paper we have to take the ladders limit of these equations. We will see that they simplify considerably. \subsection{Baxter equation in the ladders limit} \label{sec:baxter} In the ladders limit \eq{lad} the coupling $g$ goes to zero and the QSC greatly simplifies, as all the branch cuts of the Q-functions collapse and simply become poles. This limit was explored in detail in \cite{Gromov:2016rrp} for the special case $\phi=\pi$ corresponding to the flat space quark-antiquark potential. Here we briefly generalize these results to the generic $\phi$ case. The key simplification is that the 4th order Baxter equation \eq{bax5} on ${\bf Q}_i$ factorizes into two 2nd order equations, the first one being \begin{equation} \label{Bax2p}\boxed{ \left(-2u^2 \cos \phi +2\Delta u \sin \phi +4 \hat{g}^2\right){q}(u) +u^2 { q}(u-i)+u^2 { q}(u+i)=0} \end{equation} and another equation obtained by $\Delta\to-\Delta$. This follows from the fact that the coefficients $A_n,B_n$ entering the ${\bf P}$'s via \eq{cuspas}, \eq{fgAB} scale as $\sim 1$ in the ladders limit\footnote{We assumed this in analogy with the $\phi=\pi$ case and verified it by self-consistency.}.
Then as in \cite{Gromov:2016rrp} one can carefully expand the 4th order Baxter equation for $t\equiv e^{i\theta/2}\to 0$ and recover the 2nd order equation \eq{Bax2p}. As the large $u$ behaviour of the solutions is fixed by the Baxter equation \eq{Bax2p}, we can label the two solutions $q_+$ and $q_-$ according to the large $u$ asymptotics $q_\pm\sim e^{\pm\phi u} u^{\pm\Delta}$. For example, in the weak coupling limit $\hat g=0$, where $\Delta=0$, we see that $q_\pm$ are simply \begin{equation}\la{weakq} q^{(0)}_+=e^{+\phi u}\;\;,\;\;q^{(0)}_-=e^{-\phi u}\;. \end{equation} At finite $\hat g$ the Q-functions become rather nontrivial. While $q_\pm(u)$ are regular in the upper half-plane including the origin, they have poles in the lower half-plane at $u=-in, \ \ n=1,2,\dots$. The equation \eq{Bax2p} is just an $sl(2)$ (non-compact) spin chain Baxter equation, similar to the one in \cite{Gromov:2017cja}. This is expected based on symmetry grounds. What is less trivial is the ``quantization condition'', i.e.\ the condition which restricts $\Delta$ to a discrete set. It was first derived in \cite{Gromov:2016rrp} for $\phi\to\pi$ and later generalized to the very similar calculation of two-point functions in the fishnet model \cite{Gromov:2017cja}. The derivation of the quantization condition for any $\phi$ is done in Appendix~\ref{app:qsc} and leads to the following result: \begin{equation} \label{qquant} \boxed{ \Delta=-\frac{2\hat g^2}{\sin\phi} \frac{q_+(0)\bar q_+'(0)+\bar q_+(0)q_+'(0)}{q_+(0)\bar q_+(0)} } \;. \end{equation} Together with the Baxter equation \eq{Bax2p}, this relation fixes $\Delta$ as well as $q_+$. Note that the r.h.s.\! of \eq{qquant} contains $q_+$, which has to be found from the Baxter equation and thus also depends on $\Delta$ nontrivially. Because of this, \eq{qquant} is a non-linear equation, which may have several solutions.
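As a quick sanity check of \eq{Bax2p}, one can verify numerically that the zero-coupling solutions \eq{weakq} satisfy it identically. The short Python sketch below (our own illustration, not part of the derivation; the function name is ours) evaluates the left-hand side of \eq{Bax2p} at $\hat g=\Delta=0$ for $q=e^{\pm\phi u}$ at a few sample points:

```python
import cmath

def baxter_lhs(q, u, phi, ghat=0.0, Delta=0.0):
    """Left-hand side of the ladders-limit Baxter equation (Bax2p)."""
    return ((-2*u**2*cmath.cos(phi) + 2*Delta*u*cmath.sin(phi) + 4*ghat**2)*q(u)
            + u**2*q(u - 1j) + u**2*q(u + 1j))

phi = 1.0
for sign in (+1, -1):
    q0 = lambda x, s=sign: cmath.exp(s*phi*x)  # q^{(0)}_pm = e^{pm phi u} of (weakq)
    for u in (0.7, 2.3 + 0.4j, 1.0 - 1.1j):
        assert abs(baxter_lhs(q0, u, phi)) < 1e-12
print("q = e^{pm phi u} solves (Bax2p) at ghat = Delta = 0")
```

The cancellation is simply $u^2\,(e^{i\phi}+e^{-i\phi}-2\cos\phi)\,q(u)=0$, which is how \eq{Bax2p} degenerates at zero coupling.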
The intuition behind it becomes clearer after reformulating the problem in a more standard Schr\"odinger equation form, as we will see in section \ref{sec:cuspdiv}. At the same time we see that we only need $q_+$ to find the spectrum. For this reason we will simply denote it as $q(u)$ in the rest of the paper. The meaning of the Q-functions from the QFT point of view is still a big mystery: no observable in the field theory is known to correspond to them directly. However in the ``fishnet'' theory, which is a particular limit of ${\cal N}=4$ SYM, such an object was recently identified~\cite{Gromov:2017cja}. Here, in the ladders limit we will be able to relate $q(u)$ to a solution of the Bethe-Salpeter equation, which resums the ladder Feynman diagrams and thus has a direct field theory interpretation. \subsection{Scalar product and variations of $\Delta$} In this section we demonstrate the significance of the bracket $\br{\cdot}$, which we defined in the introduction in \eq{eq:thebracket}. In particular we will derive a closed expression for ${\partial\Delta}/{\partial\hat g}$, which can be considered as a correlation function of two cusps with an insertion of the Lagrangian \cite{Costa:2010rz}. Even though this seems to be the simplest application of the QSC to the computation of $3$-point correlators, it is not yet known how to write the result for $\partial\Delta/\partial g$ for the general state in a closed form. We demonstrate here that this is in fact possible to do at least in our simplified set-up. First we rewrite the Baxter equation \eq{Bax2p} by defining the following finite difference operator \begin{equation} \hat O \equiv \frac{1}{u}\left[ (4\hat g^2-2u^2\cos\phi+2\Delta u\sin\phi) +u(u-i)D^{-1}+u(u+i)D \right]\frac{1}{u} \end{equation} where $D$ is the operator shifting the argument by $i$, so that the Baxter equation \eq{Bax2p} becomes \begin{equation} \hat O q(u)=0\;.
\end{equation} Now we notice that this operator is ``self-adjoint'' with respect to integration along a vertical contour to the right of the origin, meaning that \begin{equation} \int_{{\bf |}} q_1(u)\hat O q_2(u) du= \int_{{\bf |}} q_2(u)\hat O q_1(u) du\;\;,\;\;\int_{{\bf |}}\equiv \int_{c-i\infty}^{c+i\infty} \ . \end{equation} where $c>0$\footnote{Due to the sign of the exponential factors in the asymptotics of $q(u)$ (where we assume $\phi > 0$ ), the integrals would vanish trivially if we chose an integration contour with $c < 0$. }. Indeed, consider the term with $D$: \begin{equation} \int_{{\bf |}} q_1(u)u(u+i) Dq_2(u) du= \int_{{\bf |}} q_1(u)u(u+i) q_2(u+i) du= \int_{{\bf |}} q_2(u)u(u-i)D^{-1} q_1(u) du \end{equation} which is now the term with $D^{-1}$ acting on $q_1(u)$. In the last equality we changed the integration variable $u\to u-i$. The fact that $\hat O$ has this property immediately leads to a great simplification of the expression for $\partial\Delta/\partial g$. We can now apply the standard QM perturbation theory logic. Changing the coupling and/or the angle $\phi$ will lead to a perturbation of both the operator $\hat O$ and the q-function in such a way that the Baxter equation is still satisfied, \begin{equation} (\hat O+\delta \hat O)(q+\delta q)=0\;\;,\;\;\delta \hat O = \frac{1}{u^2} (8\hat g\delta \hat g+2u\sin\phi \delta \Delta +2u^2\sin\phi\delta \phi+2\Delta u\cos\phi\delta \phi )\;.
\end{equation} An explicit expression for $\delta q$ could be rather hard to find, but luckily we can get rid of it by contracting $(\hat O+\delta \hat O)(q+\delta q)$ with the original $q(u)$: \begin{equation} 0=\int_{{\bf |}} q(\hat O+\delta \hat O)(q+\delta q)du=\int_{{\bf |}}(q+\delta q)(\hat O+\delta \hat O)q du= \int_{{\bf |}}(q+\delta q)\delta \hat O q du \end{equation} At the leading order in the perturbation we can now drop $\delta q$ to obtain \begin{equation} \int_{{\bf |}} q (8\hat g\delta \hat g+2u\sin\phi \delta \Delta +2u^2\sin\phi\delta \phi+2\Delta u\cos\phi\delta \phi ) q \frac{du}{u^2} = 0 \ , \end{equation} so that \begin{equation} \label{eq:dddg}\frac{\partial \Delta}{\partial \hat g}=-\frac{4\hat g}{\sin\phi}\frac{\int_{{\bf |}}\frac{q^2}{u^2} du} {\int_{{\bf |}}\frac{q^2}{u} {du}} \;\;,\;\; \frac{\partial \Delta}{\partial \phi}=-\frac{\int_{{\bf |}} q^2 du} {\int_{{\bf |}}\frac{q^2}{u} {du}}-\Delta\cot\phi \;. \end{equation} In terms of the bracket $\br\cdot$ this becomes \begin{equation} \boxed{ -\frac{1}{4}\frac{\partial \Delta}{\partial\hat g^2}=\frac{\br{q^2\frac{1}u}}{\br{q^2}}\;\;,\;\;-2 \, \frac{\partial(\sin\phi\Delta)}{\partial\phi}=\frac{\br{q^2 u}}{\br{q^2}}\;. } \end{equation} This very simple equation is quite powerful. For example, by plugging in the leading order $q=e^{u\phi}$ from \eq{weakq} and computing the integrals by residues at $u=0$ we get \begin{equation} \frac{\partial \Delta}{\partial \hat g}=-\frac{4\hat g}{\sin\phi}2\phi+{\cal O}(\hat g^3)\;, \end{equation} which immediately gives the one-loop dimension $\Delta=-\hat g^2\frac{4\phi}{\sin\phi}+{\cal O}(\hat g^4)$. Furthermore, another interesting property of the bracket is that solutions with different $\Delta$'s are orthogonal to each other. Indeed, consider two solutions $q_a$ of the Baxter equation with two different dimensions $\Delta_a$, such that $\hat O_1 q_1=\hat O_2 q_2=0$.
Then \begin{equation} 0=\int_{{\bf |}} q_1(u)(\hat O_1-\hat O_2)q_2(u) du = (\Delta_1-\Delta_2)2\sin\phi\int_{{\bf |}} \frac{q_1(u)q_2(u)}{u} du \ , \end{equation} from which we conclude that $\br{q_1(u)q_2(u)}=0$. In the next section we relate the Q-function to the solution of the Bethe-Salpeter equation resumming the ladder diagrams for the two-point correlator. \section{Bethe-Salpeter equations and the Q-function}\label{sec:BS} \begin{figure} \centering \includegraphics[scale=1.1]{archesFD.pdf} \caption{The two-cusp correlator with four different cut-offs $\Lambda_a$, which can be considered as a particular case of the $4$-cusp correlator. We take $n$ points along each of the circular arcs and connect them with scalar propagators. We have to integrate over the domain $-\Lambda_1<t_1<t_2<\dots<t_n<\Lambda_3$ and $-\Lambda_4<s_1<s_2<\dots<s_n<\Lambda_2$. One should use a specific parameterization given in \eq{eq:param}.} \label{fig:G4} \end{figure} \label{sec:schr} In this section we consider a two-cusp correlator with amputated cusps shown in Fig.~\ref{fig:G4}, which we denote by $G(\Lambda_1,\Lambda_2,\Lambda_3,\Lambda_4)$. We derive an expression for it by re-summing the ladder diagrams. To do this we write a Bethe-Salpeter equation and then reduce it to a stationary Schr\"odinger equation, expressing $G$ in terms of the wave functions and energies of the Schr\"odinger problem. After that we discuss the relation between the wave functions and the Q-functions introduced in the previous section. \subsection{Bethe-Salpeter equation}\label{sec:cuspdiv} Our goal in this section is to review the field-theoretical definition of the cusp anomalous dimension and its computation in the ladders limit, where it relates to the ground state energy of a simple Schr\"odinger problem. First we define more rigorously the object from Fig.~\ref{fig:G4}.
We are computing an expectation value \begin{equation} \label{G4WW} G(\Lambda_1, \Lambda_2 , \Lambda_3, \Lambda_4 ) = \left\langle{\rm Tr\;} W_{\vec x_+(-\Lambda_1)}^{\vec x_+(+\Lambda_3)}(\vec n_1) \;\;W_{\vec x_-(-\Lambda_4)}^{\vec x_-(+\Lambda_2)}(\vec n_2)\right\rangle , \end{equation} with \begin{equation} W_{x}^{y}(\vec n)={\rm Pexp}\int_{x}^y \(iA_\mu d x^\mu+\Phi^a n^a | dx |\) . \label{eq:Wxyop} \end{equation} For simplicity we can assume that the contours belong to the $(*,*,0,0)$ two-dimensional plane (which can always be achieved with a suitable rotation) and we use a particular ``conformal'' parameterization of the circular arcs by \begin{equation}\la{eq:param} \vec{x}_\pm(s) = ( \text{Re}( \zeta_\pm(s) ) , \text{Im}(\zeta_\pm(s) ) , 0, 0 ) , \end{equation} where \beqa \zeta_{_\pm}( s ) &=& z_1 + \frac{ (z_2 - z_1 ) }{1\mp i e^{\mp s + i (\chi\pm\phi)/2} } \label{eq:zazb} \eeqa such that $\vec x_1 \equiv ( \text{Re}( z_1 ) , \text{Im}(z_1) , 0, 0 ) =\vec x_\pm(\mp\infty)$ and $\vec x_2=( \text{Re}( z_2 ) , \text{Im}(z_2) , 0, 0 ) =\vec x_\pm(\pm\infty)$. Here $\vec x_+$ corresponds to the upper arc in Fig.~\ref{fig:G4}, and $\vec x_-$ to the lower one. The configuration has one parameter $\chi$, which allows one to bend the two arcs simultaneously while keeping the angle between them fixed. This is the most general configuration of two intersecting circular arcs up to a rotation. Next we notice that in the ladders limit we can neglect the gauge fields, so we get\footnote{Note also that in the ladders limit the orientation of the Wilson line is irrelevant, e.g.
$\langle W^{\vec y}_{\vec x}(\vec n)\rangle=\langle W^{\vec x}_{\vec y}(\vec n)\rangle$.} \beqa\la{eq:BS}&& \partial_{\Lambda_3}\partial_{\Lambda_4} G(\Lambda_1, \Lambda_2 , \Lambda_3, \Lambda_4 ) =\\ \nonumber&& \left\langle{\rm Tr}\; W_{\vec x_+(-\Lambda_1)}^{\vec x_+(+\Lambda_3)}(\vec n_1) \;\;\Phi^a n_1^a |\dot{\vec x}_+(\Lambda_3)|\;\;\Phi^b n_2^b |\dot{\vec x}_-(-\Lambda_4)|\;\;W_{\vec x_-(-\Lambda_4)}^{\vec x_-(+\Lambda_2)}(\vec n_2)\right\rangle , \eeqa which gives \begin{equation} \partial_{\Lambda_3}\partial_{\Lambda_4} G(\Lambda_1, \Lambda_2 , \Lambda_3, \Lambda_4 )= G(\Lambda_1, \Lambda_2 , \Lambda_3, \Lambda_4 )P( -\Lambda_4, \Lambda_3) \ , \end{equation} where the last factor is the scalar propagator \begin{equation} P( s, t) = 4 \, \hat{g}^2 \, \frac{ | \dot{ \vec x}_- (s) | \, | \dot{ \vec x}_+ (t) |}{ | \vec x_+(t) - \vec x_-(s) |^2 } \label{eq:Pst} \end{equation} with $\hat g^2= g^2 \vec n_1 \cdot \vec n_2/2$ (which is equivalent in the ladders limit to the definition of $\hat g$ in \eq{lad}, as $\vec n_1 \cdot \vec n_2=\cos\theta$). The main advantage of the parameterization we used is that the propagator $P(s,t)$ is a function of the sum $s+t$: \begin{equation}\la{eq:goodpropagator} P( s,t) = \frac{2\hat{g}^2}{ \cosh(s+t) + \cos(\phi)}\; . \end{equation} Finally, we have to specify the boundary conditions. We notice that whenever one of the Wilson lines degenerates to a point the expectation value in the ladders limit becomes $1$, which implies \begin{equation} G(\Lambda_1,\Lambda_2,-\Lambda_1,\Lambda_4)= G(\Lambda_1,\Lambda_2,\Lambda_3,-\Lambda_2)=1\;. \end{equation} \paragraph{Stationary Schr\"odinger equation.} In order to separate the variables we introduce new ``light-cone'' coordinates in the following way \begin{equation} x=\Lambda_4-\Lambda_3\;\;,\;\;y=\frac{\Lambda_1+\Lambda_2+\Lambda_3+\Lambda_4}{2} \end{equation} so that $\partial_{\Lambda_3}\partial_{\Lambda_4}=-\partial_x^2+\frac{1}{4}\partial_y^2$.
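The key property \eq{eq:goodpropagator}, namely that in the parameterization \eq{eq:param}, \eq{eq:zazb} the propagator depends only on the sum $s+t$, can be checked numerically. The following sketch in Python (our own illustration; the endpoint positions, the angles and the finite-difference derivatives are ad hoc test choices) evaluates \eq{eq:Pst} directly:

```python
import cmath, math

phi, chi = 1.2, 0.4              # arbitrary test angles, 0 < phi < pi
z1, z2 = 0.0 + 0.0j, 1.0 + 0.5j  # arbitrary endpoints x_1, x_2

def zeta(s, sign):
    """The two arcs of (eq:zazb); sign=+1 is the upper arc, sign=-1 the lower."""
    return z1 + (z2 - z1)/(1 - sign*1j*cmath.exp(-sign*s + 1j*(chi + sign*phi)/2))

def speed(s, sign, h=1e-6):
    """|dx/ds| by a central finite difference."""
    return abs(zeta(s + h, sign) - zeta(s - h, sign))/(2*h)

def P(s, t, ghat2=1.0):
    """Scalar propagator (eq:Pst): lower arc at s, upper arc at t."""
    return 4*ghat2*speed(s, -1)*speed(t, +1)/abs(zeta(t, +1) - zeta(s, -1))**2

# endpoints: x_+(-infinity) -> x_1 and x_+(+infinity) -> x_2
assert abs(zeta(-30, +1) - z1) < 1e-9 and abs(zeta(+30, +1) - z2) < 1e-9

# (eq:goodpropagator): P depends on s and t only through the sum s + t
for s, t in [(0.3, -0.8), (1.1, 0.2), (-0.5, 1.0)]:
    assert abs(P(s, t) - 2.0/(math.cosh(s + t) + math.cos(phi))) < 1e-6
```

The test passes for any choice of $z_1$, $z_2$ and $\chi$, in line with the statement that $\chi$ only bends the arcs without affecting the propagator.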
We also denote \begin{equation} \tilde G_{\Lambda_1,\Lambda_2}(x,y)\equiv G(\Lambda_1, \Lambda_2 , \Lambda_3, \Lambda_4 ) \end{equation} so that \eq{eq:BS} becomes \begin{equation}\la{eq:BS2} \frac{1}{4}\partial_y^2\tilde G_{\Lambda_1,\Lambda_2}(x,y)= \[\partial_x^2+ \frac{2\hat{g}^2}{ \cosh x + \cos\phi}\] \tilde G_{\Lambda_1,\Lambda_2}(x,y)\;. \end{equation} In order to completely reduce this equation to the stationary Schr\"odinger problem, we have to extend the function $\tilde G_{\Lambda_1,\Lambda_2}(x,y)$ to the whole plane. Currently it is only defined for $-\Lambda_1<\Lambda_3$ and $-\Lambda_2<\Lambda_4$, i.e.\ inside the future light-cone, see Fig.~\ref{fig:lc}. We extend $\tilde G_{\Lambda_1,\Lambda_2}(x,y)$ to the whole plane using the following definition: \beqa \tilde G_{\Lambda_1,\Lambda_2}(x,y)&=&-\tilde G_{\Lambda_1,\Lambda_2}(x,|y|)\;\;,\;\;y<0\\ \tilde G_{\Lambda_1,\Lambda_2}(x,y)&=&0\;\;,\;\;|y|<|x-\Lambda_1+\Lambda_2|/2\;. \eeqa With this definition it is easy to see that if \eq{eq:BS2} is satisfied inside the future light cone, it holds on the whole plane. \begin{figure} \centering \includegraphics[scale=1.6]{lightcone.pdf} \caption{We have to impose the boundary condition $\tilde G_{\Lambda_1,\Lambda_2}(x,y)=1$ on the light-rays intersecting at $x=\Lambda_1-\Lambda_2$ and given by the equation $x=\Lambda_1-\Lambda_2\pm 2y$. The initial function $\tilde G_{\Lambda_1,\Lambda_2}(x,y)$ is only defined inside the future light cone.
It can be extended to the whole plane by setting it to zero outside the light cone and imposing $\tilde G_{\Lambda_1,\Lambda_2}(x,y)=-\tilde G_{\Lambda_1,\Lambda_2}(x,-y)$ for negative $y$.} \label{fig:lc} \end{figure} After that we can expand $\tilde G_{\Lambda_1,\Lambda_2}(x,y)$ in the complete basis of the eigenfunctions of the Schr\"odinger equation in the $x$ direction, \begin{equation} \tilde G_{\Lambda_1,\Lambda_2}(x,y)=\sum_n a_n(y) F_n(x) \end{equation} where \begin{equation} 4\[-\partial_x^2- \frac{2\hat{g}^2}{ \cosh x + \cos\phi}\] F_n(x)={E_n} F_n(x)\; \end{equation} and $a_n(y)$ has to satisfy $a_n''(y)=-E_n a_n(y)$. Since $\tilde G(x,y)$ is odd in $y$ we get \begin{equation}\la{Gxya} \tilde G_{\Lambda_1,\Lambda_2}(x,y)=\sum\hspace{-5mm}\int_n C_n(\Lambda_1,\Lambda_2)\left(e^{\sqrt{-E_n} \, y}-e^{-\sqrt{-E_n} \, y}\right) F_n(x)\;. \end{equation} In the above expression the sum runs over all bound states with $E_n < 0$, together with an integral over the continuum $E_n>0$ (see Fig.~\ref{fig:specschr}). \begin{figure}[t] \centering \includegraphics[scale=0.8]{spec1p5sh.pdf} \caption{ {\bf Spectrum of the Schr\"odinger problem} at $\phi=1.5$ for a range of values of the coupling. Solid lines show numerical data for the first few bound states. For small $\hat g$ there is only one bound state in the spectrum, but the number of bound states grows linearly with the coupling. Dashed lines show the analytic continuation of the levels in the coupling $\hat g$ beyond the point where they disappear from the bound state spectrum and become resonances (to be discussed in detail in section \ref{sec:qscexc}). } \label{fig:specschr} \end{figure} Next we should determine the coefficients $C_n(\Lambda_1,\Lambda_2)$; for that we consider the small $y$ limit. For small $y$ we see that $\tilde G(x,y)$ is almost constant inside the light cone ($+1$ for $y>0$ and $-1$ for $y<0$), i.e.\ for $\Lambda_1-\Lambda_2-2|y|<x<\Lambda_1-\Lambda_2+2|y|$, and is zero outside.
In other words for small $y$ we have \begin{equation} \tilde G_{\Lambda_1,\Lambda_2}(x,y)\simeq 4 y\delta(x-\Lambda_1+\Lambda_2) \label{eq:tilGdelta} \end{equation} At the same time, from the ansatz \eq{Gxya} we have, in the small $y$ limit, \begin{equation} \tilde G_{\Lambda_1,\Lambda_2}(x,y)\simeq 2y\sum\hspace{-5mm}\int_n C_n(\Lambda_1,\Lambda_2)\sqrt{-E_n} F_n(x)\;.\label{eq:tilGexp} \end{equation} Contracting equations (\ref{eq:tilGdelta}) and (\ref{eq:tilGexp}) with an eigenfunction $F_n(x)$ and comparing the results, we get \begin{equation} C_n(\Lambda_1,\Lambda_2)=\frac{2F_n(\Lambda_1-\Lambda_2)}{||F_n||^2\sqrt{-E_n}}\; . \end{equation} This results in the following final expression for $G$: \begin{equation}\la{G1234} G(\Lambda_1,\Lambda_2,\Lambda_3,\Lambda_4) =\sum\hspace{-5mm}\int_n \frac{4F_n(\Lambda_1-\Lambda_2)F_n(\Lambda_4-\Lambda_3)}{||F_n||^2\sqrt{-E_n}} \sinh\left(\sqrt{-E_n} \frac{\Lambda_1+\Lambda_2+\Lambda_3+\Lambda_4}{2}\right) \;. \end{equation} We will use this result in the next section to compute the two-point function in a certain regularisation, including the finite part. This will be needed for the normalisation of the 3-cusp correlator. \subsection{Two-point function with finite part}\label{sec:2ptcusp} Now let us study the two-cusp configuration shown in Fig.~\ref{fig:2pt}, regularised by cutting $\epsilon$-balls around each of the cusps. Here we show that the correlator has the expected space-time dependence of a two-point function with conformal dimension $\Delta = -\sqrt{-E_0}$. \begin{figure} \centering \includegraphics[scale=1.4]{arches.pdf} \caption{{\bf The 2-cusp correlator.} For regularisation we cut an $\epsilon$-ball around each of the cusps. The configuration is parameterised by the external angle $\phi$.
The result does not depend on $d$ (or equivalently $\chi$ in \eq{eq:param}) and is only a function of $x_{12}=|x_1-x_2|,\;\phi,\;\Delta$ and the regulator $\epsilon$.} \label{fig:2pt} \end{figure} In order to compute this quantity we need to work out which cut-offs in the parameters $s$ and $t$ appearing in (\ref{eq:zazb}) correspond to the $\epsilon$-regularisation. By imposing \begin{equation} | \zeta_{+}(-\Lambda_1)-z_1|=\epsilon\;\;,\;\; | \zeta_{+}(\Lambda_3)-z_2|=\epsilon\;\;,\;\; | \zeta_{-}(+\Lambda_2)-z_1|=\epsilon\;\;,\;\; | \zeta_{-}(-\Lambda_4)-z_2|=\epsilon \end{equation} we find (asymptotically for small $\epsilon$) \begin{equation} \Lambda_1=\Lambda_2=\Lambda_3=\Lambda_4=\log\(\frac{ x_{12}}{\epsilon} \), \;\;\;\; x_{12} = |z_1 - z_2 | ,\label{eq:lambdacutoff0} \end{equation} which allows us to write, using \eq{G1234} \beqa \langle W_{\epsilon, x_1, x_2} \rangle &=& G(\Lambda,\Lambda,\Lambda,\Lambda)\simeq \frac{2F^2_0(0)e^{2\sqrt{-E_0} \Lambda}}{||F_0||^2\sqrt{-E_0}} = -\frac{2F^2_0(0)}{||F_0||^2\Delta_0} \(\frac{\epsilon}{x_{12}} \)^{2\Delta_0}\; \eeqa where we use that for large $\Lambda$ only the ground state contributes. We use the notation \begin{equation} \Delta_0\equiv-\sqrt{-E_0}\; \end{equation} so that $\Delta_0$ is the usual cusp anomalous dimension. We see that the result for the $2$-cusp correlator takes the standard form $\frac{{\cal N}_{\hat g,\phi}^2}{x_{12}^{2\Delta_0}}$ with a rather non-trivial normalization coefficient \begin{equation} {\cal N}_{\hat g,\phi}=\epsilon^{\Delta_0}\frac{F_0(0)}{||F_0||}\sqrt{\frac{2}{-\Delta_0}}\;,\label{NDelta} \end{equation} which we will use to extract the structure constant from the 3-cusp correlator. \subsection{Relation to Q-functions} \label{sec:BaxtoSchrod} Here we describe a direct relation between solutions of the Schr\"odinger equation and the Q-functions. 
From the previous section we can identify $\Delta=-\sqrt{-E}$ resulting in \begin{equation}\label{eq:Schrodinger} F''(z) +\frac{2\hat g^2}{\cosh z+\cos\phi}F(z)=\frac{\Delta^2}{4}F(z)\;. \end{equation} In this section we will relate $F(z)$ to $q(u)$. The relation is very similar to that found previously for the $\phi=\pi$ case in \cite{Gromov:2016rrp}. For $\phi > 0$, the map is defined as follows \begin{equation} \frac{F(z)}{2\pi} = \, e^{-\Delta z/2 } \int_{{\bf |}} \, q(u) \, e^{w_{\phi}(z) \, u} \, \frac{du}{2\pi i u} \ \ ,\label{eq:qToF} \end{equation} where \begin{equation} \label{defw} e^{i w_{\phi}(z) } = \left(\frac{\cosh{\frac{z-i \phi}{2}} }{\cosh{\frac{z+ i \phi}{2}}}\right), \end{equation} and $q(u)\equiv q_+(u)$ is one of the solutions of the Baxter equation (\ref{Bax2p}), specified by the large $u$ asymptotics $q(u)\simeq u^\Delta e^{u\phi}$. We recall that we use the notation $\int_{{\bf |}}$ for the integration along a vertical line shifted to the right of the origin. For negative $\Delta$ the integral in \eq{eq:qToF} converges for any finite $z$, and we can shift the integration contour horizontally, as long as we do not cross the imaginary axis where the poles of $q(u)$ lie. Let us show that if $q$ satisfies the Baxter equation \eq{Bax2p}, then $F(z)$ computed from \eq{eq:qToF} satisfies the Schr\"odinger equation \eq{eq:Schrodinger}. Applying the derivative in $z$ twice to the relation (\ref{eq:qToF}) we find \beqa \nonumber &&F''(z) - \frac{\Delta^2}{4}F(z)\\ &&= \frac{e^{-\Delta z/2 } }{2 (\cosh(z) + \cos\phi )}\, \int_{{\bf |}} q(u) \,\left( ( D + D^{-1} ) + \frac{2 \Delta \sin\phi}{u} - 2 \, \cos\phi \right)[ u \, e^{u \, w_{\phi}(z)} ] \,du \label{eq:rhs} \eeqa where $D$ represents the shift operator $D [ f(u) ] = f(u+i)$. Shifting the integration variable and using the Baxter equation (\ref{Bax2p}), the rhs of (\ref{eq:rhs}) simplifies leading to (\ref{eq:Schrodinger}).
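As an independent numerical check of this construction, the ground state of the Schr\"odinger problem \eq{eq:Schrodinger} can be found by direct finite-difference diagonalization; at weak coupling the resulting $\Delta_0$ should approach the one-loop value $\Delta=-\hat g^2\,4\phi/\sin\phi+{\cal O}(\hat g^4)$ obtained earlier from the Baxter equation. A minimal sketch (our own illustration, assuming numpy and scipy are available; the grid and box sizes are ad hoc, chosen large enough for the shallow, wide bound state):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

ghat, phi = 0.1, 1.0      # weak coupling; arbitrary angle 0 < phi < pi
L, N = 600.0, 6000        # large box: the weak-coupling bound state is very wide
x, dx = np.linspace(-L/2, L/2, N, retstep=True)

# discretization of 4[-d^2/dx^2 - 2 ghat^2/(cosh x + cos phi)] F = E F
diag = 4*(2/dx**2 - 2*ghat**2/(np.cosh(x) + np.cos(phi)))
off = -4/dx**2*np.ones(N - 1)
E0 = eigh_tridiagonal(diag, off, eigvals_only=True,
                      select='i', select_range=(0, 0))[0]

Delta0 = -np.sqrt(-E0)                      # cusp dimension Delta_0 = -sqrt(-E_0)
Delta_1loop = -4*phi*ghat**2/np.sin(phi)    # one-loop prediction
assert E0 < 0
assert abs(Delta0 - Delta_1loop) < 0.1*abs(Delta_1loop)
print(f"Delta_0 = {Delta0:.5f} vs one loop {Delta_1loop:.5f}")
```

At $\hat g=0.1$ the two values agree to within a few percent, the residual difference being the ${\cal O}(\hat g^4)$ correction to the one-loop formula.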
Notice that this relation between the Baxter and Schr\"odinger equations also holds off-shell, i.e. when $\Delta$ is a generic parameter and the quantization condition (\ref{qquant}) need not be satisfied. In Appendix \ref{app:quant} we show that the quantization condition (\ref{qquant}) is equivalent to the condition that $F(z)$ is a square-integrable function, so that it corresponds to a bound state of the Schr\"odinger problem. \paragraph{Reality.} Let us show that the transform (\ref{eq:qToF}) defines a real function $F(z)$. Here we assume the quantization condition to be satisfied. Taking the complex conjugate of (\ref{eq:qToF}) we find \begin{equation} \frac{F^*(z)}{2\pi} = \, e^{-\Delta z/2 } \int_{{\bf |}}\,\bar{q}(u) \, e^{w_{\phi}(z) \, u} \, \frac{du}{2\pi i u} .\label{eq:qToF*} \end{equation} A precise relation between $q(u)$ and $\bar{q}(u)$ is discussed in Appendix~\ref{app:qsc}. In particular, from (\ref{q1b}), (\ref{adif}) we see that, when the quantization conditions are satisfied, \begin{equation} \bar{q}(u) = q(u) + \mathcal{O}( e^{- 2 \pi u} ) + \mathcal{O}( e^{ - \phi u } ), \label{qqbar} \end{equation} for large $\text{Re} \, u$. Shifting the contour of integration to the right we see that the contribution of the correction terms in (\ref{qqbar}) is irrelevant, and therefore the integral transforms involving $\bar{q}(u)$ and $q(u)$ are equivalent. This shows that $F^*(z) = F(z)$. \paragraph{Inverse map.} The transform (\ref{eq:qToF}) can be inverted as follows: \begin{equation} \frac{q(u)}{u} = \frac{\sin{\phi}}{2 \pi} \, \int_{i \pi -i \phi}^{+\infty} \frac{dz \; e^{\Delta z/2 - w_{\phi}(z) u }}{\cosh{z} + \cos\phi } \, \, F(z) .\label{eq:Ftoq} \end{equation} The above integral representation converges for $\text{Im}(u) > 0$ and $\Delta < 0$.
Assuming $F(z)$ is a solution to the Schr\"odinger equation with decaying behaviour $F(z) \sim e^{ \Delta z/2 }$ at positive infinity $z \rightarrow + \infty$, this map generates the solution to the Baxter equation $q(u)$. When additionally $F(z)$ decays at $z \rightarrow - \infty$, $q(u)$ satisfies the quantization conditions. \paragraph{Relation to the norm of the wave function.} From the Schr\"odinger equation \eq{eq:Schrodinger} we can use standard perturbation theory to immediately write \begin{equation}\la{ddF} \frac{ \partial \Delta}{\partial\hat g}=\frac{8\hat g}{\Delta}\frac{1}{||F||^2}\int \frac{F^2(z)}{\cosh z+\cos \phi}dz \ . \end{equation} We will rewrite the numerator in terms of the Q-function. For that we use that $F_n(z)$ is either an even or an odd function depending on the level $n$, so that we can write $F^2(z)=(-1)^n F(z)F(-z)$, and then use \eq{eq:qToF}. The advantage of writing the product in this way is that the factor $e^{+\Delta z/2 }$ in \eq{eq:qToF} cancels, giving \begin{equation} \frac{1}{4\pi^2}\int \frac{F_n^2(z)}{\cosh z+\cos\phi}\,dz = (-1)^n \int_{{\bf |}} \frac{du}{2\pi i} \int_{{\bf |}} \frac{dv}{2\pi i} \int_{-\infty}^{\infty}dz \,\frac{ q_n(u) }{u}\frac{ q_n(v) }{v}\, \frac{e^{w_{\phi}(-z) \, u}e^{w_{\phi}(+z) \, v}}{{\cosh z+\cos\phi}} \;\;\;\; .\label{eq:qToF2} \end{equation} Next we notice that the integration in $z$ can be performed explicitly \begin{equation} K(u-v)\equiv \int_{-\infty}^\infty\frac{ e^{w_{\phi}(-z) \, u}e^{w_{\phi}(+z) \, v}}{\cosh z+\cos \phi}dz = \frac{ e^{\phi (u-v)}-e^{\phi (v-u)}}{(u-v)\sin\phi}\;. \end{equation} Note that the function $K(u-v)$ is not singular by itself, as the pole at $u=v$ cancels. We are going to get rid of the integral in $u$ in \eq{eq:qToF2}; for that we notice that we can move the contour of integration in $v$ slightly to the right of the contour in $u$, and after that we can split the two terms in $K(u-v)$.
The first term $\sim \frac{e^{\phi(u-v)}}{u-v}$ decays for ${\rm Re}\;v\to+\infty$ and we can shift the integration contour in $v$ to infinity, getting zero. Similarly the second term $\sim \frac{e^{\phi(v-u)}}{u-v}$ decays for ${\rm Re}\;u\to+\infty$ and we can move the integration contour in $u$ to infinity, but this time on the way we pick up a pole at $u=v$. That is, only this pole contributes to the result, giving \begin{equation} \label{QFder2} \frac{1}{4\pi^2}\int \frac{F_n^2(z)}{\cosh z+\cos\phi}\,dz =\frac{(-1)^n}{\sin\phi} \int_{{\bf |}} \frac{q_n^2(v)}{v^2}\frac{dv}{2\pi i}\;. \end{equation} At the same time, above in \eq{eq:dddg} we have already derived an expression for $\partial\Delta/\partial\hat g$ in terms of the Q-function. Comparing it with \eq{ddF} and using \eq{QFder2} we conclude that \begin{equation} \boxed{ \frac{1}{4\pi^2}||F_n||^2=-(-1)^n\frac{2}{\Delta_n}\int_{{\bf |}} \frac{q_n^2}{u}\frac{du}{2\pi i}\;. \label{eq:normQ} } \end{equation} We will use the relations between $q$ and $F$ to rewrite the 3-cusp correlator in terms of Q-functions in the next section. \section{Three-cusp structure constant} \label{sec:3pt1} In this section we derive our main result -- an expression for the structure constant. First, we compute it for the case when only one of the $3$ couplings is nonzero. We refer to this case as the Heavy-Light-Light (HLL) correlator\footnote{The name is justified since, in analogy with the case of local operators, the scaling dimensions of the cusps become large at strong coupling. }. Then we generalize the result to two non-zero couplings; this case we call the Heavy-Heavy-Light (HHL) correlator. In both cases we find an enormous simplification when the result is written in terms of the Q-functions. We postpone the Heavy-Heavy-Heavy (HHH) case for future investigation.
\subsection{Set-up and parameterization}\label{sec:setup3pt} \begin{figure}[t] \centering \includegraphics[scale=0.7]{triangle.pdf} \caption{The general configuration of the Wilson loop we consider (the $x_1x_2x_3$ triangle) is built out of $3$ circular arcs belonging to the same plane. The configuration is parameterized by $3$ external angles $\phi_i$, coordinates of the vertices $x_i$ and $3$ scalar products of the unit vectors attached to the scalars inside the Maldacena-Wilson loop (or equivalently $3$ couplings $\hat g_a$). Pairs of arcs continued outside the triangle intersect again at $A$, $B$ and $C$. The renormalized 3-cusp correlator has the typical CFT dependence on the positions of the vertices, with a structure constant which depends only on the $3$ angles and $3$ couplings. In this paper we only consider the case with two non-zero couplings.} \label{fig:triangle} \end{figure} In this section we describe the $3$-cusp Wilson loop configuration, parameterization and regularisation, which we use in the rest of the paper. The Wilson loop is confined to a 2D plane and consists of $3$ circular arcs coming together at $3$ cusps (see Fig.~\ref{fig:triangle}). The $3$ angles $\phi_i$, $i=1,2,3$ can be changed independently. The geometry is completely specified by the angles and the positions of the cusps $x_i$, $i=1,2,3$. In the rest of this paper, we consider the following ``triangular'' inequalities on the angles: \beqa &&\phi_1 + \phi_2 > \phi_3 , \,\,\, \phi_3 + \phi_2 > \phi_1 , \,\,\, \phi_3 + \phi_1 > \phi_2 ,\,\,\, \label{eq:ineq} 0 < \phi_i < \pi . \eeqa To understand the geometric meaning of these relations, consider the extension of the arcs forming the Wilson loop past the points $\vec{x}_i$: this defines three virtual intersections $A$, $B$, $C$ (see Fig.~\ref{fig:triangle}). The inequalities (\ref{eq:ineq}) mean that $A$, $B$, $C$ are all outside the Wilson loop. Our results will hold in this kinematic regime.
In the limit where we approach the boundary of the region (\ref{eq:ineq}) our result significantly simplifies; this limit will be considered in Sec.~\ref{sec:smallPhi}, where in particular we will reproduce the results of \cite{Kim:2017sju} for the case $\phi_1=\phi_2=\phi_3=0$. Now we describe a nice way to parametrize the Wilson lines. Consider the two arcs departing from $\vec{x}_1$. Extending these arcs past the points $\vec{x}_2$, $\vec{x}_3$, they define a second intersection point $A$. By making a special conformal transformation, we map $A$ to infinity and both arcs connecting $x_1$ with $A$ to straight lines, which we can then map onto a cylinder as in \eq{eq:param}. The most convenient parametrization corresponds to the coordinate along the cylinder. By mapping $A$ back to some finite position we get a rather complicated but explicit parametrization like the one we used in Sec. \ref{sec:cuspdiv}. It is again very convenient to use complex coordinates, similarly to \eq{eq:param}, \begin{equation} \vec{x} = ( \text{Re}(z) , \text{Im}(z) , 0, 0 ) , \end{equation} so that the cusp points are $\vec{x}_i = ( \text{Re}(z_i) , \text{Im}(z_i) , 0, 0 ) $, $i=1,2,3$. For the arcs departing from $z_1$ we obtain, as described above, the following representation \beqa\label{eq:zeta1213} \zeta_{12}( s ) &=& z_1 - \frac{z_{12} \, z_{13} \, e^s}{ e^{s} \, z_{13} +\frac{i}{2 \sin\phi_1 } \, z_{23} \, (1-e^{s}) \, ( -e^{i \phi_1} + e^{-i (\phi_3-\phi_2)} )} , \\ \zeta_{13}( t ) &=& z_1 -\frac{z_{12} \, z_{13} \, e^t}{ e^{t} \, z_{12} + \frac{i }{2 \sin\phi_1 } \, z_{23} \, (1-e^{t}) \, (-e^{-i \phi_1 } + e^{-i (\phi_3 - \phi_2 )} ) } ,\nonumber \eeqa where $z_{ab} = z_a - z_b$. Notice that we have slightly redefined the parameters such that $s=0$ and $t=0$ correspond to the other two cusp points: $\zeta_{12}(0) = z_2$, $\zeta_{13}(0) = z_3$, while $\zeta_{12}(-\infty) = \zeta_{13}(-\infty) = z_1$, and $\zeta_{12}(\infty) = \zeta_{13}(\infty) = A$.
By a cyclic permutation of all indices, we define similar parametrizations for the other arcs. Notice that, in this way, all arcs are parametrized in two distinct ways, e.g. the same arc connecting $\vec{x}_1$ and $\vec{x}_2$ is described by the functions $\zeta_{12}(s)$ and $\zeta_{21}(t)$, which are different. The main advantage of the parametrization (\ref{eq:zeta1213}) is that the propagator between the two arcs is very simple: \begin{equation} \frac{|\dot {\vec{x}}_{12}(s_1)||\dot {\vec{x}}_{13}(t_1)|} {|\vec{x}_{12}(s_1)-\vec x_{13}(t_1)|^2}=\frac{1/2}{ \cosh \left(s_1-t_1-\delta x_1\right)+\cos \phi_1} \ . \label{eq:propagator} \end{equation} However, since we decided to shift the parameters so that $s=0$ gives $\vec x_2$ and $t=0$ gives $\vec x_3$, the propagator appears to be shifted compared to \eq{eq:goodpropagator} by the quantity \begin{equation} \delta x_1 = \log\frac{\sin \frac{1}{2} ({\phi_1}-{\phi_2}+{\phi_3})}{ \sin \frac{1}{2} ({\phi_1}+{\phi_2}-{\phi_3})}\; ,\label{eq:deltax1} \end{equation} with $\delta x_2$ and $\delta x_3$ defined similarly by cyclic permutations of the indices $1,2,3$. We now see the importance of the inequalities \eq{eq:ineq}: they ensure that the $\delta x_i$ are real. \paragraph{Notation.} Below we consider correlators where the ladder limit is taken independently for the three cusps. Namely, by appropriately choosing the polarization vectors $\vec{n}_i$ on the three lines, we define effective couplings \begin{equation} \hat{g}_i^2 = g^2 \; \frac{(\vec{n}_{i-1,i} \cdot \vec{n}_{i,i+1})}{2}, \;\;\;\;\;\; g \to 0 , \end{equation} for the three cusps $i=1,2,3$.
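The endpoint statements below \eq{eq:zeta1213} and the propagator identity \eq{eq:propagator} are straightforward to check numerically. A minimal Python sketch (the helper names and the sample configuration, chosen to satisfy \eq{eq:ineq}, are ours; derivatives are taken by central differences):

```python
import cmath, math

def arcs(z, phi):
    """The two arc parametrizations zeta12(s), zeta13(t) of (eq:zeta1213)."""
    z1, z2, z3 = z
    p1, p2, p3 = phi
    z12, z13, z23 = z1 - z2, z1 - z3, z2 - z3
    a = 1j / (2 * math.sin(p1)) * z23 * (-cmath.exp(1j * p1) + cmath.exp(-1j * (p3 - p2)))
    b = 1j / (2 * math.sin(p1)) * z23 * (-cmath.exp(-1j * p1) + cmath.exp(-1j * (p3 - p2)))
    zeta12 = lambda s: z1 - z12 * z13 * math.exp(s) / (math.exp(s) * z13 + a * (1 - math.exp(s)))
    zeta13 = lambda t: z1 - z12 * z13 * math.exp(t) / (math.exp(t) * z12 + b * (1 - math.exp(t)))
    return zeta12, zeta13

z = (0.0 + 0.0j, 2.0 + 0.0j, 1.0 + 1.5j)   # sample cusp positions z1, z2, z3
phi = (1.0, 0.8, 0.9)                       # sample angles obeying (eq:ineq)
z12_, z13_ = arcs(z, phi)

# endpoints: zeta12(0) = z2, zeta13(0) = z3, both arcs start at z1 ...
assert abs(z12_(0) - z[1]) < 1e-12 and abs(z13_(0) - z[2]) < 1e-12
assert abs(z12_(-40) - z[0]) < 1e-12
# ... and meet again at the same point A as s, t -> +infinity
assert abs(z12_(40) - z13_(40)) < 1e-9

# the propagator identity (eq:propagator)
p1, p2, p3 = phi
dx1 = math.log(math.sin((p1 - p2 + p3) / 2) / math.sin((p1 + p2 - p3) / 2))
d = lambda f, x, h=1e-6: (f(x + h) - f(x - h)) / (2 * h)  # central difference
for s, t in [(-0.3, -1.1), (-2.0, -0.5)]:
    lhs = abs(d(z12_, s)) * abs(d(z13_, t)) / abs(z12_(s) - z13_(t)) ** 2
    rhs = 0.5 / (math.cosh(s - t - dx1) + math.cos(p1))
    assert abs(lhs - rhs) < 1e-7 * rhs
```

The same check works for any positions and any angles in the region \eq{eq:ineq}, reflecting the fact that the identity is a property of the parametrization alone.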
Correspondingly, in this section we use the notation\footnote{This should not be confused with the notation for the scaling dimensions for excited states $\Delta_n$ used in other parts of the paper.} $\Delta_{i,0} $, $i=1,2,3$, to denote the scaling dimensions corresponding to the ground state for the three cusps (in the setup we consider we always have $\hat g_3=0$, $ \Delta_{3,0}=0$). The extension to excited states will be discussed in section~\ref{sec:excited}. The Q-functions describing the ground state for the first and second cusps will be denoted as $q_i(u)$, $i=1,2$, respectively. Explicitly, $q_i(u)$ is the solution of the Baxter equation $q_+(u)$, evaluated at parameters $\hat g= \hat g_i$, $\Delta = \Delta_{i,0}$ and $\phi = \phi_i$. \subsection{Regularization} The $3$ cusp correlator is UV divergent. To regularize the divergence we are going to cut $\epsilon$-circles around each of the cusps\footnote{See \cite{Dorn:2015bfa} for a general argument why the divergence depends on the geometry only through the angles $\phi_i$.} -- the same way as we regularized the 2-cusp correlator in the previous section. This will set a range for the parameters $s_i$ and $t_i$ entering the parametrizations $\zeta_{ij}( s_i )$, $\zeta_{ij}( t_i )$ defined above. Namely from (\ref{eq:zeta1213}) it is easy to find that instead of running from $-\infty$ they now start from a cutoff: \begin{equation} s_i\in[-\Lambda_{s_i},0]\;\;,\;\;t_i\in [-\Lambda_{t_i},0] \end{equation} where \begin{equation} \Lambda_{s_1}=\log \left(\frac{x_{12} x_{13}\sin\phi _1}{x_{23} \epsilon \sin \left(\frac{1}{2} \left(\phi _1-\phi _2+\phi _3\right)\right) }\right)\;\;,\;\; \Lambda_{t_1}=\log \left(\frac{x_{12} x_{13}\sin\phi _1}{x_{23} \epsilon \sin \left(\frac{1}{2} \left(\phi _1+\phi _2-\phi _3\right)\right) }\right)\;\;.\label{eq:LambdaHLL} \end{equation} All other $\Lambda_{s_i}$ and $\Lambda_{t_i}$ for $i=2,3$, can be obtained by cyclic permutation of the indices $1,2,3$. 
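One can also verify numerically that the cutoffs \eq{eq:LambdaHLL} indeed correspond to cutting an $\epsilon$-circle around the first cusp, i.e. that $|\zeta_{12}(-\Lambda_{s_1})-z_1|=\epsilon$ up to $O(\epsilon^2)$ corrections. A sketch (sample values are ours):

```python
import cmath, math

z1, z2, z3 = 0.0 + 0.0j, 2.0 + 0.0j, 1.0 + 1.5j   # sample cusp positions
p1, p2, p3 = 1.0, 0.8, 0.9                          # sample angles obeying (eq:ineq)
x12, x13, x23 = abs(z1 - z2), abs(z1 - z3), abs(z2 - z3)
eps = 1e-5

# the arcs (eq:zeta1213), redefined here so the snippet is self-contained
a = 1j / (2 * math.sin(p1)) * (z2 - z3) * (-cmath.exp(1j * p1) + cmath.exp(-1j * (p3 - p2)))
b = 1j / (2 * math.sin(p1)) * (z2 - z3) * (-cmath.exp(-1j * p1) + cmath.exp(-1j * (p3 - p2)))
zeta12 = lambda s: z1 - (z1 - z2) * (z1 - z3) * math.exp(s) / (math.exp(s) * (z1 - z3) + a * (1 - math.exp(s)))
zeta13 = lambda t: z1 - (z1 - z2) * (z1 - z3) * math.exp(t) / (math.exp(t) * (z1 - z2) + b * (1 - math.exp(t)))

# the cutoffs (eq:LambdaHLL)
Ls1 = math.log(x12 * x13 * math.sin(p1) / (x23 * eps * math.sin((p1 - p2 + p3) / 2)))
Lt1 = math.log(x12 * x13 * math.sin(p1) / (x23 * eps * math.sin((p1 + p2 - p3) / 2)))

# the endpoints s = -Lambda_{s_1}, t = -Lambda_{t_1} sit at distance ~ epsilon from the cusp
assert abs(abs(zeta12(-Ls1) - z1) / eps - 1) < 1e-3
assert abs(abs(zeta13(-Lt1) - z1) / eps - 1) < 1e-3
```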
We note that \begin{equation} \la{lambdashift} \Lambda_{s_i}+\delta x_i=\Lambda_{t_i}\;. \end{equation} \subsection{Heavy-Light-Light correlator}\label{sec:HLL} \begin{figure} \centering \includegraphics{triangleHLL.pdf} \caption{The HLL correlator corresponds to the situation when the couplings $\hat g_2$ and $\hat g_3$ are zero. In this case there is only one type of propagator to re-sum.} \label{fig:HLL} \end{figure} Now we consider the simplest example of a three-point function in the ladder limit, where we have only one non-vanishing effective coupling, $\hat{g}_1$ for the cusp at $\vec{x}_1$, with $\hat{g}_2 = \hat{g}_3=0$. Correspondingly, we will have $\Delta_{2,0} = \Delta_{3,0} = 0$, so that this can be considered as a correlator between one nontrivial operator and two protected operators (see Fig.~\ref{fig:HLL}). For simplicity we will denote $\Delta_{1,0}$ as just $\Delta_0$ in this section. We start by defining a regularized correlator, denoted $Y_{\vec{x}_1 , \epsilon }( \vec{x}_2 , \vec{x}_3 )$, which is obtained by cutting the integration along the Wilson lines at a distance $\epsilon$ from $\vec{x}_1$. To compute this observable we consider the sum of all ladder diagrams built around the first cusp and covering the Wilson lines $(12)$, $(13)$ up to the points $\vec{x}_2$, $\vec{x}_3$, respectively, see Fig.~\ref{fig:HLL}. As discussed in section~\ref{sec:BS}, this is described by the Bethe-Salpeter equation, which takes a very convenient form using the parameterization introduced in the previous section for the Wilson lines departing from $\vec{x}_1$: $\vec{\gamma}_{12}(s) = ( \text{Re}(\zeta_{12}(s) ), \text{Im}(\zeta_{12}(s) ), 0, 0 )$, and $\vec{\gamma}_{13}(t) = ( \text{Re}(\zeta_{13}(t) ), \text{Im}(\zeta_{13}(t) ), 0, 0 )$. The appropriate integration range for cutting an $\epsilon$-circle around $\vec{x}_1$ is $s \in [-\Lambda_{s_1}, 0]$, $t \in [- \Lambda_{t_1}, 0]$, with cutoffs defined in (\ref{eq:LambdaHLL}).
However, in order to make a connection with $G(\Lambda_1,\Lambda_2,\Lambda_3,\Lambda_4)$ defined in section~\ref{sec:BS}, we have to take into account the fact that the propagator in \eq{eq:propagator} is shifted by $\delta x_1$. This means that we have to redefine $s\to s+\delta x_1$, which will shift the range to $s \in [-\Lambda_{s_1}-\delta x_1,-\delta x_1]$; furthermore, due to \eq{lambdashift}, the range becomes $s \in [-\Lambda_{t_1},-\delta x_1]$. From that we read off the values of $\Lambda_k$ and find \begin{equation}\la{Yground} Y_{\vec{x}_1, \epsilon}(\vec{x}_2 , \vec{x}_3 ) = G(\Lambda_{t_1},\Lambda_{t_1},-\delta x_1,0). \end{equation} Again, at large $\Lambda$'s only the ground state survives and we get \begin{equation} Y_{\vec{x}_1, \epsilon}(\vec{x}_2 , \vec{x}_3 ) \simeq \frac{2F_0(0)F_0(\delta x_1)}{-||F_0||^2\Delta_0} \exp\left(-\Delta_0 \frac{\Lambda_{t_1}+\Lambda_{s_1}}{2}\right) . \end{equation} Substituting the values of $\Lambda_{s_1}$ and $\Lambda_{t_1}$ from \eq{eq:LambdaHLL} leads to \begin{equation} Y_{\vec{x}_1, \epsilon}(\vec{x}_2 , \vec{x}_3 ) = \frac{2F_0(0)F_0(\delta x_1)}{-||F_0||^2\Delta_0} \, \epsilon ^{\Delta _0} \, \frac{ ( L_{123} )^{ {\Delta _0} } }{ x_{12}^{\Delta _0} x_{13}^{\Delta _0} x_{23}^{-\Delta _0}} ,\label{eq:unnormY} \end{equation} which naturally has the structure of the $3$-point correlator in a CFT, where we have defined \begin{equation} L_{123} = \frac{ \sqrt{ \sin\frac{1}{2}( \phi_1 + \phi_2 - \phi_3) \, \sin\frac{1}{2}( \phi_1 - \phi_2 + \phi_3)} }{\sin\phi_1} .\label{eq:L123} \end{equation} Finally, to extract the structure constant we have to divide (\ref{eq:unnormY}) by the two-point function normalization \eq{NDelta}, $\mathcal{N}_{1} = \epsilon^{\Delta_0}\frac{F_0(0)}{||F_0||}\sqrt{\frac{2}{-\Delta_0}}$, so we get: \begin{equation} C_{123}^{\bullet\circ\circ} = \left(\frac{-2}{ \Delta_0 \, ||F_{0}||^2 }\right)^{\frac{1}{2} } \, ( L_{123} )^{ {\Delta _0}} \; F_{0}(\delta x_1) .\label{eq:CHLL} \end{equation} Let us now
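The bookkeeping that turns the exponent $(\Lambda_{t_1}+\Lambda_{s_1})/2$ into the CFT-like structure of \eq{eq:unnormY}, as well as the relation \eq{lambdashift}, can be checked numerically; a short sketch (all sample values are ours):

```python
import math

p1, p2, p3 = 1.0, 0.8, 0.9        # sample angles obeying (eq:ineq)
x12, x13, x23 = 2.0, 1.3, 1.7     # sample distances between the cusps
eps, Delta0 = 1e-3, -0.7          # sample cutoff and ground-state dimension

dx1 = math.log(math.sin((p1 - p2 + p3) / 2) / math.sin((p1 + p2 - p3) / 2))
Ls1 = math.log(x12 * x13 * math.sin(p1) / (x23 * eps * math.sin((p1 - p2 + p3) / 2)))
Lt1 = math.log(x12 * x13 * math.sin(p1) / (x23 * eps * math.sin((p1 + p2 - p3) / 2)))
L123 = math.sqrt(math.sin((p1 + p2 - p3) / 2) * math.sin((p1 - p2 + p3) / 2)) / math.sin(p1)

assert abs(Ls1 + dx1 - Lt1) < 1e-12          # the relation (lambdashift)

lhs = math.exp(-Delta0 * (Lt1 + Ls1) / 2)    # exponential factor of the resummed ladders
rhs = eps ** Delta0 * L123 ** Delta0 * x23 ** Delta0 / (x12 ** Delta0 * x13 ** Delta0)
assert abs(lhs / rhs - 1) < 1e-9             # the CFT-like form of (eq:unnormY)
```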
write the result in terms of the Q-functions. Using (\ref{eq:qToF}) to evaluate the shifted wave function in (\ref{eq:CHLL}), we already notice a nice simplification: \begin{equation} w_{\phi_1}( \pm \delta x_1 ) = \pm (\phi_2 - \phi_3 ), \label{eq:wphi1} \end{equation} therefore (using also parity of the ground-state wave function) \begin{equation} F_{0}(-\delta x_1 ) = F_{0}(+\delta x_1 )= -i \, e^{- \frac{ \delta x_1}{2} \, \Delta_{0} } \, \int_{{\bf |}} \frac{q_{1}(u)}{u} \, e^{(\phi_2 - \phi_3) u} \, du \end{equation} and taking into account also the norm formula (\ref{eq:normQ}), we find \begin{equation} C^{\bullet \circ \circ}_{123}= \,(K_{123} )^{\Delta_{0}} \, \frac{ -i \, \int_{{\bf |}} \frac{q_{1}(u)}{u} \, e^{(\phi_2 - \phi_3) u} \, du }{\left( -2 \pi i \, \int_{{\bf |}} \frac{q_{1}^2(u) \, }{u} \, du \right)^{\frac{1}{2}}} , \end{equation} where the constant $K_{123}$ is defined as \begin{equation} K_{123} = L_{123} \, e^{-\frac{\delta x_1}{2}} = \frac{ \sin\frac{1}{2}( \phi_1 + \phi_2 - \phi_3) }{\sin\phi_1} .\label{eq:K123} \end{equation} Using the parity of the ground state wave function $F_0$, it can be verified that the result is symmetric in the two angles $\phi_2 \leftrightarrow \phi_3$. We see that the result takes a much simpler form in terms of the Q-functions. The structure becomes even clearer when written in terms of the bracket $\br\cdot$ defined in \eq{eq:thebracket}: \begin{equation} \boxed{ C^{\bullet \circ \circ}_{123}=\frac{\br{q_1e^{u(\phi_2-\phi_3)}}}{\sqrt{\br{q_1^2}}} }\; , \label{CHLL1} \end{equation} which is amazingly simple! \subsection{Heavy-Heavy-Light correlator} \begin{figure} \centering \includegraphics{triangleHHL.pdf} \caption{The HHL correlator. In this case there are two types of propagators since two couplings are non-zero.} \label{fig:my_label} \end{figure} Now, we switch on the effective couplings $\hat{g}_i$, $i=1,2$ for both the first and the second cusp.
This means that this observable is defined perturbatively by Feynman diagrams with two kinds of ladders built around the cusps $\vec{x}_1$ and $\vec{x}_2$, see Fig.~\ref{fig:my_label}. As in the previous section, let us denote by $ Y_{ \vec{x}_1 , \epsilon}( \vec{x}_2 , \vec{x}_3 ) $ the sum of all ladders built around the cusp point $\vec{x}_1$, with a cutoff at distance $\epsilon$ from the cusp. We introduce a similar notation for the ladders built around the second cusp. The sum of all diagrams contributing to the $\epsilon$-regularized Heavy-Heavy-Light correlator can be organized as follows: \begin{equation} \label{Wsum} \begin{array}{rccccc} W^{{\bullet \bullet \circ} , \, \epsilon}_{123} &=& \underbrace{\sum\limits_{\text{propagators only around $2$}}}_{Y_{\vec{x}_2, \epsilon}( \vec{x}_3 , \vec{x}_1 )} &+&\underbrace{\sum\limits_{\text{diagrams with at least one propagator around $1$}}}_{\left( W^{{\bullet \bullet \circ} , \epsilon}_{123} \right)_1} \end{array} \end{equation} where the part $ \left( W^{{\bullet \bullet \circ} , \epsilon}_{123} \right)_1 $ represents the sum of all diagrams with at least one propagator around the cusp $x_1$. As we are about to show, the leading UV divergence comes only from the connected part, which behaves as $\sim \epsilon^{\Delta_{1,0} + \Delta_{2,0} }$. Since the disconnected contributions in (\ref{Wsum}) have a milder divergence $\sim \epsilon^{\Delta_{i,0}}$ $(i=1,2)$, we can drop them, as they are irrelevant to the definition of the renormalized structure constant. \begin{figure} \centering \includegraphics{triangleHHLhl.pdf} \caption{We split the propagators into two groups by explicitly writing the last propagator between $\vec \gamma_{12}$ and $\vec \gamma_{13}$. Then we re-sum the propagators surrounding cusp $x_2$ into $Y_{\vec x_2}(\vec x_3,\vec \gamma_{12})$ and those around $x_1$ into $Y_{\vec x_1}(\vec \gamma_{12},\vec \gamma_{13})$.
} \label{fig:triangleHHLhl} \end{figure} As illustrated in Fig.~\ref{fig:triangleHHLhl}, the main contribution can be computed as follows: \beqa \left( W^{{\bullet \bullet \circ} , \epsilon}_{123} \right)_1 = \int_{ \vec{x}_1 + O(\epsilon) }^{ \vec{x}_2 + O(\epsilon) } d | \vec{\gamma}_{12} | \int_{ \vec{x}_1 + O(\epsilon) }^{ \vec{x}_3 } d | \vec{\gamma}_{13}|\; Y_{ \vec{x}_1 , \epsilon }( \vec{\gamma}_{12} , \vec{\gamma}_{13} ) \, \frac{1}{| \vec{\gamma}_{12} - \vec{\gamma}_{13} |^2 }\, Y_{ \vec{x}_2 , \epsilon}(\vec{x}_{3} , \vec{\gamma}_{12} ) , \nonumber \\ \label{eq:sumHHL} \eeqa where we denote by $Y_{ \vec{x}_1 , \epsilon }( \vec{\gamma}_{12} , \vec{\gamma}_{13} )$ the sum of all ladder diagrams up to the points $\vec{\gamma}_{12}$, $\vec{\gamma}_{13}$ on the arcs $(12)$, $(13)$, respectively (and similarly for $Y_{ \vec{x}_2 , \epsilon }( \vec{x}_3 , \vec{\gamma}_{12})$). To compute the connected integral explicitly we choose the following parametrization for the arcs $(12)$, $(13)$: \beqa \vec{\gamma}_{12}(s) &=& \left( \text{Re}( \zeta_{12}(s) ), \text{Im}(\zeta_{12}(s) ) , 0, 0 \right), \label{eq:gammaexpli}\\ \vec{\gamma}_{13}(t) &=& \left( \text{Re}( \zeta_{13}(t) ), \text{Im}(\zeta_{13}(t) ) , 0, 0 \right), \eeqa where the functions $\zeta_{ij}$ are again the ones we defined above in section~\ref{sec:setup3pt}. The function $Y_{\vec{x}_1, \epsilon}( \vec{\gamma}_{12}(s) ; \vec{\gamma}_{13}(t) )$ is given by the solution to the Bethe-Salpeter equation with shifted propagator (\ref{eq:propagator}), where the integration range is $s_1 \in [-\Lambda_{s_1} , s]$, $t_1 \in [-\Lambda_{t_1}, t]$.
Exactly as described in section~\ref{sec:HLL}, redefining the parameters we find, in terms of the amputated four-point function $G(\Lambda_1, \dots, \Lambda_4)$: \begin{equation} Y_{\vec{x}_1, \epsilon}( \vec{\gamma}_{12}(s) ; \vec{\gamma}_{13}(t) ) = G_{1}( \Lambda_{t_1} , \Lambda_{t_1} , s - \delta x_1 , t ), \end{equation} where $\delta x_1$ is defined in (\ref{eq:deltax1}), and for $\epsilon \rightarrow 0$ we have \begin{equation} Y_{\vec{x}_1, \epsilon}( \vec{\gamma}_{12}(s) ; \vec{\gamma}_{13}(t) ) \sim \left(\frac{2F_{1,0}(0)}{-||F_{1,0}||^2\Delta_{1,0}} \right) \, \left( \frac{ \epsilon \, L_{123} \; x_{23} }{x_{12} \, x_{13} } \right)^{\Delta_{1,0} } \, e^{-\frac{s + t}{2} \, \Delta_{1,0} } \, F_{1,0}(-\delta x_1 + s - t ),\label{eq:divY1} \end{equation} where $L_{123}$ is defined in (\ref{eq:L123}). The other ingredient appearing in (\ref{eq:sumHHL}) is $Y_{\vec{x}_2, \epsilon }( \vec{x}_3 , \vec{\gamma_{12}}(s) ) $. Computing this quantity is slightly more complicated, since the ladders built around the second cusp point $\vec{x}_2$ are described most naturally in terms of a different parametrization, which uses the functions $\zeta_{21}(t_2)$, $\zeta_{23}(s_2)$ to parametrize the arcs $(12)$, $(23)$. In fact, it is only in the variables $s_2$ and $t_2$ that the propagator takes the simple form (\ref{eq:propagator}), with $\delta x_1 \rightarrow \delta x_2$. Therefore we need to relate the two alternative parametrizations, $\zeta_{21}(t_2)$ vs $\zeta_{12}(s_1)$, for the line $(12)$.
To this end we introduce the transition map $T_{12}(s)$: \begin{equation} \zeta_{12}(s) = \zeta_{21}( T_{12}(s) ), \end{equation} which is given explicitly by \begin{equation} e^{T_{12}(s) } = \frac{(1 - e^s)}{1 - e^s\frac{\cos\phi_3 - \cos( \phi_1 + \phi_2) }{ \cos\phi_3 - \cos( \phi_1 - \phi_2) } \, } .\label{eq:T12} \end{equation} Using this map, we find that $Y_{\vec{x}_2, \epsilon }( \vec{x}_3 , {\vec\gamma_{12}}(s) ) $ is defined by the Bethe-Salpeter equation with propagator shifted by $\delta x_2 $ and integration ranges $s_2\in [-\Lambda_{s_2} , 0]$, $t_2 \in [-\Lambda_{t_2}, T_{12}(s) ]$. Taking into account the shift in the propagator, we have \begin{equation} Y_{\vec{x}_2, \epsilon }( \vec{x}_3 , {\vec\gamma_{12}}(s) ) = G_{2}( \Lambda_{t_2}, \Lambda_{t_2} , -\delta x_2 , T_{12}(s) ) , \end{equation} which for small $\epsilon$ yields \begin{equation} Y_{\vec{x}_2, \epsilon }( \vec{x}_3 , \vec{\gamma_{12}}(s) ) \sim \left(\frac{2F_{2,0}(0)}{-||F_{2,0}||^2\Delta_{2,0}} \right) \,\left( \frac{ \epsilon \; L_{231}\; x_{13} }{x_{23} \, x_{12} }\right)^{\Delta_{2,0}} \, e^{- \frac{T_{12}(s) }{2}\, \Delta_{2,0} } \, F_{2,0}( - \delta x_2 - T_{12}(s) ) , \label{eq:divY2} \end{equation} where $L_{231}$ is defined by applying a cyclic permutation to (\ref{eq:L123}).
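The transition map \eq{eq:T12} can be checked numerically against the explicit parametrizations, with $\zeta_{21}$ obtained from $\zeta_{13}$ in \eq{eq:zeta1213} by the cyclic relabeling described in section~\ref{sec:setup3pt}. A sketch (function names and sample values are ours):

```python
import cmath, math

z1, z2, z3 = 0.0 + 0.0j, 2.0 + 0.0j, 1.0 + 1.5j   # sample cusp positions
p1, p2, p3 = 1.0, 0.8, 0.9                          # sample angles obeying (eq:ineq)

def zeta12(s):
    a = 1j / (2 * math.sin(p1)) * (z2 - z3) * (-cmath.exp(1j * p1) + cmath.exp(-1j * (p3 - p2)))
    return z1 - (z1 - z2) * (z1 - z3) * math.exp(s) / (math.exp(s) * (z1 - z3) + a * (1 - math.exp(s)))

def zeta21(t):
    # cyclic relabeling 1 -> 2, 2 -> 3, 3 -> 1 of zeta_13 in (eq:zeta1213)
    b = 1j / (2 * math.sin(p2)) * (z3 - z1) * (-cmath.exp(-1j * p2) + cmath.exp(-1j * (p1 - p3)))
    return z2 - (z2 - z3) * (z2 - z1) * math.exp(t) / (math.exp(t) * (z2 - z3) + b * (1 - math.exp(t)))

def T12(s):
    c = (math.cos(p3) - math.cos(p1 + p2)) / (math.cos(p3) - math.cos(p1 - p2))
    return math.log((1 - math.exp(s)) / (1 - c * math.exp(s)))

# the two parametrizations of the arc (12) agree: zeta12(s) = zeta21(T12(s))
for s in (-0.2, -1.0, -3.0):
    assert abs(zeta12(s) - zeta21(T12(s))) < 1e-10
```

Both sides are M\"obius functions of $e^s$, so agreement at three points (here checked numerically at three sample values of $s$) fixes the map completely.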
Combining (\ref{eq:divY1}), (\ref{eq:divY2}) in (\ref{eq:sumHHL}), we find, for the leading divergent part: \begin{equation} W^{\bullet \bullet \circ , \, \epsilon}_{123} = \frac{\epsilon^{\Delta_{1,0} + \Delta_{2,0}}( L_{123} )^{\Delta_{1,0}}\, ( L_{231} )^{\Delta_{2,0}} }{x_{12}^{\Delta_{1,0} + \Delta_{2,0} } \, x_{13}^{\Delta_{1,0}-\Delta_{2,0}} \, x_{23}^{\Delta_{2,0}-\Delta_{1,0}} } \, \left(\frac{4 \, F_{2,0}(0) F_{1,0}(0) }{||F_{1,0}||^2 \, ||F_{2,0}||^2 \, \Delta_{1,0} \, \Delta_{2,0}} \right) \,\;\mathcal{N}^{\bullet \bullet \circ}_{123} ,\label{eq:3ptHHL} \end{equation} where $\mathcal{N}_{123}^{\bullet \bullet \circ}$ is a finite constant which can be written explicitly as\footnote{Notice that in this formula we have sent all the cutoffs defining the ranges of integration to infinity. Since the integrals in (\ref{eq:N123def}) are convergent, this does not change the leading UV divergence of the correlator, which is enough to get to the final result for the OPE coefficient. A more detailed argument would show that, by sending the cutoffs to infinity in (\ref{eq:N123def}), we also restore the disconnected contributions with subleading divergences. } \beqa \mathcal{N}_{123}^{\bullet \bullet \circ} &=& 2 \hat{g}_1^2 \, \int_{-\infty }^{0} ds \, \int_{-\infty }^0 dt \,\frac{ F_{1,0}( -\delta x_1 +s - t) \, F_{2,0}( -\delta x_2 - T_{12}(s) ) \, e^{-\frac{s+t}{2} \, \Delta_{1,0} - \frac{T_{12}(s)}{2} \, \Delta_{2,0} } }{ \cosh(s-t - \delta x_1 ) + \cos\phi_1 }. \nonumber\\ &&\label{eq:N123def} \eeqa Again, we see that (\ref{eq:3ptHHL}) has the correct space-time dependence for a CFT 3-point correlator.
Normalizing by the two-point function factors $\mathcal{N}_{\Delta_i, \phi_i}$ defined in (\ref{NDelta}) for the two cusps, we get a finite expression for the structure constant: \begin{equation} C^{\bullet \bullet \circ}_{123} = 2 \, \frac{( L_{123} )^{\Delta_{1,0}} \, (L_{231} )^{\Delta_{2,0}} }{ \sqrt{\Delta_{1,0} \, \Delta_{2,0} }\, ||F_{1,0}||\, ||F_{2,0}||} \, \mathcal{N}_{123}^{{\bullet \bullet \circ}} . \end{equation} Using the Schr\"odinger equation for $F_{1,0}$, we can simplify the expression for $\mathcal{N}_{123}^{\bullet \bullet \circ} $ further and remove one of the integrations: \beqa \mathcal{N}_{123}^{\bullet \bullet \circ}&=& \int_{-\infty }^{0} ds \, \int_{-\infty }^0 dt \, \partial_s \partial_t \, \left(F_{1,0}( -\delta x_1 +s - t) \, e^{-\frac{s+t}{2} \, \Delta_{1,0} } \right) \, F_{2,0}( -\delta x_2 - T_{12}(s) ) \, e^{- \frac{T_{12}(s)}{2} \, \Delta_{2,0} } \nonumber \\ &=&\int_{-\infty }^{0} ds \, \partial_s \left( F_{1,0}( -\delta x_1 +s ) \, e^{-\frac{s}{2} \, \Delta_{1,0} } \right) \, F_{2,0}( \, -\delta x_2 - T_{12}(s) \, ) \, e^{ - \frac{T_{12}(s)}{2} \, \Delta_{2,0} } . \label{eq:NHHLexpl} \eeqa While (\ref{eq:NHHLexpl}) provides an explicit result, it still appears rather intricate, especially since it contains the complicated transition function $T_{12}(s)$. We will now show that it can be reduced to an amazingly simple form in terms of the Q-functions.
First, applying the transform (\ref{eq:qToF}), and using parity of the ground state wave function, $F_{1,0}(z) = F_{1,0 }(-z)$, we can write \beqa F_{1,0}( s - \delta x_1 ) \, e^{-\frac{s-\delta x_1}{2} \Delta_{1,0} } &=& -i\int_{{\bf |}} \frac{du}{u} \, q_{1}(u) \times \text{exp}\left( u \, w_{\phi_1}(\delta x_1 - s ) \right) , \label{eq:inteF1} \\ F_{2,0}( - \delta x_2 - T_{12}(s) ) \, e^{-\frac{T_{12}(s) + \delta x_2}{2} \Delta_{2,0} } &=& -i\int_{{\bf |}} \frac{du}{u} \, q_{2}(u) \times \text{exp}\left( u \, w_{\phi_2}(- T_{12}(s)- \delta x_2 ) \right) .\nonumber\\\label{eq:inteF2} \eeqa We then plug these relations into (\ref{eq:NHHLexpl}). We notice a magic relation between the integrands of (\ref{eq:inteF1}) and (\ref{eq:inteF2}), \begin{equation} w_{\phi_1}( s-\delta x_1 ) = w_{\phi_2} ( -\delta x_2 - T_{12}(s) ) + \phi_3 \label{eq:magic} \ \ , \end{equation} which suggests that we switch to a new integration variable $\xi = w_{\phi_1}( s-\delta x_1 ) - \phi_3/2$. Notice that the integration measure is invariant, $ds \, \partial_s = d\xi \, \partial_{\xi}$. Taking into account (\ref{eq:magic}) we get: \beqa \mathcal{N}_{123}^{\bullet \bullet \circ }&=& - e^{\frac{\delta_{12} }{2}} \, \int_{{\bf |}} \frac{du}{u} \int_{{\bf |}} \frac{dv}{v} \, q_{1}(u) \, q_{2}(v) \,\left[ \int^{-\phi_2 + \phi_3/2 }_{\phi_1 - \phi_3/2} d\xi \, \partial_{\xi} \left( e^{- u \xi - u \phi_3/2 } \right) \, e^{v \xi - v \, \phi_3/2 } \right] , \nonumber \\ \delta_{12} &=& - {\delta x_1}{}\, \Delta_{1,0} + {\delta x_2 }{}\, \Delta_{2,0} \ \ , \eeqa and remarkably we can do the integral explicitly and find \begin{equation} \mathcal{N}_{123}^{\bullet \bullet \circ }= e^{\frac{\delta_{12} }{2}} \, \int_{{\bf |}} du \, \int_{{\bf |}} \frac{dv}{v} \, q_{1}(u) \, q_{2}(v) \, \left(\frac{ e^{ (\phi_2 - \phi_3) u - \phi_2 v} - e^{ -\phi_1 \, u + (\phi_1 - \phi_3 ) v} }{u-v}\right).\label{eq:dudv} \end{equation} We can simplify this expression further.
In fact, notice that the integrand has no poles for $\text{Re}(u) > 0$, $\text{Re}(v) > 0$; in particular there is no pole at $u=v$. Therefore we can shift the two integration contours independently. Similarly to the trick used in section~\ref{sec:BaxtoSchrod}, we shift the $v$ integration contour to the right so that $\text{Re}(v) > \text{Re}(u)$, and split the integral into two contributions. One of them vanishes since the $v$-integrand is suppressed and the integration contour can be closed at $\text{Re}(v) = \infty$: \begin{equation} \int_{{\bf |}} du \, q_{1}(u) \, e^{ ( \phi_2 - \phi_3 ) u } \,\left(\int_{ {{\bf |}} \, + 0^+} \frac{dv}{v} \, q_{2}(v) \, \frac{ e^{- \phi_2 v } }{ u-v} \right)= 0 , \end{equation} while for the second integral it is the $u$-integrand that is suppressed. Closing the contour we now pick up the residue at $u = v$: \beqa \mathcal{N}^{\bullet \bullet \circ }_{123} &=& - e^{\frac{\delta_{12} }{2}} \, \int_{ {{\bf |}} \, \, + \,0^+ } \frac{dv}{v} \, q_{2}(v) \, e^{( \phi_1 - \phi_3 ) v } \, \left( \int_{{\bf |}} du \, \frac{ q_{1}(u) \, e^{ - \phi_1 u } }{u-v}\right) \\ &=& + e^{\frac{\delta_{12} }{2}} \, ( 2 \pi i )\int_{{\bf |}} \frac{dv}{v} \, q_{1}(v) \, q_{2}(v) \, e^{- \phi_3 v} . \eeqa Combining all ingredients, we get the final expression for the structure constant in terms of the Q-functions: \begin{equation} C^{\bullet \bullet \circ}_{123} = (K_{123} )^{\Delta_{1,0}} \, (K_{213} )^{\Delta_{2,0}} \, \frac{\, \int_{{\bf |}} \, q_{1} \, q_{2} \, e^{- \phi_3 u} \frac{du}{ 2 \pi i u} }{\sqrt{ \int_{{\bf |}} q_1 \, {q}_1 \, \frac{du}{2\pi i u} } \ \,\sqrt{ \int_{{\bf |}} q_2 \, {q}_2 \, \frac{du}{2\pi i u} } } , \end{equation} where the constants $K_{123}$, $K_{213}$ are defined as in (\ref{eq:K123}) by permutation of the indices.
Again, it simplifies further in terms of the bracket $\br{\cdot}$ defined in \eq{eq:thebracket} \begin{equation}\la{correlator_2} \boxed{ C^{\bullet \bullet \circ}_{123} = \, \frac{\, \br{ q_{1} \, q_{2}\, e^{-\phi_3 u} } }{\sqrt{ \br{ q_1^2}\br{ q_2^2}} }} \ \ . \end{equation} In this form it is clear that the final expression is explicitly symmetric under $1 \leftrightarrow 2$, even though in the derivation we treated cusp $x_1$ differently from $x_2$. This strikingly compact expression is one of our main results. Notice that it also covers the HLL case: if we send one of the effective couplings $\hat g_1, \hat g_2$ to zero, we recover \eq{CHLL1}, since at zero coupling $\br{ q^2 } = 1$. \section{Excited states}\label{sec:excited} In this section we explore the meaning of the excited states and give them a QFT interpretation as insertions at the cusps. We will also extend our result for the structure constant to the excited states. \begin{figure} \centering \includegraphics[scale=2.0]{speccut.pdf} \caption{Structure of the spectrum of the Schr\"odinger operator. For finite coupling there are finitely many bound states. When the coupling is decreased, eventually the top bound state touches the continuum and goes to another sheet, becoming a resonance. There are infinitely many resonances for any value of the coupling. The spectrum of dimensions is related to the energy of the Schr\"odinger equation by $\Delta = -\sqrt{-E}$. This map resolves the branch cut of the continuum spectrum, making the bound states and the resonances indistinguishable and equally important.} \label{fig:speccut} \end{figure} \subsection{Excited states and insertions} \label{sec:insert} First, let us discuss the structure of the spectrum of the Schr\"odinger equation. When we increase the coupling we find more and more bound states in the spectrum at $E<0$.
If we analytically continue the bound state energy by slowly decreasing the coupling, we find that the level approaches the continuum at $E=0$ and then reflects back. After that point the state, strictly speaking, disappears from the spectrum of bound states, as the wave function is no longer normalizable. However, if we define the bound state as a pole of the resolvent, it will continue to be a pole, just not on the physical sheet, but under the cut of the continuum part of the spectrum. At the same time, from the expression for $G(\Lambda_1,\Lambda_2,\Lambda_3,\Lambda_4)$ in \eq{G1234} we see that the natural variable is not $E$ but rather $\Delta = -\sqrt{-E}$. In the $\Delta$-plane the branch cut of the continuum spectrum opens up, revealing the infinite number of resonances and bringing them back into the physical spectrum (see Fig.~\ref{fig:speccut2}). \begin{figure} \centering \includegraphics[scale=2.0]{speccut2.pdf} \caption{Structure of the spectrum of the QSC. The map $\Delta = -\sqrt{-E}$, which relates the spectrum obtained from QSC to the Schr\"odinger equation, resolves the cut of the continuum spectrum, revealing an infinite set of states.} \label{fig:speccut2} \end{figure} In order to give a field theory interpretation of these bound states we build projectors which, acting on our main object $G(\Lambda_1,\Lambda_2,\Lambda_3,\Lambda_4)$, project onto the excited states $\Delta_n$ in the large $\Lambda_i$ limit. First let us rewrite \eq{G1234} in terms of $\Delta_n$'s\footnote{To obtain \eq{G1234delta} rigorously from \eq{G1234}, one should take the coupling to be very large, bringing many bound states into the spectrum, and neglect the continuum part of the spectrum, which is exponentially suppressed w.r.t. the bound states with $\Delta_n<0$. After that one can continue in the coupling to smaller values.
Alternatively, one can open the integral over the continuum part of the spectrum into the next sheet, picking up the poles at the resonances.} \begin{equation}\la{G1234delta} G(\Lambda_1,\Lambda_2,\Lambda_3,\Lambda_4) \simeq\sum_n \frac{2F_n(\Lambda_1-\Lambda_2)F_n(\Lambda_4-\Lambda_3)}{||F_n||^2(-\Delta_n)} \exp\left(-\Delta_n \frac{\Lambda_1+\Lambda_2+\Lambda_3+\Lambda_4}{2}\right) \;. \end{equation} Since $G$ has an interpretation as a $4$-BPS correlator, one can think about \eq{G1234delta} as an OPE expansion in the $t$-channel. We will also see shortly that the coefficients appearing there are the HLL structure constants with excited states. We will come back to this point in section \ref{sec:ope}. When the $\Lambda$'s tend to infinity the sum is saturated by the smallest $\Delta_n$. To suppress the lowest states we define the following differential operators: \begin{equation}\la{Odef} {\cal O}_{2m}=\prod_{i=0}^{m-1}\frac{\partial_++\Delta_{2i}}{-\Delta_{2m}+\Delta_{2i}}\;\;,\;\; {\cal O}_{2m+1}=\prod_{i=0}^{m-1}\frac{\partial_++\Delta_{2i+1}}{-\Delta_{2m+1}+\Delta_{2i+1}}\times\frac{1}{2}\partial_- \end{equation} where $ \partial_\pm\equiv \partial_{\Lambda_1}\pm \partial_{\Lambda_2}\;\;,\;\;\bar\partial_\pm \equiv \partial_{\Lambda_4}\pm \partial_{\Lambda_3} $. With the help of these operators we define \beqa W_{n}&\equiv& \left.{\cal O}_n \bar {\cal O}_n \, G(\Lambda_1,\Lambda_2,\Lambda_3,\Lambda_4)\right|_{\Lambda_1=\Lambda,\;\Lambda_2=\Lambda,\;\Lambda_3=\Lambda,\;\Lambda_4=\Lambda}\; , \eeqa which at large $\Lambda$ scales as $e^{-2\Delta_n \Lambda}$ since all terms with $\Delta_k$, $k < n$, are projected out! Notice that, as discussed in Sec.
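The projection mechanism can be illustrated on a toy two-state model of \eq{G1234delta}: $\partial_+$ hits only the overall exponential (producing $-\Delta_n$), while $\frac12\partial_-$ at coincident points filters states by the parity of $F_n$. A sketch with made-up wave functions, dimensions, and unit norms (all sample data are ours):

```python
import math

# toy model of (G1234delta): two states, F0 even, F1 odd (sample data)
D0, D1 = -2.0, -1.0
F0 = lambda x: math.exp(-x * x)        # even: F0'(0) = 0
F1 = lambda x: x * math.exp(-x * x)    # odd:  F1(0) = 0, F1'(0) = 1

def G(L1, L2, L3, L4):
    tot = 0.0
    for D, F in ((D0, F0), (D1, F1)):
        tot += 2 * F(L1 - L2) * F(L4 - L3) / (-D) * math.exp(-D * (L1 + L2 + L3 + L4) / 2)
    return tot

# W1 = (1/2)d_-  (1/2)dbar_-  G at coincident cutoffs, via central differences;
# the symmetric difference kills the even state F0 exactly
def W1(L, h=1e-4):
    m = lambda a, b: G(L + a, L - a, L - b, L + b)
    return (m(h, h) - m(h, -h) - m(-h, h) + m(-h, -h)) / (4 * h * h) / 4

# the ground state is projected out: W1 scales as exp(-2*D1*L), not exp(-2*D0*L)
r = W1(2.0) / W1(1.0)
assert abs(r - math.exp(-2 * D1)) < 1e-3 * math.exp(-2 * D1)
```

Extending the toy model to more states, the prefactors $(\partial_+ + \Delta_{2i})$ in \eq{Odef} remove the remaining lower states in the same way.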
\ref{sec:2ptcusp}, $G( \Lambda, \Lambda, \Lambda, \Lambda )$ can be used to describe a regularized two-point function, where the cutoff is identified with $ x_{12}e^{-\Lambda}= \epsilon$; similarly, we get \begin{equation}\label{2ptn} W_{2m}\simeq \left(\frac{\epsilon}{x_{12}}\right)^{2\Delta_{2m}}\frac{-2[F_{2m}(0)]^2}{ ||F_{2m }||^2 \, \Delta_{2m}}\;\;,\;\; W_{2m+1}\simeq \left(\frac{\epsilon}{x_{12}}\right)^{2\Delta_{2m+1}}\frac{-2[F'_{2m+1}(0)]^2}{ ||F_{2m + 1 }||^2 \, \Delta_{2m+1}} , \end{equation} which indeed has the structure of the two-point function of operators with dimension $\Delta_n$! These are the two-point functions of the cusps with extra insertions due to the action of ${\cal O}_n$. The specific form of the operator insertion in general depends on the regularization scheme. The operators ${\cal O}_n$ give an explicit form of these insertions for the point-splitting regularization\footnote{We expect that for the finite $\theta$ case, i.e. away from the ladder limit, one should simply replace $\partial_\pm$ with the corresponding covariant derivatives, at least at weak coupling.}. For instance, the first two operators ${\cal O}_1=\frac{1}{2}\partial_-$ and ${\cal O}_2=\frac{\partial_++\Delta_0}{\Delta_0-\Delta_2}$ will produce the following insertions\footnote{In \eq{O1phi} and \eq{O2phi} the scalar coupled to $n_1$ is located at position $-\Lambda_1$ on the contour, and the scalar coupled to $n_2$ is at $\Lambda_1$. } \beqa\label{O1phi} {\cal O}_1\;\;&\leftrightarrow&\;\;\frac{1}{2} \( -\Phi^a n_2^a |\dot x(\Lambda_2)|+\Phi^a n_1^a |\dot x(-\Lambda_1)| \)= \( \Phi^a n_1^a-\Phi^a n_2^a \)\frac{\epsilon}{2}\;,\\ \label{O2phi} {\cal O}_2\;\;&\leftrightarrow&\;\;\frac{(\Phi^a n_2^a+\Phi^a n_1^a)\epsilon+\Delta_0}{\Delta_0-\Delta_2}\;.\label{eq:O2def} \eeqa Naively, the interpretation of the operators corresponding to the excited states is only valid for large enough coupling when $\Delta_n<0$.
In the next section we verify that it remains true at weak coupling, at one-loop level. Below, we also extend our result for the 3-cusp correlator to excited states. For this, we will need to know the long-time asymptotics of $\tilde G_{\Lambda_1, \Lambda_2}(x, y)$ computed with the new type of boundary conditions described by the action of the projector $\mathcal O_n$. We have, for $y \rightarrow \infty$, \begin{equation}\la{largetimen} \mathcal O_n \, \tilde G_{\Lambda_1, \Lambda_2}(x, y) \simeq c_n F_n(x) \; e^{-\Delta_n y } , \end{equation} where \begin{equation} c_{2m}=-\frac{2 F_{2m}(0)}{ ||F_{2m }||^2 \, \Delta_{2m}}\;\;,\;\;c_{2m+1}=-\frac{2 F'_{2m+1}(0)}{ ||F_{2m+1}||^2 \, \Delta_{2m+1}}\;. \label{eq:cn} \end{equation} Finally, from the $2$-point correlator \eq{2ptn} we extract the normalization coefficients \begin{equation}\la{Nn2} {\cal N}_{\Delta_n}=\epsilon^{\Delta_n} \, c_{n} \, \sqrt{\frac{-\Delta_n \, ||F_{n}||^2}{2} } , \end{equation} which we will need to normalize the structure constant in the next section. \subsection{Correlator with excited states}\label{sec:HLLexc} We will redo the calculation of the HLL correlator for the case when the heavy state is excited. We note that all the steps are essentially the same as for the ground state. We begin by applying the projector ${\cal O}_n$, defined in \eq{Odef}, to the cusp at $x_1$; in the small $\epsilon$ limit we can simply use the leading asymptotics \eq{largetimen} to obtain, in close analogy with the ground state result \eq{Yground}, \begin{equation} {\cal O}_n Y_{\vec{x}_1, \epsilon}(\vec{x}_2 , \vec{x}_3 ) =c_n \, F_{n}(-\delta x_1) \, \epsilon ^{\Delta _{n}} \, \frac{ ( L_{123} )^{ {\Delta _{n}} } }{ x_{12}^{\Delta _{n}} x_{13}^{\Delta _{n}} x_{23}^{-\Delta _{n}}} , \end{equation} with $c_n$ defined in (\ref{eq:cn}).
Normalizing the result with \eq{Nn2} to obtain a finite structure constant, we get \begin{equation} C_{123}^{\bullet_{n}\circ\circ}= \sqrt{\frac{2}{-\Delta_{n}||F_{n}||^2}} \, {F_{n}(\delta x_1)} \, ( L_{123} )^{ {\Delta_{n}}} \; . \end{equation} Rewriting it in terms of q-functions exactly as for the ground state, we obtain \begin{equation} \boxed{ C^{\bullet_{n} \circ \circ}_{123}= \frac{ \br{q_{1,n} e^{\phi_2 u-\phi_3 u}} }{\sqrt{(-1)^n\br{q_{1,n}^2}}} }\;, \end{equation} where $q_{1,n}$ denotes the solution of the QSC corresponding to the $n$-th excited state, with parameters $\hat g=\hat g_1$, $\phi = \phi_1$. The $(-1)^n$ appears from the corresponding factor in the relation for the norm of the wavefunction in \eq{eq:normQ}; it is needed to ensure that the denominator is real at large coupling. Similarly, for the HHL correlator we simply replace the q-functions and the corresponding dimensions, but the expression stays the same! \begin{equation} \boxed{ C^{\bullet_{n}\bullet_{m} \circ}_{123}= \frac{(-1)^m \br{q_{1,n}q_{2,m} e^{-\phi_3 u}} }{\sqrt{(-1)^{n+m}\br{q_{1,n}^2}\br{q_{2,m}^2}}} }\;. \end{equation} \subsection{Excited states at weak coupling from QSC} \label{sec:qscexc} As we discussed above (see section \ref{sec:schr}), for large coupling the Schr\"odinger equation has several bound states, while for small coupling all of them except the ground state disappear. Nevertheless, the excited states have remnants at weak coupling which are not immediately apparent in the Schr\"odinger equation but are directly visible in the QSC. By solving the Baxter equation \eq{Bax2p} and the gluing condition \eq{qquant} numerically, we can follow any excited state from large to small coupling, and we find that $\Delta$ has a perfectly smooth dependence on $\hat g$. The first several states are shown in Figs.~\ref{fig:spectrum2} and~\ref{fig:spectrum2p2}, which also demonstrate an intricate pattern of level crossings that we will discuss below.
For $\hat g\to 0$ we moreover observe that $\Delta$ becomes a positive integer $L$, \begin{equation} \label{dexp} \Delta=L+\Delta^{(1)}\hat g^2+\Delta^{(2)}\hat g^4+\dots, \ \ \ \ L=1,2,\dots \ \ . \end{equation} Remarkably, for each $L>0$ we have two states which become degenerate at zero coupling. In contrast, the ground state (corresponding to $L=0$) does not merge with any other state. This pattern is consistent with our proposal for the insertions \eq{Odef} -- the states with $n=2m$ and $n=2m-1$ have the same number of derivatives and thus should have the same bare dimension. \begin{figure}[t] \centering \includegraphics[scale=0.8]{spec1p5.pdf} \caption{The first few states for $\phi=1.5\,\,$. We show numerical data for $\Delta$ as a function of $\hat g$, obtained from the Baxter equation. We see that all the states, except the ground state, are paired together at weak coupling. \label{fig:spectrum2}} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.8]{spec3p0.pdf} \caption{The first few states for $\phi=3.0\,\,$. We plot $\Delta$ as a function of the coupling $\hat g$ similarly to Fig.~\ref{fig:spectrum2}. \label{fig:spectrum2p2}} \end{figure} We can explicitly compute $\Delta$ for these states at weak coupling from the Baxter equation. We solve it perturbatively using the efficient iterative method of \cite{Gromov:2015vua} and the Mathematica package provided with \cite{Gromov:2015dfa}. We start from the solution at $\hat g=0$ and improve it order by order in $\hat g$. At $\hat g=0$ the solution for any $L\equiv \left.\Delta\right|_{\hat g=0}$ has the form of a polynomial of degree $L$ multiplied by $e^{u\phi}$. At the next order we already encounter nontrivial pole structures. 
This procedure gives $q$-functions written in terms of generalized $\eta$-functions \cite{Marboe:2014gma, Gromov:2015dfa} defined as \begin{equation} \label{defeta} \eta_{s_1,\dots,s_k}^{z_1,\dots,z_k}(u)\equiv \sum_{n_1 > n_2 > \dots > n_k \geq 0}\frac{z_1^{n_1}\dots z_k^{n_k}}{(u+i n_1)^{s_1}\dots (u+i n_k)^{s_k}} \ . \end{equation} As an example, for $L=1$ we find \begin{equation} q=e^{u\phi }\[u+\hat g^2 \(-i\Delta^{(1)}u\;\eta_1^1 -\frac{2}{\sin\phi} +\frac{\Delta^{(1)}}{2} (-2 u e^{2 i \phi }+\cot \phi+i)\) \]+\mathcal{O}(\hat g^4) \end{equation} where $\Delta^{(1)}$ is the 1-loop coefficient in \eq{dexp}. The second solution $q_-$ is more complicated and already involves twisted $\eta$-functions such as $\eta_1^{e^{2i\phi}}$, but fortunately we only need $q_+$ to close the equations. The quantization condition \eq{qquant} then gives a quadratic equation for $\Delta^{(1)}$, which fixes \begin{equation} \Delta^{(1)}=\pm 4\ \ \ \text{for} \ \ \ L=1 \ . \end{equation} Thus, as expected from the numerical analysis, we find two separate states which become degenerate at zero coupling. For comparison, for the ground state ($L=0$) we have \begin{equation} \label{qgs} q = e^{u\phi } \, \left[ 1 + \hat{g}^2 \, \frac{2 i }{\sin\phi} \, \left( 2 \, \phi \, (\eta^1_1- \eta ^{e^{2 i \phi }}_1 ) - (\eta^1_2-\eta ^{e^{2 i \phi }}_2 )\right) \right] +\mathcal{O}(\hat g^4) \ . \end{equation} Repeating this calculation for $L=2,3,4,5$, we were able to guess a simple closed formula for the 1-loop correction, \begin{equation} \label{Dpm} \Delta_{L,\pm}=L\pm \frac{4}{L}\frac{\sin L\phi}{\sin\phi}\hat g^2+\dots, \ \ \ \ L=1,2,\dots\;. \end{equation} For the ground state ($L\to 0$) this formula also gives the correct result, although only the minus sign is admissible. For the first several states we also computed $\Delta$ to two loops, e.g.
for $L=1$ \beqa \label{dn12} \Delta_{1,-}&=&1-4\hat g^2 + 16 \left(\phi \cot \frac{\phi }{2}-1\right)\hat g^4 +\dots \\ \label{dp12} \Delta_{1,+} &=&1+4\hat g^2-16 \left(\phi \tan \frac{\phi }{2}+1\right)\hat g^4 +\dots\;. \eeqa The two-loop results for $L=2,3$ are given in Appendix \ref{app:exc}.\footnote{Notice that for $\phi=\pi/L$ the two states with $\Delta=L$ at zero coupling are degenerate at one loop but not at two loops, at least for $L=1,2,3$.} All these results are also in excellent agreement with QSC numerics. For completeness, the ground state anomalous dimension to two loops is \cite{Makeenko:2006ds,Drukker:2011za}\footnote{See \cite{Correa:2012nk,Henn:2013wfa,Henn:2012qz} for higher-loop results.} \beqa \label{eq:gs2loops} \Delta_0&=&0 -4\frac{\phi}{\sin\phi}\hat g^2 \\ \nonumber && +\frac{4}{\sin^2\phi} \left[2 i \phi \left(\text{Li}_2(e^{2 i \phi })-\text{Li}_2(e^{-2 i \phi })\right)-2 \left(\text{Li}_3(e^{-2 i \phi })+\text{Li}_3(e^{2 i \phi })\right)+4 \zeta_3\right]\hat g^4 +\dots\;. \eeqa Let us note that for the ground state the leading weak coupling solution $q=e^{\phi u}$ immediately provides the 1-loop anomalous dimension via the quantization condition \eq{qquant}. However, for excited states the leading-order $q$-function is not enough, because it vanishes at $u=0$, leading to a singularity in the quantization condition (resolved at higher order in $\hat g$). \bigskip \begin{table}[h] \begin{tabular}{ | l| l |l |l |l | l| l| l| l | l | l |} strong coupling & $\Delta_0$ & $\Delta_1$ & $\Delta_2$ & $\Delta_3$ & $\Delta_4$ & $\Delta_5$ & $\Delta_6$ & $\Delta_7$ & $\Delta_8$ & $\Delta_9$ \\ \hline weak coupling & $\Delta_{0,-}$ & $\Delta_{1, -}$ & $\Delta_{1,+} $& $\Delta_{2,+} $& $\Delta_{2,-}$ & $\Delta_{3,-}$ & $\Delta_{3,+}$ & $\Delta_{4,+}$ & $\Delta_{4,-}$ & $\Delta_{5,-}$ \end{tabular} \caption{The table shows the correspondence between the weak and strong coupling behaviour of the first few excited states.
The notation $\Delta_n$ denotes the ordering of the states at strong coupling (in particular see (\ref{eq:strongcoupling})), while the notation $\Delta_{L, \pm}$ is related to the form of the one-loop correction, see (\ref{Dpm}). The pattern evident from the table continues for all excited states.\label{tab:states}} \end{table} \paragraph{Comments on level crossing.} Let us now discuss another curious feature of the spectrum, namely the presence of level crossings for $\Delta>0$, which is evident from Fig.~\ref{fig:spectrum2}. Level crossings are of course forbidden in 1d quantum mechanics, but there is no contradiction, as our states only correspond to energies of the Schr\"odinger problem when $\Delta<0$. As we increase the coupling, $\Delta$ for any state eventually becomes negative and the levels get cleanly separated. At the same time, the odd (even) levels do seem to repel each other. \begin{figure}[t] \centering \includegraphics[scale=0.7]{spec00v2.pdf} \caption{The first several states at $\phi=0$. For each level the dependence of $\Delta$ on the coupling alternates between \eq{Dn0} and \eq{Dm0} before taking the form \eq{Dn0} at large coupling.} \label{fig:spectrumzero2} \end{figure} At large coupling it is natural to label the states by $n=0,1,2,\dots$ starting from the ground state. However, the reshuffling of levels makes it a priori nontrivial to say what the weak coupling behavior of a state with given $n$ is. First, we observe that $\Delta$ at zero coupling is given by $L= n/2$ (rounded up). Moreover, we found a simple relationship between $n$ and the signs plus or minus in \eq{Dpm} determining the 1-loop anomalous dimension. Namely, the levels with $n=0,1,2,\dots$ correspond to the following sequence of signs: \begin{equation} \label{MMPP} --++--++--++\ \dots \end{equation} In order to understand this pattern it is helpful to consider the analytically solvable case when $\phi=0$. We plot the states for this case in Fig.~\ref{fig:spectrumzero2}.
The spectrum of the Schr\"odinger problem for $\phi=0$ is known exactly \cite{Correa:2012nk}, \begin{equation} \label{Dn0} \Delta_n=\frac{1}{2}\[1-\sqrt{16\hat g^2+1}\]+n\ , \ \ n=0,1,2,\dots \end{equation} Here only the values of $n$ for which $\Delta_n<0$ actually correspond to bound states. One may try to analytically continue $\Delta_n$ in $\hat g$ starting from large coupling, where it is negative, and arrive at weak coupling. However, this would not be correct, as we know that half the levels should have a positive slope at weak coupling, corresponding to the choice of the plus sign in the 1-loop correction\footnote{Clearly, \eq{Dn0} would instead give a negative 1-loop coefficient with $\Delta=n-4\hat g^2+\dots$. Also note that for $\phi=0$ the 1-loop correction \eq{Dpm} becomes equal to $\pm 4 \hat g^2$ and does not depend on $n$.} \eq{Dpm}. The true levels are instead shown in Fig.~\ref{fig:spectrumzero2}. At weak coupling half of them are given by an expression of the same form \eq{Dn0} but with the opposite sign of the square root, \begin{equation} \label{Dm0} \Delta_m'=\frac{1}{2}\[1+\sqrt{16\hat g^2+1}\]+m\ , \ \ m=0,1,2,\dots \end{equation} At large coupling the levels are given by \eq{Dn0}, so the dependence on the coupling switches from \eq{Dn0} to \eq{Dm0} (where $m$ and $n$ may be different) at the point where these two curves intersect. Moreover, at this point two levels meet, and they correspond to adjacent values of $n$ of the same parity. In this way, e.g., the levels with even $n$ `bounce' off each other, and the same is true for odd $n$. This explains the pattern of signs in \eq{MMPP}. In fact, as we see in Fig.~\ref{fig:spectrumzero2}, the behavior of $\Delta$ can switch multiple times between the forms \eq{Dn0} and \eq{Dm0} before finally becoming the expected curve \eq{Dn0} at large coupling. The derivative $\partial \Delta/\partial\hat g$ is discontinuous at these switching points.
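The interplay between \eq{Dn0} and \eq{Dm0} is easy to explore explicitly: setting $\Delta_n(\hat g)=\Delta'_m(\hat g)$ gives $\sqrt{16\hat g^2+1}=n-m$, i.e. the two curves intersect at $\hat g=\tfrac14\sqrt{(n-m)^2-1}$, which is nonzero only for $n-m\geq 2$, consistent with the meeting of levels of the same parity. A minimal numerical sketch (plain Python; the function names are ours):

```python
import math

def delta_minus(n, g):
    """Eq. (Dn0): lower branch of the square root."""
    return 0.5 * (1 - math.sqrt(16 * g**2 + 1)) + n

def delta_plus(m, g):
    """Eq. (Dm0): upper branch (opposite sign of the square root)."""
    return 0.5 * (1 + math.sqrt(16 * g**2 + 1)) + m

def crossing_coupling(n, m):
    """Coupling where (Dn0) for n intersects (Dm0) for m,
    from sqrt(16 g^2 + 1) = n - m; real only for n - m >= 1."""
    return math.sqrt((n - m)**2 - 1) / 4

# the curve n = 2 of (Dn0) meets the curve m = 0 of (Dm0) at g = sqrt(3)/4
g_star = crossing_coupling(2, 0)
assert abs(delta_minus(2, g_star) - delta_plus(0, g_star)) < 1e-12

# weak-coupling slopes: Delta = n - 4 g^2 + ...  vs  Delta = m + 1 + 4 g^2 + ...
eps = 1e-4
assert abs((delta_minus(1, eps) - 1) / eps**2 + 4) < 1e-2
assert abs((delta_plus(0, eps) - 1) / eps**2 - 4) < 1e-2
```

The last two assertions reproduce the $\pm 4\hat g^2$ slopes quoted in the footnote for $\phi=0$.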
However, when $\phi$ becomes nonzero the picture smooths out and the level crossing at the intersection point is also avoided (though some other level crossings truly remain), as can be seen in Fig.~\ref{fig:spectrum2}. The fact that $\Delta$ is a piecewise-defined function made up of parts given by \eq{Dn0} and \eq{Dm0} is somewhat reminiscent of the spectrum of local twist-2 operators at zero coupling, where the anomalous dimension becomes a piecewise linear function of the spin (with different regions corresponding e.g. to the BFKL limit \cite{bfkl2, bfklint} or to usual perturbation theory\footnote{ See e.g. \cite{Costa:2012cb} for a discussion and \cite{Gromov:2015wca} for some finite coupling plots.}). One may regard \eq{Dm0} as an analytic continuation of \eq{Dn0} around the branch point at $\hat g=i/4$. There are more branch points at complex values of $\hat g$ where curves of the form \eq{Dn0} and \eq{Dm0} intersect, and we expect all the levels to be obtained from each other by analytic continuation in $\hat g$, even for generic $\phi$. Again, this situation parallels the twist-operator spectrum. \subsection{Excited states at weak coupling from Feynman diagrams} In this section we compute the diagrams contributing to the anomalous dimensions of the lowest excited states. First, let us reproduce the one-loop correction to the ground state. In that case there is only one diagram, shown in Fig.~\ref{fig:groundstate}, \begin{equation} D_0=\int_{-\Lambda}^\Lambda dt\int_{-\Lambda}^\Lambda ds \frac{2\hat{g}^2}{\cosh (s-t)+\cos (\phi )}\label{eq:D0def} \ .
\end{equation} It can be computed exactly for any $\Lambda$, \begin{equation} D_0= \frac{4\hat g^2}{\sin\phi} \left(2 \Lambda \phi -i \text{Li}_2\left(-e^{-2 \Lambda -i \phi }\right)+i \text{Li}_2\left(-e^{i \phi -2 \Lambda }\right)+i \text{Li}_2\left(-e^{-i \phi }\right)-i \text{Li}_2\left(-e^{i \phi }\right)\right)\label{eq:D0eval} \end{equation} and at large $\Lambda$ it diverges linearly as $D_0=8\hat g^2 \frac{\phi}{\sin\phi}\Lambda+{\cal O}(\Lambda^0)$. Recalling that $\Lambda=\log\frac{x_{12}}{\epsilon}$, we read off the anomalous dimension $\gamma_0=-4\hat g^2 \frac{\phi}{\sin\phi}$, in agreement with \eq{eq:gs2loops}. For the lowest excited states we have four diagrams (see Fig.~\ref{fig:diagrams}). For example, the fourth diagram $D_4$ is given by the double integral \begin{equation} D_4=\int_{-\Lambda}^{\Lambda} dt_1 \int_{t_1}^{\Lambda} dt_2 \, \frac{ 4 \, \hat g^4}{( \cosh(-\Lambda - t_1 ) + \cos\phi ) \, (\cosh(\Lambda - t_2 ) + \cos\phi ) } , \end{equation} and corresponds to the following differentiation of the four-point function: \begin{equation} \partial_{\Lambda_1} \partial_{\Lambda_3} \left. G(\Lambda_1, \Lambda_2 , \Lambda_3 , \Lambda_4 ) \right|_{\Lambda_i=\Lambda} = D_4 + O( \hat g^6 )\ . \end{equation} Below we give the results for these diagrams at large $\Lambda$, keeping the $e^{-2\Lambda}$ terms: \beqa D_1&=&4\hat g^2 e^{-2\Lambda} \ ,\\ \nonumber D_2&=&2 \hat{g}^2 \phi \csc \phi -4 \hat{g}^2 e^{-2 \Lambda }+{\cal O}( e^{-4 \Lambda }) \ ,\\ \nonumber D_3&=&(D_2)^2=4 \hat{g}^4 \phi ^2 \csc ^2\phi -16 \hat{g}^4 e^{-2 \Lambda } \phi \csc \phi +{\cal O}( e^{-4 \Lambda })\ ,\\ \nonumber D_4&=&4 \hat{g}^4 \phi ^2 \csc ^2\phi + 16 \hat{g}^4 e^{-2 \Lambda } (-2 \Lambda +\phi \cot \phi +\log (\cos \phi +1)-1+\log 2) \ .
\eeqa Combining these diagrams, we can construct the operators described in section \ref{sec:insert}; in particular, here we consider operators obtained by the insertion of one scalar at the cusp\footnote{The operators with more scalar insertions built this way may include derivatives acting on the scalars.}. We have\footnote{In the r.h.s. of \eq{O11G} and \eq{O22G} we omit an overall irrelevant prefactor.} \begin{equation} \label{O11G} 2 \, \mathcal{O}_1 \, \mathcal{\bar{O}}_1 \left. G(\Lambda_1, \Lambda_2 , \Lambda_3 , \Lambda_4 ) \right|_{\Lambda_i=\Lambda} = D_1+D_3 -D_4 + O ( \hat g^6 ) \end{equation} and from the diagrams computed above we find \begin{equation} 2 \, \mathcal{O}_1 \, \mathcal{\bar{O}}_1 \left. G(\Lambda_1, \Lambda_2 , \Lambda_3 , \Lambda_4 ) \right|_{\Lambda_i=\Lambda} ={4 \hat{g}^2}e^{-2\Lambda}\left(1+8 \hat{g}^2 \Lambda \right) + \dots \end{equation} Again identifying the cutoff with $\Lambda=\log\frac{x_{12}}{\epsilon}$, we read off the one-loop dimension $\Delta_1=1-4\hat g^2$. Remarkably, it perfectly matches the analytic continuation to weak coupling of the first excited state energy, computed from the QSC above in \eq{dn12}. This state corresponds to the second line from below in Fig.~\ref{fig:spectrum2}. \begin{figure} \centering \def\svgwidth{200pt} \input{diagrams_groupdpdf.tex} \caption{The one-loop diagram contributing to the ground state anomalous dimension.} \label{fig:groundstate} \end{figure} Another operator one can build is obtained from the following combination of derivatives: \beqa \label{O22G} && (\Delta_0 - \Delta_2)^2 \, \mathcal{O}_2 \, \mathcal{\bar{O}}_2 \left. G(\Lambda_1, \Lambda_2,\Lambda_3 , \Lambda_4 ) \right|_{\Lambda_i=\Lambda} \\ \nonumber && =\left[ 2 \, (\partial_{\Lambda_1} \partial_{\Lambda_3} + \partial_{\Lambda_1} \partial_{\Lambda_4} ) + \Delta_0^2 + 4 \, \Delta_0 \, \partial_{\Lambda_1} \right] \,\left. G(\Lambda_1,\Lambda_2,\Lambda_3 , \Lambda_4 ) \right|_{\Lambda_i=\Lambda} \ \ \ . \eeqa The r.h.s.
here can be written in terms of the diagrams we have computed and is equal to \beqa \gamma_0^2 + 4 \gamma_0 \, D_2 + 2 (D_1 + D_3 + D_4 ) + O ( \hat g^6 ) = {4 \hat{g}^2}e^{-2\Lambda}\left(1-8 \hat{g}^2 \Lambda \right) + \dots \ \ ,\label{eq:secondO2} \eeqa where $\gamma_0 = -4 \hat g^2 \phi/\sin\phi $ is the one-loop scaling dimension for the ground state. The logarithmic divergence in (\ref{eq:secondO2}) correctly reproduces the one-loop energy of the analytic continuation of the second excited state, $\Delta_2=1+4\hat g^2$, matching the QSC result \eq{Dpm}. This state corresponds to the third line from below in Fig.~\ref{fig:spectrum2}. The one-loop result agrees with the one obtained in \cite{Alday:2007he,Bruser:2018jnc} at $\theta=0$ (we expect this result to be the same in the ladders limit). \begin{figure} \centering \def\svgwidth{300pt} \input{diagramspdf.tex} \caption{Four diagrams contributing to the mixing matrix of the cusps with insertions of a scalar operator.} \label{fig:diagrams} \end{figure} \section{Simplifying limit}\label{sec:smallPhi} In this section we consider the limit when $\phi_1+\phi_2\to \phi_3$. Geometrically this limit, which lies at the boundary of the regime of parameters considered in the rest of the paper (\ref{eq:ineq}), describes the situation where the cusp point $\vec{x}_3$ belongs to the circle defined by the extension of the arc $(12)$. In this situation, the points $A$ and $B$ shown in Fig. \ref{fig:triangle} both coincide with the cusp point $\vec{x}_3$. A special case of this limit is the situation when all angles are zero and the triangle reduces to a straight line. The main simplification occurs in the most important part of the result, \begin{equation} \int_{{\bf |}}\frac{du}{u} q_{1}q_{2}e^{-\phi_3 u} \end{equation} which can now be evaluated explicitly. When $\phi_1+\phi_2\to \phi_3$ we can deform the integration contour to infinity and notice that only the large-$u$ asymptotics of the integrand contributes.
This is clear from the following integral \begin{equation} \frac{1}{2 i \pi }\int_{{\bf |}} {du}\frac{e^{\beta u}}{u^\alpha} =\frac{ \beta ^{\alpha -1}}{\Gamma (\alpha )}\;\;\label{eq:gint} \end{equation} where in our case $\beta=\phi_1+\phi_2-\phi_3$ is small and positive. We see that the integral \eq{eq:gint} allows us to convert the large-$u$ expansion into a small-$\beta$ series. The large-$u$ expansion of the integrand is easy to deduce from the Baxter equation \eq{Bax2p}: one simply plugs in the ansatz \begin{equation} q=e^{\phi u}u^{\Delta}\(1+\frac{k_1}{u}+\frac{k_2}{u^2}+\dots\) \label{eq:largeuq} \end{equation} to get a simple linear system for the coefficients $k_i$, which gives \beqa k_1\sin\phi &=& \frac{1}{2} (\Delta -1) \Delta \cos \phi -2 \hat g^2\\ \nonumber k_2\sin^2\phi &=& \frac{1}{48} (\Delta -2) (\Delta -1) \Delta ((3 \Delta -1) \cos (2 \phi )+3 \Delta -5)-(\Delta -1)^2 \hat g^2 \cos \phi+2 \hat g^4 , \\ &\dots& \label{eq:an} \eeqa which allows us to compute explicitly \beqa &&\int_{{\bf |}}\frac{du}{u} q_{1}q_{2}e^{-\phi_3 u}=\frac{2 i \pi \beta ^{-\Delta _1-\Delta _2}}{\Gamma \left(-\Delta _1-\Delta _2+1\right)}\\ \nonumber &&-\frac{i \pi \beta ^{-\Delta _1-\Delta _2+1} \left(-\left(\Delta _1-1\right) \Delta _1 \cot \phi _1-\left(\Delta _2-1\right) \Delta _2 \cot \phi _2+4 \left(\hat g_1^2 \csc \phi _1+ \hat g_2^2\csc \phi _2\right)\right)}{\Gamma \left(-\Delta _1-\Delta _2+2\right)} + \dots \eeqa In this way we get the following small-$\beta$ expansion for the bracket in the numerator of the structure constant with insertions at $1$ and $2$: \beqa \label{eq:CsmallPhi} \br{ q_1 q_2 e^{- \phi_3 u } } &=& \frac{1 \,}{\Gamma (-\Delta_1 -\Delta_2+1)}\\ & + &\frac{\left((\Delta_1 -1) \Delta_1 \cot \phi_1 +(\Delta_2-1) \Delta_2 \cot \phi_2-4 \left( \hat g_1^2 \csc \phi_1 +\hat g_2^2 \csc \phi_2\right)\right)}{2 \, \Gamma (-\Delta_1 -\Delta_2+2)} \, \beta \nonumber \\ \nonumber &+& \dots .
\eeqa In principle, the expansion can be performed to an arbitrary order in $\beta = \phi_1 + \phi_2 - \phi_3$. Similarly, the norm factors appearing in the denominator of the structure constants simplify when $\phi_i \rightarrow 0$ for one of the cusps $i=1$ or $i=2$. This limit describes the situation where the cusp angle disappears. As we reviewed in Sec. \ref{sec:qscexc}, at $\phi=0$ the Schr\"odinger equation becomes exactly solvable and the spectrum is explicitly known \cite{Correa:2012nk}. The main ingredient for the computation of the norm is the integral (\ref{eq:normQ}), and it is clear that for small $\phi$ it simplifies by the very same mechanism we have just described. In particular, every term in the $1/u$ expansion of the integrand gives an integral of the kind (\ref{eq:gint}), which allows us to organize the result in powers of $\phi$. Naturally, we should also take into account the scaling of the coefficients $k_i$ appearing in (\ref{eq:largeuq}) for $\phi \sim 0$. Notice that the expressions (\ref{eq:an}) are apparently singular at $\phi \sim 0$. However, a nice feature of this limit is that most of these divergences cancel systematically, due to the fact that the scaling dimension itself depends on $\phi$ in a nontrivial way. In particular, we found numerically that, for the QSC solution corresponding to the ground state, the coefficients $k_n$ have the following scaling for $\phi \rightarrow 0$: \begin{equation} \left\{ k_1 , k_2, k_3, k_4,k_5, \dots \right\} \sim \left\{0 \, ,\, O(1)\,, \,0\,, \, O(1) \, , \, 0\,,\, O(1) \,, \dots \right\}. \label{eq:gsScale} \end{equation} This observation is quite powerful. Indeed, combined with the parametric form of the coefficients (\ref{eq:an}), the requirement that they scale as (\ref{eq:gsScale}) fixes all terms\footnote{A very similar observation was made in the context of the fishnet models at strong coupling in \cite{Gromov:2017cja}.} in the expansion of $\Delta$ for small $\phi$!
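The workhorse in these manipulations is the integral \eq{eq:gint}, whose left-hand side is a Bromwich contour integral, i.e. the inverse Laplace transform of $u^{-\alpha}$ evaluated at $t=\beta$. This makes a direct numerical verification straightforward; the sketch below uses mpmath's \texttt{invertlaplace} routine, which is our choice of tool (any numerical inverse-Laplace method would do):

```python
from mpmath import mp, invertlaplace, gamma

mp.dps = 25  # working precision

beta = 0.3
for alpha in (3, 1.5):
    # l.h.s. of eq. (gint): (1/2 pi i) * int e^{beta u} u^{-alpha} du,
    # i.e. the inverse Laplace transform of u^{-alpha} at t = beta
    lhs = invertlaplace(lambda u, a=alpha: u**(-a), beta, method='talbot')
    # r.h.s. of eq. (gint): beta^(alpha - 1) / Gamma(alpha)
    rhs = beta**(alpha - 1) / gamma(alpha)
    assert abs(lhs - rhs) < 1e-4 * abs(rhs)
```

The half-integer case illustrates that the identity holds beyond integer powers, which is what the small-$\phi$ expansion of the norm actually relies on.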
More precisely, we find that the scaling (\ref{eq:gsScale}) corresponds to two solutions for $\Delta( \phi)$: one is the ground state, for which we reproduce the results of \cite{Correa:2012nk}, obtained using perturbation theory for the Schr\"odinger equation; namely, for the first two orders, \beqa \Delta_0 &=& \frac{1}{2}\left(1 - \sqrt{1 + 16 \, \hat{g}^2 }\right) + \frac{\hat g^2 \left(-16 \hat g^2+\sqrt{16 \hat g^2+1}+1\right)}{\left(16 \hat g^2-3\right) \sqrt{16 \hat g^2+1}}\, \phi^2 + \dots . \label{eq:DeltagsPhi} \eeqa The other solution describes one of the excited-state trajectories\footnote{As explained in Sec.~\ref{sec:qscexc}, this trajectory is, strictly speaking, formed by patching together pieces of infinitely many levels, which are separate for finite $\phi$, see Fig.~\ref{fig:spectrumzero2}.} \begin{equation} \Delta_0' = \frac{1}{2} \left(1+\sqrt{16 \hat g^2+1}\right)+\frac{\hat g^2 \left(16 \hat g^2+\sqrt{16 \hat g^2+1}-1\right)}{\left(16 \hat g^2-3\right) \sqrt{16 \hat g^2+1}}\, \phi^2 + \dots \ . \end{equation} It is straightforward to generate higher orders in $\phi$ with this method. The remaining infinitely many states can be described by allowing for a more general scaling of the coefficients $k_m$, see Appendix~\ref{app:smallPhi} for details and some results. Plugging in the scaling of the coefficients (\ref{eq:gsScale}), for the solution corresponding to the ground state we find \begin{equation} \br{ q^2 } = \frac{ 1 }{\Gamma( 1 - 2 \Delta ) } + O(\phi^2 ) , \end{equation} which combined with (\ref{eq:CsmallPhi}) gives a finite result for the OPE coefficient at $\phi_1 =\phi_2 = \phi_3 = 0$: \begin{equation} \left.
C^{\bullet \bullet \circ}_{123} \right|_{\phi_i=0} = \left.\frac{ \sqrt{ \Gamma( 1 - 2 \Delta_1 ) \, \Gamma( 1 - 2 \Delta_2 ) }}{\Gamma( 1- \Delta_1 - \Delta_2 ) } \right|_{\phi_i=0} = \frac{ \sqrt{\Gamma \left(\sqrt{16 \hat g_1^2+1}\right)} \sqrt{\Gamma \left(\sqrt{16 \hat g_2^2+1}\right)} }{ \Gamma \left(\frac{1}{2} \left(\sqrt{16 \hat g_1^2+1}+\sqrt{16 \hat g_2^2+1}\right)\right)}, \label{eq:Cphi0} \end{equation} where we used (\ref{eq:DeltagsPhi}) in the last step. This is in perfect agreement with the result of \cite{Kim:2017sju}. It is simple to obtain further orders in the small-angle expansion; the next-to-leading order in all angles is reported in Appendix~\ref{app:smallPhi}. \section{Numerical evaluation} \label{sec:num} \begin{figure} \centering \includegraphics[scale=0.6]{HHL1p0.pdf} \includegraphics[scale=0.6]{HHL0p3.pdf} \caption{Diagonal HHL correlator for the first several excited states ($n=0,1,\dots,7$) with all angles equal to $\phi=1 $ (left) or $\phi=1/3 $ (right). Colors are the same as in Figure \ref{fig:spectrum2}. } \label{fig:HHLnum} \end{figure} The expression for the 3-cusp correlator we found has the form of the integral $\int_{{\bf |}} q_{\Delta_1}q_{\Delta_2}e^{-u\phi_3}\frac{du}{2\pi i u}$, which is guaranteed to converge for large enough coupling, as the q-functions behave as $e^{\phi u}u^\Delta$, where $\Delta$ decreases linearly with $\hat g$ and reaches arbitrarily large negative values. However, we would like to be able to use these expressions at small coupling too, where the convergence of the integral is only guaranteed when both states are ground states; for the excited states the integral is formally ill-defined. To define the integrals we introduce the following $\zeta$-type regularization: we multiply the integrand by a negative power $u^\alpha$, compute the integral for sufficiently large negative $\alpha$, and then analytically continue to $\alpha=0$. The key integral is \eq{eq:gint}, where the r.h.s.
gives the analytic continuation to all values of $\alpha$. We see that for large negative $\alpha$ the expression decays factorially. This fact is crucial for our numerical evaluation of the correlation function. Once the value of the energy is known numerically, it is very easy to get an asymptotic expansion of the q-functions at large $u$ to essentially any order. However, since the poles of the q-functions accumulate at infinity, this expansion is doomed to have zero radius of convergence. Nevertheless, if we expand the integrand at large $u$ and then integrate each term of the expansion using \eq{eq:gint}, we enhance the convergence of this series by a factorially decaying factor, making it a very efficient tool for numerical evaluation. We applied this method to compute the correlation function for several excited states (see Fig. \ref{fig:HHLnum}). The method allows one to compute the correlator even faster than the spectrum. We checked that it works very well for $\phi\sim 1$, easily giving $10$ digits of precision, but it seems to diverge for $\phi=1.5$. To cross-check our precision we also used the $d\Delta/dg$ correlator \eq{eq:dddg}, which is given by the same type of integrals. \section{Correlation functions at weak coupling} \label{sec:weak} In this section we present some explicit results for the structure constants at weak coupling. Our all-loop expression for the structure constants \eq{correlator} is rather straightforward to evaluate perturbatively. First, one should find the Q-function $q$ at weak coupling, which can be done by iteratively solving the Baxter equation as discussed in section \ref{sec:qscexc}. The result at each order is given as a linear combination of twisted $\eta$-functions (see \eq{defeta}) multiplied by exponentials $e^{\phi u}$ and rational functions of $u$, as in e.g. \eq{qgs}.
Then the integrals appearing in the numerator and denominator of \eq{correlator} can easily be done by closing the integration contour to encircle the poles of $q(u)$ in the lower half-plane, giving an infinite sum of residues\footnote{For excited states the integral on the l.h.s. of \eq{intres} may be divergent. We still replace it by the (convergent) sum of residues, which corresponds to the $\zeta$-type regularization discussed in section \ref{sec:num}.}: \begin{equation} \label{intres} \frac{1}{2 \pi i} \, \int_{{\bf |}} f(u) \, du = \sum_{n=0}^{\infty} \left. \text{Res} \, f(u) \right|_{u=-i n} . \end{equation} The residues come from the poles of the $\eta$-functions, e.g. \begin{equation} \eta^z_n= \frac{z^m}{(u + i m )^n} + O(1) , \;\;\; u \to -i m , \;\;\; m=0,1,\dots . \end{equation} To get the residue one may need more coefficients of this Laurent expansion, which are given by zeta values or polylogarithms. Finally, one should perform the infinite sum in \eq{intres}, which again may give polylogarithms. In this way we have computed, as a demonstration, the first one or two orders of the weak coupling expansion (going to higher orders is in principle straightforward, limited by computer time and the need to simplify the resulting multiple polylogarithms). The integrals giving the norm of the $q$-functions are especially simple. Below, we assume that $q(u)$ is normalized\footnote{Notice that, while the brackets in the numerator and denominator of (\ref{correlator}) depend on this normalization, the structure constants are clearly invariant.} such that the leading coefficient in the large $u$ expansion is $1$, so $q(u) \simeq u^{\Delta} \, e^{\phi u}$. For the ground state ($L=0$) we find \begin{equation} \br{ q^2 }_{L=0} = 1+8 \, \hat g^2 \, \frac{ \phi}{\sin\phi } \, \gamma_{\text{E}} + \mathcal{O}(\hat g^4 ) , \end{equation} where $\gamma_{\text{E}}$ is the Euler-Mascheroni constant.
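The quoted pole structure follows directly from the definition \eq{defeta}: in the one-index sum, only the term $n_1=m$ is singular at $u=-im$. A short self-contained numerical sketch (plain Python; the truncation $N$ is ours, and we take $|z|<1$ so the truncated sum converges):

```python
def eta(u, z, s=1, N=2000):
    # one-index eta-function of eq. (defeta), truncated at N terms;
    # the tail is suppressed geometrically for |z| < 1
    return sum(z**n / (u + 1j * n)**s for n in range(N))

z, m = 0.5, 2
u = -1j * m + 1e-4                # approach the pole at u = -i m
# Laurent behavior: eta = z^m / (u + i m)^s + O(1), so multiplying by
# (u + i m) and taking u -> -i m isolates the coefficient z^m
assert abs((u + 1j * m) * eta(u, z) - z**m) < 1e-3
```

For the multi-index $\eta$-functions the same mechanism applies term by term, with higher Laurent coefficients involving the nested tails of the sum.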
For the excited states $(L, \pm)$\footnote{This notation for the excited states is explained in Section \ref{sec:qscexc}, see also Table \ref{tab:states}.} corresponding to the insertion of $L$ scalars, we have \beqa \br{ q^2 }_{1,\pm}& =& \pm 8 \hat g^2 +\dots \\ \br{ q^2 }_{2,\pm} &=& \pm 16 \, \cos\phi \;\hat g^2 +\dots \label{eq:2pm} \eeqa The $L=3$ result is given in \eq{norm3w}. Notice here that for the states $2^+$ and $2^-$ the signs of $\br{ q^2 }$ are different at weak and strong coupling. Indeed, at strong coupling the relation with the wavefunctions \eq{eq:normQ} implies that $\br{q^2}$ is positive/negative for even/odd states, respectively. Since the even state is $2^-$ (see Table~\ref{tab:states}), in (\ref{eq:2pm}) we see explicitly that these signs can change at weak coupling. The structure constants are more involved. For the HHL correlator without scalar insertions we have, to 1-loop order, \begin{equation} ( C^{\bullet \bullet \circ})_{L=0}=1+\hat g_1^2 F_{123}+\hat g_2^2 F_{213}+\dots \end{equation} where \beqa\la{F123} F_{123}=&& \frac{1}{\sin\phi_1} \[2 i \left(\text{Li}_2(e^{-2 i \phi_1})-\text{Li}_2(e^{-i\phi_1-i\phi_2+i\phi_3})+\text{Li}_2(e^{i \phi_1-i\phi_2+i\phi_3})\right)-\frac{i \pi ^2}{3} \right. \\ \nonumber && \left. +2 \left(\phi _1-\phi _2+\phi _3\right) \log \left(\frac{1-e^{-i \phi _1-i\phi _2+i\phi _3}}{1-e^{i \phi _1-i\phi _2+i\phi _3}}\right)-4 \phi _1 \log \left(\frac{\sin \frac{1}{2} \left(\phi _1+\phi _2-\phi _3\right)}{\sin\phi_1} \right)\] \ . \eeqa For the correlators with excited states, both the numerator and the denominator in the expression \eq{correlator} for $C^{\bullet \bullet \circ}$ vanish at weak coupling. Due to this, even the leading order in the expansion is nontrivial and requires using $q(u)$ computed to $\hat g^2$ accuracy.
For the correlators with two $L=1$ states we find \beqa ( C^{ \bullet \bullet \circ} )_{L=1} &=&\frac{1}{2} \, \left(\frac{\hat g_1 }{ \hat g_2 } \pm \frac{\hat g_2 }{ \hat g_1 } \right) +\dots \ , \eeqa while for $L=2$ we get a nontrivial dependence on the angles, \beqa ( C^{ \bullet \bullet \circ} )_{L=2} =&& \frac{1}{2} \, \left( \sqrt{ \frac{ \hat g_1^2 \, \cos\phi_1 }{\hat g_2^2 \, \cos\phi_2} } \pm \sqrt{ \frac{ \hat g_2^2 \, \cos\phi_2 }{\hat g_1^2 \, \cos\phi_1} } \right) \\ \nonumber && \times \left(- \frac{\cos\phi_3 }{\sin\phi_1 \, \sin\phi_2 }+\cot\phi_1 \, \cot \phi_2 +2 \right) + \dots \ \ . \eeqa Here we have the plus sign for correlators corresponding to $(L^+,L^+)$ or $(L^-,L^-)$ states, and the minus sign for the $(L^+,L^-)$ correlator. Curiously, the HHL results do not have a smooth limit when one of the couplings goes to zero, which would correspond to the HLL case (this is related to a singularity in the normalization of the 2-point function). This means we have to compute the HLL correlators separately. For H${}_n$LL with the excited state being $\Delta_{1,+}$ we get \begin{equation}\la{C1p} (C^{\bullet \circ \circ} )_{1^+,0,0}= -\sqrt{2 \, \hat g^2} \, \frac{\cos(\frac{1}{2}(\phi_2 - \phi_3 ))}{\cos{\frac{1}{2}\phi_1}} \ \ , \end{equation} while for $\Delta_{1,-}$ we have \begin{equation}\la{C1m} (C^{\bullet \circ \circ} )_{1^-,0,0}= -\sqrt{2 \, \hat g^2} \, \frac{\sin(\frac{1}{2}(\phi_2 - \phi_3 ))}{\sin{\frac{1}{2}\phi_1}} \ \ . \end{equation} For the $L=2$ states we find \beqa\la{C2p} (C^{\bullet \circ \circ} )_{2^+,0,0}&=& -\hat g\,i \, \frac{\sin(\phi_2 - \phi_3 )}{\sin{\phi_1} } \, \sqrt{\cos\phi_1} \ \ , \\ \la{C2m} (C^{\bullet \circ \circ} )_{2^-,0,0}&=& \hat g\,i \, \frac{ \sqrt{\cos\phi_1} }{\sin^2\phi_1} \, ( \cos\phi_1 \, \cos(\phi_2 - \phi_3 ) - 1)\ \ . \eeqa These two structure constants are purely imaginary due to the sign of $\br{q^2 }$ at weak coupling. We also present the results for the $L=3$ states in Appendix \ref{app:exc}.
\section{The 4-point function and twisted OPE} \label{sec:ope} In this section we examine more closely the expression for the 4-point function which we obtained in \eq{G1234delta}. We interpret it as an OPE expansion and cross-test it at weak coupling against our perturbative data for the correlation functions. We also present some conjectures on the generalization of this OPE expansion and its applications to the computation of more general correlators. \subsection{The 4-cusp correlation function}\label{sec:4cusp} \begin{figure} \centering \includegraphics[scale=1.5]{archesFD4A.pdf} \caption{The 4-cusp correlator. Its OPE-like expansion \eq{G1234delta4} provides predictions for the HLL structure constants.} \label{fig:4cusp} \end{figure} Our starting point is the OPE-like formula \eq{G1234delta} for the 4-cusp correlator. It is based on the 2-pt function of cusps with angle $\phi_0$, but the four cutoffs $\Lambda_1,\dots,\Lambda_4$ give it the structure of a 4-point function with four cusp angles $\phi_a$ determined by the $\Lambda$'s as shown in Fig.~\ref{fig:4cusp}. To make the analogy clearer, we notice that we can get rid of the wavefunctions in \eq{G1234delta} entirely and rewrite it in terms of the structure constants as follows: \beqa\la{G1234delta4} G(\Lambda_1,\Lambda_2,\Lambda_3,\Lambda_4) =\sum_{n=0}^\infty &&{C_{012}^{\bullet_{n}\circ\circ}} {C^{\bullet_{n}\circ\circ}_{043}}{ }\(\frac{e^{-2\Lambda}}{ L_{043} L_{012} }\)^{\Delta_n} \;, \eeqa where $\Lambda\equiv \frac{\Lambda_1+\Lambda_2+\Lambda_3+\Lambda_4}{4}$, while the angles $\phi_1,\dots,\phi_4$ at the cusps $y_a$ (see Fig.~\ref{fig:4cusp}) can be found from $w_{\phi_0}(\Lambda_a-\Lambda_b)=\phi_b-\phi_a$ with $w$ defined by \eq{defw}. 
More explicitly, \begin{equation}\la{tophi} e^{-i \phi_{12}}=\frac{e^{\Lambda _{12}}+e^{i \phi_0 }}{1+e^{\Lambda _{12}+i \phi_0 }}\;\;,\;\; e^{-i \phi_{43}}=\frac{e^{\Lambda _{43}}+e^{i \phi_0 }}{1+e^{\Lambda _{43}+i \phi_0 }}\;, \end{equation} where we denoted $\phi_{ab}=\phi_a-\phi_b$ and \begin{equation} \label{L12def} \Lambda_{12}=\Lambda_1-\Lambda_2, \ \ \Lambda_{43}=\Lambda_4-\Lambda_3 \ . \end{equation} The factor $L_{abc}$ as before is defined by \begin{equation}\label{eq:defL} L_{abc} = \frac{ \sqrt{ \sin\frac{1}{2}( \phi_a + \phi_{b}-\phi_c) \, \sin\frac{1}{2}( \phi_a - \phi_{b}+\phi_c)} }{\sin\phi_a} . \end{equation} We can view equation \eq{G1234delta4} as defining the 4-cusp correlator in terms of the structure constants, opening an easy way for computing this quantity in various regimes including numerically at finite coupling. This equation suggests a natural interpretation in terms of an OPE expansion for pairs of cusps. To understand this point, let us first investigate the space-time dependence of the 4pt function (\ref{G1234delta4}), which comes through the factors \begin{equation}\label{eq:4ptfactors} \left( \frac{e^{-2 \Lambda} }{L_{012} \, L_{034} } \right)^{\Delta_n} . \end{equation} To decode the dependence of (\ref{eq:4ptfactors}) on the cusp positions, it is convenient to introduce six complex parameters: four space-time positions $y_i$, $i=1, \dots, 4$, defined as \begin{equation} y_1 = \zeta_+( -\Lambda_1 ) , \;\;\; y_2 = \zeta_-(\Lambda_2 ) , \;\;\; y_3 = \zeta_+(\Lambda_3) , \;\;\; y_4 =\zeta_-(-\Lambda_4) , \end{equation} (where $\zeta_\pm$ is the parameterization defined by \eq{eq:zazb}) together with the intersection points of the two arcs $x_1$, $x_2$ (see Fig. \ref{fig:4cusp}), which we denote as $y_0 \equiv x_1$, $y_5 \equiv x_2$. 
These six points are not all independent, as we can express $y_5$ in terms of the other five complex coordinates through the solution of the equations\footnote{These equations express the fact that four points lying on the same line or circle have a real cross-ratio.} \begin{equation} \frac{ y_{53} \, y_{10} }{y_{31} \, y_{50} } = \frac{ y_{53}^* \, y_{10}^* }{y_{31}^* \, y_{50}^* } , \;\;\;\;\; \frac{ y_{54} \, y_{20} }{y_{42} \, y_{50} } = \frac{ y_{54}^* \, y_{20}^* }{y_{42}^* \, y_{50}^* } , \end{equation} where $y_{ab} = y_a-y_b$. From these two relations we can obtain $y_5$ as a rational function of $y_i$, $i=0,\dots,4$ and their complex conjugates.\footnote{We have also found nice explicit parameterizations of the spacetime dependence in terms of cross-ratios of these points and we present them in Appendix \ref{sec:4ptparam}. } Eliminating the parameters $\Lambda_i$ in favour of the $y_i$ coordinates, we find that the term (\ref{eq:4ptfactors}) appearing in the 4pt function can be written as \begin{equation} \left( \frac{e^{-2 \Lambda} }{L_{012} \, L_{034} } \right)^{\Delta_n} = | y_{05}^2 |^{\Delta_n} \, \frac{ | y_{12} |^{\Delta_n} }{| y_{15} \, y_{25} \, |^{\Delta_n}}\, \frac{ | y_{34} |^{\Delta_n} }{| y_{30} \, y_{40} \, |^{\Delta_n}} . \end{equation} Notice that this is the space-time dependence of the product of two 3pt functions, divided by a 2pt function, and (\ref{G1234delta4}) can be rewritten suggestively as \begin{equation}\label{eq:finalG} G = \sum_n \frac{ C_{512}^{\bullet_n \circ \circ} }{| y_{15} \, y_{25} \, |^{\Delta_n} \, | y_{12} |^{-\Delta_n}}\, \frac{ C_{043}^{\bullet_n \circ \circ} }{| y_{30} \, y_{40} \, |^{\Delta_n} \, | y_{34} |^{-\Delta_n}} \, \left( \frac{1}{| y_{05} |^{2\Delta_n} } \right)^{-1} . \end{equation} This relation is illustrated in Fig. \ref{fig:archesOPE} and is strongly reminiscent of the usual OPE decomposition of a 4pt function in terms of 3pt correlators. 
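The concyclicity condition in the footnote above is easy to illustrate numerically. The following is a minimal toy check (our own illustration, not code from this work) that four points on a common circle have a real cross-ratio; in the present setup, the two relations then determine $y_5$ as the second intersection point of the circle through $(y_1,y_3,y_0)$ with the circle through $(y_2,y_4,y_0)$.

```python
import cmath, math, random

random.seed(7)
center, rad = 0.3 + 0.1j, 1.7   # an arbitrary circle in the complex plane

# four random points on that circle
a, b, c, d = (center + rad * cmath.exp(1j * random.uniform(0, 2 * math.pi))
              for _ in range(4))

# cross-ratio (a,b;c,d): real precisely when the four points are concyclic
cross = ((a - c) * (b - d)) / ((a - d) * (b - c))
assert abs(cross.imag) <= 1e-9 * max(1.0, abs(cross))
```

Imposing the two reality conditions simultaneously is what singles out the intersection points of the two arcs in Fig.~\ref{fig:4cusp}.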
In the next subsection we provide an interpretation of this relation on the operator level. \begin{figure} \centering \includegraphics[scale=0.75]{archesOPEA.pdf} \caption{The OPE decomposition of the four-point function, illustrating equation \eq{eq:finalG}.} \label{fig:archesOPE} \end{figure} \subsection{The cusp OPE} Let us now rederive the decomposition \eq{eq:finalG} of the 4pt function from first principles using the logic inspired by the usual OPE. The idea, illustrated in Fig. \ref{fig:archesOPE2}, is to express the cusps at $y_1$, $y_2$ as a combination of cusp operators inserted at $y_0$: \begin{equation}\label{eq:OPEG} W_{y_1}^{y_3} \, W_{y_2}^{y_4} =\sum_n \, \mathcal{C}^{y_1, y_2}_n \; \left[\mathcal{O}_n \, \left( W_{y_4}^{y_0} \, W_{y_0}^{y_3} \right) \right] , \end{equation} where $\mathcal{C}^{y_1, y_2}_n $ are some coefficients, $W_x^y$ are the Wilson line operators defined in (\ref{eq:Wxyop}), and $\mathcal{O}_n$ represent projector operators on the $n$-th excitation of the cusp at $y_0$. To make sense of the rhs of (\ref{eq:OPEG}), we need to specify a regularization scheme; we assume that the $\epsilon$ regularization defined in the rest of the paper is used, and the projectors $\mathcal{O}_n$ are the ones defined explicitly in section \ref{sec:insert}. Notice that the expansion corresponds to a change in the limit of integration of the Wilson lines. Derivatives of the Wilson line with respect to its endpoints produce the scalar insertions described in Sec. \ref{sec:insert}. For this reason, at least in the ladder limit considered here, we expect that only these excitations are involved in the OPE. To determine the coefficients $\mathcal{C}_n^{y_1,y_2}$, we proceed in the standard logic of the OPE and place equation (\ref{eq:OPEG}) inside an expectation value. 
Considering the limit where $y_3 $,$y_4$ converge towards $y_5$ (with the usual point-splitting regulator $\epsilon$), and projecting on the $n$-th state, we have \begin{equation} \bar{\mathcal{O}}_n \langle W_{y_1}^{y_5} \, W_{y_2}^{y_5} \rangle = \mathcal{N}_{\Delta_n, \epsilon} \, \frac{C_{512}^{\bullet_n \circ \circ} }{| y_{15} \, y_{25} |^{\Delta_n} \, | y_{12}|^{-\Delta_n} } , \end{equation} where we noticed that in this limit the configuration reduces to an HLL 3pt function, which we related to the structure constant as in Sec. \ref{sec:HLLexc}. Here, the constant $\mathcal{N}_{\Delta_n, \epsilon}$ is the square root of the normalization of the 2pt function, explicitly defined in (\ref{Nn2}). On the other hand from the rhs of (\ref{eq:OPEG}) we obtain (see Fig. \ref{fig:OPEexp2}): \begin{equation} \label{eqCnope} \bar{\mathcal{O}}_n \, \left( \sum_m \, \mathcal{O}_m \; \langle \, W_{y_0 }^{y_5 } \; W_{y_0 }^{y_5 } \, \rangle \right) = \mathcal{C}_{n}^{y_1 , y_2} \, \frac{\mathcal{N}_{\Delta_n, \epsilon}^2 }{|y_{05} |^{2 \Delta_n} } , \end{equation} therefore we find the coefficients: \begin{equation} \mathcal{C}_n^{y_1,y_2} = C_{512}^{\bullet_n \circ \circ } \, \left(\frac{ |y_{12} \, y_{05}^2 | }{|y_{15} \, y_{25} |} \right)^{\Delta_n} \, \mathcal{N}_{\Delta_n, \epsilon}^{-1} . \end{equation} Taking the expectation value of (\ref{eq:OPEG}) now fixes the 4pt function precisely to the form (\ref{eq:finalG}). In the next subsection we will discuss how to apply similar logic to higher-point correlators. 
\begin{figure} \centering \includegraphics[scale=0.8]{OPEexpA.pdf} \caption{Expansion of the Wilson lines starting at points $y_1$, $y_2$ in terms of Wilson arcs emanating from $y_0$, as written in equation \eq{eq:OPEG}.} \label{fig:archesOPE2} \end{figure} \begin{figure} \centering \includegraphics[scale=0.63]{OPEexp2A.pdf} \caption{Graphical representation of equation \eq{eqCnope} determining the value of $\mathcal{C}_n^{y_1,y_2}$.} \label{fig:OPEexp2} \end{figure} \subsection{OPE expansion of more general correlators} The OPE approach we presented above can also be applied to more general correlation functions. As one of the possible generalizations\footnote{One could also consider correlators with more than four protected cusps. In particular, the 4pt function considered in this section can naturally be viewed as a limit of the correlator of six protected cusps, which is obtained by introducing a finite $\epsilon$ cutoff around $y_1$ and $y_4$. This six point function can also be decomposed using the OPE. }, let us consider the four point function shown in Figure \ref{fig:OPE4pt}. For simplicity of notation, we assume that the same scalar polarization $\vec{n}$ is chosen for the Wilson lines denoted as $C$ and $B$, while on lines $A$ and $D$ we have a different polarization vector $\vec{m}$. This defines a configuration where the two cusps at $y_1$ and $y_4$ are not protected, while the remaining two are. 
Explicitly, we are considering the expectation value: \begin{equation} G_{1243}^{\bullet \circ \bullet \circ} = \frac{ \langle W_{y_1 }^{y_2}(\vec{m} ) \; W_{y_2}^{y_4 }(\vec{m} ) \; W_{y_4 }^{y_3}(\vec{n} ) \; W_{y_3}^{y_1 }(\vec{n} ) \; \rangle }{ \mathcal{N}_{1}\, \mathcal{N}_{4} } , \end{equation} where we divided by the usual 2pt function normalization factors $\mathcal{N}_1$, $\mathcal{N}_4$ for the unprotected cusps (defined explicitly in (\ref{Nn2})) in order to get a finite result\footnote{As usual we assume the point-splitting $\epsilon$-regularization close to the cusps. }. Our conjecture for this quantity is based on the assumption that we can use the same type of OPE expansion as in the previous section. This allows us to replace each pair of consecutive cusps with a sum over excitations of a single cusp, whose position is defined by the geometry. For instance, the two cusps at $y_3$ and $y_4$, which are defined by the consecutive sides $A$ $B$ $C$ of the Wilson loop, are traded for a sum over excitations of a single cusp at the point $D$, defined by the extension of the lines $A$ and $C$. \begin{figure} \centering \includegraphics[scale=0.75]{OPE4pt.pdf} \caption{The 4pt function $G_{1243}^{\bullet \circ \bullet \circ}$ of two protected and two unprotected cusps. We assume that only two scalar polarizations are involved: $\vec{n}$ on the arcs $B$,$C$ and $\vec{m}$ on the arcs $A$,$D$, so that the configuration depends on a single effective coupling. } \label{fig:OPE4pt} \end{figure} As expected, the OPE expansion gives rise to nontrivial crossing equations. Let us see this explicitly here. Taking into account the space-time dependence as in the previous section, from the contraction of $y_3$ and $y_4$ we obtain (see Fig. 
\ref{fig:OPE4pttwo} on the right): \begin{equation} G_{1243}^{\bullet \circ \bullet \circ} = \sum_n \, \frac{ C_{D12}^{\bullet_n \bullet \circ} }{ |y_{1D} |^{\Delta_n + \Delta_0} \, |y_{2D} |^{\Delta_n - \Delta_0} \, |y_{12} |^{\Delta_0 - \Delta_n} } \, \frac{ C_{B43}^{\bullet_n \bullet \circ} }{ |y_{B4} |^{\Delta_n + \Delta_0} \, |y_{B3} |^{\Delta_n - \Delta_0} \, |y_{34} |^{\Delta_0 - \Delta_n} } \, |y_{BD} |^{2\Delta_n } ,\label{eq:OPE4pt1} \end{equation} which now involves HHL structure constants\footnote{Here we assume that the excited states studied in the rest of this paper constitute a full enough basis which makes possible this decomposition. This point requires further investigation. If that is not the case one will have to add a sum over some additional states as well.}. Performing the OPE decomposition in the crossed channel, which corresponds to contracting $y_1$ and $y_3$ (see Fig. \ref{fig:OPE4pttwo} on the left), yields a different expansion: \begin{equation} G_{1243}^{\bullet \circ \bullet \circ} = \sum_{n} \, \frac{ C_{A42}^{\bullet_n \bullet \circ} }{ |y_{4A} |^{\Delta_n + \Delta_0} \, |y_{2A} |^{\Delta_n - \Delta_0} \, |y_{24} |^{\Delta_0 - \Delta_n} } \, \frac{ C_{C13}^{\bullet_n \bullet \circ} }{ |y_{C1} |^{\Delta_n + \Delta_0} \, |y_{C3} |^{\Delta_n - \Delta_0} \, |y_{13} |^{\Delta_0 - \Delta_n} } \, |y_{AC} |^{2\Delta_n }.\label{eq:OPE4pt2} \end{equation} Notice that we left the dependence on all angles implicit; however, we point out that the sums in (\ref{eq:OPE4pt1}) and (\ref{eq:OPE4pt2}) are over different spectra, characterized by the same coupling but different cusp angles. Proving the equivalence between (\ref{eq:OPE4pt1}) and (\ref{eq:OPE4pt2}) would be an important test of these expressions, and more generally of the OPE expansion on which they are based\footnote{A somewhat related OPE approach was discussed in \cite{Kim:2017sju} for the $\phi=0$ case. 
It would be interesting to clarify possible connections with the OPE that we discuss here, which seems not to be a completely trivial task. We thank S.~Komatsu for discussions of this point.}. We leave this nontrivial task for the future. Crossing relations such as the one presented above could perhaps also be used to gain information on the HHH structure constants, which would appear in one of the two channels in the OPE expansion of correlators of the form $G_{1234}^{\bullet \bullet \circ \circ}$. \begin{figure} \centering \includegraphics[scale=0.3]{OPE4ptNL.pdf} \caption{The two alternative OPE decompositions of the 4pt function $G_{1243}^{\bullet \circ \bullet \circ }$. } \label{fig:OPE4pttwo} \end{figure} \subsection{Checks at weak coupling} In this section, we present some tests of the 4pt OPE expansion \eq{eq:finalG} at weak coupling. We will show that the perturbative expansion of the 4pt function reproduces our results for the HLL structure constants. In Appendix \ref{HLLspace} we also verify at 1 loop that when two of the four points collide, the 4pt function reduces precisely to a 3pt HLL correlator, including the expected spacetime dependence. This provides an important test of our results for the structure constants and also of the OPE expression for the 4pt function. 
At one loop it is very easy to compute the 4pt function, and we find \begin{equation} G(\Lambda_1,\Lambda_2,\Lambda_3,\Lambda_4)=1+ \int_{-\Lambda_4}^{\Lambda_2} ds\int_{-\Lambda_1}^{\Lambda_3} dt \frac{2\hat{g}^2}{\cosh (s+t)+\cos (\phi_0 )} \ , \end{equation} resulting in \beqa\nonumber G&=&1+\frac{2 i \hat g^2}{\sin\phi_0}\Big[ \text{Li}_2\left(-e^{-i \phi_0 +\Lambda _{12}}\right)-\text{Li}_2\left(-e^{i \phi_0 +\Lambda _{12}}\right) -\text{Li}_2\left(-e^{-i \phi_0 -\Lambda _{23}}\right)+\text{Li}_2\left(-e^{i \phi_0 -\Lambda _{23}}\right) \\&-&\text{Li}_2\left(-e^{-i \phi_0 +\Lambda _{14}}\right)+\text{Li}_2\left(-e^{i \phi_0 +\Lambda _{14}}\right)+\text{Li}_2\left(-e^{-i \phi_0 +\Lambda _{43}}\right)-\text{Li}_2\left(-e^{i \phi_0 +\Lambda _{43}}\right)\Big] \ , \label{G41loop} \eeqa where we denoted (note the difference with \eq{L12def}) \begin{equation} \Lambda_{23}=\Lambda_2+\Lambda_3, \ \ \Lambda_{14}=\Lambda_1+\Lambda_4\ . \end{equation} Expanding this expression at large $\Lambda$ we get: \beqa G&=&g_0+\Lambda h_0+e^{-2\Lambda}g_1+e^{-4\Lambda}g_2+e^{-6\Lambda}g_3+{\cal O}(e^{-8\Lambda}) \ , \eeqa where the first coefficient is rather involved, \beqa g_0&=&\nonumber 2 \frac{\hat g^2}{\sin\phi_0} \left(i \text{Li}_2\left(-e^{\Lambda _{12}-i \phi_0 }\right)-i \text{Li}_2\left(-e^{i \phi_0 +\Lambda _{12}}\right)+i \text{Li}_2\left(-e^{\Lambda _{43}-i \phi_0 }\right)-i \text{Li}_2\left(-e^{i \phi_0 +\Lambda _{43}}\right)\right)\\ &+&2 \, {\hat g^2}{}\frac{\Lambda _{12} \phi_0 +\Lambda _{43} \phi_0}{\sin\phi_0} +1 \ , \label{resg0} \eeqa while the rest are simpler, \beqa h_0&=&8 \frac{\hat g^2 \phi_0}{\sin\phi_0} \ , \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ g_1= 8 \hat g^2 \cosh \left(\frac{\Lambda _{12}+\Lambda _{43}}{2} \right)\ ,\\ \nonumber g_2&=&- 4 \hat g^2 \cosh \left(\Lambda _{12}+\Lambda _{43}\right) \cos (\phi_0 )\ , \ \ \ \ \ g_3= \frac{8\hat g^2}{9} \cosh \left(\frac{3(\Lambda _{12}+\Lambda _{43})}{2} \right) (2 \cos (2 
\phi_0 )+1) \ .\nonumber \eeqa Rewriting this in terms of the angles using \eq{tophi} we obtain \begin{equation} L_{012}L_{043}\;g_1= 2 \hat g^2 \left(\frac{\cos \frac{\phi _{12}}{2} \cos \frac{\phi _{43}}{2}}{\cos ^2\frac{\phi _0}{2}}+\frac{\sin \frac{\phi _{12}}{2} \sin \frac{\phi _{43}}{2}}{\sin ^2\frac{\phi _0}{2}}\right)=C^{\bullet_1 \circ \circ}_{012} C^{\bullet_1 \circ \circ}_{043} +C^{\bullet_2 \circ \circ}_{012} C^{\bullet_2 \circ \circ}_{043} \ , \end{equation} where we used that there are only two states $n=1,2$ which converge to $\Delta=1$ at weak coupling. Furthermore, we can identify precisely $n=1$ and $n=2$, by using the fact that the $n=1$ state is associated with an odd state and thus should give an odd function in $\phi_{12}$. This results in \begin{equation} C^{\bullet_1 \circ \circ}_{012} =\pm \sqrt{2\hat g^2}\frac{\sin\frac{\phi_{12}}{2}}{\sin\frac{\phi_0}{2}}\;\;,\;\; C^{\bullet_2 \circ \circ}_{012} =\pm \sqrt{2\hat g^2}\frac{\cos\frac{\phi_{12}}{2}}{\cos\frac{\phi_0}{2}} \ , \end{equation} in complete agreement with our perturbative results \eq{C1m} and \eq{C1p} ! In the same way we find for the $L=2$ states \beqa C^{\bullet_3 \circ \circ}_{012} &=&\pm i \hat g \frac{\sin \phi _{12}}{\sin\phi_0} \sqrt{\cos \phi _0} \ ,\\ C^{\bullet_4 \circ \circ}_{012} &=&\pm i \hat g \frac{\sqrt{\cos \phi _0}}{\sin^2\phi _0} \left(\cos \phi _0 \cos \phi _{12}-1\right) \ , \eeqa in agreement with \eq{C2m} and \eq{C2p}. We also verified the $L=3$ states and reproduced expressions \eq{HLL3pw}, \eq{HLL3mw} given in Appendix \ref{app:exc}. We also notice that the term $h_0$ is indeed equal to $2\Delta_0^{(1)}$ i.e. the ground state energy at 1 loop. Finally, the expression $g_0$ can be compared with the HLL structure constant of three ground states, which reads at weak coupling \begin{equation} (C^{\bullet o o})_{L=0}=1+\hat g^2 F_{123}+\dots \end{equation} where $F_{123}$ is given explicitly by the lengthy formula \eq{F123}. 
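As an independent sanity check of the one-loop result \eq{G41loop} underlying these expansions, the dilogarithm expression can be compared numerically against the defining double integral at sample values of the parameters. A minimal sketch (assuming mpmath is available; all variable names are ours, not from this work):

```python
from mpmath import mp, mpf, mpc, exp, cos, cosh, sin, polylog, quad

mp.dps = 25
g2, phi0 = mpf('0.1'), mpf('0.7')                    # \hat g^2 and the angle phi_0
La1, La2, La3, La4 = map(mpf, ('0.3', '0.5', '0.4', '0.2'))

# left-hand side: the defining double integral over the two Wilson lines
integral = quad(lambda s, t: 2*g2/(cosh(s + t) + cos(phi0)),
                [-La4, La2], [-La1, La3])

# right-hand side: the dilogarithm combination of \eq{G41loop}
I = mpc(0, 1)
L12, L43 = La1 - La2, La4 - La3      # differences, as in \eq{L12def}
L23, L14 = La2 + La3, La1 + La4      # sums, note the different convention

def li2(z):
    return polylog(2, z)

dilog = (2*I*g2/sin(phi0)) * (
      li2(-exp(-I*phi0 + L12)) - li2(-exp(I*phi0 + L12))
    - li2(-exp(-I*phi0 - L23)) + li2(-exp(I*phi0 - L23))
    - li2(-exp(-I*phi0 + L14)) + li2(-exp(I*phi0 + L14))
    + li2(-exp(-I*phi0 + L43)) - li2(-exp(I*phi0 + L43)))

assert abs(integral - dilog) < mpf('1e-10')
```

The agreement holds for generic real $\Lambda_a$ and $0<\phi_0<\pi$, since the dilogarithm arguments then stay away from the branch cut on $[1,\infty)$.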
From the OPE (\ref{G1234delta4}) we expect that \begin{equation} g_0 = 1 + \hat g^2 \, \left(- \Delta_0^{(1)} \, \log{( L_{043} \, L_{012} )} + F_{012} + F_{043} \right)\ , \end{equation} and indeed our result \eq{resg0} for $g_0$ precisely matches this complicated expression! This is a nontrivial check of the OPE as well as of the HLL structure constant at 1 loop. \section{Conclusions} \label{sec:concl} Our main result is the all-loop computation of the expectation value of a Wilson line with three cusps, with a particular class of insertions at the cusps, in the ladders limit. We demonstrated that in terms of the q-functions it takes a very simple form, reminiscent of the SoV scalar product. The key ingredient in the construction is the bracket $\br\cdot$, which allows one to write the result in the very compact form \eq{correlator}. We also found a similar representation for the diagonal correlator of two cusps and the Lagrangian \eq{Cinsert}. This gives a clear indication that the Quantum Spectral Curve and the SoV approach should be able to provide an all-loop description of 3-point correlators. In order to generalise our results one could consider correlators with more complicated insertions, which should help to reveal more generally the structure of the SoV-type scalar product. We expect in this case that the bracket $\br\cdot$ will involve a product of several Q-functions: \begin{equation}\la{measureg} \br{q_1q_2}=\int \mu(u_1,\dots,u_L)q_1(u_1)\dots q_1(u_L) q_2(u_1)\dots q_2(u_L)du_1\dots du_L \end{equation} for some universal measure function $\mu$, which should not depend on the states, but could be a non-trivial function of the coupling\footnote{In fact $L$ itself may be nontrivial to define at finite coupling as states with different values of the charges can be linked by analytic continuation.}. It would also be important to extend the results obtained in this paper to the more general HHH configuration where all three effective couplings are nonzero. 
The form of our result \eq{correlator}, where the BPS cusp always appears with a different sign for the rapidity, suggests that in the most general case one of the Q-functions may need to be treated on a different footing than the other two. Therefore, the generalization to the HHH case may be nontrivial and reveal new important elements. Going away from the ladders limit (see e.g. \cite{Bykov:2012sc, Henn:2012qz}) could also give some hints about the measure in the complete ${\cal N}=4$ SYM theory and eventually lead to the solution of the planar theory. A potentially simpler problem is the fishnet theory \cite{Gurdogan:2015csr,Gromov:2017cja,Grabner:2017pgm}, where some $3-$ and $4-$point correlators were found explicitly and have a very similar form to the $\phi\to 0$ limit of our correlator. As they involve only conventional local operators, this is another natural setting for further developing our approach. It would also be interesting to consider the cusp in ABJM theory, for which the ladders limit was recently elucidated in \cite{Bonini:2016fnc}. It would also be useful to utilize the perturbative data from other approaches \cite{Escobedo:2010xs,Gromov:2012uv,Caetano:2014gwa,Basso:2015zoa,Basso:2017muf,Eden:2016xvg,Fleury:2016ykk} in order to guess the measure factor. Let us mention that our result incorporates all finite size corrections (in particular, the 2-point functions are given exclusively by wrapping contributions). These corrections are rather nontrivial to deal with in the hexagon~\cite{Basso:2015zoa} approach to the computation of correlators (see also \cite{Eden:2016xvg,Fleury:2016ykk,Bargheer:2017nne,Eden:2017ozn}). The diagonal correlators, which we studied numerically in this paper at any value of the coupling, have proven to be particularly hard in the hexagon formulation, which is known to be incomplete in this situation. Nevertheless, it would be interesting to draw parallels between the two approaches. 
The hexagon techniques could be especially helpful in generalising our results to longer states, where the wrapping corrections are suppressed by powers of the `t Hooft coupling. Another limit which would be interesting to consider is the near-BPS one. This could be either the small spin limit of twist-2 local operators or the $\phi\simeq\theta$ limit of the cusps. In both cases the analytic solutions of the QSC are known explicitly~\cite{Gromov:2013qga,Gromov:2014bva} (see also \cite{Gromov:2012eu}), which could be helpful in fixing the measure factor. In particular, at the leading order, the Q-functions $q(u)$ describing the excited states of a cusp are orthogonal on $[-2g,2g]$ with the measure $\mu(u)= \sinh(2\pi u)$~\cite{Gromov:2012eu,Gromov:2013qga,Sizov:2013joa}. It is not yet clear how this measure is related to our result, but there are some promising signs which we discuss in Appendix~\ref{app:bps}. Let us point out that the naive guess that this is the measure we need is not consistent in an obvious way with the structure expected from SoV \eq{measureg}, where we expect multiple integrations for the insertions of such scalars. It would be really interesting to compare with localisation methods, which are applicable in the near-BPS limit. Some preliminary results were reported recently~\cite{Giombi:2018qox} (see also \cite{Bonini:2015fng} for partial results for the spectrum). Let us also mention that often the measure can be bootstrapped from the orthogonality requirement; see \cite{Nsl2} for a higher-loop result in the $sl(2)$ sector. One could try this strategy too in order to find the measure in ${\cal N}=4$ SYM. As another new result, we understood the meaning of the bound states of the Schr\"odinger problem resulting from the Bethe-Salpeter resummation of ladder diagrams. 
They correspond to insertions of scalar operators of the same type as those on the Wilson lines\footnote{and of their derivatives, when there is more than one scalar inserted}, see \cite{Klebanov:2006jj} for a string theory interpretation. From the point of view of the Bethe-Salpeter equation, the excited states can be interpreted as resonances -- poles of the resolvent on the non-physical sheet, which can be reached by analytic continuation under the branch cut of the continuum. As such, they are hard to study analytically or numerically. In the QSC approach there is no continuous spectrum, and the bound states can be studied on a completely equal footing with the vacuum state. Moreover, they can easily be tracked away from the ladders limit and should still correspond to scalar insertions. In addition, we showed that our results for the 3-cusp correlators immediately generalize to the case with these scalar insertions. Our result opens the way to efficiently studying the cusp with scalar insertions at arbitrary values of $\theta$ using the powerful QSC methods, both analytically and numerically. We already found the first few orders in the weak and strong coupling expansions of the energies of excited states in the ladders limit. The result at $1$ loop for the first excited state matches the known 1-loop prediction \cite{Alday:2007he} (assuming it is not changed in the ladders limit). It would also be important to further investigate the OPE picture we presented in section \ref{sec:ope}. In order to reveal more structure for higher point correlators it would be very useful to find a compact way to perform the spectral sums appearing in the OPE. Recent results of \cite{Gross:2017aos} for the SYK model suggest that this could be feasible, at least in the ladders limit. One could also explore the applicability of modern conformal bootstrap techniques \cite{Rattazzi:2008pe,ElShowk:2012ht} for the OPE expansion we considered. 
Finally, the structure of our OPE expansion is very reminiscent of the one for null polygonal Wilson loops \cite{Alday:2010ku}, and it could be useful to explore this analogy. \section*{Acknowledgements} We thank N.~Drukker, D.~Grabner, V.~Kazakov, E.~Sobko, A.~Sever, A.~Tseytlin, A.~Tumanov and K.~Zarembo for related discussions. We are especially grateful to A.~Pushnitsky and F.~Smirnov for inspirational comments, and to S.~Komatsu for sharing the manuscript of \cite{Kim:2017sju} before publication. F.~L.-M. was supported by LabEX ENS-ICFP: ANR-10-LABX-0010/ANR-10-IDEX-0001-02 PSL*. A.C. was supported by the STFC grant (ST/P000258/1) “Fundamental Physics from the Planck Scale to the LHC”. N.G. wishes to thank STFC for support from Consolidated grant number ST/J002798/1.
\section{Introduction} The problem of a rotating object in a gravitational field is essentially more practical than viewing objects as mere test particles, whose intrinsic properties are ignored in the orthodox General Theory of Relativity. Accordingly, several attempts were made in the last century, started by Mathisson [1], followed by Papapetrou [2], and extended by Dixon [3] to include other non-gravitational fields, e.g. electromagnetic ones. There is also the Dixon-Souriau approach, which includes the spinning motion and magnetic moment of charged objects [4]. All of these detailed equations have been presented only in Riemannian geometry. This raises the following question: \\ What is the situation of the above-mentioned particles in the case of non-Riemannian geometries?\\ To answer this question, one must treat the non-Riemannian geometries as individual cases. One of the special classes is Riemann-Cartan geometry, which considers the tetrad field $\h{i}^{\mu}$ as carrying two independent kinds of indices: the holonomic coordinates, labeled by Greek indices and responsible for general coordinate transformations (GCT), and the anholonomic coordinates, labeled by Latin indices and used to express local Lorentz transformations (LLT), mainly describing the internal properties of the object [5]. This line of work has led many authors [6-8] to relate this type of geometry to gauge theories of gravity [9], in which the tetrad plays the role of a translational gauge field and the spin connection represents gauge rotations [10-12].\\ Another trend views non-Riemannian geometry as teleparallel geometry: a geometry whose building blocks are tetrads, which may represent a translational gauge with vanishing curvature [13], treating the anholonomic indices as mere vector labels. 
This tendency of neutralizing the role of the anholonomic coordinates, reducing them to vector labels, has a further consequence: one may define non-vanishing torsion and curvature tensors simultaneously, by means of the different types of absolute derivatives\footnote{For more details about the underlying geometry and its application in establishing a generalized field theory see [14-16]}. The resulting notion of AP-geometry led Wanas et al. (1995) to describe three different paths that may play the role of the geodesic of Riemannian geometry [17]. The striking feature of these paths is that the coefficient of their torsion term jumps by a step of $\frac{1}{2}$ from one path to the next. This gives the impression that paths in this type of geometry are naturally quantized. Later, Wanas (1998) obtained a parameterized absolute parallelism [PAP] geometry featuring a spin-torsion interaction, together with non-vanishing curvature and torsion tensors defined simultaneously [18]. The existence of such an interaction led Wanas et al. to seek its presence by examining the discrepancy between theory and observation for thermal neutrons [19] and by presenting a temporal model for SN1987A [20]. Accordingly, in the present work we obtain the analogue of the Papapetrou equation with precession in the context of AP-geometry. This will enable us to examine the effect of the different absolute derivatives on the interaction between the torsion and spin tensors. \\ The paper is organized as follows: section 2 displays the relationship between the spin tensor and the geodesic and geodesic deviation vectors in Riemannian geometry; section 3 extends this relationship to paths and path deviation vectors with their corresponding spin tensors in AP-geometry; section 4 deals with the Lagrangian formalism of the Papapetrou equation in AP-geometry; and finally, section 5 discusses the results obtained in the previous sections, together with some recommendations for our future work on this approach. 
\section{Motion in Riemannian Geometry} \subsection{Geodesic and Geodesic Deviation: The Bazanski Approach} The geodesic and geodesic deviation equations of Riemannian geometry are required to examine many problems of motion of test particles in gravitational fields. This has led many authors to derive them by various methods; one of the most applicable is the Bazanski approach [21], in which one single Lagrangian yields simultaneously the geodesic and geodesic deviation equations, as follows: \begin{equation} L = g_{\mu\nu} U^{\mu} \frac{D \Psi^{\nu}}{D s} \end{equation} where $g_{\mu \nu}$ is the metric tensor, $U^{\mu}$ is the unit tangent vector of the path whose parameter is $s$, $\Psi^{\nu}$ is the deviation vector associated with the path, and $\frac{D}{Ds}$ is the covariant derivative with respect to the parameter $s$. Applying the Euler-Lagrange equation by taking the variation with respect to the deviation vector, \begin{equation} \frac{d}{ds} \frac{\partial L}{\partial \dot{\Psi}^{\mu}}- \frac{\partial L}{\partial {\Psi}^{\mu}} =0 \end{equation} we obtain the geodesic equation \begin{equation} \frac{D U^{\mu}}{D s} = 0 \end{equation} while taking the variation with respect to the unit vector $U^{\mu}$, \begin{equation} \frac{d}{ds} \frac{\partial L}{\partial{U}^{\mu}}- \frac{\partial L}{\partial {x}^{\mu}} =0 \end{equation} yields the geodesic deviation equation \begin{equation} \frac{D^2 {\Psi}^{\mu}}{D s^2} = R^{\mu}_{\nu \rho \sigma} U^{\nu}U^{\rho}\Psi^{\sigma} \end{equation} where $ R^{\mu}_{\nu \rho \sigma}$ is the Riemann-Christoffel tensor. 
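As a concrete illustration of equation (2.3) (our own toy example, not part of this work), one can verify symbolically that a straight line in the Euclidean plane, written in polar coordinates, satisfies the geodesic equation $\ddot{x}^{\mu} + \Gamma^{\mu}_{\nu\rho}\dot{x}^{\nu}\dot{x}^{\rho} = 0$ that follows from the Bazanski Lagrangian:

```python
import sympy as sp

s = sp.symbols('s', positive=True)
r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.diag(1, r**2)     # flat metric in polar coordinates: ds^2 = dr^2 + r^2 dth^2
ginv = g.inv()

def Gamma(m, n, p):
    """Christoffel symbols of the second kind for the metric g."""
    return sum(ginv[m, a]*(sp.diff(g[a, n], x[p]) + sp.diff(g[a, p], x[n])
                           - sp.diff(g[n, p], x[a]))/2 for a in range(2))

# The straight line y = 1, x = s, in polar coordinates; s is arclength (affine).
rs, ths = sp.sqrt(s**2 + 1), sp.atan(1/s)
vel = [sp.diff(rs, s), sp.diff(ths, s)]

# Residuals of the geodesic equation for both coordinates; they must vanish.
residuals = [sp.diff([rs, ths][m], s, 2)
             + sum(Gamma(m, n, p).subs(r, rs)*vel[n]*vel[p]
                   for n in range(2) for p in range(2))
             for m in range(2)]

for geo in residuals:
    for val in (sp.Rational(1, 2), 2, 7):
        assert abs(sp.N(geo.subs(s, val), 30)) < 1e-20
```

The same machinery, with a curved metric substituted for `g`, gives the nontrivial Christoffel terms entering the deviation equation (2.5).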
\subsection{On The Relation Between the Spin Tensor and The Deviation Vector: The Riemannian Case} The equations of spinning motion, in the case $P^{\alpha}=mU^{\alpha}$, can be related to the geodesic equation through the following transformation [22]: \begin{equation} \bar{U}^{\mu} = U^{\mu} + \beta \frac{D \Psi^{\mu}}{Ds}, \end{equation} where $\bar{U}^{\mu}$ is the unit tangent vector with respect to the parameter $\bar{s}$, i.e. $\bar{U}^{\mu} = \frac{d x^{\mu}}{d \bar{s}}$. Taking the covariant derivative of both sides, one obtains \begin{equation} \frac{D \bar{U}^{\alpha}}{D \bar{s}}= \frac{D}{Ds}\Big(U^{\alpha} + \beta \frac{D \Psi^{\alpha}}{Ds}\Big)\frac{ds}{d \bar{s}}. \end{equation} The geodesic and geodesic deviation equations give \begin{equation} \frac{D U^{\alpha}}{Ds} =0 \end{equation} and \begin{equation} \frac{D^{2} \Psi^{\alpha}}{Ds^{2}} =R^{\alpha}_{\mu \nu \sigma} U^{\mu}U^{\nu}\Psi^{\sigma}. \end{equation} Substituting equations (2.8) and (2.9) in (2.7), we get \begin{equation} \frac{D \bar{U}^{\alpha}}{D \bar{s}}= \beta\, R^{\alpha}_{\mu \nu \sigma}U^{\mu} U^{\nu} \Psi^{\sigma}\frac{ds}{d \bar{s}}. \end{equation} Let us now take $\beta = \frac{s}{m}$ and define the spin tensor \begin{equation} S^{\mu \nu} = s \left(U^{\mu }\Psi^{\nu}-U^{\nu }\Psi^{\mu}\right). \end{equation} Since the Riemann tensor is antisymmetric in its last two indices, $R^{\alpha}_{\mu \nu \sigma}S^{\nu \sigma} = 2s\, R^{\alpha}_{\mu \nu \sigma}U^{\nu}\Psi^{\sigma}$, so that \begin{equation} \frac{D \bar{U}^{\alpha}}{D \bar{s}}=\frac{1}{2m} R^{\alpha}_{\mu \nu \sigma}U^{\mu} S^{\nu \sigma}\frac{ds}{d \bar{s}}, \end{equation} i.e., taking $\frac{ds}{d\bar{s}}=1$ and working to first order in $\beta$, \begin{equation} \frac{D \bar{U}^{\alpha}}{D \bar{s}}= \frac{1}{2m} R^{\alpha}_{\mu \nu \sigma} S^{\nu \sigma} \bar{U}^{\mu}, \end{equation} which is the Papapetrou equation, for short.
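The step from eq. (2.10) to eq. (2.12) uses only the antisymmetry of the Riemann tensor in its last index pair. The quick numerical sanity check below (Python with numpy; the random tensors are hypothetical placeholders carrying the required symmetry, not physical data) confirms that $\beta R^{\alpha}{}_{\mu\nu\sigma}U^{\mu}U^{\nu}\Psi^{\sigma} = \frac{1}{2m}R^{\alpha}{}_{\mu\nu\sigma}S^{\nu\sigma}U^{\mu}$ once $S^{\mu\nu}=s(U^{\mu}\Psi^{\nu}-U^{\nu}\Psi^{\mu})$ and $\beta=s/m$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
m, s = 2.0, 0.3          # hypothetical mass and spin magnitude
beta = s / m

# Random stand-in for R^a_{mns}, antisymmetrized in its last two indices
R = rng.normal(size=(n, n, n, n))
R = R - R.transpose(0, 1, 3, 2)

U = rng.normal(size=n)    # tangent vector
Psi = rng.normal(size=n)  # deviation vector
S = s*(np.outer(U, Psi) - np.outer(Psi, U))  # spin tensor, eq. (2.11)

lhs = beta*np.einsum('amns,m,n,s->a', R, U, U, Psi)  # rhs of eq. (2.10)
rhs = np.einsum('amns,ns,m->a', R, S, U)/(2.0*m)     # rhs of eq. (2.12)
print(np.allclose(lhs, rhs))  # True
```

Because only the index symmetry is used, the agreement holds for any dimension and any tensor antisymmetric in its last pair of indices.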
\subsection{Lagrangian Formalism of the Spinning Equations} Another way to derive the Papapetrou equation is to apply the action principle to the following Lagrangian [23]: \begin{equation} L = g_{\mu\nu} \bar{U}^{\mu} \frac{D \bar{\Psi}^{\nu}}{D \bar{s}} + \frac{1}{2m} R_{\mu \nu \rho \sigma} S^{\rho \sigma} U^{\nu}\Psi^{\mu}. \end{equation} Taking the variation with respect to the deviation vector $\bar{\Psi}^{\alpha}$, we obtain equation (2.13); taking the variation with respect to $\bar{U}^{\alpha}$, after some manipulation, we get the corresponding spinning deviation equation \begin{equation} \frac{D^{2} \bar{\Psi}^{\alpha}}{D \bar{s}^{2}} = R^{\alpha}_{\beta \gamma \delta} \bar{U}^{\beta}\bar{U}^{\gamma}\bar{\Psi}^{\delta}+ \frac{1}{2m}(R^{\alpha}_{\beta \gamma \sigma} S^{\gamma \sigma}U^{\beta})_{; \delta}\bar{\Psi}^{\delta}. \end{equation} Thus the Euler-Lagrange equations of this Bazanski-like Lagrangian reproduce equation (2.13) together with its corresponding deviation equation. \subsection{Spinning and Spinning Deviation Equations Without Precession} The Papapetrou equation of a spinning object [2] is obtained from a modified Bazanski Lagrangian [23]: $$ L= g_{\alpha \beta} \left( m U^{\alpha} + U_{\gamma}\frac{D S^{\alpha \gamma}}{Ds}\right) \frac{D \Psi^{\beta}}{Ds} + \frac{1}{2} R_{\alpha \beta \gamma \delta} S^{\gamma \delta} U^{\beta} \Psi^{\alpha}. $$ The equation of motion of the spinning object follows by taking the variation with respect to the deviation vector $\Psi^{\alpha}$, \begin{equation} \frac{D}{Ds}\left( m U^{\alpha} + U_{\beta}\frac{D S^{\alpha \beta}}{Ds}\right)= \frac{1}{2} R^{\alpha}_{. \mu \nu \rho} S^{\rho \nu} U^{\mu}, \end{equation} and its deviation equation is obtained by taking the variation with respect to $U^{\alpha}$: $$ \frac{D^{2}\Psi^{\alpha}}{Ds^{2}}= R^{\alpha}_{.\mu \nu\rho}U^{\mu}\left( m U^{\nu} + U_{\beta}\frac{D S^{\nu \beta}}{Ds}\right)\Psi^{\rho}+ g^{\alpha \sigma}g_{\nu \lambda}\left( m U^{\lambda} + U_{\beta}\frac{D S^{\lambda \beta}}{Ds}\right)_{; \sigma} \frac{D \Psi^{\nu}}{Ds} $$ \begin{equation} ~~~~~~~+ \frac{1}{2}\left(R^{\alpha}_{. \mu \nu \rho} S^{\nu \rho} \frac{D \Psi^{\mu}}{Ds}+ R^{\alpha}_{\mu \nu \lambda}S^{\nu \lambda}_{.; \rho}U^{\mu}\Psi^{\rho} + R^{\alpha}_{\mu \nu \lambda; \rho }S^{\nu \lambda} U^{\mu} \Psi^{\rho}\right) . \end{equation} \subsection{Spinning and Spinning Deviation Equations with Precession} Equations of spinning charged objects in the presence of a gravitational field have been studied extensively [23]. This leads us to suggest a corresponding Lagrangian formalism, using a modified Bazanski Lagrangian [25], for a spinning and precessing object and its deviation equations in Riemannian geometry: \begin{equation} L= g_{\alpha \beta} P^{\alpha} \frac{D \Psi^{\beta}}{Ds} + S_{\alpha \beta}\, \frac{D \Psi^{\alpha \beta}}{Ds}+ F_{\alpha}\Psi^{\alpha}+ M_{\alpha \beta}\Psi^{\alpha \beta}, \end{equation} where \begin{equation} P^{\alpha}= m U^{\alpha}+ U_{\beta} \frac{D S^{\alpha \beta}}{Ds}.
\end{equation} Taking the variation with respect to $\Psi^{\mu}$ and $\Psi^{\mu \nu}$ simultaneously, we obtain \begin{equation} \frac{DP^{\mu}}{Ds}= F^{\mu}, \end{equation} and \begin{equation} \frac{DS^{\mu \nu}}{Ds}= M^{\mu \nu}, \end{equation} where $P^{\mu}$ is the momentum vector, $$ F^{\mu} = \frac{1}{2} R^{\mu}_{\nu \rho \delta} S^{\rho \delta} U^{\nu},$$ $R^{\alpha}_{\beta \rho \sigma}$ is the Riemann curvature tensor, $\frac{D}{Ds}$ is the covariant derivative with respect to the parameter $s$, $S^{\alpha \beta}$ is the spin tensor, \begin{equation} M^{\mu \nu} =P^{\mu}U^{\nu}- P^{\nu}U^{\mu}, \end{equation} and $U^{\mu}$ is the unit vector tangent to the geodesic. \\ The corresponding deviation equations follow by applying to both equations (2.20) and (2.21) the identity \begin{equation} A^{\mu}_{; \nu \rho} - A^{\mu}_{; \rho \nu} = R^{\mu}_{\beta \nu \rho} A^{\beta}, \end{equation} where $A^{\mu}$ is an arbitrary vector, multiplying both sides by $U^{\rho} \Psi^{\nu}$, and using the condition [26] \begin{equation} U^{\alpha}_{; \rho} \Psi^{\rho} = \Psi^{\alpha}_{; \rho } U^{\rho}, \end{equation} where $\Psi^{\alpha}$ is the deviation vector associated with the unit tangent vector $U^{\alpha}$.
Also, in a similar way, \begin{equation} S^{\alpha \beta}_{; \rho} \Psi^{\rho} = \Psi^{\alpha \beta}_{; \rho } U^{\rho}, \end{equation} one obtains the corresponding deviation equations [27] \begin{equation} \frac{D^2 \Psi^{\mu}}{Ds^2}= R^{\mu}_{\nu \rho \sigma}P^{\nu} U^{\rho} \Psi^{\sigma}+ F^{\mu}_{; \rho} \Psi^{\rho}, \end{equation} and \begin{equation} \frac{D^2\Psi^{\mu \nu}}{Ds^2}= S^{\rho [ \mu} R^{\nu ]}_{\rho \sigma \epsilon} U^{\sigma} \Psi^{\epsilon} + M^{\mu \nu}_{; \rho} \Psi^{\rho}. \end{equation} \section{Motion in AP-Geometry} \subsection*{A Brief Introduction to AP-Space} The structure of this space is completely defined by a set of $n$ contravariant vector fields $\h{i}^{\mu}$, where $i\,(=1,2,3,\dots,n)$ denotes the vector number and $\mu\,(=1,2,3,\dots,n)$ denotes the coordinate component. The covariant components $\h{i}_{\mu}$, defined as the normalized cofactors of $\h{i}^{\mu}$ in the determinant $|| \h{i}^{\mu} ||$, satisfy\footnote{For more detail see [13-16].} $$ \h{i}^{\mu}\h{j}_{\mu} = \delta_{ij}, $$ $$ \h{i}^{\mu}\h{i}_{\nu} = \delta^{\mu}_{\nu}. $$ Using these vectors, the following second order symmetric tensors are defined: $$ g^{\mu \nu} \edf \h{i}^{\mu} \h{i}^{\nu} , $$ $$ g_{\mu \nu} \edf \h{i}_{\mu} \h{i}_{\nu} , $$ from which one can define Christoffel symbols and covariant derivatives in the usual manner. The following third order tensor, the contortion tensor, is defined as $$ \gamma^{\alpha}_{. \mu \nu} \edf \h{i}^{\alpha} \h{i}_{\mu ; \nu}, $$ which is non-symmetric in its last two indices $\mu , \nu$. It can be shown that $\gamma_{\alpha \mu \nu}$ is skew-symmetric in its first two indices.\\ {\bf{The AP-Condition}} \\ $$ \h{i}^{\nu}_{| \stackrel{\mu}{+}} =0, $$ where $| \stackrel{\mu}{+}$ denotes the absolute (+)ve derivative, built from the non-symmetric affine connection $\Gamma^{\alpha}_{\mu \nu}$ defined by $$ \Gamma^{\alpha}_{\mu \nu}= \h{i}^{\alpha} \h{i}_{\mu ,\nu}.
$$ \subsection*{(i) Paths and Path Deviation Equations Subject to the (+)ve Derivative} Path and path deviation equations are the counterparts in AP-geometry of the geodesic and geodesic deviation equations; accordingly, one obtains different trajectories depending on the type of absolute derivative used [17]. From this perspective, it has been found that the Bazanski Lagrangian is a good candidate to express these trajectories: \begin{equation} L = g_{\mu \nu} V^{\mu} \frac{\nabla \Phi^{\nu}}{\nabla S^{+}}, \end{equation} where $$\frac{\nabla \Phi^{\alpha}}{\nabla S^{+}} = \frac{d \Phi^{\alpha}}{ d S^{+}} + \Gamma^{\alpha}_{\mu \nu} \Phi^{\mu} V^{\nu}.$$ Thus, taking the variation with respect to $\Phi^{\mu}$ and implementing the AP-condition, which implies \begin{equation} g_{\stackrel{\mu}{+} \stackrel{\nu}{+} | \sigma} \equiv 0, \end{equation} one finds the following path equation: \begin{equation} \frac{\nabla V^{\mu}}{\nabla S^{+}} = 0. \end{equation} Its associated deviation equation can be derived by applying the commutation relation \begin{equation} A^{\mu}_{| \stackrel{\nu}{+}\stackrel{\rho}{+} } - A^{\mu}_{|\stackrel{\rho}{+}\stackrel{\nu}{+} } = M^{\mu}_{\sigma \nu \rho} A^{\sigma}+ \Lambda ^{\sigma}_{\nu \rho} A^{\mu}_{|\stackrel{\sigma}{+} }, \end{equation} together with the condition \begin{equation} V^{\alpha}_{| \stackrel{\rho}{+}}\Phi^{\rho} = \Phi^{\alpha}_{| \stackrel{\rho}{+}}V^{\rho}, \end{equation} and taking into account that the associated curvature tensor vanishes identically, \begin{equation} M^{\mu}_{\sigma \nu \rho} \equiv 0. \end{equation} Substituting in (3.30), one obtains the corresponding deviation equation \begin{equation} \frac{\nabla^{2} \Phi^{\alpha}}{\nabla S^{2+}} = \Lambda^{\rho}_{\mu \nu} V^{\mu} \Phi^{\nu} V^{\alpha}_{| \stackrel {\rho}{+}}.
\end{equation} \subsection*{(ii) Paths and Path Deviation Equations Subject to the (0)ve Derivative} For the (0)ve derivative one can derive the associated path and path deviation equations using the following Lagrangian [17]: \begin{equation} L = g_{\mu \nu} W^{\mu} \frac{\hat{\nabla} \eta^{\nu}}{\hat{\nabla}S^{0}}, \end{equation} where $$\frac{\hat{\nabla} \eta^{\alpha}}{\hat{\nabla} S^{0}} = \frac{d \eta^{\alpha}}{ d S^{0}} + \Gamma^{\alpha}_{(\mu \nu)} \eta^{\mu} W^{\nu} .$$ Thus, taking the variation with respect to $\eta^{\mu}$, provided that \begin{equation} g_{\stackrel{\mu}{0}\stackrel{\nu}{0}| \sigma } = \Lambda_{(\mu \nu) \sigma}, \end{equation} one obtains the corresponding path equation: \begin{equation} \frac{\hat{\nabla} W^{\mu}}{\hat{\nabla} S^{0}} = \frac{1}{2} \Lambda^{~~ \mu}_{(\nu \rho).} W^{\nu}W^{\rho}. \end{equation} Using the commutation relation \begin{equation} A^{\mu}_{| \stackrel{\nu}{0}\stackrel{\rho}{0} } - A^{\mu}_{|\stackrel{\rho}{0}\stackrel{\nu}{0} } = L^{\mu}_{\sigma \nu \rho} A^{\sigma} + \Lambda ^{\sigma}_{\nu \rho} A^{\mu}_{|\stackrel{\sigma}{0} }, \end{equation} together with the condition \begin{equation} W^{\mu}_{| \stackrel{\rho}{0}} \eta^{\rho} = \eta^{\mu}_{| \stackrel{\rho}{0}} W^{\rho}, \end{equation} substituted in (3.37), and noting that the associated curvature tensor, \begin{equation} L^{\mu}_{\sigma \nu \rho} \neq 0, \end{equation} is non-vanishing, one obtains the corresponding deviation equation \begin{equation} \frac{\hat{\nabla}^{2} \eta^{\alpha}}{\hat{\nabla} S^{2(0)}} = \frac{1}{2} (\Lambda^{.~.~ \alpha}_{\mu \nu}W^{\mu}W^{\nu})_{| \stackrel{\rho}{0}}\eta^{\rho}+ L^{\alpha}_{\beta \rho \sigma} W^{\beta} W^{\rho} \eta^{\sigma} + \Lambda^{\rho}_{\mu \nu} W^{\mu} \eta^{\nu}\eta^{\alpha}_{| \stackrel{\rho}{0}}. \end{equation} \subsection*{(iii) Paths and Path Deviation Equations Subject to the (-)ve Derivative} Following the same approach as in items (i) and (ii), one may derive the path and path deviation equations
associated to the (-)ve derivative, by introducing the following Lagrangian [17]: \begin{equation} L = g_{\mu \nu} J^{\mu} \frac{\tilde{\nabla} \zeta^{\nu}}{\tilde{\nabla}S^{-}}, \end{equation} such that $$ \frac{\tilde{\nabla} \zeta^{\nu}}{\tilde{\nabla}S^{-}} = \frac{d \zeta^{\nu}}{d S^{-}}+ \tilde{\Gamma}^{\nu}_{ \mu \sigma} \zeta^{\mu} J^{\sigma}.$$ Accordingly, taking the variation with respect to $\zeta^{\mu}$, and provided that [16] \begin{equation} g_{\stackrel{\mu}{-} \stackrel{\nu}{-} | \sigma} = 2\Lambda_{(\mu \nu) \sigma}, \end{equation} one gets the corresponding path equation \begin{equation} \frac{\tilde{\nabla} J^{\mu}}{\tilde{\nabla}S^{-}} = \Lambda_{(\alpha \beta)}^{~.~ \mu} J^{\alpha} J^{\beta} . \end{equation} Also, in order to derive its corresponding path deviation equation, one must take into account the commutation relation \begin{equation} A^{\mu}_{| \stackrel{\nu}{-}\stackrel{\rho}{-} } - A^{\mu}_{| \stackrel{\rho}{-}\stackrel{\nu}{-} } = N^{\mu}_{\sigma \nu \rho} A^{\sigma} + \Lambda ^{\sigma}_{\nu \rho} A^{\mu}_{|\stackrel{\sigma}{-} }, \end{equation} together with the condition \begin{equation} J^{\mu}_{| \stackrel{\rho}{-}} \zeta^{\rho} = \zeta^{\mu}_{| \stackrel{\rho}{-}} J^{\rho}, \end{equation} noting that the associated curvature tensor, \begin{equation} N^{\mu}_{\sigma \nu \rho} \neq 0, \end{equation} is non-vanishing. Thus we derive the corresponding path deviation equation \begin{equation} \frac{\tilde{\nabla}^{2} \zeta^{\alpha}}{\tilde{\nabla}S^{2-}} = N^{\alpha}_{\beta \rho \sigma} J^{\beta} J^{\rho} \zeta^{\sigma} + \Lambda^{\rho}_{\mu \nu} J^{\mu} \zeta^{\nu}\zeta^{\alpha}_{| \stackrel{\rho}{-}}. \end{equation} \subsection{On the Relation Between the Spin Tensor and the Deviation Vector: AP-Geometry} In this part, we extend the relationship obtained in subsection (2.2) to derive the corresponding spin equations and their spin deviation equations.
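The defining algebra of the AP-space introduced above — a metric built from a tetrad, a Weitzenböck-type connection $\Gamma^{\alpha}_{\mu\nu}=\h{i}^{\alpha}\h{i}_{\mu,\nu}$ satisfying the AP-condition, non-vanishing torsion $\Lambda^{\alpha}_{\mu\nu}$, yet identically vanishing curvature $M^{\mu}_{\sigma\nu\rho}$ — can be verified symbolically. The sketch below (Python with sympy) uses a hypothetical tetrad for the flat plane in polar coordinates; the tetrad choice is an illustrative assumption, not taken from the paper.

```python
import sympy as sp

# Hypothetical AP tetrad for the flat plane in polar coordinates (r, phi):
# an illustrative assumption, not a tetrad appearing in the paper.
r, ph = sp.symbols('r phi', positive=True)
x = [r, ph]
h = sp.Matrix([[1, 0], [0, r]])  # h[i, mu]: covariant tetrad components
hinv = h.inv().T                 # hinv[i, mu]: contravariant components h_i^mu
n = 2

# Metric reconstructed from the tetrad: g_{mu nu} = h_{i mu} h_{i nu}
g = sp.simplify(h.T * h)
print(g)  # the flat metric diag(1, r**2) in polar coordinates

# Weitzenboeck-type connection: Gamma^a_{m nu} = h_i^a d_nu h_{i m}
Gam = [[[sp.simplify(sum(hinv[i, a]*sp.diff(h[i, m], x[nu]) for i in range(n)))
         for nu in range(n)] for m in range(n)] for a in range(n)]

# Torsion Lambda^a_{m nu} = Gamma^a_{m nu} - Gamma^a_{nu m}: non-vanishing here
print(sp.simplify(Gam[1][1][0] - Gam[1][0][1]))  # 1/r

# Curvature of this connection: vanishes identically (the AP property)
def M(a, b, c, d):
    expr = sp.diff(Gam[a][b][d], x[c]) - sp.diff(Gam[a][b][c], x[d]) \
         + sum(Gam[a][e][c]*Gam[e][b][d] - Gam[a][e][d]*Gam[e][b][c]
               for e in range(n))
    return sp.simplify(expr)

print(all(M(a, b, c, d) == 0 for a in range(n) for b in range(n)
          for c in range(n) for d in range(n)))  # True
```

The same script with any other invertible tetrad field illustrates the general feature used throughout this section: torsion may survive while the AP curvature $M^{\mu}_{\sigma\nu\rho}$ is identically zero.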
\subsection*{(+)ve Derivative} The equations of spinning motion, in the case $P_{+}^{\alpha}=mV^{\alpha}$, can be related to the path equation through the following transformation: \begin{equation} \bar{V}^{\mu} = V^{\mu} + \beta \frac{\nabla \Phi^{\mu}}{\nabla s^{+}}, \end{equation} where $\bar{V}^{\mu}$ is the unit tangent vector with respect to the parameter $\bar{s}^{+}$, i.e. $\bar{V}^{\mu} = \frac{d x^{\mu}}{d \bar{s}^{+}}$. Taking the covariant derivative of both sides, one obtains \begin{equation} \frac{ \nabla \bar{V}^{\alpha}}{\nabla \bar{s}^{+}}= \frac{\nabla}{\nabla s^{+}}\Big(V^{\alpha} + \beta \frac{\nabla \Phi^{\alpha}}{\nabla {s}^{+}}\Big)\frac{ds}{d \bar{s}}. \end{equation} Substituting equations (3.30) and (3.34) in (3.50), we get \begin{equation} \frac{\nabla \bar{V}^{\alpha}}{\nabla \bar{s}^{+}}= \beta\, \Lambda^{\rho}_{\nu \sigma} V^{\nu} \Phi^{\sigma}\, V^{\alpha}_{| \stackrel{\rho}{+} } \frac{ds}{d \bar{s}}. \end{equation} Taking $\beta = \frac{s}{m}$ and defining \begin{equation} \bar{S}^{\mu \nu} = s \left(V^{\mu }\Phi^{\nu}-V^{\nu }\Phi^{\mu}\right), \end{equation} we get \begin{equation} \frac{\nabla \bar{V}^{\alpha}}{\nabla \bar{s}^{+}}=\frac{1}{2m}\, \Lambda^{\rho}_{\nu \sigma}\, \bar{S}^{\nu \sigma}\, V^{\alpha}_{| \stackrel{\rho}{+}} \frac{ds}{d \bar{s}}, \end{equation} i.e. \begin{equation} \frac{\nabla \bar{V}^{\alpha}}{ \nabla \bar{s}^{+}}= \frac{1}{2m}\, \Lambda^{\rho}_{\nu \sigma}\, \bar{S}^{\nu \sigma}\, \bar{V}^{\alpha}_{| \stackrel{\rho}{+}}, \end{equation} which is the version of the Papapetrou equation for the (+)ve derivative.
\subsection*{(0)ve Derivative} The equations of spinning motion, in the case $P_{(0)}^{\alpha}=mW^{\alpha}$, can be related to the path equation through the following transformation: \begin{equation} \bar{W}^{\mu} = W^{\mu} + \beta \frac{\hat{\nabla} \eta^{\mu}}{\hat{\nabla}\bar{s}^{(0)}}, \end{equation} where $\bar{W}^{\mu}$ is the unit tangent vector with respect to the parameter $\bar{s}^{(0)}$, i.e. $\bar{W}^{\mu} = \frac{d x^{\mu}}{d \bar{s}^{(0)}}$. Taking the covariant derivative of both sides, one obtains \begin{equation} \frac{\hat{\nabla} \bar{W}^{\alpha}}{\hat{\nabla} \bar{s}^{(0)}}= \frac{\hat{\nabla}}{\hat{\nabla} s^{(0)}}\Big(W^{\alpha} + \beta \frac{\hat{\nabla} \eta^{\alpha}}{\hat{\nabla} {s}^{(0)}}\Big)\frac{ds}{d \bar{s}}. \end{equation} Substituting equations (3.37) and (3.41) in (3.56), we get \begin{equation} \frac{\hat{\nabla} \bar{W}^{\alpha}}{\hat{\nabla} \bar{s}^{(0)}}=\Big( \frac{1}{2} \Lambda^{~.~. \alpha }_{\mu \nu} W^{\mu}W^{\nu} + \beta \big[ L^{\alpha}_{ \beta \gamma \delta} W^{\beta}W^{\gamma}\eta^{\delta} + \Lambda^{\rho}_{\nu \sigma} W^{\nu} \eta^{\sigma}\, W^{\alpha}_{| \stackrel{\rho}{(0)} } \big] \Big)\frac{ds}{d \bar{s}}. \end{equation} Now, let us take $\beta = \frac{s}{m}$ and define \begin{equation} \hat{S}^{\mu \nu} = s \left(W^{\mu }\eta^{\nu}-W^{\nu }\eta^{\mu}\right). \end{equation} Thus, we get \begin{equation} \frac{\hat{\nabla} \bar{W}^{\alpha}}{\hat{\nabla} \bar{s}^{(0)}}=\Big( \frac{1}{2} \Lambda^{~.~. \alpha }_{\mu \nu} W^{\mu}W^{\nu} + \frac{1}{2m}\big[ L^{\alpha}_{\mu \nu \sigma} W^{\mu} + \Lambda^{\rho}_{\nu \sigma} W^{\alpha}_{| \stackrel{\rho}{(0)}}\big] \hat{S}^{\nu \sigma}\Big) \frac{ds^{(0)}}{d \bar{s}^{(0)}}, \end{equation} i.e. \begin{equation} \frac{\hat{\nabla} \bar{W}^{\alpha}}{ \hat{\nabla} \bar{s}^{(0)}}= \frac{1}{2} \Lambda^{~.~. \alpha }_{\mu \nu} W^{\mu}\bar{W}^{\nu} + \frac{1}{2m} \big(L^{\alpha}_{\mu \nu \sigma} \bar{W}^{\mu} + \Lambda^{\rho}_{\nu \sigma} \bar{W}^{\alpha}_{| \stackrel{\rho}{(0)}}\big) \hat{S}^{\nu \sigma}.
\end{equation} If we regard $$ \frac{ds^{(0)}}{d \bar{s}^{(0)}} =1, $$ the previous equation becomes \begin{equation} \frac{\hat{\nabla} \bar{W}^{\alpha}}{ \hat{\nabla} \bar{s}^{(0)}}= \frac{1}{2} \Lambda^{~.~. \alpha }_{\mu \nu} \bar{W}^{\mu}\bar{W}^{\nu} + \frac{1}{2m} \big(L^{\alpha}_{\mu \nu \sigma} \bar{W}^{\mu} + \Lambda^{\rho}_{\nu \sigma} \bar{W}^{\alpha}_{| \stackrel{\rho}{(0)}}\big) \hat{S}^{\nu \sigma}, \end{equation} which is the version of the Papapetrou equation for the (0)ve derivative. \subsection*{(-)ve Derivative} The equations of spinning motion, in the case $P_{-}^{\alpha}=mJ^{\alpha}$, can be related to the path equation through the following transformation: \begin{equation} \bar{J}^{\mu} = J^{\mu} + \beta \frac{\tilde{\nabla} \zeta^{\mu}}{\tilde{\nabla}\bar{s}^{-}}, \end{equation} where $\bar{J}^{\mu}$ is the unit tangent vector with respect to the parameter $\bar{s}^{-}$, i.e. $\bar{J}^{\mu} = \frac{d x^{\mu}}{d \bar{s}^{-}}$. Taking the covariant derivative of both sides, one obtains \begin{equation} \frac{\tilde{\nabla} \bar{J}^{\alpha}}{\tilde{\nabla} \bar{s}^{-}}= \frac{\tilde{\nabla}}{\tilde{\nabla} s^{-}}\Big(J^{\alpha} + \beta \frac{\tilde{\nabla} \zeta^{\alpha}}{\tilde{\nabla} {s}^{-}}\Big)\frac{ds}{d \bar{s}^{-}}. \end{equation} Substituting equations (3.44) and (3.48) in (3.63), we get \begin{equation} \frac{\tilde{\nabla} \bar{J}^{\alpha}}{\tilde{\nabla} \bar{s}^{-}}=\Big( \Lambda^{~.~. \alpha }_{\mu \nu} J^{\mu}J^{\nu}+ \beta \big[ N^{\alpha}_{\beta \gamma \delta} J^{\beta} J^{\gamma} \zeta^{\delta} + \Lambda^{\rho}_{\nu \sigma} J^{\nu} \zeta^{\sigma}\, J^{\alpha}_{| \stackrel{\rho}{-} } \big] \Big)\frac{ds^{-}}{d \bar{s}^{-}}. \end{equation} Taking $\beta = \frac{s}{m}$ and defining \begin{equation} \tilde{S}^{\mu \nu} = s \left(J^{\mu }\zeta^{\nu}-J^{\nu }\zeta^{\mu}\right), \end{equation} we get \begin{equation} \frac{\tilde{\nabla} \bar{J}^{\alpha}}{\tilde{\nabla} \bar{s}^{-}}=\Big( \Lambda^{~.~. \alpha }_{\mu \nu} J^{\mu}J^{\nu}+ \frac{1}{2m} \big[ N^{\alpha}_{\mu \nu \sigma} J^{\mu} + \Lambda^{\rho}_{\nu \sigma} J^{\alpha}_{| \stackrel{\rho}{-}}\big] \tilde{S}^{\nu \sigma}\Big) \frac{ds^{(-)}}{d \bar{s}^{(-)}}, \end{equation} i.e. \begin{equation} \frac{\tilde{\nabla} \bar{J}^{\alpha}}{ \tilde{\nabla} \bar{s}^{-}}= \Lambda^{~.~. \alpha }_{\mu \nu} J^{\mu}\bar{J}^{\nu} + \frac{1}{2m} \big( N^{\alpha}_{\mu \nu \sigma} \bar{J}^{\mu} + \Lambda^{\rho}_{\nu \sigma} \bar{J}^{\alpha}_{| \stackrel{\rho}{-}}\big) \tilde{S}^{\nu \sigma}. \end{equation} If we regard $$ \frac{ds^{(-)}}{d \bar{s}^{(-)}} =1 , $$ the previous equation becomes \begin{equation} \frac{\tilde{\nabla} \bar{J}^{\alpha}}{ \tilde{\nabla} \bar{s}^{-}}= \Lambda^{~.~. \alpha }_{(\mu \nu)} \bar{J}^{\mu} \bar{J}^{\nu} + \frac{1}{2m} \big( N^{\alpha}_{\mu \nu \sigma} \bar{J}^{\mu} + \Lambda^{\rho}_{\nu \sigma} \bar{J}^{\alpha}_{| \stackrel{\rho}{-}}\big) \tilde{S}^{\nu \sigma}, \end{equation} which is the version of the Papapetrou equation for the (-)ve derivative. \section{Spinning and Spinning Deviation Equations in AP-Geometry: Lagrangian Formalism} From the previous results, we can now construct the corresponding Bazanski-type Lagrangians for each absolute derivative as follows. \subsection{Spinning and Spinning Deviation Equations Subject to the (+)ve Derivative} \subsection*{(i) The case $P_{+} = mV$} \begin{equation} L= g_{\mu \nu} V^{\mu}\frac{{\nabla} \Phi ^{\nu}}{{\nabla}S^{+}} + \bar{S}_{\mu \nu} \frac{{\nabla} \Phi^{\mu \nu}}{ {\nabla}S^{+}} + \frac{1}{2m} \Lambda^{\rho}_{\nu \sigma} {V}_{\mu | \stackrel{\rho}{+}} \bar{S}^{\nu \sigma} \Phi^{\mu}.
\end{equation} Taking the variation with respect to $\Phi^{\alpha}$ and $\Phi^{\alpha \beta}$, we obtain \begin{equation} \frac{{\nabla} V^{\alpha}}{{\nabla}S^{+}} = \frac{1}{2m}\,\Lambda^{\rho}_{\nu \sigma}\, \bar{S}^{\nu \sigma}\, V^{\alpha}_{| \stackrel{\rho}{+}}, \end{equation} and \begin{equation} \frac{{\nabla} \bar{S}^{\alpha \beta}}{{\nabla}S^{+}} = 0. \end{equation} Using the commutation relation (3.31), the conditions (3.32) and \begin{equation} \bar{S}^{\mu \nu}_{| \stackrel{\rho}{+}} \Phi^{\rho} = \Phi^{\mu \nu}_{| \stackrel{\rho}{+}} V^{\rho}, \end{equation} substituted in the two equations above, we derive the corresponding set of deviation equations \begin{equation} \frac{{\nabla}^{2} \Phi^{\alpha}}{{\nabla}S^{2+}} = \Lambda^{\rho}_{\mu \nu}V^{\mu} V^{\nu} \Phi^{\alpha}_{| \stackrel{\rho}{+}} , \end{equation} and \begin{equation} \frac{{\nabla}^{2} \Phi^{\alpha \beta}}{{\nabla}S^{2+}} = \Lambda^{\rho}_{\mu \nu}V^{\mu} \Phi^{\nu}\, \bar{S}^{\alpha \beta}_{| \stackrel{\rho}{+}}.
\end{equation} \subsection*{(ii) The case $P_{+} \neq m V$} Let us suggest the following Lagrangian: \begin{equation} L= g_{\mu \nu} P_{+}^{\mu}\frac{{\nabla} \Phi ^{\nu}}{{\nabla}S^{+}} + \bar{S}_{\mu \nu} \frac{{\nabla}{\Phi}^{\mu \nu}}{{\nabla}S^{+}} + \frac{1}{2m} g_{\mu \nu}\, \Lambda^{\rho}_{\delta \sigma}\bar{S}^{\delta \sigma}V^{\mu}_{| \stackrel{\rho}{+}}\Phi^{\nu} + g_{\alpha \mu }g_{\beta \nu}\big[ P_{+}^{\alpha}V^{\beta} -P_{+}^{\beta}V^{\alpha} \big] \Phi^{\mu \nu}, \end{equation} where $$P_{+}^{\mu}= m V^{\mu}+ V_{\nu} \frac{\nabla \bar{S}^{\mu \nu}}{\nabla S^{+}}.$$ Taking the variation with respect to $\Phi^{\alpha}$ and $\Phi^{\alpha \beta}$, we obtain \begin{equation} \frac{ {\nabla} P_{+}^{\alpha}}{{\nabla}S^{+}} = \frac{1}{2m}\, \Lambda^{\rho}_{ \delta \nu}\bar{S}^{\delta \nu}V^{\alpha}_{| \stackrel{\rho}{+}}, \end{equation} and \begin{equation} \frac{{\nabla} \bar{S}^{\alpha \beta}}{{\nabla}{S}^{+}} = P_{+}^{\alpha}V^{\beta} -P_{+}^{\beta}V^{\alpha}. \end{equation} Using the commutation relation (3.31) and the conditions (3.32) and (4.4), substituted in the two equations above, we derive the corresponding set of deviation equations \begin{equation} \frac{{\nabla}^{2} \Phi^{\alpha}}{{\nabla}S^{2+}} = \Big( \frac{1}{2m}\Lambda^{\rho}_ {\mu \nu}\bar{S}^{\mu \nu}V^{\alpha}_{| \stackrel{\rho}{+}}\Big)_{| \stackrel{\delta}{+}} \Phi^{\delta}, \end{equation} and \begin{equation} \frac{{\nabla}^{2} \Phi^{\alpha \beta}}{{\nabla}S^{2+}} = \Lambda^{\rho}_{\mu \nu}V^{\mu} \Phi^{\nu}\, \bar{S}^{\alpha \beta}_{| \stackrel{\rho}{+}} + \big[P_{+}^{\alpha}V^{\beta} -P_{+}^{\beta}V^{\alpha} \big]_{| \stackrel{\delta}{+}}\Phi^{\delta}.
\end{equation} From the above results for the spinning equations and their corresponding deviation equations, we may regard them as the equivalent set of equations for spinning objects in the presence of teleparallel gravity [13]. \subsection{Spinning and Spinning Deviation Equations Subject to the (0)ve Derivative} \subsection*{(i) The case $P_{(0)} = mW$} \begin{equation} L= g_{\mu \nu} W^{\mu}\frac{\hat{\nabla} \eta ^{\nu}}{\hat{\nabla}S^{(0)}} + \hat{S}_{\mu \nu} \frac{\hat{\nabla} \eta^{\mu \nu}}{ \hat{\nabla}S^{(0)}} + \frac{1}{2m} L_{\mu \nu \rho \delta} \eta^{\mu} W^{\nu} \hat{S}^{\rho \delta}+ \frac{1}{2m}\Lambda^{\rho}_{\nu \sigma} {W}_{\mu | \stackrel{\rho}{(0)}} \hat{S}^{\nu \sigma} \eta^{\mu}. \end{equation} Taking the variation with respect to $\eta^{\alpha}$ and $\eta^{\alpha \beta}$, we obtain \begin{equation} \frac{\hat{\nabla} W^{\alpha}}{\hat{\nabla}S^{(0)}} = \frac{1}{2} \Lambda^{~.~.~\alpha}_{(\mu \nu)} W^{\mu} W^{\nu} + \frac{1}{2m} L^{\alpha}_{\nu \rho \sigma} W^{\nu} \hat{S}^{\rho \sigma} + \frac{1}{2m} \Lambda^{\rho}_{\nu \sigma} {W}^{\alpha}_{| \stackrel{\rho}{(0)}} \hat{S}^{\nu \sigma}, \end{equation} and \begin{equation} \frac{\hat{\nabla} \hat{S}^{\alpha \beta}}{\hat{\nabla}S^{(0)}} = \frac{1}{2}\Lambda^{~.~.~[ \alpha }_{(\mu \nu)}\hat{S}^{\beta ]\mu} W^{\nu}.
\end{equation} Using the commutation relation (3.38), the conditions (3.39) and \begin{equation} \hat{S}^{\mu \nu}_{| \stackrel{\rho}{(0)}} \eta^{\rho} = \eta^{\mu \nu}_{| \stackrel{\rho}{(0)}} W^{\rho}, \end{equation} substituted in the two equations above, we derive the corresponding set of deviation equations \begin{equation} \frac{\hat{\nabla}^{2} \eta^{\alpha}}{\hat{\nabla}S^{2(0)}} = L^{\alpha}_{\mu \nu \rho} W^{\mu}W^{\nu}\eta^{\rho} + \Lambda^{\rho}_{\mu \nu}W^{\mu} \eta^{\nu}\, \eta^{\alpha}_{| \stackrel{\rho}{(0)}} + \frac{1}{2} \Big( \Lambda^{~.~.~\alpha}_{\mu \nu} W^{\mu} W^{\nu} + \frac{1}{2m} L^{\alpha}_{\nu \rho \lambda} W^{\nu} \hat{S}^{\rho \lambda}\Big)_{| \stackrel{\sigma}{0}}\eta^{\sigma}, \end{equation} and \begin{equation} \frac{\hat{\nabla}^{2} \eta^{\alpha \beta}}{\hat{\nabla}S^{2(0)}} = \hat{S}^{\mu [ \beta}L^{\alpha ] }_{\mu \nu \rho} W^{\nu}\eta^{\rho} + \Lambda^{\rho}_{\mu \nu}W^{\mu} \eta^{\nu}\, \hat{S}^{\alpha \beta}_{| \stackrel{\rho}{0}}. \end{equation} \subsection*{(ii) The case $P_{(0)} \neq m W$} Let us suggest the following Lagrangian: \begin{equation} L= g_{\mu \nu} P_{(0)}^{\mu}\frac{\hat{\nabla} \eta ^{\nu}}{\hat{\nabla}S^{(0)}} + \hat{S}_{\mu \nu} \frac{\hat{\nabla}{\eta}^{\mu \nu}}{\hat{\nabla}S^{(0)}} + \frac{1}{2m} L_{\mu \nu \rho \delta} \eta^{\mu} W^{\nu} \hat{S}^{\rho \delta}+ \frac{1}{2m} g_{\mu \nu}\, \Lambda^{\rho}_{\delta \sigma}\hat{S}^{\delta \sigma}W^{\mu}_{| \stackrel{\rho}{(0)}}\eta^{\nu}+ g_{\mu \rho}g_{\nu \delta} \big[P_{(0)}^{\rho} W^{\delta}-P_{(0)}^{\delta} W^{\rho} \big] \eta^{\mu \nu}, \end{equation} where $$P_{(0)}^{\mu}= m W^{\mu}+ W_{\nu} \frac{\hat{\nabla} \hat{S}^{\mu \nu}}{\hat{\nabla} S^{(0)}}.$$ Taking the variation with respect to $\eta^{\alpha}$ and $\eta^{\alpha \beta}$, we obtain \begin{equation} \frac{ \hat{\nabla} P_{(0)}^{\alpha}}{\hat{\nabla}S^{(0)}} = \frac{1}{2} \Lambda^{~.~.~\alpha}_{\mu \nu} P_{(0)}^{\mu} W^{\nu} + \frac{1}{2m} L^{\alpha}_{\nu \rho \sigma} W^{\nu} \hat{S}^{\rho \sigma} + \frac{1}{2m} \Lambda^{\rho}_{ \delta \nu}\hat{S}^{\delta \nu}W^{\alpha}_{| \stackrel{\rho}{(0)}}, \end{equation} and \begin{equation} \frac{\hat{\nabla} \hat{S}^{\alpha \beta}}{\hat{\nabla}{S}^{(0)}} = \frac{1}{2} \Lambda^{~.~.[\alpha}_{\mu \nu} \hat{S}^{\mu \beta]} W^{\nu}. \end{equation} Using the commutation relation (3.38), the conditions (3.39) and (4.15), substituted in the two equations above, we derive the corresponding set of deviation equations \begin{equation} \frac{\hat{\nabla}^{2} \eta^{\alpha}}{\hat{\nabla}S^{2(0)}} = L^{\alpha}_{\mu \nu \rho} P_{(0)}^{\mu}W^{\nu}\eta^{\rho} + \Lambda^{\rho}_{\mu \nu}P_{(0)}^{\mu} \eta^{\nu}\, \eta^{\alpha}_{| \stackrel{\rho}{(0)}} + \frac{1}{2} \Big( \Lambda^{~.~.~\alpha}_{\mu \nu} P_{(0)}^{\mu} W^{\nu} + \frac{1}{2m} L^{\alpha}_{\nu \rho \sigma} W^{\nu} \hat{S}^{\rho \sigma}+ \frac{1}{2m}\Lambda^{\rho}_ {\delta \sigma}\hat{S}^{\delta \sigma}W^{\alpha}_{| \stackrel{\rho}{(0)}}\Big)_{| \stackrel{\epsilon}{(0)}} \eta^{\epsilon}, \end{equation} and \begin{equation} \frac{\hat{\nabla}^{2} \eta^{\alpha \beta}}{\hat{\nabla}S^{2(0)}} = \hat{S}^{\mu [ \beta}L^{\alpha ] }_{\mu \nu \rho} W^{\nu} \eta^{\rho} + \Lambda^{\rho}_{\mu \nu}W^{\mu} \eta^{\nu}\, \hat{S}^{\alpha \beta}_{| \stackrel{\rho}{0}} + \frac{1}{2}\big( \Lambda^{~.~.[\alpha}_{\mu \nu} \hat{S}^{\mu \beta]} W^{\nu}\big)_{| \stackrel{\delta}{(0)}}\eta^{\delta} + \big[P_{(0)}^{\alpha}W^{\beta} -P_{(0)}^{\beta}W^{\alpha} \big]_{| \stackrel{\delta}{(0)}}\eta^{\delta}. \end{equation} \subsection{Spinning and Spinning Deviation Equations Subject to the (-)ve Derivative} \subsection*{(i) The case $P_{-} = mJ$} \begin{equation} L= g_{\mu \nu} J^{\mu}\frac{\tilde{\nabla} \zeta ^{\nu}}{\tilde{\nabla}S^{-}} + \tilde{S}_{\mu \nu} \frac{\tilde{\nabla} \zeta^{\mu \nu}}{ \tilde{\nabla}S^{-}} + \frac{1}{2m} N_{\mu \nu \rho \delta} \zeta^{\mu} J^{\nu} \tilde{S}^{\rho \delta}+ \frac{1}{2m}\Lambda^{\rho}_{\nu \sigma} {J}_{\mu | \stackrel{\rho}{-}} \tilde{S}^{\nu \sigma} \zeta^{\mu}. \end{equation} Taking the variation with respect to $\zeta^{\alpha}$ and $\zeta^{\alpha \beta}$, we obtain \begin{equation}
\frac{\tilde{\nabla} J^{\alpha}}{\tilde{\nabla}S^{-}} = \Lambda^{~.~.~\alpha}_{\mu \nu} J^{\mu} J^{\nu} + \frac{1}{2m} N^{\alpha}_{\nu \rho \sigma} J^{\nu} \tilde{S}^{\rho \sigma}+ \frac{1}{2m}\Lambda^{\rho}_{\delta \nu}\tilde{S}^{\delta \nu}J^{\alpha}_{| \stackrel{\rho}{-}}, \end{equation} and \begin{equation} \frac{\tilde{\nabla} \tilde{S}^{\alpha \beta}}{\tilde{\nabla}S^{-}} = \Lambda^{~.~ [\alpha }_{(\mu \nu)}\tilde{S}^{\beta]\mu} J^{\nu}. \end{equation} Using the commutation relation (3.44), the conditions (3.48) and \begin{equation} \tilde{S}^{\mu \nu}_{| \stackrel{\rho}{-}} \zeta^{\rho} = \zeta^{\mu \nu}_{| \stackrel{\rho}{-}} J^{\rho}, \end{equation} substituted in the two equations above, we derive the corresponding set of deviation equations \begin{equation} \frac{\tilde{\nabla}^{2} \zeta^{\alpha}}{\tilde{\nabla}S^{2(-)}} = N^{\alpha}_{\mu \nu \rho} J^{\mu}J^{\nu}\zeta^{\rho} + \Lambda^{\rho}_{\mu \nu}J^{\mu} J^{\nu}\, \zeta^{\alpha}_{| \stackrel{\rho}{-}} + \Big( \Lambda^{~.~.~\alpha}_{\mu \nu} J^{\mu} J^{\nu} + \frac{1}{2m} N^{\alpha}_{\nu \rho \sigma} J^{\nu} \tilde{S}^{\rho \sigma}\Big)_{| \stackrel{\delta}{-}}\zeta^{\delta}, \end{equation} and \begin{equation} \frac{\tilde{\nabla}^{2} \zeta^{\alpha \beta}}{\tilde{\nabla}S^{2(-)}} = \tilde{S}^{\mu [ \beta}N^{\alpha ] }_{\mu \nu \rho} J^{\nu}\zeta^{\rho} + \Lambda^{\rho}_{\mu \nu}J^{\mu} \zeta^{\nu}\, \tilde{S}^{\alpha \beta}_{| \stackrel{\rho}{-}}. \end{equation} \subsection*{(ii) The case $P_{-} \neq m J$} Let us suggest the following Lagrangian: \begin{equation} L= g_{\mu \nu} P_{-}^{\mu}\frac{\tilde{\nabla} \zeta ^{\nu}}{\tilde{\nabla}S^{-}} + \tilde{S}_{\mu \nu} \frac{\tilde{\nabla}{\zeta}^{\mu \nu}}{\tilde{\nabla}S^{-}} + \frac{1}{2m} N_{\mu \nu \rho \delta} \zeta^{\mu} J^{\nu} \tilde{S}^{\rho \delta}+ \frac{1}{2m} g_{\mu \nu}\, \Lambda^{\rho}_{\delta \sigma}\tilde{S}^{\delta \sigma}J^{\mu}_{| \stackrel{\rho}{-}}\zeta^{\nu} + g_{\mu \rho}g_{\nu \delta} \big[P_{-}^{\rho} J^{\delta}-P_{-}^{\delta} J^{\rho} \big] \zeta^{\mu \nu}, \end{equation} where $$P_{-}^{\mu}= m J^{\mu}+ J_{\nu} \frac{\tilde{\nabla} \tilde{S}^{\mu \nu}}{\tilde{\nabla} S^{-}}.$$ Taking the variation with respect to $\zeta^{\alpha}$ and $\zeta^{\alpha \beta}$, we obtain \begin{equation} \frac{ \tilde{\nabla} P_{-}^{\alpha}}{\tilde{\nabla}S^{-}} = \Lambda^{~.~.~\alpha}_{\mu \nu} P_{-}^{\mu} J^{\nu} + \frac{1}{2m} N^{\alpha}_{\nu \rho \sigma} J^{\nu} \tilde{S}^{\rho \sigma} + \frac{1}{2m} \Lambda^{\rho}_{ \delta \nu}\tilde{S}^{\delta \nu}J^{\alpha}_{| \stackrel{\rho}{-}}, \end{equation} and \begin{equation} \frac{\tilde{\nabla} \tilde{S}^{\alpha \beta}}{\tilde{\nabla}{S}^{-}} = \Lambda^{~.~.[\alpha}_{\mu \nu} \tilde{S}^{\mu \beta]} J^{\nu}. \end{equation} Using the commutation relation (3.44) and the conditions (3.48) and (4.22), substituted in the two equations above, we derive the corresponding set of deviation equations \begin{equation} \frac{\tilde{\nabla}^{2} \zeta^{\alpha}}{\tilde{\nabla}S^{2-}} = N^{\alpha}_{\mu \nu \rho} P_{-}^{\mu}J^{\nu}\zeta^{\rho} + \Lambda^{\rho}_{\mu \nu}P_{-}^{\mu} \zeta^{\nu}\, \zeta^{\alpha}_{| \stackrel{\rho}{-}} + \Big( \Lambda^{~.~.~\alpha}_{\mu \nu} P_{-}^{\mu} J^{\nu} + \frac{1}{2m} N^{\alpha}_{\nu \rho \sigma} J^{\nu} \tilde{S}^{\rho \sigma}+ \frac{1}{2m}\Lambda^{\rho}_ {\delta \sigma}\tilde{S}^{\delta \sigma}J^{\alpha}_{| \stackrel{\rho}{-}}\Big)_{| \stackrel{\epsilon}{-}} \zeta^{\epsilon}, \end{equation} and \begin{equation}
\frac{\tilde{\nabla}^{2} \zeta^{\alpha \beta}}{\tilde{\nabla}S^{2-}} = \tilde{S}^{\mu [ \beta}N^{\alpha ] }_{\mu \nu \rho} J^{\nu} J^{\rho} + \Lambda^{\rho}_{\mu \nu}J^{\mu} \zeta^{\nu} \tilde{S}^{\alpha \beta}_{| \stackrel{\rho}{-}} + \Lambda^{~.~.[\alpha}_{\mu \nu} \tilde{S}^{\mu \beta]} J_{\nu| \stackrel{\delta}{-}}\zeta^{\delta}+ [P_{-}^{\alpha}J^{\beta} -P_{-}^{\beta}J^{\alpha} ]_{| \stackrel{\delta}{-}}\zeta^{\delta}. \end{equation} \section{Discussion and Concluding Remarks} The present work extends the concept of geometrization of physics to describe spinning objects in a gravitational field. We have developed the modified Bazanski Lagrangian for spinning objects in general relativity, expressed in AP-geometry. Owing to the wealth of geometric quantities in this setting, the spin tensor associated with each path must be defined through a specific type of absolute derivative. We have also emphasized the relationship between the geodesic and geodesic deviation equations and spin tensors, in a form viable for any type of geometry, by testing its reliability in both Riemannian and AP-geometry. Moreover, the spin tensor has been defined geometrically through commutation relations between the geodesic and geodesic deviation equations in Riemannian geometry and their counterparts in AP-geometry. Accordingly, we have obtained three different sets of spinning equations, distinct from their counterpart in Riemannian geometry. One of them can be used to describe the spinning equations and their deviations in teleparallel gravity, i.e. this set of spinning equations represents the Papapetrou equations of the Hayashi-Shirafuji New General Relativity [13], while the other two paths may describe, hypothetically, sets of spinning particles subject to a class of simultaneously non-vanishing curvature and torsion.
This may require an efficient field theory capable of giving a physical interpretation of $\hat{S}^{\mu \nu}$ and $\tilde{S}^{\mu \nu}$, which is still an open question.\\ This study has also clarified the viability of the interaction between the torsion tensor and the spin deviation equations, as mentioned previously in the case of gauge theories of gravity [24]. Nevertheless, these sets of spinning equations can also be applied in PAP-geometry, to give new results. By revisiting the bi-metric theories of gravity using the tetrad formalism, one may find promising results able to reveal the mystery of several anomalies in nature, such as dark matter and dark energy, which will be studied in our future work. \section*{References} {[1]} M. Mathisson, Acta Phys. Polon. {\bf{6}}, 163 (1937).\\ {[2]} A. Papapetrou, Proc. R. Soc. London A {\bf{209}}, 248 (1951). \\ {[3]} W.G. Dixon, Proc. R. Soc. London A {\bf{314}}, 499 (1970). \\ {[4]} F. Cianfrani, I. Milillo and G. Montani, Phys. Lett. A {\bf{366}}, 7 (2007); gr-qc/0701157. \\ {[5]} R. Utiyama, Phys. Rev. {\bf{101}}, 1597 (1956). \\ {[6]} T.W.B. Kibble, J. Math. Phys. {\bf{2}}, 212 (1961). \\ {[7]} F.W. Hehl, P. von der Heyde, G.D. Kerlick and J.M. Nester, Rev. Mod. Phys. {\bf{48}}, 393 (1976). \\ {[8]} F.W. Hehl, Proceedings of the 6th Course of the International School of Cosmology and Gravitation on "Spin, Torsion and Supergravity", eds. P.G. Bergmann and V. de Sabbata, held at Erice, 1 (1979).\\ {[9]} H.I. Arcos and J.G. Pereira, Int. J. Mod. Phys. D {\bf{13}}, 2193 (2004). \\ {[10]} S. Hojman, Phys. Rev. D {\bf{18}}, 2741 (1978). \\ {[11]} P.H. Yasskin and W.R. Stoeger, Phys. Rev. D {\bf{21}}, 2081 (1980). \\ {[12]} R. Hammond, Rep. Prog. Phys. {\bf{65}}, 599 (2002). \\ {[13]} K. Hayashi and T. Shirafuji, Phys. Rev. D {\bf{19}}, 3524 (1979). \\ {[14]} F.I. Mikhail and M.I. Wanas, Proc. Roy. Soc. Lond. A {\bf{356}}, 471 (1977). \\ {[15]} M.I. Wanas, Stud. Cercet. \c Stiin\c t. Ser. Mat. Univ.
Bac\u au {\bf{10}}, 297 (2001); gr-qc/0209050. \\ {[16]} M.I. Wanas, Turk. J. Phys. {\bf{24}}, 473 (2000); gr-qc/0010099. \\ {[17]} M.I. Wanas, M. Melek and M.E. Kahil, Astrophys. Space Sci. {\bf{228}}, 273 (1995); gr-qc/0207113. \\ {[18]} M.I. Wanas, Astrophys. Space Sci. {\bf{258}}, 237 (1998); gr-qc/9904019. \\ {[19]} M.I. Wanas, M. Melek and M.E. Kahil, Grav. Cosmol. {\bf{6}}, 319 (2000). \\ {[20]} M.I. Wanas, M. Melek and M.E. Kahil, Proc. MG IX, part B, p. 1100, eds. V.G. Gurzadyan et al. (World Scientific, 2002); gr-qc/0306086. \\ {[21]} S.L. Bazanski, J. Math. Phys. {\bf{30}}, 1018 (1989).\\ {[22]} D. Bini and A. Geralico, Phys. Rev. D {\bf{84}}, 104012 (2011); arXiv:1408.4952.\\ {[23]} M.E. Kahil, J. Math. Phys. {\bf{47}}, 052501 (2006). \\ {[24]} M.E. Kahil, Odessa Astronomical Publications {\bf{28/2}}, 126 (2015). \\ {[25]} M.E. Kahil, Grav. Cosmol. {\bf{24}}, 83 (2018). \\ {[26]} M. Mohseni, Gen. Rel. Grav. {\bf{42}}, 2477 (2010). \\ {[27]} M. Roshan, Phys. Rev. D {\bf{87}}, 044005 (2013); arXiv:1210.3136.\\ \section*{Appendix} \section*{AP-Geometry: A Brief Introduction} AP-space is an n-dimensional manifold, each point of which is labeled by a set of n independent variables $x^{\nu}$ $(\nu = 1,2,3,...,n)$. The structure of this space is defined completely by a set of n contravariant vector fields $\h{i}^{\mu}$, where $i~(=1,2,3,...,n)$ denotes the vector number and $\mu~(= 1,2,3,...,n)$ denotes \underline{the coordinate component}; in our future work, we will regard $i$ as an index representing LLT. This may give rise to modifying the notation of AP-space to include all indices, holonomic and anholonomic, covering both LLT and GCT. The normalized cofactor $\h{i}_{\mu}$ of the vectors $\h{i}^{\mu}$, in the determinant $|| \h{i}^{\mu} ||$, is defined such that (cf. [10]) $$ \h{i}^{\mu}\h{j}_{\mu} = \delta_{ij}, $$ $$ \h{i}^{\mu}\h{i}_{\nu} = \delta^{\mu}_{\nu}.
$$ Using these vectors, the following second order symmetric tensors are defined: $$ g^{\mu \nu} \edf \h{i}^{\mu} \h{i}^{\nu}, $$ $$ g_{\mu \nu} \edf \h{i}_{\mu} \h{i}_{\nu}. $$ Consequently, $$ g^{\mu \alpha}g_{\nu \alpha} = {\delta}^{\mu}_{\nu}. $$ The tensor $g_{\mu \nu}$ can be used to play the role of the metric tensor of the Riemannian space associated with the AP-space, when needed. Consequently, using (2.3) and the derivatives (2.4), one can define Christoffel symbols and covariant derivatives in the usual manner. The following third order tensor, the contortion tensor, can be defined as $$ \gamma^{\alpha}_{. \mu \nu} \edf \h{i}^{\alpha} \h{i}_{\mu ; \nu}, $$ which is non-symmetric in its last two indices $\mu , \nu$. It can be shown that $\gamma^{\alpha}_{. \mu \nu}$ is skew-symmetric in its first two indices. It is well known that the addition of any third order tensor to an affine connection gives another connection; thus the object defined by $$ \Gamma^{\alpha}_{.\mu \nu} \edf \cs{\mu}{\nu}{\alpha} + \gamma^{\alpha}_{.\mu \nu} $$ is also a connection. The corresponding tensor derivatives are $$ A^{\stackrel{\mu}{+}} _{.~| \nu} \edf A^{\mu} _{, \nu} + A^{\alpha}\Gamma^{\mu}_{.\alpha \nu}, $$ $$ A^{\stackrel{\mu}{-}} _{.~| \nu} \edf A^{\mu} _{, \nu} + A^{\alpha}\tilde{\Gamma}^{\mu}_{.\alpha \nu}, $$ $$ A^{\mu}_{.~| \nu} \edf A^{\mu} _{, \nu} + A^{\alpha}\Gamma^{\mu}_{.( \alpha \nu )}, $$ where the comma (,) denotes ordinary partial differentiation, the stroke with the $(+)$ sign denotes tensor differentiation using the affine connection (...), the stroke with the $(-)$ sign denotes tensor differentiation using the dual connection $$ \tilde \Gamma^{\alpha}_{. \mu \nu} \edf \Gamma^{\alpha}_{. \nu \mu}, $$ while the stroke without signs characterizes tensor differentiation using the symmetric connection $$ \Gamma^{\alpha}_{. ( \mu \nu )} \edf \frac{1}{2}(\Gamma^{\alpha}_{. \mu \nu} + \Gamma^{\alpha}_{. \nu \mu}), $$ or, using (---), we can write $$ \Gamma^{\alpha}_{.
( \mu \nu )} \edf \cs{\mu}{\nu}{\alpha} + \frac{1}{2} \Delta^{\alpha}_{. \mu \nu}, $$ where $$ \Delta^{\alpha}_{. \mu \nu} \edf \gamma^{\alpha}_{. \mu \nu} + \gamma^{\alpha}_{. \nu \mu}. $$ As a consequence of using the connection (2.7), it is easy to show that \begin{equation} \h{i}_{\stackrel{\mu}{+} | \nu} = 0, \end{equation} which is usually known, in the literature, as the AP-condition. The solution of equation (2.15) gives an alternative definition of the non-symmetric connection, \begin{equation} \Gamma^{\alpha}_{. \mu \nu } \edf \h{i}^{\alpha}\h{i}_{\mu , \nu}. \end{equation} Since $ \Gamma^{\alpha}_{. \mu \nu }$ is non-symmetric, one can define the torsion tensor of AP-geometry as \begin{equation} \Lambda^{\alpha}_{. \mu \nu }\edf \Gamma^{\alpha}_{. \mu \nu } - \Gamma^{\alpha}_{. \nu \mu }, \end{equation} or, using (---), \begin{equation} \Lambda^{\alpha}_{. \mu \nu } \edf \gamma^{\alpha}_{. \mu \nu } - \gamma^{\alpha}_{. \nu \mu }, \end{equation} which is skew-symmetric in its last two indices. As a direct result of using (2.15) and definition (2.4), one obtains \begin{equation} g_{_{\stackrel{\mu}{+} \stackrel{\nu}{+} | \sigma}} =0, \end{equation} which can be written in the form \begin{equation} g_{\mu \nu , \sigma} = g_{\mu \alpha} \Gamma^{\alpha}_{.\nu \sigma} + g_{\nu \alpha} \Gamma^{\alpha}_{.\mu \sigma}. \end{equation} Also, recall that the Christoffel symbol results from the metricity condition \begin{equation} g_{\mu \nu ;~ \sigma} = 0, \end{equation} which gives \begin{equation} g_{\mu \nu , \sigma} = g_{\mu \alpha} \cs{\nu}{\sigma}{\alpha} + g_{\nu \alpha} \cs{\mu}{\sigma}{\alpha}. \end{equation} In view of (--) and (--), it is clear that the operation of raising or lowering tensor indices commutes with covariant differentiation using the Christoffel symbol and with tensor differentiation using (--). Contracting (2.17b), one obtains the following covariant vector $$ C_{\mu} \edf \Lambda^{\alpha}_{.
\mu \alpha} = \gamma^{\alpha}_{. \mu \alpha } \eqno{(2.22)} $$ which is called the basic vector (cf. [11]). For the spinning motion, one may suggest the Lagrangian \begin{equation} L= g_{\mu \nu} \bar{V}^{\mu} \frac{\nabla \bar{\Phi}^{\nu}}{\nabla \bar{s}^{+}} + \frac{1}{2m}\Lambda^{\rho}_{\mu \nu} \bar{V}_{\delta| \stackrel{\rho}{+}} S^{\mu \nu} \bar{\Psi}^{\delta} \end{equation} Taking the variation with respect to $\bar{\Psi}^{\alpha}$, we obtain \begin{equation} \frac{\nabla \bar{V}^{\alpha}}{ \nabla \bar{s}^{+}}= \frac{1}{2m} \Lambda^{\rho}_{\nu \sigma} \bar{V}_{\mu | \stackrel{\rho}{+}} \bar{S}^{\nu \sigma} \end{equation} Applying the commutation relation and condition ..., we obtain the corresponding deviation equation \begin{equation} \frac{\nabla^{2} \bar{\Phi}^{\alpha}}{ \nabla \bar{s}^{2+}}= \frac{1}{2m} ( \Lambda^{\rho}_{\nu \sigma} \bar{V}_{\mu | \stackrel{\rho}{+}} \bar{S}^{\nu \sigma} )_{| \stackrel{\delta}{+}} \bar{\Phi}^{\delta} \end{equation} In this sense, the extended form of the Papapetrou equation is based on extending the notion of the spin tensor to non-Riemannian geometry. \subsection{Spinning and spinning deviation equations without precession: AP-geometry} \section*{References} {[1]} A. Papapetrou, Proc. R. Soc. London A {\bf{209}}, 248 (1951). \\ {[2]} E. Corinaldesi and A. Papapetrou, Proc. R. Soc. London A {\bf{209}}, 259 (1951).\\ {[6]} F.W. Hehl, Proceedings of the 6th Course of the International School of Cosmology and Gravitation on "Spin, Torsion and Supergravity", eds. P.G. Bergmann and V. de Sabbata, held at Erice, 1 (1979).\\ {[7]} F.W. Hehl, J.D. McCrea, E.W. Mielke and Y. Ne'eman, Phys. Rep. {\bf{258}}, 1 (1995). \\ {[8]} S.A. Ali, C. Cafaro, S. Capozziello and Ch. Corda (2009); arXiv:0907.0934. \\ {[9]} K. Hayashi and T. Shirafuji, Phys. Rev. D {\bf{19}}, 3524 (1979). \\ {[10]} P. Collins, A. Martin and E. Squires, "{\it{Particle Physics and Cosmology}}", John Wiley and Sons, New York (1989). \\ {[11]} R. Hammond, Rep. Prog.
Phys. {\bf{65}}, 599 (2002). \\ {[12]} H.I. Arcos and J.G. Pereira, Int. J. Mod. Phys. D {\bf{13}}, 2193 (2004). \\ {[13]} Y. Mao, M. Tegmark, A. Guth and S. Cabi, Phys. Rev. D {\bf{76}}, 104029 (2007); arXiv:gr-qc/0608121. \\ {[14]} F.W. Hehl, Phys. Lett. {\bf{36A}}, 225 (1971). \\ {[15]} S. Hojman, Phys. Rev. D {\bf{18}}, 2741 (1978). \\ {[16]} P.H. Yasskin and W.R. Stoeger, Phys. Rev. D {\bf{21}}, 2081 (1980). \\ {[17]} S.L. Bazanski, J. Math. Phys. {\bf{30}}, 1018 (1989); M.E. Kahil, J. Math. Phys. {\bf{47}}, 052501 (2006). \\ {[18]} M.E. Kahil, Odessa Astronomical Publications {\bf{28/2}}, 126 (2015). \\ {[20]} H.I. Arcos, V.C. Andrade and J.G. Pereira (2004); arXiv:gr-qc/0403074. \\ {[21]} S. Hojman, M. Rosenbaum and M.P. Ryan, Phys. Rev. D {\bf{19}}, 430 (1979). \\ {[22]} F.W. Hehl, Yu.N. Obukhov and D. Puetzfeld, Phys. Lett. A {\bf{377}}, 1775 (2013); arXiv:1304.2769. \\ {[23]} Y. Duan and Y. Jiang, Gen. Rel. Grav. {\bf{31}}, 99 (1999). \\ {[24]} F. Cianfrani, G. Montani and V. Scopelliti (2015); arXiv:1505.00943. \\ {[25]} L. Fabbri and S. Vignolo (2011); arXiv:1201.286. \\ {[26]} M.I. Wanas, M. Melek and M.E. Kahil, Grav. Cosmol. {\bf{6}}, 319 (2000). \\ {[27]} M.I. Wanas and M.E. Kahil, Gen. Rel. Grav. {\bf{31}}, 1921 (1999); \\ M.I. Wanas, M. Melek and M.E. Kahil, Proc. MG IX, part B, p. 1100, eds. V.G. Gurzadyan et al. (World Scientific, 2003); gr-qc/0306086.\\ {[28]} M.I. Wanas, M.E. Kahil and Mona M. Kamal, Grav. Cosmol. {\bf{22}}, 345 (2016). \\ {[29]} D. Puetzfeld and Yu.N. Obukhov
(2013); arXiv:1308.2269 \\ Accordingly, using the commutation relation and the condition, we obtain the corresponding deviation equation, which reads \begin{equation} \frac{D^{2} \bar{\Psi}^{\mu}}{D \bar{s}^{2}} = R^{\mu}_{\nu \rho \sigma} \bar{U}^{\nu}\bar{U}^{\rho}\bar{\Psi}^{\sigma} + \frac{1}{2m} (R^{\mu}_{\alpha \nu \sigma} S^{\nu \sigma} \bar{U}^{\alpha})_{; \delta}\bar{\Psi}^{\delta} \end{equation} From this result, we obtain the relationship between the trajectories of spinning particles without precession and the geodesics of test particles. Such a result emphasizes the geometrization of physics, in which the physical trajectories of spinning objects can be expressed in terms of the geodesic and geodesic deviation equations of test particles. \subsection{Spinning motion of non-geodesic equation} Paths that follow non-geodesic trajectories, and their corresponding deviation equations, are obtained using the Euler-Lagrange equation of the following Lagrangian: \begin{equation} L {\stackrel{def.}{=}} g_{\mu \nu}P^{\mu} \frac{D \Psi^\nu}{Ds} + \left( m(s)_{,\rho}+ \frac{1}{2} R_{\rho \beta \gamma \sigma}U^{\sigma}S^{\beta \gamma}\right) \Psi^{\rho} \end{equation} such that: \begin{equation} \frac{d}{ds}\frac{\partial L}{\partial{{\dot\Psi^{\alpha}}}} - \frac{\partial L}{ \partial \Psi^{\alpha}} =0 \end{equation} One obtains \begin{equation} \frac{DP^{\alpha}}{Ds} = m(s)_{,\beta}\, g^{\alpha \beta} + \frac{1}{2} R^{\alpha}_{\beta \gamma \sigma} S^{\gamma \sigma} U^{\beta} \end{equation} and, using the commutation relation (A.4) and the condition (A.5), we obtain its corresponding deviation equation \begin{equation} \frac{D^{2}\Psi^{\mu}}{Ds^2}= (R^{\mu}_{\nu \rho \sigma}P^{\nu}U^{\rho} \Psi^{\sigma})_{; \delta} \Psi^{\delta} + ( m(s)_{,\beta}\, g^{\mu \beta})_{;\rho} \Psi^{\rho} \end{equation} Assuming that the effective mass $m(s) \sim \exp(g(\psi)\psi)$, which may contribute to describing the behavior of the parallel force, $f_{||} = \nabla[g(\psi)\psi]$, as shown on the right hand side of
the equation. It is well known that this force is responsible for the mass variation along paths in Riemannian geometry. Yet, it has been found [17], by taking $ \sigma$ as another parameter such that $ s \sim \sigma$, that \begin{equation}\frac{1}{m}\frac{dm}{d \sigma} \equiv \frac{\sqrt{\Lambda/2}}{6}, \end{equation} which can be expressed as \begin{equation} \frac{1}{m}\frac{dm}{d \sigma} \approx 2 a_{0}/c, \end{equation} where $a_{0}$ is a constant of acceleration, $a_{0} \sim 2 \times 10^{-10}\ m/sec^2$, as known from MOND, and $c$ is the speed of light. Thus, we can find that the non-geodesic equation is related to MOND [9] in the following way: \begin{equation} \frac{d\hat{U}^{\alpha}}{d\sigma}+ \Gamma^{\alpha}_{\beta \delta} \hat{U}^{\beta}\hat{U}^{\delta} = 2 \frac{ a_{0}}{c} \hat{U}_{\beta} (g^{\alpha \beta}- \hat{U}^{\alpha}\hat{U}^{\beta}), \end{equation} where $\hat{U}^{\alpha} = \frac{dx^{\alpha}}{d \sigma}$, and its corresponding deviation equation becomes \begin{equation} \frac{D^{2}\hat{\Psi}^{\mu}}{D{\sigma}^2}= R^{\mu}_{\nu \rho \sigma}\hat{U}^{\nu}\hat{U}^{\rho} \hat{\Psi}^{\sigma} + 2 \frac{ a_{0}}{c}( \hat{U}_{\beta} (g^{\mu \beta}- \hat{U}^{\mu}\hat{U}^{\beta}))_{; \rho} \hat{\Psi}^{\rho}, \end{equation} such that $\hat{\Psi}^{\alpha}$ is its associated deviation vector. \section{The Spinning Equation for Objects of Dipolar Moment} It has been suggested by .... that the equation of spinning motion with electric and magnetic dipole moments, together with a force and a torque ...., follows from the Lagrangian
$$ L =g_{\mu \nu} ( P^{\mu}+ \Omega^{\mu}) \frac{D \Psi^{\nu}}{DS} + S_{\mu \nu} \frac{D \Psi^{\mu \nu}}{DS} + f_{\mu} \Psi^{\mu} + f_{\mu \nu} \Psi^{\mu \nu} $$ Taking the variation with respect to the deviation vector and the deviation tensor, respectively, we obtain: $$ \frac{D (P^{\alpha}+\Omega^{\alpha })}{DS} = R^{\alpha}_{\beta \gamma \rho} U^{\beta}U^{\gamma}U^{\rho} $$ and $$ \frac{D S^{\alpha \beta}}{Ds} = \Phi^{\alpha \beta}_{; \rho } U^{\rho}. \eqno{(A.6)} $$ Using the mechanism of condition ... and relation (), we find that $$ \frac{D^2 \Psi^{\mu}}{DS^2}= R^{\mu}_{\nu \rho \sigma}(P^{\nu}+ \Omega^{\nu}) U^{\rho} \Psi^{\sigma}+ f^{\mu}_{; \rho} \Psi^{\rho}, \eqno{(A.7)} $$ and $$ \frac{D^2\Psi^{\mu \nu}}{DS^2}= S^{\rho [ \mu} R^{\nu ]}_{\rho \sigma \epsilon} U^{\sigma} \Psi^{\epsilon} + f^{\mu \nu}_{; \rho} \Psi^{\rho}.\eqno{(A.8)} $$ \section{The associated AP-space and the role of the spin connection} In this part, it is important to examine the role of the spin connection regarded as an independent physical quantity, different from the tetrad vision of the previous section. Yet, the inclusion of some specific characteristics of particles may become deeper in significance once we include a richer geometric source. Finsler geometry could be regarded as a geometric candidate to express these quantities in a geometric way. \section{Modified parallelizable manifold: Einstein-Cartan} One must stress the importance of considering absolute parallelism as equivalent to a translational gauge theory. Accordingly, the AP condition can be extended to be expressed in terms of the spin connection, taking the anholonomic coordinates into consideration. This may let us focus on the work of Sousa and Maluf (2003) to stress the generalized spin connection.
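Before generalizing the AP condition to a spin connection, the plain AP condition (2.15) and its consequence (2.20) can be checked symbolically for the Weitzenböck-type connection (2.16). The sketch below is illustrative only: the 2D tetrad components, and the variable names (`e`, `gamma`, `g`), are assumptions made for the example, not quantities from the text.

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
n = 2

# Toy tetrad lambda_{i mu} (row i = vector number, column mu = coordinate
# index); any invertible choice works -- this one is an arbitrary assumption.
e = sp.Matrix([[sp.exp(x), 0],
               [sp.sin(x), sp.exp(y)]])
e_inv = e.inv()  # lambda_i^alpha, laid out as e_inv[alpha, i]

# Weitzenboeck-type connection, eq. (2.16):
# Gamma^a_{mu nu} = lambda_i^a * d_nu lambda_{i mu}
def gamma(a, mu, nu):
    return sp.simplify(sum(e_inv[a, i] * sp.diff(e[i, mu], coords[nu])
                           for i in range(n)))

# AP condition, eq. (2.15): d_nu lambda_{i mu} - Gamma^a_{mu nu} lambda_{i a} == 0
for i in range(n):
    for mu in range(n):
        for nu in range(n):
            ap = (sp.diff(e[i, mu], coords[nu])
                  - sum(gamma(a, mu, nu) * e[i, a] for a in range(n)))
            assert sp.simplify(ap) == 0

# Metric built from the tetrad, g_{mu nu} = lambda_{i mu} lambda_{i nu},
# and the identity (2.20):
# g_{mu nu, s} = g_{mu a} Gamma^a_{nu s} + g_{nu a} Gamma^a_{mu s}
g = e.T * e
for mu in range(n):
    for nu in range(n):
        for s in range(n):
            lhs = sp.diff(g[mu, nu], coords[s])
            rhs = sum(g[mu, a] * gamma(a, nu, s) + g[nu, a] * gamma(a, mu, s)
                      for a in range(n))
            assert sp.simplify(lhs - rhs) == 0
```

By construction, the connection (2.16) makes the tetrad covariantly constant, and the same connection reproduces the relation (2.20) for the metric built from the tetrad; the checks above confirm both identities on a concrete tetrad.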
In general, the spin connection is regarded as the Cartan connection $A^{ab}_{\mu}$, that is, a connection admitting curvature and torsion as follows: $$ R^{c }_{.d \mu \nu} \edf A^{c}_{d \nu , \mu} - A^{c}_{d \mu , \nu} + A^{c}_{a \mu }A^{a}_{d \nu} -A^{c}_{a \nu }A^{a}_{d \mu} $$ and $$ \Gamma^{c}_{\mu \nu} \edf e^{c}_{\mu , \nu} - e^{c}_{\nu , \mu} + e^{b}_{\mu}A^{c}_{b \nu}- e^{b}_{\nu}A^{c}_{b \mu}. $$ Thus the Cartan connection is defined in a holonomic basis as $$ \Gamma^{\alpha}_{\mu \nu} \edf e^{\alpha}_{c}( e^{c}_{\mu ,\nu} + e^{b}_{\mu}A^{c}_{ b \nu} ) $$ and $$ A^{a}_{b \mu } \edf e^{a}_{\rho}( \partial_{\mu}e^{\rho}_{b} + \Gamma^{\rho}_{\nu \mu}e^{\nu}_{b} ); $$ for tele-parallelism, $A^{a}_{b \mu }=0$. The curvature tensor becomes $$ {\hat{R}}^{\alpha}_{\beta \gamma \delta} \edf \Gamma^{\alpha}_{\beta \gamma, \delta } - \Gamma^{\alpha}_{\beta \delta, \gamma } + \Gamma^{\epsilon}_{\beta \gamma} \Gamma^{\alpha}_{\epsilon \delta} - \Gamma^{\epsilon}_{\beta \delta} \Gamma^{\alpha}_{\epsilon \gamma} $$ and the torsion is defined as $$\Lambda^{\alpha}_{\beta \gamma} = \Gamma^{\alpha}_{\beta \gamma}- \Gamma^{\alpha}_{\gamma \beta}. $$ In GR, the spin connection is equal to the Ricci coefficients of rotation, $ \omega^{a}_{b \mu} = e^{a}_{\rho}e^{\rho}_{b ; \mu} $. Thus, $$ \omega^{a}_{b \mu} \edf A^{a}_{b \mu} - \gamma^{a}_{b \mu}, $$ where $\gamma^{a}_{b \mu}$ is the contortion, defined as $$ \gamma^{a}_{b \mu} =\frac{1}{2}e^{c}_{\mu} (\Lambda^{~ a~}_{c . b } + \Lambda^{~a~}_{b . c} - \Lambda^{a~~}_{ . b c }). $$ Thus one obtains $$ R^{c}_{d \mu \nu} = \hat{R}^{c}_{d \mu \nu} - K^{c}_{d \mu \nu}, $$ where $$ K^{c}_{d \mu \nu} \edf \gamma^{c}_{d \nu ; \mu } - \gamma^{c}_{d \mu ; \nu} + \gamma^{c}_{a \mu} \gamma^{a}_{ d \nu} - \gamma^{c}_{a \nu} \gamma^{a}_{ d \mu}.$$ The modified AP-condition takes the following form (Duan and Jiang 1999): $$ \nabla_{\nu}e^{m}_{\mu} =0, $$ i.e.
$$ e^{m}_{\mu || \nu } \equiv 0, $$ $$ e^{a}_{\mu, \nu} - \Gamma^{\lambda}_{\mu \nu} e^{a}_{\lambda} + \omega^{a}_{.b \nu}e^{b}_{\mu} =0, $$ i.e. $$ \Gamma^{\alpha}_{\nu \mu} e^{m}_{\alpha} = e^{\alpha}_{m}( e^{m}_{\nu , \mu} + \omega_{\mu ~.~ n}^{~. m ~.}e^{n}_{\nu}), $$ i.e. the spin connection is $$ \omega_{n.~ ~ \mu }^{~.m ~.} = - e^{\nu}_{n}(e^{m}_{\nu , \mu} +\Gamma^{m}_{\nu \mu}). $$ From this perspective, one obtains: \section{Absolute derivatives for spaces of generalized spin connection} If we replace the Christoffel symbol by non-symmetric affine connections, we find the following set of absolute derivatives: $$ D^{+}_{\mu}e^{m}_{\nu} \edf e^{m}_{\nu , \mu} - \Gamma^{\alpha}_{\nu \mu} e^{m}_{\alpha} + \omega_{\mu ~.~ n}^{~. m ~.}e^{n}_{\nu}, $$ $$ D^{o}_{\mu}e^{m}_{\nu} \edf e^{m}_{\nu , \mu} - \Gamma^{\alpha}_{( \nu \mu )}e^{m}_{\alpha} + \omega_{\mu ~.~ n}^{~. m ~.}e^{n}_{\nu}, $$ and $$ D^{-}_{\mu}e^{m}_{\nu} \edf e^{m}_{\nu , \mu} - \Gamma^{\alpha}_{\mu \nu }e^{m}_{\alpha} + \omega_{\mu ~.~ n}^{~. m ~.}e^{n}_{\nu}. $$ Thus, the spinning motion can be obtained from the modified Lagrangian: $$ L = g_{\mu \nu} P^{\mu} \frac{D \Psi^{\nu}}{DS}+ \Lambda_{\mu \nu \rho}U^{\mu} U^{\nu} \Psi^{\rho}+ S_{\mu \nu} \frac{D \Psi^{\mu \nu}}{DS} + (P_{\mu}U_{\nu}- P_{\nu}U_{\mu})\Psi^{\mu \nu}. $$ Taking the variation with respect to $\Psi^{\rho}$ and $\Psi^{\mu \nu}$, we obtain $$ \frac{D P^{\alpha}}{DS} = - \Lambda_{\mu \nu}^{~..~ \alpha}U^{\mu}U^{\nu} $$ and $$ \frac{D S^{\alpha \beta}}{DS} = (P^{\alpha}U^{\beta} - P^{\beta}U^{\alpha}), $$ which can be reduced to $$ \frac{D S^{a b}}{DS} = (P^{a}U^{b} - P^{b}U^{a}). $$ \section{Spinning Equations of The New Class of Path Equation} In the present work we are going to extend the formalism of constructing a new class of path equations, which was published before [...]
motivated by the following points: In the context of the Generalized Field Theory (GFT), constructed in the AP-geometry by Mikhail and Wanas [9], the vector $C_{\mu}$ defined by (2.22) represents the generalized electromagnetic potential (using a certain system of units). The skew-symmetric part of the field equations of this theory can be written as $$ F_{\mu \nu} = C_{\mu , \nu} - C_{\nu , \mu} , \eqno{(3.4)} $$ where $F_{\mu \nu}$ is a second order skew-symmetric tensor, defined in the AP-geometry [9], playing the role of the electromagnetic field strength tensor. Now $C_{\mu}$ has the physical meaning mentioned above and is a part of the geometric structure. \subsection{The First Path Equation (using $\Gamma^{\alpha}_{. \mu \nu}$)} Using the non-symmetric connection given by ..... or ...., the Lagrangian (3.5) can be written in the form: $$ L^{+} \edf g_{\mu \nu} (V^{\mu} + \hat{C}^{\mu})\frac{D \xi^{\nu}}{Ds^{+}} + f_{(+)\mu}\xi^{\mu} \eqno{(3.6)} $$ where $V^{\mu}$ is the tangent of the resulting path, $\xi^{\nu}$ is the vector giving the deviation from this path, and $s^{+}$ is the evolution parameter along the path. The derivative of the deviation vector is given by $$ \frac{D \xi^{\nu}}{Ds^{+}} \edf \dot{\xi}^{\nu} + \xi^{\alpha}\Gamma^{\nu}_{.\alpha \beta} V^{\beta} \eqno{(3.7)} $$ where $$\dot{\xi^{\nu}} \edf \frac{d \xi^{\nu} }{ds^{+}}. \eqno{(3.8)} $$ Now, substituting (3.9) and (3.10) into the Euler-Lagrange equation, $$ \frac{d}{ds^{+}}\frac{\partial L^{+}}{\partial \dot{\xi}^{\sigma}}~ -~ \frac{\partial L^{+}}{\partial{\xi}^{\sigma}} =~ 0 , $$ we get, after necessary reductions, the path equation $$ {\frac{dV^\mu}{ds^+}} + \cs{\alpha}{\beta}{\mu} V^\alpha V^\beta = - \Lambda^{~ ~ ~ ~ \mu}_{(\alpha \beta) .} ~~V^\alpha V^\beta - \hat{F}^{\mu}_{.
\nu}V^{\nu} - g^{\mu \delta}\hat{C}_{ \nu ; \delta}V^{\nu}- \Lambda^{~ ~ ~ ~ \mu}_{\alpha \beta .} ~~\hat{C}^{\alpha} V^\beta + f_{(+)}^{\mu} , \eqno{(3.11)} $$ where $$ \hat{F}_{ \mu \nu} \edf \hat{C}_{\mu, \nu} - \hat{C}_{\nu , \mu}. $$ \subsection{The Second Path Equation (using $\Gamma^{\alpha}_{.( \mu \nu )}$)} In this subsection we use the symmetric connection given by (2.13) to evaluate the Lagrangian (3.5), which can be written as $$ L^{o} \edf g_{\mu \nu}(W^{\mu}+ \hat{C}^{\mu}) \frac{D \zeta^{ \nu}}{D s^{o}} \eqno{(3.12)} $$ where $W^{\mu}$ is the tangent to the resulting path, $\zeta^{\nu}$ is the deviation vector, and $s^{o}$ is the parameter varying along the path. Similarly to subsection 3.1, we can write the definitions $$ \frac{D \zeta^{\nu}}{Ds^{o}} \edf \dot{\zeta}^{\nu} + \zeta^{\alpha}\Gamma^{\nu}_{.(\alpha \beta)} W^{\beta} \eqno{(3.13)} $$ and $$\dot{\zeta}^{\nu} \edf \frac{d \zeta^{\nu}}{ds^{o}}.$$ Evaluating the variational derivatives of the Lagrangian (3.12) and substituting into the Euler-Lagrange equation, as done in subsection 3.1, we get, after some relatively long but straightforward calculations, the second path equation $$ {\frac{dW^\mu}{ds^o}} + \cs{\alpha}{\beta}{\mu} W^\alpha W^\beta = -\frac{1}{2} \Lambda^{~ ~ ~ ~ \mu}_{(\alpha \beta) .} ~~W^\alpha W^\beta - \hat{F}^{\mu}_{. \nu}W^{\nu} - g^{\mu \delta} \hat{C}_{\nu ; \delta}W^{\nu} - \frac{1}{2} \Lambda^{~ ~ ~ ~ \mu}_{\alpha \beta .} ~~\hat{C}^{\alpha} W^\beta . \eqno{(3.14)} $$ \subsection{The Third Path Equation (using $\tilde\Gamma^{\alpha}_{.\mu \nu}$)} For this path equation, the Lagrangian (3.5) can be written in the form $$ L^{-} \edf g_{\mu \nu} (J^{\mu} + \hat{C}^{\mu}) \frac{D \eta^{\nu}}{Ds^{-}} \eqno{(3.15)} $$ where $J^{\mu}$ is the tangent to the path, $\eta^{\nu}$ is the deviation vector, and $s^{-}$ is the evolution parameter characterizing the path.
Similarly to the definitions given in the previous subsections, we write $$ \frac{D \eta^{\nu}}{Ds^{-}} \edf \dot{\eta}^{\nu} + \eta^{\alpha}\tilde{\Gamma}^{\nu}_{. \alpha \beta} J^{\beta} , \eqno{(3.16)} $$ where $$ \dot{\eta}^{\nu} \edf \frac{d \eta^{\nu}}{ds^{-}}. \eqno{(3.17)} $$ Performing the necessary variational calculations, as done in the previous subsections, and substituting into the Euler-Lagrange equation, we get, after some rearrangements, the third path equation $$ {\frac{dJ^\mu}{ds^{-}}} + \cs{\alpha}{\beta}{\mu} J^\alpha J^\beta = - \hat{F}^{ \mu}_{. \nu}J^{\nu} - g^{\mu \delta}\hat{C}_{\nu ; \delta}J^{\nu}. \eqno{(3.18)} $$ \section{Physical Meaning of the Geometric Terms} The set of path equations (3.11), (3.14) and (3.18) comprises a new class of path equations, which can be written in the general form $$ {\frac{dZ^\mu}{d\tau}} + a_{1}~ \cs{\alpha}{\beta}{\mu} Z^\alpha Z^\beta = -~a_{2}~ \Lambda^{~ ~ ~ ~ \mu}_{(\alpha \beta) .} ~~Z^\alpha Z^\beta -~a_{3}~ \hat{F}^{\mu}_{. \nu}Z^{\nu} -~a_{4}~ g^{\mu \delta}\hat{C}_{\nu ; \delta}Z^{\nu} -~ a_{5}~ \Lambda^{~ ~ ~ ~ \mu}_{\alpha \beta .} ~~\hat{C}^{\alpha} Z^\beta , \eqno{(4.1)} $$ where $Z^{\mu}$ is the tangent of the resulting path, $\tau$ is the evolution parameter of the path, and $a_{1}, a_{2}, a_{3}, a_{4}$, and $a_{5}$ are numerical parameters whose values are listed in the following table.
\begin{center} {Table 1: Values of the parameters of (4.1) corresponding to connection used} \end{center} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Connection used&$a_{1}$&$a_{2}$& $a_{3}$& $a_{4}$ & $a_{5}$ & Equation \\ \hline & & & & & & \\ ${\Gamma}^{\alpha}_{.~\mu \nu} $ & 1& 1& 1& 1 & 1& (3.11) \\ & & & & & & \\ \hline & & & & & & \\ ${\Gamma}^{\alpha}_{.~ (\mu \nu )}$ &1 & $\frac{1}{2}$ & 1 & 1 & $\frac{1}{2}$ & (3.14) \\ & & & & & & \\ \hline & & & & & & \\ $\tilde{\Gamma}^{\alpha}_{.~\mu \nu } $ & 1& 0 & 1 & 1& 0 & (3.18) \\ & & & & & & \\ \hline \end{tabular} \end{center} In the context of the scheme of geometerization, \section{Absolute derivatives for spaces of generalized spin connection} The problem of geometization may give rise to embody some phyiscal quantities seemingly un able to be If we replace the Christoffel symbol by non-symmetric affine connections, we find the following set of absolute derivatives: $$ D^{+}_{\mu}e^{m}_{\nu} \edf e^{m}_{\nu , \mu} - \Gamma^{\alpha}_{\nu \mu} e^{m}_{\alpha} + \omega_{\mu ~.~ n}^{~. m ~.}e^{n}_{\nu}, $$ $$ D^{o}_{\mu}e^{m}_{\nu} \edf e^{m}_{\nu , \mu} - \Gamma^{\alpha}_{( \nu \mu )}e^{m}_{\alpha} + \omega_{\mu ~.~ n}^{~. m ~.}e^{n}_{\nu}, $$ and $$ D^{-}_{\mu}e^{m}_{\nu} \edf e^{m}_{\nu , \mu} - \Gamma^{\alpha}_{\mu \nu }e^{m}_{\alpha} + \omega_{\mu ~.~ n}^{~. 
m ~.}e^{n}_{\nu}, $$ Thus for spinning motion can be obtained by the modified Lagrangian: $$ L = g_{\mu \nu} P^{\mu} \frac{D \Psi^{\nu}}{DS}+ \Lambda_{\mu \nu \rho}U^{\mu} U^{\nu} \Psi^{\rho}+ S_{\mu \nu} \frac{D \Psi^{\mu \nu}}{DS} + (P_{\mu}U_{\nu}- P_{\nu}U_{\mu})\Psi^{\mu \nu} $$ $$ \frac{D P^{\alpha}}{DS} = - \Lambda_{\mu \nu}^{~..~ \alpha}U^{\mu}U^{\nu} $$ and $$ \frac{D S^{\alpha \beta}}{DS} = (P^{\alpha}U^{\beta} - P^{\beta}U^{\alpha}), $$ which can be reduced to $$ \frac{D S^{a b}}{DS} = (P^{a}U^{b} - P^{b}U^{a}) $$ \section{Spinning Equations of The New Class of Path Equation} In the present work we are going to to extend the formalism of constructing a new class of path equations, which was published before [...] motivated by the following points: In the context of the Generalized Field Theory (GFT), constructed in the AP-geometry by Mikhail and Wanas [9], the vector $C_{\mu}$ defined by (2.22), represents the electromagnetic generalized potential (using certain system of units). The skew-symmetric part of the field equations of this theory can be written as, $$ F_{\mu \nu} = C_{\mu , \nu} - C_{\nu , \mu} , \eqno{(3.4)} $$ where $F_{\mu \nu}$ is a second order skew-symmetric tensor, defined in the AP-geometry [9], playing the role of the electromagnetic field strength tensor. Now $C_{\mu}$ has the physical meaning mentioned above and is a part of the geometric structure. \subsection{The First Path Equation (using $\Gamma^{\alpha}_{. \mu \nu}$)} Using the non-symmetric connection given by ..... or ...., the Lagrangian (3.5) can be written in the form: $$ L^{+} \edf g_{\mu \nu} (V^{\mu} + \hat{C}^{\mu})\frac{D \xi^{\nu}}{Ds^{+}} + f_{(+)\mu}\xi^{\mu} \eqno{(3.6)} $$ where $V^{\mu}$ is the tangent of the resulting path, $\xi^{\nu}$ is the vector giving the deviation from this path and $s^{+}$ is the evolution parameter along the path. 
The derivative of the deviation vector is given by, $$ \frac{D \xi^{\nu}}{Ds^{+}} \edf \dot{\xi}^{\nu} + \xi^{\alpha}\Gamma^{\nu}_{.\alpha \beta} V^{\beta} \eqno{(3.7)} $$ where, $$\dot{\xi^{\nu}} \edf \frac{d \xi^{\nu} }{ds^{+}}. \eqno{(3.8)} $$ Now, we have Substituting (3.9) and (3.10) into the Euler-Lagrange equation, $$ \frac{d}{ds^{+}}\frac{\partial L^{+}}{\partial \dot{\xi}^{\sigma}}~ -~ \frac{\partial L^{+}}{\partial{\xi}^{\sigma}} =~ 0 , $$ we get, after necessary reductions, the path equation $$ {\frac{dV^\mu}{ds^+}} + \cs{\alpha}{\beta}{\mu} V^\alpha V^\beta = - \Lambda^{~ ~ ~ ~ \mu}_{(\alpha \beta) .} ~~V^\alpha V^\beta - \hat{F}^{\mu}_{. \nu}V^{\nu} - g^{\mu \delta}\hat{C}_{ \nu ; \delta}V^{\nu}- \Lambda^{~ ~ ~ ~ \mu}_{\alpha \beta .} ~~\hat{C}_{\alpha} V^\beta + f_{(+)}^{\mu} , \eqno{(3.11)} $$ where $$ \hat{F}_{ \mu \nu} \edf \hat{C}_{\mu, \nu} - \hat{C}_{\mu , \nu}. $$ \subsection{The Second Path Equation (using $\Gamma^{\alpha}_{.( \mu \nu )}$)} We use in this subsection the symmetric connection given by (2.13) to evaluate the Lagrangian (3.5), which can be written as, $$ L^{o} \edf g_{\mu \nu}(W^{\mu}+ \hat{C}^{\mu}) \frac{D \zeta^{ \nu}}{D s^{o}} \eqno{(3.12)} $$ where $W^{\mu}$ is the tangent to the resulting path, $\zeta^{\nu}$ is the deviation vector and $s^{o}$ is the parameter varying along the path. Similar to subsection 3.1 we can write the definitions $$ \frac{D \zeta^{\nu}}{Ds^{o}} \edf \dot{\zeta}^{\nu} + \zeta^{\alpha}\Gamma^{\nu}_{.(\alpha \beta)} W^{\beta} \eqno{(3.13)} $$ and $$\dot{\zeta}^{\nu} \edf \frac{d \zeta^{\nu}}{ds^{o}}.$$ Evaluating the variational derivatives of the Lagrangian (3.12) and substituting in the Euler-Lagrange equation, as done in section 3.1, we get after some, relatively long but straightforward, calculations the second path equation $$ {\frac{dW^\mu}{ds^o}} + \cs{\alpha}{\beta}{\mu} W^\alpha W^\beta = -\frac{1}{2} \Lambda^{~ ~ ~ ~ \mu}_{(\alpha \beta) .} ~~W^\alpha W^\beta - \hat{F}^{\mu}_{. 
\nu}W^{\nu} - g^{\mu \delta} \hat{C}_{\nu ; \delta}W^{\nu} - \frac{1}{2} \Lambda^{~ ~ ~ ~ \mu}_{\alpha \beta .} ~~\hat{C}^{\alpha} W^\beta . \eqno{(3.14)} $$ \subsection{The Third Path Equation (using $\tilde\Gamma^{\alpha}_{.\mu \nu}$)} For this path equation, the Lagrangian (3.5) can be written in the form, $$ L^{-} \edf g_{\mu \nu} (J^{\mu} + \hat{C}^{\mu}) \frac{D \eta^{\nu}}{Ds^{-}} \eqno{(3.15)} $$ where $J^{\mu}$ is the tangent to the path, $\eta^{\nu}$ is the deviation vector and $s^{-}$ is the evolution parameter characterizing the path. Similar to the definitions given in the previous subsections we write, $$ \frac{D \eta^{\nu}}{Ds^{-}} \edf \dot{\eta}^{\nu} + \eta^{\alpha}\tilde{\Gamma}^{\nu}_{. \alpha \beta} J^{\beta} , \eqno{(3.16)} $$ where, $$ \dot{\eta}^{\nu} \edf \frac{d \eta^{\nu}}{ds^{-}}. \eqno{(3.17)} $$ Performing the necessary variational calculations, as done in the previous subsections, and substituting in the Euler-Lagrange equation, we get, after some rearrangements, the third path equation $$ {\frac{dJ^\mu}{ds^-}} + \cs{\alpha}{\beta}{\mu} J^\alpha J^\beta = - \hat{F}^{ \mu}_{. \nu}J^{\nu} - g^{\mu \delta}\hat{C}_{\nu ; \delta}J^{\nu}. \eqno{(3.18)} $$ \section{Physical Meaning of the Geometric Terms} The set of path equations (3.11), (3.14) and (3.18) comprises a new class of path equations which can be written in the general form $$ {\frac{dZ^\mu}{d\tau}} + a_{1}~ \cs{\alpha}{\beta}{\mu} Z^\alpha Z^\beta = -~a_{2}~ \Lambda^{~ ~ ~ ~ \mu}_{(\alpha \beta) .} ~~Z^\alpha Z^\beta -~a_{3}~ \hat{F}^{\mu}_{. \nu}Z^{\nu} -~a_{4}~ g^{\mu \delta}\hat{C}_{\nu ; \delta}Z^{\nu} -~ a_{5}~ \Lambda^{~ ~ ~ ~ \mu}_{\alpha \beta .} ~~\hat{C}^{\alpha} Z^\beta , \eqno{(4.1)} $$ where $Z^{\mu}$ is the tangent of the resulting path, $\tau$ is the evolution parameter of the path, and $a_{1}, a_{2}, a_{3}, a_{4},$ and $a_{5}$ are numerical parameters whose values are listed in the following table.
\begin{center} {Table 1: Values of the parameters of (4.1) corresponding to the connection used} \end{center} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Connection used&$a_{1}$&$a_{2}$& $a_{3}$& $a_{4}$ & $a_{5}$ & Equation \\ \hline & & & & & & \\ ${\Gamma}^{\alpha}_{.~\mu \nu} $ & 1& 1& 1& 1 & 1& (3.11) \\ & & & & & & \\ \hline & & & & & & \\ ${\Gamma}^{\alpha}_{.~ (\mu \nu )}$ &1 & $\frac{1}{2}$ & 1 & 1 & $\frac{1}{2}$ & (3.14) \\ & & & & & & \\ \hline & & & & & & \\ $\tilde{\Gamma}^{\alpha}_{.~\mu \nu } $ & 1& 0 & 1 & 1& 0 & (3.18) \\ & & & & & & \\ \hline \end{tabular} \end{center} In the context of the scheme of geometrization, if the general equation (4.1) is used to study trajectories of charged test particles, we can attribute the following physical meanings to some of its terms: \\ 1- The term whose coefficient is $(a_{1})$, $\cs{\alpha}{\nu}{\mu} Z^{\alpha}Z^{\nu}$, represents, as usual, the effects of gravitation on the motion of the particle. \\ 2- The term whose coefficient is $(a_{2})$, $\Lambda_{( \alpha \nu) .}^{~~~\mu} Z^{\alpha} Z^{\nu}$, is suggested [2] to represent a type of interaction between the torsion of space-time and the quantum spin of the moving particle. This interaction will affect the motion of the particle. This term is quantized, as is clear from the values of $(a_{2})$ in Table 1. There is some experimental [3] and observational [4] evidence for the existence of this interaction. \\ 3- The term whose coefficient is $(a_{3})$, $ \hat{F}^{\mu}_{. \beta}Z^{\beta} $, represents the effect of the electromagnetic field on the motion of a charged particle, i.e.\ the Lorentz force, as is obvious from the comparison with the R.H.S. of (3.2). The coefficient of this term, as is clear from Table 1, does not vary from one equation to another. So, this effect is not quantized.
\\ 4- The term whose coefficient is $(a_{5})$, $ \Lambda_{ \alpha \nu .}^{~~~\mu} \hat{C}^{\alpha} Z^{\nu}$, represents an interaction between the electromagnetic potential and the torsion of space-time. This term is also quantized, as is clear from Table 1. It represents a direct effect of the electromagnetic potential on the motion of a charged particle, similar to the Aharonov-Bohm (AB) effect [14], [15], with one main difference: the influence of the torsion on this term. This will be discussed in the following section. \section{Discussion and Concluding Remarks} In the present work, we derived a new class of path equations, in AP-geometry, using the Bazanski method. All terms appearing in this class are parts of the geometric structure used. The general equation representing this class can be used, qualitatively, to explore different interactions affecting trajectories of charged particles in a combined gravitational and electromagnetic field. If the terms representing the electromagnetic effects on the trajectory, $\hat{F}_{\mu \nu}$ and $\hat{C}_{\alpha}$, are switched off, then the new class reduces to the old set of path equations [1] with the spin-torsion term. If, further, we neglect this term, we get the ordinary geodesic equation. Two of the terms on the R.H.S. of the general equation (4.1) are naturally quantized in Planck's sense (terms with jumping coefficients). One of these terms is the spin-torsion term with the coefficient $(a_{2})$ and the other is the term giving rise to the AB-type effect (the term with coefficient $(a_{5})$; see Table 1). All other terms of this equation, including the Lorentz force term, are not quantized in the above-mentioned sense. It is clear from this equation that the appearance of the quantum properties is closely connected to the explicit appearance of the torsion in the terms concerned.
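The jumping-coefficient pattern of Table 1 can be summarised in a short sketch (Python is used here purely for illustration; the coefficient values are exactly those listed in Table 1, and "quantized" is used in the jumping-coefficient sense defined above):

```python
# Coefficients (a1..a5) of the general path equation (4.1) for each of the
# three connections, as listed in Table 1.
from fractions import Fraction

COEFFS = {
    "Gamma":       (1, 1, 1, 1, 1),                             # eq. (3.11)
    "Gamma_sym":   (1, Fraction(1, 2), 1, 1, Fraction(1, 2)),   # eq. (3.14)
    "Gamma_tilde": (1, 0, 1, 1, 0),                             # eq. (3.18)
}

def is_quantized(term_index):
    """A term is 'quantized' (in the jumping-coefficient sense of the text)
    when its coefficient differs between the three path equations."""
    values = {coeffs[term_index] for coeffs in COEFFS.values()}
    return len(values) > 1

# The spin-torsion term (a2) and the AB-type term (a5) jump through the
# values 1, 1/2, 0; the gravitational (a1), Lorentz-force (a3) and
# potential (a4) terms keep the coefficient 1 in all three equations.
quantized = [i + 1 for i in range(5) if is_quantized(i)]
print(quantized)  # -> [2, 5]
```

This makes explicit that only the torsion-dependent terms carry jumping coefficients, in agreement with the discussion above.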
Since Riemannian geometry is a torsion-free geometry, such quantum properties did not appear in any field theory constructed in this geometry. We would like to point out that we are not claiming to be doing Quantum Mechanics or Quantum Field Theory. Rather, we are dealing with a property of AP-geometry, namely that some terms of the equations have jumping coefficients, which reflects some quantum features in the sense of Planck's quantization. It is well known in the literature that the AB-effect is a quantum phenomenon which cannot be accounted for using classical electrodynamics. The scheme followed in the present work is a purely geometric scheme, which is considered by many authors as a classical scheme. Now, if the term whose coefficient is $a_{5}$ is interpreted to give rise to the AB-effect, then one can draw the following conclusion: {\it {Either the AB-effect is a classical phenomenon and can be accounted for using a classical scheme, or the type of geometry used, the AP-geometry, admits some quantum properties}}.
\section{Introduction} Planetary and preplanetary nebulae (PNe, pPNe) often show a remarkable axial symmetry and fast bipolar outflows with velocities around 100 \mbox{km\,s$^{-1}$}. On the contrary, asymptotic giant branch (AGB) circumstellar envelopes, their immediate precursors, are spherical, at least on large scales, and expand isotropically at moderate velocities (10 -- 20 \mbox{km\,s$^{-1}$}). The development of such axial structure and dynamics remains an open question. It has been proposed to be associated with rotating disks (e.g., Soker 2001, Bujarrabal et al.\ 2001, Balick \& Frank 2002, S\'anchez Contreras et al.\ 2002), from which material would fall onto the star or a companion during early post-AGB phases, powering very fast and collimated stellar jets. In principle, the material ejected during the AGB phases does not have enough angular momentum to form Keplerian disks, which should only appear around binary stellar systems, as these systems have the necessary angular momentum stored in their orbital movement. Other mechanisms to explain bipolar post-AGB nebulae involve an anisotropic sudden ejection of stellar gas by a very late AGB (or very early post-AGB) star during a common-envelope phase (Alcolea et al.\ 2007, De Marco et al.\ 2009, 2011, Iaconi et al.\ 2017). However, our theoretical understanding of these phenomena is still poor. The detailed observation of Keplerian disks around post-AGB stars, including their dynamics, is not straightforward, since it requires high angular and spectral resolutions. To date, disks have been well mapped only in three objects, the Red Rectangle, AC Her, and IW Car, by means of interferometric mm-wave maps of CO lines (Bujarrabal et al.\ 2013b, 2015, 2017). These sources belong to a class of binary post-AGB stars surrounded by low-mass nebulae that show independent evidence of the presence of disks (e.g., Van Winckel 2003, de Ruyter et al.\ 2006, Gezer et al.\ 2015, Hillen et al.\ 2016, 2017).
Notably, they are characterized by spectral energy distributions (SEDs) with a significant near-infrared (NIR) excess, revealing the existence of hot dust close to the stellar system. The very compact nature of the NIR emission has been confirmed by interferometric IR measurements (e.g.,\ Hillen et al.\ 2017). In addition, the IR spectra reveal the presence of highly processed grains, which is indicative of the longevity of the disks. Single-dish observations of \mbox{$^{12}$CO}\ and \mbox{$^{13}$CO}\ emission in these NIR-excess post-AGB stars (including the Red Rectangle, AC Her, and IW Car) systematically yielded characteristic line profiles, which are exactly those expected from relatively extended Keplerian disks (Bujarrabal et al.\ 2013a, Bujarrabal \& Alcolea 2013). A slowly expanding component was also proposed in most nebulae, although its shape and dynamics could not be well identified from those data. Indeed, ALMA maps of CO lines in the Red Rectangle and IW Car show a bipolar low-velocity outflow, which is particularly noticeable in lines with relatively high opacity and excitation; this bipolar outflow was deduced to be very probably formed of gas extracted from the disk, given its structure and kinematics. The CO emission image of 89 Her, another NIR-excess post-AGB star, is dominated by an extended hourglass-like nebula in slow expansion (Bujarrabal et al.\ 2007). Although rotation was not actually resolved in 89 Her, a small disk could be confined to the prominent central clump. Recent NOEMA maps of the similar objects IRAS\,19125+0343 and R Sct (G\'omez-Garrido et al., in preparation) also show an expanding nebula and a possible central disk that remains unresolved. On the other hand, no gas in expansion was found in the mm-wave maps of AC Her, but we cannot rule out a diffuse outflow, since the CO interferometric data show a significant flux loss and no sub-mm observations exist.
We stress that no signs of Keplerian disks have been found in other kinds of post-AGB nebulae, in particular in the well-observed high-mass pPNe and young PNe. We cannot exclude that confusion with the strong emission from their expanding nebulae prevents in some way the detection of emission from small central disks, but even high-resolution observations of the very inner nebular regions have yielded no sign of disks to date (e.g.,\ Alcolea et al.\ 2007, Olofsson et al.\ 2015, Santander-Garc\'{\i}a et al.\ 2017). The presence of binary systems in the center of high-mass bipolar pPNe and PNe is also debated. Some well-known nebulae, such as OH\,231.8+4.2, M\,2-9, and several evolved PNe, harbor binary systems (see, e.g.,\ S\'anchez Contreras et al.\ 2004, Castro-Carrizo et al.\ 2012, Hillwig et al.\ 2016). However, long-term radial velocity studies have yielded negative results in a number of post-AGB sources; for example, Hrivnak et al.\ (2017) found that only one object, out of seven well-studied sources, was probably a wide stellar system. These findings imply that there are significant constraints on binarity in these sources. The evolution of NIR-excess post-AGB objects is not well known (e.g.,\ Van Winckel et al.\ 2009, De Marco 2014) and could be significantly different from that of well-studied (pre)planetary nebulae. Their nebular mass, including rotating and expanding gas, is low, $<$ 0.1 \mbox{$M_{\mbox{\sun}}$}, often $\sim$ 0.01 \mbox{$M_{\mbox{\sun}}$}\ (Bujarrabal et al.\ 2013a). This suggests that they are not ejecting sufficient mass to form a high-mass PN (i.e.,\ a nebula containing most of the initial mass).
However, we point out that in many well-known PNe and pPNe the total detected nebular mass is smaller than $\sim$ 0.1 \mbox{$M_{\mbox{\sun}}$}, including ionized gas, molecular gas, or PDR-like components; see, e.g.,\ the compilations of mass values by Pottasch (1984), Huggins \& Healey (1989), Huggins et al.\ (1996), S\'anchez-Contreras et al.\ (2012), and Castro-Carrizo et al.\ (2001). It is also probable that the interaction of the star with the orbiting disk, including reaccretion of material, slows down the post-AGB evolution (e.g.,\ Van Winckel et al.\ 2009). Indeed, all these NIR-excess stars still show relatively low stellar temperatures and exhibit spectral types usually in the range F-K. It is obvious that the conspicuous nebulae detected in some of these objects could form a low-mass PN, but the star would not necessarily become hot enough to ionize the surrounding gas before the nebula, due to its expansion, becomes too diffuse to be detectable. Most remarkably, disks are observed in binary post-AGB stars, which often show orbits that are too small to accommodate an AGB star (Van Winckel et al.\ 2009, Manick et al.\ 2017). In the best-studied NIR-excess post-AGB star, the Red Rectangle (Bujarrabal et al.\ 2016), the total angular momentum of the disk is not negligible in comparison to that of the binary at present. If, as expected, the disk angular momentum originates from the stellar system during a previous phase of strong interaction, a comparison of the disk and binary momentum values at present implies that a significant decrease of the distance between the stars has occurred (by a factor \raisebox{-.4ex}{$\stackrel{\sf >}{\scriptstyle\sf \sim}$}\ 2) since disk formation. We reach a similar conclusion from our observations; see the detailed discussion in Sect.\ 4. These results suggest that the binary orbit was wider than the size of an AGB star in the past, but not much wider, which helps to explain the transfer of angular momentum.
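As a rough order-of-magnitude illustration of this comparison, one can estimate the disk angular momentum by treating the disk mass as a Keplerian ring and compare it with the orbital angular momentum of a circular binary. The sketch below uses illustrative values only (a 0.01 \mbox{$M_{\mbox{\sun}}$}\ disk of 1000 AU radius around a 1.8 \mbox{$M_{\mbox{\sun}}$}\ system with a 0.5 + 1.3 \mbox{$M_{\mbox{\sun}}$}\ pair at 1.5 AU separation); these numbers are assumptions of the order of the values quoted in this paper, not fitted results:

```python
import math

G    = 6.674e-8          # gravitational constant, cgs
MSUN = 1.989e33          # solar mass, g
AU   = 1.496e13          # astronomical unit, cm

def l_disk(m_disk, m_star, r):
    """Rough disk angular momentum, treating the disk mass as a ring of
    radius r in Keplerian rotation: L ~ M_d * sqrt(G M_* r)."""
    return m_disk * math.sqrt(G * m_star * r)

def l_orbit(m1, m2, a):
    """Orbital angular momentum of a circular binary of separation a."""
    mu = m1 * m2 / (m1 + m2)          # reduced mass
    return mu * math.sqrt(G * (m1 + m2) * a)

# Illustrative (assumed) numbers, NOT fitted values from this paper:
Ld = l_disk(0.01 * MSUN, 1.8 * MSUN, 1000 * AU)
Lo = l_orbit(0.5 * MSUN, 1.3 * MSUN, 1.5 * AU)
print(Ld / Lo)  # -> ~0.7: disk momentum comparable to orbital momentum
```

With such numbers the two angular momenta come out within a factor of a few of each other, consistent with the statement that the disk momentum is not negligible compared to that of the binary.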
A better knowledge of the main properties of the disks is therefore imperative to understand the orbits of evolved binary stellar systems. Finally, we recall that our post-AGB stars are not the only evolved stars surrounded by Keplerian disks. The existence of disks orbiting white dwarfs (WDs) has been known for more than three decades (e.g.,\ Zuckerman \& Becklin 1987, Bil\'{\i}kov\'a et al.\ 2012). Some of these stars are also surrounded by prominent PNe, which are much more extended than the disks. It has been argued (see Clayton et al.\ 2014) that the Keplerian disks found in PNe are remnants of disks formed from circumstellar material ejected by the star in AGB or early post-AGB phases; this process is very similar to that probably responsible for the disks we are studying. The disks around very old WDs can be different; these disks possibly originate from the disruption of asteroids or analogs. Moreover, the presence of Keplerian rotation in a certain class of AGB stars, that is, semiregular variables that show aspherical shells in anisotropic slow expansion, has also been proposed. A relatively small disk has been well identified in one of these objects, L$_2$ Pup (Homan et al.\ 2017). Our sources could therefore represent an evolutionary link between disks around AGB stars and disks around WDs (at least those surrounded by PNe), through the phase of prominent post-AGB disks. If this relation really holds, the disks surrounding WDs, whose mass is very low, would just contain a small fraction of the previous post-AGB disks, after a long process of disk evaporation. We think that the study of disks orbiting NIR-excess post-AGB stars may be fundamental to the understanding of the formation of disks in various phases of the late evolution of intermediate-mass binary stars. This phenomenon can be crucial to understanding the late evolution of binary systems and the shaping of PNe.
We present ALMA maps of \mbox{IRAS\,08544$-$4431}\ in CO emission that clearly show both rotating and expanding gas. \mbox{IRAS\,08544$-$4431}\ is a low-amplitude pulsator that belongs to the class of NIR-excess post-AGB stars mentioned before (Maas et al.\ 2003, de Ruyter et al.\ 2006, Bujarrabal et al.\ 2013a, Hillen et al.\ 2016). The hot-dust component of \mbox{IRAS\,08544$-$4431}\ has been observed in the IR using the VLTI. The inner rim of the disk was imaged, showing a diameter of about 15 mas and suggesting an inclination with respect to the plane of the sky of about 20$^\circ$. Our CO data confirm the disk-like structure and the value of the inclination, although the size of the CO-emitting region is much larger. \mbox{IRAS\,08544$-$4431}\ is a double stellar system; see Van Winckel et al.\ (2009). Assuming that the inclination of the orbit is that of the disk and a high-luminosity post-AGB primary with $\sim$ 0.5 \mbox{$M_{\mbox{\sun}}$}\ at present, one deduces a $\sim$ 1$-$2-AU-wide orbit and a secondary with about 1.3 \mbox{$M_{\mbox{\sun}}$}; but we stress that the mass of the individual stars is not well determined and strongly depends on the orbit inclination (see further discussion in Sect.\ 3.1). These authors suggested a distance $D$ $\sim$ 550 pc for \mbox{IRAS\,08544$-$4431}. However, this value is based on an assumed standard luminosity of 3000 \mbox{$L_{\mbox{\sun}}$}\ and is therefore very tentative. The GAIA parallax, 0.86 $\pm$ 0.6 mas, is not very accurate, and we recall that the binary nature of the star can affect the parallax measurements in a complex way (Acke et al.\ 2013). As we see later in this work, $D$ $\sim$ 1100 pc is more compatible with our estimates of the total stellar mass, and we have adopted this value (see Sect.\ 3.1). We discuss in detail the effects of the distance uncertainty on our modeling, in particular to allow a comparison with previous results.
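As a consistency check of the quoted orbital parameters, Kepler's third law relates the $\sim$ 1$-$2 AU separation and the $\sim$ 1.8 \mbox{$M_{\mbox{\sun}}$}\ total mass (0.5 + 1.3 \mbox{$M_{\mbox{\sun}}$}) to the orbital period; a minimal sketch:

```python
import math

# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M_tot[Msun].
def period_yr(a_au, m_tot_msun):
    return math.sqrt(a_au**3 / m_tot_msun)

# Quoted values: primary ~0.5 Msun, secondary ~1.3 Msun, orbit ~1-2 AU wide.
m_tot = 0.5 + 1.3
for a in (1.0, 1.5, 2.0):
    print(a, round(period_yr(a, m_tot), 2))
```

For separations of 1 to 2 AU this gives periods of roughly 0.7 to 2 yr, i.e., of the order of a year, consistent with a close post-AGB binary.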
\begin{figure*} \hspace{.3cm} \includegraphics[width=17.cm]{fig12co.jpg} \caption{ALMA maps per velocity channel of \mbox{$^{12}$CO}\ \mbox{$J$=3$-$2}\ emission in \mbox{IRAS\,08544$-$4431}. The continuum emission has been subtracted to better show the distribution of the weak line. The first contour and spacing are 0.02 Jy beam$^{-1}$ (equivalent to 6 K, Rayleigh-Jeans equivalent temperature). The {\em LSR} velocities are indicated in each panel (upper left corner). } \label{} \end{figure*} \begin{figure*} \hspace{.3cm} \includegraphics[width=17cm]{fig13co.jpg} \caption{ALMA maps per velocity channel of \mbox{$^{13}$CO}\ \mbox{$J$=3$-$2}\ emission in \mbox{IRAS\,08544$-$4431}. The continuum has also been subtracted in this figure. The contours are shown as in Fig.\ 1: 0.02, 0.04, ... Jy beam$^{-1}$. The {\em LSR} velocities are indicated in each panel. } \label{} \end{figure*} \begin{figure*} \hspace{.3cm} \includegraphics[width=17.cm]{mod4-12co.jpg} \caption{Synthetic maps per velocity channel from our best-fit model of \mbox{$^{12}$CO}\ \mbox{$J$=3$-$2}\ emission in \mbox{IRAS\,08544$-$4431}. These maps are comparable with Fig.\ 1; all scales and contours are the same as in that figure.} \label{} \end{figure*} \section{Observations} We present maps of \mbox{IRAS\,08544$-$4431}\ in the \mbox{$^{12}$CO}\ and \mbox{$^{13}$CO}\ \mbox{$J$=3$-$2}\ lines ($\lambda$ = 0.8 mm), obtained with the ALMA band 7 receiver. A total of five observing runs were performed: three between August 29 and 30, 2015, and two more on August 25 and September 3, 2016. The source was observed for $\sim$ 37 min in each track. The observations were obtained during ALMA Cycle 2. Thirty-four antennas were used, with baselines ranging from 15 to 2483 m. Data were calibrated with the CASA software package. The quasars J0538-4405, J0922-3959, and J0904-5735 were observed for bandpass, flux, and phase calibrations.
Through a comparison of their fluxes in the different runs, we found significant differences between the 2015 and 2016 observations, which is not exceptional in measurements of quasar continuum flux. The fluxes of the calibrators were found to change, respectively, from 886/372/464 mJy in 2015 to 1356/332/766 mJy in 2016. The flux calibration was, however, very consistent between the observations performed in the same year. Finally, by looking at the final source data, in particular by comparing the source continuum emission obtained from the 2015 and 2016 observations, the flux calibration was judged to be optimal and no additional flux rescaling was applied. After the data calibration, the rest of the analysis was made with the GILDAS software package. First, additional phase self-calibration was performed using the compact continuum source as reference. Image deconvolution was carried out with natural and also robust weighting, which leads to channel maps with 0.19$\times$0.18 arcsec and 0.13$\times$0.10 arcsec (HPBW) resolutions, respectively. Various CLEANing methods (Hogbom and SDI) were also used in the image synthesis to best represent the different emission components. The SDI method is known to be more adapted to represent the most extended emission. The data presented here were obtained with natural weighting and the SDI method. The ALMA backend was set to achieve a spectral resolution of about 0.2 \mbox{km\,s$^{-1}$}; the data were delivered with a channel spacing of 0.11 \mbox{km\,s$^{-1}$}. In the data presentation selected for this paper, the resolution was degraded to about 0.43 \mbox{km\,s$^{-1}$}\ to improve the S/N at high velocities. We kept, however, the highest spectral resolution for the position-velocity diagrams to give a better representation of the velocity structure of the intense Keplerian disk. All velocity values in this paper are given in the Local Standard of Rest ({\em LSR}) frame.
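The contour-to-temperature equivalence quoted in the captions of Figs.\ 1 and 2 (0.02 Jy beam$^{-1}$ $\simeq$ 6 K) can be reproduced with the standard Rayleigh-Jeans conversion, using the natural-weighting beam quoted above and the \mbox{$^{12}$CO}\ \mbox{$J$=3$-$2}\ rest frequency ($\simeq$ 345.8 GHz); a minimal sketch:

```python
import math

C = 2.998e10        # speed of light, cm/s
K_B = 1.381e-16     # Boltzmann constant, erg/K
ARCSEC = math.pi / (180.0 * 3600.0)   # one arcsec in radians

def jybeam_to_K(s_jy, nu_hz, bmaj_as, bmin_as):
    """Rayleigh-Jeans brightness temperature of a flux density in Jy/beam,
    for a Gaussian beam of FWHM bmaj x bmin (arcsec)."""
    omega = math.pi * (bmaj_as * ARCSEC) * (bmin_as * ARCSEC) / (4.0 * math.log(2.0))
    s_cgs = s_jy * 1.0e-23            # 1 Jy = 1e-23 erg s^-1 cm^-2 Hz^-1
    return C**2 * s_cgs / (2.0 * K_B * nu_hz**2 * omega)

# 12CO J=3-2 at ~345.8 GHz, natural-weighting beam 0.19" x 0.18":
print(round(jybeam_to_K(0.02, 345.8e9, 0.19, 0.18), 1))  # -> 6.0
```

The result, $\simeq$ 6 K per 0.02 Jy beam$^{-1}$ contour, matches the value quoted in the figure captions.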
By comparing our data with APEX single-dish profiles (Bujarrabal et al.\ 2013a), we conclude that a small fraction of the flux, $<$ 20\%, was filtered out in the maps of \mbox{$^{12}$CO}\ \mbox{$J$=3$-$2}. We note that such a moderate difference is close to the usual uncertainties in absolute flux calibration. In addition, both the single-dish and the integrated ALMA profile shapes are very similar, confirming a low degree of flux loss. The \mbox{$^{13}$CO}\ maps, with a less extended brightness distribution, are not expected to show a significant flux loss. Dust continuum emission was found to be unresolved and centered at RA 08:56:14.165 ~DEC $-$44:43:10.588, which is also the center of all the maps presented here. Fitting of the continuum data in the uv-plane yielded a total flux of $\sim$ 320 $\pm$ 1 mJy and an estimated size of the emission region smaller than 0.1 arcsec. This continuum level was subtracted from our maps to better show the weakest features, in particular in the \mbox{$^{13}$CO}\ position-velocity diagrams. \section{Modeling of our ALMA maps} The amount and quality of the data available for \mbox{IRAS\,08544$-$4431}\ are moderate, not comparable to those obtained for the best-studied NIR-excess post-AGB object, the Red Rectangle. We also lack information on the nebula in general. Under these conditions, very detailed models, such as those developed for the Red Rectangle, are not sensible, and the uncertainties for several derived parameters are not negligible, as discussed below. Fortunately, the observational results for \mbox{IRAS\,08544$-$4431}\ are not very different from those obtained for the Red Rectangle, AC Her, 89 Her, and IW Car (Bujarrabal et al.\ 2007, 2015, 2016, 2017). In view of this, our models for \mbox{IRAS\,08544$-$4431}\ follow the general patterns found for these objects. We used codes that are very similar to those described in our previous works (Bujarrabal et al.\ 2013b, 2015, 2017, ...).
As for similar objects, all the information we have on this nebula is compatible with the presence of axial symmetry. We assume local thermal equilibrium (LTE) populations for the involved rotational levels. This is a reasonable assumption for low-$J$ CO transitions in the dense material expected in our sources, $n$ \raisebox{-.4ex}{$\stackrel{\sf >}{\scriptstyle\sf \sim}$}\ 10$^4$ cm$^{-3}$, since their Einstein coefficients are then smaller than the typical collisional rates; see further discussions in Bujarrabal \& Alcolea (2013) and Bujarrabal et al.\ (2016). The use of LTE may introduce some uncertainties (see below), but it significantly simplifies the calculations and provides an easier interpretation of the fitting parameters. For each considered model nebula, we assume a shape for the nebula, constant molecular abundances, and distributions of the local velocity dispersion, macroscopic velocity, density, and temperature. With these ingredients it is possible to calculate the absorption and emission coefficients of the two observed lines. These are computed for a high number of projected velocities, according to the actually observed channels, and for a high number of elemental cells occupying the whole nebula; typically around 10$^6$ cells are used in our calculations. The cell density in general oversamples the central regions of the nebula, where the rotation velocity varies faster. We then solved the standard radiative transfer equation in a high number of directions pointing to the telescope (and for the set of projected velocities), taking into account the assumed orientation of the nebula axis with respect to the plane of the sky and to the north. Typically, we solve the transfer equations following 10$^4$ to 10$^5$ rays, which traverse the cells into which the nebula has been divided. We obtain a predicted brightness distribution as a function of the coordinates (right ascension and declination offsets) and of the projected velocity.
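The cell-by-cell transfer described above can be illustrated with a minimal one-dimensional LTE sketch (an illustration only, not the code actually used in this work): each ray is integrated cell by cell assuming constant emission and absorption coefficients within a cell, and in the optically thick limit the emerging intensity approaches the source function:

```python
import numpy as np

def integrate_ray(emis, absn, ds):
    """Solve dI/ds = j - k*I along one ray through a chain of cells,
    starting from zero background intensity.
    emis, absn: per-cell emission/absorption coefficients; ds: cell length."""
    intensity = 0.0
    for j, k in zip(emis, absn):
        tau = k * ds                          # optical depth of this cell
        source = j / k if k > 0 else 0.0      # LTE source function S = j/k
        # Exact solution across a cell with constant j and k:
        intensity = intensity * np.exp(-tau) + source * (1.0 - np.exp(-tau))
    return intensity

# Optically thick limit: the intensity saturates at the source function.
emis = np.full(100, 2.0)
absn = np.full(100, 1.0)
print(integrate_ray(emis, absn, ds=1.0))  # -> converges to j/k = 2.0
```

In the actual modeling this elementary step is repeated for $10^4$ to $10^5$ rays and for every observed velocity channel, with $j$ and $k$ set by the local density, temperature, and projected velocity of each cell.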
This distribution is numerically convolved with the interferometric clean beam and converted to units that are directly comparable to the observational data, Rayleigh-Jeans-equivalent brightness temperatures or Jy/beam. As mentioned, we took into account our results for better-studied sources to select possible nebula models. In any case, a high number of different configurations were analyzed. \subsection{Description of the best-fit model} We finally adopted a model nebula as a good representation of the source, based on its reasonable properties and on a comparison of the predictions with the observational data and other previous results; see the predicted maps per velocity channel and position-velocity diagram in Figs.\ 3 and 5. Graphical representations of the main model parameters are given in Figs.\ 6 and 7. The parameters describing the best-fit model nebula, including those describing the main physical conditions, are given in Table 1. In principle, we give our results for an assumed distance $D$ = 1100 pc. We have seen in Sect.\ 1 that the distance could be significantly lower; the effects of the assumed distance on the derived parameters are discussed in Sect.\ 3.2, in which we consider in particular the case of an alternative distance of 550 pc. \begin{figure} \hspace{-0.cm} \includegraphics[width=8cm]{pa75cen.jpg} \caption{Position-velocity diagrams from our ALMA maps of \mbox{$^{13}$CO}\ \mbox{$J$=3$-$2}\ in \mbox{IRAS\,08544$-$4431}\ along the direction P.A.\ = 75$^{\circ}$. Contours are the same as in the channel maps, but we used a higher spectral resolution to better show the velocity structure.
The dashed lines show approximate centroids in velocity and position.} \label{} \end{figure} \begin{figure} \hspace{.63cm} \includegraphics[width=7.35cm]{model-pv13.jpg} \caption{Synthetic position-velocity diagrams from our best-fit model of \mbox{$^{13}$CO}\ \mbox{$J$=3$-$2}\ emission in \mbox{IRAS\,08544$-$4431}, to be compared with Fig.\ 4; all scales and contours are the same as in that figure.} \label{} \end{figure} \begin{table*}[bthp] \caption{Physical conditions in the molecule-rich nebula, derived from our model fitting of the CO data, and assuming $D$ = 1100 pc. The values of the physical conditions depend on three geometrical parameters: the distance to the center, $r$, distance to the axis, $p$, and distance to the equator, $h$. See Figs.\ 6, 7 for more details and cartoons of the density, velocity, and temperature distributions. } {\tiny \begin{center} \begin{tabular}{|l|cc|cc|cc|} \hline\hline & & & & & & \\ & \multicolumn{2}{c|}{Inner disk} & \multicolumn{2}{c|}{Outer disk} & \multicolumn{2}{c|}{Outflow} \\ & \multicolumn{2}{c|}{(pure Keplerian rotation)} & \multicolumn{2}{c|}{(subkeplerian rotation plus slow expansion)} & & \\ & & & & & & \\ {Parameter} & {Law} & { Values} & {Law} & {Values} & {Law} & {Values} \\ & & & & & & \\ \hline\hline & & & & & & \\ Radius & r $<$ $R_i$ & $R_i$ = 6 10$^{15}$ cm & r $<$ $R_o$ & $R_o$ = 2 10$^{16}$ cm & p $<$ $R_{of}$ & $R_{of}$ = 2.5 10$^{16}$ cm \\ (see Figs.\ 6, 7) & & & & & & \\ \hline & & & & & & \\ Total width & $h$ $<$ $H_i$ & $H_i$ = 2.8 10$^{15}$ cm & $h$ $<$ $H_o$ & $H_o$ = 4 10$^{15}$ cm & $h$ $<$ $H_{of}$ & $H_{of}$ = 3.4 10$^{16}$ cm \\ (see Figs.\ 6, 7) & & & & & & \\ \hline & & & & & & \\ Temperature & $T$ = $T_o \left( \frac{5\, 10^{15} {\rm cm}}{p} \right)^{\alpha_T}$ & $T_o$ = 36 K & $T$ = $T_o \left( \frac{10^{16} {\rm cm}}{p} \right)^{\alpha_T}$ & $T_o$ = 27 K & $T$ = $T_o \left( \frac{10^{16} {\rm cm}}{r} \right)^{\alpha_T}$ & $T_o$ = 50 K \\ & & $\alpha_T$ = 0.4 & & $\alpha_T$ = 1 & & 
$\alpha_T$ = 0.7 \\ \hline & & & & & & \\ Density & $n$ = $n_o \left( \frac{5\, 10^{15} {\rm cm}}{p} \right)^{\alpha_n}$ & $n_o$ = 3.7 10$^6$ cm$^{-3}$ & $n$ = $n_o \left( \frac{10^{16} {\rm cm}}{p} \right)^{\alpha_n}$ & $n$(10$^{16}$cm) = 9 10$^5$ cm$^{-3}$ & $n$ = $n_o \left( \frac{10^{16} {\rm cm}}{r} \right)^{\alpha_n}$ & $n_o$ = 8.5 10$^4$ cm$^{-3}$ \\ & & $\alpha_n$ = 1 & & $\alpha_n$ = 1 & & $\alpha_n$ = 2.3 \\ \hline & & & & & & \\ Local velocity & constant & 0.1 \mbox{km\,s$^{-1}$} & constant & 0.1 \mbox{km\,s$^{-1}$} & constant & 2 \mbox{km\,s$^{-1}$} \\ dispersion & & & & & & \\ \hline \end{tabular} \begin{tabular}{|l|cc|l|} \hline & & & \\ {Other parameters} & {Law} & {Values} & comments \\ & & & \\ \hline\hline & & & \\ Axis inclination with respect to the plane of the sky & & 70$^\circ$ & from IR and CO data \\ & & & \\ \hline & & & \\ Axis inclination in the plane of the sky (PA) & & $-$15$^\circ$ & from IR and CO data \\ & & & \\ \hline & & & \\ Distance & & 1100 pc & various arguments (Sect.\ 1) \\ & & & \\ \hline & & & \\ \mbox{$^{12}$CO}\ relative abundance & ~~constant~~ & ~~1.5 10$^{-4}$~~ & this paper \\ \mbox{$^{13}$CO}\ relative abundance & ~~constant~~ & ~~1.5 10$^{-5}$~~ & this paper \\ & & & \\ \hline\hline \end{tabular} \end{center} } \end{table*} In this best-fit model, we adopted a relative abundance with respect to the total number of particles $X$(\mbox{$^{13}$CO}) $\sim$ 1.5 10$^{-5}$; as in most nebulae around evolved stars, we can assume that the dominant component of the low-excitation gas is H$_2$. This value of $X$(\mbox{$^{13}$CO}) is similar to those adopted in our previous works to ease the comparison with previous results on this and similar objects. Also following those works, we adopted an abundance ratio \mbox{$^{12}$CO}/\mbox{$^{13}$CO}\ = 10. Those values lead to a reasonable fit of the data. The disk is assumed to be formed of two components. In the inner component, the rotation is purely Keplerian. 
The deduced velocity field corresponds to a central (stellar) mass of about 1.8 \mbox{$M_{\mbox{\sun}}$}, reasonable for this binary system (Sect.\ 1). In the outer component, we assumed sub-Keplerian rotation and the presence of a slow radial expansion, which significantly helps to reproduce the data, as was also the case in our studies of the Red Rectangle and IW Car (see discussions in our previous works, Bujarrabal et al.\ 2005, 2013b, 2017). For the sub-Keplerian velocity law we assumed angular momentum conservation. Remarkably, a similar result was independently found for L$_2$ Pup, the only AGB source in which a rotating disk has been found to date, by Homan et al.\ (2017), who also deduced sub-Keplerian rotation in the outer disk regions; in any case, the size of the disk around L$_2$ Pup is much smaller than in our objects. We found that a small inner region of the disk must be devoid of molecules in some way. It is necessary to adopt an assumption of this kind to fit the high-velocity maps, but the angular resolution of our data does not allow a proper description of this region (see Sect.\ 3.2). A similar result was also found in our previous works for the Red Rectangle and IW Car. As in those papers, we assumed a progressive decrease of the disk width in the inner regions, instead of a sudden disappearance of the emitting gas. We recall that we cannot distinguish between the different options from the fitting, and here we take a simple law. An empty region at a smaller scale is also found in the IR VLTI maps of hot dust emission in \mbox{IRAS\,08544$-$4431}\ (Hillen et al.\ 2016), which show emission from a ring with a diameter of about 15 mas (1 -- 2 10$^{14}$ cm, compatible with the dust sublimation radius). Those observations strongly select hot-dust inner regions and are probably able to trace the very inner disk, at the vertex of the cone depicted in Figs.\ 6 and 7, where the emission of the hottest dust is prominent despite its small radius and width.
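As a simple check, the Keplerian law with the derived central mass gives rotation velocities of a couple of \mbox{km\,s$^{-1}$}\ at the disk radii of Table 1; a minimal sketch:

```python
import math

G, MSUN = 6.674e-8, 1.989e33   # gravitational constant and solar mass, cgs

def v_kepler_kms(m_msun, r_cm):
    """Keplerian rotation velocity sqrt(G M / r), in km/s."""
    return math.sqrt(G * m_msun * MSUN / r_cm) / 1.0e5

# Central mass ~1.8 Msun; inner-disk radius R_i = 6e15 cm (Table 1):
print(round(v_kepler_kms(1.8, 6.0e15), 2))  # -> 2.0 km/s
```

Velocities of this magnitude, projected with the adopted inclination, are what set the widths of the line wings that constrain the central mass.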
In the disk, the density and temperature ($n$ and $T$) are assumed to vary solely with the distance to the axis, $p$, following power laws; see values of the parameters in Table 1. We note that the laws are not the same for the inner and outer disk components. The density distribution is shown in Fig.\ 6. As in our previous works, we find that the disk must show a very small local velocity dispersion to fit the data; we assumed that this is the combination of thermal motions and a small local dispersion (microturbulence) of 0.1 \mbox{km\,s$^{-1}$}. In the outflow, the density and temperature are assumed to vary with the distance to the center, again following power laws; see parameters in Table 1. The expansion velocity is basically radial and has a minor component that is parallel to the equator. The local velocity dispersion in the outflow is dominated by microturbulence with a dispersion of 2 \mbox{km\,s$^{-1}$}. The total nebular mass derived from our fitting is $\sim$ 2 10$^{-2}$ \mbox{$M_{\mbox{\sun}}$}, about 90\% of which is placed in the disk. These values are compatible with those found by Bujarrabal et al.\ (2013a, after correcting for the different assumed distance), although their treatment, based only on single-dish \mbox{$^{12}$CO}\ observations, was much more uncertain. From the extent and velocity field of the expanding envelope, we derived a typical time required to form it of about 1100 yr. From the disk/outflow mass ratio and assuming, as we deduced for similar sources (Sect.\ 1), that the outflowing gas has been expelled from the disk, we can estimate a disk lifetime of about 10000 yr. This value is comparable to that found for IW Car and the Red Rectangle. This is just an estimate of the disk lifetime scale, since the mass-ejection rate can vary with time. We also stress that our outflow component is unbounded, since the escape velocity is $\sim$ 1.8 \mbox{km\,s$^{-1}$}\ at 1000 AU, a few times smaller than the outflow velocity.
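The quoted escape velocity can be checked directly against the derived central mass: in convenient units,
\[
v_{\rm esc} = \sqrt{\frac{2\,G\,M_\ast}{r}} \simeq 42.1\ \mbox{km\,s$^{-1}$} \left(\frac{M_\ast}{\mbox{$M_{\mbox{\sun}}$}}\right)^{1/2} \left(\frac{r}{1\,{\rm AU}}\right)^{-1/2} \simeq 1.8\ \mbox{km\,s$^{-1}$}
\]
for $M_\ast$ $\simeq$ 1.8 \mbox{$M_{\mbox{\sun}}$}\ and $r$ = 1000 AU, indeed well below the outflow velocity.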
As we discuss in Sect.\ 3.2, the derived parameters are affected by the adopted distance, whose value is not well known (Sect.\ 1). If we assume a significantly shorter distance, as mentioned in the Introduction, $\sim$ 550 pc, most model parameters change significantly and \mbox{IRAS\,08544$-$4431}\ becomes a relatively small and low-mass post-AGB nebula. The total size would be $\sim$ 3.5 10$^{16}$ cm (lower than for the Red Rectangle) and the total mass would become $\sim$ 6 10$^{-3}$ \mbox{$M_{\mbox{\sun}}$}\ (the mass of the Red Rectangle is $\sim$ 10$^{-2}$ \mbox{$M_{\mbox{\sun}}$}). The central stellar mass and the disk lifetime would become $\sim$ 0.9 \mbox{$M_{\mbox{\sun}}$}\ and $\sim$ 5000 yr, again lower than those of the Red Rectangle. In view of the parameters deduced for the stellar orbit, Sect.\ 1, a total mass of about 1.8 \mbox{$M_{\mbox{\sun}}$}\ would be necessary to get a mass of the primary of about 0.5 \mbox{$M_{\mbox{\sun}}$}, a very reasonable value for a post-AGB star. A total mass of $\sim$ 1 \mbox{$M_{\mbox{\sun}}$}, as deduced for 550 pc, would yield a primary mass of only 0.1 \mbox{$M_{\mbox{\sun}}$}. Therefore, this reasoning favors a long distance $\sim$ 1100 pc for \mbox{IRAS\,08544$-$4431}, which is taken as our standard value, but its validity depends on the relatively uncertain mass values. A distance of 1.1 kpc implies a luminosity $\sim$ 12000 \mbox{$L_{\mbox{\sun}}$}. Using the core-mass luminosity relation of Miller Bertolami (2016), this results in a post-AGB mass of about 0.65 M$_{\odot}$. But a larger primary mass requires a larger total mass (as deduced from the orbit mass function and inclination), and then still larger distance (to explain the observed disk rotation) and luminosity. 
To be fully compatible with the stellar evolution calculations, we should adopt a distance $\sim$ 1.5 kpc, a primary mass $\sim$ 0.8 \mbox{$M_{\mbox{\sun}}$}, a total mass of 2.5 \mbox{$M_{\mbox{\sun}}$}, and a luminosity of about 2 10$^4$ \mbox{$L_{\mbox{\sun}}$}. However, this criterion is weak because of the poorly understood post-AGB evolution of the stars studied here (Sect. 1) and the difficulties in measuring the involved parameters, widely mentioned in this paper. In particular, total stellar mass values of 2 -- 2.5 \mbox{$M_{\mbox{\sun}}$}\ are still compatible with a distance of 1.1 kpc and the observed Keplerian velocity field (within the uncertainties, Sect. 3.2). The orbital parameters are also uncertain; in particular, the translation from observed orbital parameters to physical binary parameters is strongly dependent on the inclination. An orbital inclination of 23$^\circ$, a change of just 3$^\circ$ with respect to our standard value, and a post-AGB mass of 0.65 \mbox{$M_{\mbox{\sun}}$}\ give a total stellar mass of 1.8 \mbox{$M_{\mbox{\sun}}$}\ from the measured mass function. This is compatible with the gravitational mass obtained from the ALMA data and with evolution calculations for our distance value, 1.1 kpc. Moreover, a significantly larger distance would lead to very high values of the nebular mass and size compared with results for other similar objects (Sect. 1). Accordingly, we think that the distance derived from the parallax measurements, 1.1 kpc, is a reasonable compromise that is compatible with all the available information. We derived the disk angular momentum by integrating the local momentum of the volume units for the adopted density and velocity laws. The uncertainties of this estimate are discussed in Sect.\ 3.2. Particularly interesting is the distance dependence of the disk angular momentum. For a distance of 550 pc, we derived a disk angular momentum $J$ $\sim$ 1.6 \mbox{$M_{\mbox{\sun}}$}\,\mbox{km\,s$^{-1}$}\,AU. 
This value is significantly smaller than that of the Red Rectangle ($\sim$ 9 \mbox{$M_{\mbox{\sun}}$}\,\mbox{km\,s$^{-1}$}\,AU), but still high for the low central mass deduced in that case. The strong dependence $J$ $\propto$ $D^3$ (Sect.\ 3.2) yields a high value $J$ $\sim$ 13 \mbox{$M_{\mbox{\sun}}$}\ \mbox{km\,s$^{-1}$}\ AU for our best distance estimate, $D$ = 1100 pc. The angular momentum of the stellar system at present is found to be $\sim$ 20 \mbox{$M_{\mbox{\sun}}$}\,\mbox{km\,s$^{-1}$}\,AU, comparable to that of the disk. The stellar parameters used for this estimate are given in Sect.\ 1; see more details and discussion in Van Winckel et al.\ (2009). An angle between the orbit and sky planes of about 20$^\circ$ is again assumed. \subsection{Uncertainties in the derivation of the model parameters} Some disk properties are not well determined from the observations, mainly because of the relatively low angular resolution and the weak dependence of the predictions on these properties. In particular, the width of the disk is not well determined because of the insufficient angular resolution and the significant inclination of the axis with respect to the plane of the sky. The central region of the disk is assumed to show a decreasing width, similar to what we found in previously studied nebulae. The exact shape of these regions is also difficult to measure; in fact, we take a typical diameter of 2 10$^{15}$ cm (for $D$ = 1100 pc) that is smaller than the resolution in linear units, 3 10$^{15}$ cm. An inner disk region with relatively low CO emission clearly improves the quality of the modeling and is in fact necessary to get a reasonable data fitting, but we must keep in mind the uncertain structure of these very inner disk regions. On the contrary, the diameter of the disk is well measured, since it is basically given by its extent in the maps, which is much larger than the resolution.
The general structure of the wide outflow is also well determined from the data. However, the details of the boundary shape are of course uncertain, mainly in the farther regions of the X-shaped structure, whose emission is weak. Because of the relatively large total extent, comparable to the distance at which CO is photodissociated by the interstellar UV field in expanding gas, it is probable that the actual shape of the emitting region is due to CO photodissociation rather than to a relatively sudden decrease of the density; for the relatively low mass-loss rates characteristic of the outflows in our sources, see Mamon et al.\ (1988) and Groenewegen (2017). Any attempt to deduce the size and shape of the CO-rich gas from the dissociation theory is extremely difficult because of the uncertainties in the path covered by each gas particle and the unknown changes of the velocity with time. We think that the general nebula shape we propose in this paper, after incorporating our previous experience with similar sources, is realistic and could be transferred, with moderate scaling and changes, to most of the sources of this kind. The total mass of both nebular components is relatively well constrained (for an assumed distance) because we match the total intensity of optically thin emission: \mbox{$^{13}$CO}\ emission for the disk and inner outflow regions and \mbox{$^{12}$CO}\ emission for the outermost regions. The main source of uncertainty for the total mass comes from the assumed value of the CO abundances, $X$(\mbox{$^{12}$CO}, \mbox{$^{13}$CO}). Because of the LTE approach we used, there is a degeneracy between density and CO abundance, such that predictions are identical for values of both parameters that keep a constant product. The errors in the mass are, therefore, inversely proportional to those of the abundance. Fortunately, these abundances are not very uncertain, showing a moderate variation between different studies.
Following discussions in our previous works, we estimate that $X$(\mbox{$^{13}$CO}) and $X$(\mbox{$^{12}$CO}) must lie in the ranges 10$^{-5}$ -- 2 10$^{-5}$ and 10$^{-4}$ -- 2 10$^{-4}$, respectively. We thus expect a moderate uncertainty in the estimate of the molecule-rich gas mass, $\sim$ $\pm$50\%. The degree of uncertainty in the estimate of the disk angular momentum is similar to that of the total mass (or just slightly larger), since the detected disk radius and velocity are relatively well measured. We cannot rule out the presence of outer regions that may remain undetected because of their low brightness or photodissociation. Therefore, the derived mass and momentum values may be lower limits corresponding to the actually detected gas. The values of the temperature are relatively uncertain because we only have data of \mbox{$J$=3$-$2}\ emission. However, we note the high brightness of the optically thick emission (\mbox{$^{12}$CO}\ line) from the disk and inner outflow, $>$ 30 K, reaching values close to 100 K. These are comparable to the high brightness found in the Red Rectangle, AC Her, and IW Car, suggesting high temperatures similar to those we deduced in our previous papers, over 100 K in the central regions and decaying outward. Further details on the temperature distribution are difficult to estimate. The uncertainty in the average gas density in the disk can be significant, mainly because of the uncertain disk width, with variations of the density inversely proportional to those of the width. The value adopted for $X$ also affects the values deduced for the density, as mentioned above. The density is particularly difficult to estimate in the outer regions of the outflow because of the low emission and the more uncertain temperature law there. It is difficult to determine the effect that the temperature uncertainty has on the density estimate, but it is probably moderate.
In principle, $n$ is approximately proportional to the assumed value of $T$ for temperatures much higher than the line excitation ($\sim$ 30 K), exact LTE, and very low opacities. But the dependence is significantly lower than linear and tends to vanish when the excitation temperature is not very high or the optical depth is moderate. The derived disk/outflow mass ratio and disk lifetime carry somewhat larger uncertainties than the independent mass values because a change in the abundances affects the determinations of the mass in both components in different ways. The uncertainty in the values of $X$(CO) mentioned above leads to changes in the disk lifetimes between 5000 and 2 10$^4$ yr. The velocity fields, both rotation and expansion, are relatively well measured, since maps are obtained for well-known {\em LSR} velocities and the nebula inclination is well constrained. Of course, the exact direction of the velocity, i.e.,\ the inclination of the arrows in Fig.\ 6, can significantly change. For instance, we cannot rule out a purely radial velocity (as found for IW Car). In any case, we had problems fitting the data with simple models that include radial velocities; the assumption of a radial velocity field would probably require small changes in the symmetry axis direction with the distance to the center. The central stellar mass depends on the square of the Keplerian velocity; since $M_\ast \propto V_{\rm rot}^2$ implies $\Delta M_\ast / M_\ast \simeq 2\, \Delta V_{\rm rot} / V_{\rm rot}$, even small changes in the velocity of 15\% lead to variations in the stellar mass of about 30\%. The same applies to errors in the estimate of the inclination angle, since they affect the measurement of the Keplerian velocity modulus. We estimate that the stellar mass uncertainty is about 40\%. The distance of the object $D$ is very uncertain (Sect.\ 1), which affects the other parameters. The model nebula size must vary linearly with the distance to continue to fit the data.
The velocity field is not affected, provided that we scale the velocity laws to the size of the nebula. The same rule holds for the temperature. The density varies with $D^{-1}$, after scaling the density law to the size of the nebula, since the column density must be conserved to yield the same optical depth in all lines of sight. Therefore, the change in the total volume implies that the total mass varies with $D^2$. The variations of the rotation velocities and distances mentioned above imply that the central stellar mass varies proportionally to the assumed distance. The dependencies of the velocity and size also lead to variations of the disk lifetime, proportional to $D$. The dependence of the disk angular momentum on the distance is particularly strong, varying with the third power of the assumed value, since the momentum of an elementary particle rotating at a given velocity depends on its total mass and distance to the axis. In summary, with the observed (Doppler) velocities $V$ independent of $D$,
\[
R \propto D\,, \quad n \propto D^{-1}\,, \quad M \propto n R^{3} \propto D^{2}\,, \quad M_\ast \propto V^{2} R \propto D\,, \quad t \propto R/V \propto D\,, \quad J \propto M V R \propto D^{3}\,,
\]
where $R$ is the nebular size, $M$ the total mass, $M_\ast$ the central stellar mass, $t$ the disk lifetime, and $J$ the disk angular momentum. These dependence laws are basically the same in all models of molecular line emission from AGB or post-AGB shells. Finally, we note that some features of our observations are not well reproduced by our model. Most of them are minor details, and probably reflect the actual complexity of the true nebula in comparison with our very simple model. The most important discrepancy, in our opinion, is the presence of a shift to relatively low {\em LSR} velocities of the central maximum in the \mbox{$^{12}$CO}\ maps. This would correspond to an asymmetry in the emission between the rotating gas approaching us, which is brighter, and that receding from us. Since the effect is less noticeable in the \mbox{$^{13}$CO}\ maps, the natural explanation is that the kinetic temperature is higher in the disk edges at certain azimuthal angles. We can speculate that the position of the central stars may lead to selective heating of certain regions of the disk.
We do not try to incorporate these phenomena in our nebula model, since their nature is very uncertain and other complex effects could be present, such as selective photodissociation and gas evaporation. The effects of such selective heating on the derived physical parameters remain within the ranges discussed above. \begin{figure} \hspace{-.0cm} \includegraphics[width=8.9cm]{mod11-dens.jpg} \caption{Density and velocity distributions in our best-fit model. The model is shown for $D$ = 1100 pc; the length scale would change proportionally for other distance values. Only expansion velocities are shown because we represent a plane containing the symmetry axis; rotation is only present in the equatorial disk.} \label{} \end{figure} \begin{figure} \hspace{-.0cm} \includegraphics[width=8.5cm]{mod11-temp.jpg} \caption{Temperature distribution in our best-fit model (for $D$ = 1100 pc). } \label{} \end{figure} \section{Conclusions} We present high-quality ALMA maps of \mbox{$^{12}$CO}\ and \mbox{$^{13}$CO}\ \mbox{$J$=3$-$2}\ emission from \mbox{IRAS\,08544$-$4431}\ (Sect.\ 2) and detailed modeling able to explain the main observational features (Sect.\ 3). \mbox{IRAS\,08544$-$4431}\ belongs to a class of binary post-AGB stars that are known to show indications of being surrounded by material in rotation (Sect.\ 1), including a significant NIR excess (supposed to be due to emission of hot dust) and peculiar CO line profiles that are very similar to those expected from rotating disks. The presence of rotating disks was previously confirmed by maps of the velocity field in three of these sources: the Red Rectangle, IW Car, and AC Her. A component of gas in expansion, probably expelled from the disk, is also confirmed in five of these objects and probably present in most of them (Sect.\ 1). Both rotating and outflowing components were only well detected in the Red Rectangle and IW Car.
Our maps of \mbox{IRAS\,08544$-$4431}\ also show a clearly composite nebula with a disk in rotation and gas in expansion. The general properties of our maps and modeling of \mbox{IRAS\,08544$-$4431}\ are remarkably similar to those found in better studied similar objects, such as the Red Rectangle, which confirms our interpretation. We analyze the CO emission by means of nebula models accounting for the complex nature of the source. From our model fitting, we derive the main nebula parameters: shape and velocity field, density distribution and total mass, and characteristic temperature (Sect. 3). We extensively discuss the uncertainties in the derivation of those parameters in Sect.\ 3.2, in particular the dependence of the derived properties on the distance of the star. We adopt a distance $D$ = 1100 pc, but we are aware that this value is uncertain and also discuss the case of a distance smaller by a factor of 2 (Sect.\ 1), particularly to compare our conclusions with previous results. The mass of the nebula is found to be $\sim$ 2 10$^{-2}$ \mbox{$M_{\mbox{\sun}}$}\ ($\sim$ 6 10$^{-3}$ \mbox{$M_{\mbox{\sun}}$}\ for $D$ = 550 pc), and about 90\% of the nebular material would be placed in the disk. These values are compatible with those typically found in similar sources, as well as with previous estimates for \mbox{IRAS\,08544$-$4431}\ from much less complete data (Bujarrabal et al.\ 2013a). The mass of the central stellar system is derived from the analysis of the rotation dynamics. We find a central mass of about 1.8 \mbox{$M_{\mbox{\sun}}$}\ (0.9 \mbox{$M_{\mbox{\sun}}$}\ for $D$ = 550 pc). The higher stellar mass value is comparable to the result found for the Red Rectangle, while 0.9 \mbox{$M_{\mbox{\sun}}$}\ is very similar to that of IW Car.
A high stellar mass value is more compatible with the measured properties of the binary system, which is composed of a very luminous post-AGB primary with roughly 0.5 -- 0.8 \mbox{$M_{\mbox{\sun}}$}\ and a more massive secondary (see Sects.\ 1, 3.1). In spite of the uncertainties, the need for a relatively high total mass favors a distance of about 1100 pc for \mbox{IRAS\,08544$-$4431}. The typical size of the nebula is $\sim$ 6 10$^{16}$ cm ($\sim$ 3 10$^{16}$ cm for $D$ = 550 pc). As for other well-studied objects, only the central part of the disk is in purely Keplerian rotation; the rotation of the outer disk is probably sub-Keplerian and a slow expansion appears in it. A similar result was found in the Red Rectangle and IW Car, and in the only AGB star in which a rotating disk has been found, i.e., L$_2$ Pup (Homan et al.\ 2017). It is remarkable that, for the low distance value we considered, i.e., $D$ = 550 pc, the nebula would show relatively low mass and size and the total stellar mass would be relatively low compared to those of the Red Rectangle (the best studied object of this class), but closer to the properties of IW Car. However, for our best distance value, $D$ $\sim$ 1.1 kpc, \mbox{IRAS\,08544$-$4431}\ is slightly larger and more massive than the Red Rectangle. We propose that the Red Rectangle and \mbox{IRAS\,08544$-$4431}\ are relatively similar objects. The angular momentum found in the disk (for $D$ = 1.1 kpc) reaches a high value $J$ $\sim$ 13 \mbox{$M_{\mbox{\sun}}$}\ \mbox{km\,s$^{-1}$}\ AU, which is comparable to that found for the binary system at present ($\sim$ 20 \mbox{$M_{\mbox{\sun}}$}\ \mbox{km\,s$^{-1}$}\ AU, see Sects.\ 3.1 and 1). In our case, it is expected that the disk angular momentum comes from the binary system because the gas is ejected with negligible rotation.
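Since the ejected gas carries essentially no rotation of its own, the past binary angular momentum can be estimated by simple conservation, adding the present stellar and disk contributions:
\[
J_{\rm binary,\,past} \simeq J_{\rm binary,\,present} + J_{\rm disk} \simeq (20 + 13)\ \mbox{$M_{\mbox{\sun}}$}\,\mbox{km\,s$^{-1}$}\,{\rm AU}\, .
\]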
Therefore, and keeping in mind the uncertainties that affect these measurements, we deduce that the binary angular momentum was $\sim$ 33 \mbox{$M_{\mbox{\sun}}$}\,\mbox{km\,s$^{-1}$}\,AU in the past. Since in a binary star $J$ is basically proportional to the square root of the orbit size, we conclude that the distance between the stars has significantly decreased, by a factor \raisebox{-.4ex}{$\stackrel{\sf >}{\scriptstyle\sf \sim}$}\ 2 (since $J \propto \sqrt{a}$, $a_{\rm past}/a_{\rm present} \simeq (33/20)^2 \approx 2.7$), owing to the transfer of angular momentum to form the disk. The orbit was probably larger than the AGB star itself, but not much larger, which allowed a significant momentum transfer. The moderate change in orbit size indicates that the system certainly had enough angular momentum to explain the disk rotation, while maintaining a size of some astronomical units during the whole process. We hope that these results serve to better understand the evolution of binary stars in the presence of dense circumstellar material and as a comparison with theoretical studies of the transfer of angular momentum to circumbinary disks (Chen et al.\ 2017; Akashi \& Soker 2008; Dosopoulou \& Kalogera 2016, etc.). We conclude that these NIR-excess post-AGB objects systematically show composite nebulae, which contain relatively extended disks in rotation, plus gas in slow expansion that is probably escaping from the disk. The total mass of such nebulae is small compared with those of most PNe and pPNe (Sect.\ 1), i.e., $<$ 10$^{-1}$ \mbox{$M_{\mbox{\sun}}$}\ and often $\sim$ 10$^{-2}$ \mbox{$M_{\mbox{\sun}}$}. The mass of the outflow is several times lower than that of the disk in well-studied cases. \mbox{IRAS\,08544$-$4431}\ is the third object in which this nebular structure and dynamics are well established and the main properties of both components are described. \begin{acknowledgements} This work has been supported by the Spanish MINECO (grants AYA2012-32032, FIS2012-32096, and AYA2016-78994-P), and by the European Research Council (ERC Grant 610256: NANOCOSMOS).
We used the SIMBAD database to check some properties of the source. We are grateful to the referee of this paper, Dr.\ O.\ de Marco, for her constructive comments. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2013.1.00338.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. \end{acknowledgements}
\section{Introduction} The radii of tidally-locked, main-sequence K- and M-dwarfs in eclipsing binary systems are consistently measured to be larger than predicted by most evolutionary models. The radius discrepancies amount to 10--20 per cent at a fixed mass and thus, for a fixed luminosity, the effective temperature, $T_{\rm eff}$, can be underestimated by 5--10 per cent (e.g. Lopez-Morales \& Ribas 2005; Morales et al. 2009; Torres 2013). The tidally-locked components of these binary systems are fast-rotating and magnetically active; this, together with the fact that interferometrically measured radii for nearby, relatively inactive K- and M-dwarfs are in much better agreement with models (e.g. Demory et al. 2009; Boyajian et al. 2012), has led to theoretical developments that explain ``radius inflation'' in terms of the effects of dynamo-generated magnetic activity (Morales, Ribas \& Jordi 2008). Magnetic activity might have an influence on the radii of cool, convective stars either through inhibition of convection throughout the star (Mullan \& MacDonald 2001; Feiden \& Chaboyer 2014) or by blocking the emergence of radiative flux at the photosphere with dark, magnetic starspots (e.g. Spruit \& Weiss 1986; MacDonald \& Mullan 2013; Jackson \& Jeffries 2014a). Young stars on the pre-main-sequence (PMS) or at the zero-age main sequence (ZAMS) are also highly magnetically active as a consequence of their rapid rotation. If magnetic activity is responsible for inflating the radii of fast-rotating binary components, then it seems likely that the same phenomenon will be exhibited by low-mass PMS and ZAMS stars. \begin{figure*} \centering \begin{minipage}[t]{0.98\textwidth} \includegraphics[width = 180mm]{FigP1.eps} \end{minipage} \caption{Target selection: panel (a) shows $(v \sin i)_p$ as a function of estimated mass (see section 2.1) for low-mass stars in the Pleiades with measured periods.
Symbols correspond to the predicted observed $(v \sin i)_p$ (for an average spin-axis inclination and no radius inflation; triangles indicate $(v\sin i)_p < 5$ km\,s$^{-1}$, crosses have $5< (v\sin i)_p <15$ km\,s$^{-1}$ and squares $(v\sin i)_p \geq 15$ km\,s$^{-1}$), whilst the different colours correspond to intervals in mass inferred from the observed $K_{\rm 2MASS}$ magnitude and $V-K$ colour (blue for $M>0.64M_{\odot}$, red for $0.64>M/M_{\odot}>0.25$ and black for $M<0.25M_{\odot}$). Panel (b) shows the $V$ versus $V-K_{\rm 2MASS}$ colour-magnitude diagram for the same data set using the same symbols and colour coding. Panel (c) shows the spatial distribution of low-mass targets in the central region of the Pleiades. The large circles show the fields of view of the twelve observed WIYN Hydra configurations (see Table 1).} \label{fig1} \end{figure*} Substantial evidence has emerged that this is the case. Inflated radii have been invoked to explain a number of puzzles: the rotation-dependent anomalous colours of PMS and ZAMS stars (Stauffer et al. 2003; Kamai et al. 2014; Covey et al. 2016); the rotation-dependent scatter of lithium depletion seen among PMS and ZAMS stars of similar mass and age (e.g. Somers \& Pinsonneault 2014, 2015a,b); and the discrepancies between model predictions and the measured masses, radii and luminosities of PMS and ZAMS eclipsing binary systems (Kraus et al. 2015, 2016; David et al. 2016). The adoption of ``magnetic models'' for low-mass stars leads to the inference of significantly older ages (by a factor of two) and higher masses for PMS stars (Feiden 2016; Messina et al. 2016; Jeffries et al. 2017) from the Hertzsprung-Russell diagram and commensurately longer empirically determined timescales for the duration of star and planet formation.
Ideally, testing PMS/ZAMS magnetic models would involve direct measurements of stellar masses and radii, but these are only directly accessible for binary stars that might also be affected by tidal locking. Even the nearest PMS/ZAMS stars are just too far away for precise interferometric radius determinations. Indirect estimates of stellar radii can be made from measured luminosities and $T_{\rm eff}$ determined from colours or spectroscopy. This approach was adopted by Somers \& Stassun (2017) in the Pleiades cluster (age 120 Myr) and they found some evidence for inflated radii (by 10--30 per cent) in ZAMS K-stars with rotation periods $<2$ days compared with slower rotating stars of similar spectral type. Unfortunately this technique is subject to systematic errors in the $T_{\rm eff}$ estimation and is insensitive to any inflation caused by dark starspots, since these can reduce the luminosity of a star whilst leaving the colour and spectral type largely unchanged. An alternative geometric technique is to use the product of rotation period ($P$ in days) with projected equatorial velocity ($v \sin i$ in km\,s$^{-1}$), which yields the {\it projected} stellar radius, in solar units; \begin{equation} R\sin i = 0.0198\, P\, v \sin i\, \end{equation} (e.g. Rhode, Herbst \& Mathieu 2001; Jeffries 2007), where the numerical constant is simply the unit conversion $86400/(2\pi \times 6.957\times 10^{5}) \simeq 0.0198$ for $P$ in days and $v\sin i$ in km\,s$^{-1}$. By assuming a random spin axis orientation, for which $\langle \sin i \rangle = \pi/4$ (e.g. Jackson \& Jeffries 2010a), and taking account of observational biases, a set of $R \sin i$ estimates can be modelled to determine the true average radius for a group of stars, with a precision that improves with larger samples. The $P\,v\sin i$ method was used by Hartman et al. (2010) to estimate average radii for G- and K-stars in the Pleiades using rotation periods from the HATNet survey and $v \sin i$ measurements from a variety of literature sources.
They concluded that stars with $M\geq 0.85 M_{\odot}$ have radii consistent with non-magnetic evolutionary models, but that stars with $0.6<M/M_{\odot}<0.85$ were 10 per cent larger than predicted. Jackson \& Jeffries (2014a) used the same dataset and a similar modelling technique to compare the radii of Pleiades stars with $0.55 \leq M/M_{\odot}<1.0$ with the interferometric radii of inactive main sequence field stars, finding an over-radius of $13 \pm 3$ per cent. Lanzafame et al. (2017) performed their own analysis showing that the effect appears to be driven by a $\sim 30$ per cent inflation for a subset of stars with $0.6 < M/M_{\odot}<0.8$ that are intermediate in rotation rate, but that faster rotators or more massive stars have radii consistent with model predictions. The goal of the present study is to extend these studies in the Pleiades to lower masses. The motivation is twofold. First, Jackson et al. (2009) and Jackson \& Jeffries (2014a) applied these techniques to M-dwarfs in NGC 2516, a cluster with a similar age to the Pleiades, finding a dramatic radius inflation that increased with decreasing mass, reaching $\sim 40$ per cent for the lowest masses considered ($\simeq 0.25 M_{\odot}$). Whilst a number of systematic effects (differential rotation, binarity) have been considered and accounted for, it is {\it possible} that the rotation periods used, which were based on a relatively short ground-based campaign, might have led to a mistaken upward bias. The result has however been supported (with low precision) by measurements of a few M-dwarfs in an even younger cluster (NGC~2547, Jackson et al. 2016), but we wish to confirm whether such large radius increases are present in a larger sample with better-determined periods (see below). Second, the predictions of the different flavours of magnetic model differ for low-mass stars that are mostly or fully convective. 
Magnetic inhibition of convection predicts a stronger effect in higher mass stars with large radiative cores (Feiden \& Chaboyer 2015), whereas inflation due to starspots is predicted to be more effective in fully convective stars, especially those which have yet to reach the ZAMS, which is the case for stars with $M<0.4 M_{\odot}$ at the age of the Pleiades (Jackson \& Jeffries 2014a). Hence measurements of radius inflation across the ``fully convective boundary'' could be diagnostic of the mechanism by which radius inflation occurs. In this paper we present new results for low-mass stars in the Pleiades. The cluster was included in the Kepler K2 mission (Howell et al. 2014) for 72 days during ``campaign 4'', starting 8th February 2015. Rebull et al. (2016a) reported 760 rotation periods for low-mass stars, including $\sim 600$ in the range $0.1<M/M_{\odot}<0.9$. To this can be added a further 40 periods for low-mass stars ($\leq 0.45 M_{\odot}$) from the Palomar Transient Factory survey (Covey et al. 2016), and together these provide a large catalogue of reliable rotation periods that bridge the fully convective boundary. We have targeted these objects with fibre spectroscopy from the WIYN\footnote{The WIYN Observatory is a joint facility of the University of Wisconsin Madison, Indiana University, the National Optical Astronomy Observatory and the University of Missouri.} 3.5-m telescope at the Kitt Peak National Observatory, in order to determine $v \sin i$ and hence distributions of $R \sin i$. In sections 2 and 3 we describe the target selection and the measurements that were made at the WIYN telescope. Sections 4 and 5 describe the analysis of these spectra to determine $v \sin i$ for individual objects and to determine the average over-radius for groups of objects.
In section 6 we discuss the significance of our results and compare them with the predictions of non-magnetic models and models that include the effects of magnetic inhibition of convection and starspots. Section 7 contains our conclusions. \section{Spectroscopic observations} \subsection{Target selection} Potential targets were selected from lists of Pleiades members with measured periods reported by Rebull et al. (2016a), Covey et al. (2016) and Hartman et al. (2010). Rotation period data were taken preferentially from Rebull et al. (705 targets), then from Covey et al. (44) and lastly from Hartman et al. (64). Data were matched with the 2MASS catalogue (Skrutskie et al. 2006) to give target co-ordinates (RA and Dec) and the apparent $K_{\rm 2MASS}$ magnitude. Figure~1 shows the distribution of potential targets in RA and Dec and in the colour-magnitude diagram. Targets for our fibre-spectroscopy study were selected from the 10 square degrees with the highest target density. \begin{table*} \caption{Hydra Configurations observed in the Pleiades. The positions are those of the field centres.} \begin{tabular}{crcccccrccc} \hline Config.
& File & Range of & Date & UT of &RA & Dec & Exposure & Number & Fibres on & Fibres on \\ number & number& $I$ magnitude& & exposure \#1 & \multicolumn{2}{c}{(J2000)} & time (s) & exposures & targets & sky \\\hline 1a& 1013 &12.3 to 17.3 & 2016-09-24 & 08:07:09 & 03:43:59.99 & 23:57:59.94 & 3600 & 3 & 48 & 27\\ 1b& 2046 &12.3 to 17.3 & 2016-09-25 & 07:59:09 & 03:43:59.99 & 23:57:59.94 & 3600 & 4 & 48 & 27\\ 2& 4054 &12.3 to 17.3 & 2017-01-03 & 05:43:01 & 03:47:46.00 & 23:55:00.08 & 3600 & 6 & 52 & 20\\ 3& 6063 &12.3 to 17.3 & 2017-01-05 & 04:45:10 & 03:45:59.99 & 24:37:29.93 & 3600 & 5 & 45 & 26\\ 4& 12053 &~9.6 to 14.0 & 2017-01-17 & 05:34:48 & 03:44:47.99 & 24:36:44.96 & 600 & 6 & 22 & 20\\ 5& 13025 &~9.6 to 14.0 & 2017-01-18 & 07:11:14 & 03:45:58.00 & 23:52:00.01 & 600 & 6 & 22 & 28\\ 6& 14025 &~9.6 to 14.0 & 2017-01-19 & 02:03:00 & 03:47:35.99 & 23:14:59.98 & 3000 & 5 & 40 & 29\\ 7& 21063 &~9.6 to 14.0 & 2017-02-02 & 02:20:17 & 03:46:59.99 & 23:04:59.96 & 1200 & 3 & 30 & 25\\ 8& 21066 &~9.6 to 14.0 & 2017-02-02 & 03:47:40 & 03:50:39.99 & 24:03:00.06 & 1200 & 3 & 24 & 26\\ 9& 21075 &~9.6 to 14.0 & 2017-02-02 & 05:53:13 & 03:45:18.10 & 25:05:58.01 & 1200 & 3 & 22 & 25\\ 10& 22073 &~9.6 to 14.0 & 2017-02-03 & 02:25:01 & 03:44:02.79 & 25:39:22.08 & 1200 & 3 & 7 & 25\\ 11& 22076 &~9.6 to 14.0 & 2017-02-03 & 03:51:08 & 03:49:30.00 & 25:00:30.05 & 1200 & 3 & 18 & 26\\ 12& 22079 &12.3 to 17.3 & 2017-02-03 & 05:19:13 & 03:43:27.99 & 23:00:00.05 & 3600 & 2 & 33 & 25\\\hline \end{tabular} \label{observations} \end{table*} \begin{table*} \caption{Properties of observed science targets in the Pleiades and reference slow rotators in Praesepe. Masses and radii are estimated from the models of BHAC15. The final column gives the predicted equatorial velocity -- see section 2.1. A sample of the table is given here, the full table is made available electronically.} \begin{tabular}{lccccccccccc} \hline Target name &RA &Dec &$K_{\rm 2MASS}$&$V-K$ &Period &Ref. 
&$BC_K$ &$\log L/L_{\odot}$ & $M/M_{\odot}$ &$R/R_{\odot}$ &$(v \sin i)_p$\\ (2MASS) &\multicolumn{2}{c}{(J2000)} & (mag) &(mag)&P (days)& * &(mag)&&& &(km\,s$^{-1}$)\\\hline J03414895+2303235 & 03 41 48.951 & +23 03 23.54 & 13.19 & 6.09 & 0.239 & 1 & 2.86 & -2.26 & 0.19 & 0.24 & 39.3 \\ J03415671+2358434 & 03 41 56.716 & +23 58 43.42 & 13.25 & 5.76 & 0.401 & 1 & 2.82 & -2.27 & 0.18 & 0.24 & 23.3 \\ J03415864+2257020 & 03 41 58.648 & +22 57 02.00 & 11.90 & 4.78 & 6.842 & 1 & 2.72 & -1.68 & 0.40 & 0.38 & 2.2 \\ J03421789+2406578 & 03 42 17.890 & +24 06 57.83 & 12.97 & 5.53 & 0.603 & 1 & 2.80 & -2.15 & 0.22 & 0.26 & 17.0 \\ J03422626+2351386 & 03 42 26.266 & +23 51 38.67 & 13.45 & 5.97 & 0.496 & 1 & 2.85 & -2.36 & 0.17 & 0.22 & 17.7 \\ J03422941+2247261 & 03 42 29.418 & +22 47 26.19 & 10.92 & 4.11 & 0.325 & 1 & 2.62 & -1.25 & 0.56 & 0.52 & 62.4 \\ J03423396+2411008 & 03 42 33.960 & +24 11 00.81 & 13.42 & 5.89 & 0.564 & 1 & 2.84 & -2.34 & 0.17 & 0.23 & 15.8 \\ J03424184+2400158 & 03 42 41.848 & +24 00 15.81 & 12.68 & 5.17 & 0.671 & 1 & 2.76 & -2.01 & 0.26 & 0.29 & 17.1 \\ J03424239+2320218 & 03 42 42.396 & +23 20 21.87 & 11.45 & 5.28 & 0.269 & 1 & 2.77 & -1.53 & 0.46 & 0.43 & 63.4 \\\hline \multicolumn{10}{l}{* Period measurement taken from (1) Rebull et al. 2016a, (2) Covey et al. 2016, (3) Hartman et al. 2010, (4) Douglas et al. 2017}\\ \end{tabular} \label{targets} \end{table*} The targets were prioritised according to their mass, $M$ (highest priority was given to lowest masses, but with a practical faint magnitude limit of $I=17.3$), and a prediction of their observed projected equatorial velocity, $(v \sin i)_p =(\pi /4) 50\,R/P$ in km\,s$^{-1}$, where $P$ is the rotation period in days and $R$ is the estimated stellar radius in solar units. Masses and radii were estimated by comparing the luminosity of the potential target with the predictions of a 120\,Myr solar metallicity, {\it non-magnetic} model isochrone from Baraffe et al. (2015) (hereafter BHAC15). 
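As a concrete check of these selection quantities, a minimal Python sketch reproduces the first row of Table 2 (J03414895+2303235), using the distance modulus, extinction and bolometric correction adopted in this section; the adopted solar bolometric magnitude (4.74) is an assumption of this sketch:

```python
import math

# Worked example for J03414895+2303235 (first row of Table 2).
DM, A_K = 5.67, 0.01        # adopted distance modulus and K-band extinction
MBOL_SUN = 4.74             # assumed solar bolometric magnitude

K2mass, BC_K = 13.19, 2.86  # apparent K magnitude and bolometric correction
R, P = 0.24, 0.239          # BHAC15 radius (R_sun) and rotation period (days)

M_K = K2mass - DM - A_K                  # absolute K magnitude -> 7.51
logL = (MBOL_SUN - (M_K + BC_K)) / 2.5   # log L/L_sun from M_bol = M_K + BC_K

# Predicted projected velocity: <sin i> = pi/4 for random axes, and
# 2*pi*R_sun/day ~ 50 km/s converts (R/R_sun)/P(days) to km/s.
vsini_p = (math.pi / 4) * 50 * R / P     # -> ~39 km/s, as tabulated
```

The derived $\log L/L_{\odot}=-2.25$ and $(v\sin i)_p=39.4$\,km\,s$^{-1}$ agree with the tabulated $-2.26$ and $39.3$ to within rounding of the inputs.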
The factor of $\pi/4$ is the mean value of $\sin i$, based on a crude assumption of an unbiased, random distribution of rotation axis orientations. The adopted metallicity and age are consistent with [Fe/H]$=0.03 \pm 0.05$ reported by Soderblom et al. (2009) and the lithium depletion boundary age of $125\pm 8$\,Myr given by Stauffer, Schultz \& Kirkpatrick (1998). The luminosity of each source was estimated from its $M_K$ value (accounting for extinction and distance, see below) using a bolometric correction calculated using $(V-K)_0$ and the BHAC15 models. An intrinsic distance modulus of $5.67\pm 0.02$\,mag was assumed (Melis et al. 2014), a conversion of $K_{\rm CIT}=K_{\rm 2MASS}+0.024$ (Carpenter 2001) and a reddening of $E(B-V)=0.032$\,mag (An, Terndrup \& Pinsonneault 2007). Using the relations of Rieke \& Lebofsky (1985) this reddening corresponds to an extinction of $A_V=0.10$\,mag and $A_K=0.01$\,mag. The targets were binned according to $(v \sin i)_p$ (see Fig.~1a). Highest priority was given to the faster rotating targets with $(v \sin i)_p \ge 15$\,km\,s$^{-1}$, which are expected to yield measurable $v\sin i$ values at the resolution of the spectroscopic data (see section 2.2). Second priority was given to targets with $(v \sin i)_p \le 5$\,km\,s$^{-1}$ to provide a sample of very slow rotators, which are expected to have unbroadened spectral lines and serve as a baseline for calibrating the measured $v \sin i$ as a function of spectral line broadening (see section 3.4). \subsection{Observations} The WIYN Hydra multi-object spectrograph (Bershady et al. 2008) consists of a robotic positioner that can position up to 83 fibres, each with a 3 arcsecond diameter (we used the ``blue'' fibre cable), across a 1 degree diameter unvignetted field of view at the Nasmyth focus of the 3.5-m WIYN telescope.
The fibres were used in conjunction with the bench spectrograph, an echelle grating and an order-sorting filter to provide spectra with a resolving power of $\simeq 17,000$. An STA1 2600$\times$4000 pixel CCD camera was used in $2\times2$ binning mode to record spectra covering $\sim 400$\AA\ centred at $\sim 7850$\AA. The FWHM of a resolution element corresponded to about 2.5 binned pixels. Twelve field centres were chosen to maximise the number of high priority targets (see section 2.1). Spare fibres were allocated to second priority targets and to repeat observations of targets in cases of overlap between fields. Finally, $\sim$20 spare fibres were allocated to clear sky $>$20 arcsec away from the nearest source in the 2MASS catalogue. The observing program was performed over 9 nights during a 6-month period from September 2016 to February 2017, although poor weather restricted the total observing time available. Details of when each of the twelve fields was observed (field 1 was observed on two nights), how many targets were in each field and how long the fields were exposed for are given in Table 1. The range in apparent $I$ magnitude (the relevant magnitude for the wavelength at which we observed) within a particular configuration was restricted to $<5$ magnitudes to limit any cross-contamination of spectra between adjacent fibres. To make best use of varying observing conditions we further divided the targets into ``bright'' and ``faint'' samples (with overlap). For configurations of fainter targets ($12.3<I<17.3$), several one hour exposures were required to produce a sufficient signal-to-noise ratio (SNR) in the spectra to allow resolution of $v\sin i$ in the faintest targets. The brighter targets ($9<I<14$) required less time and could be observed more readily in partially cloudy skies.
The names, co-ordinates, photometry, rotation periods, estimated masses and radii (from the BHAC15 models) and derived luminosities for the 324 individual Pleiades targets that were actually observed are listed in Table 2. \subsection{Additional observations of slowly rotating stars} There were too few slow-rotating M-dwarfs in the Pleiades to adequately characterise the width of spectral lines in stars with negligible rotational broadening (see section 3.4). To that end, also listed at the end of Table 2, are 88 low-mass targets from Praesepe, an older cluster that also has measured periods based on K2 observations (Douglas et al. 2017) and contains a higher proportion of slow-rotating M dwarfs ($(v \sin i)_p \le 3\,$km\,s$^{-1}$). The spectra of these stars were obtained on the same nights as the Pleiades targets, with exactly the same Hydra spectrograph set up, and were taken from a comprehensive set of observations of low-mass Praesepe stars with known periods, which will be reported on in a subsequent paper. $M_K$ values are estimated assuming a distance modulus of 6.29$\pm$0.07 mag (van Leeuwen 2009) and zero reddening. Stellar masses and radii were estimated from intrinsic $V-K$ colour and the BHAC15 models, assuming a cluster age of 670\,Myr (e.g. Cummings et al. 2017). \section{Data reduction} Many of the target spectra were faint, requiring an optimal extraction strategy to provide sufficient SNR for useful analysis. Strong sky emission lines were a dominant feature in the fainter spectra. For these reasons we used purpose-built software for data reduction based on the pipeline described in Jackson \& Jeffries (2010b), adapted where necessary to the characteristics of the WIYN telescope and Hydra spectrograph.
\subsection{Extraction of target spectra} Images of the science fields and associated flat and arc exposures were debiased and rebinned to compensate for the initial curvature of the spectra on the CCD image. The flat frames were the median of 11 tungsten-lamp flat exposures recorded in the afternoon prior to night-time observations. One dimensional spectra were optimally extracted from the science frames using the procedure described by Horne (1986). Counts per bin and uncertainties were calculated for a gain of 0.44 electrons/ADU and a readout noise of 3.1 electrons. \begin{figure} \centering \includegraphics[width = 85mm]{FigP2.eps} \caption{Representative spectra of stars in the Pleiades showing the presence of skylines in lower SNR spectra. The spectral types (SpT) shown are estimated from ($V$-$K$)$_0$ using the relationship proposed by Kenyon \& Hartmann (1995). The lower (red) plots indicate sections of the spectra that are masked to minimise the effect of sky lines for measurement of $RV$ and $v\sin i$. } \label{fig2} \end{figure} Arc spectra were extracted from Thorium-Argon lamp exposures recorded during the day prior to observations. Gaussian fits were used to determine the locations of 6 well-spaced unsaturated arc lines recorded through each fibre. Cubic polynomial fits to these were used to rebin spectra onto a common wavelength scale. The observations in September 2016 were centred on $\sim$7830\AA\ with a common wavelength range of 7620--8035\AA. Subsequent observations were centred on $\sim$7890\AA\ with a common wavelength range of 7681--8095\AA. A fine adjustment was made to the wavelength scale applied to each observation to compensate for drift between day-time calibration and night-time observations. The adjustment was determined by comparing the measured wavelength of six strong, unblended emission lines in the median sky spectra with their reported wavelengths.
The weighted mean skyline correction varied from $-0.6$ to $+1.5$~km\,s$^{-1}$ ($-0.02$\AA~to $+0.04$\AA) with, in most cases, an uncertainty of $\le$ 0.25~km\,s$^{-1}$, although a higher uncertainty (0.9~km\,s$^{-1}$) was found for configuration 5 (see Table 1). Target spectra were sky subtracted using fibre efficiencies estimated from the amplitude of the flat field spectra which, when checked, showed good agreement ($\sim$1.5 per cent rms) with the throughput measured from a twilight sky exposure of the same configuration on the same night. Spectra from repeated exposures in the same configuration (see Table 1) were corrected for heliocentric radial velocity and the median taken to produce final spectra, and corresponding uncertainties, in 0.1\AA~steps over the common wavelength ranges. Figure~2 shows typical spectra with spectral types estimated from ($V$-$K$)$_0$ (Kenyon \& Hartmann, 1995). Despite the care taken with sky subtraction, lower SNR spectra show residual sky lines which could adversely affect $v\sin i$ measurements if not masked prior to further analysis. A total of 411 spectra were collected for 324 separate targets in the Pleiades. Ten spectra were rejected from further analysis because of a low SNR ($\leq 9$), reducing the number of Pleiades targets to 319. \subsection{Measurement of radial velocities and spectral broadening} $RV$ and $v\sin i$ were measured by cross-correlating the median spectra of individual targets with the spectra of standard stars and then fitting a Gaussian function to characterise the peak in the cross-correlation function (CCF). $RV$s were determined from the position of the peak in the CCF and the spectral broadening was estimated from the increase in full width half maximum (FWHM) of the Gaussian fit with respect to FWHM$_0$, the CCF FWHM measured for slow-rotating stars of similar spectral type. For this analysis the spectra were truncated shortward of 7705\AA~to avoid strong telluric features.
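A minimal, self-contained sketch of this measurement principle (not the pipeline itself): synthetic spectra on a uniform log-wavelength grid, where a Doppler shift becomes a constant pixel offset, are cross-correlated and the CCF peak is refined by parabolic interpolation (standing in for the Gaussian fit used in the actual analysis). All numbers are illustrative:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

# On a uniform log-wavelength grid a Doppler shift v is a constant
# offset of ln(1 + v/c) / dlnlam pixels.
n = 4096
dlnlam = 1e-5                      # ~3 km/s per pixel
lnlam = np.log(7700.0) + dlnlam * np.arange(n)

rng = np.random.default_rng(42)
line_centres = rng.uniform(lnlam[200], lnlam[-200], 40)
width = 3e-5                       # intrinsic line width in ln(lambda)

def spectrum(centres, shift_lnlam=0.0):
    """Continuum-normalised spectrum with Gaussian absorption lines."""
    flux = np.ones(n)
    for c in centres:
        flux -= 0.5 * np.exp(-0.5 * ((lnlam - c - shift_lnlam) / width) ** 2)
    return flux - flux.mean()      # remove the continuum before correlating

v_true = 17.0                                   # km/s
template = spectrum(line_centres)
target = spectrum(line_centres, np.log(1 + v_true / C_KMS))

ccf = np.correlate(target, template, mode="full")
lags = np.arange(-n + 1, n)
k = np.argmax(ccf)

# Parabolic interpolation of the CCF peak for sub-pixel precision.
y0, y1, y2 = ccf[k - 1], ccf[k], ccf[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
v_meas = (lags[k] + frac) * dlnlam * C_KMS      # pixels -> km/s
```

With these settings the input shift of 17\,km\,s$^{-1}$ is recovered to well within a pixel; in the real analysis the CCF width, not just its position, carries the rotational broadening signal.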
Spectra were also masked at the positions of the strong skylines (see Fig.~2) and rebinned (with 10000 points) on a logarithmic wavelength scale. The masked spectra were cross-correlated with template spectra taken from the UVES atlas (Bagnulo et al. 2003). Four templates were used to approximately match the expected spectral types of Pleiades targets based on their (binned) absolute K magnitude (see Table 3). Cross-correlation yielded $RV$ and FWHM values for 319 targets, but 5 of these were rejected after visual inspection; 2 were well-separated spectroscopic binaries and 3 had very poor Gaussian fits to the peaks in their CCFs. \begin{table} \caption{Calibration standards. Target spectra are cross-correlated with spectra of CCF templates (see section 3.2). Spectra of $v\sin i$ standard stars are used to define calibration curves of $v\sin i$ versus FWHM (see section 3.3).} \begin{tabular}{llll} \hline No. &$M_K$ & CCF template &$v\sin i$\\ & range & (spectral type) & standard(s)\\\hline 1 & $>$5.5 & HD~34055 (M6V) & Gl\,133/Gl\,285\\ 2 & 4.9 - 5.5 & HD~130328 (M3III) & Gl\,133/Gl\,285\\ 3 & 4.4 - 4.9 & HD~156274 (M0V) & Gl\,184/Gl\,205\\ 4 & 3.9 - 4.4 & HD~10361 (K5V) & Gl\,184/Gl\,205\\\hline \end{tabular} \label{standards} \end{table} \subsection{Measurement precision} The precision of $RV$ and FWHM measurements was determined empirically from the change in $RV$ and FWHM between repeated measurements of the same target either in the same configuration (1a and 1b in Table 1) or for targets present in two or more configurations. To maximise the sample size, targets from Praesepe (see section 2.3) were also included in the analysis to give a total of 174 repeats compared with 65 in the Pleiades alone.
Assuming that the standard deviations of both the $RV$ and $v \sin i$ measurements are proportional to the FWHM of the CCF, the distribution of measurement uncertainties was characterised by a t-distribution with $\nu$ degrees of freedom, whose width is set by a scaling function comprising a fixed systematic component plus a component that depends on SNR. In the limit of $\nu \rightarrow \infty$ this would be equivalent to a Gaussian with a standard deviation given by the scaling function. Uncertainties were estimated from repeat observations of individual targets ($E_{RV}$=$\Delta RV/\sqrt{2}$~and $E_{\rm{FWHM}}$=$\Delta$FWHM/$\sqrt{2})$. The distributions of these were modelled in order to choose an appropriate $\nu$ and to obtain empirical values for the dimensionless parameters $A$, $B$, $\alpha$ and $\beta$ of the scaling functions $S_{RV}$ and $S_{\rm{FWHM}}$, where $S$ in each case is a measure of the standard deviation, defined as \begin{equation} S_{RV} = \rm{FWHM}\sqrt{A^2 + (B/SNR)^2}\, , \end{equation} where $A=0.025$ and $B=0.95$, and \begin{equation} S_{\rm{FWHM}} = \rm{FWHM}\sqrt{\alpha^2 + (\beta/SNR)^2}\, , \end{equation} where $\alpha=0.036$ and $\beta =0.68$. Given that the FWHM is $\geq 22$ km\,s$^{-1}$, this implies absolute uncertainties in $RV$ of at least 0.5 km\,s$^{-1}$ and FWHM uncertainties of at least 0.8 km\,s$^{-1}$, once the SNR greatly exceeds $\sim 40$. The upper plot in Fig.~3 shows the cumulative distribution of the normalised uncertainty in $RV$ (i.e. the ratio of the measured uncertainty to the uncertainty predicted by $S_{RV}$ using the best-fitting values of $A$ and $B$). A t-distribution with $\nu=4$ degrees of freedom is a good match to the data. Note that a finite value of $\nu$ indicates that the tails of the distribution are more prominent than for a normal distribution and that a 68.3 per cent confidence interval would be given by $1.14\, S_{RV}$ for $\nu=4$.
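Equations 2 and 3 can be encoded directly; a short sketch evaluating the high-SNR floors quoted above for the minimum CCF width of 22\,km\,s$^{-1}$:

```python
import math

def S_RV(fwhm, snr, A=0.025, B=0.95):
    """Empirical RV uncertainty scale (eqn 2), in the units of FWHM."""
    return fwhm * math.hypot(A, B / snr)

def S_FWHM(fwhm, snr, alpha=0.036, beta=0.68):
    """Empirical FWHM uncertainty scale (eqn 3)."""
    return fwhm * math.hypot(alpha, beta / snr)

# At high SNR the fixed systematic term dominates; for a CCF width of
# 22 km/s this reproduces the quoted uncertainty floors.
floor_rv = S_RV(22.0, 1e6)      # -> 0.55 km/s
floor_fwhm = S_FWHM(22.0, 1e6)  # -> 0.79 km/s
```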
The lower plot shows the distribution of normalised measurement precision for FWHM. In this case the tail of the normalised distribution is slightly more extended, such that a t-distribution with $\nu=3$ provides a better fit and a 68.3 per cent confidence error bar would be given by $1.20\, S_{\rm FWHM}$. \begin{figure} \centering \includegraphics[width = 70mm]{FigP3.eps} \caption{Measurement precision of $RV$ and FWHM estimated from the CCF with template field stars (see Table~3): Plots show the cumulative distribution functions of the uncertainties (normalised using equations~2 and~3 with the parameters shown on the plots).} \label{fig3} \end{figure} \begin{table*} \caption{Measured values of relative $RV$, FWHM, $v\sin i$ and $R\sin i$. When the relative uncertainty in $v\sin i$ is greater than 30 per cent an upper limit of $v\sin i$ is shown based on the measurement uncertainty in FWHM (see section 3.4.3). The corresponding upper limit in $R\sin i$ is treated as left-censored data in the maximum likelihood determination of the over-radius.
A sample of the Table is shown here, the full version is available electronically.} \begin{tabular}{lrrrrrrrrrrrr} \hline Target name & $M_K$ &$\log L/L_{\odot}$&Period &SNR&$RV$&$S_{RV}$&FWHM&$S_{\rm FWHM}$& FWHM$_0$&$v\sin i$&$S_{v\sin i}$&$R\sin i$\\ (as 2MASS) &(mag)& &(d) & &(km/s)&(km/s)&(km/s)&(km/s)&(km/s)&(km/s)&(km/s)&($R_{\odot}$) \\\hline J03414895+2303235 & 7.51 & -2.26 & 0.239 & 9.9 & -1.2 & 2.9 & 29.2 & 2.3 & 24.2 & 18.2 & 4.1 & 0.087 \\ J03415671+2358434 & 7.57 & -2.27 & 0.401 & 15.9 & 0.7 & 2.4 & 37.6 & 2.1 & 24.2 & 28.4 & 2.2 & 0.228 \\ J03415864+2257020 & 6.22 & -1.68 & 6.842 & 36.3 & -1.2 & 0.9 & 24.7 & 1.0 & 24.2 & $<$10.6 & --- & $<$1.44 \\ J03421789+2406578 & 7.29 & -2.15 & 0.603 & 47.7 & 0.1 & 0.9 & 26.9 & 1.0 & 24.2 & 13.2 & 2.5 & 0.159 \\ J03422626+2351386 & 7.77 & -2.36 & 0.496 & 13.7 & -1.1 & 1.9 & 25.4 & 1.6 & 24.2 & $<$12.8 & --- & $<$0.13 \\ J03422941+2247261 & 5.24 & -1.25 & 0.325 & 73.7 & 3.1 & 2.1 & 74.8 & 2.8 & 24.7 & 50.8 & 1.4 & 0.330 \\ J03423396+2411008 & 7.74 & -2.34 & 0.564 & 28.5 & -0.5 & 1.2 & 27.9 & 1.2 & 24.2 & 15.9 & 2.6 & 0.179 \\ J03424184+2400158 & 7.00 & -2.01 & 0.671 & 65.3 & -0.5 & 0.9 & 30.4 & 1.1 & 24.2 & 20.2 & 1.9 & 0.271 \\ J03424239+2320218 & 5.77 & -1.53 & 0.269 & 45.7 & 0.6 & 1.1 & 33.4 & 1.3 & 24.2 & 23.8 & 1.7 & 0.128 \\\hline \end{tabular} \label{vsini} \end{table*} \subsection{$RV$ and $v\sin i$ for the Pleiades targets} Table 4 gives the measured $RV$ and FWHM and estimated uncertainties of the 314 Pleiades targets with well defined CCFs that can be used to determine stellar $v\sin i$. Where repeated measurements were made of the same target the values shown in Table~4 are the weighted (by $S^{-2}$) mean values. \subsubsection{Cluster $RV$s} The $RV$s in Table 4 are measured relative to the central $RV$ of the cluster such that $RV_{\rm rel}=RV-RV_0$ where $RV_0$ is the median value of the target $RV$s~measured relative to a particular CCF standard (see Table~3). 
The dispersion of the measured $RV_{\rm rel}$, estimated from the median absolute deviation (MAD) of the target $RV$s, is $\sigma_{t}$=1.2\,km\,s$^{-1}$ (using the approximate relation $\sigma_{t}=$MAD/0.68 for a t-distribution with $\nu=4$). $\sigma_{t}$ is due to the combined effect of (a) intrinsic dispersion in the cluster, (b) measurement uncertainties and (c) the effects of binarity. It is used here to define a window of acceptable $RV$s for Pleiades membership as $|RV_{\rm rel}|<10$\,km\,s$^{-1}$. Using this criterion eliminates 9 targets as possible non-members or short period binary systems. \begin{figure} \centering \includegraphics[width = 85mm]{FigP4.eps} \caption{FWHM of the CCF measured on targets in the Pleiades (blue circles) and Praesepe (red triangles) as a function of $M_K$. Dashed vertical lines delineate the ranges where particular templates were used to calculate the CCF (see Table 3). Horizontal bars show the derived zero-point, FWHM$_0$, for slow-rotating stars as a function of $M_K$.} \label{fig4} \end{figure} \begin{figure} \centering \includegraphics[width = 77mm]{FigP17.eps} \caption{Calibration curves of $v\sin i$ as a function of the FWHM of the CCF. Results are shown for the four CCF templates listed in Table~3 cross-correlated with artificially broadened $v\sin i$ standards of similar spectral type (see section 3.4.3). Curves are offset on the vertical axis in increments of 20\,km\,s$^{-1}$.} \label{fig17} \end{figure} \subsubsection{The FWHM zeropoint} Figure~4 shows the FWHM of the CCFs as a function of $M_K$ for the Pleiades targets, with vertical dashed lines showing the absolute magnitude ranges used to decide which templates were used to calculate the CCF. Also shown are the FWHM values for slow-rotating targets in Praesepe (i.e. targets with $(v\sin i)_p<3$\,km\,s$^{-1}$).
The FWHM of 93 (predicted) slowly rotating Praesepe targets together with 22 similar slow rotators in the Pleiades were used to define the median CCF width for slow-rotating stars, FWHM$_0$, in the respective $M_K$ bins. The relationship between the FWHM$_0$ and $M_K$ is shown in Fig.~4 and given for each star in Table~4. \subsubsection{$v \sin i$ values and their precision} The target $v\sin i$ values were determined from $\Delta$FWHM=FWHM-FWHM$_0$ using calibration curves produced by artificially broadening the spectra of bright, slowly rotating standard stars measured at the beginning and end of each observing night. The $v\sin i$ standards used to calibrate each range of M$_K$ are given in the final column of Table 3. The broadening kernel assumed a linear limb darkening coefficient of 0.6 (Claret, Diaz-Cordoves \& Gimenez 1995). Figure~5 shows the relationship between $v \sin i$ and $\Delta$FWHM for the range of spectral types in our sample. For values of $v \sin i > 60$ km\,s$^{-1}$ we found that the exact calibration depended on which template star was chosen, even at the same spectral type. Instead, for these rapid rotators, the relationship was linearly extrapolated from smaller values, which we found provided a good match to the cross-correlation FWHM values obtained from high SNR spectra of slowly rotating Pleiades members of similar absolute magnitude that were artificially broadened. This comparison also reveals that the calibration uncertainties appear to grow from very small values at low $v \sin i$ to $\sim \pm 5$ per cent for $v \sin i \geq 70$ km\,s$^{-1}$. However, since fewer than 5 per cent of the sample used to determine the over-radius in Pleiades stars (see section 4) are in this regime, this systematic calibration uncertainty leads to $<1$ per cent uncertainty in our final results and we neglect it in the rest of the analyses. 
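The artificial broadening step can be sketched with a standard rotational broadening kernel (e.g. Gray's formulation) using the linear limb-darkening coefficient of 0.6 adopted above; the grid spacing, intrinsic line width and $v\sin i$ below are illustrative, not the values used in the actual calibration:

```python
import numpy as np

# Rotational broadening kernel with linear limb darkening eps = 0.6.
def rot_kernel(vgrid, vsini, eps=0.6):
    x = vgrid / vsini
    k = np.zeros_like(vgrid)
    m = np.abs(x) < 1
    k[m] = (2 * (1 - eps) * np.sqrt(1 - x[m] ** 2)
            + 0.5 * np.pi * eps * (1 - x[m] ** 2))
    return k / k.sum()             # normalise to unit area on the grid

dv = 0.5                           # km/s per grid step
v = np.arange(-200, 200.001, dv)
line = np.exp(-0.5 * (v / 10.0) ** 2)    # intrinsic (Gaussian) profile

# Convolving a slow rotator's profile with the kernel mimics a fast rotator.
broadened = np.convolve(line, rot_kernel(v, 40.0), mode="same")

def fwhm(profile):
    half = profile.max() / 2
    idx = np.where(profile >= half)[0]
    return (idx[-1] - idx[0]) * dv
```

The calibration curves are, in effect, the inverse mapping from the resulting increase in FWHM back to the $v\sin i$ applied in the kernel.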
The calibration curves in Fig.~5 vary approximately as $(v\sin i)^2 \propto \Delta$FWHM for $v \sin i <60$ km\,s$^{-1}$ and the precision in $v\sin i$ varies as $S_{v\sin i}=\tfrac{\partial v\sin i}{\partial \rm{FWHM}}S_{\rm{FWHM}}$. Using these expressions the \textit{relative} precision in $v\sin i$, defined here as $E_{v\sin i} = \Delta v\sin i/(\sqrt{2}v\sin i)$, scales as \begin{equation} \frac{S_{v\sin i}}{v\sin i} \simeq \frac{\sqrt{\alpha^2 + (\beta/SNR)^2}}{2(1 - \rm{FWHM}_0/\rm{FWHM})}\, . \end{equation} This formulation should be reasonably accurate up to $v \sin i \sim 60$ km\,s$^{-1}$, but may underestimate the uncertainties for the small number of very fast rotators in our sample. Figure~6 shows the variation of the relative uncertainty in $v\sin i$ with FWHM/FWHM$_0$ for increasing levels of SNR. These plots were used to calculate the $v \sin i$ uncertainty and also to define a threshold for FWHM/FWHM$_0$, as a function of SNR, that marks where the level of rotational broadening is large enough to yield a resolved value of $v\sin i$. We choose this threshold such that the relative uncertainty in $v \sin i$ is $<0.3$; targets with FWHM/FWHM$_0$ below this level are assigned an upper limit to $v\sin i$ corresponding to the threshold. The threshold varies from FWHM/FWHM$_0 = 1.12$ for SNR$=10$ to FWHM/FWHM$_0 = 1.06$ for SNR$=100$. These threshold FWHM values correspond to $v \sin i$ upper limits of $<13$ km\,s$^{-1}$ and $<10$ km\,s$^{-1}$ respectively, with a small dependence on spectral type. \begin{figure} \centering \includegraphics[width = 80mm]{FigP5.eps} \caption{Measurement precision of $v\sin i$. The plot shows the relative uncertainty $S_{v\sin i}/v\sin i$ (see eqn.~4) as a function of the measured FWHM for increasing levels of SNR.
Tabulated values on the plot show the minimum levels of FWHM/FWHM$_0$ at each SNR value that can be discerned to yield a 30 per cent uncertainty in $v\sin i$.} \label{fig5} \end{figure} \subsubsection{Comparison of $v\sin i$ with other work} The empirical analysis described above gives only a partial estimate of the absolute accuracy, which depends on both the measurement precision and the uncertainty in the absolute calibration. To assess the calibration accuracy, our values for $v\sin i$ were compared with those reported in the literature. Matches were found for 31 targets with $|RV_{\rm rel}|\le10$\,km\,s$^{-1}$ and relative uncertainty in $v \sin i$ $\le$~0.3. The comparison is shown in Fig.~7 and demonstrates satisfactory agreement between the two datasets. There are three distinct outliers, two of which, marked (a) and (c) in Fig.~7, are identified as spectroscopic binaries by Mermilliod et al. (1992). Linear regression of the two datasets (excluding the three outliers) shows no significant systematic difference between our $v\sin i$ measurements and the literature values. \begin{figure} \centering \includegraphics[width = 75mm]{FigP6.eps} \caption{A comparison of $v\sin i$ values (with fractional uncertainties of $<$\,30 per cent and $|RV_{\rm rel}|$$\le$10\,km\,s$^{-1}$) with literature values (Queloz et al. 1998, Marilli et al. 1997, Stauffer and Hartmann 1987, Soderblom et al. 1993, O'Dell et al. 1994 and Krishnamurthi et al. 1998). The solid line is a linear regression, neglecting the three outliers J03484894+2416027, J03434841+2511241 and J03475973+2443528, marked as a, b and c respectively on the plot. Error bars are 68 per cent empirical uncertainties in our measurement precision.} \label{fig6} \end{figure} \begin{figure} \centering \includegraphics[width = 85mm]{FigP7.eps} \caption{Variation of the projected radii, $R\sin i$, with M$_K$ for Pleiades K- and M-dwarfs.
Diamonds with error bars show targets with a relative uncertainty in $R\sin i$ $\le$~30 per cent. Triangles show upper limit values for targets with higher levels of uncertainty. Solid and dashed lines show predicted radii in solar units of the BHAC15 and Dartmouth (Dotter et al. 2008) 120\,Myr solar metallicity isochrones.} \label{fig7} \end{figure} \begin{figure*} \centering \includegraphics[width = 180mm]{FigP8.eps} \caption{Normalised radii in the Pleiades. Plot A shows $r\sin i$ as a function of luminosity (see Table 2). Diamonds with error bars show $r\sin i$ for targets with a relative uncertainty $\le$30 per cent. Triangles show upper limits for targets with $(v\sin i)_p >10$~km\,s$^{-1}$ and higher levels of uncertainty ($\ge 30$ per cent). Plot B shows the number density ($P_{r\sin i}$) of targets with a relative uncertainty $\le 30$ per cent as a solid histogram, with the open histogram including the stars with upper limits at their upper limit values. The dotted line shows $P_{r\sin i}$ for stars with the radii that are predicted by the BHAC15 evolutionary model (i.e. an over-radius $\rho=1$). The solid red line shows the maximum likelihood model, which corresponds to an average over-radius of $\rho \sim 1.14$ relative to the BHAC15 model.} \label{fig8} \end{figure*} \section{Comparison of measured radii with current evolutionary models} The measurements of $v\sin i$ in Table 4 are used with the reported rotation periods to estimate projected radii $R\sin i$ (using Eqn. 1) for Pleiades members with $|RV_{\rm rel}|<10$\,km\,s$^{-1}$. The uncertainty in $R\sin i$ is estimated on the basis that the uncertainty in $v\sin i$ is much greater than the uncertainty in period, giving a {\it fractional} uncertainty in $R\sin i$ of $S_{v\sin i}/v\sin i$, as in Eqn.~4. For targets where this fractional uncertainty is greater than 0.3, we assign upper limits to $R\sin i$ as calculated from the target's period and the upper limit to $v\sin i$.
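As a worked example (second row of Table 4, J03415671+2358434), a minimal sketch combining the period-velocity conversion of section 2.1 with the relative precision of eqn 4:

```python
import math

# Worked example for J03415671+2358434 (Table 4).
vsini, P = 28.4, 0.401             # km/s and days
fwhm, fwhm0, snr = 37.6, 24.2, 15.9

# Projected radius: R sin i = v sin i * P / (2*pi*R_sun/day ~ 50 km/s)
R_sini = vsini * P / 50.0          # -> ~0.228 R_sun, as tabulated

# Eqn 4: fractional precision of v sin i, which carries over to R sin i
alpha, beta = 0.036, 0.68
rel = math.hypot(alpha, beta / snr) / (2 * (1 - fwhm0 / fwhm))
S_vsini = rel * vsini              # -> ~2.2 km/s, matching the tabulated value
```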
In the subsequent analyses the sample is restricted to those objects with $(v \sin i)_p > 10$ km\,s$^{-1}$. This threshold is chosen to approximately match the resolution limit of our measurements of $v \sin i$. Whilst our analyses {\it do} incorporate upper limits, enlarging the sample to include more slowly rotating stars simply adds many upper limits that do not constrain any over-radius and just add more noise to the results. The reader should then be aware that our radius measurements can only apply to those relatively fast-rotating stars that are in this restricted sample. The effect of increasing the $(v \sin i)_p$ threshold to a more restrictive 15 km\,s$^{-1}$ (and decreasing the sample size) has no systematic influence on the results (see section 5.1). Figure 8 shows $R\sin i$ versus $M_K$ for the restricted sample together with predicted model radii $R_m$ from the BHAC15 and Dartmouth evolutionary codes (Dotter et al. 2008). For practical purposes these two models are identical over the range of the data considered here. In what follows, the ratio of projected radius to model radius at the target luminosity, $r\sin i = R\sin i/R_m$, is referred to as the ``normalised radius''. In the absence of radius inflation the distribution of $r\sin i$ would simply reflect the values of $\sin i$ in the sample, convolved with any measurement uncertainties and biases. Where targets show radius inflation (i.e. $R/R_m>1$), the $r\sin i$ distribution would be scaled by a similar amount. Fig.~9 recasts Fig.~8 to show $r\sin i$ as a function of $\log L/L_{\odot}$ for targets with $-2.7 \leq \log L/L_{\odot} \leq -0.6$. According to BHAC15 this range of $\log L/L_{\odot}$ is roughly equivalent to $0.8 \geq M/M_{\odot} \geq 0.1$, which spans spectral types of $\sim$K3 to $\sim$M5 (Kenyon \& Hartmann 1995). There are a total of 172 targets with measured values of $r\sin i$ with a relative uncertainty $\le$0.3.
A further 22 targets have only upper limits to their $r\sin i$ and are represented as left censored data in subsequent analyses. The peak in the distribution {\it appears} higher than would be expected for stars with a random alignment of spin axes, $\overline{r\sin i} \sim \pi/4$, indicating that the stellar radii may be systematically larger than predicted by the models. \subsection{Maximum likelihood method} A maximum likelihood method is used to determine the average radius ratio or over-radius, $\rho=R/R_m$, of the measured data relative to BHAC15 model radii, as a function of luminosity. The approach is similar to that used by Lanzafame et al. (2017) to estimate the over-radius of higher mass stars in the Pleiades. In the present case the probability of achieving a particular value of $r\sin i$, written as $\phi(r\sin i|P_j,L_j,\rho)$, is calculated for individual targets depending on their period, $P_j$, luminosity, $L_j$, and $\rho$, rather than using a uniform probability density function for all targets with measured $r\sin i$. \begin{figure} \centering \includegraphics[width = 80mm]{FigP9.eps} \caption{The probability density of measuring a normalised radius $r\sin i$ for a representative star of mass $0.4\,M_{\odot}$, SNR$=50$ and $(v\sin i)_p= 20$\,km\,s$^{-1}$. The black line shows the ideal case of precise measurements of $P$ and $v\sin i$ on a single star. The dashed red histogram shows the combined effects of SDR (assumed to increase $r \sin i$ by a fixed 1 per cent, see section 4.2.1) and binarity. The solid green histogram shows the net effect of SDR, binarity and measurement uncertainties.} \label{fig9} \end{figure} \subsection{Probability function for measured data} In the ideal case of error-free measurements of $P$ and $v\sin i$ for single stars, with a random alignment of their spin axes in space, the probability density increases with inclination as $\phi(i)=\sin i$.
Hence the probability density function of $r\sin i$ is \begin{equation} \phi(r\sin i|\rho) = \frac{1}{\rho}\tan[\arcsin(r\sin i/\rho)] \; {\rm for} \; r\sin i \le \rho. \end{equation} In practice $\phi$ is modified by the effects of surface differential rotation, binarity and measurement errors. These effects are investigated in the following subsections and, as an example, Figure~10 shows how $\phi$ would be modified for a representative star of mass 0.4\,$M_{\odot}$, SNR$=50$ and $(v \sin i)_p= 20$\,km\,s$^{-1}$. \subsubsection{Surface differential rotation} Surface differential rotation (SDR) can lead to systematic errors in the estimated radii, because solar-type SDR causes the rate of surface rotation to reduce towards the poles (Krause \& Raedler 1980). If the starspots responsible for light curve modulation are distributed over a range of latitudes then the measured rotation rate, $\Omega_m$, will be less than the rotation rate at the equator, $\Omega_{\rm eq}$. Reinhold et al. (2013) measured periods for thousands of active stars in the Kepler field. In most cases a second period close to the rotation period was detected, which they interpreted as resulting from SDR. For low mass stars they found an average difference in rotation rate of $\Delta \Omega = 0.08$ radians/day between the two measured periods, almost independent of the measured period. This is a small fraction of the average angular frequency, $\overline{\Omega_m}= 14$\,radians/day, of our Pleiades targets that have a measured $r \sin i$, and is in agreement with the analysis of multi-periodic stars in the Pleiades data by Rebull et al. (2016b), where no evidence could be found for differential rotation among the fast-rotating M-dwarfs that are the subject of this paper. Taking $\Omega_{\rm eq}-\Omega_m$ as \textit{approximately} equal to $\Delta \Omega$, the fractional increase in the measured period compared to the true equatorial period, $\Delta\Omega/\overline{\Omega_m}$, would be $\leq 1$ per cent.
The corresponding increase in $r\sin i$ will be similarly small. The potential effects of SDR are shown in Fig.~10 for illustrative purposes, but SDR is neglected as insignificant in our main analysis and results. \subsubsection{Binarity} A proportion of the targets will be part of unresolved binary systems. Short-period binary systems are easily identified from the offset in $RV_{\rm rel}$ from the cluster mean and/or double peaks in their CCF, and these are rejected from the sample. However, a fraction of the retained targets will be in longer period spectroscopic binaries, resulting in a broadening and shift in $\phi$ for two separate reasons, both of which are accounted for with a Monte Carlo simulation of the binary population. We assume a binary frequency for low-mass Pleiades stars of 30 per cent (Duch{\^e}ne \& Kraus 2013). We also adopt the lognormal period distribution and flat mass ratio and eccentricity distributions found for field stars by Raghavan et al. (2010). First, the CCF may be broadened due to the unresolved velocity difference between the two components. In these cases the measured $v\sin i$ will (on average) be systematically larger than the true $v\sin i$ of the primary star, by an amount that depends on the difference in $RV$ and relative luminosity of the primary and secondary. The effect is modelled as described in Appendix A of Jackson et al. (2016). For each target, the properties of possible secondary stars are drawn at random from the binary distribution described above. The increase in the FWHM of the CCF is then estimated as a function of the line of sight velocity of the primary and secondary stars (relative to the centre of mass) and the relative flux contribution of the secondary at the wavelength of the observed spectra. This is done by measuring the FWHM of a Gaussian profile fitted to the sum of two separate Gaussian profiles representing the primary and secondary stars.
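The CCF-broadening step just described can be sketched numerically. The simplified fragment below measures the half-maximum width of the blended profile directly on a velocity grid rather than fitting a Gaussian as in Jackson et al. (2016), and the RV offset and 30 per cent secondary flux contribution are arbitrary illustrative values:

```python
import numpy as np

def gaussian(v, center, fwhm, amplitude):
    # Gaussian profile parameterised by its FWHM
    sigma = fwhm / (8.0 * np.log(2.0)) ** 0.5
    return amplitude * np.exp(-0.5 * ((v - center) / sigma) ** 2)

def measured_fwhm(profile, v):
    # numerical FWHM of a single-peaked profile via half-maximum crossings
    half = profile.max() / 2.0
    above = v[profile >= half]
    return above[-1] - above[0]

v = np.linspace(-100.0, 100.0, 4001)       # velocity grid, km/s
# hypothetical primary plus fainter secondary, offset by 15 km/s in RV
primary = gaussian(v, 0.0, 30.0, 1.0)
secondary = gaussian(v, 15.0, 30.0, 0.3)   # 30 per cent flux contribution
blend_fwhm = measured_fwhm(primary + secondary, v)
bias = blend_fwhm / measured_fwhm(primary, v)  # > 1: v sin i over-estimated
```

Averaging this ratio over many random draws of secondary properties gives the binarity bias described in the text.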
The ratio of $v\sin i$ determined from the FWHM of the combined profile to the true $v\sin i$ of the primary is averaged to determine the bias in $v\sin i$ and hence $r\sin i$ caused by binarity. The typical effect is to broaden the distribution of $\phi$ and produce a small tail of detections with $r\sin i > \rho$. This potential bias in $v\sin i$ decreases with increasing $v\sin i$, but in our sample it increases the estimated $r \sin i$ values by an average of $<2$ per cent. Second, the model radius used to calculate $r \sin i$ is systematically over-estimated in binary systems, because the system luminosity is larger than the luminosity of the primary star. This leads to an {\it under-estimate} of $r \sin i$ if the effect is ignored. For each binary in the simulation we estimate the luminosity of the primary and the system and use this to calculate the bias in $r \sin i$ that is introduced. The average effect for stars in our sample is to decrease the estimated $r \sin i$ values by 3 per cent, and the $\phi$ distribution is also modified by the appearance of a small ``bump'' at $r\sin i \sim 0.8$ due to binaries with similar mass components (see Fig.~10). The net effect of binarity for our sample is to cause a broadening of the $\phi$ distribution and to decrease the estimated $r \sin i$ by about 1 per cent, although this number will depend linearly on the assumed binary frequency and in a more complex way on the assumed details of the distributions of mass ratio and orbital period. In what follows we will usually neglect the effects of binarity, except where the details of the $r \sin i$ distribution become important in section 5.3.2. \subsubsection{Measurement uncertainties} Uncertainties in the measurements will broaden the distribution of $\phi$ according to the expected fractional uncertainty in $r\sin i$.
The uncertainty in $P$ is assumed to be small compared to the uncertainty in $v\sin i$, hence the fractional uncertainty in $r\sin i$ also equals $S_{v\sin i}/v\sin i$. A Monte Carlo method is used to calculate the effect of measurement uncertainties on $\phi$, where the fractional uncertainties in trial values of $r\sin i$ are the product of the fractional uncertainty in $v\sin i$ given by Eqn.~4 and random values drawn from a Student's-t distribution with $\nu=3$. Measurement uncertainties broaden the peak in $\phi$ but have almost no effect on $\overline{r\sin i}$ (see Fig.~10). The effects of measurement uncertainties are included in all of our subsequent analyses. \subsubsection{Multiple periods} Any uncertainty in $P$ is neglected in our analyses, but in a fraction of cases -- 36 of the 194 stars used in the final analysis -- Rebull et al. (2016a) report more than one possible period from the Kepler K2 light curves. In all these cases we have used the first period identified by Rebull et al. as $P_1$, the most likely rotation period of the star. The status and cause of these multiple periods is discussed in detail by Rebull et al. (2016b). For the fast rotating M-dwarfs that constitute most of our sample, the multiple periods are probably due to unresolved binary companions. Given that Rebull et al. (2016a) find very little correlation between photometric amplitude and either rotation period or photometric colour for $V-K>2$ (which applies to all our targets), we expect that in the majority of these cases the rotation period reported as $P_1$ is the rotation period of the brighter primary star and that this corresponds to the $v \sin i$ we have measured. However, there is a possibility that some of these periods are actually the period of an unresolved, fainter secondary star and that the combination of $P$ and $v \sin i$ is in error. Without further information we have no way of knowing the probability that the measured period, $P_1$, is that of the primary star.
To test whether this could have any implications for our results we simply repeated the analysis after excluding these stars. The average over-radius (reported in section 5 and see Table~6) inferred from the filtered dataset was increased by 1 per cent, which is smaller than the size of the statistical uncertainties. This small effect is entirely consistent with our earlier assessment of the effect on the $r \sin i$ determination of including unresolved binary systems if $P_1$ is in fact the period of the brighter primary in all cases. \subsection{Censored data} A probability distribution $\phi(r\sin i|P_j,m_j,\rho)$ is calculated for the $j^{\rm th}$ target. The value of this function at the measured $r\sin i_j$ defines the probability assigned to a particular target, $\phi(r\sin i_j|P_j,m_j,\rho)$. Targets with a relative measurement uncertainty $>0.3$ are treated as left censored data where an upper limit $r\sin i_{\rm UL}$ is calculated from the target's period and upper limit to $v\sin i$. The probability density for these stars is estimated as \begin{equation} \phi(r\sin i < r\sin i_{\rm UL}|P_j,m_j,\rho) = \frac{\int_{0}^{r\sin i_{\rm UL}} \phi(r\sin i|P_j,m_j,\rho)\,d(r\sin i)}{r \sin i_{\rm UL}}\, , \end{equation} corresponding to the average value of $\phi$ between 0 and $r\sin i_{\rm UL}$. \subsection{Estimating the best fitting over-radius} The best fit value of $\rho$ is determined by maximising the log-likelihood function \begin{equation} \ln \mathscr{L} = \displaystyle\sum_{j=1}^{l}\ln \phi(r\sin i_j) + \displaystyle\sum_{k=1}^{m}\ln \phi(r\sin i < r\sin i_{{\rm UL},k})\, , \end{equation} where the first sum runs over the $l$ targets with measured values of $r\sin i$ and the second over the $m$ targets with $r\sin i$ upper limits. The log-likelihood is computed over a range of values of $\rho$.
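A minimal numerical sketch of eqs. 5--7, assuming error-free measurements, isotropic spin axes and a synthetic sample, is given below. A binned likelihood is used so that the integrable singularity of eq. 5 at $r\sin i = \rho$ stays finite; the bin edges, sample size and censoring limit are arbitrary illustrative choices:

```python
import numpy as np

def cdf(x, rho):
    # P(r sin i <= x | rho) for randomly oriented spin axes,
    # obtained by integrating the density of eq. (5)
    z = np.clip(np.asarray(x, dtype=float) / rho, 0.0, 1.0)
    return 1.0 - np.sqrt(1.0 - z**2)

def log_likelihood(rho, detections, limits, edges):
    # binned analogue of eq. (7): detections contribute the log of their
    # bin probability; censored targets contribute the eq. (6) average
    p = np.diff(cdf(edges, rho))
    counts, _ = np.histogram(detections, bins=edges)
    if np.any((counts > 0) & (p <= 0.0)):
        return -np.inf                     # rho too small for the data
    ll = np.sum(counts[counts > 0] * np.log(p[counts > 0]))
    ll += np.sum(np.log(cdf(limits, rho) / limits))
    return ll

# synthetic population with a true over-radius of 1.14
rng = np.random.default_rng(0)
sini = np.sqrt(1.0 - rng.uniform(size=2000)**2)  # cos i uniform: isotropic axes
rsini = 1.14 * sini
detections = rsini[rsini > 0.3]
limits = np.full((rsini <= 0.3).sum(), 0.3)      # left-censored at 0.3

edges = np.linspace(0.3, 1.2, 19)
rhos = np.linspace(1.05, 1.30, 251)
best = rhos[int(np.argmax([log_likelihood(r, detections, limits, edges)
                           for r in rhos]))]
```

The grid maximum recovers the input over-radius to within the statistical precision of the synthetic sample; the paper's analysis additionally convolves $\phi$ with the SDR, binarity and measurement-error effects of section 4.2.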
The maximum of the likelihood function, $\ln{\widehat{\mathscr{L}}}$, is used to define the most likely value of $\rho$, and the standard deviation of the likelihood function is used to estimate the uncertainty. \begin{figure} \centering \includegraphics[width = 77mm]{FigP18.eps} \caption{Plot A shows the rotation periods of low mass stars in the Pleiades as a function of luminosity. Circles indicate stars with measured spectra; those circles that are filled are the subset of stars with a measured $r\sin i$ value with uncertainty $\leq 30$ per cent. Crosses show all other stars with measured periods reported by Rebull et al. (2016a), Covey et al. (2016) and Hartman et al. (2010) which were {\it not} included in our observations. The solid red line marks the locus of stars with $(v\sin i)_p = 10$\,km\,s$^{-1}$; stars below this line are included in the $r \sin i$ analysis. Dashed vertical lines indicate boundaries between the upper, central and lower luminosity bins with numbers indicating the number of $r\sin i$ values in each bin. Plot B shows the $M_V$ versus $(V-K)_0$ colour magnitude diagram using the same symbols and colour coding.} \label{fig18} \end{figure} \section{Results} In this section we present the measured radii of fast rotating low mass stars in the Pleiades relative to the radii predicted for a 120\,Myr cluster using BHAC15 evolutionary models. Figure 11 shows the measured period versus luminosity of Pleiades targets with $\log L/L_{\odot} < -0.2$ reported in Rebull et al. (2016a), Covey et al. (2016) and Hartman et al. (2010), together with those stars for which we obtained spectroscopy and those stars which were included as part of the $r \sin i$ analysis. \begin{table*} \caption{The maximum likelihood value of radius ratio, $\rho$, for faster rotating low mass stars in the Pleiades relative to the radii predicted for a 120\,Myr solar metallicity cluster using the BHAC15 evolutionary model.
$N_{\rm targ}$ is the number of targets included in the calculation of the likelihood function and $N_{\rm rsin i}$ is the number of those targets with measured values of $r\sin i$.} \begin{tabular}{llllllll} \hline Subset with bins & Bin & $N_{\rm targ}$ & $N_{\rm rsini}$ & $\log L$ & $\overline{r\sin i}$ &$\ln{\widehat{\mathscr{L}}}$ & $\rho$ \\\hline All targets in bin & All & 194 & 172 & -1.823 & 0.883 & -75.3 & 1.138 $\pm$ 0.013 \\ & Lower & 69 & 63 & -2.239 & 0.907 & -24.1 & 1.115 $\pm$ 0.028 \\ & Central & 63 & 53 & -1.874 & 0.863 & -32.4 & 1.146 $\pm$ 0.033 \\ & Upper & 62 & 56 & -1.308 & 0.876 & -19.0 & 1.151 $\pm$ 0.021 \\\hline Slower rotators & All & 106 & 88 & -1.788 & 0.972 & -25.7 & 1.164 $\pm$ 0.024 \\ & Lower & 34 & 30 & -2.197 & 1.000 & -5.0 & 1.166 $\pm$ 0.043 \\ & Central & 36 & 28 & -1.874 & 0.991 & -10.3 & 1.192 $\pm$ 0.035 \\ & Upper & 36 & 30 & -1.315 & 0.925 & -9.5 & 1.127 $\pm$ 0.041 \\\hline Fast rotators & All & 88 & 84 & -1.866 & 0.790 & -49.2 & 1.121 $\pm$ 0.020 \\ & Lower & 35 & 33 & -2.281 & 0.820 & -17.4 & 1.087 $\pm$ 0.029 \\ & Central & 27 & 25 & -1.875 & 0.720 & -19.4 & 1.081 $\pm$ 0.039 \\ & Upper & 26 & 26 & -1.297 & 0.821 & -9.1 & 1.166 $\pm$ 0.034 \\\hline Lower amplitude & All & 98 & 86 & -1.886 & 0.803 & -56.0 & 1.056 $\pm$ 0.022 \\ light curves & Lower & 37 & 33 & -2.241 & 0.833 & -16.5 & 1.063 $\pm$ 0.031 \\ & Central & 33 & 28 & -1.886 & 0.751 & -25.1 & 1.057 $\pm$ 0.046 \\ & Upper & 28 & 25 & -1.415 & 0.825 & -13.5 & 1.080 $\pm$ 0.050 \\\hline Higher amplitude & All & 91 & 83 & -1.795 & 0.965 & -9.2 & 1.165 $\pm$ 0.017 \\ light curves & Lower & 32 & 30 & -2.237 & 0.987 & -4.9 & 1.167 $\pm$ 0.038 \\ & Central & 29 & 24 & -1.866 & 0.994 & -4.0 & 1.181 $\pm$ 0.037 \\ & Upper & 30 & 29 & -1.255 & 0.920 & -0.1 & 1.165 $\pm$ 0.025 \\\hline \end{tabular} \label{reference} \end{table*} \begin{table*} \caption{Sensitivity of the inferred values of the over-radius $\rho$ (overall and in the three luminosity bins introduced in 
section 5.1), to the assumed age and distance of the cluster and to parameters set in the data reduction pipeline. $N_{\rm targ}$ and $N_{\rm rsin i}$ are as defined in Table~5. } \begin{tabular}{llcccccr} \hline & &N$_{targ}$/ & \multicolumn{4}{c}{$\rho$ for bin:} & $\Delta \rho^{*}$ \\ & &N$_{r\sin i}$ & Upper & Central & Lower & All & \\\hline Case 0 - & Reference: age 120\,Myr, distance 136.2\,pc & 194/172 & 1.15 $\pm$ 0.02 & 1.15 $\pm$ 0.03 & 1.12 $\pm$ 0.03 & 1.138 $\pm$ 0.013 & --- \\ Case 1 - & Model age reduced to 80\,Myr & 197/173 & 1.13 $\pm$ 0.02 & 1.11 $\pm$ 0.03 & 1.08 $\pm$ 0.03 & 1.109 $\pm$ 0.018 & -0.029 \\ Case 2 - & Model age increased to 160\,Myr & 194/172 & 1.15 $\pm$ 0.02 & 1.16 $\pm$ 0.03 & 1.13 $\pm$ 0.03 & 1.141 $\pm$ 0.016 & +0.003 \\ Case 3 - & Increased distance: 140.0\,pc & 196/173 & 1.15 $\pm$ 0.02 & 1.11 $\pm$ 0.03 & 1.08 $\pm$ 0.03 & 1.120 $\pm$ 0.015 & -0.018 \\ Case 4 - & Reduced distance: 132.4\,pc & 192/172 & 1.17 $\pm$ 0.02 & 1.17 $\pm$ 0.03 & 1.13 $\pm$ 0.03 & 1.156 $\pm$ 0.015 & +0.018 \\ Case 5 - & Compensation for binarity and SDR & 194/172 & 1.16 $\pm$ 0.03 & 1.17 $\pm$ 0.03 & 1.14 $\pm$ 0.03 & 1.152 $\pm$ 0.016 & +0.014 \\ Case 6 - & Minimum $(v\sin i)_p=$ 15\,km\,s$^{-1}$ & 167/153 & 1.15 $\pm$ 0.02 & 1.15 $\pm$ 0.03 & 1.11 $\pm$ 0.03 & 1.134 $\pm$ 0.014 & -0.004\\ Case 7 - & Excluding targets with multiple periods & 158/138 & 1.17 $\pm$ 0.03 & 1.17 $\pm$ 0.03 & 1.12 $\pm$ 0.03 & 1.150 $\pm$ 0.014 & +0.012 \\ Case 8 - & Increase value of FWHM$_0^{**}$ & 194/166 & 1.14 $\pm$ 0.02 & 1.12 $\pm$ 0.03 & 1.10 $\pm$ 0.03 & 1.124 $\pm$ 0.015 & -0.014 \\ & (i) Slower rotators & 106/83 & 1.11 $\pm$ 0.04 & 1.17 $\pm$ 0.04 & 1.14 $\pm$ 0.05 & 1.137 $\pm$ 0.024 & -0.027 \\ & (ii) Faster rotators & 88/83 & 1.16 $\pm$ 0.04 & 1.08 $\pm$ 0.04 & 1.08 $\pm$ 0.03 & 1.117 $\pm$ 0.021 & -0.004 \\ Case 9 - & Upper 90\% of light curve amplitudes & 173/158 & 1.16 $\pm$ 0.02 & 1.16 $\pm$ 0.03 & 1.13 $\pm$ 0.03 & 1.148 $\pm$ 0.016 &
+0.010 \\ \hline \multicolumn{8}{l}{* Change in $\rho$ calculated for data over the full luminosity range relative to the reference case.}\\ \multicolumn{8}{l}{** The value of FWHM assumed for slow-rotating stars in the calculation of $v\sin i$ is increased by 0.5\,km\,s$^{-1}$ ($\sim$3 times the uncertainty in FWHM$_0$).}\\ \end{tabular} \label{sensitivity} \end{table*} \subsection{The estimated over-radius} The data were allocated to three luminosity bins for analysis with roughly equal numbers of targets per bin (see Fig.~11). The upper bin spans a relatively wide range of mass (0.4 to 0.8\,M$_{\odot}$, estimated from the BHAC15 models) and includes both fast rotating stars and stars that appear to be in transition between the gyrochronological ``C sequence'' and slower ``I sequence'' defined for F-K stars by Barnes (2003, 2007). The central and lower mass bins are more densely populated, but consist almost exclusively of fast rotating stars. It is clear that the stars included in the $r \sin i$ analysis represent a subset of the total population that is heavily biased towards faster rotators with $P<2$ days and the majority with $P<1$ day. It is the radius of these faster rotating stars that is reported here. The results of the maximum likelihood analysis are presented in Figs. 8 and 12 and summarised in Table 5. Figure~12 shows the main result of this paper: the targets considered here have an average over-radius $\rho =1.138 \pm 0.013$ relative to the radius-luminosity relation predicted by the solar-metallicity evolutionary models of BHAC15 at an age of 120\,Myr. Also shown in Fig.~12 are over-radii for the upper, central and lower mass/luminosity bins, where the maximum likelihood analysis has been conducted separately for each bin. The over-radius in each bin is significantly larger than unity, and consistent with the mean over-radius. There is no strong evidence for any variation in the level of over-radius across this luminosity and mass range.
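As a quick arithmetic cross-check (a sketch, not part of the analysis pipeline), the inverse-variance weighted mean of the three binned over-radii in Table 5 reproduces the all-data value of $1.138 \pm 0.013$ within its uncertainty:

```python
# Lower, central and upper luminosity bins from Table 5 ("All targets in bin")
rho   = [1.115, 1.146, 1.151]
sigma = [0.028, 0.033, 0.021]

weights = [1.0 / s**2 for s in sigma]
mean = sum(w * r for w, r in zip(weights, rho)) / sum(weights)
err  = sum(weights) ** -0.5

print(f"{mean:.3f} +/- {err:.3f}")   # -> 1.140 +/- 0.015
```

The agreement reflects the absence of any strong luminosity dependence in the binned results.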
The results in Table 5 assume an age for the Pleiades of 120\,Myr, a distance of 136.2\,pc and neglect the small effects of surface differential rotation and binarity discussed in section 4.2. Table~6 shows the effects of changing the assumed age and distance or including the effects of SDR and binarity on the calculated over-radius. Varying the assumed age between 80 and 160\,Myr gives $1.11 < \rho < 1.14$ (the change in $\rho$ is asymmetric because the stars are already close to the ZAMS at 120 Myr). Altering the distance between 132.6\,pc and 140.0\,pc (corresponding to the 3$\sigma$ limits reported by Melis et al. 2014) changes the derived luminosities and hence the predicted radii and gives $1.12< \rho <1.16$. The combined effects of SDR and binarity are small, act in opposite directions and depend to some extent on the assumed binary properties of the sample and latitude distribution of spots. The net effect of including these would be only a 1 per cent increase in $\rho$, but given the uncertainties we do not include this in our final estimate. Table 6 also shows the effects of excluding stars with multiple periods and of changing the minimum $(v \sin i)_p$ threshold for stars included in the analysis (see section 4). These also lead to only $\sim 1$ per cent changes in the main results. Combining expected levels of uncertainties in age and distance with the precision-based uncertainty shown in Table 5 gives a final average over-radius $\rho=1.14\pm0.02$ relative to the solar metallicity BHAC15 model where the uncertainty represents the 68 per cent confidence intervals for an age of $120 \pm 20$\,Myr and a distance of $136 \pm 2$\,pc. \begin{figure} \centering \includegraphics[width = 80mm]{FigP11.eps} \caption{ The estimated over-radius for the subsample of Pleiades stars included in the maximum likelihood analysis described in section 4. 
The over-radius is with respect to the radius predicted by the BHAC15 models for 120\,Myr solar metallicity stars at a given luminosity. The horizontal line shows the mean over-radius, for all the data considered, with dashed lines indicating the $1\sigma$ confidence interval. The individual points with error bars show the estimated mean over-radius and corresponding uncertainties for stars in three luminosity/mass bins. The mass scale at the top of the plot is based on the same BHAC15 model. A green solid line shows the predicted effect of radius inflation due to magnetic inhibition of convection (Feiden et al. 2015); the blue dashed line shows the predicted effect of starspots with an effective dark spot coverage of $\beta =0.16$ (see section 6.1). The red dotted line shows the combined effect of both magnetic inhibition of convection and starspots with $\beta =0.16$.} \label{fig11} \end{figure} \subsection{Comparison of fast and slow rotators and with interferometric radii} In Fig.~13 the over-radius is shown in the three luminosity/mass bins, but now also dividing the stars into faster- and slower-rotating subsamples (note that all these stars should be considered fast rotators when compared with the parent sample of Pleiades stars and that we cannot resolve projected radii for slowly rotating stars in the Pleiades). For the upper and central luminosity bins, the faster rotators are defined as those with $P<0.55$\,d. For the lower bin the split is made at $P<0.4$\,d and this gives roughly equal numbers of stars in each subsample. Both the central and lower mass bins show a marginal ($\sim 2 \sigma$) difference in over-radius between the faster and slower rotators, with the {\it slower} rotators showing the larger over-radius. No significant difference is seen for stars in the upper mass bin. Figure~13 also shows $\rho$ for individual field stars based on interferometric measurements of stellar radii reported by Boyajian et al.
(2012) plotted against luminosity values derived from their 2MASS $K$ magnitudes and $V-K$ colours in the same way as the Pleiades data. For these stars, the measured radii are compared to a 5\,Gyr solar metallicity BHAC15 isochrone, although neither the age nor the metallicity is well constrained for most of these targets. There are 19 stars with reported radii in the upper bin, with a weighted mean $\rho = 1.026 \pm 0.007$. The two lower bins contain only 4 stars with significantly scattered normalised radii, so no useful comparison can be made with the models. There is thus evidence for a small over-radius {\it with respect to the solar metallicity models} for the field M-dwarfs, but it is much smaller than the over-radius found in the Pleiades. \begin{figure} \centering \includegraphics[width = 75mm]{FigP12.eps} \caption{ The influence of rotation rate on the over-radius of Pleiades stars relative to the predictions of a BHAC15 120\,Myr isochrone. Pleiades targets in each luminosity bin are split by period into faster and slower rotating subsamples (see section 4.5). Also shown are over-radii for low-mass field stars, derived from interferometric radius measurements in Boyajian et al. (2012), and with respect to a 5\,Gyr solar-metallicity isochrone from BHAC15. The green point shows a previously reported measurement of $\rho$ for Pleiades stars of $0.6<M/M_{\odot}<0.8$ and periods in the transition between the C and I gyrochronological sequences defined by Barnes (2007) ($P>2$\,d, see Table~2 of Lanzafame et al. 2017).} \label{fig12} \end{figure} \begin{figure} \centering \includegraphics[width = 75mm]{FigP13.eps} \caption{ The influence of light curve amplitude on the estimated over-radius. Pleiades targets in each luminosity bin are split into subsamples with higher and lower spot-induced light curve amplitudes (see section 5.3.1).
Green triangles show the over-radius of targets in NGC\,2516 estimated using previously reported measurements of $P$ and $v\sin i$ (see section 6.2 and Jackson et al. 2010a).} \label{fig13} \end{figure} \subsection{Biases and selection effects due to spot coverage and the $\sin i$ distribution.} The analysis described above explicitly assumes that the spin axes of targets are randomly distributed in space giving a probability density $\phi(i)=\sin i$. There are a number of reasons why this may not be true: it is easier to resolve $v\sin i$ in targets with higher values of $\sin i$; measurements of period may be biased towards targets with larger $\sin i$ since these would exhibit larger light curve amplitudes due to starspot modulation; or the spin axes may be partially aligned, yielding a bias that could result in either a higher or lower mean $\sin i$ and perhaps a narrowing of the $\sin i$ distribution. The possibility of bias in the measured over-radius due to the inability to measure $v\sin i$ at low inclinations is already circumvented in the present analysis by explicitly including targets with upper limits in $v\sin i$ as left-censored data. The remaining sources of bias are considered separately below. \subsubsection{Selection of stars with higher amplitude light curves} The possibility of bias due to selection effects in the period measurements depends on the completeness of the period data; i.e. whether periods are available for a representative sample of stars, including those with low inclinations. Figure 14 illustrates the potential effect of incomplete sampling of period data on the measured over-radius -- selecting stars with higher spot-modulated light curve amplitude is expected to preferentially select stars with higher $\sin i$. Selecting for analysis only the 50 per cent of Pleiades targets with the highest light curve amplitude (taken from Rebull et al. 
2016a) increases the estimated value of $\rho$ by $\sim 3$ per cent compared to an analysis of the entire sample. This is actually less than would be expected if light curve amplitude depended solely on $\sin i$, since $\overline{\sin i}$ increases from 0.785 for a randomly distributed sample to 0.96 for a group of objects with $\sin i$ restricted to be above the median of a random distribution. We do not believe our over-radius results for the Pleiades can be biased by anything like this amount. Most of the targets (189 out of 192) have periods measured from K2 light curves. Rebull et al. (2016a) measured periods for 92~per cent of the Pleiades targets observed. For half of the remaining stars periods were not measured because of non-astrophysical or instrumental effects, leaving just 4 per cent without measured periods that could be Pleiades members with low inclination angles or targets with very long periods or very few (or very symmetrical) starspots. The effect of missing a small proportion of targets with the lowest amplitude light curves can be assessed by evaluating $\rho$ using only those targets from our list with the top 90 per cent of light curve amplitudes. This produces a small (1 per cent) increase in $\rho$ (see Table~6), indicating that our sample of measured periods, which is complete to $\sim$96 per cent, provides an almost unbiased estimate of $\rho$. \begin{figure} \centering \includegraphics[width = 85mm]{FigP14.eps} \caption{The effect of partial alignment of stellar spin axes. The upper plot shows the variation of $\ln{\widehat{\mathscr{L}}}$ with cone angle $\lambda$, for 3 different values of cone inclination $\alpha$, relative to the line of sight.
The lower plot shows the variation of $\rho$ over the same parameter range.} \label{fig14} \end{figure} \subsubsection{Alignment of stellar spin axes} Jackson \& Jeffries (2010a) investigated the effects of partial alignment of spin vectors by modelling cases where the spin-axis distribution is uniform inside a cone and zero elsewhere. The cone semi-opening angle, $\lambda$, determines the degree of alignment, and the mean inclination of the stars within the cone is $\alpha$ (see Fig.~15). In this case the probability function $\phi(\sin i|\rho)$ in eqn.~5 is replaced by a more complex function, $\phi(\sin i|\rho,\alpha,\lambda)$, calculated using a Monte Carlo method (see eqns.~2 to 6 in Jackson \& Jeffries 2010a). Figure~15 shows the effect of partial alignment of stellar spin axes on the maximum log-likelihood (see Eqn. 7) and the derived value of $\rho$ for $15^{\circ} < \lambda < 90^{\circ}$ (the upper limit corresponds to random alignment of the spin axes). In this analysis the effects of SDR and binarity {\it are} included because whilst they have little effect on the mean inferred $r \sin i$, they {\it do} have a small but non-negligible effect on the detailed shape of $\phi$. Results are shown for three values of $\alpha$: \begin{itemize} \item For $\alpha=25^{\circ}$ the spin axes are aligned close to the line of sight, such that the average value of $\sin i$ is lower than the case of a uniform distribution. Consequently a higher value of $\rho$ is required to match the observed set of $r\sin i$ values. \item For $\alpha=45^{\circ}$ the spin axes are aligned as shown in the sketch in the upper panel of Fig.~15. If $\lambda <45^{\circ}$ then $\overline{\sin i}$ is lower than the uniform case and hence $\rho$ is higher. At larger $\lambda$ values both the maximum likelihood and $\rho$ are similar to the case of a uniform distribution. \item For $\alpha=75^{\circ}$ and small $\lambda$ the spin axes are aligned almost perpendicular to the line of sight.
This both increases $\overline{\sin i}$ and suppresses the expected number of targets with low $r\sin i$ (relative to the mean) and therefore provides a poor match to the measured distribution of $r\sin i$. This allows us to say that if $\alpha$ is as large as this, then $\lambda >80^{\circ}$. \end{itemize} \section{Discussion} \subsection{The over-radius in low-mass Pleiades stars} Observations of low-mass, short-period eclipsing binaries reveal that their components may be inflated by $\sim$10 per cent at a given mass compared with the usual evolutionary models. We have found a similar phenomenon here. The average over-radius in our sample of fast-rotating, low-mass Pleiades stars is $14\pm2$ per cent {\it at a given luminosity}, which, according to the polytropic models discussed by Jackson \& Jeffries (2014a), is equivalent to a $\sim$9 per cent over-radius {\it at a given mass}. There is no evidence for any mass or luminosity dependence of this over-radius across the range covered by our sample. In particular, we have no evidence that the inflation changes markedly as we move from stars with higher luminosities that have radiative cores, to lower luminosity stars that should be fully convective. For non-inflated stars aged 120\,Myr the BHAC15 model shows a radiative core developing at the transition between the central and lower bins in Fig.~12 ($M>0.3\,M_{\odot}$, $\log L/L_{\odot}>-2.0$). It should be stressed that the inferred over-radius is with respect to the evolutionary models of BHAC15 (although the comparison with the models of Dotter et al. 2008 is almost identical). The evolutionary models might fail to correctly predict the measured radii for a number of reasons, although uncertainties in the assumed age and distance are already incorporated into the error bars on the results.
Before concluding that the over-radius is due to magnetic activity, as opposed to some other deficiency in the models, we should compare the same models to the measured radii of older, less magnetically active, but otherwise similar stars. The Boyajian et al. (2012) sample of stars with interferometric measurements of angular radii offers this test for the higher mass stars ($>$0.4\,$M_{\odot}$) in our sample (see Fig.~13). Stars in this upper mass bin have a weighted mean over-radius of $2.6 \pm 0.7$ per cent relative to a 5\,Gyr solar metallicity isochrone. Hence the over-radius of the higher mass Pleiades stars relative to the measured radii of inactive field stars of similar luminosity is $\sim 10$ per cent, although a detailed comparison is hampered by uncertainties in the ages and metallicities of the field stars. Radius inflation at a given luminosity leads to lower effective temperatures and lower core temperatures in contracting PMS stars. Work by Jackson \& Jeffries (2014a,b), Somers \& Pinsonneault (2015a,b) and Feiden (2016) has considered how this influences the determination of ages and masses of PMS stars in the Hertzsprung-Russell diagram and the onset and rate of lithium depletion in their photospheres as it is burned in the core. The amount of radius inflation we have determined is consistent with what was assumed or modelled in these works and so the consequences will also be similar. In a cluster like the Pleiades, ages come from either the main-sequence turn-off or the ``lithium depletion boundary'' (LDB) -- the luminosity below which Li is preserved in the interior of a fully convective low-mass PMS star (e.g. Stauffer et al. 1998; Jeffries \& Oliveira 2005). The former is unaffected by radius inflation in low-mass stars, but the latter may be.
We caution the reader that the LDB in the Pleiades occurs in objects close to the substellar boundary at $\log L/L_{\odot} \simeq -2.9$ and radius inflation has not yet been established at these low masses. However, if stars near the LDB were inflated by 14 per cent then the calculations presented by Jackson \& Jeffries (2014b; calculated for inflation due to spots, but valid for inflation by any other cause) suggest the LDB age should be increased by 11 per cent, from 125\,Myr to 139\,Myr. Somers \& Pinsonneault (2015a) and Somers \& Stassun (2017) have suggested that inflation varies between roughly zero for the slowest rotators and 15 per cent for the fastest rotators, and could explain the observed rotation-dependent Li depletion pattern in Pleiades K-dwarfs. These stars are at the upper end of the mass range considered here, but the overall level of radius inflation we measure in the fastest rotating cluster members is in agreement with this hypothesis. The effects of radius inflation are likely to be even more significant if present at younger ages. Jeffries et al. (2017) showed, using the example of the Gamma Velorum cluster, that ages inferred from the Hertzsprung-Russell diagram could be doubled by 10 per cent inflation at a given luminosity (slightly less than found here) and that inferred masses would also be significantly underestimated by non-magnetic models, particularly at the lowest stellar masses. \subsubsection{The possible causes of radius inflation} That an over-radius has been observed in the Pleiades whilst the models work reasonably well for older field stars is circumstantial evidence that magnetic activity and rotation are the factors responsible for the over-radius, although some other age-dependent variation in the physical model could conceivably lead to the observed results.
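As a minimal numerical check of the LDB age shift quoted in the previous subsection (the 11 per cent fractional increase is the Jackson \& Jeffries 2014b value cited in the text, not a new result):

```python
# LDB age shift for 14 per cent radius inflation: the Jackson & Jeffries
# (2014b) calculation gives an ~11 per cent increase in the inferred age
t_ldb = 125.0   # Myr, nominal Pleiades LDB age
print(round(t_ldb * (1.0 + 0.11)))   # -> 139 (Myr)
```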
There are two main ``flavours'' of magnetic model that might provide an explanation for the observed over-radii: the magnetic inhibition of convection at and just below the surface in layers with significant super-adiabaticity (Feiden \& Chaboyer 2012, 2014), or the blocking of radiative flux from the surface of the star by cool, magnetic starspots (Jackson \& Jeffries 2014a; Somers \& Pinsonneault 2015b). These models predict different behaviours of the over-radius as a function of luminosity. Magnetic inhibition becomes less effective as the convection zone deepens and the stars become fully convective (Feiden et al. 2014). The solid line in Fig.~12 shows the over-radius using the Dartmouth code modified for the effects of magnetic fields (Feiden, Jones \& Chaboyer 2015) relative to the ``standard'' Dartmouth model (Dotter et al. 2008), assuming a surface field strength of 2.5\,kG as described by Malo et al. (2014). Our results suggest that the magnetic inhibition models, based on an approximately equipartition magnetic field strength at the stellar surface, do not inflate the low luminosity stars in our sample sufficiently (falling short by a factor of two). Conversely, the effect of a given coverage of starspots becomes larger in fully convective stars (Spruit \& Weiss 1986). Fang et al. (2016) used the TiO band strengths measured in LAMOST spectra to estimate the spot coverage and temperatures of low mass stars in the Pleiades. Their results can be used to estimate an ``effective spot coverage'', $\beta$, defined as the fraction of stellar flux blocked by starspots compared to the flux of an immaculate photosphere (equivalent to $f_{s}^{\prime}$ in Fig.~11 of Fang et al.). Comparing target lists we find 22 stars analysed by Fang et al. with a measured $r\sin i$ in our analysis. The average value of $\beta$ is 0.16, with a dispersion of 0.09. This can be used to model the effects of spot coverage on the \textit{average} stellar radii (Spruit 1982).
The dashed line in Fig.~12 shows the predicted radius ratio for $\beta =0.16$ as a function of $\log L$, estimated from a linear interpolation of the calculations of Somers \& Pinsonneault (2015a), which use a version of the YREC evolutionary code (van Saders \& Pinsonneault 2012) modified to include starspots. If radius inflation were caused {\it solely} by starspots then this would require $\beta \simeq 0.3$ for the higher mass stars in our sample, decreasing to $\beta=0.2$ at lower masses where the effects of a given spot coverage are stronger. These spot coverages are only a little larger than suggested by Fang et al. (2016), but it is possible that $\beta$ has been underestimated by their simple two-component modelling of the optical spectra. Alternatively, one could have both mechanisms in operation, with the more modest \textit{average} spot coverage measured by Fang et al. (2016) ($\beta=0.16$) plus magnetic inhibition of convection by an equipartition surface magnetic field ($\sim 2.5$\,kG); the sum of these two effects would match the measured over-radii reasonably well (dotted line in Fig.~12). \subsubsection{Influence of rotation rate} Given that we are hypothesising that strong, dynamo-induced magnetic fields are the root cause of the over-radius, it is interesting to investigate whether there is any dependence on rotation rate. Lanzafame et al. (2017) found a complex behaviour in Pleiades K-stars and suggested, albeit with small-number statistics, that stars with intermediate rotation rates (those between the C- and I-sequences described by Barnes) had larger over-radii than stars with the fastest rotation rates.
By splitting our sample into fast and slow(er) rotating halves we have found marginal evidence (see Fig.~13) that partially supports Lanzafame et al.'s result -- though we note (i) that our sample does not contain many stars rotating as slowly as those included in Lanzafame et al.'s sample and (ii) that there is no suggestion of separate C- and I-sequences in the rotation period data of lower mass stars ($M<0.6M_{\odot}$) in the Pleiades (see Fig.~11). The slow(er) sample has a mean over-radius about 2-sigma higher than the fastest rotators, though note that all of these stars rotate fast enough to be considered magnetically saturated. It is possible that this difference is linked to the structure of the star and possibly the presence of a radiative core. When considered in three luminosity bins (Fig.~13), our results suggest that any difference in over-radius is confined to the lowest luminosity stars, and in fact there is no significant difference at the high luminosity end of our sample, where there is overlap with the sample considered by Lanzafame et al. (2017). We would caution against ascribing too much significance to this result at this stage, since the samples may be affected by analysis biases that could separate the over-radii of fast- and slow(er)-rotators. For example, we are not able to measure $v\sin i$ for slowly rotating targets. Whilst we have taken steps to address this bias in our analysis it is possible that some uncertainties remain. There is also the possibility of uncertainty in the zero-point of the $v\sin i$ calibration. As pointed out by Hartman et al. (2010), if the zero-point is too low then this could result in a significant over-estimate of $v\sin i$ (and hence $r \sin i$) for stars with the smallest resolvable $v \sin i$, but has much less effect on the fastest rotators. To test this, we artificially raised the zero-point by 0.5\,km\,s$^{-1}$, which is far beyond any likely statistical error in our zero-point (see Table~6).
This reduces the overall level of inflation by 1 per cent for the entire sample whilst decreasing the ``gap'' between faster and slower rotators by 2 per cent. A more intriguing possibility is that this difference is real. Reiners \& Mohanty (2012) have claimed that the angular momentum loss rate due to a magnetically coupled wind is much more strongly dependent on the stellar radius ($\propto R^{16/3}$) than assumed in previous work (e.g. Kawaler 1988) and more strongly than it depends on rotation rate ($\propto \Omega$ in the magnetically saturated regime, in which all our stars lie). From this perspective, two similar stars with radii that differ by 10--20 per cent would have quite different angular momentum loss rates. Even if greater radius inflation were initially caused by more rapid rotation and greater magnetic activity, the consequent spin-down timescale could be much shorter than the thermal timescale on which an inflated star could react to a slower rotation rate, and so we might expect to see that the stars that have begun to spin down are indeed those with larger radii. A more detailed analysis of this possibility is beyond the scope of this paper and perhaps not yet warranted by the quality of the data. \subsection{The discrepancy with NGC 2516} In Jackson et al. (2009) we undertook a similar analysis of spectra for low-mass stars with known rotation periods in NGC 2516, a cluster with a similar age and metallicity to the Pleiades. The results differed in that the deduced over-radius at the lowest masses considered in that paper ($\simeq 0.25M_{\odot}$) reached $\sim 40$ per cent. Stars with higher masses were in reasonable agreement with what we find for similar stars in the Pleiades. Here, we have adopted a maximum likelihood technique including stars with upper limits in $v\sin i$ as left-censored data.
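The strength of the radius dependence in the Reiners \& Mohanty (2012) scaling quoted above is easily illustrated (a back-of-the-envelope sketch; the 15 per cent radius contrast is a hypothetical example, not a measured value):

```python
# Reiners & Mohanty (2012): dJ/dt scales as R^(16/3) in the saturated regime,
# so a 15 per cent difference in radius roughly doubles the wind torque
ratio = 1.15 ** (16.0 / 3.0)
print(round(ratio, 2))   # -> 2.11
```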
In the NGC\,2516 work, we gave equal weight to each measured $r\sin i$ value and allowed for left-censored data by adopting a lower cut-off in $\sin i$, below which $r\sin i$ could not be measured. Re-running the NGC 2516 dataset through the current analysis pipeline (and also using the BHAC15 models as our baseline), we instead find $\rho =1.31 \pm 0.06$ for data in the two lower luminosity bins (see Fig.~14). This value of $\rho$ is still significantly higher than the average over-radius measured from the Pleiades data. A substantial and pertinent difference between the Pleiades and NGC\,2516 datasets is the fraction of observed targets with measured rotation periods. Jackson \& Jeffries (2012) reported that less than half of the NGC\,2516 members monitored (from the ground) by Irwin et al. (2007) had subsequently derived rotation periods. This fraction was about 50 per cent in the higher mass bins, dropping to 30 per cent for the lowest luminosity bin. Selecting a similar subset of Pleiades targets (those with the top 40 per cent of light curve amplitudes in Rebull et al. 2016a) yields $\rho =1.18 \pm 0.04$ for stars with $M/M_{\odot} < 0.40$. Whilst this is 4 per cent higher than obtained using the full range of amplitudes, it is still $13 \pm 7$ per cent lower than found for similar NGC\,2516 targets. We have been unable to identify any other significant systematic differences between the two data sets that might account for this remaining discrepancy, if indeed it is real. This comparison has highlighted the importance of having as near complete a set of period data as possible when estimating $\rho$. Whilst the maximum likelihood method used here includes targets with low $v\sin i$ as left-censored data, it neglects targets without measured periods and this can lead to a bias in the $\sin i$ distribution.
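The left-censoring treatment referred to above can be illustrated with a toy maximum-likelihood fit (a sketch with hypothetical Gaussian data, not the paper's eqn.~7): detections contribute the model probability density, while upper limits contribute the cumulative probability below the detection cut.

```python
import numpy as np
from math import erf, log, sqrt, pi

rng = np.random.default_rng(0)
mu_true, sig, cut = 1.14, 0.3, 0.7       # toy values, not the Pleiades data
x = rng.normal(mu_true, sig, 500)
det = x > cut                            # below `cut` only an upper limit remains

def loglike(mu):
    # detections enter through the Gaussian pdf, censored points through the cdf
    ll = -0.5 * np.sum(((x[det] - mu) / sig) ** 2) - det.sum() * log(sig * sqrt(2 * pi))
    ll += (~det).sum() * log(0.5 * (1.0 + erf((cut - mu) / (sig * sqrt(2)))))
    return ll

grid = np.linspace(0.8, 1.5, 701)
mu_hat = grid[np.argmax([loglike(m) for m in grid])]
print(round(float(mu_hat), 2))           # censored MLE, close to mu_true
print(round(float(x[det].mean()), 2))    # naive mean of detections, biased high
```

With these toy numbers the censored MLE recovers the input mean to within its statistical error, whereas the mean of the detections alone is biased high by the truncation, which is the bias the censored likelihood is designed to remove.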
Details of cluster members {\it without} measured periods are often not reported in catalogues of rotation periods and this fraction can be high, especially for ground-based surveys with limited sensitivity to low-amplitude modulation. \subsection{The $\sin i$ distribution} Corsaro et al. (2017) used asteroseismology-based estimates of inclination angles to claim that the distribution of spin-axis vectors is not random among stars with $M>M_{\odot}$ in two old open clusters in the main Kepler field. They attribute the strong alignment effect they find to the formation of these clusters from a collapsing cloud with a high ratio of rotational to turbulent kinetic energy and the inheritance of some of this angular momentum by the forming stars, especially those with higher masses. From their simulations, Corsaro et al. suggest that this effect may be much weaker in the lower mass stars ($<0.7 M_{\odot}$) that constitute most of our Pleiades sample. If we assume that the dispersion in $r\sin i$ that we see is mostly caused by a variation in $\sin i$ and not by a star-to-star variation in over-radius, then our observations put constraints on how narrow the distribution of $\sin i$ could be. Fig.~15 shows the effects of alignment of spin axes on $\ln{\widehat{\mathscr{L}}}$ and $\rho$ when spin axes are uniformly distributed over a cone with semi-opening angle $\lambda$ and average inclination $\alpha$ relative to the line of sight (see section 5.3.2). There is no strong evidence of preferential alignment. A model with $\lambda=90^{\circ}$, equivalent to a random distribution, has $\ln{\widehat{\mathscr{L}}}$ that is not significantly lower than that of the best-fitting model with $\lambda<90^{\circ}$ according to the Bayesian information criterion (BIC; $\Delta {\rm BIC} \sim4$). The minimum cone angle that provides a similar value of $\ln{\widehat{\mathscr{L}}}$ to a random distribution of spin axes is $\lambda \geq 30^{\circ}$ if $\alpha <45^{\circ}$, i.e.
a strongly aligned spin axis distribution with $\lambda <30^{\circ}$ does not match the measured distribution of $r\sin i$ for any mean inclination. If $\alpha >45^{\circ}$, which is $>70$ per cent likely for a randomly distributed $\alpha$, then the lower limit to $\lambda$ becomes much larger. Although the measured distribution of $r\sin i$ could be matched by a partial alignment of spin axes with $\lambda \geq 30^{\circ}$, the most likely value of $\rho$ in those cases is always similar to, or larger than, the value obtained by assuming a random distribution of spin axes (see Fig.~15). Thus the mean over-radius of 14 per cent shown in Fig.~12 and Table~5 is the {\it minimum} that provides an acceptable fit to the measured data. \section{Summary} Precise measurements of rotation periods from the Kepler K2 survey of the Pleiades have been combined with new, precise measurements of rotational broadening for the same stars, in order to estimate their projected radii. Using a maximum likelihood analysis technique and assuming random spin-axis orientation, the average radius of fast-rotating ($P \sim 2$ days or less), low-mass ($0.1 \leq M/M_{\odot} \leq 0.8$) Pleiades members is $14\pm2$ per cent larger than predicted for stars of the same luminosity by the solar-metallicity Baraffe et al. (2015) and Dotter et al. (2008) models for an assumed age of 120\,Myr. The analysis considered unresolved binarity, differential rotation and biases due to the difficulty of measuring rotational broadening for low inclination objects, but these are unlikely to change the results by more than 1--2 per cent. The quoted uncertainties include the statistical precision, which is dominant, and also contributions that account for any plausible uncertainty in the cluster distance and age.
The distribution of projected radius in these low-mass Pleiades stars is consistent with a random orientation of spin axes and is inconsistent with strong alignments where the spin-axis vectors are confined to cones with semi-opening angle $<30^{\circ}$. Weaker alignments are possible if the mean inclination angle is $\leq 45^{\circ}$, but these scenarios would lead to a larger inferred over-radius. There is no evidence that the radius inflation with respect to model predictions varies over the luminosity range considered (approximately $-0.7 \geq \log L/L_{\odot} \geq -2.7$) and, in particular, no evidence for a change for PMS stars that are fully convective. The same models do predict radii that reasonably match the interferometrically measured radii of older, magnetically inactive field stars with masses and luminosities in the upper half of this range, which is circumstantial evidence that magnetic activity or rapid rotation are the factors responsible. A comparison with existing ``magnetic models'' suggests that neither magnetic inhibition of convection nor flux blocking by starspots can solely explain the over-radius at the expected levels of surface magnetic field or spot coverage; however, a simple combination of the two effects does match the data quite well. One remaining puzzle is that although all the stars we consider are very fast-rotating and likely to have saturated levels of magnetic activity, there is evidence that it is the slower rotating half of this sample that has the largest over-radii. That low-mass, active stars have larger radii at a given luminosity than predicted by the most commonly used evolutionary models has several important implications.
Effective temperatures would be lower; ages derived using the Hertzsprung-Russell diagram and non-magnetic, standard PMS isochrones would be underestimated, as would stellar masses; core temperatures would be lower than expected, leading to delays in the onset of lithium depletion and an extension of the PMS lifetime. The calibration of these effects and the identification of the causes of radius inflation require careful observation and radius measurements for stars at a range of masses in clusters covering the full range of PMS evolution. \section*{Acknowledgments} Data presented herein were obtained at the WIYN 3.5m Observatory from telescope time allocated to NN-EXPLORE through (a) the scientific partnership of the National Aeronautics and Space Administration, the National Science Foundation, and the National Optical Astronomy Observatory, and (b) Indiana University's share of time on the WIYN 3.5-m. This work was supported by a NASA WIYN PI Data Award, administered by the NASA Exoplanet Science Institute, through JPL RSA \# 1560105. RJJ and RDJ also wish to thank the UK Science and Technology Facilities Council for financial support.
\nocite{Rebull2016b} \nocite{Morales2009a} \nocite{Torres2013a} \nocite{Feiden2014a} \nocite{MacDonald2013a} \nocite{Jackson2014a} \nocite{Stauffer2003a} \nocite{Covey2016a} \nocite{Rebull2016a} \nocite{Somers2014a} \nocite{Jackson2009a} \nocite{Jackson2016a} \nocite{Hartman2010a} \nocite{Lanzafame2017a} \nocite{Baraffe2015a} \nocite{Soderblom2009a} \nocite{Stauffer1998a} \nocite{Carpenter2001a} \nocite{An2007a} \nocite{Rieke1985a} \nocite{Jackson2010a} \nocite{Jackson2010b} \nocite{Horne1986a} \nocite{Bagnulo2003a} \nocite{Claret1995a} \nocite{Krishnamurthi1998a} \nocite{Kenyon1995a} \nocite{Skrutskie2006a} \nocite{Melis2014a} \nocite{vanLeeuwen2009a} \nocite{Queloz1998a} \nocite{Reinhold2013a} \nocite{Stauffer1987a} \nocite{ODell1994a} \nocite{Marilli1997a} \nocite{Feiden2012a} \nocite{Feiden2015a} \nocite{Boyajian2012a} \nocite{Somers2015a} \nocite{Somers2015b} \nocite{Barnes2003a} \nocite{Barnes2007a} \nocite{Somers2015a} \nocite{Dotter2008a} \nocite{Fang2016a} \nocite{Spruit1986a} \nocite{Demory2009a} \nocite{Malo2014a} \nocite{Spruit1982a} \nocite{vanSaders2012a} \nocite{Kawaler1988a} \nocite{Reiners2012a} \nocite{Corsaro2017a} \nocite{Jackson2010a} \nocite{Douglas2017a} \nocite{Howell2014a} \nocite{Mermilliod1992a} \nocite{Krause1980a} \nocite{LopezMorales2005a} \nocite{Feiden2016a} \nocite{Messina2016a} \nocite{Jeffries2017a} \nocite{Somers2017a} \nocite{Morales2008a} \nocite{Mullan2001a} \nocite{Kamai2014a} \nocite{Kraus2015a} \nocite{Kraus2016a} \nocite{David2016a} \nocite{Bershady2008a} \nocite{Soderblom2009a} \nocite{Soderblom1993a} \nocite{Pecaut2013a} \nocite{Jackson2014b} \nocite{Jeffries2005a} \nocite{Cummings2017a} \nocite{Duchene2013a} \nocite{Raghavan2010a} \nocite{Rhode2001a} \nocite{Jeffries2007b} \nocite{Jackson2012a} \nocite{Irwin2007a} \bibliographystyle{mn2e}
\section{Introduction} Traditionally, image retrieval systems relied on keyword annotation to search for images relevant to a user's query. Although these systems were effective, they suffered from several problems, such as the need to annotate huge image databases and to deal with inconsistencies among the annotations provided by different annotators. To overcome these problems content-based image retrieval (CBIR) \cite{veltkamp2001content, smeulders2000content} was proposed, which used low-level features (such as color, shape, etc.) extracted from the images. However, the performance of these systems suffered because low-level image features could not capture the high-level semantics of an image. Relevance feedback (RF) techniques \cite{rui1998relevance} were used to bridge this semantic gap, but traditional RF methods required many rounds of feedback before the system could learn what the user was looking for. Active learning techniques \cite{hoi2009semisupervised, hoi2005semi, tong2001support} solved this problem and reduced the number of feedback rounds by identifying the most informative points and asking the user to label only those. Since a common way for a user to provide feedback to an interactive system is by providing binary labels that indicate whether an image belongs to the query concept or not, we formulate the problem of finding images relevant to a user's query concept in large databases as a binary classification problem. The challenge is to retrieve relevant images with minimal user interaction. In this work, we address this problem by using a method that combines active learning with graph-based semi-supervised learning. Our method learns the user's concept quickly by querying the labels of informative points and is scalable to databases with several million images. Since the user starts the search with only a small set of labeled images, it is not ideal to use supervised learning methods.
This is because supervised learning methods require a large number of training examples to perform well. Moreover, they have no way of using the abundant unlabeled data. A combination of active learning and semi-supervised learning can alleviate these problems. Active learning \cite{settles2010active} expands the training set by querying the labels of the most informative points from the pool of unlabeled data. Semi-supervised learning \cite{zhu2006semi} makes it possible to utilize the abundant unlabeled data and helps to make predictions that are consistent with the inherent graph structure of the data. Many methods have been proposed that use the combination of active and semi-supervised learning \cite{zhu2003combining, hoi2009semisupervised}. However, due to their high computational cost, these methods are not scalable to large datasets such as Imagenet. This is because the manifold regularization framework \cite{belkin2006manifold} for GSSL first builds the neighborhood graph and then propagates the labels. Although this framework yields a closed-form solution, computing that solution requires inverting a large $n \times n$ matrix. This is impractical for large applications as it has a time complexity of $O(n^3)$, where $n$ is the total number of points. To use GSSL on a large scale, we use an efficient approximation-based method proposed by Fergus et al. \cite{fergus2009semi}, which uses the convergence of the eigenvectors of a normalized graph Laplacian matrix to the eigenfunctions of Laplace-Beltrami operators and brings down the complexity of GSSL from $O(n^3)$ to $O(n)$. The cost of retraining the classifier with this method is also $O(n)$, which makes it possible to quickly retrain the classifier after every round of active learning and incorporate user feedback.
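As a toy illustration of the label propagation just described (a sketch under standard conventions, not the paper's implementation): build an RBF affinity matrix $W$, form the graph Laplacian $L = D - W$, and solve the closed-form system $(L + \Lambda)f = \Lambda y$, where the diagonal matrix $\Lambda$ weights the labeled points. The dense solve is exactly the $O(n^3)$ step that the eigenfunction approximation avoids.

```python
import numpy as np

rng = np.random.default_rng(0)
# two 1-D clusters of 30 points each; label one point per cluster
x = np.concatenate([rng.normal(-2, 0.3, 30), rng.normal(2, 0.3, 30)])
y = np.zeros(60)
y[0], y[30] = -1.0, 1.0
lam = np.zeros(60)
lam[0] = lam[30] = 10.0                         # fit weight on labeled points

W = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.5 ** 2)   # RBF affinities
L = np.diag(W.sum(axis=1)) - W                  # graph Laplacian L = D - W
f = np.linalg.solve(L + np.diag(lam), lam * y)  # dense O(n^3) solve

# labels propagate over the graph: each cluster inherits its single label
print(int((f[:30] < 0).sum()), int((f[30:] > 0).sum()))   # -> 30 30
# uncertainty sampling would next query the points with |f| nearest zero
query = np.argsort(np.abs(f))[:3]
```

Even with one label per cluster, the labels propagate along the graph to every unlabeled point; the last line hints at the uncertainty-sampling step, which would query the points whose scores lie nearest the decision boundary.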
We augment this with an uncertainty-sampling-based method for active learning \cite{campbell2000query, tong2001support}, which queries the labels of the points nearest to the decision boundary, as they are the hardest to classify and hence the most informative. We propose a heuristic based on an adaptive threshold to estimate the decision boundary of the GSSL classifier and identify the most informative points. To make the classifier robust to the diversity of the images as well as to the noisy labels associated with images in huge databases, we propose to combine information from multiple modalities, such as visual information extracted from state-of-the-art deep learning models such as ResNet \cite{he2016deep}, Xception \cite{chollet2016xception} etc., and semantic information obtained from the WordNet hierarchy \cite{miller1990introduction}, in the GSSL method. To achieve this we construct separate graphs using the different features and combine the individual predictions from these graphs. Using the efficient approximation-based GSSL method and our heuristic of an adaptive threshold, we can classify and find the most informative images in huge databases ($>$1 million images) in under 5 seconds using a single-core CPU machine with 64\,GB of memory. We then repeat our active learning method to incorporate the user's feedback. We present experiments on concepts defined on the Animals with Attributes (AWA) \cite{xian2017zero} dataset with 37 thousand images and the Imagenet \cite{deng2009imagenet} dataset with 1.2 million images. Concepts such as ``\textit{furry animals with black stripes}'', ``\textit{person playing a wind instrument}'' etc.~are used for evaluation. The real power of our method lies in its ability to quickly learn from example images provided by the user. Thus, the main contributions of our work can be summarized as follows.
\begin{itemize} \item We propose a scalable active semi-supervised learning method that uses active learning and efficient approximation-based GSSL to quickly learn a user's query concept from a huge database with minimal user interaction. \item We propose an adaptive-threshold method that speeds up the learning of a user's concept by iteratively updating the decision boundary of the GSSL classifier for uncertainty-sampling-based active learning. \item We present a method that can integrate information from several heterogeneous domains, such as visual information from deep learning models and semantic information from WordNet, under the GSSL framework. \end{itemize} The rest of the paper is organized as follows. In section 2, we briefly discuss some related work. We describe the GSSL framework in section 3 and the method of integrating multimodal features in section 4. In section 5 we present our active learning method. In section 6 we present the results of our experiments on the Imagenet and AWA datasets. Finally, we conclude in section 7. \section{Related Work} Active learning \cite{settles2010active} has been widely applied to CBIR to improve its performance. The works \cite{gosselin2004retin, gosselin2008active} use active learning to understand the user's query concept based on the examples provided. Our work differs from theirs in that they use an SVM-based method for classification, which cannot directly use unlabeled data, whereas we employ an efficient semi-supervised learning method that can make use of abundant unlabeled data. The works in \cite{hoi2009semisupervised, zhu2003combining} use semi-supervised learning, but their methods are not scalable to large datasets like Imagenet. Moreover, their method of choosing the most informative points also differs from our uncertainty-sampling-based method. Our method of active learning, which adds points nearest to the decision boundary, is similar to that of \cite{tong2001support}.
However, the method of \citeauthor{tong2001support} is for an SVM-based learner, where the decision boundary is known, whereas for our GSSL method we estimate the decision boundary by using an adaptive threshold. Many approaches to CBIR consider it a ranking problem. These approaches can be inductive or transductive, based on whether they use unlabeled data during training or not. Inductive methods \cite{tieu2004boosting, tong2001support} make use of the limited labeled data to learn a classifier that differentiates relevant and irrelevant images. The decision values from this classifier are then used to rank the unlabeled data. Transductive methods, on the other hand, use both labeled and unlabeled data. Manifold ranking \cite{zhou2004learning, zhou2004ranking} is a popular transductive method. Significant improvements have been observed by using a large amount of unlabeled data \cite{wan2007manifold, he2004manifold, xu2011efficient, yuan2006manifold}. Our method is close to transductive methods since we also use unlabeled data to make predictions consistent with the inherent geometrical structure of the data. Since the GSSL method implicitly gives a continuous output for the labels, we can use it directly to rank the images. The manifold ranking algorithm can be seen as an extension of the GSSL methodology \cite{belkin2006manifold, zhu2003semi}. The large computational cost of these methods restricts their application to large scale systems. Several works have proposed ways to improve the performance of these methods. \cite{wang2012scalable} constructs an approximate kNN graph using a multiple random divide-and-conquer approach. \cite{tsang2007large} solves the dual optimization problem of manifold regularization \cite{belkin2006manifold} under a sparsity constraint. \cite{zhang2009prototype} uses the Nystr\"om approximation for computing the affinity matrix.
\cite{liu2010large} proposed an anchor graph regularization framework which provides an efficient method to create the graph over all the data points. \cite{fergus2009semi} proposed a method for solving GSSL by working with only a small number of eigenvectors of the graph Laplacian. Our work extends the work of \citeauthor{fergus2009semi} \cite{fergus2009semi} to use multi-modal data. \citeauthor{guillaumin2010multimodal} \cite{guillaumin2010multimodal} proposed a method to combine multi-modal information utilizing tags associated with images, unlike our approach of using class labels and the WordNet hierarchy to extract semantic information. \section{Graph-based Semi-Supervised Learning Framework} In this section, we introduce the notation for the semi-supervised learning framework that we use, then discuss the issues which arise when we apply the method in a large-scale setting and show how to address them. \subsection{Notation and formulation} Assume we have a dataset $X$ with $n$ points, of which $l$ points are labeled, i.e. $\{(x_{1}, y_{1}), ..., (x_{l}, y_{l})\}$, and the remaining $u = n - l$ points are unlabeled, $\{x_{l+1}, ..., x_{n}\}$, where $l \ll u$. The labels are $y_{i} \in \{-1, 1\}$. Using all the $n$ points from $X$, we define a graph $G(V, E)$, with the points in $X$ being the vertices $V$ and $E$ being the set of edges between these vertices. The weights on the edges between these points are captured by an $n \times n$ affinity matrix, $W$. An example of a function to compute the weights is the radial basis function (RBF), where $W_{ij} = \exp\!\left( -\frac{\|x_{i} - x_{j}\|^{2}}{2\sigma^{2}} \right)$. Next, we define the graph Laplacian $L = D - W$, where $D$ is a diagonal matrix whose diagonal elements are the row sums of $W$, $D_{ii} = \sum_{j} W_{ij}$. We define the objective function for semi-supervised learning using the formulation presented by \citeauthor{fergus2009semi}.
\begin{displaymath} J(f)= \sum_{i = 1}^{l} \lambda(f_{i} - y_{i})^{2} + \frac{1}{2}\sum_{i,j = 1}^{l+u}(f_{i} - f_{j})^{2} W_{ij} \end{displaymath} The first term in the above equation is the least-squares loss, which ensures agreement with the given labels. The second term is the smoothness penalty and ensures that the labels of the points are consistent with the manifold and cluster assumptions \cite{zhou2004learning}. Simplifying further, we can write $J(f)$ as \begin{displaymath} J(f) = (f-y)^T\Lambda(f-y) + f^{T}Lf \end{displaymath} Here $y$ are the labels, $y_{i} \in \{-1, 1\}$, and $\Lambda$ is a diagonal matrix with $\Lambda_{ii} = \lambda$ for labeled points and $\Lambda_{ii} = 0$ for unlabeled points. The optimal $f^{*}$ is the one that minimizes $J(f)$. We can obtain the minimum by setting the gradient of $J(f)$ to zero: the minimizer $f^{*}$ is the solution of $(L + \Lambda)f = \Lambda y$, neglecting the trivial solution corresponding to a constant $f$. This equation has a closed-form solution, but it involves inverting an $n \times n$ matrix, which has cubic complexity $O(n^3)$ and is infeasible when $n$ is large. However, as described by \citeauthor{fergus2009semi}, we can reduce the dimension of the problem by working with only a few eigenvectors of the graph Laplacian. The eigenvectors corresponding to the smallest eigenvalues are the smoothest ($\Phi_i^T L \Phi_i = \sigma_i$, where $\Phi_i$, $\sigma_i$ are generalized eigenvectors and eigenvalues of $L$, respectively), so keeping the $k$ eigenvectors corresponding to the smallest eigenvalues reduces the dimension of the problem from $n \times n$ to $k \times k$: we write $f = U \alpha$, where $U$ is an $n \times k$ matrix whose columns are these $k$ eigenvectors.
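As a concrete illustration of the full problem before any spectral reduction, the direct solve of $(L + \Lambda)f = \Lambda y$ on a small dense RBF graph can be sketched as follows (the helper name `gssl_solve` and the default values of `sigma` and `lam` are our own choices, not the paper's):

```python
import numpy as np

def gssl_solve(X, labels, sigma=1.0, lam=100.0):
    """Directly solve (L + Lambda) f = Lambda y on a dense RBF graph.

    labels holds +1/-1 for labeled points and 0 for unlabeled points.
    Feasible only for small n because of the O(n^3) solve.
    """
    # Affinity W_ij = exp(-|x_i - x_j|^2 / (2 sigma^2)), no self-loops.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W                   # unnormalized Laplacian D - W
    Lam = np.diag(np.where(labels != 0, lam, 0.0))   # loss weight on labeled points
    return np.linalg.solve(L + Lam, Lam @ labels.astype(float))
```

On two well-separated clusters with one labeled point each, the solution propagates the labels across each cluster, as the manifold assumption requires.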
Substituting $f=U\alpha$ in $J(f)$ and simplifying, our objective function becomes $J(\alpha) = \alpha^{T}\Sigma\alpha + (U\alpha - y)^{T}\Lambda(U\alpha - y)$, where $\Sigma$ is the $k \times k$ diagonal matrix of the corresponding eigenvalues. We can now calculate $\alpha$ by solving the following $k \times k$ system of equations \begin{equation} (\Sigma + U^{T}\Lambda U)\alpha = U^{T}\Lambda y \end{equation} \subsection{Approximating eigenvectors of $L$} Even though we can obtain $\alpha$ in equation 1 by solving a $k \times k$ system, computing the eigenvectors themselves requires diagonalizing the $n \times n$ matrix, which is not easy. Following the analysis presented by \citeauthor{fergus2009semi}, which shows that the eigenvectors converge to eigenfunctions as the number of points goes to infinity, we compute eigenfunctions to approximate the eigenvectors of a normalized graph Laplacian. Assume the data $x_{i} \in R^{d}$ come from a distribution $p(x)$ and that a rotation makes the components independent, so that for $s = Rx$ we have $p(s) = p(s_1)p(s_2)\cdots p(s_d)$. This allows us to calculate the eigenfunctions using only the marginals $p(s_i)$. For each independent component, we approximate the density $p(s_i)$ using a histogram with $B$ bins. Let $g$ be the vector of eigenfunction values at the $B$ discrete points; then $g$ satisfies \begin{equation} (\tilde{D} - P\tilde{W}P)g = \sigma P\hat{D}g \end{equation} where $\tilde{W}$ is the affinity between the $B$ discrete points, $P$ is a diagonal matrix whose diagonal elements give the density at the discrete points, $\tilde{D}$ is a diagonal matrix whose diagonal elements are the column sums of $P \tilde{W} P$, and $\hat{D}$ is a diagonal matrix whose diagonal elements are the column sums of $P\tilde{W}$. Since the data density is assumed separable across dimensions, we can compute the eigenfunctions for every axis of the data and then keep the $k$ eigenfunctions corresponding to the smallest eigenvalues over all the axes.
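To make the per-dimension step concrete, equation 2 is a generalized symmetric eigenvalue problem at the histogram bin centers. A sketch using SciPy follows; the function name, bin count, bandwidth, and the small density floor are our own choices for the illustration:

```python
import numpy as np
from scipy.linalg import eigh

def eigenfunctions_1d(s, n_bins=50, sigma=1.0):
    """Approximate eigenfunctions for one independent component (equation 2).

    Histograms the 1-D marginal p(s_i) into n_bins, then solves
    (D~ - P W~ P) g = sigma P D^ g at the bin centers.
    Returns ascending eigenvalues, bin centers, and eigenfunction values.
    """
    dens, edges = np.histogram(s, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    P = np.diag(dens + 1e-12)                 # density at the discrete points
    Wt = np.exp(-(centers[:, None] - centers[None, :]) ** 2
                / (2.0 * sigma ** 2))         # affinity between bin centers
    PWP = P @ Wt @ P
    Dt = np.diag(PWP.sum(axis=0))             # column sums of P W~ P
    Dh = np.diag((P @ Wt).sum(axis=0))        # column sums of P W~
    # Generalized symmetric eigenproblem; P D^ is diagonal positive definite.
    vals, g = eigh(Dt - PWP, P @ Dh)
    return vals, centers, g
```

The smallest eigenvalue is numerically zero and corresponds to the constant eigenfunction, which is why near-zero eigenvalues are discarded in the experiments later in the paper.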
We then linearly interpolate the data in 1D, repeating this for all $k$ eigenfunctions; this step has complexity $O(nk)$. Using these $k$ eigenfunctions, which approximate the eigenvectors of the normalized graph Laplacian, we can solve equation 1 efficiently, since we only need to invert a $k \times k$ matrix. \begin{algorithm} \caption{Fergus' Algorithm} \label{alg:alg-fergus} \begin{enumerate} \item (Offline steps) \begin{algorithmic}[1] \item Perform Principal Component Analysis (PCA) on input data \item For each dimension \begin{algorithmic}[1] \item Construct a 1D histogram of the marginal \item Solve for eigenfunctions and eigenvalues numerically using equation 2 \end{algorithmic} \item Order eigenfunctions from all dimensions by increasing eigenvalues and take the first $k$ \item Interpolate data using these $k$ eigenfunctions \end{algorithmic} \item (Online steps) \begin{algorithmic}[1] \item Update $y$ and $\Lambda$ for labeled points \item Solve the $k \times k$ least-squares system in equation 1 to obtain the label function $f^*$ \end{algorithmic} \end{enumerate} \end{algorithm} \vspace{-0.08in} \section{Integrating Multi-Modal Features} Here we show how to extend the GSSL framework described in the previous section to incorporate information from multiple modalities. In particular, we show a way to integrate visual and semantic information in the GSSL framework by computing separate graphs for each of them and then combining the individual predictions. \subsection{For visual features} When constructing the exact affinity matrix $\tilde{W}$ is not feasible, algorithm \ref{alg:alg-fergus} should be used. We use this method to incorporate visual information from the images. We first extract visual features for every image using state-of-the-art deep learning models, and then use algorithm \ref{alg:alg-fergus} to compute the label function $f^{*}_{visual}$.
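The online step of algorithm \ref{alg:alg-fergus} (equation 1) is then a $k \times k$ solve. A sketch with our own helper name, using the ordinary eigendecomposition of the unnormalized Laplacian for simplicity (the paper uses generalized eigenvectors of a normalized Laplacian); it reduces exactly to the full solve when all $n$ eigenvectors are kept:

```python
import numpy as np

def solve_reduced(U, sig, labels, lam=100.0):
    """Solve (Sigma + U^T Lambda U) alpha = U^T Lambda y, return f = U alpha.

    U: n x k matrix of the k smoothest eigenvectors of L; sig: their
    eigenvalues; labels: +1/-1 on labeled points, 0 elsewhere.
    Only a k x k system is solved, so the online cost is O(k^3), not O(n^3).
    """
    lam_diag = np.where(labels != 0, lam, 0.0)       # diagonal of Lambda
    y = labels.astype(float)
    A = np.diag(sig) + U.T @ (lam_diag[:, None] * U)
    alpha = np.linalg.solve(A, U.T @ (lam_diag * y))
    return U @ alpha
```

With $k = n$ orthonormal eigenvectors, $U^T(L+\Lambda)U\alpha = U^T\Lambda y$ is an exact change of basis, so the reduced solution matches the direct one.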
\subsection{For semantic features} Semantic information is often associated with a group or a category of images rather than a single image. This limits the number of unique entries for a large dataset. For example, the Imagenet dataset has only 1000 unique classes, which is significantly fewer than its 1.2 million images. This small number of categories makes it possible to compute the exact affinity matrix $\tilde{W}$ in equation 2 without the histogram approximation, so we can simplify algorithm \ref{alg:alg-fergus}. The modified algorithm is presented in algorithm \ref{alg:alg-fergus-modified}. We use algorithm \ref{alg:alg-fergus-modified} to compute the label function for the semantic features associated with the class labels. We use the WordNet hierarchy, the Lin similarity measure \cite{lin1998information} and an RBF to compute the affinity matrix $\tilde{W}$ that captures the similarity between the class labels of different images. The Lin similarity is defined as in \cite{deselaers2011visual}: \( D^{Lin}(A,B)= \frac{2 \log p(\mathrm{lso}(A, B))}{\log p(A) + \log p(B)} \), where $p(A)$ is the fraction of all images belonging to class $A$ and $\mathrm{lso}(A, B)$ is the lowest common ancestor of $A$ and $B$. For example, images with class label ``\textit{cat}'' are closer to the images with class label ``\textit{dog}'' than to images with class label ``\textit{aircraft}''. We compute the label function $f^{*}_{semantic}$ using this information. \vspace{-0.01in} \begin{algorithm} \caption{Modified Fergus' algorithm for semantic features} \label{alg:alg-fergus-modified} \begin{enumerate} \item (Offline steps) \begin{algorithmic}[1] \item Compute the exact affinity matrix $\tilde{W}$ for all the classes. This matrix reflects the similarity value between different classes. \item Using this affinity matrix, solve numerically for eigenfunctions and eigenvalues using equation 2.
\item Order all the eigenfunctions by increasing eigenvalues and take the first $k$. (Note: there is only a single dimension here.) \item Construct a matrix with one row per class and $k$ columns, where each row can be treated as a vector associated with one of the classes. \item Assign the same vector to each data point belonging to the same class. (This is analogous to the interpolation step.) \end{algorithmic} \item (Online steps) \begin{algorithmic}[1] \item Update $y$ and $\Lambda$ for labeled points \item Solve the $k \times k$ least-squares system in equation 1 to obtain the label function $f^*$ \end{algorithmic} \end{enumerate} \end{algorithm} \vspace{-0.1in} \subsection{Combining the predictions} Once we have the label functions from all the different modalities, we obtain the final label function as a convex combination of the individual label functions, \(f^{*} = \sum_{i = 1}^{m} \lambda_{i}f_{i}^{*}\), where $m$ is the number of modalities and \(\sum_{i = 1}^{m}\lambda_{i} = 1\). The parameter $\lambda_{i}$ decides the importance of each modality. This value depends on the concept the user is trying to search for. In our work, we combine the visual and semantic information by giving equal importance to both: we use $\lambda_{i} = 0.5$ to obtain the final label function $f^{*}$. However, for concepts which depend purely on visual features, the value of $\lambda_{visual}$ should be higher than the value of $\lambda_{semantic}$, and the other way around for concepts which rely only on semantic information. For example, the concept ``\textit{red objects}'' is a visual concept, since color is usually captured by visual features, whereas ``\textit{works of Picasso}'' is a semantic concept, as the user is interested in seeing the works done by Picasso rather than simply seeing works that are visually similar to the provided examples.
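Returning to the semantic affinity of the previous subsection, a minimal sketch of the Lin similarity and one plausible RBF-based affinity built on top of it; the paper does not pin down the exact composition, so `semantic_affinity` and its form are our assumption:

```python
import math

def lin_similarity(p_a, p_b, p_lso):
    """Lin similarity D^Lin(A,B) = 2 log p(lso(A,B)) / (log p(A) + log p(B)).

    p_a, p_b: fraction of all images under classes A and B;
    p_lso: fraction under their lowest common ancestor in WordNet.
    Equals 1 when A = B and approaches 0 as the common ancestor
    approaches the root (p_lso -> 1).
    """
    return 2.0 * math.log(p_lso) / (math.log(p_a) + math.log(p_b))

def semantic_affinity(p_a, p_b, p_lso, sigma=1.0):
    """One plausible RBF on the Lin dissimilarity (an assumption,
    not the paper's exact formula)."""
    d = 1.0 - lin_similarity(p_a, p_b, p_lso)
    return math.exp(-d * d / (2.0 * sigma ** 2))
```

With equal class sizes, a deeper (rarer) common ancestor yields a higher similarity, matching the cat/dog versus cat/aircraft example.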
\begin{figure} \centering \noindent\subfigure[Mean label values in bin]{\label{fig:makeup-1}\includegraphics[width=0.2\textwidth, height=0.13\textheight]{0.pdf}} \quad \noindent\subfigure[Cumulative distribution of $f^*$]{\label{fig:makeup-2}\includegraphics[width=0.2\textwidth, height=0.13\textheight]{01.pdf}} \caption{Plot of mean label values for the points in a bin, obtained by creating a histogram of $f^*$ values, and the cumulative distribution of the $f^*$ values, for the concept ``\textit{makeup accessories}'' with 10 labeled points.} \label{fig:makeup} \end{figure} \section{Active learning for concept detection} An interactive system learns a user's query concept from the few example images provided by the user. However, these images may not be sufficient for the classifier to understand the concept the user is trying to search for. Active learning helps to systematically increase the labeled training data by querying the user for labels of the most informative points. Since active learning involves interaction with the user, the aim is to maximize the learning with as few user interactions as possible. In this work, we use an uncertainty-sampling-based active learning technique which considers the labels of the most ambiguous points to be the most informative. We propose a method to find the most ambiguous points in the GSSL framework, which first estimates the decision boundary and then queries the user for labels of the points around it. \subsection{Uncertainty sampling} As shown by \citeauthor{tong2001support}, points around the decision boundary are the most informative ones. However, unlike SVM, where the separating hyperplane is known, the decision boundary for the GSSL-based method is not known. As seen in the previous section, the label function $f^*$ is a continuous output. Since the $f^*$ values are not probabilities $p(y|X)$, we must find the decision boundary in a different way.
The correct decision boundary is the one that separates the positive points from the negative points and also reflects the class prior correctly. Since the value of $f^*$ for a point is consistent with the manifold and cluster assumptions, a point surrounded by more positive points will likely be positive, and a point surrounded by more negative points will likely be negative. This means points with a high $f^*$ will be positive and points with a smaller $f^*$ will be negative. Thus, to find the decision boundary we must find a threshold within the range of $f^*$ such that all points above it are positive and all points below it are negative. Once we find such a threshold, we can find the points closest to it and ask the user to label them, since these are the points the classifier is most uncertain about. \begin{figure*}[!ht] \centering \noindent\subfigure[Zeroth iteration]{\includegraphics[width=0.18\textwidth, height=0.09\textheight]{0wo.pdf}} \quad \noindent\subfigure[With adaptive threshold after 20 and 200 labeling rounds]{\includegraphics[width=0.18\textwidth, height=0.09\textheight]{20.pdf}\includegraphics[width=0.18\textwidth, height=0.09\textheight]{200.pdf}} \quad \noindent\subfigure[With constant threshold after 20 and 200 labeling rounds]{\includegraphics[width=0.18\textwidth, height=0.09\textheight]{20wo.pdf}\includegraphics[width=0.18\textwidth, height=0.09\textheight]{200wo.pdf}} \noindent\subfigure[Zeroth iteration]{\includegraphics[width=0.18\textwidth, height=0.09\textheight]{0wo1.pdf}} \quad \noindent\subfigure[With adaptive threshold after 20 and 200 labeling rounds]{\includegraphics[width=0.18\textwidth, height=0.09\textheight]{201.pdf}\includegraphics[width=0.18\textwidth, height=0.09\textheight]{2001.pdf}} \quad \noindent\subfigure[With constant threshold after 20 and 200 labeling rounds]{\includegraphics[width=0.18\textwidth, height=0.09\textheight]{20wo1.pdf}\includegraphics[width=0.18\textwidth, height=0.09\textheight]{200wo1.pdf}} \caption{Plot of mean label values (top) and cumulative distribution of the $f^*$ values (bottom), for the concept ``\textit{makeup accessories}'' with 10 labeled points after zero, 20, and 200 rounds of active learning using adaptive (b, e) and constant (c, f) thresholds.} \label{fig:img-threshold} \end{figure*} \subsection{Adaptive threshold} The choice of threshold is crucial for using GSSL-based methods for classification. The correct threshold should be indicative of the prior probability of the concept. We discuss two techniques for choosing the threshold: simply setting it to a constant value of zero, and adapting it to better estimate the class prior. \subsubsection{Constant threshold} The usual choice of threshold with $y_i \in \{-1, 1\}$ is a constant such as zero. This is a reasonable choice when the true prior probability is nearly fifty percent. However, in concept detection over a large database, very few items actually belong to the concept the user is looking for, and thus the actual prior probability is extremely small. In such a case, zero is not a good choice of threshold. Figure \ref{fig:makeup} (right) shows an example of the cumulative distribution of $f^*$ values for an Imagenet concept, ``\textit{makeup accessories}''. The true prior for that concept is about 0.005, and the corresponding threshold should be about 0.2. If one uses the zero threshold ($f^*=0$), all images will be classified as positive (i.e., relevant), which is certainly not correct. In addition to the cumulative histogram, Figure \ref{fig:makeup} (left) shows the mean label value ($y \in \{-1, 1\}$) of the corresponding points after binning the $f^*$ values.
We see that the points near the threshold ($f^*=0$) are mostly negative, which shows not only that the threshold is wrong, but also that the new points chosen near this wrong decision boundary for the next round of active learning will not be very informative, since the current classifier has already assigned low $f^*$ values to them. \begin{algorithm} \caption{Algorithm for adaptive threshold} \label{alg:alg-adaptive} \begin{flushleft} (Online steps) \begin{algorithmic} \State $i \Leftarrow 1$ \State $f^{*} \Leftarrow $ final $f^{*}$ for all features \State $\Theta \Leftarrow 0$ \While{$i \neq $ Maximum number of active learning rounds} \State $x \Leftarrow $ Point whose $f^{*}$ value is closest to $\Theta$ \State Query the label for $x$ \If{Predicted label of $x$ $\neq$ actual label of $x$} \If{Predicted label of $x$ $=$ -1} \State $\Theta \Leftarrow \Theta - \frac{1}{\alpha i}$ \Else \State $\Theta \Leftarrow \Theta + \frac{1}{\alpha i}$ \EndIf \EndIf \State Add queried point to the set of labeled points \State Obtain $f^{*}$ using the new set of labeled points \State $i \Leftarrow i + 1$ \EndWhile \end{algorithmic} \end{flushleft} \end{algorithm} \subsubsection{Adaptive threshold} As described earlier, an ideal threshold is one which can correctly determine the labels of all the points, but this is possible only if the classifier is highly accurate. Because only a few labeled examples are available at the start of the search, the GSSL classifier is initially not very accurate. Examining figure \ref{fig:makeup} closely, we can see that a good choice of threshold is the $f^*$ value corresponding to the bin whose mean label value is close to zero, as this bin has an equal number of positive and negative points. Since we do not know the label values of all the points beforehand, we need a method that approximates such a threshold. We propose algorithm \ref{alg:alg-adaptive} to do this.
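The per-round update rule of algorithm \ref{alg:alg-adaptive} can be sketched in isolation (the function name is our own):

```python
def update_threshold(theta, predicted, actual, i, alpha=2.0):
    """One labeling round of the adaptive-threshold rule.

    On a mistake, move theta by 1/(alpha * i) toward correcting it;
    the step size shrinks as rounds accumulate and f* stabilizes.
    predicted/actual are in {-1, +1}; i is the 1-based round index.
    """
    if predicted != actual:
        if predicted == -1:   # missed a positive point: lower the threshold
            theta -= 1.0 / (alpha * i)
        else:                 # false positive: raise the threshold
            theta += 1.0 / (alpha * i)
    return theta
```

A correct prediction leaves the threshold untouched, so once the queried points match their predictions, the boundary stops moving.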
The algorithm starts with an initial threshold of zero and queries the point whose $f^*$ value is closest to it. If the predicted and actual labels (obtained from the user) match, no update is made to the threshold; otherwise, the threshold is updated. If the predicted label of the queried point is negative when it should be positive, we lower the threshold to classify the point correctly; when the predicted label is positive but should be negative, we raise the threshold. Doing this brings us closer to the actual decision boundary. Initially, we make larger adjustments to the threshold, but as the classifier becomes more accurate through the active learning process, only small adjustments are needed as the $f^*$ values stabilize. In figure \ref{fig:img-threshold}, we contrast the adaptive and constant thresholds. Panels (b) and (e) show the effect of the adaptive threshold: the algorithm succeeds in moving the threshold to the bin with zero mean label value. The increase in the F1 score suggests that the adaptive threshold provides better classification results and that the points queried during active learning help the classifier refine the concept boundary. This can be contrasted with panels (c) and (f), which show the effect of choosing points around zero, which actually hurts the learning of the classifier in the beginning. In figure \ref{fig:img-threshold}, starting with the same labeled points, twenty rounds of active learning with the adaptive threshold provide a much better F1 score (0.61) than using zero as the threshold (0.27). The classifier with the zero threshold begins to catch up with the adaptive threshold only after many iterations, which makes the proposed method superior to the constant-threshold method in practice. An alternative to the incremental update in algorithm \ref{alg:alg-adaptive} is a bisection-based search.
However, with a bisection-based method there is a possibility that the correct value of the threshold is never reached, since the distribution of $f^*$ changes considerably during the first few rounds of active learning. These abrupt changes in the $f^*$ values are also why the curves in figures \ref{fig:img-f1} and \ref{fig:svm-img-f1} are slightly rough. On the other hand, if the distribution of $f^*$ values remained fixed, the bisection method would be ideal for finding the correct threshold in logarithmic time. \vspace{-0.1in} \begin{algorithm} \caption{Active Semi-Supervised Algorithm (Full)} \label{alg:alg-active} \begin{enumerate} \item (Offline steps) \begin{algorithmic}[1] \item Compute \textit{visualEigenfunctions} using the offline steps of algorithm \ref{alg:alg-fergus} \item Compute \textit{semanticEigenfunctions} using the offline steps of algorithm \ref{alg:alg-fergus-modified} \end{algorithmic} \item (Online steps) \begin{algorithmic}[1] \item Start with a few initial examples with labels \item Obtain $f^*_{visual}$ using the online steps of algorithm \ref{alg:alg-fergus} \item Obtain $f^*_{semantic}$ using the online steps of algorithm \ref{alg:alg-fergus-modified} \item Use algorithm \ref{alg:alg-adaptive} with $f^*$ = $f^*_{visual}\lambda_{visual}$ + $f^*_{semantic}\lambda_{semantic}$ and $\lambda_{visual}$ + $\lambda_{semantic}$ = 1 to adapt the threshold and query labels of informative points. \end{algorithmic} \end{enumerate} \end{algorithm} \vspace{-0.1in} \section{Experiments} We use two different datasets for our experiments. The first is the AWA \cite{xian2017zero} dataset, which contains about 37,000 images of 50 animal classes. Each animal is annotated with 85 different attributes in categories such as color, texture, shape, and habitat. The second dataset is the downsampled version of the Imagenet \cite{deng2009imagenet} dataset \cite{chrabaszcz2017downsampled}. This dataset includes all images from Imagenet, resized to $64 \times 64$.
It has a total of 1.2 million images belonging to 1000 classes. \subsection{Creating concepts from class labels} We create concepts from the original labels of the datasets by grouping the labels based on common attributes, which can be visual, semantic, or both. For the AWA dataset, we create 30 different concepts by grouping the animals based on attributes such as color, texture, shape, and body parts. For example, the concept ``\textit{big animals with black stripes}'' includes animals like ``zebras'' and ``tigers''. For Imagenet, we rely on the WordNet hierarchy to create concepts. We merge the leaf-level nodes into their common ancestor based on the hierarchy and then use this ancestor node as the concept. For example, the labels ``daisy'' and ``yellow lady's slipper orchid'' have a common ancestor ``\textit{flower}'', and thus ``\textit{flower}'' is a concept. We create 30 different concepts in this way. Since a typical retrieval task in a large database has only a handful of relevant results, we ensure that every concept we test on has fewer than $5\%$ ($<$ 1800 out of 37,000) positive points for AWA and fewer than $1\%$ ($<$ 12,000 out of 1,200,000) for Imagenet. \subsection{Feature extraction} We use pre-trained deep learning networks such as Resnet \cite{he2016deep} and Xception \cite{chollet2016xception} for visual features. To extract the features for the Imagenet dataset, we pass all the images through a pre-trained Xception network available in Keras \cite{chollet2015keras}. The input to the Xception network is an image of size $299 \times 299$; we resize our images to this size and obtain an output of 2048 dimensions. We use these features directly for the Imagenet dataset. For the AWA dataset, we extract the features from the Resnet network using a similar procedure. We then use these features in a multi-task learning setting to train a neural network that can detect different attributes, and use the features from the shared layer as visual features for the AWA dataset.
We use the WordNet hierarchy for obtaining semantic features. Since the Imagenet dataset is built using the WordNet hierarchy, each class label has a synset associated with it. Likewise, for AWA, we map each animal to its nearest synset in WordNet. We use these synsets to compute semantic features. Specifically, we use the Lin similarity measure \cite{budanitsky2001semantic} to construct the affinity matrix as described in section 4.2. \subsection{Train and test splits} For the Imagenet dataset, we randomly select 500 images belonging to each class label (leaf nodes) to form the test set. For AWA, we randomly choose 100 images from each class label for our test set. Thus, the test set contains 500,000 images for Imagenet and 5000 images for AWA. Performance evaluation (F1 scores) is based on the images in the test set. The remaining images act as a large pool of unlabeled data. The user labels a few images from this pool and starts the search; the system uses the remaining images from this pool of unlabeled data for active learning. \subsection{Computing the eigenfunctions} Here we describe the implementation details for computing the eigenfunctions for visual and semantic features. For the eigenfunctions of the visual features, we first use PCA to reduce the dimensionality of the features to 64, from 2048 dimensions in Imagenet and 512 dimensions in AWA. Then we follow the offline steps of algorithm \ref{alg:alg-fergus}, discretizing the density into 500 bins and computing the eigenfunctions for a single axis using equation 2. We repeat this procedure 64 times to get eigenfunctions for each axis of the input. We then sort the eigenvalues obtained over all the axes and discard the eigenfunctions corresponding to nearly zero eigenvalues ($< 10^{-10}$), as these eigenfunctions correspond to the constant solution of equation 2.
After discarding the eigenfunctions corresponding to extremely small eigenvalues, we keep the 256 eigenfunctions with the smallest eigenvalues (roughly 4 per dimension) for both Imagenet and AWA, and then interpolate all the data using these 256 eigenfunctions. To compute the eigenfunctions of the semantic features, we follow the offline steps of algorithm \ref{alg:alg-fergus-modified}: we use the affinity matrix created from the WordNet hierarchy to solve equation 2. We keep 100 eigenfunctions for Imagenet and 10 eigenfunctions for AWA after discarding the eigenfunctions corresponding to extremely small eigenvalues, and interpolate the rest of the data using them. Computing the eigenfunctions takes $<1$ minute for the visual features and $<10$ seconds for the semantic features on a single-core CPU machine. \begin{figure}[t] \centering \noindent\subfigure[AnimalWithAttributes]{\label{fig:awa-f1}\includegraphics[width=0.22\textwidth, height=0.13\textheight]{F1_AWA.pdf}} \quad \noindent\subfigure[Imagenet]{\label{fig:img-f1}\includegraphics[width=0.22\textwidth, height=0.13\textheight]{F1_img.pdf}} \caption{F1 scores for 30 concepts defined on the AWA (left) and Imagenet (right) datasets, averaged over 20 runs. The graph contrasts the method with and without active learning (Non-AL). It also compares the active learning methods with and without the adaptive threshold.} \label{fig:scores} \end{figure} \subsection{Computing $f^*$} To compute the initial label functions, we randomly label 10 images: 9 that belong to the concept and 1 that does not. This is equivalent to the user starting the search by providing some labels for images they are interested in seeing. Using these labeled images, we compute the label functions for the visual and semantic features by solving equation 1 following the online steps of algorithm \ref{alg:alg-fergus} for visual features and algorithm \ref{alg:alg-fergus-modified} for semantic features.
Once we have the label functions ($f^*_{visual}$ and $f_{semantic}^*$) for both feature types, we combine the individual predictions as described in section 4.3. This process takes $<5$ seconds on a single-core CPU machine. We use $\lambda = 0.5$ when combining the predictions; this value worked well for the concepts we tested on and is equivalent to giving equal weight to the information from the visual and semantic features. We evaluate the performance of the method by measuring the F1 scores for all the concepts, averaged over 20 runs. \subsection{Using Active Learning} As described in section 5, we use active learning to query the labels of the most informative points, i.e., the points near the decision boundary. We follow algorithm \ref{alg:alg-adaptive} to find the threshold which acts as the decision boundary and then select the points closest to it. For our experiments, the threshold is initialized to zero, and $\alpha = 2$ (based on cross-validation). We start each experiment by labeling 10 images and from then on label only a single image in each round of active learning; the classifier is retrained with each newly added point. The results in figure \ref{fig:scores} show the performance of three techniques for adding points to the labeled set. The first is random sampling, where the system queries a random point to be labeled. We see that adding random points stops helping the classifier after a certain stage and in fact hurts performance. This is because random sampling has a high probability of adding a negative point to the labeled set, as there are a large number of irrelevant images and only a few relevant ones. As many negative points are added, the influence of the positive points in the graph eventually decreases, until all predictions become negative. Since Imagenet has many more negative points, this effect is prominent in figure \ref{fig:img-f1}.
The other two methods are based on active learning, with and without the adaptive threshold. Both active learning methods perform considerably better than random selection, and active learning with the adaptive threshold performs significantly better than sampling points around a constant threshold. The steep increase in the F1 scores with the adaptive threshold on both the AWA and Imagenet datasets indicates that it learns the concept with only a few rounds of interaction with the user. In figure \ref{fig:img-f1}, the F1 score for the adaptive threshold jumps from nearly zero to a high value: the threshold is initially set to zero, and after the first round of active learning the adaptive threshold takes over and boosts the performance. Although both methods of active learning give similar performance in the long run, no user wants to do more than a few rounds of active learning, so the initial region of the graph is the most important. In figure \ref{fig:svm_scores} we compare our method against an SVM with active learning. Since it is difficult to incorporate graph-type information directly into an SVM, we evaluate it using only visual features, querying the points nearest to the decision boundary for active learning. Our experiments show that our approach with only visual features performs better than the SVM on both datasets and is also significantly faster, and combining visual and semantic information provides further significant gains over the SVM. Additional experimental results are available at http://bit.ly/2EfjXf2.
\begin{figure}[t] \centering \noindent\subfigure[AnimalWithAttributes]{\label{fig:svm-awa-f1}\includegraphics[width=0.22\textwidth, height=0.13\textheight]{F1_SVM_awa.pdf}} \quad \noindent\subfigure[Imagenet]{\label{fig:svm-img-f1}\includegraphics[width=0.22\textwidth, height=0.13\textheight]{F1_SVM_img.pdf}} \caption{Comparison of our method against an SVM with active learning. We contrast the performance of the SVM with visual (V) features against our method using only visual features, only semantic (S) features, and both visual and semantic features.} \label{fig:svm_scores} \end{figure} \begin{figure*} \begin{center} \subfigure[\textbf{Concept: ``\textit{Person playing a wind instrument}''} Images must show people playing different wind instruments]{\label{fig:img-img1}\includegraphics[width=\textwidth, height=0.14\textheight]{new_img3}} \subfigure[\textbf{Concept: ``\textit{Group of animals}''} Images retrieved must have more than one animal in them.]{\label{fig:img-awa1}\includegraphics[width=\textwidth, height=0.14\textheight]{new_awa3}} \subfigure[\textbf{Concept: ``\textit{Keyboard instruments}''} Images of instruments such as the accordion, piano, and upright piano are considered relevant.]{\label{fig:img-img2}\includegraphics[width=\textwidth, height=0.13\textheight]{new_img2}} \\ \subfigure[\textbf{Concept: ``\textit{Brown bulbous animals with horns}''} Images of animals such as buffaloes, oxen, etc. are considered relevant]{\label{fig:img-awa2}\includegraphics[width=\textwidth, height=0.13\textheight]{new_awa2}} \\ \subfigure[\textbf{Concept: ``\textit{Makeup accessories}''} Images of makeup accessories such as face powder and lipstick are considered relevant.]{\label{fig:img-img3}\includegraphics[width=\textwidth, height=0.13\textheight]{new_img1}} \subfigure[\textbf{Concept: ``\textit{Furry animals with black stripes}''} Images of animals such as raccoons, zebras, tigers, etc.
are considered relevant.]{\label{fig:img-awa3}\includegraphics[width=\textwidth, height=0.13\textheight]{new_awa1}} \end{center} \caption{Retrieving images for various concepts defined on the Imagenet and AWA datasets. In images (a)-(f), the images above the first arrow are those provided by the user to start the search; the images above the second and third arrows are those queried by the system for labels during active learning. The remaining images show the top-16 results retrieved by the system after 0, 4 and 8 rounds of active learning.} \label{fig:imagenet} \end{figure*} \subsection{Use case examples} The results of retrieving various concepts with our method are presented in figure \ref{fig:imagenet}. The user starts by providing some example images to the system. Using visual and semantic information from these images, $f^*$ is computed. Images with the highest $f^*$ are shown to the user, along with an image near the decision boundary for labeling. Using the newly labeled image, $f^*$ is recomputed. This process continues until the user is satisfied with the results. Figures \ref{fig:img-img1}, \ref{fig:img-img2} and \ref{fig:img-img3} show retrieval results for the concepts ``\textit{a person playing a wind instrument}'', ``\textit{keyboard instruments}'' and ``\textit{makeup accessories}'' defined on Imagenet. For the concept ``\textit{makeup accessories}'' the system must find images of face powder, lipstick, etc., and for the concept ``\textit{keyboard instruments}'' it must retrieve images of organs, grand pianos, accordions, etc. The concept ``\textit{person playing a wind instrument}'' is a difficult one, as it requires the system to retrieve images of all wind instruments (semantic) with the condition that each image must show a person playing that instrument (visual). Good results for this concept indicate that the system is using both semantic and visual information.
Figures \ref{fig:img-awa1}, \ref{fig:img-awa2} and \ref{fig:img-awa3} show rankings for concepts defined on AWA. For the concept ``\textit{furry animals with black stripes}'', images of animals such as zebras, tigers, raccoons and skunks are relevant, and for the concept ``\textit{brown bulbous animals with horns}'' images of moose, oxen, etc.\ are relevant. The concept ``\textit{group of animals}'' requires the retrieved images to contain more than one animal. Our results clearly show that the system is able to retrieve images of several different animals (semantic) present in a group (visual). For all concepts, after just 8 rounds of labeling, i.e.\ with about 12 labeled images, the top-ranked results start to precisely reflect the concept. It takes less than three minutes for the system to reach this state (including the time for computing the ranking 9 times and the time taken by the user to label 12 images) on a single-core CPU machine with 64 GB of memory. This attests that our method is suitable for retrieval from large databases. \section{Conclusion and Discussion} In this work, we proposed a fast and scalable method that combines active learning with efficient GSSL to learn a user's query concept with minimal user interaction. We showed how to use features of different modalities in the GSSL framework by constructing separate graphs and obtaining the final predictions as a convex combination of the individual predictions. We showed that points selected based on the adaptive threshold are the most informative ones and help the classifier learn the query concept quickly. Good results on the Imagenet and AWA datasets are concrete evidence of the effectiveness of our method for a large-scale system. In the future, we plan to explore the applicability of different active learning methods to large-scale problems.
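The modality-fusion step described in the conclusion, where the final predictions are a convex combination of the per-graph predictions, amounts to a few lines of code. A minimal sketch; the function names and the weight parameter $\mu$ are our notation:

```python
import numpy as np

def combine_predictions(f_visual, f_semantic, mu=0.5):
    """Fuse per-graph GSSL predictions by a convex combination.

    mu weighs the visual graph against the semantic graph;
    mu = 0.5 treats both modalities equally.
    """
    if not 0.0 <= mu <= 1.0:
        raise ValueError("mu must lie in [0, 1] for a convex combination")
    return mu * np.asarray(f_visual) + (1.0 - mu) * np.asarray(f_semantic)

def rank(f_combined):
    """Rank images by the fused score, highest first."""
    return np.argsort(-np.asarray(f_combined))
```

Because each graph's prediction is computed independently, the fusion cost is linear in the number of images, which is what keeps the combined method scalable to million-image collections.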
\section{Appendix} \subsection{Names of concepts defined on Imagenet} The following are the different concepts we defined on the Imagenet dataset for evaluating our method:\\ \textit{wheel, makeup, poodle, elephant, shore, hosiery, setter, fox, wolf, bear, free-reed instrument, source of illumination, flower, heron, soft-finned fish, coraciiform bird, bridge, domestic cat, crocodilian reptile, bowl, guitar, piece of cloth, sled dog, thrush, sailboat, seabird, stork, citrus, frozen dessert, piano}. \subsection{Names of concepts defined on AWA} The following attributes from the list of 85 attributes were combined to create different concepts on AWA:\\ \textit{blue hairless strain teeth, blue tough skin bulbous, blue tough skin flippers, brown hairless hooves, orange spots quadrupedal, white spots flippers, yellow spots claws, white spots small, brown tough skin flippers, black tough skin hands, blue hairless tail, brown hairless flippers, brown spots claws, yellow furry big, white spots claws, yellow spots meat teeth, black hairless small, blue hairless big, white stripes small, black hairless strain teeth, black stripes meat teeth, brown tough skin hands, white stripes paws, orange spots lean, brown tough skin claws, white hairless hooves, orange furry chew teeth, black tough skin strain teeth, brown spots small, black spots lean} \subsection{Mean Average Precision Curves} A precision/recall curve plots precision and recall for all possible thresholds. The curve decreases between the point of (highest precision, lowest recall) and the point of (lowest precision, highest recall); a slowly decreasing curve is considered ideal. Mean average precision summarizes the precision/recall curve in a single value: it is equivalent to the area under the precision/recall curve and is independent of the threshold. Here we report the change in average precision scores after adding a single point queried by active learning.
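The threshold-free average precision described above can be computed directly from a ranking as the mean of precision@k over the ranks k at which a relevant item appears. A minimal sketch; the function name and toy data are ours:

```python
import numpy as np

def average_precision(scores, relevant):
    """Average precision of a ranking: mean of precision@k over the
    ranks k at which a relevant item appears. This equals the area
    under the precision/recall curve and involves no threshold."""
    order = np.argsort(-np.asarray(scores, dtype=float))  # best score first
    rel = np.asarray(relevant, dtype=bool)[order]
    if not rel.any():
        return 0.0
    hits = np.cumsum(rel)                 # relevant items seen so far
    ranks = np.arange(1, len(rel) + 1)
    return float(np.mean(hits[rel] / ranks[rel]))
```

Averaging this value over the 30 concepts gives the mean average precision reported in the curves.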
Figure \ref{fig:ap-scores} contrasts the performance of our active learning method with the adaptive threshold against active learning with a constant threshold, and against not using active learning at all, on the AWA and Imagenet datasets. The comparison suggests that active learning with the adaptive threshold is better than the other methods. Figure \ref{fig:svm-ap-scores} compares the performance of our method using only visual, only semantic, and combined visual and semantic features against SVM with only visual features. Our method performs well here too. \begin{figure}[!htb] \centering \noindent\subfigure[AnimalWithAttributes]{\label{fig:awa-ap}\includegraphics[width=0.22\textwidth, height=0.15\textheight]{AP_awa.pdf}} \quad \noindent\subfigure[Imagenet]{\label{fig:img-a}\includegraphics[width=0.22\textwidth, height=0.15\textheight]{AP_img.pdf}} \caption{Average Precision scores for 30 concepts defined on the AWA and Imagenet datasets. The graph contrasts the method with and without active learning, and compares the active learning methods with and without the adaptive threshold.} \label{fig:ap-scores} \end{figure} \begin{figure}[!htb] \centering \noindent\subfigure[AnimalWithAttributes]{\label{fig:awa-svm-ap}\includegraphics[width=0.22\textwidth, height=0.15\textheight]{AP_SVM_awa.pdf}} \quad \noindent\subfigure[Imagenet]{\label{fig:img-svm-ap}\includegraphics[width=0.22\textwidth, height=0.15\textheight]{AP_SVM_img.pdf}} \caption{Average Precision scores for 30 concepts defined on the AWA and Imagenet datasets. We contrast the performance of SVM with visual features against our method using only visual features, only semantic features, and both visual and semantic features.} \label{fig:svm-ap-scores} \end{figure} \subsection{Effect of different step size $\alpha$} Figure \ref{fig:img-ap} shows the cross-validation results for choosing the step size $\alpha$ in Algorithm 3.
This is important since we want a step size that is neither too small nor too large: a small step size slows convergence to the decision boundary of the concept, while a large value leads to instability in the initial runs. For cross validation we choose 30 concepts different from the ones chosen for evaluation and run our adaptive-threshold method with different values of $\alpha$. Since $\alpha = 2$ gives the best performance, we use this value in our experiments. \begin{figure}[!htb] \centering \includegraphics[width=0.35\textwidth, height=0.20\textheight]{AWA_step.pdf} \caption{F1 Scores for 30 concepts defined on the AWA dataset. The graph shows the effect of different step sizes.} \label{fig:img-ap} \end{figure} \subsection{F1 scores for individual concepts} In this section we show the F1 scores for individual concepts. Our method performs well on all the different concepts. The roughness in the Imagenet curves is attributed to the fact that the $f^*$ values change considerably when estimating the ranking of a million points based on only a few labeled points. \begin{figure*}[!htb] \foreach \i in {1,...,6}{ \foreach \j in {1,...,5}{ \noindent\subfigure{\includegraphics[width=0.19\textwidth, height=0.16\textheight]{f1-awa-\i-\j.pdf}} }\\ } \caption{F1 scores for all concepts defined on AWA dataset} \label{fig:f1-awa} \end{figure*} \begin{figure*} \foreach \i in {1,...,6}{ \foreach \j in {1,...,5}{ \subfigure{\includegraphics[width=0.19\textwidth, height=0.16\textheight]{f1-img-\i-\j.pdf}} }\\ } \caption{F1 scores for all concepts defined on Imagenet dataset} \label{fig:f1-imagenet} \end{figure*} \subsection{Average Precision scores for individual concepts} In this section we show the Average Precision values for individual concepts. Our adaptive-threshold method performs well on all the different concepts compared with the constant threshold and with the approach that does not use active learning at all.
\begin{figure*}[!htb] \foreach \i in {1,...,6}{ \foreach \j in {1,...,5}{ \noindent\subfigure{\includegraphics[width=0.18\textwidth, height=0.15\textheight]{ap-awa-\i-\j.pdf}} }\\ } \caption{Average Precision scores for all concepts defined on AWA dataset} \label{fig:ap-awa} \end{figure*} \begin{figure*}[!htb] \foreach \i in {1,...,6}{ \foreach \j in {1,...,5}{ \noindent\subfigure{\includegraphics[width=0.18\textwidth, height=0.15\textheight]{ap-img-\i-\j.pdf}} }\\ } \caption{Average Precision scores for all concepts defined on Imagenet dataset} \label{fig:ap-imagenet} \end{figure*} \end{document}
\section{Introduction} Gravitational waves (GWs) carry energy, linear momentum, and angular momentum, and are therefore responsible for the final evolutionary stages of compact binary systems. As energy and angular momentum are dissipated away, the two objects inspiral and eventually merge. The GW-driven orbital decay of two neutron stars was first observed by pulsar timing, leading to a major confirmation of Einstein's theory of general relativity~\cite{1982ApJ...253..908T}. The first landmark detection of GWs was from a binary black hole (BH) which was brought to merger by those same GWs that ultimately reached our detectors~\cite{2016PhRvL.116f1102A}. Similar to how the dissipation of energy and angular momentum causes the orbit of a BH binary to shrink, the emission of linear momentum through GWs causes the binary's center of mass to recoil~\cite{1961RSPSA.265..109B, 1962PhRv..128.2471P}. The key property to generate a GW recoil (or ``kick'') is asymmetry. It is straightforward to show that symmetry prevents linear momentum dissipation during the inspiral and merger of equal-mass, nonspinning BHs. Conversely, a generic BH binary radiates GWs anisotropically: linear momentum is preferentially emitted in some direction, and the binary consequently recoils. BH kicks were first studied using the post-Newtonian (PN) approximation (e.g., Refs.~\cite{1983MNRAS.203.1049F, 1995PhRvD..52..821K, 2005ApJ...635..508B}) but their full astrophysical relevance was only realized after numerical relativity (NR) simulations of BH mergers became possible \cite{2005PhRvL..95l1101P,2006PhRvL..96k1101C,2006PhRvL..96k1102B}. Most of the linear momentum is emitted during the last few orbits and merger, which corresponds to the highly dynamical, fully nonlinear regime that can only be captured with NR simulations.
In particular, simulations showed that BHs formed following a merger may be imparted recoil velocities of up to $5000$ km/s~\cite{2007PhRvL..98w1102C, 2007PhRvL..98w1101G, 2007PhRvD..76f1502T, 2011PhRvL.107w1102L}. The striking astrophysical consequences of these findings were quickly realized (e.g., Refs.~\cite{2007MNRAS.382L...6S, 2007ApJ...662L..63S, 2008ApJ...678..780G, 2008ApJ...682..758S, 2008ApJ...686..829H, 2008MNRAS.390.1311B}): BH recoils might exceed the escape speed of even the most massive galaxies in the Universe~\cite{2004ApJ...607L...9M, 2015MNRAS.446...38G}, thus making galactic ejections a possible outcome of binary mergers~\cite{1989ComAp..14..165R}. Recoiling BHs might give rise to a variety of electromagnetic signatures~\cite{2012AdAst2012E..14K} ---notably a kinematical offset of a set of broad emission lines--- which led to the identification of a few observational candidates~\cite{2008ApJ...678L..81K, 2012ApJ...752...49C, 2014MNRAS.445.1558D, 2014MNRAS.445..515K, 2017A&A...600A..57C, 2017ApJ...840...71K, 2017ApJ...851L..15K} (see also Refs.~\cite{2014ApJ...795..146L, 2016MNRAS.455..484R, 2016MNRAS.456..961B} for detection strategies). As the system recoils, a Doppler shift of the emitted GWs can provide a possible direct observational signature of BH kicks within the reach of future space- and ground-based GW observatories~\cite{2016PhRvL.117a1101G}. Since NR simulations are far too expensive to be performed in astrophysical population studies, BH kicks have mostly been modeled using fitting formulas based on PN theory and calibrated to NR simulations (e.g., Refs.~\cite{2007ApJ...659L...5C, 2007PhRvL..98i1101G, 2008PhRvD..77d4028L, 2012PhRvD..85h4015L, 2013PhRvD..87h4027L}). These ``black box'' expressions return the final kick of the BH remnant given the intrinsic parameters (mass ratio and spins) of the merging binary at some initial separation.
Another so far unexplored possibility to model BH kicks is to compute the flux of linear momentum in GWs using a waveform approximant that can be quickly evaluated in parameter space. Linear momentum dissipation, however, is encoded both in differences between the dominant $l=2, m=\pm 2$ modes and in the higher harmonics ($l>2$)~\cite{2008PhRvD..77l4047B}. This approach, therefore, requires an inspiral-merger-ringdown approximant able to model both higher harmonics (crucial to the linear momentum flux) and misaligned spins (which are known to generate the largest kicks). In this paper we present the first attempt in this direction using the recent NR surrogate model by Blackman \emph{et al.}~\cite{2017PhRvD..96b4058B} --- the first waveform approximant able to model generic precessing systems with higher harmonics. In contrast with the available fitting formulas, our procedure provides not only the final kick speed $v_k$, but also the entire velocity accumulation profile $\mathbf{v}(t)$. We present a thorough exploration of BH recoils for generic systems, which summarizes and extends various previous findings in a coherent fashion. Our numerical code, \textsc{surrkick}, is publicly available and allows for reliable computation of the radiated quantities (energy, linear momentum, and angular momentum) at a moderate computational cost. Our implementation is therefore ideal for porting to larger-scale astrophysical codes which require fast estimates of BH kicks, such as galaxy merger-tree simulations, population-synthesis studies, and GW event-rate predictions. This paper is organized as follows. Section \ref{methods} introduces the main tools of our analysis. Section \ref{results} presents results and comparisons with other methods. Section \ref{accuracy} explores the numerical accuracy of our procedure. Section \ref{code} briefly describes the implementation and usage of our public code. Section \ref{conclusions} draws conclusions and future prospects.
Unless otherwise stated, we use relativists' units $c=G=1$. \section{Methods} \label{methods} \subsection{Numerical-relativity surrogate models} \label{surrogatemodels} Surrogate models interpolate a set of precomputed GW signals and make use of advanced decomposition and interpolation schemes to quickly produce waveforms for any desired point in parameter space. Surrogate models are typically optimized to accurately reproduce the complex gravitational-wave strain, here expanded in terms of spin-weighted spherical harmonics~\cite{1980RvMP...52..299T} \begin{align} h(t,\theta,\phi,\boldsymbol{\lambda}) &= h_+(t,\theta,\phi,\boldsymbol{\lambda}) - i h_\times(t,\theta,\phi,\boldsymbol{\lambda}) \notag \\ &=\sum_{l=2}^\infty \sum_{m=-l}^{+l} h^{lm}(t,\boldsymbol\lambda) \; _{-2}Y_{lm}(\theta,\phi)\,, \label{hmodes} \end{align} where $t$ denotes time, $\theta$ and $\phi$ describe the GW propagation direction, and the symbol $\boldsymbol\lambda$ encodes all the binary's intrinsic parameters. For quasicircular BH binaries, these are the mass ratio $q$ and spin vectors $\boldsymbol{\chi_1},\boldsymbol{\chi_2}$ (the total mass $M$ is a free scale). Surrogate models have been presented for both effective-one-body~\cite{2014PhRvX...4c1006F, 2014CQGra..31s5010P, 2016PhRvD..93f4041P} and NR waveforms~\cite{2017PhRvD..95j4023B, 2017PhRvD..96b4058B}. In this paper we use the NR waveform surrogate model \mbox{NRSur7dq2}~\cite{2017PhRvD..96b4058B} to generate our waveforms. \mbox{NRSur7dq2} is the very first model able to cover the seven-dimensional parameter space describing generic precessing systems. \mbox{NRSur7dq2} is trained on 886 NR waveforms generated with the Spectral Einstein Code (SpEC)~\cite{2000PhRvD..62h4032K} and interpolated using the technique put forward in Ref.~\cite{2014PhRvX...4c1006F}.
It provides modes $h^{lm}$ up to $l\leq4$ for binaries with mass ratios $q=m_2/m_1 \in[0.5,1]$ and dimensionless spin magnitudes $\chi_1,\chi_2\in[0, 0.8]$; updates to extend its validity range are under active development. The model has been shown to be extremely accurate at reproducing the gravitational-wave strain $h$: it outperforms all other available waveform approximants by several orders of magnitude, reaching a level of accuracy comparable to the NR simulations used in the training process~\cite{2017PhRvD..96b4058B}. Waveforms generated with \mbox{NRSur7dq2} span the time range $-4500M \leq t \leq 100M$, where $t=0$ is defined as the time that maximizes the total waveform amplitude $\mathcal{A}^2(t) = \sum_{l,m} | h^{lm}(t) |^2$. The initial time $t=-4500M$ corresponds to about $20$ orbits before merger and the final value $t=100M$ allows for a full dissipation of the signal. Values of $h^{lm}$ are computed at carefully selected time nodes~\cite{2017PhRvD..96b4058B} and later interpolated in time using standard cubic univariate B-splines. More specifically, \mbox{NRSur7dq2} provides the distance-independent dimensionless strain, extrapolated to $\mathcal{I}^+$, i.e.~$\lim_{r\to \infty} r h / M$ where $r$ is the distance from the binary's center of mass and $M$ is the total mass of the binary at the beginning of the evolution. \mbox{NRSur7dq2} allows for the spin directions to be specified at a reference time $-4500M\leq t_{\rm ref}\leq -100M$, in a frame defined such that the more (less) massive BH sits on the positive (negative) x-axis and the Newtonian orbital angular momentum $\mathbf{L}$ lies along the z-axis. Unless otherwise stated, we use $t_{\rm ref}=-100M$. \subsection{Radiated energy and momenta} \label{radiatedexpressions} Multipolar expansions for the radiated energy, linear momentum and angular momentum have been worked out in detail in~Ref.~\cite{2008GReGr..40.1705R} (derived from Refs.~\cite{1980RvMP...52..299T, 2007PhRvD..76d1502L}).
We report their expressions here for completeness.\footnote{% The author of Ref.~\cite{1980RvMP...52..299T} presented his formulas in specially chosen coordinate systems. A more rigorous mathematical framework for these calculations is to go to $\mathcal{I}^{+}$ and present the news tensor, Bondi mass aspect, and other Bondi charges (e.g.~Ref.~\cite{2017PhRvD..95d4002F}). The authors of Ref.~\cite{2008GReGr..40.1705R} used the convention $\Im(a+ib)=ib$, while here we use $\Im(a+ib)=b$.} Whenever terms with $l<2$ or $|m|>l$ are present in the following summations, their coefficients are intended to be zero. In practice, one is also limited to $l\leq l_{\rm max}$ (where, e.g., $l_{\rm max}=4$ for \mbox{NRSur7dq2} waveforms and $l_{\rm max}=8$ for SpEC waveforms). % The energy flux emitted in GWs is provided in terms of the first time derivative of the complex strain $\dot h$ and reads: \begin{align} \frac{dE}{dt} = \lim_{r \rightarrow \infty} \frac{r^2}{16\,\pi} \sum_{l,m} \,\left| \dot h^{l,m} \right|^2 \; . \label{energyflux} \end{align} When integrating to obtain $E(t)$ we set the integration constant $E_0$ to account for the binding energy dissipated in GWs at times $t<-4500M$, before the start of our waveforms, thus enforcing $\lim_{t\to-\infty} E(t)=0$. A straightforward Newtonian calculation yields~\cite{1964PhRv..136.1224P} \begin{equation} \frac{E_0}{M}= \left( \frac{5}{1024} \frac{q^3}{(1+q)^6} \dot E_0\right)^{1/5}, \end{equation} where $\dot E_0$ is estimated from Eq.~(\ref{energyflux}) by averaging over the first $100M$ in time. We have verified that corrections up to 2PN (including spin effects~\cite{Arun:2008kb}) have a negligible impact on $E_0$. One can then define the time-dependent (Bondi) mass of the binary, \begin{equation} M(t) = M - E(t) + E_0 \,, \label{Moft} \end{equation} such that $M(t)$ at the beginning of our waveforms is equal to $M$. 
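Eq.~(\ref{energyflux}) and the Bondi mass of Eq.~(\ref{Moft}) translate directly into code. The following is a minimal sketch under stated assumptions: the function names and the synthetic constant-flux test data are ours, and the modes are assumed to be the distance-independent strain derivatives, i.e.\ already rescaled by $r/M$ as provided by NRSur7dq2.

```python
import numpy as np

def radiated_energy(t, hdot_modes, E0=0.0):
    """Integrate dE/dt = (1/16 pi) sum_lm |hdot^{lm}|^2 over time.

    t          -- time array (units of the total mass M)
    hdot_modes -- dict {(l, m): complex array of d(r h^{lm}/M)/dt}
    E0         -- binding energy radiated before the waveform starts,
                  enforcing E(t) -> 0 as t -> -infinity
    """
    flux = sum(np.abs(hd) ** 2 for hd in hdot_modes.values()) / (16.0 * np.pi)
    # cumulative trapezoidal integration of the flux
    dE = 0.5 * (flux[1:] + flux[:-1]) * np.diff(t)
    return E0 + np.concatenate(([0.0], np.cumsum(dE)))

def bondi_mass(M, E, E0):
    """Time-dependent Bondi mass, M(t) = M - E(t) + E0, which equals M
    at the beginning of the waveform where E = E0."""
    return M - E + E0
```

With the integration constant set this way, the post-merger value of $E(t)$ directly gives the mass deficit of the remnant via the Bondi mass.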
The mass of the post-merger BH in units of the total mass of the binary at early times is \begin{equation} \frac{ \displaystyle\lim_{t\to+\infty} M(t)}{{\displaystyle \lim_{t\to-\infty} M(t)}} = 1- \frac{\displaystyle \lim_{t\to+\infty} E(t)}{M+E_0}. \label{masslimits} \end{equation} The emitted linear momentum is also fully specified by $\dot h$ and crucially includes mixing between modes with different $l$ and $m$: \begin{align} \frac{d P_x}{dt} = &\lim_{r \to \infty} \frac{r^2}{8\, \pi} \Re \Bigg[ \sum_{l,m} \, \dot h^{l,m} \Big( a_{l,m}\, \dot{\bar{h}}^{l,m+1} \notag\\ &+ b_{l,-m} \,\dot{\bar{h}}^{l-1,m+1} - b_{l+1,m+1}\, \dot{\bar{h}}^{l+1,m+1} \Big)\Bigg] \; , \label{eq:dt_px} \\ \frac{d P_y}{dt} = &\lim_{r \to \infty} \frac{r^2}{8\, \pi}\Im \Bigg[ \sum_{l,m}\, \dot h^{l,m} \Big( a_{l,m}\, \dot{\bar{h}}^{l,m+1} \notag\\ &+ b_{l,-m} \,\dot{\bar{h}}^{l-1,m+1} - b_{l+1,m+1}\, \dot{\bar{h}}^{l+1,m+1} \Big)\Bigg] \; , \label{eq:dt_py} \\ \frac{d P_z}{dt} = &\lim_{r \to \infty} \frac{r^2}{16 \pi} \sum_{l,m}\, \dot{{h}}^{l,m} \Big( c_{l,m}\, \dot{\bar{h}}^{l,m} \notag\\ &+ d_{l,m}\, \dot{\bar{h}}^{l-1,m} + d_{l+1,m}\, \dot{\bar{h}}^{l+1,m} \Big) \; , \label{eq:dt_pz} \end{align} where the upper bar denotes complex conjugation and \begin{eqnarray} a_{l,m} &=& \frac{\sqrt{(l-m)\,(l+m+1)}}{l\,(l+1)} \; , \\ b_{l,m} &=& \frac{1}{2\,l}\, \sqrt{\frac{(l-2)\,(l+2)\,(l+m)\,(l+m-1)} {(2l-1)(2l+1)}} \; , \\ c_{l,m} &=& \frac{2\,m}{l\,(l+1)} \; , \\ d_{l,m} &=& \frac{1}{l}\, \sqrt{\frac{(l-2)\,(l+2)\,(l-m)\,(l+m)} {(2l-1)(2l+1)}} \; . \end{eqnarray} The integration constant for the $d\mathbf{P}/dt$ integration is chosen so that the average of $\mathbf{P}$ over the first $1000M$ in time, where linear momentum emission is expected to be negligible, is zero. By conservation of linear momentum, the time profile of the kick imparted to the system is\footnote{% Relativistic corrections are irrelevant here. 
The largest BH kicks are $v_k/c\sim 10^{-2}$, corresponding to Lorentz factors $\gamma-1 \sim 10^{-4}$.} \begin{equation} \mathbf{v}(t) = - \frac{{P_x}(t) \mathbf{\hat x} + {P_y}(t) \mathbf{\hat y} + {P_z}(t) \mathbf{\hat z}}{M(t)}\,, \label{voftprofile} \end{equation} and the final velocity of the post-merger remnant BH is \begin{align} \mathbf{v_k} = \lim_{t\to \infty} \mathbf{v}(t)\,. \label{vkicklimit} \end{align} One can further integrate $\mathbf{v}(t)$ in time to obtain the trajectory $\mathbf{x}(t)= \int \mathbf{v}(t) dt$. Although the binary trajectory is a coordinate-dependent notion, the time integral of the linear momentum dissipated in GWs can be interpreted as the motion of the spacetime's center of mass seen by an observer at $\mathcal{I}^+$~\cite{2017PhRvD..95d4002F}. The angular momentum carried by GWs involves both $h$ and $\dot h$: \begin{align} \frac{d J_x}{dt} = &\lim_{r\rightarrow\infty} \frac{r^2}{32 \pi} \: \Im \Bigg[ \sum_{l,m} \,h^{l,m} \Big( f_{l,m}\, \dot{\bar{h}}^{l,m+1} \notag \\&+ f_{l,-m}\, \dot{\bar{h}}^{l,m-1} \Big) \Bigg]\; , \label{eq:dt_jx} \\ \frac{d J_y}{dt} = &- \lim_{r\rightarrow\infty} \frac{r^2}{32 \pi} \: \Re \Bigg[ \sum_{l,m} \, h^{l,m} \Big( f_{l,m}\, \dot{\bar{h}}^{l,m+1} \notag \\ &- f_{l,-m}\, \dot{\bar{h}}^{l,m-1} \Big) \Bigg]\; , \label{eq:dt_jy} \\ \frac{d J_z}{dt} = &\lim_{r\rightarrow\infty} \frac{r^2}{16 \pi} \: \Im \Bigg[ \sum_{l,m} \,m\, h^{l,m} \,\dot{\bar{h}}^{l,m} \Bigg] \; , \label{eq:dt_jz} \end{align} where \begin{eqnarray} f_{l,m} = \sqrt{l(l+1) - m(m+1)} \; . \label{flm} \end{eqnarray} When integrating $d\mathbf{J}/dt$, we do not adjust the integration constant to account for the angular momentum radiated before the beginning of our waveforms. Contrary to the binding energy, the Newtonian angular momentum of a binary system diverges as separation grows ($J\propto \sqrt{r}$). 
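The mode-mixing coefficients entering Eqs.~(\ref{eq:dt_px})--(\ref{eq:dt_pz}) and Eq.~(\ref{flm}) are simple closed-form expressions. Transcribed into code (the function names are ours), they can be spot-checked against special values; note that the $b_{l,m}$ and $d_{l,m}$ terms vanish identically when the $(l-2)$ factor is zero, so no $l=1$ modes are ever mixed in.

```python
import numpy as np

# Coefficients entering the linear-momentum flux, Eqs. (dP_x/dt)-(dP_z/dt)
def a_lm(l, m):
    return np.sqrt((l - m) * (l + m + 1)) / (l * (l + 1))

def b_lm(l, m):
    return np.sqrt((l - 2) * (l + 2) * (l + m) * (l + m - 1)
                   / ((2 * l - 1) * (2 * l + 1))) / (2 * l)

def c_lm(l, m):
    return 2.0 * m / (l * (l + 1))

def d_lm(l, m):
    return np.sqrt((l - 2) * (l + 2) * (l - m) * (l + m)
                   / ((2 * l - 1) * (2 * l + 1))) / l

# Coefficient entering the angular-momentum flux
def f_lm(l, m):
    return np.sqrt(l * (l + 1) - m * (m + 1))
```

In a practical implementation, terms with $l<2$ or $|m|>l$ in the summations are simply assigned a zero coefficient, and the sums are truncated at $l_{\rm max}$ ($l_{\rm max}=4$ for NRSur7dq2).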
We perform all differentiations and integrations required to extract these radiated quantities analytically on the spline interpolants provided by \mbox{NRSur7dq2}, over the range $-4500M\leq t \leq 100M$. The $t\to\infty$ limits [e.g.~Eqs.~(\ref{masslimits}) and (\ref{vkicklimit})] are approximated with values at $t=100M$. \section{Results} \label{results} \subsection{Anatomy of the kick} \label{anatomy} Nonspinning BH binaries do not receive any recoil for either $q=1$ (because of symmetry) or $q=0$ (which corresponds to the test-particle limit). Recoils are present in between these two limits. Figure~\ref{nospinprofiles} shows the kick profile $\mathbf{v}(t)$ for a series of BH mergers with $q=0.5,\dots, 1$. Axisymmetry prevents linear momentum dissipation along the direction of the orbital angular momentum, i.e.~$\mathbf{v}(t)\cdot \mathbf{\hat z}= 0$ (within numerical errors; see Sec.~\ref{exploiting}). The binary's center of mass oscillates in the orbital plane x-y during the inspiral, until the merger halts these oscillations and imparts the final recoil. The kick velocity grows as $q$ decreases, reaching $v_k\simeq 148$ km/s for $q=0.5$. The largest kick achievable for a nonspinning system is $v_k\simeq 175$ km/s and corresponds to $q\sim 0.36$~\cite{2007PhRvL..98i1101G}, which is outside the parameter space currently covered by \mbox{NRSur7dq2}. The trajectory of the spacetime's center of mass $\mathbf{x}(t)$ for $q=0.5$ and $\chi_1=\chi_2=0$ is shown in the left panel of Fig.~\ref{centerofmass}. One last oscillation occurs after merger, and is responsible for most of the kick. This effect is also visible in Fig.~\ref{nospinprofiles}, where we see that the system typically accelerates at $t\sim 10M$ after merger, with the final burst of linear momentum radiation lasting only a few $M$ in time.
Interestingly, the projection of the recoil profile along the final kick direction $\mathbf{v}(t)\cdot \mathbf{\hat v_k}$ is not monotonic after merger: the binary suddenly decelerates at about $t\sim 15M$, after which the imparted velocity settles down to the asymptotic value $v_k$. This effect has been dubbed \emph{antikick}~\cite{2010PhRvL.104v1101R}, and turns out to be a rather generic feature of BH mergers (cf.~Sec.~\ref{statistics} below). \begin{figure}[t] \includegraphics[width=\columnwidth]{nospinprofiles} \caption{Kick profile $\mathbf{v}(t)$ projected along $\mathbf{\hat x}$, $\mathbf{\hat y}$, $\mathbf{\hat z}$ and the direction of the final kick $\mathbf{\hat v_k}$ for a series of non-spinning BH binaries with mass ratio ranging from $q=0.5$ (light orange) to $q=1$ (black). The binary's center of mass oscillates in the orbital plane during the inspiral; the final recoil is imparted with a sudden acceleration at $t\sim 10M$ after the peak-amplitude time.} \label{nospinprofiles} \end{figure} \begin{figure*}[t] \includegraphics[page=1,clip,trim=1.cm 0cm 0cm 0cm, width=0.325\textwidth]{centerofmass} \includegraphics[page=2,clip,trim=1.cm 0cm 0cm 0cm, width=0.325\textwidth]{centerofmass} \includegraphics[page=3,clip,trim=1.cm 0cm 0cm 0cm, width=0.325\textwidth]{centerofmass} \caption{Center-of-mass trajectory $\mathbf{x}(t)=\int\mathbf{v}(t) dt$ for three binary configurations as described in the legends. The circle markers on each curve correspond to $t=0$. The left panel shows a recoil due to mass asymmetry only: the center of mass oscillates in the orbital plane during the inspiral and is finally pushed after merger. The middle panel shows a complicated interplay of mass and spin asymmetry, with the initial oscillations being greatly distorted at merger by the superkick effect. Finally, the right panel shows the simpler trajectory of a binary receiving a very large kick of $\sim 3000$ km/s. 
An animated version of this figure is available at \href{https://davidegerosa.com/surrkick}{davidegerosa.com/surrkick}.} \label{centerofmass} \end{figure*} \begin{figure}[t] \includegraphics[width=\columnwidth]{hangupErad} \caption{Radiated energy ${E}(t)$ for binaries with mass ratio $q=0.5$ and spins of magnitude $\chi_1=\chi_2=0.8$ (anti)aligned to the orbital angular momentum. Four configurations are shown ---up-up, down-down, up-down, down-up--- where the term before (after) the hyphen refers to the spin of the heavier (lighter) BH being co-/counter-aligned with the binary's orbital angular momentum. For comparison, we also show $E(t)$ for a non-spinning system with the same mass ratio. Because of the orbital hang-up effect, BH binaries with (anti-)aligned spins radiate more (less) energy compared to non-spinning systems with the same mass ratio.} \label{hangupErad} \end{figure} \begin{figure*} \centering \includegraphics[page=1,width=0.47\textwidth]{spinaligned} \hspace{0.04\textwidth} \includegraphics[page=2,width=0.47\textwidth]{spinaligned} \caption{Kick profile $\mathbf{v}(t)$ projected along $\mathbf{\hat x}$, $\mathbf{\hat y}$, $\mathbf{\hat z}$ and the direction of the final kick $\mathbf{\hat v_k}$ for binaries with mass ratio $q=1$ (left) and $q=0.5$ (right), and spins of magnitude $\chi_1=\chi_2=0.8$ (anti)aligned to the orbital angular momentum. Four configurations are shown: up-up, down-down, up-down, down-up, where the term before (after) the hyphen refers to the spin of the heavier (lighter) BH being co-/counter-aligned with the binary's orbital angular momentum. Kicks from non-precessing systems lie in the binary's orbital plane, with the spin kicks being more pronounced for the up-down and down-up configurations in accordance with PN predictions.} \label{spinaligned} \end{figure*} BH spins introduce additional sources of linear momentum dissipation. 
The impact of aligned spins on the radiated energy and linear momentum profile is illustrated in Figs.~\ref{hangupErad} and \ref{spinaligned}, respectively. In particular, we study BH binaries with spin magnitude $\chi_1=\chi_2=0.8$ and four different spin orientations: $\boldsymbol{\hat \chi_1}\cdot \mathbf{\hat z}=\boldsymbol{\hat \chi_2}\cdot \mathbf{\hat z}=1$ (up-up), $\boldsymbol{\hat \chi_1}\cdot \mathbf{\hat z}=\boldsymbol{\hat \chi_2}\cdot \mathbf{\hat z}=-1$ (down-down), $\boldsymbol{\hat \chi_1}\cdot \mathbf{\hat z}=-\boldsymbol{\hat \chi_2}\cdot \mathbf{\hat z}=1$ (up-down), $\boldsymbol{\hat \chi_1}\cdot \mathbf{\hat z}=-\boldsymbol{\hat \chi_2}\cdot \mathbf{\hat z}=-1$ (down-up), where $\mathbf{\hat z} =\mathbf{\hat L}$ at $t_{\rm ref}=-100M$. Although the up-down configuration is generically unstable to spin precession~\cite{2015PhRvL.115n1102G}, the instability develops on longer timescales and can therefore be neglected in this context. The orbital hang-up effect~\cite{2001PhRvD..64l4013D, 2006PhRvD..74d1501C, 2015CQGra..32j5009S} causes binaries with spins co- (counter-) aligned with the binary's angular momentum to merge later (sooner) compared to non-spinning systems with the same mass ratio. Consequently, the energy emitted in GWs increases (decreases) if the total spin $\mathbf{S} = m_1^2 \boldsymbol{\chi_1} + m_2^2\boldsymbol{\chi_2}$ is (\mbox{anti-})aligned with $\mathbf{L}$ (c.f.~Fig.~\ref{hangupErad}). For $q=1$ (Fig.~\ref{spinaligned}, left panel), moderately large recoils of $v_k\sim 350$ km/s are achieved for the up-down and down-up configurations, in agreement with the PN predictions $v_k\propto |\boldsymbol{\hat \chi_1}\cdot \mathbf{\hat L}-\boldsymbol{\hat \chi_2}\cdot \mathbf{\hat L}|$~\cite{1995PhRvD..52..821K} (see~\cite{2007ApJ...668.1140B,2008PhRvD..77d4028L} for numerical explorations). 
The recoil is mostly imparted in the orbital plane; its magnitude is somewhat smaller than in the mass-asymmetry case explored above, and it reduces to a single burst of linear momentum emitted at $t\sim10M$, preceded by a smaller one in the opposite direction at $t\sim-5M$. The $q=1$ up-up configuration presents some linear momentum emitted perpendicular to the orbital plane, resulting in $v_k \sim 50$ km/s. This is the inherent error scale in our model, as symmetry implies $v_k=0$ for both the up-up and down-down configurations at $q=1$~\cite{2008PhRvL.100o1101B,2008PhRvD..78b4017B}, see Sec.~\ref{exploiting}. For binaries with unequal masses and aligned spins (Fig.~\ref{spinaligned}, right panel), both the orbital hang-up and the mass-asymmetry effect are present: the binary's center of mass first oscillates in the orbital plane (because $q\neq 1$) and then receives a further push at $t\sim10M$ (because $\boldsymbol{\chi_i} \cdot \mathbf{\hat z}\neq 0$). \begin{figure*} \includegraphics[page=1,width=0.47\textwidth]{leftright} \hspace{0.04\textwidth} \includegraphics[page=2,width=0.47\textwidth]{leftright} \caption{Kick profile $\mathbf{v}(t)$ projected along $\mathbf{\hat x}$, $\mathbf{\hat y}$, $\mathbf{\hat z}$ and the direction of the final kick $\mathbf{\hat v_k}$ for binaries with $q=1$ (left) and $q=0.5$ (right), and spins of magnitude $\chi_1=\chi_2=0.8$ lying in the orbital plane. Four configurations are shown: right-right, left-left, right-left, left-right, where the term before (after) the hyphen refers to the spin of the heavier (lighter) BH being co-/counter-aligned with the initial separation vector $\mathbf{\hat x}$. The right-left and left-right orientations correspond to the superkick configurations.
Here we set $t_{\rm ref}=-125M$ to maximize kicks for the $q=1$ case (c.f.~Fig.~\ref{alphaseries}).} \label{leftright} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{alphaseries} \caption{Left panel: Recoil velocities for a series of right-left binaries with $q=1$ and $\chi_i=0.8$ initialized at various reference times $t_{\rm ref}$; the orange circle marks the reference time used in Fig.~\ref{leftright}. Right panel: Recoil velocities for BH binaries with $q=1$ and $\boldsymbol{\chi_1}=-\boldsymbol{\chi_2}=[0.8 \cos\alpha,0.8\sin\alpha,0]$ (such that $\alpha=0$ corresponds to the right-left configuration) at $t_{\rm ref}=-100M$. The angle $\alpha$ corresponds to a rotation of both spins about the orbital angular momentum, and is degenerate with the reference time at which spins are specified. Gray crosses mark the same configuration in both panels.} \label{alphaseries} \end{figure*} The largest kicks are achieved for BHs merging with misaligned spins~\cite{2007PhRvL..98w1102C, 2007PhRvL..98w1101G, 2007ApJ...659L...5C, 2007PhRvD..76f1502T, 2008PhRvD..77l4047B, 2011PhRvL.107w1102L}. Figure~\ref{leftright} shows kick profiles for four binary configurations with spins $\chi_i=0.8$ lying in the orbital plane: $\boldsymbol{\hat \chi_1}\cdot \mathbf{\hat x}=\boldsymbol{\hat \chi_2}\cdot \mathbf{\hat x}=1$ (right-right), $\boldsymbol{\hat \chi_1}\cdot \mathbf{\hat x}=\boldsymbol{\hat \chi_2}\cdot \mathbf{\hat x}=-1$ (left-left), $\boldsymbol{\hat \chi_1}\cdot \mathbf{\hat x}=-\boldsymbol{\hat \chi_2}\cdot \mathbf{\hat x}=1$ (right-left), $\boldsymbol{\hat \chi_1}\cdot \mathbf{\hat x}=-\boldsymbol{\hat \chi_2}\cdot \mathbf{\hat x}=-1$ (left-right), where $\mathbf{\hat x}$ is defined as the axis connecting the lighter to the heavier BH at $t_{\rm ref}$. For reasons clarified below, here we take $t_{\rm ref} = -125 M$. 
Kicks as large as $\sim 2820$ km/s are achieved for the right-left and left-right configurations, which correspond to the \emph{superkick} scenario discovered in Refs.~\cite{2007PhRvL..98w1102C,2007PhRvL..98w1101G}. During the inspiral, frame dragging from the two holes acts constructively and pushes the binary's center of mass up and down along the direction of the orbital angular momentum $\mathbf{\hat z}$. The final kick is imparted as the BHs merge and the last of these oscillations is abruptly interrupted. The phenomenology is rather similar to the case of aligned spins studied above, with the key difference that in this case linear momentum is emitted along the binary's orbital angular momentum, not orthogonal to it. It is worth noting that binaries with these large kicks present a remarkably simple accumulation profile: the acceleration $d\mathbf{P}/dt$ is well described by a Gaussian centered at $t\sim 10M$ with width $\sigma\sim 5M$ (cf.~\cite{2008PhRvD..77l4047B} and Sec.~\ref{statistics} below). Conversely, frame dragging from the two BHs adds destructively for the right-right and left-left binaries. This cancellation is perfect (within numerical errors, cf.~Sec.~\ref{exploiting}) if the two spin angular momenta have the same magnitude, $m_1^2 \chi_1 = m_2^2 \chi_2$ (Fig.~\ref{leftright}, left panel). For $q=0.5$ and $\chi_i=0.8$ (Fig.~\ref{leftright}, right panel), the dynamics is dominated by the spin of the heavier BH and the four configurations reach kick velocities between 650 and 1530 km/s. Interestingly, smaller mass ratios excite a sizable kick in the orbital plane of $\sim 300$ km/s, which exceeds the recoil imparted to nonspinning systems with the same $q$ by about a factor of $\sim 2$ (cf.~Fig.~\ref{nospinprofiles}).
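The simple shape of these large-kick profiles can be made concrete with a short sketch (illustrative numbers only; the amplitude is fixed by hand to the largest superkick quoted above): integrating a Gaussian acceleration centered at $t_0=10M$ with width $\sigma=5M$ yields a smooth step in the velocity accumulated along $\mathbf{\hat v_k}$.

```python
import math

# Illustrative sketch: the acceleration dP/dt of the largest superkicks is
# well described by a Gaussian centered at t0 ~ 10M with width sigma ~ 5M.
# Its time integral (an error function) gives a step-like velocity profile.
# The amplitude v_k is set by hand to the largest kick quoted in the text.
t0, sigma = 10.0, 5.0   # in units of M
v_k = 2820.0            # km/s (assumed normalization)

def v_of_t(t):
    """Velocity accumulated by a Gaussian acceleration burst (analytic)."""
    return 0.5 * v_k * (1.0 + math.erf((t - t0) / (sigma * math.sqrt(2.0))))

# Essentially no momentum has been radiated long before the burst, and the
# full kick is in place shortly after merger.
early, late = v_of_t(-100.0), v_of_t(100.0)
```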
The spacetime trajectory $\int\mathbf{v}(t) dt$ for one such binary is illustrated in the middle panel of Fig.~\ref{centerofmass}: the center of mass oscillates at early times and undergoes a complicated motion right before merger, after which the superkick effect becomes dominant. To the best of our knowledge, this mass-spin asymmetry mixing in the kick profile has not been reported elsewhere. \begin{figure} \includegraphics[width=0.95\columnwidth]{alphaprof} \caption{Velocity accumulation profile $\mathbf{v}(t)$ projected along the direction of the final kick $\mathbf{\hat v_k}$ for binaries with $q=1$ and antiparallel spins of magnitude $\chi_1=\chi_2=0.8$ lying in the orbital plane. The rotation angle $\alpha$ (defined as $\cos\alpha=\boldsymbol{\hat \chi_1}\cdot \mathbf{\hat x}= -\boldsymbol{\hat \chi_2}\cdot \mathbf{\hat x}$) controls the orbital phase at merger and thus sets the velocity of the center of mass when the final kick is imparted. Curves are colored according to $\alpha$ as it spans from $-\pi$ (black) to $\pi$ (orange).} \label{alphaprof} \end{figure} Superkick velocities critically depend on the orbital phase at merger, as it controls the abrupt interruption of the oscillatory behavior described above. In the left panel of Fig.~\ref{alphaseries} we study a series of right-left binaries ($q=1$, $\chi_1=\chi_2=0.8$, $\boldsymbol{\hat \chi_1}\cdot \mathbf{\hat x}=-\boldsymbol{\hat \chi_2}\cdot \mathbf{\hat x}=1$) specified at various reference times $t_{\rm ref}/M \in [-250, -100]$. The final kick velocity $v_k$ shows a clear sinusoidal dependence on $t_{\rm ref}$, as already found in, e.g., Refs.~\cite{2008PhRvD..77l4047B, 2012PhRvD..85h4015L, Zlochower:2015wga}. The peaks (e.g.~at $t_{\rm ref}\sim-125 M$) correspond to configurations for which the center-of-mass velocity happens to be at its maximum when the last oscillation is interrupted. The orbital phase at merger can also be controlled by an overall rotation of both spins about the orbital angular momentum.
The right panel of Fig.~\ref{alphaseries} shows $v_k$ for binaries with spins $\boldsymbol{\hat \chi_1} =-\boldsymbol{\hat \chi_2}= [\cos\alpha,\sin\alpha,0]$ specified at $t_{\rm ref}=-100M$ (a similar series of NR simulations was reported in Ref.~\cite{2008PhRvD..77l4047B}). The right-left (left-right) configuration corresponds to $\alpha=0$ ($\pi$). The two curves in Fig.~\ref{alphaseries} span the very same range, showing that the angle $\alpha$ and the reference time $t_{\rm ref}$ are indeed degenerate. In practice, this means that only binaries with a specific orbital phase at merger are subject to superkicks, thus making their occurrence very rare. Figure~\ref{alphaprof} shows the velocity accumulation profile for the same series of binaries with different values of $\alpha$: the BH merger abruptly stops the center-of-mass oscillation at different phases, thus setting the final kick velocities. As first noted in Refs.~\cite{2011PhRvL.107w1102L, 2013PhRvD..87h4027L}, binaries with partially aligned spins give rise to BH kicks even larger than those imparted to binaries in the superkick configuration. Equal-mass, maximally spinning BH binaries are predicted to reach $v_k\sim 5000$ km/s for spins misaligned by angles $\theta_i=\cos^{-1}( \boldsymbol{\hat \chi_i}\cdot \mathbf{\hat L})\sim 50^\circ$. These recoils were dubbed \emph{hang-up kicks}, and are due to a combination of the BH frame-dragging addition (responsible for superkicks) and the orbital hang-up effect (which enhances the energy radiated in GWs for aligned spins). To check that our model reproduces these hang-up kicks, we generate $10^5$ binaries with $q=1$, $\chi_1=\chi_2=0.8$, and isotropic spin orientations. The largest kick detected is $v_k\sim 3300$ km/s, and is obtained for $\theta_1\sim \theta_2 \sim 57^\circ$.
For the same values of $q$, $\chi_1$ and $\chi_2$, the hang-up kick fitting formula of Refs.~\cite{2011PhRvL.107w1102L,2013PhRvD..87h4027L} returns a largest kick of $\sim 3500$ km/s (a more careful comparison is postponed to Sec.~\ref{statistics}). The spacetime trajectory corresponding to one of these cases is shown in the right panel of Fig.~\ref{centerofmass}, confirming our earlier claims that large kicks present rather simple accumulation profiles. \begin{figure} \includegraphics[width=0.95\columnwidth]{lineofsight} \caption{Kick profiles for a right-left binary with $q=0.5$ and $\chi_1=\chi_2=0.8$ projected along various random directions $\mathbf{\hat n}$. Curves are colored from black to orange according to the final projected kick $\lim_{t\to \infty} \mathbf{v}(t)\cdot\mathbf{\hat n}$.} \label{lineofsight} \end{figure} Finally, Fig.~\ref{lineofsight} explores projection effects of the kick accumulation profile. For a single system with $q=0.5$ and $\chi_1=\chi_2=0.8$ in the right-left configuration, we show the projection of $\mathbf{v}(t)$ along various randomly chosen directions $\mathbf{\hat n}$. Although some features are robust, the kick profile appears rather different when viewed from different orientations. This behavior is important for modeling BHs recoiling into astrophysical environments with well-defined geometries, such as accretion disks~\cite{2010MNRAS.401.2021R, 2010MNRAS.404..947C}, and for implementing the effect of the BH kick in waveform models through the induced Doppler shift~\cite{2016PhRvL.117a1101G}. \subsection{Statistical exploration and comparison with fitting formulas} \label{statistics} After exploring the main features of the kick profile in controlled scenarios, we now turn our attention to statistical samples. We generate a sample of $10^6$ binaries with mass ratio uniform in $q\in[0.5,1]$ and spins uniformly distributed in volume with magnitude $\chi_i\leq0.8$.
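The population just described can be drawn with a few lines of Python (a sketch of the sampling scheme, not the script used in this paper; spins uniform in volume follow $p(\chi)\propto\chi^2$):

```python
import math
import random

rng = random.Random(42)  # fixed seed for reproducibility (assumed)

def sample_binary(chi_max=0.8):
    """Draw (q, chi1, chi2) as described in the text: q uniform in [0.5, 1],
    each spin uniform in volume within a ball of radius chi_max.
    A sketch of the sampling scheme, not the paper's actual script."""
    q = rng.uniform(0.5, 1.0)
    spins = []
    for _ in range(2):
        costheta = rng.uniform(-1.0, 1.0)          # isotropic direction
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sintheta = math.sqrt(1.0 - costheta**2)
        r = chi_max * rng.random() ** (1.0 / 3.0)  # p(r) ~ r^2: uniform in volume
        spins.append([r * sintheta * math.cos(phi),
                      r * sintheta * math.sin(phi),
                      r * costheta])
    return q, spins[0], spins[1]

samples = [sample_binary() for _ in range(1000)]
```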
Figure~\ref{explore} shows the distributions of total energy, linear momentum, and angular momentum radiated in GWs by this BH binary population. The energy and angular momentum distributions are roughly symmetric, with peaks at $E\sim0.045 M$ and $J\sim0.45 M^2$, respectively. The recoil distribution peaks at $v_k\sim0.001 c$, with a long tail extending up to $v_k\sim 0.01 c\sim 3000$ km/s. Figure~\ref{explore} also shows predictions for $v_k$ obtained with fitting formulas currently available in the literature. In particular, we use the expressions summarized in Ref.~\cite{2016PhRvD..93l4066G}, which are calibrated on various numerical simulations from Refs.~\cite{2007ApJ...659L...5C, 2007PhRvL..98i1101G, 2008PhRvD..77d4028L, 2012PhRvD..85h4015L, 2013PhRvD..87h4027L}. Although kick predictions for individual binaries might differ significantly, the two methods largely agree on the overall distribution. We note, however, that the fitting formula tends to overestimate the number of binaries receiving large recoils. In particular, the fractions of binaries with $v_k>2000$ km/s are $\sim 2.4$\% and $\sim 3.2$\% for the surrogate extraction and fitting formula, respectively. The largest kicks found in these distributions are $v_k\sim 3160$ km/s (surrogate) and $v_k\sim 3330$ km/s (fit). We speculate that this disagreement might be due to the calibration of the hang-up kick terms in the fitting formula, which was only performed with $q=1$ simulations (cf. Ref.~\cite{2015ASSP...40..185S} for a critical discussion on this point). Although some runs for unequal-mass binaries with largely misaligned spins have been presented~\cite{2007ApJ...659L...5C, 2008ApJ...682L..29B, 2009PhRvD..79f4018L, Zlochower:2015wga}, the effect of the mass ratio on the largest kick might not be fully captured by the expressions currently available.
Figure~\ref{explore} also compares the total radiated energy extracted from the surrogate model against the final-mass fitting formula of~\cite{2012ApJ...758...63B}, corrected according to Eq.~(\ref{masslimits}). Agreement is found at the $\sim 2\%$ level: the median for the surrogate (fit) estimate of $E/M$ is $\sim 0.047$ ($\sim 0.046$) with standard deviations of $\sim 0.008$ ($\sim 0.009$). The authors of Ref.~\cite{2017PhRvD..95f4024J} presented a careful analysis comparing different estimates of the energy radiated following BH mergers and reported similar, if not higher, differences between various approaches. \begin{figure*}[t] \includegraphics[width=0.847\textwidth]{explore} \caption{Distribution of radiated linear momentum $v_k$ (left panel), energy $E$ (top right panel) and angular momentum $J$ (bottom right panel) for a distribution of binaries with mass ratio uniformly distributed in $[0.5,1]$ and spin of magnitude $\chi_i<0.8$ uniformly distributed in volume. Our results (``Surrogate'') are compared to the model summarized in Ref.~\cite{2016PhRvD..93l4066G} based on Refs.~\cite{2007ApJ...659L...5C, 2007PhRvL..98i1101G, 2008PhRvD..77d4028L, 2012PhRvD..85h4015L, 2013PhRvD..87h4027L}~(``Fitting formula''): the two distributions largely agree, although differences are present for large values of $v_k$.} \label{explore} \end{figure*} \begin{figure}[t!] \includegraphics[width=\columnwidth]{normprofiles} \caption{Kick profiles $\mathbf{v}({t})$ for a sample of BH binaries with uniform mass ratio and isotropic spin directions projected along random directions $\mathbf{\hat n}$. Curves are normalized according to the final projected kick $\mathbf{v_k}\cdot \mathbf{\hat n}$ and are colored according to the total kick magnitude $v_k$. The dashed blue line corresponds to a Gaussian acceleration profile of width $\sigma=8M$ centered at $t=10M$, which well approximates the largest kick in our sample.
Smaller kicks require more complicated profiles to be modeled carefully. \vspace{0.3cm} } \label{normprofiles} \end{figure} In order to highlight the ``shape'' of the kick, Fig.~\ref{normprofiles} shows 200 velocity accumulation profiles $\mathbf{v}(t)$ from the same binary distribution projected along random directions $\mathbf{\hat n}$ and normalized to the value of the final kick $\mathbf{v_k}\cdot\mathbf{\hat n}$. Despite the remarkable complexity explored above, the kick accumulation profiles present very robust features. In particular, profiles are simpler for binaries receiving large recoils, for which the acceleration $d\mathbf{v}/dt\cdot \mathbf{\hat n}$ is well approximated by a single Gaussian with mean $t=10M$ and width $\sigma=8M$. Smaller kicks, on the other hand, present more complicated profiles which typically include an antikick~\cite{2010PhRvL.104v1101R}. These findings corroborate the approach of Ref.~\cite{2016PhRvL.117a1101G}, where $\mathbf{v}(t)\cdot\mathbf{\hat{n}}$ was modeled with a basis of damped oscillatory functions. We stress that the population explored here is far from being astrophysically relevant. Astrophysical processes (such as the Bardeen-Petterson effect in the case of disk accretion~\cite{1975ApJ...195L..65B} and tidal interactions for stellar-mass BH progenitors~\cite{1981A&A....99..126H}) deeply modify the BH spin orientations, thus affecting the expected kick distribution~\cite{2013MNRAS.429L..30L, 2013ApJ...774...43M, 2015MNRAS.451.3941G}. Moreover, PN effects in the long inspiral before merger have been shown to preferentially suppress or enhance recoils in specific regions of the parameter space~\cite{2010ApJ...715.1006K, 2015MNRAS.451.3941G}. \section{Accuracy} \label{accuracy} \subsection{Exploiting symmetries} \label{exploiting} Before presenting a detailed comparison with NR simulations, we first perform internal tests of our kick extraction procedure by leveraging the symmetries of the problem. 
For instance, equal-mass nonspinning systems are not expected to recoil ($v_k=0$). Our extraction procedure returns $v_k\sim 10^{-5}c$, which has to be regarded as a numerical error. Following Refs.~\cite{2008PhRvL.100o1101B,2008PhRvD..78b4017B}, we further exploit this argument using other symmetries of the system. In particular: \begin{enumerate} \item[(i)] $q=1$ and $\boldsymbol{\chi_1}=\boldsymbol{\chi_2}$ imply $v_k=0$. \item[(ii)] Aligned spins ($\boldsymbol{\chi_1}\parallel \mathbf{\hat L}$ and $\boldsymbol{\chi_2}\parallel \mathbf{\hat L}$) force the recoil to be confined to the orbital plane ($\mathbf{v_k}\cdot \mathbf{\hat L}=0$); this property is independent of $q$. \item[(iii)] For $q=1$ and spins with equal aligned components and opposite orbital-plane components ($\boldsymbol{\chi_1}\cdot \mathbf{\hat L}=\boldsymbol{\chi_2}\cdot \mathbf{\hat L}$ and $\boldsymbol{\chi_1}\times \mathbf{\hat L} = -\boldsymbol{\chi_2}\times \mathbf{\hat L}$) the kick is restricted to be orthogonal to the orbital plane ($\mathbf{v_k}\parallel \mathbf{\hat L}$). \end{enumerate} Some of the special cases encountered in Sec.~\ref{anatomy} belong to these classes. For instance, equal-mass nonspinning systems are a trivial example of all categories. The $q=1$ up-up, down-down, right-right and left-left cases shown in Figs.~\ref{spinaligned} and \ref{leftright} are an instance of (i) and are therefore expected to have $v_k=0$. All up-up, down-down, up-down and down-up configurations are an instance of (ii), while right-left and left-right binaries with $q=1$ are an instance of (iii). These symmetries are investigated in the three panels of Fig.~\ref{symmetry}, respectively. For the top panel, we generate binaries with $q=1$ and random spins $\boldsymbol{\chi_1}=\boldsymbol{\chi_2}$ uniform in volume with magnitude $<0.8$.
For the middle panel, we take $q$ to be uniformly distributed in $[0.5,1]$, generate $\boldsymbol{\chi_i}\cdot\mathbf{\hat z}$ uniformly in $[-0.8,0.8]$, and set all of the $x$ and $y$ components of the spins to zero. For the bottom panel, we fix $q=1$, generate $\boldsymbol{\chi_1}$ uniform in volume with magnitude $<0.8$, and set $[\chi_{2x},\chi_{2y},\chi_{2z}] = [-\chi_{1x},-\chi_{1y},\chi_{1z}]$. The values of $v_k$, $|\mathbf{v_k}\cdot \mathbf{\hat z}|$ and $|\mathbf{v_k}\times \mathbf{\hat z}|$ shown in Fig.~\ref{symmetry} are expected to be zero under symmetries (i), (ii) and (iii), respectively. We see that symmetry (i) exhibits the largest violations. The absolute largest deviations are $\sim 6 \times10^{-4} c\sim 180 $ km/s, which is therefore a generous upper limit on our numerical errors. The median of the errors is as small as $\sim 1.1 \times10^{-4} c$, while the 90th percentile is $\sim 2.8 \times10^{-4} c$. Symmetries (ii) and (iii) are better preserved, with a precision which is roughly an order of magnitude higher. The error medians for both are $\sim 1.5 \times10^{-5} c$. \begin{figure} \includegraphics[width=0.49\textwidth]{symmetry} \caption{Test of the kick numerical extraction by exploiting some of the symmetries of the system. All quantities shown in these plots are expected to be zero; deviations are interpreted as numerical inaccuracies of our extraction procedure. Top panel, symmetry (i): equal-mass binaries with the same spin vectors are expected to have zero kicks. Middle panel, symmetry (ii): binaries with generic mass ratio and aligned spins are expected to have kicks in the orbital plane. Bottom panel, symmetry (iii): equal-mass binaries with opposite orbital-plane spin components and same aligned components are expected to have kicks directed along the binary's orbital angular momentum. Each panel contains a sample of $10^4$ binaries generated as described in the text.
Dashed (dotted) lines show medians (90th percentiles) of the distributions.} \label{symmetry} \end{figure} It is worth noting that the errors reported here are rather conservative, as they take into account inaccuracies accumulated throughout the entire extraction pipeline---from the NR simulations that were used to calibrate \mbox{NRSur7dq2}, to the surrogate waveform interpolations, and finally the numerical operations described in this paper. \subsection{Comparison with numerical relativity simulations: SpEC} \label{NRcomparison} \begin{figure*}[t!] \includegraphics[width=0.84\textwidth]{nr_comparison_histograms} \caption{Accuracy of the surrogate extraction of the kick velocity $v_k$ compared to NR simulations from SpEC. Filled histograms show distributions of $v_k$ extracted by both approaches, while the black dashed line shows residuals between the two methods. Solid thin lines explore some of the possible causes of the observed differences: the orange line shows a lower limit on the NR extraction accuracy, computed using the two highest resolutions available; the purple line shows residuals between NR kicks extracted with $l_{\rm max}=8$ (default) and $l_{\rm max}=4$ (corresponding to the highest modes available in NRSur7dq2); the green line shows residuals in the surrogate extraction when the same NR runs are reproduced setting either $t_{\rm ref}=-100M$ or $t_{\rm ref}=-4500M$.} \label{nrcomparison_hist} \end{figure*} \begin{figure}[t!] \includegraphics[width=0.99\columnwidth]{nr_comparison_scatter} \caption{Comparison between BH kicks extracted from NR SpEC simulations (horizontal) and the surrogate model \mbox{NRSur7dq2} (vertical). The NR runs used here are the same that entered the surrogate model calibration, which was not designed to model large kicks specifically. 50th and 90th percentiles are shown with dashed and dotted lines, respectively. 
Red crosses mark the four cases explored in Fig.~\ref{nrcomparison_profiles}.} \label{nrcomparison_scatter} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=0.85\textwidth]{nr_comparison_profiles} \caption{Linear momentum profiles $\mathbf{P}(t)$ projected along the direction of the final kick $\mathbf{\hat v_k}$ for four selected NR simulations from SpEC compared to predictions obtained with the surrogate model. These same four cases are marked with crosses in Fig.~\ref{nrcomparison_scatter}. While the vast majority of the kick morphologies are faithfully represented, some outliers are present. An example is provided in the bottom right panel, where profiles are in good agreement before merger but then diverge at $t\sim10M$.} \label{nrcomparison_profiles} \end{figure*} We now estimate the accuracy of our extraction procedure by directly comparing our results to numerical relativity simulations from the SpEC code~\cite{2000PhRvD..62h4032K}. In particular, we compare against the 744 simulations\footnote{NRSur7dq2 is trained on 886 waveforms obtained from 744 simulations --- 142 simulations have $q=1$ and $\boldsymbol{\chi_1} \ne \boldsymbol{\chi_2}$, so that a rotation enables one simulation to represent two sets of binary parameters and therefore two input waveforms~\cite{2017PhRvD..96b4058B}.} used to construct NRSur7dq2~\cite{2017PhRvD..96b4058B}. These simulations constitute the majority of the waveforms available in the SpEC catalog~\cite{2013PhRvL.111x1104M} in the relevant parameter range, and especially so for generic spin orientations. This is not an ideal comparison: each of these numerical simulations occupies a special point in the binary parameter space of the surrogate model.
However, it is worth noting that (i)~the surrogate waveforms do not reproduce the NR waveforms exactly, even at the parameter-space location of the simulations that entered the training process; and (ii)~NRSur7dq2 was designed to maximize the overlap between the interpolated and the NR strain $h$, not to accurately model BH kicks. The comparison to NR simulations will therefore be sensitive to errors from the surrogate's reproduction of the training set of gravitational waveforms, but insensitive to errors from the surrogate's interpolation between these waveforms. Recoils are extracted from SpEC waveforms using the expressions reported in Sec.~\ref{radiatedexpressions}, and normalized by the remnant mass computed from the BH horizon at the end of the SpEC simulation. We include modes up to $l_{\rm max}=8$ from the highest-resolution data. To compare with the surrogate kick, we must determine the correct binary parameters by first time shifting and rotating the NR waveforms consistently with NRSur7dq2 (per criteria given in Sec.~\ref{surrogatemodels}) and then measuring the BH spins at $t_{\rm ref}=-4500M$ as in~\cite{2017PhRvD..96b4058B}. Consequently the surrogate is evaluated with $t_{\rm ref} = -4500 M$. Filled histograms in Fig.~\ref{nrcomparison_hist} show the distributions of $v_k$ obtained for both the NR and surrogate extractions. Differences $\Delta v_k$ between the two (thick dashed line) are typically $\sim 10^{-4} c$; 90\% of the simulations are reproduced within $\Delta v_k = 5.5 \cdot 10^{-4} c$. In this histogram we also plot several sources of error to evaluate their importance. One of these is the difference between NR kicks extracted from different resolutions of each SpEC simulation ---a solid upper limit on the accuracy of the NR kick extraction. This also presents a tail up to $\sim 2\cdot 10^{-3} c$, similar to that of $\Delta v_k$. 
The selection of the reference time $t_{\rm ref}$ in the surrogate extraction is a marginally smaller effect, with a tail up to $\sim 10^{-3} c$. The contribution of higher-order modes $l>4$ to the NR kick is a subdominant effect and contributes only on the scale of $\sim10^{-5} c$. Finally, the error from evaluating the kick at a finite time $t=100M$, instead of taking the kick's $t\to\infty$ limit, is negligible: the NR kicks extracted at $t=100M$ and $135M$ (each simulation has a different final time in $[139M,165M]$) differ by only $\sim 10^{-8}c$. The surrogate-to-NR comparison is also presented as a scatter plot in Fig.~\ref{nrcomparison_scatter}, which shows how the surrogate kick extraction faithfully reproduces the vast majority of the simulations. A few outliers with $\Delta v_k\sim 2\cdot 10^{-3} c$ are present in the bottom-center panel of the figure (also in Fig.~\ref{nrcomparison_hist} as the tail of the $\Delta v_k$ distribution), for which our surrogate extraction underestimates the value of $v_k$. These are cases where the surrogate model fails to correctly reproduce some cycles in the waveform's higher harmonics around the time of merger, when the majority of the kick is being accumulated. We note that cases with large $\Delta v_k$ are preferentially located at the high-spin edge of the NRSur7dq2 parameter space: the three outliers mentioned above, and $\sim 2/3$ among the 5\% of cases with the largest $\Delta v_k$, have $\chi_1 = \chi_2 = 0.8$. This occurs because the error of the SpEC simulations, and consequently of the surrogate model waveforms, increases towards this maximum-spin boundary. Restricting to the 464 NR simulations (or $\sim 2/3$ of the sample) with zero or one spin of magnitude $\chi = 0.8$, we find that the surrogate reproduces 90\% of the kicks within $\Delta v_k$ of $3.8\cdot10^{-4}c \sim 113$ km/s.
The error is about twice as large for the 280 simulations (or $\sim 1/3$ of the sample) with $\chi_1 = \chi_2 = 0.8$, with 90\% of the kicks being within $7.7\cdot10^{-4}c \sim 232$ km/s. Finally, Fig.~\ref{nrcomparison_profiles} shows comparisons for the kick accumulation profiles $\mathbf{P}(t)\cdot \mathbf{\hat v_k}$ in four selected cases. We find that the surrogate model reproduces not only the kick magnitude $v_k$, but also the morphology of the time accumulation profile for the vast majority of the NR simulations. The lower left panel of Fig.~\ref{nrcomparison_profiles} shows one of the few outliers, which has $\Delta v_k\sim 3 \cdot 10^{-3} c$. The NR and surrogate profiles diverge around $t \sim 10M$, when the surrogate fails to capture the merger waveform. These two curves appear similar to the kick profiles of Fig.~\ref{alphaprof}, suggesting that the surrogate model fails to reconstruct the orbital phase at merger. Even if \mbox{NRSur7dq2} reproduces the strain $h$ well, its small errors might propagate to the phase of the center-of-mass oscillation, causing a relatively large error on the final kick velocity. \subsection{Comparison with numerical relativity simulations: LazEv} \begin{figure} \includegraphics[width=\columnwidth]{RIT_check} \caption{Distribution of BH kicks extracted from 132 NR LazEv simulations \cite{2013PhRvD..87h4027L,Zlochower:2015wga}, rescaled between the minimum and maximum kicks obtained from \mbox{NRSur7dq2} [cf. Eq.~(\ref{nuk})]. If $0\leq \nu_k \leq 1$, there exists a suitable choice of $t_{\rm ref}$ for which the surrogate model reproduces the NR value of the kick. On the other hand, the NR data cannot be reproduced if $\nu_k<0$ or $\nu_k>1$.} \label{RIT_check} \end{figure} Finally, we compare our results against NR simulations performed by the RIT group with the LazEv code \cite{2005PhRvD..72b4021Z}.
This additional comparison is noteworthy because not only were these simulations not used in the surrogate calibration, but they were performed with a completely different numerical scheme (for a detailed comparison between SpEC and LazEv see Ref.~\cite{2016CQGra..33x4002L}). We compare against several series of simulations performed by Lousto and Zlochower that vary over the relative azimuthal projection of the spin (i.e.~the angle $\alpha$ defined in Sec.~\ref{anatomy}) \cite{2013PhRvD..87h4027L,Zlochower:2015wga}. Of the 223 NR simulations described in these references, 132 of them lie within the parameter range covered by NRSur7dq2.\footnote{Some of the simulations have parameters which exceed the range of validity of the surrogate model only very marginally ($q\simeq 0.498$ and/or $\chi_i\simeq 0.802$). We do not filter those runs out, but rather use NRSur7dq2 in extrapolation mode.} We extract horizon masses, spins, and final kicks from the relevant tables in Refs.~\cite{2013PhRvD..87h4027L,Zlochower:2015wga}; then, we use the mass ratios and spins as inputs to NRSur7dq2. Case-by-case comparisons between the RIT simulations and the surrogate model are not possible because differences in gauges preclude us from converting their initial separations to our $t_{\mathrm{ref}}$'s. We can, however, check for each case whether there exists a choice of $t_{\mathrm{ref}}$ for which the surrogate reproduces the reported value of the kick. To this end, we rescale each of the RIT kick values $v_k^{(\rm NR)}$ with an affine transformation determined by the minimum and maximum surrogate kicks $v_k^{(\rm surr)}$ as $t_{\mathrm{ref}}$ is varied over the range $t_{\rm ref}/M\in [-4500,-100]$, while holding all other parameters fixed: % \begin{equation} \nu_k = \frac{ v_k^{(\rm NR)} - \min_{t_{\rm ref}}\, v_k^{(\rm surr)}}{ \max_{t_{\rm ref}}\, v_k^{(\rm surr)} - \min_{t_{\rm ref}}\,v_k^{(\rm surr)}} \,. 
\label{nuk} \end{equation} Therefore the kicks from Refs.~\cite{2013PhRvD..87h4027L,Zlochower:2015wga} that can be reproduced lie in the range $0\le\nu_{k}\le 1$. The resulting distribution of $\nu_k$ is shown in Fig.~\ref{RIT_check}. We find that $0\leq \nu_k\leq 1$ for $117/132\simeq 89\%$ of the simulations. The remaining simulations cannot be matched by our procedure; in particular, the surrogate underestimates the NR result in $15/132\simeq 11\%$ of the cases for which $\nu_k>1$ (no simulations are found with $\nu_{k}<0$). We stress, however, that these disagreements are very moderate, with $\nu_k<1.12$ over all the simulations we analyzed. The different comparisons presented in this section show that the surrogate kick extraction reaches precisions similar to those of the NR simulations that entered its calibration, well respects the symmetries of the problem, and matches kick results obtained with an independent NR code. We quote an overall average precision of $40$ km/s on the surrogate extraction of $v_k$. \addtolength{\tabcolsep}{+13pt} \begin{table*}[htp] \begin{center} \begin{tabular}{llll} \hline \hline Method & Description & Equation & Default inputs \\ \hline \hline \texttt{sur()} & Instance of the surrogate class from \mbox{NRSur7dq2}. &&\\ \texttt{q} & Binary mass ratio $q\in[0.5,1]$. && $q=1$.\\ \texttt{chi1} & Spin vector $\boldsymbol{\chi_1}$ of the heavier BH at $t_{\rm ref}$. && $\boldsymbol{\chi_1}=[0,0,0]$.\\ \texttt{chi2} & Spin vector $\boldsymbol{\chi_2}$ of the lighter BH at $t_{\rm ref}$. &&$\boldsymbol{\chi_2}=[0,0,0]$.\\ \texttt{t\_ref} & Reference time $t_{\rm ref}/M\in[-4500,-100]$. && $t_{\rm ref}/M=-100$.\\ \texttt{times} & Time nodes $t_i/M\in[-4500,100]$. 
&&\\ \texttt{lmax} & Largest available $l$ mode ($l_{\rm max}=4$ in \mbox{NRSur7dq2}).&&\\ \texttt{h(l,m)} & Modes of the complex GW strain $h^{lm}$.&Eq.~(\ref{hmodes})& \\ \texttt{hdot(l,m)} & Modes of the time derivative $\dot h^{lm}$. && \\ \texttt{dEdt} & Energy flux $dE/dt$. & Eq.~(\ref{energyflux})& \\ \texttt{Eoft} & Radiated energy profile $E(t)$. & & \\ \texttt{Erad} & Total radiated energy $\lim_{t\to\infty}E(t)$. & & \\ \texttt{Moft} & Mass profile $M(t)$. & Eq.~(\ref{Moft})& \\ \texttt{Mrad} & Mass of the remnant BH $\lim_{t\to\infty}M(t)$. & & \\ \texttt{Mfin} & Mass of the remnant BH in units of the mass at $t=-\infty$. &Eq.~(\ref{masslimits}) & \\ \texttt{dPdt} & Linear momentum flux $d\mathbf{P}/dt$. & Eqs.~(\ref{eq:dt_px}-\ref{eq:dt_pz})& \\ \texttt{Poft} & Radiated linear momentum profile $\mathbf{P}(t)$. & & \\ \texttt{Prad} & Total radiated linear momentum $\lim_{t\to\infty}|\mathbf{P}(t)|$. & & \\ \texttt{voft} & Recoil velocity profile $\mathbf{v}(t)$. & Eq.~(\ref{voftprofile})& \\ \texttt{kickcomp} & Kick velocity, vector $\mathbf{v_k}=\lim_{t\to\infty}\mathbf{v}(t)$. & Eq.~(\ref{vkicklimit})& \\ \texttt{kick} & Kick velocity, magnitude $v_k$. & & \\ \texttt{kickdir} & Kick velocity, unit vector $\mathbf{\hat v_k}= \mathbf{v_k}/v_k$. & & \\ \texttt{dJdt} & Angular momentum flux $d\mathbf{J}/dt$. & Eqs.~(\ref{eq:dt_jx}-\ref{eq:dt_jz})& \\ \texttt{Joft} & Radiated angular momentum profile $\mathbf{J}(t)$. & & \\ \texttt{Jrad} & Total radiated angular momentum $\lim_{t\to\infty}|\mathbf{J}(t)|$. & & \\ \texttt{xoft} & Center-of-mass trajectory $\mathbf{x}(t)=\int \mathbf{v}(t) dt$. & & \\ \hline \hline \end{tabular} \end{center} \caption{Main methods of the \texttt{surrkick} class. A class instance has to be initialized with e.g.~\texttt{sk=surrkick.surrkick(q=1,chi1=[0,0,0],chi2=[0,0,0],t\char`_ref=-100)}.
Methods can then be accessed with e.g.~\texttt{sk.voft}.} \label{codefunctions} \end{table*} \addtolength{\tabcolsep}{-13pt} \section{Code distribution and usage} \label{code} Our numerical code, \textsc{surrkick}, is publicly available as a module for the Python programming language. The latest stable release is kept updated on the Python Package Index (PyPI) and can be installed via \begin{verbatim} pip install surrkick \end{verbatim} The Python packages \texttt{numpy}~\cite{Walt}, \texttt{scipy}~\cite{Jones:2001aa}, \texttt{matplotlib}~\cite{2007CSE.....9...90H}, \texttt{h5py}~\cite{h5py}, \texttt{pathos}~\cite{mckerns-proc-scipy-2011}, \texttt{tqdm}~\cite{casper_da_costa_luis_2017_1012577}, \mbox{NRSur7dq2}~\cite{2017PhRvD..96b4058B} and \texttt{precession}~\cite{2016PhRvD..93l4066G} are specified as dependencies and will be installed automatically if missing. The \textsc{surrkick} module has to be imported with \begin{verbatim} import surrkick \end{verbatim} from within a Python environment. Information on all classes, methods, and functions of the code can be obtained from the code docstrings using Python's \texttt{help} function. \textsc{surrkick} is hosted under version control on GitHub at \href{https://github.com/dgerosa/surrkick}{github.com/dgerosa/surrkick}, where development versions are available. Further information and code outputs can be found at \href{https://davidegerosa.com/surrkick}{davidegerosa.com/surrkick}. \textsc{surrkick} is structured as an add-on to any waveform approximant. In particular, it will be straightforward to update it as new surrogate models become available. The code is currently compatible with Python 2; porting to Python 3 is foreseen. Results in this paper were obtained with version 1.1 of \textsc{surrkick}. All of the main functionalities of the code are provided as methods of a single class \texttt{surrkick.surrkick}.
An instance of the class is created by providing mass ratio $q$, spin vectors $\boldsymbol\chi_{i}$ and reference time $t_{\rm ref}/M$: \begin{verbatim} sk=surrkick.surrkick(q=1,chi1=[0,0,0], chi2=[0,0,0],t_ref=-100) \end{verbatim} A list of the relevant methods is provided in Table~\ref{codefunctions}. All quantities are returned in units of the binary's total mass (i.e.~$c=G=M=1$). Time profiles are evaluated at the time nodes \texttt{sk.times}. For instance, the following code snippet computes the final kick imparted to a right-left binary with $q=0.5$ and $\chi_1=\chi_2=0.8$, and plots the velocity profile $\mathbf{v}(t)$ projected along $\mathbf{\hat x}$, $\mathbf{\hat y}$, $\mathbf{\hat z}$ and $\mathbf{\hat v_k}$. \begin{samepage} \begin{verbatim} import surrkick import matplotlib.pyplot as plt sk=surrkick.surrkick(q=0.5,chi1=[0.8,0,0], chi2=[-0.8,0,0]) print "vk/c=", sk.kick plt.plot(sk.times,sk.voft[:,0],label="x") plt.plot(sk.times,sk.voft[:,1],label="y") plt.plot(sk.times,sk.voft[:,2],label="z") plt.plot(sk.times,surrkick.project(sk.voft, sk.kickdir),label="vk") plt.xlim(-100,100) plt.legend() plt.show() \end{verbatim} \end{samepage} The class \texttt{surrkick.plots} provides tools to reproduce all figures and results presented in this paper. The snippet above is implemented as \texttt{surrkick.plots.minimal()}. Performance of the code was evaluated on a single processor of an Intel Xeon CPU E5-2660 v3 @2.60GHz, averaging over $10^3$ binaries with generic parameters. Computation of $v_k$ takes $\sim 0.1$ s, where $\sim50$ ms are spent evaluating $h$ from \mbox{NRSur7dq2}~\cite{2017PhRvD..96b4058B} and $\sim 50$ ms are spent integrating the energy and linear momentum fluxes. These low execution times make our code well suited for large-scale computational studies.
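As a concrete illustration of the affine rescaling of Eq.~(\ref{nuk}) used in the RIT comparison, the map from an NR kick value to $\nu_k$ reduces to a one-line helper. This is a sketch with made-up sample values; the function name and numbers are ours, not part of the \textsc{surrkick} API:

```python
def nu_k(vk_nr, vk_surr_samples):
    """Affine rescaling of Eq. (nuk): nu_k lies in [0, 1] exactly when the NR
    kick falls between the min and max surrogate kicks over the t_ref range."""
    lo, hi = min(vk_surr_samples), max(vk_surr_samples)
    return (vk_nr - lo) / (hi - lo)

# Made-up numbers for illustration: surrogate kicks (km/s) sampled over
# t_ref in [-4500, -100], and two hypothetical NR kick values.
surr_kicks = [120.0, 310.0, 450.0, 280.0]
print(nu_k(300.0, surr_kicks))  # inside [0, 1]: some t_ref reproduces it
print(nu_k(500.0, surr_kicks))  # > 1: the surrogate underestimates the NR kick
```

In the actual comparison the samples would come from evaluating \texttt{surrkick.surrkick(...).kick} on a grid of \texttt{t\_ref} values while holding $q$, $\boldsymbol{\chi_1}$ and $\boldsymbol{\chi_2}$ fixed.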
\section{Conclusions} \label{conclusions} New waveform approximants able to model precessing BH binaries with higher harmonics have been recently developed for GW detection and parameter estimation. Here we have shown, for the first time, that these tools offer an interesting by-product: the quick and reliable estimation of energy and momenta radiated in GWs during BH inspirals and mergers. In particular, the dissipation of linear momentum is responsible for powerful BH recoils, which might even eject BHs from their host galaxies. We exploited the recent NR surrogate model \mbox{NRSur7dq2}~\cite{2017PhRvD..96b4058B} to explore the phenomenology of the recoil velocity profile $\mathbf{v}(t)$ imparted to generic binaries as they merge. Our findings are implemented in the numerical code \textsc{surrkick}, which is made available to the community as a module for the Python programming language. Our extraction procedure inherits both the strengths and the weaknesses of \mbox{NRSur7dq2}. The model can reproduce the GW strain with mismatches $\sim 10^{-3}$, orders of magnitude better than any other model currently available. This translates into an average accuracy $\Delta v_k/c \lesssim 10^{-4}$ on the recoil estimates. The model has only been calibrated on BH binaries with mass ratios $q\geq0.5$ and spin magnitudes $\chi_i\leq0.8$. Both \mbox{NRSur7dq2} and \textsc{surrkick} can in principle be used outside this range, but those extrapolations have not been tested thoroughly. \mbox{NRSur7dq2} provides evolutions over a time $\Delta t\sim 5000M$, corresponding to $\sim 20$ orbits before merger. While this is a severe limitation for waveform modeling (because low-mass systems spend many more cycles in the sensitivity windows of the detectors), it is irrelevant for kick estimation. Linear momentum emission is concentrated in a small time window ($2 \sigma \sim 20M$) around merger, which is well covered by \mbox{NRSur7dq2}.
The tools presented here provide an alternative way to estimate BH kicks which, unlike fitting formulas, does not require specific ans\"{a}tze. Moreover, they provide information on the full $\mathbf{v}(t)$ profile, not just the final recoil velocity $v_k$. With execution times of $\sim 0.1$ s, our approach allows for quick and reliable implementations of BH kicks in a variety of astrophysical studies, from galaxy evolution codes to population synthesis studies of compact binaries. Future developments include building new NR surrogate models specifically designed to accurately reproduce mass, spin, and recoil of the post-merger BH. \bigskip \acknowledgments We thank Jonathan Blackman, Chad Galley, Mark Scheel, Ulrich Sperhake, Saul Teukolsky, and Vijay Varma for fruitful discussions and technical help. D.G. is supported by NASA through Einstein Postdoctoral Fellowship Grant No.~PF6-170152 awarded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under Contract NAS8--03060. F.H. acknowledges the support of the Sherman Fairchild Foundation, and NSF grants PHY-1404569, PHY-1708212, and PHY-1708213 at Caltech. L.C.S. acknowledges the support of NSF grant PHY-1404569 and the Brinson Foundation. Computations were performed on resources provided by NSF CAREER Award PHY-1151197, and on the Wheeler cluster at Caltech, which is supported by the Sherman Fairchild Foundation and by Caltech. \bibliographystyle{apsrev4-1}
\section{Introduction} In this paper, we systematize and analyze some results obtained in {\it Subset Combinatorics of Groups} after the publication of the surveys [1], [2], [3], [4]. The main topics are: the descriptive and dynamical characterizations of subsets of a group with respect to their combinatorial size, Ramsey-product subsets in connection with a general concept of recurrence, new ideals in the Boolean algebra $\mathcal{P}_{G}$ of all subsets of $G$ and in the Stone-\v{C}ech compactification $\beta G$ of $G$, and the combinatorial derivation. In these investigations, the principal part is played by ultrafilters on a group $G$. On one hand, ultrafilters are used as a tool to obtain purely combinatorial results. On the other hand, {\it Subset Combinatorics of Groups} allows one to prove new facts about ultrafilters, in particular, about the Stone-\v{C}ech compactification $\beta G$ of $G$. In this connection, we recall some basic definitions concerning ultrafilters. A {\it filter} $\mathcal{F}$ on a set $X$ is a family of subsets of $X$ such that \vskip 5pt $\bullet$ $\emptyset \notin \mathcal{F}$, $X\in \mathcal{F}$; \vskip 5pt $\bullet$ $A, B \in \mathcal{F} \Longrightarrow A\bigcap B \in\mathcal{F}$; \vskip 5pt $\bullet$ $A \in \mathcal{F}$, $A\subseteq C \Longrightarrow C\in\mathcal{F}$. \vskip 7pt The family of all filters on $X$ is partially ordered by inclusion. A filter maximal in this ordering is called an {\it ultrafilter}. A filter $\mathcal{F}$ is an ultrafilter if and only if, whenever $X= A \bigcup B$, either $A \in \mathcal{F}$ or $B\in \mathcal{F}$. Now we endow $X$ with the discrete topology and identify the Stone-\v{C}ech compactification $\beta X$ with the set of all ultrafilters on $X$. An ultrafilter $\mathcal{F}$ is {\it principal} if there exists $x\in X$ such that $\mathcal{F}=\{A\subseteq X: x\in A\}$. Otherwise, $\bigcap\mathcal{F}=\emptyset$ and $\mathcal{F}$ is called {\it free}.
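On a finite set these notions trivialize in an instructive way: every ultrafilter is principal, and maximality is equivalent to the dichotomy $X=A\bigcup B \Rightarrow A\in\mathcal{F}$ or $B\in\mathcal{F}$. Both facts can be checked by brute force on a three-element set; the following sketch is purely illustrative and not part of the paper's formalism:

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

def is_filter(F):
    # the filter axioms: no empty set, contains X,
    # closed under intersections and supersets
    if frozenset() in F or X not in F:
        return False
    return (all(A & B in F for A in F for B in F) and
            all(C in F for A in F for C in subsets if A <= C))

families = chain.from_iterable(
    combinations(subsets, r) for r in range(1, len(subsets) + 1))
filters = [frozenset(F) for F in families if is_filter(frozenset(F))]
ultra = [F for F in filters if not any(F < G for G in filters)]  # maximal

for U in ultra:
    # every ultrafilter on a finite set is principal ...
    assert any(U == frozenset(S for S in subsets if x in S) for x in X)
    # ... and satisfies the dichotomy: A U B in U implies A in U or B in U
    assert all(A | B not in U or A in U or B in U
               for A in subsets for B in subsets)
print(len(ultra))  # 3: one ultrafilter per point of X
```

Free ultrafilters, by contrast, exist only on infinite sets and cannot be exhibited constructively.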
Thus, $X$ is identified with the set of all principal ultrafilters, and the set of all free ultrafilters on $X$ is denoted by $X^{\ast}$. To describe the topology on $\beta X$, for any $A \subseteq X$ we denote $\bar{A} = \{\mathcal{F}\in \beta X: A\in \mathcal{F}\}$. Then the set $\{ \bar{A}: A\subseteq X\}$ is a base for the topology of $\beta X$. The characteristic topological property of $\beta X$ is that every mapping $f: X\longrightarrow K$, where $K$ is a compact Hausdorff space, can be extended to a continuous mapping $f^{\beta}: \beta X\longrightarrow K$. Given a filter $\varphi$ on $X$, the set $\bar{\varphi} = \{p\in\beta X: \varphi\subseteq p\}$ is closed in $\beta X$, and for every non-empty closed subset $K$ of $\beta X$, there is a filter $\varphi$ on $X$ such that $\bar{\varphi}=K$. Now let $G$ be a discrete group. Using the characteristic property of $\beta G$, we can extend the group multiplication on $G$ to a semigroup multiplication on $\beta G$ in such a way that, for every $g\in G$, the mapping $\beta G\longrightarrow \beta G: p\longmapsto gp$ is continuous and, for every $q\in \beta G$, the mapping $\beta G \longrightarrow \beta G: p\longmapsto pq$ is continuous. To define the product $pq$ of ultrafilters $p$ and $q$, we take an arbitrary $P\in p$ and, for each $x\in P$, pick some $Q _{x}\in q$. Then $\bigcup_{x\in P} x Q_{x}$ is a member of $pq$, and each member of $pq$ contains a subset of this form. For the properties of the compact right topological semigroup $\beta G$ and a wealth of its combinatorial applications, see [5]. \section{Diversity of subsets and ultracompanions} Let $G$ be a group with the identity $e$; $\mathcal{F}_{G}$ denotes the family of all finite subsets of $G$.
We say that a subset $A$ of $G$ is \vskip 5pt \begin{itemize} \item{} {\it large} if $G=FA$ for some $F\in \mathcal{F}_G$;\vskip 5pt \item{} {\it small} if $L\setminus A$ is large for every large subset $L$; \vskip 5pt \item{} {\it extralarge} if $G\setminus A$ is small;\vskip 5pt \item{} {\it thin} if $gA\cap A$ is finite for each $g\in G\setminus \{e\}$;\vskip 5pt \item{} {\it thick} if, for every $F\in \mathcal{F}_G$, there exists $a\in A$ such that $Fa \subseteq A$;\vskip 5pt \item{} {\it prethick} if $FA$ is thick for some $F\in \mathcal{F}_G$; \vskip 5pt \item{} {\it $n$-thin}, $n\in {\mathbb N}$, if, for any distinct elements $g_0 , \dots , g_n \in G$, the set $g_0 A \cap \dots \cap g_n A$ is finite; \vskip 5pt \item{} {\it sparse} if, for every infinite subset $X$ of $G$, there exists a finite subset $F\subset X$ such that $\bigcap _{g\in F} gA$ is finite. \vskip 5pt \end{itemize} \vskip 10pt {\bf Remark 2.1.} In {\it Topological dynamics}, large subsets are known as syndetic, and a subset is small if and only if it fails to be piecewise syndetic. In \cite{b4}, the authors use the dynamical terminology. \vskip 6pt All of the above definitions can be unified using the following notion \cite{b6}. Given a subset $A$ of a group $G$ and an ultrafilter $p\in G^*$, we define the $p$-{\it companion} of $A$ by $$ \Delta _p (A)= A^* \cap Gp=\{gp: g\in G, \ A\in gp \}.
$$ \vskip 5pt Then, for every infinite group $G$, the following statements hold: \vskip 10pt \begin{itemize} \item{} $A$ is large if and only if $\Delta _p (A) \neq \emptyset $ for each $p\in G^*$;\vskip 5pt \item{} $A$ is small if and only if, for every $p\in G^*$ and every $F\in \mathcal{F}_G$, we have $\Delta _p (F A) \neq Gp $;\vskip 5pt \item{} $A$ is thick if and only if there exists $p\in G^{\ast}$ such that $\Delta _p (A) = Gp $; \vskip 5pt \item{} $A$ is thin if and only if $|\Delta _p (A)| \leq 1 $ for every $p\in G^*$;\vskip 5pt \item{} $A$ is $n$-thin if and only if $|\Delta _p (A)| \leq n $ for every $p\in G^*$; \vskip 5pt \item{} $A$ is sparse if and only if $\Delta _p (A) $ is finite for each $p\in G^*$.\vskip 5pt \end{itemize} \vskip 10pt Following \cite{b1}, we say that a subset $A$ of $G$ is {\it scattered} if, for every infinite subset $X$ of $A$, there is $p\in X^* $ such that $\Delta _p (X)$ is finite. Equivalently [7, Theorem 1], $A$ is scattered if each subset $\Delta _p (A)$ is discrete in $G ^* $. \vskip 10pt {\it Comments}. For the motivation of the above definitions see \cite{b1}; for a more delicate classification of subsets of a group and of $G$-spaces see \cite{b2}, \cite{b8}. \section{The descriptive look at the size of subsets of groups} Given a group $G$, we denote by ${\bf P}_{G}$ and ${\bf F}_{G}$ the Boolean algebra of all subsets of $G$ and its ideal of all finite subsets. We endow ${\bf P}_{G}$ with the topology arising from the identification (via characteristic functions) of ${\bf P}_{G}$ with $\{0,1\}^{G}$. For $K\in {\bf F}_{G}$ the sets $$ \{X\in {\bf P}_{G}: K\subseteq X\}, \ \ \{X\in {\bf P}_{G}: X\cap K=\emptyset\}$$ form a subbase of this topology. After this topologization, each family $\mathcal{F}$ of subsets of a group $G$ can be considered as a subspace of ${\bf P}_{G}$, so one can ask about the Borel complexity of $\mathcal{F}$, a question typical of {\it Descriptive Set Theory} (see \cite{b9}).
We ask these questions for the most intensively studied families in {\it Combinatorics of Groups.} For a group $G$, we denote by ${\bf L}_{G}$, ${\bf EL}_{G}$, ${\bf S}_{G}$, ${\bf T}_{G}$, ${\bf PT}_{G}$ the sets of all large, extralarge, small, thick and prethick subsets of $G$, respectively. \vskip 10pt {\bf Theorem 3.1.} {\it For a countable group $G$, we have: ${\bf L}_{G}$ is $F_{\sigma}$, ${\bf T}_{G}$ is $G_{\delta}$, ${\bf PT}_{G}$ is $G_{\delta\sigma}$, ${\bf S}_{G}$ and ${\bf EL}_{G}$ are} $F_{\sigma\delta}$. \vskip 10pt A subset $A$ of a group $G$ is called \begin{itemize} \item {\em $P$-small\/} if there exists an injective sequence $(g_{n})_{n\in\omega}$ in $G$ such that the subsets $\{g_{n} A : n\in\omega \}$ are pairwise disjoint; \item {\em weakly $P$-small\/} if, for any $n\in\omega$, there exist $g_{0},\ldots , g_{n}$ such that the subsets $g_{0}A,\ldots , g_{n}A$ are pairwise disjoint; \item {\em almost $P$-small\/} if there exists an injective sequence $(g_{n})_{n\in\omega} $ in $G$ such that $g_{n}A\cap g_{m}A$ is finite for all distinct $n,m$; \item {\em near $P$-small\/} if, for every $n\in\omega$, there exist $g_{0},\ldots , g_{n}$ such that $g_{i}A\cap g_{j}A$ is finite for all distinct $i,j\in \{0,\ldots,n\}$. \end{itemize} \vskip 10pt Every infinite group $G$ contains a weakly $P$-small subset which is not $P$-small; see \cite{b10}. Each almost $P$-small subset can be partitioned into two $P$-small subsets \cite{b8}. Every countable Abelian group contains a near $P$-small subset which is neither weakly nor almost $P$-small \cite{b11}. \vskip 10pt {\bf Theorem 3.2.} {\it For a countable group $G$, the sets of thin, weakly $P$-small and near $P$-small subsets of $G$ are $F_{\delta\sigma}$.} \vspace{5 mm} We recall that a topological space $X$ is {\it Polish} if $X$ is homeomorphic to a separable complete metric space.
A subset $A$ of a topological space $X$ is {\it analytic} if $A$ is a continuous image of some Polish space, and $A$ is {\it coanalytic} if $X\setminus A$ is analytic. Using the classical tree technique \cite{b9} adapted to groups in \cite{b12}, we get the following.\vspace{3 mm} \vskip 10pt {\bf Theorem 3.3.} {\it For a countable group $G$, the ideal of sparse subsets is coanalytic and the set of $P$-small subsets is analytic in ${\bf P}_{G}$.} \vspace{5 mm} Given a discrete group $G$, we identify the Stone-\v{C}ech compactification $\beta G$ with the set of all ultrafilters on $G$ and consider $\beta G$ as a right-topological semigroup (see Introduction). Each non-empty closed subspace $X$ of $\beta G$ is determined by some filter $\varphi$ on $G$: $$X=\bigcap\{\overline{\Phi} : \Phi\in\varphi\}, \ \ \overline{\Phi}=\{p\in\beta G: \Phi\in p \}. $$ On the other hand, each filter $\varphi$ on $G$ is a subspace of ${\bf P}_{G}$, so we can ask about the complexity of $X$, understood as the complexity of $\varphi$ in ${\bf P}_{G}$. The semigroup $\beta G$ has the minimal ideal $K_{G}$ which plays one of the key parts in combinatorial applications of $\beta G$. By \cite{b5}, Theorem 1.5, the closure $cl (K_{G})$ is determined by the filter of all extralarge subsets of $G$. If $G$ is countable, applying Theorem 3.1, we conclude that $cl (K_{G})$ has the Borel complexity $F_{\sigma\delta}$. An ultrafilter $p$ on $G$ is called {\it strongly prime} if $p\notin cl(G^{\ast} G^{\ast})$, where $G^{\ast}$ is the semigroup of all free ultrafilters on $G$. We put $X= cl(G^{\ast} G^{\ast})$ and choose the filter $\varphi_{X}$ which determines $X$. By \cite{b13}, $A\in \varphi_{X}$ if and only if $G\backslash A$ is sparse. If $G$ is countable, applying Theorem 3.3, we conclude that $\varphi_{X}$ is coanalytic in ${\bf P}_{G}$. Let $(g_{n})_{n\in\omega}$ be an injective sequence in $G$. The set $$\{g_{i_{1}} g_{i_{2}}\ldots g_{i_{n}}: 0 \leq i_{1}< i_{2}< \ldots < i_{n}<\omega \}$$ is called an {\it FP-set}.
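In the additive group $\mathbb{Z}$ this construction is easy to visualize: for the sequence $g_{n}=2^{n}$, the $FP$-set consists of all finite sums of distinct powers of $2$, that is, of all positive integers (binary expansion). A quick finite-window check, for illustration only:

```python
from itertools import combinations

# FP-set of (2^n) in (Z, +): all sums g_{i_1} + ... + g_{i_n}
# taken over strictly increasing index tuples i_1 < ... < i_n
gens = [2**n for n in range(8)]
fp_set = {sum(c) for r in range(1, len(gens) + 1)
          for c in combinations(gens, r)}

# binary expansions: the sums exhaust 1, ..., 2^8 - 1 exactly
assert fp_set == set(range(1, 2**8))
print(min(fp_set), max(fp_set))  # 1 255
```

Since $\mathbb{Z}$ is abelian, the increasing order of the indices plays no role here; in a non-commutative group it is essential.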
By the Hindman Theorem (Theorem 5.8 of \cite{b5}), for every finite partition of $G$, at least one cell of the partition contains an $FP$-set. We denote by ${\bf FP}_{G}$ the family of all subsets of $G$ containing some $FP$-set. A subset $A$ of $G$ belongs to ${\bf FP}_{G}$ if and only if $A$ belongs to some idempotent of $\beta G$. By analogy with Theorem 3.3, we can prove that ${\bf FP}_{G}$ is analytic in ${\bf P}_{G}$. \vskip 10pt {\it Comments.} This section reflects the results from \cite{b14}. \section{The dynamical look at the subsets of a group} Let $G$ be a group. A topological space $X$ is called a $G$-{\it space} if there is an action $X \times G \longrightarrow X: (x, g) \longmapsto xg$ such that, for each $g\in G$, the mapping $X \longrightarrow X: x \longmapsto xg$ is continuous. Given any $x\in X$ and $U \subseteq X$, we set $$[U]_{x} = \{g \in G: xg\in U\}$$ and denote $$O(x) = \{xg : g \in G\}, T (x) = cl O(x),$$ $W(x) = \{y \in T (x) : [U]_{x}$ is infinite for each neighbourhood $U$ of $y\}$. We recall also that $x \in X$ is a {\it recurrent point} if $x \in W(x)$. Now we identify $\mathcal{P}_G$ with the space $\{0, 1\}^{G}$, endow $\mathcal{P}_G$ with the product topology and consider $\mathcal{P}_G$ as a $G$-space with the action defined by $$A \longmapsto Ag, \ Ag = \{ ag : a \in A\}.$$ We say that a subset $A$ of $G$ is {\it recurrent} if $A$ is a recurrent point in $(\mathcal{P}_G, G)$. All groups in this section are supposed to be infinite.
\vskip 10pt {\bf Theorem 4.1.} {\it For a subset $A$ of a group $G$, the following statements hold \vspace{5 mm} (i) $A$ is finite if and only if $W(A) =\emptyset $; \vspace{5 mm} (ii) $A$ is thick if and only if $G \in W(A)$.} \vskip 10pt {\bf Theorem 4.2.} {\it For a subset $A$ of a group $G$, the following statements hold \vspace{5 mm} (i) $A$ is $n$-thin if and only if $|Y | \leq n$ for every $Y \in W(A)$; \vspace{5 mm} (ii) $A$ is sparse if and only if each subset $Y \in W(A)$ is finite; \vspace{5 mm} (iii) $A$ is scattered if and only if, for every subset $B \subseteq A$ there exists $Y \in \mathcal{F} _{G}$ in the closure of} $\{Bb^{-1} : b \in B \}.$ \vspace{5 mm} Let $(g_{n})_{n\in\omega}$ be an injective sequence in $G$. The set $$FP(g_{n})_{n\in\omega} = \{g_{i_{1}}g_{i_{2}} \ldots g_{i_{n}} : 0 \leq i_{1} < i_{2} < \ldots < i_{n} < \omega\}$$ is called an $FP$-set. Given a sequence $(b_{n})_{n\in\omega}$ in $G$, the set $$\{g_{i_{1}}g_{i_{2}} \ldots g_{i_{n}} b_{i_{n}} : 0 \leq i_{1} < i_{2} < \ldots < i_{n} < \omega\}$$ is called a {\it (right) piecewise shifted} $FP$-set [7]. \vskip 10pt {\bf Theorem 4.3.} {\it For a subset $A$ of a group $G$, the following statements hold \vspace{5 mm} (i) $A$ is not $n$-thin if and only if there exist $F \in [G]^{n+1}$ and an injective sequence $(x_{n})_{n<\omega}$ in $G$ such that $Fx_{n} \subseteq A$ for each $n \in \omega$; \vspace{5 mm} (ii) $A$ is not sparse if and only if there exist two injective sequences $(x_{n})_{n<\omega}$ and $(y_{n})_{n<\omega}$ such that $x_{n}y_{m} \in A$ for each $0 \leq n \leq m < \omega$; \vspace{5 mm} (iii) $A$ is not scattered if and only if $A$ contains a piecewise shifted $FP$-set; \vspace{5 mm} (iv) $A$ contains a recurrent subset if and only if there exist $x \in A$ and an $FP$-set $Y$ such that $xY \subseteq A$.} \vskip 10pt {\bf Corollary 4.1.
} {\it Every scattered subset of a group G has no recurrent points.} \vspace{5 mm} {\bf Remark 4.1.} By [4, Theorem 2], every scattered subset $A$ of an amenable group $G$ is absolute null, i.e. $\mu(A) = 0$ for every left invariant Banach measure $\mu$ on $G$. But this statement cannot be generalized to subsets with no recurrent points. By [17, Theorem 11.6], there is a subset $A$ of $\mathbb{Z}$ of positive Banach measure such that $(a + B) \setminus A \neq \emptyset $ for any $a\in\mathbb{Z}$ and any $FP$-set $B$. By Theorem 4.3(iv), $A$ has no recurrent subsets. \vskip 10pt {\bf Remark 4.2.} Let $G$ be an arbitrary infinite group. In \cite{b15}, we constructed two injective sequences $(x_{n})_{n\in\omega}$, $(y_{n})_{n\in\omega}$ in $G$ such that the set $\{x_{n}y_{m} : 0 \leq n \leq m < \omega\}$ is scattered. By Theorem 4.3(ii), this subset is not sparse. \vspace{5 mm} {\it Comments}. This section reflects the first part of \cite{b15}. \section{Ramsey-product subsets and recurrence} In this section, all groups under consideration are supposed to be infinite; a countable set means a countably infinite set. Let $G$ be a group and let $\overrightarrow{m}= (m_{1}, \ldots, m_{k}) \in\mathbb{Z}^{k}$ be a number vector of length $k\in \mathbb{N}$. We say that a subset $A$ of a group $G$ is a {\it Ramsey $\overrightarrow{m}$-product subset} if every infinite subset $X$ of $G$ contains pairwise distinct elements $x_{1},\ldots, x_{k} \in X$ such that $$x^{m_{1}} _{\sigma(1)} \ x^{m_{2}} _{\sigma(2)} \ldots x^{m_{k}} _{\sigma(k)} \in A$$ for every permutation $\sigma\in S_{k}$. \vskip 10pt {\bf Theorem 5.1.}
{\it For a group $G$ and a number vector $\overrightarrow{m}=(m_{1},\ldots , m_{k} ) \in\mathbb{Z}^{k}$, the following statements hold: \vskip 15pt $(i)$ a subset $A$ of $G$ is a Ramsey $\overrightarrow{m}$-product subset if and only if every infinite subset $X$ of $G$ contains a countable subset $Y$ such that $y_{1}^{m_{1}} \ldots y_{k}^{m_{k}}\in A$ for any distinct elements $y_{1}, \ldots , y_{k} \in Y$. \vskip 15pt $(ii)$ the family $ \varphi _{\overrightarrow{m}}$ of all Ramsey $\overrightarrow{m}$-product subsets of $G$ is a filter. }\vskip 15pt For $t\in\mathbb{Z}$ and $q\in G^{\ast}$, we denote by $q^{\wedge}t$ the ultrafilter with the base formed by the sets $\{x^{t}: x\in Q\}$, $Q\in q$. Warning: $q^{\wedge}t$ and $q^{t}$ are different things; $q^{\wedge}t=q^{t}$ only if $t\in\{-1,0,1\}$. We remind the reader that, for a filter $\varphi$ on $G$, $\overline{\varphi}= \{p\in\beta G:\varphi\subseteq p\}$. \vskip 10pt {\bf Theorem 5.2.} {\it For every group $G$ and any number vector $\overrightarrow{m}=(m_{1}, \ldots , m_{k}) \in\mathbb{Z}^{k}$, we have} $$\overline{\varphi}_{\overrightarrow{m}} \ = \ cl\{(q^{\wedge}m_{1}) \ \ldots \ (q^{\wedge}m_{k}): \ q\in \ G^{\ast}\}. $$ \vskip 5pt Now we consider some special cases of vectors $\vec{m}$. \vskip 10pt {\bf Proposition 5.1.} {\it For any totally bounded topological group G, any neighborhood $U$ of the identity $e$ of $G$ is a Ramsey $\vec{m}$-product subset for any vector $\vec{m} = (m_{1}, \ldots , m_{k})$ such that} $ m_{1} + \ldots + m_{k} = 0.$ \vskip 5pt We recall that a {\it quasi-topological group} is a group $G$ endowed with a topology such that, for any $a, b \in G$ and $\varepsilon \in \{-1, 1\}$, the mapping $G \longrightarrow G: x \longmapsto ax^{\varepsilon}b$ is continuous.
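For the vector $\vec{m}=(-1,1)$, Proposition 5.1 is essentially the pigeonhole principle: in the totally bounded group $\mathbb{R}/\mathbb{Z}$, any $n+1$ points contain two whose difference $-x+y$ lies in the arc $(-1/n,1/n)$ around $0$. A minimal numerical illustration (the helper name is ours):

```python
import random

def close_pair(points, n):
    """Pigeonhole on R/Z: among more than n points, two fall into the same
    arc [k/n, (k+1)/n), so their difference lies in (-1/n, 1/n) mod 1."""
    arcs = {}
    for p in points:
        k = int((p % 1.0) * n)  # index of the arc containing p
        if k in arcs:
            return arcs[k], p
        arcs[k] = p
    return None

random.seed(0)
n = 100
pts = [random.random() for _ in range(n + 1)]  # n + 1 points, n arcs
x, y = close_pair(pts, n)
d = (y - x) % 1.0
assert min(d, 1.0 - d) < 1.0 / n  # -x + y lands in the neighborhood U
```

For longer vectors with $m_{1}+\ldots+m_{k}=0$ the argument is similar but needs more than one application of the pigeonhole principle.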
\vskip 10pt {\bf Proposition 5.2.} {\it The closure $\bar{A}$ of any Ramsey $(-1, 1)$-product set $A$ in a quasi-topological group $G$ is a neighborhood of the identity.} \vskip 10pt {\bf Proposition 5.3.} {\it Let $\vec{m} = (m_{1}, \ldots , m_{k})$ be a number vector and $s = m_{1} + \ldots + m_{k}$. For any Ramsey $\vec{m}$-product subset A of a group G, the set $\{x^{s} : x \in G\}$ is contained in the closure of $A$ in any non-discrete group topology on $G$.} \vskip 10pt {\bf Proposition 5.4.} {\it Let $G$ be the Boolean group of all finite subsets of $\mathbb{Z}$, endowed with the group operation of symmetric difference. The set $$A = G\setminus \{ \{x, y\} : x, y \in \mathbb{Z}, 0 \neq x - y \in \{z^{3} : z \in \mathbb{Z}\}\}$$ has the following properties: \vskip 5pt (i) $A$ is a Ramsey $\vec{m}$-product subset for any vector $\vec{m}=(m_{1}, \ldots ,m_{k}) \in (2\mathbb{Z} + 1)^{k}$ of length $k \geq 2$; \vskip 5pt (ii) $A$ does not contain the difference $BB ^{-1}$ of any large subset $B$ of $G$; \vskip 5pt (iii) $A$ is not a neighborhood of zero in any totally bounded group topology on} $G$. \vskip 10pt Now we show how Ramsey $(-1, 1)$-product sets arise in a general concept of recurrence on $G$-spaces. Let $G$ be a group with the identity $e$ and let $X$ be a $G$-space with the action $G\times X\longrightarrow X$, $(g,x)\longmapsto gx$. If $X=G$ and $gx$ is the product of $g$ and $x$, then $X$ is called a {\it left regular $G$-space}. Given a $G$-space $X$, a family $\mathfrak{F}$ of subsets of $X$ and $A\in \mathfrak{F}$, we denote \begin{eqnarray} \nonumber \Delta_{\mathfrak{F}}(A)=\{g\in G: gB\subseteq A \text{ for \ some } B\in\mathfrak{F}, B\subseteq A\}.
\end{eqnarray} Clearly, $e\in \Delta_{\mathfrak{F}}(A)$. If $\mathfrak{F}$ is upward directed $(A\in \mathfrak{F}$, $A\subseteq C$ imply $C\in \mathfrak{F})$ and $G$-invariant $(A\in \mathfrak{F}$, $g\in G$ imply $gA\in \mathfrak{F})$, then \begin{eqnarray} \nonumber \Delta_{\mathfrak{F}}(A)=\{g\in G: gA\cap A \in \mathfrak{F}\}, \ \ \Delta_{\mathfrak{F}}(A)= (\Delta_{\mathfrak{F}}(A))^{-1}. \end{eqnarray} If $X$ is a left regular $G$-space and $\emptyset\notin \mathfrak{F}$, then $\Delta_{\mathfrak{F}}(A)\subseteq A A^{-1}.$ For a $G$-space $X$ and a family $\mathfrak{F}$ of subsets of $X$, we say that a subset $R$ of $G$ is $\mathfrak{F}$-{\it recurrent} if $\Delta_{\mathfrak{F}}(A)\cap R \neq \emptyset$ for every $A\in \mathfrak{F}$. We denote by $\mathfrak{R}_{\mathfrak{F}}$ the filter on $G$ with the base consisting of the sets $\bigcap\{\Delta_{\mathfrak{F}}(A): A\in \mathfrak{F}^{\prime}\},$ where $\mathfrak{F}^{\prime}$ runs over the finite subfamilies of $\mathfrak{F}$, and note that, for an ultrafilter $p$ on $G$, $\mathfrak{R}_{\mathfrak{F}}\subseteq p$ if and only if each member of $p$ is $\mathfrak{F}$-recurrent. The notion of an $\mathfrak{F}$-recurrent subset is well known in the case where $G$ is an amenable group, $X$ is a left regular $G$-space and $\mathfrak{F}=\{ A\subseteq X: \mu(A)>0$ for some left invariant Banach measure $\mu$ on $X\}$. See \cite{b16}, \cite{b17}, \cite{b18} for the historical background. We recall \cite{b19} that a filter $\varphi$ on a group $G$ is {\it left topological} if $\varphi$ is a base at the identity $e$ for some (uniquely defined) left translation invariant topology on $G$, i.e. one in which each left shift $x\longmapsto gx$ is continuous. If $\varphi$ is left topological, then $\overline{\varphi}$ is a subsemigroup of $\beta G$ \cite{b19}. If $G=X$ and a filter $\varphi$ is left topological, then $\varphi= \mathfrak{R}_{\varphi}$.
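In the simplest case of the left regular $\mathbb{Z}$-space with $\mathfrak{F}$ the (upward directed, invariant) family of all nonempty subsets, $\Delta_{\mathfrak{F}}(A)=\{g: (g+A)\cap A\neq\emptyset\}$ is exactly the difference set $A-A$, so the inclusion $\Delta_{\mathfrak{F}}(A)\subseteq AA^{-1}$ becomes an equality. A finite check, for illustration only:

```python
# Left regular Z-space, F = all nonempty subsets of Z:
# Delta_F(A) = {g : (g + A) intersects A} = A - A  (written AA^{-1} above)
A = {0, 1, 4, 9}

delta = {g for g in range(-20, 21) if {g + a for a in A} & A}
difference_set = {a - b for a in A for b in A}

assert delta == difference_set
print(sorted(delta))
# [-9, -8, -5, -4, -3, -1, 0, 1, 3, 4, 5, 8, 9]
```

The window $[-20,20]$ suffices here because $(g+A)\cap A=\emptyset$ whenever $|g|$ exceeds the diameter of $A$.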
\vskip 10pt {\bf Proposition 5.5.} {\it For every $G$-space $X$ and any family $\mathfrak{F}$ of subsets of $X$, the filter $\mathfrak{R}_{\mathfrak{F}}$ is left topological. } \vskip 5pt Let $X$ be a $G$-space and let $\mathfrak{F}$ be a family of subsets of $X$. We say that a family $\mathfrak{F}^{\prime}$ of subsets of $X$ is $\mathfrak{F}$-{\it disjoint} if $A\cap B\notin \mathfrak{F}$ for any distinct $A,B \in \mathfrak{F}^{\prime}$. A family $\mathfrak{F}^{\prime}$ of subsets of $X$ is called $\mathfrak{F}$-{\it packing large} if, for each $A\in\mathfrak{F}^{\prime}$, any $\mathfrak{F}$-disjoint family of subsets of $X$ of the form $gA$, $g\in G$, is finite. \vskip 10pt {\bf Proposition 5.6.} {\it Let $X$ be a $G$-space and let $\mathfrak{F}$ be a $G$-invariant upward directed family of subsets of $X$. Then $\mathfrak{F}$ is $\mathfrak{F}$-packing large if and only if, for each $A\in \mathfrak{F}$, the set $ \ \triangle_{\mathfrak{F}}(A) \ $ is a Ramsey $(-1,1)$-product set}. \vskip 5pt Applying Theorem 5.2, we conclude that the closure of $\triangle_{\mathfrak{F}}(A)$ in $\beta G$ contains all ultrafilters of the form $q^{-1}q$, $q\in G^{\ast}$; in the case where $X=G$, $G$ is amenable and $\mathfrak{F}$ is the family of all subsets of positive Banach measure, we get Theorem 3.14 from \cite{b18}. \vskip 5pt {\it Comments}. The proofs of all of the above statements can be found in \cite{b20}, \cite{b21}. \section{ Ideals in $\mathcal{P} _G$ and $\beta G$ } We recall that a family $ \mathcal{I}$ of subsets of a group $G$ is an {\it ideal} in the Boolean algebra $\mathcal{P} _G$ of all subsets of $G$ if $G \notin \mathcal{I}$ and $A\in \mathcal{I}$, $B\in \mathcal{I}$, $C\subseteq A$ imply $A\cup B\in \mathcal{I}$, $C\in \mathcal{I}$. A family $\varphi$ of subsets of $G$ is a filter if and only if the family $\{ G\setminus A: A \in \varphi \}$ is an ideal.
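The duality between filters and ideals just stated is easy to verify directly on a three-element set; the following sketch is purely illustrative and not part of the paper:

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

# principal filter of all supersets of {0} ...
filt = {S for S in subsets if 0 in S}
# ... and the dual family of complements: all subsets of {1, 2}
ideal = {X - S for S in filt}

assert X not in ideal                                     # the whole set is excluded
assert all(A | B in ideal for A in ideal for B in ideal)  # closed under unions
assert all(C in ideal for A in ideal for C in subsets if C <= A)  # and subsets
print(sorted(len(I) for I in ideal))  # [0, 1, 1, 2]
```

The three assertions are exactly the ideal axioms of the definition above, obtained by complementing the filter axioms.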
For an infinite group $G$, an ideal $\mathcal{I}$ in $ \mathcal{P}_G$ is called {\it left (right) translation invariant} if $gA\in \mathcal{I}$ ($Ag\in \mathcal{I}$) for all $g\in G$, $A\in \mathcal{I}$. If $\mathcal{I}$ is both left and right translation invariant then $\mathcal{I}$ is called {\it translation invariant}. Clearly, each left (right) translation invariant ideal of $G$ contains the ideal $\mathcal{F} _G$ of all finite subsets of $G$. An ideal $\mathcal{I}$ in $\mathcal{P} _G$ is called a {\it group ideal} if $\mathcal{F} _G\subseteq \mathcal{I}$ and if $A\in \mathcal{I}$, $B\in \mathcal{I}$ then $AB ^{-1}\in \mathcal{I}$. Now we endow $G$ with the discrete topology and use the standard extension of the multiplication on $G$ to the semigroup multiplication on $\beta G$, see the Introduction. It follows directly from the definition of the multiplication in $\beta G$ that $G^{*}$, $ \overline{ G^{*} G^{*}}$ are ideals in the semigroup $\beta G$, and $G^{*}$ is the unique maximal closed ideal in $\beta G$. By Theorem 4.44 from \cite{b5}, the closure $ \overline{ K(\beta G)}$ of the minimal ideal $K(\beta G)$ of $\beta G$ is an ideal, so $ \overline{ K(\beta G)}$ is the smallest closed ideal in $\beta G$. For the structure of $ \overline{ K(\beta G)}$ and some other ideals in $\beta G$ see \cite[Sections 4, 6]{b5}. \vskip 10pt For an ideal $\mathcal{I}$ in $\mathcal{P}_G$, we put $$\mathcal{I}^{\wedge} = \{p\in \beta G: G\setminus A \in p \text{ for each } A\in \mathcal{I} \}, $$ and use the following observations: \vskip 5pt \begin{itemize} \item{} $\mathcal{I}$ is left translation invariant if and only if $\mathcal{I}^{\wedge}$ is a left ideal of the semigroup $\beta G$; \vskip 5pt \item{} $\mathcal{I}$ is right translation invariant if and only if $(\mathcal{I}^{\wedge})G\subseteq \mathcal{I}^{\wedge}$. \end{itemize} \vskip 5pt We also use the mapping $^\vee$ inverse to $^\wedge$.
For a closed subset $K$ of $\beta G$, we take the unique filter $\varphi$ on $G$ such that $K=\overline{\varphi}$ and put $$K^{\vee} = \{G\setminus A : A\in \varphi\}.$$ In this section, all groups under consideration are supposed to be infinite. We denote by $Sm_G$, $Sc_G$, $Sp_G$ the families of all small, scattered and sparse subsets of a group $G$. These families are translation invariant ideals in $\mathcal{P}_G$ (see \cite[Proposition 1]{b6}), and for every group $G$, the following inclusions are strict \cite[Proposition 12]{b6}: $$ Sp_G \subset Sc_G \subset Sm_G . $$ We say that a subset $A$ of $G$ is {\it finitely thin} if $A$ is $n$-thin for some $n\in \ \mathbb{N}$. The family $ FT_G$ of all finitely thin subsets of $G$ is a translation invariant ideal in $\mathcal{P}_G$ which contains the ideal $<T_G>$ generated by the family of all thin subsets of $G$. By \cite[Theorem 1.2]{b22} and \cite[Theorem 3]{b23}, if $G$ is either countable or Abelian and $|G| < \aleph _\omega$ then $FT_G = <T_G>$. By \cite[Example 3]{b23}, there exists an Abelian group $G$ of cardinality $\aleph _\omega$ such that $<T_G>\subset FT_G $. \vskip 10pt {\bf Theorem 6.1.} {\it For every group $G$, we have $Sm_G ^\wedge = \overline{ K(\beta G)}$. }\vskip 10pt This is Theorem 4.40 from \cite{b5} in the form given in \cite[Theorem 12.5]{b24}. \vskip 10pt {\bf Theorem 6.2.} {\it For every group $G$, $Sp_G ^\wedge= \overline{ G ^*G ^*}$.} \vskip 10pt This is Theorem 10 from \cite{b13}. {\bf 6.1.
Between $\overline{ G ^*G ^*}$ and $G ^*$.} \vskip 10pt {\bf Theorem 6.3.} {\it For every group $G$, the following statements hold: \vskip 5pt $(i)$ if $\mathcal{I}$ is a left translation invariant ideal in $\mathcal{P}_G$ and $\mathcal{I} \neq \mathcal{F}_G$ then there exists a left translation invariant ideal $\mathcal{J}$ in $\mathcal{P}_G$ such that $\mathcal{F}_G \subset \mathcal{J} \subset \mathcal{I} $ and $\mathcal{J} \subset Sp_G $; \vskip 5pt $(ii)$ if $\mathcal{I}$ is a right translation invariant ideal in $\mathcal{P}_G$ and $\mathcal{I} \neq \mathcal{F}_G$ then there exists a right translation invariant ideal $\mathcal{J}$ in $\mathcal{P}_G$ such that $\mathcal{F}_G \subset \mathcal{J} \subset \mathcal{I} $; \vskip 5pt $(iii)$ if $G$ is either countable or Abelian and $\mathcal{I} $ is a translation invariant ideal in $\mathcal{P}_G$ such that $\mathcal{I} \neq \mathcal{F}_G$ then there exists a translation invariant ideal $\mathcal{J}$ in $\mathcal{P}_G$ such that $\mathcal{F}_G \subset \mathcal{J} \subset \mathcal{I} $ and $\mathcal{J} \subset Sp_G $.} \vskip 10pt {\bf Theorem 6.4.} {\it For every group $G$, the following statements hold: \vskip 5pt $(i)$ if $L$ is a closed left ideal in $\beta G$ such that $L\subset G^*$ then there exists a closed left ideal $L ^\prime$ of $\beta G$ such that $L\subset L^\prime \subset G^* $, $\overline{ G ^*G ^*}\subset L ^\prime$; \vskip 5pt $(ii)$ if $R$ is a closed subset of $G^*$ such that $R\neq G^*$ and $RG\subseteq R$ then there exists a closed subset $R ^\prime$ of $G^*$ such that $R\subset R^\prime \subset G^* $, $ R ^\prime G\subseteq R ^\prime $; \vskip 5pt $(iii)$ if $G$ is either countable or Abelian and $I$ is a closed ideal in $\beta G$ such that $I\subset G^*$ then there exists a closed ideal $I ^\prime$ in $\beta G$ such that $I\subset I^\prime \subset G^* $, $\overline{ G ^*G ^*}\subset I ^\prime$.} \vskip 15pt For a cardinal $\kappa$, $S_\kappa$ denotes the group of all permutations of $\kappa$.
\vskip 15pt {\bf Theorem 6.5.} {\it For every infinite cardinal $\kappa$, there exists a closed ideal $I$ in $\beta S_\kappa$ such that \vskip 10pt $(i)$ $S_\kappa ^* S_{\kappa}^*\subset I $; \vskip 10pt $(ii)$ if $M$ is a closed ideal in $\beta S_\kappa $ and $I\subseteq M\subseteq S_\kappa ^*$ then either $M=I$ or $M= S_\kappa ^*$.} \vskip 10pt {\bf Theorem 6.6.} {\it For every group $G$, we have $FT_G \subset Sp _G$, so $\overline{ G ^*G ^*}\subset FT_G ^{\wedge}$.} \vskip 10pt For subsets $X, Y$ of a group $G$, we say that the product $XY$ is an $n$-{\it stripe} if $|X|=n$, $n\in \mathbb{N}$ and $|Y|=\omega$. It is easy to see that a subset $A$ of $G$ is $n$-thin if and only if $A$ has no $(n+1)$-stripes. Thus, $p\in FT_G ^\wedge$ if and only if each member $P\in p$ has an $n$-stripe for every $n\in \mathbb{N}$. We say that $XY$ is an $(n,m)$-{\it rectangle} if $|X|=n$, $|Y|=m \ $, $ \ n,m\in \mathbb{N}$. We say that a subset $A$ of $G$ {\it has bounded rectangles} if there is $n\in \mathbb{N}$ such that $A$ has no $(n, n)$-rectangles (and hence no $(n, m)$-rectangles for each $m>n$). We denote by $BR_G$ the family of all subsets of $G$ with bounded rectangles. \vskip 15pt {\bf Theorem 6.7.} {\it For a group $G$, the following statements hold:\vskip 10pt $(i)$ $BR_G$ is a translation invariant ideal in $ \mathcal{P}_G$; \vskip 10pt $(ii)$ $BR_G^\wedge$ is a closed ideal in $\beta G$ and $p\in BR_G^\wedge$ if and only if each member $P\in p$ has an $(n,n)$-rectangle for every $n\in \mathbb{N} $; \vskip 10pt $(iii)$ $BR_G \subset FT_G$.\vskip 15pt } \vskip 10pt {\bf 6.2.
Between $\overline{ K(\beta G)}$ and $\overline{ G ^*G ^*}$} \vskip 7pt {\bf Theorem 6.8.} {\it For a group $G$, the following statements hold: \vskip 10pt $(i)$ $Sc_G^{\wedge} = cl \{\epsilon p: \epsilon\in G ^* , \ p\in \beta G, \ \epsilon\epsilon=\epsilon\}$; \vskip 10pt $(ii)$ $Sc_G^{\wedge}$ is an ideal in $\beta G$ and $p\in Sc_G^{\wedge}$ if and only if each member of $p$ contains a piecewise shifted $FP$-set; \vskip 10pt $(iii)$ $Sc_G^{\wedge}$ is the minimal closed ideal in $\beta G$ containing all idempotents of $G ^*$.\vskip 15pt} For a group $G$, we put $I _{G,0} = G^{\ast} $, $I _{G,n+1} = \overline{G^{\ast} I_{G,n}} $ and note that each $I _{G,n}$ is an ideal in $\beta G$. \vskip 15pt {\bf Theorem 6.9.} {\it For every group $G$ and $n\in \omega$, we have \vskip 5pt $(i) \ $ $I_{G,n+1} \subset I_{G,n }$; \vskip 10pt $(ii) \ $ $Sc_{G}^{\wedge} \subset I_{G,n }$.} \vskip 15pt For a natural number $n$, we denote by $(G ^*)^n$ the product of $n$ copies of $G ^*$. Clearly, $\overline{ (G ^*) ^{n+1}}\subseteq \overline{ (G ^*) ^{n}}$ and $ \overline{ (G ^*) ^{n}}\subseteq I_{G,n }$. \vskip 10pt {\bf Theorem 6.10.} {\it For every group $G$ and $n\in\omega$, we have \vskip 5pt $(i) \ $ $\overline{ (G ^*) ^{n+1}} \subset \overline{ (G ^*) ^{n}}$; \vskip 5pt $(ii) \ $ $Sc_{G}^{\wedge} \subset \overline{ (G ^*) ^{n}}$.} \vskip 10pt {\it Comments.} This section is an extract from \cite{b25}. \section{ The combinatorial derivation} Let $G$ be a group with the identity $e$.
For a subset $A$ of $G$, we denote $$\triangle (A)= \{ g\in G: |gA\cap A| = \infty \}, $$ observe that $(\triangle (A))^{-1}= \triangle (A)$, $\triangle (A)\subseteq AA^{-1}$, and say that the mapping $$\bigtriangleup : \mathcal{P}_{G}\longrightarrow \mathcal{P}_{G}, \ \ A\longmapsto \triangle (A)$$ is the {\it combinatorial derivation.} \vskip 10pt {\bf Theorem 7.1.} {\it For an infinite group $G$ and a subset $A$ of $G,$ the following statements hold: (1) $A$ is finite if and only if $\bigtriangleup(A)=\emptyset$; (2) $\bigtriangleup(A)=\{ e \}$ if and only if $A$ is infinite and thin; (3) if $A$ is thick then $\triangle (A)=G$; (4) if $A$ is prethick then $\triangle(A)$ is large.} \vskip 10pt {\bf Theorem 7.2.} {\it Every infinite group $G$ contains a subset $A$ such that $G=AA^{-1}$ and $\triangle(A)= \{ e\}$. } \vskip 10pt {\bf Theorem 7.3.} {\it Let $A$ be a subset of an infinite group $G$ such that $ A=A^{-1}$. Then there exist two thin subsets $X$, $Y$ of $G$ such that $\triangle(X\cup Y)=A$.} \vskip 10pt We also consider the multivalued mapping $\nabla$ inverse to $\triangle$, defined by $$\nabla(A)= \{B\subseteq G: \triangle (B)= A\}.$$ For a family $\mathcal{F}$ of subsets of a group $G$, we say that $\mathcal{F}$ is $\triangle$-complete ($\nabla$-complete) if $\triangle(A)\in \mathcal{F}$ $(\nabla(A)\subseteq \mathcal{F})$ for each $A\in \mathcal{F}$. \vskip 10pt {\bf Theorem 7.4.} {\it For every infinite group $G$, the following statements hold: \vskip 5pt (1) the families of all small and all sparse subsets of $G$ are $\nabla$-complete;\vskip 5pt (2) if an ideal $\mathcal{I}$ in $\mathcal{P}_{G}$ is $\triangle$-complete and $\nabla$-complete then $\mathcal{I}=\mathcal{P}_{G}$;\vskip 5pt (3) if $\mathcal{I}$ is a group ideal in $\mathcal{P}_{G}$, $\mathcal{I} \neq \mathcal{P}_{G}$, then $\mathcal{I}$ is $\triangle$-complete and $\mathcal{I}$ is contained in the ideal of all small subsets of} $G$.
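The defining condition $|gA\cap A|=\infty$ cannot be tested directly on a computer, but a finite-window analogue $\triangle_n(A)=\{g: |gA\cap A|\ge n\}$ (our notation, purely for illustration) already exhibits the symmetry $(\triangle(A))^{-1}=\triangle(A)$ and the membership of $e$ on the group $(\mathbb{Z},+)$:

```python
def delta_n(A, shifts, n):
    """Finite analogue of the combinatorial derivation on (Z, +):
    the shifts g with |(g + A) ∩ A| >= n."""
    A = set(A)
    return {g for g in shifts if len({g + a for a in A} & A) >= n}

# A "thick-like" set made of long blocks; small shifts keep many points
A = {x for b in range(0, 1000, 100) for x in range(b, b + 50)}
d = delta_n(A, range(-10, 11), 10)
# symmetry Delta(A) = -Delta(A) and e in Delta(A) hold in the finite model too
assert 0 in d and d == {-g for g in d}
```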
\vskip 10pt {\it Comments.} More information on the combinatorial derivation can be found in \cite{b26}, \cite{b27}, \cite{b28}. In particular, Theorem 6.2 from \cite{b26} shows that the trajectory $A\longrightarrow \triangle(A)\longrightarrow \triangle ^{2}(A)\longrightarrow \ldots $ of a subset $A$ of $G$ can be surprisingly complicated: stabilizing, increasing, decreasing, periodic or chaotic. \cite{b26} also contains some parallels between the combinatorial and topological derivations.
\section{Introduction} \label{intro} Electronic mail, or e-mail, is a method of electronic communication between two or more users over the Internet. Nowadays emails are used not just for communication but also for managing tasks and solving customer queries. Email classification or categorization is inspired by (supervised) text categorization in machine learning, and has been adopted in different variations such as categorising emails into pre-defined folders, blocking spam email, identifying the tone of a consumer from email content, etc.\\ The latest email applications and service providers such as Gmail and Outlook allow the user a simple method of filtering incoming emails based on the email subject or keywords in the body. This method is best suited to personal or home users, since one needs to create keyword-based rules to filter emails into different folders, and creating these rules manually in email software can be difficult if one wants to categorise every incoming email. X. Carreras and L. Marquez \cite{carreras2001boosting} noted that most users waste a large amount of time managing their emails, or simply prefer not to create keyword-based rules to filter their inbox.\\ Today, in the world of big data, the volume of email is growing fast. As per Radicati \cite{levenstein2013email}, in 2016 there were 2,672 million email users who exchanged about 215.3 billion emails per day, and The Radicati Group \cite{levenstein2013email} estimates that email volume will grow by 4.7\%. \\ Consider a large eCommerce website with an active customer base of about 200 million, and suppose at least 10\% of customers make a purchase every month. Giant eCommerce companies like Amazon and eBay generally have a common email address (cs@amazon.com) for all kinds of queries.
It means that such an eCommerce company may receive about 100000 emails every month (considering that 10\% of customers do write emails); therefore the company requires a big database to store all emails and a system which can automatically identify and classify each email into the correct department category, such as Refunds, Shipping or Quality Issues. \\ Consider a customer support manager who is responsible for assigning thousands of emails to the respective teams so that quick solutions and service can be provided to the customer, and imagine how much time would be spent if the company received millions of e-mails during a Boxing Day or Black Friday sale. Companies need a system which can automatically classify, or assign a label to, an email. To provide customer support over the email channel, a model which learns from previous data and categorises new emails with high accuracy is very much desired by companies.\\ \begin{figure}[!htb] \centering \includegraphics[width=0.50\textwidth]{anns.png} \caption{Basic Neural Network.} \label{fig:anns} \end{figure} This study carries out experiments to find out how artificial neural network algorithms can be utilised for email classification.\\ \section{Artificial Neural Networks} Artificial Neural Networks (ANNs) are loosely modelled on the human brain. The key element of a neural network is the perceptron, a general model of a neuron (Fig.~\ref{fig:anns}). A neural network consists of a set of neurons, and each neuron is directly connected to one or more other neurons. \\ Anderson and McNeill \cite{anderson1992artificial} noted that an artificial neural network consists of multiple inputs, represented by the symbol $x(n)$ in Fig.~\ref{fig:neural}, which are fed directly into the network. Information from the inputs is weighted by $w(n)$ (Fig.~\ref{fig:neural}) before being sent to the next layers, i.e.\ the hidden layers, depending on the number of hidden layers in the network.
The weighted inputs are summed and then fed through a transfer function to produce the final output, i.e., the classification of the data. \\ \begin{figure}[!htb] \centering \includegraphics[width=1\textwidth]{neural.png} \caption{Neural Network Architecture} \label{fig:neural} \end{figure} There are three types of learning for ANNs: supervised, unsupervised and reinforced. Supervised learning is the most commonly used for training a neural network on a given dataset. One can train the perceptron with supervised learning by calibrating the input weights. In supervised learning, the training dataset already has a predefined label or category for each input. Each training example is fed into the perceptron, which performs some computation and generates an output. The output is compared against the predefined class/label: if it matches, no input weight is adjusted; otherwise the input weights are slightly modified towards the expected result. The process is repeated a number of times so that the model can be trained to higher accuracy. As per \cite{shao2011comparison}, "The most appropriate point to stop training may be the point at which the reduction of Mean Square Error (MSE) becomes marginal." \\ Cohen \cite{cohen1996learning} developed a propositional rule-learning algorithm called RIPPER to categorise emails into folders based on "keyword-spotting rules", noting that such rules are easy to create, update and reuse. On the other hand, Sahami \cite{sahami1998bayesian} classified spam emails using a bag of words and the naive Bayesian algorithm. \section{Methodology} \subsection{About Dataset} To perform the email classification experiments on a neural network, personal Gmail inbox data was imported. In this dataset, each email has been assigned a pre-defined class/category using the \textit{\textbf{Gmail\textsuperscript{TM} label}} feature, and the dataset contains 608 emails.
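The supervised weight-update loop described above amounts to the classic perceptron learning rule; the following is a minimal sketch of that rule (ours, for illustration, not the model used in the experiments):

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}.
    Returns the learned weights and bias."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            # forward pass: weighted sum through a step transfer function
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # adjust weights only when the prediction misses the label
            err = y - out
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

# toy "keyword" features: [contains 'refund', contains 'shipping']
data = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)]
w, b = train_perceptron(data)
```

On this linearly separable toy set the loop converges within a few epochs, after which every training example is classified correctly.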
\begin{table}[!htb] \caption{Label Email Count Breakdown} \centering \begin{tabular}{l|c} \hline {\bf Label}&{\bf Count}\\ \hline bvp\_ & 102 \\ corprova2011 & 238 \\ deepak@gmail.com & 73 \\ Inbox & 47 \\ gupta@live.com & 21 \\ Imagic & 91 \\ Placement & 8 \\ Friends & 17 \\ Jobs & 11 \\ \hline \bf Total emails & \bf 608 \\ \hline \end{tabular} \label{emailcount} \end{table} \subsection{Model Setup} The experiments were carried out on a neural network built using Keras \cite{chollet2015keras}, as Keras provides a playground that facilitates easy and fast implementation of deep learning experiments in Python. Keras converts the emails into a numeric matrix by assigning each word a rank based on the number of times it appears, and converts the label of each email into a vector. A neural network can be trained on this data with an LSTM (Long Short-Term Memory) to correctly categorise new emails; however, accuracy improved when the best and worst features were extracted from each email. The Keras tokenizer class allows us to extract these features from the text. \begin{figure}[!htb] \centering \begin{verbatim} Label word breakdown: Bank:4 bvp_:7441 Sent:1561 Unread:6810 corprova2011:10932 deepak@gmail.com:3880 Inbox:4244 Starred:60 gupta@live.com:1093 Google:347 Imagic:3982 MyUnplugged:34 Placement:401 Friends:228 Jobs:1202 Total word count: 42219 \end{verbatim} \caption{Output: Word Count for each Label in dataset}% \label{code:wordcount}% \end{figure} \section{Experiments \& Results} The experimental data is from the first author's personal Gmail inbox. Initially, the dataset was cleaned and synthesised for consumption by the ANN, and labels without enough emails were removed, as such emails would not contribute much to training the model. The model was trained using 548 emails (90.13\%), with 60 emails (9.87\%) used for validating the model.
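What the tokenizer step does can be illustrated in plain Python; the following is a simplified stand-in for the Keras \texttt{Tokenizer} (the exact ranking convention and helper names are our assumptions):

```python
from collections import Counter

def fit_word_index(texts):
    """Rank words by frequency: the most frequent word gets index 1,
    mirroring how a word-count tokenizer builds its vocabulary."""
    counts = Counter(w for t in texts for w in t.lower().split())
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

def to_count_matrix(texts, index, num_words):
    """One row per text; column j counts occurrences of the word ranked j.
    Words ranked num_words or higher are dropped, as with a num_words cap."""
    rows = []
    for t in texts:
        row = [0] * num_words
        for w in t.lower().split():
            j = index.get(w)
            if j is not None and j < num_words:
                row[j] += 1
        rows.append(row)
    return rows

emails = ["refund my order", "order shipped", "refund refund please"]
idx = fit_word_index(emails)     # e.g. "refund" -> 1, "order" -> 2
X = to_count_matrix(emails, idx, num_words=5)
```

Each row of `X` is then the fixed-length numeric vector the network consumes in place of the raw email text.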
Two experiments were carried out to measure the performance of the model for different input parameters and numbers of words. \subsection{Experiment \#1} In this experiment the number of words was fixed while the number of hidden layers was changed, to test the performance of the neural network as the number of hidden layers increases. When $HN = 1$ (number of hidden layers), the accuracy achieved was 33\%, which is very poor; but as the number of hidden layers was increased, the accuracy of the algorithm rose to 85\% at $HN = 100$. \\ \begin{figure}[!htb] \centering \includegraphics[width=1\textwidth]{Main.png} \caption{Test Accuracy vs Number of Hidden Layers.} \label{fig:exp1} \end{figure} For $HN \geq 100$, accuracy did not improve much; the best accuracy achieved was 90\% at $HN = 1500$. In Fig.~\ref{fig:exp1} it is clear that the accuracy line is almost parallel to the x-axis from $HN = 100$ to $HN = 2000$. This is consistent with the observation that the classification accuracy of a neural network improves with an increase in the number of hidden layers \cite{surkan1990neural}. \subsection{Experiment \#2} \begin{table}[!htb] \caption{Accuracy vs Number of Words selection} \centering \begin{tabular}{c|c|c} \hline {\textbf{Number of Words}} &{\textbf{No. of Hidden Layers}} & {\textbf{Accuracy}} \\ \hline 5500 & 100 & 81.67\% \\ 12000 & 100 & 88.33\% \\ \hline \end{tabular} \label{table:numberofwords} \end{table} In this ANN (Artificial Neural Network) model, English helping verbs and conjunctions such as 'and', 'what', 'of', etc. have been excluded from the data stream, as such words add little to the accuracy of the classification model and may even skew the results. The second experiment was performed to see how the algorithm performs when the number of words fed into the ANN varies.
It can be noted from Table \ref{table:numberofwords} that the accuracy of the model improved by 6.67\% when the $num\_words$ (number of words) value was increased by 118\%. The test shows that when a larger number of words is fed into the neural network, the model's prediction accuracy increases. \\ \begin{figure}[!htb] \centering \includegraphics[width=0.8\textwidth]{scores_best.png} \caption{Best Words vs Score.} \label{fig:exp2} \end{figure} This model selects the best feature words based on the chi-square statistic using the scikit-learn library. Word feature selection allows the model to exclude less significant words from the dataset, improving both model correctness and data processing time. \section{Conclusions} During the experiments it was noted that more words per email lead to better accuracy while keeping the algorithm's processing time low. The dataset does not have enough emails labelled 'Friends' and 'Jobs'. From the confusion matrix \cite{csurka2004visual} in Fig.~\ref{fig:test1}, it is seen that the model is able to accurately classify all email categories except the labels 'Friends' and 'Jobs'. After conducting the two experiments, it is concluded that a large dataset is required to train a model that classifies emails into folders with high accuracy, and that a model trained with a larger word selection achieves higher accuracy than one trained with fewer words. \\ \begin{figure}[!htb] \centering \includegraphics[width=0.7\linewidth]{conf_normalized.png} \captionof{figure}{Confusion Matrix for Exp} \label{fig:test1} \end{figure} The dark patches along the diagonal in Fig.~\ref{fig:test1} mean that the true positives of each label are accurately classified by the model. \\ One can further improve this approach by customising it for particular use cases such as customer support, CEO email management, or enterprise-level users.
\section{Acknowledgements} The first author would like to thank Prof Michael O'Neill, Dr Mike Fenton and Dr David Fagan for their help and support. \bibliographystyle{abbrv}
\section{Introduction} Lyman-$\alpha$ (\ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi) is the torch that lights up the distant Universe. \cite{Partridge1967} recognized \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi to be the strongest tracer of recombining ionized hydrogen ({\text{H\MakeUppercase{\romannumeral 2}}}\xspace) in young, (star-)forming galaxies. However, the search for redshifted Ly$\alpha$ emission was fruitless until the late 1980s, when \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi was finally found in known radio galaxies (see e.g.~\citealt{Djorgovski1985,Hu1987} or the overview by~\citealt{Spinrad1989}). Today, \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi heralds the presence of the most distant sources known to humankind (e.g.~through absorption, \citealt{Oesch2016}, or emission, \citealt{Zitrin2015}), and detecting \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi has become one of the primary science goals of future instruments and telescopes that are being developed to understand the high-$z$ Universe (see e.g.~reviews by \citealt{Dijkstra2014} and \citealt{Hayes2015}). The question remains what one can learn from observations of \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi emission (and/or absorption). Thus far, observational efforts, as well as theoretical advances geared toward \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi radiation, have focused primarily on the modulation of {\it intensity}. However, \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi radiation (or radiation of any wavelength) possesses two more degrees of freedom\footnote{A third degree of freedom also exists for circularly polarized light: the time-dependence of the polarization angle, expressed through the Stokes $V$ parameter.}, which quantify its polarization properties. These are often represented through the Stokes $Q$ and $U$ parameters, and give the direction and degree of polarization.
In this work we explore what additional knowledge can be obtained from these observables. The potential power of \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi lies in its resonance nature. In contrast to \ifmmode{\mathrm{H}\alpha}\else H$\alpha$\xspace\fi, which escapes unobstructed from its production site following recombination, a \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi photon can undergo a tremendous number of scatterings after creation, where the precise number depends on the HI column density, geometry and kinematics \citep[][]{Adams1972,Dijkstra2014}. Each scattering event results in a slight change in position and frequency. This dual diffusion process \citep{Osterbrock1962} imprints signatures on the emergent observables, and potentially reveals properties of the scattering medium along the paths that offered least resistance to the photons \citep[see e.g.~][]{Dijkstra2016,Gronke2016}. These signatures can also act as keys to uncovering the emission mechanism. Centrally emitted \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi photons, e.g.~those created as nebular emission powered by Pop II stars \citep{Chapman2004}, Pop III stars \citep{Schaerer2002,Schaerer2003} or a nuclear black hole\footnote{Such spectrally hard sources would leave notably large \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi equivalent widths.} \citep{Geach2009}, must in most cases scatter significantly prior to escape. Spatially extended \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi emission can be produced by inflowing, cooling gas \citep{Haiman2000}, gas that has been shock heated by supernova explosions \citep{Mori2004} or by galactic superwinds \citep{Taniguchi2000}, or as fluorescent radiation from an external ionizing field \citep{Hogan1987,Cantalupo2005}. These photons do not need to escape from the dense interstellar medium (ISM), and therefore typically scatter less.
With \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi ubiquitously present in galaxies, surveys provide a wealth of observations open for interpretation \citep[e.g.~][]{Steidel2011,Wisotzki2016,Shibuya2017,Herenz2017}. Theoretical work exploring the modulation of Ly$\alpha$ observables by radiative transfer effects aims to convert these observables into constraints on the physical conditions of the gas in and around galaxies. Currently, two quantities provide the main observables. One is the \textit{spectrum}, which encodes information on the frequency diffusion process of the photons, which leads to broadening and shifting of the spectral line shape \citep{Neufeld1990,Dijkstra2006} by an amount which depends on the kinematics, geometry and dust content of the scattering medium \citep[e.g.][]{Ahn1998,Hansen2006,Verhamme2006,Dijkstra2008,Gronke2015}. These models have been successful at reproducing observations \citep{Verhamme2008,Hashimoto2015,Karman2016,Yang2017a}, though it is still unclear how physically realistic the models are (see e.g. \citealt{Gronke2016,Gronke2016a,Gronke2017}). One problem is that widely different models can provide similar spectra. For example, the large majority of emission sources have a \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi peak that is redshifted with respect to other lines in the system \citep{Kunth1998,Trainor2015}, something that can easily be explained by Ly$\alpha$ scattering through a galactic outflow \citep{Verhamme2006,Dijkstra2006}. However, it is known that the intergalactic medium (IGM) can also process away \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi mainly in the blue part of the intensity spectrum, which can leave an intrinsically symmetric emission line with a net redshift \citep{Dijkstra2007,Laursen2011}. \cite{Dijkstra2008} showed that these different models give rise to different levels of polarization.
This illustrates that polarization, when combined with spectroscopy, may tell apart models that otherwise are indistinguishable. The other quantity, the surface brightness profile, can reveal the spatial diffusion process that Ly$\alpha$ photons undergo before escaping, possibly far from the site of emission. The resulting \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi nebulae have been detected around many \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi emitting galaxies \citep{Hayes2013,Wisotzki2016}, with larger counterparts around many quasars \citep{Cantalupo2014,Lake2015,Hennawi2015,Cai2017}, but not all \citep{Herenz2015}. With integral field unit spectrometers (IFUs) such as MUSE \citep{Bacon2015}, or deep imaging surveys such as SILVERRUSH \citep{Ouchi2017}, the number of detailed \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi observations -- that is, spectra and sometimes surface-brightness information -- exceeds thousands. There are far fewer observations of polarized \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi \citep{Prescott2011,Hayes2011,Humphrey2013,Beck2016,You2017}. This is partially due to the observational difficulty associated with polarization measurements of distant sources: polarization-equipped instruments presently have small fields of view (FOV), and multiplexed observation of the Stokes parameters is generally hard. However, another reason is the lack of a theoretical foundation, which makes \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi polarization results difficult to interpret. This is something we wish to improve upon with this work. Presently, there are two ways of implementing polarized \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi transfer in numerical codes.
The {\it first} approach treats polarization solely in the macroscopic sense, and assumes that all photons are 100\% linearly polarized, each `carrying' a polarization vector in addition to its direction vector and frequency \citep{Angel1969,Rybicki1999,Dijkstra2008,Trebitsch2016}. The {\it second} approach is that of \citet{Lee1994} \citep[also used in][]{Lee1997a,Lee1997,Lee1998,Lee1999,AHN2015POLARIZATIONHYDROGEN,Chang2017}, who employ a quantum mechanically precise treatment of scattering and polarization using density matrices, allowing unpolarized photons to {\it develop} polarization through scatterings (and also allowing polarized photons to become {\it depolarized}). We employ this latter method, as it is quantum mechanically more accurate, and implement it in the Monte-Carlo radiative transfer code \texttt{tlac} \citep{Gronke2014}. The goal of this paper is to explore what additional information on the physical properties of the scattering medium is encoded within the polarization properties of \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi. More concretely, our goal is to go beyond the `standard' predictions for the intensity $I$, to focus on the two linear polarization parameters $Q$ and $U$, to see whether this extra information can break degeneracies between different models, and thus to gain a deeper physical understanding of the sources of \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi. This work is structured as follows: we describe the detailed numerical implementation of the density matrix formalism of Ly$\alpha$ polarization in \S~\ref{sec:MC_polarization}. This section is technical and can be skipped by readers who are mostly interested in the results, which we present in \S~\ref{sec:results}. We discuss our results in more detail and in a broader context in \S~\ref{sec:discussion}, before concluding in \S~\ref{sec:conclusion}.
\section{Lyman-$\alpha$ Monte-Carlo polarization} \label{sec:MC_polarization} Radiative transfer is the art of describing the complex and arduous journey light takes after being emitted. The equation of radiative transfer\footnote{For instance, given by Eq.~(1) in the review by \cite{Dijkstra2014} in its differential form.} illustrates this: a change in intensity at one frequency $\nu$ along a differential path length is affected by three factors: (1) attenuation, (2) emission and (3) redistribution in both space and frequency. The third factor is of paramount importance for Ly$\alpha$. It expresses any contributions to the intensity that did not originate at the same frequency or from the same direction. It is thus an integral over all frequencies and all solid angles embedded in a differential equation. Monte-Carlo methods are the preferred way of treating radiative processes where the photons do not alter the state of the medium they travel through, but are still sensitive to the redistributions caused by scattering through it \citep[see eg.~][]{Avery1968,Lee1997,Loeb1999,Ahn2000,Zheng2002,Dijkstra2008,Pierleoni2009,Laursen2010} because -- albeit slow -- they guarantee convergence even in complex density or velocity fields. Here, we describe the basics of polarization in \S~\ref{sec:MC_emission}-\S~\ref{sec:MC_detection}, and how we implement the density matrix formalism into the radiative transfer code \texttt{tlac} \citep{Gronke2014} in \S~\ref{sec:MC_summary}. 
\begin{figure}[tb] \begin{minipage}[c]{0.40\columnwidth} \centering \includegraphics[width=\columnwidth]{spinvector.pdf} \hspace{2em} \end{minipage} \begin{minipage}[c]{0.48\columnwidth} \centering \includegraphics[width=\columnwidth]{polarvector.pdf} \end{minipage} \caption{Conceptual sketches of two possible choices of bases in the plane perpendicular to the photon propagation direction, which is chosen to be $\hat{\boldsymbol{\varepsilon}}_3$: \textbf{(a)} complex, helical coordinates representative of the intrinsic spin of the photon, and \textbf{(b)} Cartesian coordinates, representative of the linear polarization that appears when the photon occupies both spin states with a fixed, non-varying phase between the helical spins. The oscillations of the complex polarization vector $\mathbf{P}$ in the plane are also drawn.} \label{fig:polarization_vector} \end{figure} \subsection{Emission} \label{sec:MC_emission} Ly$\alpha$ photons are emitted at, or near, the Ly$\alpha$ resonance frequency of $\nu_0 = 2.47 \times 10^{15}$ Hz for hydrogen. We parametrize their offset from the line center through $x\equiv \left( \nu - \nu_0 \right)/\Delta \nu_{\rm D}$, where the Doppler width is defined as $\Delta \nu_{\rm D} = v_{\rm th} \nu_0/c$ with the thermal velocity $v_{\rm th} = \sqrt{2 k_{\rm B} T_g / m_p}$, which depends on the gas temperature $T_g$ as well as Boltzmann's constant $k_{\rm B}$ and the proton mass $m_p$. The speed of light is $c$. We also parametrize the offset from the line center in terms of the thermal velocity; the relationship is $v = - \lambda_0 \Delta \nu_{\rm D} x$. We represent the spread in emission around the line center as the standard deviation $\sigma_i$ of a Gaussian, in units of km s$^{-1}$. The photons are massless and have four degrees of freedom through their two spin states. Measuring the spin is synonymous with measuring their polarization. 
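The frequency bookkeeping above can be sketched in a few lines of Python (a minimal illustration with SI constants; the function names are ours and not part of \texttt{tlac}):

```python
import math

NU_0 = 2.47e15       # Ly-alpha line-center frequency [Hz]
K_B  = 1.380649e-23  # Boltzmann constant [J/K]
M_P  = 1.6726e-27    # proton mass [kg]
C    = 2.998e8       # speed of light [m/s]

def doppler_width(T_gas):
    """Doppler width Delta nu_D = v_th nu_0 / c with v_th = sqrt(2 k_B T_g / m_p)."""
    v_th = math.sqrt(2.0 * K_B * T_gas / M_P)
    return v_th * NU_0 / C

def x_of_nu(nu, T_gas):
    """Dimensionless frequency offset x = (nu - nu_0) / Delta nu_D."""
    return (nu - NU_0) / doppler_width(T_gas)

def velocity_of_x(x, T_gas):
    """Equivalent velocity offset v = -lambda_0 Delta nu_D x, which equals -v_th x."""
    lam_0 = C / NU_0
    return -lam_0 * doppler_width(T_gas) * x
```

At $T_g = 10^4$ K this gives $v_{\rm th} \approx 12.8$ km s$^{-1}$, so one unit of $x$ corresponds to a velocity offset of roughly $-13$ km s$^{-1}$.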
Intrinsically, the photons possess helical spins, whereas observationally, it is advantageous to consider the linear representation instead, and include a possible phase relation. Following the approach of \cite{Lee1994}, we construct a complex state vector $\mathbf{P}$ with four degrees of freedom, represented through the complex coefficients $c_1$ and $c_2$, given in an orthogonal basis $\{\hat{\boldsymbol{\varepsilon}}_1, \hat{\boldsymbol{\varepsilon}}_2, \hat{\boldsymbol{\varepsilon}}_3 \}$ (where $\hat{\boldsymbol{\varepsilon}}_3$ denotes the propagation direction) as \begin{equation} \mathbf{P} = c_1 \hat{\boldsymbol{\varepsilon}}_1 + c_2 \hat{\boldsymbol{\varepsilon}}_2. \label{eq:polarization_vector} \end{equation} For a given $\mathbf{P}$, the values of the coefficients $c_1$ and $c_2$ depend on the choice of basis, which is determined by what we desire to observe: circular or linear polarization. For example, a helical basis is best suited to describe {\it circular polarization}, and we may use $c_1 \hat{\boldsymbol{\varepsilon}}_1 = c_l \hat{\boldsymbol{\varepsilon}}_l$ and $c_2 \hat{\boldsymbol{\varepsilon}}_2 = c_r \hat{\boldsymbol{\varepsilon}}_r$ to represent the left- and right-handed components of the spin, with probabilities $|c_l^2|$ and $|c_r^2|$ of finding the photon left- or right-handed polarized, respectively. {\it Linear polarization}, on the other hand, arises from the superposition of the helical spins; describing it in terms of a parallel and a perpendicular component, these can be written as $c_\parallel \hat{\boldsymbol{\varepsilon}}_\parallel = ( c_l \hat{\boldsymbol{\varepsilon}}_l + c_r \hat{\boldsymbol{\varepsilon}}_r)/\sqrt{2}$ and $c_\perp \hat{\boldsymbol{\varepsilon}}_\perp = i(c_l \hat{\boldsymbol{\varepsilon}}_l - c_r \hat{\boldsymbol{\varepsilon}}_r)/\sqrt{2} $, respectively (see Figure~\ref{fig:polarization_vector} for a conceptual sketch of these representations of the polarization). 
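The change between the helical and linear representations can be made concrete with a short sketch (assuming the convention $\hat{\boldsymbol{\varepsilon}}_{l,r} = (\hat{\boldsymbol{\varepsilon}}_\parallel \pm i\hat{\boldsymbol{\varepsilon}}_\perp)/\sqrt{2}$; the function name is ours):

```python
import numpy as np

def helical_to_linear(c_l, c_r):
    """Map helical (left/right) amplitudes to linear (parallel/perpendicular) ones:
    c_par = (c_l + c_r)/sqrt(2), c_perp = i (c_l - c_r)/sqrt(2)."""
    c_par  = (c_l + c_r) / np.sqrt(2.0)
    c_perp = 1j * (c_l - c_r) / np.sqrt(2.0)
    return c_par, c_perp

# Equal helical amplitudes with a fixed (zero) relative phase: the photon is
# fully linearly polarized along the parallel axis.
c_par, c_perp = helical_to_linear(1 / np.sqrt(2), 1 / np.sqrt(2))
```

The converse also holds: equal magnitudes with a relative phase of $\pi$ put all of the amplitude into the perpendicular component.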
We may ask: if the squared magnitudes of the coefficients are equally large, is a polarization signal observable? The answer lies in the phase delay between the components: if both coefficients have equal magnitude {\it and} no fixed phase delay exists, there is no polarization. If there is a fixed phase delay, however, we obtain a polarization signal. These additional constraints may be obtained from the cross-terms $c_1 c_2^*$ and $c_1^* c_2$. This discussion illustrates that the \textit{density matrix} of the photon, $\rho_{\rm phot}$, contains all information on its quantum state, \begin{equation} \rho_{\rm phot} = \mathbf{P} \mathbf{P}^\dagger = \left( { \begin{array}{cc} c_1 c_1^* & c_1 c_2^* \\ c_2 c_1^* & c_2 c_2^* \end{array} } \right), \label{eq:dens_matrix} \end{equation} where the off-diagonal elements give the time-dependent phase between the two states, and the diagonal elements give the probabilities of measuring the photon in either of the two states. \subsection{Scattering} \label{sec:MC_scattering} After emission, the photons may scatter off neutral hydrogen atoms\footnote{In this work we focus exclusively on scattering by HI-atoms. Scattering by dust and electrons can be included in future studies. However, although dust clearly plays an important role in the \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi radiative transfer process, its effect is mostly to destroy \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi photons.}. This interaction excites the atom from its ground state to an intermediate state, from which it immediately de-excites into its final state. Should the initial and final states be the same, the photon will neither gain nor lose any energy\footnote{This is not entirely true. The photons deposit and gain energy through atomic recoil \citep{Madau1997} and hyperfine excitation of the ground state \citep{Wouthuysen1952,Field1958}. However, for our use cases these effects can be ignored \citep{1971ApJ...168..575A}.} 
and the scattering is \textit{elastic}. We will not treat inelastic Raman scattering, but refer the interested reader to \cite{Lee1997} for an in-depth study of the polarization properties of Raman-scattered light. When a Ly$\alpha$ photon elastically scatters it experiences three types of redistributions: (i) \textit{change in propagation direction}, (ii) \textit{change of frequency}, and (iii) \textit{change of polarization}. We discuss each below. \begin{itemize} \item {\bf Change of propagation direction.} The change of direction is quantified by the {\it phase-function}, which we denote with $p(\theta', \phi' |\, \rho_{\rm phot}, \theta, \phi)$. Primed quantities denote scattered values. As we describe below, this phase-function depends on the frequency and polarization of a Ly$\alpha$ photon. It gives the probability of the photon attaining a given direction and state following a scattering, and can be related directly to the density matrix; \begin{equation} p(\theta', \phi' |\, \rho_{\rm phot}, \theta, \phi) = \frac{|c_1^{2'}| + |c_2^{2'}|}{\int |c_1^{2'}| + |c_2^{2'}| \ensuremath{\; \text{d}} \Omega}. \label{eq:phase_function_chandra} \end{equation} Each scattered density matrix component is obtained as a linear combination of the three incoming components, with trigonometric functions weighting each contribution. Expressions for the density matrix are complex (see e.g.~\citealt{Lee1994} or \citealt{Lee1994a} for prescriptions for obtaining them, or \citealt{AHN2015POLARIZATIONHYDROGEN} for the relevant expressions for \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi), and we refer the interested reader to these papers. Full expressions for $c_1^{2'}$ and $c_2^{2'}$ as a function of $(\theta', \phi',\rho_{\rm phot}, \theta,\phi)$ are given in Appendix~\ref{app:density_matrices}. \item {\bf Change of frequency.} The `type' of elastic scattering depends on the offset from the resonance frequency $\nu_0$, as seen from the scattering atom. 
We express the velocity $v_{\rm atom}$ of the scattering atom as a dimensionless velocity $\mathbf{u} = \mathbf{v}_{\rm atom}/v_{\rm th}$. The frequency shift of the Ly$\alpha$ photon in the rest frame of the atom, $x_e$, is then \begin{equation} x_e = x_i - \mathbf{u} \cdot \hat{\mathbf{k}} \label{eq:x_excitation_frequency} \end{equation} where $x_i$ is the initial frequency shift \citep{Laursen2010}. We can differentiate between \textit{resonance scattering} ($x_e \sim 0$) and \textit{wing scattering} ($|x_e| \gg 0$). This distinction is important: \cite{Stenflo1980} showed that, for resonant scattering, the polarization properties of scattered Ly$\alpha$ relate to the \textit{spin} properties of the atomic configuration of the H-atom. On the other hand, for wing scattering, the electron behaves as if it were free. The transition from core to wing occurs at a temperature-dependent frequency offset $x_{cw}\sim 3$ \citep[see e.g.][for an expression for $x_{cw}$]{Laursen2010}. We also use an acceleration scheme for Ly$\alpha$ Monte-Carlo radiative transfer as in \cite{Dijkstra2006}, but have explicitly verified that our results are not affected by this. \item {\bf Change of polarization.} The change of polarization properties is quantified by the change of the density matrix per scattering event. The newly obtained total\footnote{In the absence of circular polarization, $c_1 c_2^* = c_1^* c_2$ if we have chosen a linear basis. This cross term then only gives the $U$ polarization. 
Otherwise it also gives the $V$ polarization.} degree of polarization of a photon $P_{\rm phot}$ following a scattering is the fraction of the linearly ($Q$ and $U$) and circularly ($V$) polarized intensity to the total intensity ($I$), \begin{align} P_{\rm phot}&(\theta',\phi' | \rho_{\rm phot}, \theta, \phi) = \frac{\sqrt{Q^2 + U^2 + V^2}}{I} \label{eq:polarization_general} \\ &= \frac{\sqrt{\left(|c_1^{2'}| - |c_2^{2'}|\right)^2 + 4\,(c_1 c_2^{*})^{'}(c_1^{*} c_2)^{'}} }{|c_1^{2'}| + |c_2^{2'}|} \label{eq:polarization_function} \end{align} following \cite{AHN2015POLARIZATIONHYDROGEN}. \end{itemize} We now turn to discuss resonant and wing scattering in more detail, as the distinction between the two plays an important role in the above processes. \subsubsection{Resonance scattering} \label{sec:MC_scattering_resonance} For $|x_e| < x_{\rm cw}$, we will consider scatterings dominated by the transition from the ground energy state of hydrogen, denoted\footnote{We use the notation $nL_J$, $n$: energy level, $L=0,1,2,3,\dots$ denoted $S,P,D,F,\dots$ for the orbital angular momentum quantum number and $J=L+S$ where $S=\pm 1/2$ is the electron spin.} $1S_{1/2}$, to the excited $n=2$ state (comprising the two available orbital configurations $2S_{1/2}$ and $2P_J$, where the $2P_J$ level splits into $J=1/2$ and $J=3/2$), and back to the final $1S_{1/2}$ state. A similar upper-state structure is also found in other atoms, but with frequency separations larger than the $\Delta \nu = 1.1 \times 10^{10}$ Hz \citep{Brasken1998} of hydrogen. We will therefore adopt the terminology from those transitions: for Ca II, the transition $J=1/2 \to J=3/2 \to J=1/2$ is denoted K (or D$_2$ for Na I), while the transition $J=1/2 \to J=1/2 \to J=1/2$ is denoted H (or D$_1$ for Na I). 
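The atom-frame frequency shift of Eq.~(\ref{eq:x_excitation_frequency}) and the resulting core/wing classification can be sketched as follows (a simplified illustration; the fixed $x_{\rm cw} = 3$ stands in for the temperature-dependent expression of \citealt{Laursen2010}):

```python
import numpy as np

X_CW = 3.0  # approximate core-wing boundary; in general temperature dependent

def atom_frame_shift(x_i, u, k_hat):
    """Frequency shift in the atom's rest frame, x_e = x_i - u . k_hat, where u is
    the atom velocity in units of v_th and k_hat the incoming photon direction."""
    return x_i - np.dot(u, k_hat)

def is_core(x_e, x_cw=X_CW):
    """Resonance ('core') scattering if |x_e| < x_cw, otherwise wing scattering."""
    return abs(x_e) < x_cw

# A photon at x_i = 2 meeting an atom moving at 0.5 v_th along the photon direction
x_e = atom_frame_shift(2.0, np.array([0.5, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
# x_e = 1.5, i.e. a core scattering
```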
\textit{H scattering}: The wave function of the $2P_{1/2}$ state has no angular dependence, and when it de-excites, conservation of momentum may result in a photon traveling in any direction, with any perpendicular polarization vector. Transitions through this state yield a constant, angle-independent phase function, and zero polarization independent of any prior polarization, \begin{equation} p_{\rm H} (\theta', \phi' | \rho_{\rm phot}, \theta, \phi) = \text{const}, \label{eq:phase_K} \end{equation} with the subsequent density matrix being $|c_1^{2'}| = |c_2^{2'}| = 1/2$, and $c_1' c_2'^* = c_2' c_1'^* = 0$. \textit{K scattering}: The wave function of the $2P_{3/2}$ state, on the other hand, has a strong angular dependence. The phase function now depends on all the density matrix coefficients, and hence also on the incoming polarization. We present how the density matrix elements transform in Eqs.~(\ref{eq:K_11}--\ref{eq:K_22}), as given in Eq.~(11) in \cite{AHN2015POLARIZATIONHYDROGEN}\footnote{Or Eq.~(5) in \cite{Ahn2002}.}. These transformations are given for a left-handed photon basis, with one vector parallel to the plane of the scattering, and the other perpendicular to it. The elements of the scattered density matrix obtained here are linear combinations of the elements of the incoming matrix, where the weights are determined by the incoming and scattered angles, as well as their differences. \textit{Core scattering}: We will from now on refer to the resonant H- and K-transitions collectively as \textit{core scatterings.} The small frequency separation between the two makes it difficult to determine exactly the transition type. However, we use that the effective ratio between the cross sections is $2\lambda_{\rm H}/\lambda_{\rm K} \approx 2$ \citep{Stenflo1980}. In the resonance core, H-scattering is then twice as likely as K-scattering. 
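In a Monte-Carlo code this 2:1 split translates into a simple branch per core scattering (a sketch of the logic, mirroring the implementation summary in \S~\ref{sec:MC_summary}; the names are ours):

```python
import random

def core_scattering_type(rng=random):
    """Choose the core-scattering branch: H (isotropic, depolarizing) is twice as
    likely as K, following the effective 2:1 cross-section ratio."""
    return 'H' if rng.random() < 2.0 / 3.0 else 'K'

def depolarized_density_matrix():
    """Photon density matrix after an H scattering: equal diagonal elements and
    vanishing cross terms, i.e. complete depolarization."""
    return [[0.5, 0.0], [0.0, 0.5]]
```

A K branch would instead transform the incoming density matrix through Eqs.~(\ref{eq:K_11}--\ref{eq:K_22}).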
\subsubsection{Wing Scattering} \label{sec:MC_scattering_wing} As shown by \cite{Stenflo1980}, scattering far from the line center will, due to the interference between the two available sublevels of the excited Ly$\alpha$ state, resemble that of a classical oscillator. Wing scattering may be approached as a $J=0 \to J=1 \to J=0$ transition, which is the one representing Rayleigh and Thomson scattering alike \citep{Chandrasekhar1960}. For this transition, we obtain the phase function and degree of polarization from the density matrix of Eq.~(4) in \cite{Ahn2002}. Scattering at right angles yields 100\% polarization, while forward- or backward-scattered light retains its initial degree of polarization and the phase relation---thus preserving the polarization direction as well. \subsection{Escape and Detection} \label{sec:MC_detection} Detection is the last step involved in the Monte Carlo procedure. Observationally, polarization properties of radiation are quantified by the Stokes parameters. To construct these parameters, we need to extract them from the polarization properties of individual photons in our Monte-Carlo simulation (which are quantified by the density matrix/polarization state vector $\mathbf{P}$). We achieve this by constructing a $3\times3$ `observable' density matrix $\rho_{\rm obs}$ which projects the complex polarization state vector $\mathbf{P}$ (i.e. the density matrix, see Eq~\ref{eq:dens_matrix}) onto the plane of the sky defined by the observer. First, we specify the direction along which we `observe' our model. Without loss of generality, we define this direction to correspond to the $+z$ direction, and thus assume that the plane of the sky corresponds to the $xy$-plane. 
We then only select those photons that escape within a solid angle $\omega$ from the $+z$ direction, and calculate the Stokes parameters for each photon in this subset as follows: \begin{align} I &= |c_x^2| + |c_y^2| \label{eq:stokes_I} \\ Q &= |c_x^2| - |c_y^2| \label{eq:stokes_Q} \\ U &= c_x c_y^* + c_x^* c_y = 2 c_x c_y^* = 2 c_x^* c_y\label{eq:stokes_U} \\ V &= i \left( c_x c_y^* - c_x^* c_y \right) = 0 \label{eq:stokes_V} \end{align} where the coefficients $|c^2_x|$, $|c^2_y|$, $|c^2_z|$ and their phase relations $c_x c_y^*$, $c_x c_z^*$ and $c_y c_z^*$, and how these relate to the (intrinsic) density matrix of the photon, $\rho_{\rm phot}$, are given in Appendix~\ref{app:translating_to_observer_basis}. The last equalities of Eqs.~(\ref{eq:stokes_U}) and (\ref{eq:stokes_V}) further indicate that we have no circular polarization, as we have neither emission of circularly polarized Ly$\alpha$ nor processes that induce it. We may then proceed to create images of the binned Stokes components, either integrated over all frequencies, or with the photons further binned by their frequency. We may then define the \textit{degree of polarization}, \begin{equation} P = \frac{\sqrt{Q^2 + U^2}}{I}, \label{eq:degree_of_polarization} \end{equation} and the relevant \textit{polarization angle}, \begin{equation} \chi = \frac{1}{2} \arctan \left( \frac{U}{Q} \right), \label{eq:angle_of_polarization} \end{equation} in line with observational work \citep{Hayes2011}. The degree of polarization and the polarization angle are thus quantities derived from the binned Stokes parameters we calculated for each photon. Note also that $I^2 \geq Q^2 + U^2$ \citep[see eg.~][]{Rybicki1979}, meaning that both $Q$ and $U$ may be zero when the intensity is not. We have tested our implementation against known solutions. 
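Eqs.~(\ref{eq:stokes_I}--\ref{eq:angle_of_polarization}) are straightforward to implement; the sketch below (an illustration, not the \texttt{tlac} code) uses \texttt{arctan2} so the polarization angle lands in the correct quadrant:

```python
import numpy as np

def stokes_from_amplitudes(c_x, c_y):
    """Single-photon Stokes parameters from the sky-plane amplitudes:
    I = |c_x|^2 + |c_y|^2, Q = |c_x|^2 - |c_y|^2,
    U = 2 Re(c_x c_y*), V = i (c_x c_y* - c_x* c_y) = -2 Im(c_x c_y*)."""
    cross = c_x * np.conj(c_y)
    I = abs(c_x) ** 2 + abs(c_y) ** 2
    Q = abs(c_x) ** 2 - abs(c_y) ** 2
    U = 2.0 * np.real(cross)
    V = -2.0 * np.imag(cross)
    return I, Q, U, V

def degree_and_angle(I, Q, U):
    """Degree of linear polarization P = sqrt(Q^2 + U^2)/I and angle chi."""
    P = np.sqrt(Q ** 2 + U ** 2) / I
    chi = 0.5 * np.arctan2(U, Q)
    return P, chi
```

For a photon polarized along the $x$-axis ($c_x = 1$, $c_y = 0$) this returns $P = 1$ and $\chi = 0$; for real, equal amplitudes it returns $P = 1$ and $\chi = 45^\circ$, as expected.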
In Appendix~\ref{app:slab} we have tested our code against scattering off a plane-parallel, semi-infinite slab, known from \cite{Chandrasekhar1960}, for which \cite{AHN2015POLARIZATIONHYDROGEN} also obtained results; against scattering in a Hubble-expanding cosmological volume known from \cite{Rybicki1999} in Appendix~\ref{app:hubble}; and against the expanding shell of \cite{Dijkstra2008} in Appendix~\ref{app:expanding_shell}. The density matrix implementation in \texttt{tlac} yields results equal to those of \cite{AHN2015POLARIZATIONHYDROGEN}. Additionally, it reproduces the degree of polarization as well as the surface brightness profiles for the expanding IGM and outflowing shell, even though the comparison results were obtained with the approach of \cite{Angel1969}, i.e.~with fully polarized photons. \subsection{Monte Carlo Implementation Summary} \label{sec:MC_summary} We implement the density matrix formalism for polarization into \texttt{tlac} as follows: \begin{enumerate} \item We assign a $2 \times 2$ (possibly complex) density matrix $\rho_{\rm phot}$ to each photon. We emit photons in a random direction $(\theta,\phi)$, and unpolarized. In practice this means that we assign a density matrix with $|c_1^2| = |c_2^2| = 1/2$ with no time-dependent correlation between them, i.e.~$c_1 c_2^* = c_2 c_1^* = 0$. \item We generate an {\text{H\MakeUppercase{\romannumeral 1}}}\xspace optical depth $\tau$ from the distribution $\exp(-\tau)$, and convert $\tau$ into the physical distance $s$ the Ly$\alpha$ photon travels before it scatters, by solving the line integral $\tau = \int_0^s dr' n_{\rm HI}(r')\sigma_{\alpha}(\nu[r'])$ (see \citealt{Gronke2014} for a more extended description of the code). \item The new propagation direction after scattering depends on the phase-function, which depends on the density matrix, which in turn depends on the type of scattering event (H \textit{vs} K \textit{vs} wing) and on the density matrix of the photon prior to scattering. 
The frequency of the photon determines whether the scattering occurs in the damping wing or in the core. For wing scattering the post-scattering density matrix is given by\footnote{See also Eq.~(4) in \cite{Ahn2002}.} Eqs.~(\ref{eq:wing_11}--\ref{eq:wing_22}). For core scattering, we draw a random number $\mathcal{R} \sim \mathrm{Unif}[0,1)$. If $\mathcal{R} > 1/3$ the scattering is H-type and the photon is depolarized ($\rho_{\rm phot,00}' = \rho_{\rm phot, 11}' = 1/2$, other elements zero). Otherwise, the scattering is K-type and the post-scattering density matrix is given by\footnote{See also Eq.~(11) in \cite{AHN2015POLARIZATIONHYDROGEN}.} Eqs.~(\ref{eq:K_11}--\ref{eq:K_22}). We sample from these density matrices using the rejection method: we draw a random set of trial polar angles $\theta',\phi'$ uniformly from a sphere, and calculate the corresponding post-scattering density matrix $\rho_{\rm phot}'$, \begin{equation} \rho_{\rm phot}' = f(\theta',\phi'\,|\, \rho_{\rm phot}, \theta, \phi), \label{eq:rho_intrinsic} \end{equation} which in turn translates to the phase function, Eq.~(\ref{eq:phase_function_chandra}). The phase function returns a number, which we compare to a randomly drawn number $\mathcal{R} \sim \text{Unif}[0,1)$. If $p(\theta', \phi' \,|\, \rho_{\rm phot}, \theta, \phi) \geq \mathcal{R}$, we accept the proposed scattering angles $\theta',\phi'$ as well as the scattered density matrix $\rho_{\rm phot}'$, and the photon moves on. \item To create observable, well-defined Stokes parameters, the density matrix $\rho_{\rm phot}$ can be transformed into a $3 \times 3$ density matrix $\rho_{\rm obs}$ using Eqs.~(\ref{eq:rho_observer}--\ref{eq:rho_cs}) that is relative to the observer, which is equivalent to observing the photons with a photon counting device that is fixed in space and is no longer oriented perpendicular to the propagation direction of the photon. 
This introduces six new density matrix coefficients $|c_x^2|, |c_y^2|, |c_z^2|, c_xc_y^* = c_x^* c_y, c_x c_z^* = c_x^* c_z, c_y c_z^* = c_y^* c_z$. \item For a chosen coordinate axis (which one observes nadir), there will be a plane spanned by the other two coordinate vectors. For this plane, we obtain well-defined Stokes parameters $I,Q,U$ and $V$ (the latter is zero) through Eqs.~(\ref{eq:stokes_I}--\ref{eq:stokes_V}). \item The Stokes parameters of each photon can be binned (by e.g.~frequency, radial bins, or spatial pixels) to create observables for the chosen plane. Multiple planes can be combined by assuming symmetries to increase the signal-to-noise ratio. We choose to observe photons escaping within a cone of half-opening angle $18^\circ$ ($\cos 18^\circ = 0.95$) around the axis observed nadir, similar to \cite{Trebitsch2016}, who chose $15^\circ$ and found that this choice does not strongly affect the results. \end{enumerate} \begin{figure*}[t] \centering \includegraphics[width=0.23\textwidth]{polar_sketch_a.pdf} \includegraphics[width=0.23\textwidth]{polar_sketch_b.pdf} \includegraphics[width=0.23\textwidth]{polar_sketch_c.pdf} \includegraphics[width=0.23\textwidth]{polar_sketch_d.pdf} \caption{Sketches of four possible scattering geometries and their polarization signatures: \textbf{(a)} spherically symmetric scattering geometry where the polarization increases toward the limb and is tangential to it, \textbf{(b)} an oblate ellipsoid where the majority of the intensity is polarized parallel to the plane of the major axes, \textbf{(c)} an optically thinner ellipsoidal scattering geometry where the majority of the intensity is polarized perpendicular to the major axes and \textbf{(d)} a bipolar outflow, where the polarization is always perpendicular to the outflow axis, with a symmetrically polarized, unobscured central core. 
\textit{In the lower panels}, we give the polarization if these extended sources were viewed edge-on as point sources, yielding \textbf{(a)} zero polarization as all vectors cancel due to the circular symmetry of the extended polarization signal, where photons that escape further out do so by scattering increasingly closer to $90^\circ$, leaving in sum a polarization signal that increases radially and that always is oriented tangentially to the central source; \textbf{(b)} non-zero polarization for an ellipsoid as the photons scatter and escape through optically thinner funnels along the minor axis of the ellipsoid before scattering at right angles toward the observer, becoming polarized horizontally; \textbf{(c)} non-zero polarization oriented perpendicular to the major axes of the ellipsoid as the scattering geometry was thin enough to allow photons to scatter along the major axes and then toward the observer, with the only allowed polarization being in the vertical direction; and \textbf{(d)} non-zero polarization oriented perpendicular to the outflow axis, as all the contributions from the brighter core are canceled out. The shade indicates the intensity.} \label{fig:polarization_sketch} \end{figure*} \section{Results} \label{sec:results} \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi radiative transfer through the interstellar and circumgalactic environments is a complex problem, and it is yet unclear which physical processes and scales play an important role in it. It is therefore advantageous to study \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi radiative transfer in simplified geometries in order to better identify the precise origins of the predicted observables, that is, in our case the predicted polarization signal. Here, we present calculations of Ly$\alpha$ polarization for a suite of simplified systems that have been adopted in the literature. These are representative of features in more complex astrophysical systems. 
In particular, we will discuss: \begin{itemize} \item static (\S~\ref{ssec:ellipsoid}) or expanding (\S~\ref{ssec:exp_ellipsoid}) ellipsoids, \item biconical outflows (\S~\ref{ssec:outflows}), and \item clumps of HI clouds, representative of a multiphase medium (\S~\ref{ssec:clouds}). \end{itemize} This can be thought of as a sequence in asymmetry: {\it first}, we introduce asymmetry in the gas distribution; {\it second}, we add an asymmetry in velocity space; {\it third}, we add further geometrical complexity through biconical outflows; and {\it fourth}, we introduce `multiphase' versions of the outflow models. For each model, we will introduce the model parameters and present computed Ly$\alpha$ observables that could shed light on the nature and geometry of the sources and their environment. We sketch these models and some of our findings in Figure~\ref{fig:polarization_sketch}, which will be referred to throughout the text. Note that the apparent geometry of a system can change with frequency, as \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi photons of different frequencies escape at different spatial locations. Following \cite{Lee1998}, we focus on computing the {\it frequency-dependence} of polarization for point sources, which differs from more recent analyses that focused on (frequency-)integrated properties of spatially extended sources \citep{Dijkstra2008,Dijkstra2012a,Trebitsch2016}. This may represent a case where the Ly$\alpha$ source is spatially unresolved, or a case in which a spectroscopic slit is wide enough to cover the entire source. For these point sources, we also show the total (i.e. integrated over frequency) polarization signal and its direction (relative to the geometry of the source, which is unobservable in the plane of the observer). \subsection{Oblate Ellipsoids} \label{ssec:ellipsoid} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{sketch_v2.pdf} \caption{Ellipsoidal scattering geometry. 
We have an ionized, inner region of radius $R_{\rm in}$ where our Ly$\alpha$ source is located, and an outer, ellipsoidal HI region with principal axes $\left( R_{{\rm ell},a}, R_{{\rm ell},b}, R_{ {\rm ell}, c} \right)$ where $R_{ {\rm ell}, a} = R_{ {\rm ell}, b}$. The outer ellipsoid has neutral hydrogen column densities $N_{\rm HI}^{(a)}$ and $N_{\rm HI}^{(c)}$ along the principal axes. The viewing angle is given as $\mu$. } \label{fig:ellipsoid_geometry} \end{figure} \begin{figure*}[t] \subfloat [Intensity (normalized to unit area under the curve) and polarization spectra as a function of velocity offset from the \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi line center for ellipsoids with $N_{\text{H\MakeUppercase{\romannumeral 1}}}\xspace^{(c)} = 10^{21}$ cm$^{-2}$ along the minor axis, $T=10^4$ K, viewed edge-on as point sources. The colors indicate their ellipticities, given as the ratio between the minor and major axes, $\varepsilon \equiv R_{\rm ell,c}/R_{\rm ell,a}$. The vertical dashed, grey line indicates a typical velocity offset blueward of which photons are commonly absorbed by the IGM. Hence, the spectrum blueward of the dashed line is in many cases not detectable. \label{fig:spectra_ellipsoids} ] {\includegraphics[width=\columnwidth]{ellipsoid_specP_ell_all.pdf}} \qquad \subfloat [The degree and direction of polarization for ellipsoids with varying column densities along the minor axis, $N_{\rm HI}^{(c)}$, and varying ellipticities $\varepsilon$. The ellipsoids are viewed edge-on, i.e.~where they appear asymmetric, but as point sources, to prevent the introduction of non-intrinsic geometric asymmetries by, for example, a narrow slit covering only part of the overall source. Viewed face-on, i.e.~where the ellipsoids would appear symmetric to an observer, the total polarization is zero for all ellipticities and column densities, unlike in this plot. 
The direction of the arrows indicates the polarization direction relative to the plane of the major axes: horizontal arrows are parallel to it, whereas vertical arrows are perpendicular to it. \label{fig:matrix_polarization_ellipsoids} ] {\includegraphics[width=\columnwidth]{ellipsoid_matrix.pdf}} \caption{Static ellipsoids with varying ellipticity and column density along the minor axis, $N_{\text{H\MakeUppercase{\romannumeral 1}}}\xspace^{(c)}$.} \end{figure*} \cite{Angel1969} showed that Thomson scattering of thermal X-rays emanating from an ellipsoidal scattering geometry could provide up to 5\% polarization if viewed from the side as a point source. This was further explored by \cite{Kim2007}, who considered H$\alpha$ line and continuum radiation emanating from an ellipsoid which also acted as a Rayleigh scatterer. They found a viewing angle-dependent increase in polarization in the wings. We pursue this idea further for Ly$\alpha$. We run a set of simulations where we explore the effects of \textbf{(i)} changing the column density of neutral HI gas, and \textbf{(ii)} changing the ellipticity. Changing the column density is known to have a profound effect on the emergent spectrum. However, as shown in \cite{Dijkstra2016}, the shape of the spectrum emanating from a source region fully enclosed in an ellipsoid is primarily set by the axis of lowest column density, and therefore it cannot reveal the ellipticity. We create an ellipsoid of static HI gas with fixed number density $n_{\rm HI}$ with principal axes $\left( R_{{\rm ell},a}, R_{{\rm ell},b}, R_{ {\rm ell}, c} \right)$, where we set the major axes $R_{ {\rm ell}, a} = R_{ {\rm ell}, b}$ and the perpendicular minor axis $R_{ {\rm ell}, c}$. This ellipsoid encloses an inner, ionized region with radius $R_{\rm in}$, in which our source of unpolarized Ly$\alpha$ is located. 
The column densities along the principal axes can be found as $N_{\rm HI} = n_{\rm HI} (R_{\rm ell} - R_{\mathrm{in}})$, i.e., the neutral hydrogen number density is constant throughout the system. The viewing angle $\mu$ is defined relative to the plane of the two major axes (see Figure~\ref{fig:ellipsoid_geometry} for a sketch of this geometry). We choose three initial column densities along the minor axis, $N_{\rm HI}^{(c)}=10^{17}, 10^{19}, 10^{21}$ cm$^{-2}$. The choice of column densities reflects those expected in real systems \citep{Verhamme2017,Hashimoto2017,Gronke2015}. The lower bound, $N_{\text{H\MakeUppercase{\romannumeral 1}}}\xspace^{(c)} = 10^{17}$ cm$^{-2}$, corresponds to a case from which ionizing LyC radiation may escape. The upper bound $N_{\text{H\MakeUppercase{\romannumeral 1}}}\xspace^{(c)} = 10^{21}$ cm$^{-2}$ corresponds roughly to the upper envelope of $N_{\rm HI}$ that is inferred from Ly$\alpha$ emitting galaxies. We then vary the ellipticity by varying the major axes (i.e. $R_{ {\rm ell}, a}$ and $R_{ {\rm ell}, b}$). This gives a set of ellipticities $\varepsilon \equiv R_{{\rm ell},c}/R_{{\rm ell},a}=\{1, 1/2, 1/10, 1/100\}$\footnote{A change in ellipticity is equivalent to changing the column density along the major axes, $N_{\rm HI}^{(a)}$.}. We fix $R_{\rm in} = 10$ pc and $R_{ {\rm ell},c} = 20$ pc, and note that the choice of scale is arbitrary\footnote{However, the choice of the ratio $R_{\rm in}/R_{ {\rm ell},c}$ may not be; we have not investigated this further.} for media that are static or have constant velocity fields. Our results are thus not scale-dependent. Furthermore, we set the gas temperature to $T=10^4\,$K and inject the photons with $\sigma_i = 200\,\ifmmode{\mathrm{km}\,\mathrm{s}^{-1}}\else km\,s${}^{-1}$\fi$. 
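The footnote's equivalence between ellipticity and major-axis column density can be made explicit with the fiducial numbers above (an illustrative back-of-the-envelope sketch):

```python
PC_CM = 3.086e18  # one parsec in cm

def column_densities(n_HI, R_in_pc, R_c_pc, eps):
    """Column densities N_HI = n_HI (R_ell - R_in) along the major (a) and
    minor (c) axes, for ellipticity eps = R_ell,c / R_ell,a. n_HI in cm^-3."""
    R_a_pc = R_c_pc / eps
    N_c = n_HI * (R_c_pc - R_in_pc) * PC_CM
    N_a = n_HI * (R_a_pc - R_in_pc) * PC_CM
    return N_a, N_c

# Choose n_HI such that N_HI^(c) = 1e21 cm^-2 over the 10 pc minor-axis path
n_HI = 1e21 / (10.0 * PC_CM)  # ~ 32 cm^-3
N_a, N_c = column_densities(n_HI, R_in_pc=10.0, R_c_pc=20.0, eps=0.1)
# eps = 1/10 gives N_a / N_c = 190 pc / 10 pc = 19
```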
In Figure~\ref{fig:spectra_ellipsoids}, we plot the spectra of the intensity $I$ and the polarization $P$ for the emergent photons for $N_{\text{H\MakeUppercase{\romannumeral 1}}}\xspace^{(c)}=10^{21}$ cm$^{-2}$ and all ellipticities, viewed with $\mu=0$ (edge-on). As a guide to the eye, we plot (in this and other intensity spectra) a gray dashed line, centered at 160 km s$^{-1}$. This velocity offset marks a typical boundary between the redward, observable part of the spectrum and the blueward, inaccessible part, for which the increasingly neutral IGM at higher $z$ prevents transmission of \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi photons \citep{Dijkstra2007,Laursen2011}. Since varying the ellipticity effectively changes the column density along the major axes while the intensity spectra do not change, their shape must be set by the column density along the minor axis, which does not change, and not by that along the line of sight (LOS). In other words, the spectrum is mostly determined by the HI column density along the path of `least resistance' (see also \citealt{Dijkstra2016} for a similar result). However, we find that $P$ is overall higher across the spectrum for increased ellipticities, including at those frequencies where most photons escape. This can be understood as a consequence of the increasing deformation of the source with increasing ellipticity. At all frequencies, the shape of the source becomes asymmetric, leaving a preferential polarization direction and an overall non-zero polarization signal. We quantify this effect in Figure~\ref{fig:matrix_polarization_ellipsoids}. Here we show the fractional degree of polarization $P$ as a function of \textbf{(i)} the minor axis column density $N_{\rm HI}^{(c)}$ and \textbf{(ii)} the ellipticity $\varepsilon$. We assume that we view the sources edge-on\footnote{Face-on sources would appear circularly symmetric, and any polarization signal averages out.} (i.e.
the projected size of the source is $R_{ {\rm ell},c} \times R_{ {\rm ell},a}$). The color of a bin indicates the degree of polarization. The arrow indicates the direction of the linear polarization w.r.t.~the plane of the major axes (the size of the arrow also reflects its magnitude). The {\it lower 3 panels} of Figure~\ref{fig:matrix_polarization_ellipsoids} show that the degree of polarization is $P<1$ \% for a spherical scattering geometry ($\varepsilon = 1$), as the spherical symmetry washes out any polarization. The polarization increases with ellipticity, but in a way that depends non-trivially on $N_{\rm HI}^{(c)}$: the direction of the polarization vector changes as $N_{\rm HI}^{(c)}$ increases from $N_{\rm HI}^{(c)} = 10^{19}$ cm$^{-2}$ to $N_{\rm HI}^{(c)} = 10^{21}$ cm$^{-2}$. This increase in column density effectively blocks all light from the central part of the system, leaving only photons that escape along the minor axes. See panel b) of Figure~\ref{fig:polarization_sketch} for a sketch of this obscuration. To reach the observer, these photons have to scatter closer to $90^\circ$, obtaining large degrees of polarization with the polarization vector oriented parallel to the major axes. This effect is fundamentally similar to the one seen in spherically symmetric systems. There, photons at large radii are also highly polarized, as they must scatter by close to $90^\circ$ to escape toward the observer; see e.g.~the rise in $P$ with radius in Figure~3 of \cite{Dijkstra2008}. The polarization direction is always tangential to the central source. In such systems, however, the global signal cancels out by symmetry, as illustrated in panel a) of Figure~\ref{fig:polarization_sketch}.
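The binned degree and direction of polarization shown in these figures follow from the ensemble-summed Stokes parameters of the escaping photons. A generic sketch of this bookkeeping (our illustration, not the actual Monte Carlo code):

```python
import math

# Generic sketch (not the paper's Monte Carlo code): the binned degree and
# direction of linear polarization follow from the ensemble-summed Stokes
# parameters (I, Q, U) of the escaping photons.


def polarization(photons):
    """photons: iterable of (I, Q, U) Stokes contributions."""
    I = sum(p[0] for p in photons)
    Q = sum(p[1] for p in photons)
    U = sum(p[2] for p in photons)
    P = math.hypot(Q, U) / I       # degree of linear polarization
    chi = 0.5 * math.atan2(U, Q)   # position angle of the polarization vector
    return P, chi


# Two fully polarized photons with perpendicular polarization angles cancel:
# the ensemble is unpolarized even though each photon has P = 1.
P, _ = polarization([(1.0, 1.0, 0.0), (1.0, -1.0, 0.0)])
print(P)  # 0.0
```

The cancellation in the example is exactly the symmetry washing-out described above for the spherical ($\varepsilon=1$) geometry: individual photons can be highly polarized while the ensemble-averaged signal vanishes.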
\subsection{Expanding Ellipsoids} \label{ssec:exp_ellipsoid} \begin{figure*}[t] \subfloat [Intensity and polarization spectra for ellipsoids with ellipticity $\varepsilon=1/10$, column density $N_{\rm HI}^{(c)} = 10^{19}$ cm$^{-2}$ along the minor axis, $T=10^4$ K, and varying global outflow velocities $v_{\rm flow}$ indicated by the line color. The expanding ellipsoids are viewed edge-on. The grey dashed line indicates the threshold for IGM removal of photons. \label{fig:spectra_expanding} ] { \includegraphics[width=\columnwidth]{exp_ellipsoid_specP_expell_all.pdf} } \qquad \subfloat [Total degree of polarization and its direction (given by the arrows) relative to the plane of the major axes for globally expanding ellipsoids with outflow velocities $v_{\rm flow}$, ellipticity $\varepsilon=1/10$, and column densities along the minor axis $N_{\rm HI}^{(c)}$. The spectra of the sources with $N_{\rm HI}^{(c)} = 10^{19}$ cm$^{-2}$ are plotted in Figure~\ref{fig:spectra_expanding}. \label{fig:matrix_polarization_expanding_ellipsoids} ] { \includegraphics[width=\columnwidth]{exp_ellipsoid_matrix.pdf} } \caption{Expanding ellipsoids with varying outflow velocity and column density along the minor axis, $N_{\text{H\MakeUppercase{\romannumeral 1}}}\xspace^{(c)}$.} \end{figure*} The previous section discussed the polarization emerging from static ellipsoids. Here, we add a radially outward directed outflow to the ellipsoid with constant velocities $v_{\rm flow} = \{0, 67, 200\}$ km s$^{-1}$. As in \S~\ref{ssec:ellipsoid}, we fix $\sigma_i = 200$ km s$^{-1}$. The presence of an outflow introduces an additional degree of asymmetry, now in velocity space. Figure~\ref{fig:spectra_expanding} shows the spectrum and the frequency dependence of the polarization emerging from sources with fixed $N_{\rm HI}^{(c)} = 10^{19}$ cm$^{-2}$ along the minor axis and ellipticity $\varepsilon=1/10$, but with different expansion velocities $v_{\rm flow}$.
For the static case we recover the double-peaked spectrum obtained in \S~\ref{ssec:ellipsoid}. Expansion causes the majority of the photons to escape in the red wing, as blueward photons experience a higher optical depth \citep[see e.g.~][]{Zheng2002,Ahn2003,Dijkstra2006}. The frequency dependence of the polarization is also asymmetric around the line center. For the cases with $v_{\rm flow} \neq 0$, there is little flux on the blue side of the line. The polarization of this flux is comparable to that of the static case (within the uncertainties). On the other hand, in the red wing of the line, outflows enhance the degree of linear polarization significantly. We obtain a degree of linear polarization that increases with velocity offset $\Delta v$ from the line center and approaches $P\sim 30\%$ asymptotically at $\Delta v > 500$ km s$^{-1}$. Remarkably, the increase in the degree of linear polarization is very similar for all models with $v_{\rm flow} \neq 0$. This result can be understood as follows. The distance a photon can travel increases with increasing outflow velocity, effectively lowering the optical depth seen by the photons. In our cases, the change in outflow velocity does not necessarily imply a change in the observed spatial shape of the system. The optical depth is sufficiently low that a significant fraction of the photons diffuse along the major axes in the presence of outflows, with polarization vectors oriented tangentially with respect to the central source, as sketched in panel c) of Figure~\ref{fig:polarization_sketch}. This means that the spatial asymmetry does not change significantly, and the degree of polarization remains similar between the models. Figure~\ref{fig:matrix_polarization_expanding_ellipsoids} shows the degree and direction of polarization of expanding ellipsoids with $N_{\rm HI}^{(c)} = \{ 10^{19}, 10^{21} \}$ cm$^{-2}$ and $\varepsilon=1/10$ viewed edge-on.
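The higher optical depth seen by blueward photons in an outflow follows directly from the Doppler shift into the gas frame. A small illustrative sketch (our own, assuming the usual dimensionless frequency $x$ in Doppler units and a thermal velocity of $12.85$ km s$^{-1}$ at $T=10^4$ K):

```python
import math

# Illustrative sketch (not the radiative transfer code): in a radial outflow,
# gas sees an outward-propagating photon Doppler-shifted by -v_flow. In
# Doppler units x = (v - v_line)/v_th, the comoving frequency is
# x' = x - v_flow / v_th, so blue photons (x > 0) are shifted toward
# resonance (x' ~ 0) and scatter strongly, while red photons (x < 0) are
# shifted further into the transparent red wing and escape.


def comoving_x(x_obs, v_flow_kms, T=1e4):
    v_th = 12.85 * math.sqrt(T / 1e4)  # thermal velocity in km/s
    return x_obs - v_flow_kms / v_th


x_blue = comoving_x(+15.0, 200.0)  # a blue photon: lands near resonance
x_red = comoving_x(-15.0, 200.0)   # a red photon: pushed deep into the wing
print(x_blue, x_red)
```

For $v_{\rm flow}=200$ km s$^{-1}$, a photon 15 Doppler widths blueward of line center ends up within one Doppler width of resonance in the gas frame, while its red counterpart moves to $|x'|\sim 30$, which is the asymmetry that empties the blue side of the emergent spectrum.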
We omit the case $N_{\rm HI}^{(c)} = 10^{17}$ cm$^{-2}$, because for this case too few photons scatter, and our predictions practically correspond to those assumed for the intrinsic source. For $N_{\rm HI}^{(c)} = 10^{19}$ cm$^{-2}$, the polarization is near zero for the static ellipsoid (in agreement with the {\it central upper panel} of Figure~\ref{fig:matrix_polarization_ellipsoids}). When we increase $v_{\rm flow}$, the polarization remains roughly constant at $P\sim10\%$, with the polarization direction aligned perpendicular to the major axes. This can be understood for the same reason that the degree of polarization in the spectra did not change: the spatial (observed) shape of the systems does not significantly change with increasing outflow velocity. As the asymmetries do not change, the systems obtain similar degrees and directions of polarization. Figure~\ref{fig:matrix_polarization_expanding_ellipsoids} contains other interesting results: for a higher column density along the minor axis, $N_{\rm HI}^{(c)} = 10^{21}$ cm$^{-2}$, the polarization behaves completely differently compared to the case $N_{\rm HI}^{(c)} = 10^{19}$ cm$^{-2}$. The total polarization is $P=8\%$ for a static ellipsoid (see also Figure~\ref{fig:matrix_polarization_ellipsoids}), and it now \textit{decreases} with outflow velocity, reaching $P=1\%$ for $v_{\rm flow} = 200$ km s$^{-1}$. For this higher column density, the system transitions from the state sketched in panel b) of Figure~\ref{fig:polarization_sketch} to the more symmetric case presented in panel c). However, there is one important difference from the system presented in panel c), as well as from the system with lower column density $N_{\rm HI}^{(c)}$. The increased optical depth from the higher column density means that the photons keep scattering even once they encounter the rest-frame velocity offset of the atoms in the expanding medium.
This isotropizes the local radiation field, and the polarization vectors become randomized. The lowered optical depth from the increasing outflow velocity means the photons also penetrate deeper along the major axes, contributing to removing the spatial asymmetry. The overall effect is that the net polarization is reduced. This reduction is eventually accompanied by a flip in the polarization vector of the $N_{\rm HI}^{(c)} = 10^{21}$ cm$^{-2}$ system, where the vector changes alignment from parallel to perpendicular to the major axes. This indicates an increased resemblance to the geometry of the much lower column density system, $N_{\rm HI}^{(c)} = 10^{19}$ cm$^{-2}$. \subsection{Bipolar Outflows} \label{ssec:outflows} \begin{figure}[t] \centering \includegraphics[width=0.49\columnwidth]{bipolar_Iimage_unobscured_theta_0-125.pdf} \includegraphics[width=0.49\columnwidth]{bipolar_polarimage_unobscured_theta_0-125.pdf} \caption{ Spatially extended maps of the intensity and degree of polarization for bipolar outflows of $v_{\rm flow} = 200$ km s$^{-1}$ from a static sphere with $N_{\rm HI}=10^{19}$ cm$^{-2}$ and a total opening angle of $\theta_{\rm flow} = \pi/8$. The color indicates the degree of polarization, and the arrows indicate the direction of polarization. The dashed lines indicate the region that is obscured in Figure~\ref{fig:bipolar_spectrum}, to resemble the removal of Ly$\alpha$ photons by, for example, a dusty disk. } \label{fig:bipolar_polarimage} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{bipolar_matrix.pdf} \caption{Degree of polarization had the unobscured bipolar outflows (with outflow velocity $v_{\rm flow} = 200$ km s$^{-1}$) been viewed as point sources.
The overlaid polarization vectors indicate the polarization direction relative to the flow (horizontal: perpendicular to the flow directions, vertical: parallel to the flow directions).} \label{fig:bipolar_matrix} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\columnwidth]{bipolar_specP_unobscured_theta_all.pdf} \, \includegraphics[width=\columnwidth]{bipolar_specP_obscured_theta_all.pdf} \caption{Intensity and polarization spectra for bipolar outflows with $v_{\rm flow} = 200$ km s$^{-1}$ out of a static, non-elliptic HI sphere with radial column density $N_{\rm HI} = 10^{19}$ cm$^{-2}$. The opening angle $\theta_{\rm flow}$ of the outflows out of the sphere is indicated by the line colors. The grey, vertical dashed lines indicate the velocity offset below which bluer photons would be removed by a partially neutral IGM. \textbf{\textit{Left}}: spectra for unobscured sources, i.e.~including photons leaving both the central, static sphere and the moving medium in the bipolar cones. \textbf{\textit{Right:}} spectra for partially obscured sources, i.e.~corresponding to blocking photons emitted within the horizontal dashed grey lines in Figure~\ref{fig:bipolar_polarimage}, resembling for example the removal of Ly$\alpha$ photons by a circumgalactic disk of dust. } \label{fig:bipolar_spectrum} \end{figure*} So far, our analysis has focused on spherically or cylindrically symmetric gas geometries. However, there is observational and theoretical evidence that outflows are bipolar \citep[e.g.][]{Blandford1974,Suchkov1994}. More recently, observations of LARS~05 \citep{Duval2016} nicely illustrate how Ly$\alpha$ photons scatter off a bipolar outflow that bursts out of an edge-on disk galaxy. In this section, we focus on predicting spectra and polarization of scattered Ly$\alpha$ radiation emerging from simplified representations of bipolar outflows, with either an unobscured (\S~\ref{ssec:unobs}) or obscured (\S~\ref{ssec:obs}) central source.
We model the bipolar outflows as follows: the model contains a spherical cloud with $N_{\rm HI} = \{ 10^{17}, 10^{19}, 10^{21} \}$ cm$^{-2}$ and $T = 10^4$ K that resides in a fully ionized environment. The (unpolarized) Ly$\alpha$ source resides in the center of this cloud. We then introduce bipolar outflows in cones with total opening angles $\theta_{\rm flow} = \{ 1/16, 1/8, 1/4, 1/2 \} \pi$. Inside the cones, gas flows radially outward with a constant velocity $v_{\rm flow} = 200$ km s$^{-1}$. The HI number density in the cones is equal to that in the central sphere, and the cones extend a factor of $4$ further than the edge of the sphere. \subsubsection{Unobscured Central Source}\label{ssec:unobs} Figure~\ref{fig:bipolar_polarimage} shows an illustrative example of the spatial distribution of intensity and polarization for an outflow with $\theta_{\rm flow} = \pi/8$ viewed edge-on\footnote{When viewing the outflows face-on, i.e. straight into the cones and possibly also the central static cloud, one would observe a circularly symmetric source, and hence any point-source polarization signal would be lost.}. We clearly see the biconical structure in both the intensity and the polarization images. In the central, static sphere, the degree of polarization increases toward the limb, as in the spherical models, reaching $P \sim 30\%$ with the polarization angle oriented tangentially to the center. The intensity decreases radially outward, both in the central cloud and in the cones. This diffusion of photons gives rise to a decreasing surface brightness profile; see e.g.~Figure~4 of \cite{Dijkstra2006}. In the outflows, the degree of polarization increases with $\Delta v$, reaching values of $P\sim 50\%$--$70\%$ (pixels with $P > 80\%$ exist, but the flux in these is negligible). The direction of polarization in the outflows is oriented perpendicular to the flow direction.
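The biconical geometry just described can be sketched in a few lines. This is our own illustrative implementation (the simulation code may differ in conventions, e.g. in how the cone axis is chosen): a point lies inside one of the two cones if its angle to the polar axis is within half the total opening angle $\theta_{\rm flow}$, and there the gas moves radially outward at $v_{\rm flow}$.

```python
import math

# Hypothetical sketch of the bipolar-outflow geometry described above: gas at
# position (x, y, z) is inside one of the two cones if the angle to the polar
# (z) axis is within half the total opening angle theta_flow; there it moves
# radially outward at v_flow, and is static (the central sphere) elsewhere.


def cone_velocity(pos, theta_flow, v_flow=200.0):
    """Return the gas velocity vector (km/s) at position `pos` = (x, y, z)."""
    x, y, z = pos
    r = math.sqrt(x * x + y * y + z * z)
    if r == 0.0:
        return (0.0, 0.0, 0.0)
    cos_axis = abs(z) / r                       # angle to the +/- z axis
    if cos_axis >= math.cos(theta_flow / 2.0):  # inside either cone
        return (v_flow * x / r, v_flow * y / r, v_flow * z / r)
    return (0.0, 0.0, 0.0)                      # static gas elsewhere


print(cone_velocity((0.0, 0.0, 1.0), math.pi / 8))  # on the axis: outflowing
print(cone_velocity((1.0, 0.0, 0.0), math.pi / 8))  # in the midplane: static
```

The same predicate (applied to the photon's position and the local radius) also decides whether a photon scatters off static or outflowing gas.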
Figure~\ref{fig:bipolar_polarimage} also shows that the overall polarization signal increases with the opening angle $\theta_{\rm flow}$ and is always aligned perpendicular to the outflow direction. This is the same physical effect that we saw at play for the ellipsoids: the local radiation fields are stronger in the direction of the source and are not isotropized, leaving a polarization vector tangential to the direction of the central source. The dependence of the integrated polarization on $\theta_{\rm flow}$ is summarized in Figure~\ref{fig:bipolar_matrix}, which shows clearly that $P$ increases with $\theta_{\rm flow}$ for all $N_{\rm HI}^{(c)}$. This increase reflects that an increasing $\theta_{\rm flow}$ causes a larger fraction of the flux to emerge from the biconical outflows, thereby increasing the spatial asymmetry of the source. Additionally, the biconical outflows appear more polarized, as fewer photons scatter there. Those that do scatter propagate along the outflow direction, but have to scatter and escape at angles closer to 90$^{\circ}$ to reach the observer. That way they gain large degrees of polarization, oriented perpendicular to the outflow axis. We illustrate this in panel d) of Figure~\ref{fig:polarization_sketch}. Figure~\ref{fig:bipolar_matrix} also shows that the overall polarization is maximal for $N_{\rm HI} = 10^{19}$ cm$^{-2}$. For $N_{\rm HI} = 10^{17}$ cm$^{-2}$, the central cloud is optically thin to most emitted Ly$\alpha$ photons, and there is little flux in the scattered component (and the flux that does scatter, scatters in the core, which leads to a lower degree of polarization). The overall polarization for $N_{\rm HI} = 10^{21}$ cm$^{-2}$ is lower because in this case the scattering medium is optically thicker, which isotropizes the Ly$\alpha$ radiation field. In turn, this isotropization reduces the overall polarization of the radiation that escapes.
The {\it upper left panel} of Figure~\ref{fig:bipolar_spectrum} shows the spectra of the models with $N_{\rm HI} = 10^{19}$ cm$^{-2}$ (i.e. with maximum polarization). The spectra are double-peaked, with the red peak becoming stronger relative to the blue peak with increasing $\theta_{\rm flow}$. This increase reflects the increasing fraction of Ly$\alpha$ photons that scatter through the outflow. The {\it lower left panel} of Figure~\ref{fig:bipolar_spectrum} shows that the degree of polarization is negligible blueward of line center for all opening angles. Redward, the degree of linear polarization increases with $\Delta v$. This reflects that the blue peak consists of photons that escape from the static central cloud: their polarization cancels out due to the symmetric central geometry (as for case \textit{a} in Figure~\ref{fig:polarization_sketch}). This also holds to some extent for the red peak, but in addition, it has contributions from photons that have escaped into the biconical outflows. There, they scatter less and provide a higher overall local polarization signal. The redward increase in the global polarization reflects the increasing spatial asymmetry with $\Delta v$. \subsubsection{Obscured Central Source}\label{ssec:obs} \begin{figure*}[t] \centering \includegraphics[width=\columnwidth]{clouds_both_specP_all.pdf} \, \includegraphics[width=\columnwidth]{clouds_PandI.pdf} \caption{\textbf{\textit{Left:}} Spectra of the intensity and degree of polarization as a function of velocity offset/frequency from the Ly$\alpha$ line center for systems filled with many small clumps of HI gas, sized to provide a covering factor $f_{\rm c} \sim f_{\rm c,crit}$, in an otherwise ionized medium, representative of a multiphase scattering system. We plot the emergent spectra for systems where the source of Ly$\alpha$ is central (black line), or where the source is extended throughout the medium (red line), i.e., embedded in each clump.
Note that we do not view the sources through a slit, but rather as point sources. We also plot the intrinsic intensity spectra; the dashed black line is for the central source, and the red dotted line is for the extended source. \textbf{\textit{Right:}} Surface brightness (normalized and rescaled to the maximum value obtained in the two models) and polarization profiles as a function of impact parameter $\alpha$ for a central (black) and extended (red) source of Ly$\alpha$ emission in a clumpy, multiphase medium. We also plot the intrinsic surface brightness profiles; the dashed black line is for the central source, and the red dotted line is for the extended source. } \label{fig:clumpy_clouds} \end{figure*} We repeat the previous analysis (\S~\ref{ssec:unobs}), but now obscure the central static sphere (the obscured region is indicated with {\it grey dashed lines} in Figure~\ref{fig:bipolar_polarimage}). This represents a case in which the biconical outflows are separated by, for example, a dusty galactic disk as in LARS~05 \citep[see~][]{Duval2016} or in M82 \citep{Lynds1963,Gallagher1999}. The {\it right panels} of Figure~\ref{fig:bipolar_spectrum} show the spectra and polarization for the same models as in the {\it left panels}, but with the central region obscured. The red peaks of the spectra are widened especially for the largest opening angles, $\theta_{\rm flow} = \{1/4, 1/2\}\pi$. This enhancement of the red peak is primarily a renormalization of the entire spectrum: the obscuration removes a majority of the (blue and red) photons that arise from the central spherical cloud, leaving the surplus of red photons that escape from the cones. For smaller opening angles $\theta_{\rm flow}$, however, less of the overall flux originates from the bipolar cones. The surplus of red photons seen for the larger opening angles is still present, but is not sufficient to significantly alter the shape of the spectrum.
The spectral signature of the outskirts of the central sphere therefore dominates the spectrum. The {\it lower right panel} shows that the polarization increases at effectively all frequencies. This simply reflects that obscuring the central source eliminates photons whose polarization vectors align with the cone axis. The polarization in the blue wing is lower than in the red wing, as these are photons that escape primarily from the central sphere. The additional boost in $P$ at large $\Delta v$ in the red wing is the signature of the photons that have scattered in the outflows. As for the unobscured case, the local polarization is higher due to the fewer scatterings the photons undergo there, and the global degree of polarization reflects the spatial asymmetry due to the cones. The degree of polarization in the red wing reveals how the source transitions from nearly symmetrical, with little contribution from the cones for $\theta_{\rm flow}=\pi/16$, to larger contributions from the cones with increasing $\theta_{\rm flow}$. This comes at a price: the increased opening angles also allow for larger variations in the polarization vectors, which in the cones are tangential to the source. \subsection{Multiphase Medium} \label{ssec:clouds} All previous models represented gas in the ISM with a single density and temperature. In reality, interstellar (and circumgalactic) gas is known to be multiphase. Ly$\alpha$ radiative transfer through multiphase media is a complex problem, which has also been represented by simplified models \citep[see e.g.~][]{Neufeld1991,Hansen2006,Dijkstra2012a,Laursen2013,Gronke2016}. These simplified models consist of neutral, spherical (possibly dusty) clumps, embedded within a hot, ionized and dust-free medium \citep[based loosely on the early models by][]{McKee1977}.
It has been demonstrated that for such `clumpy' media, the key parameter that affects Ly$\alpha$ radiative transfer is the average number of clumps per sightline: the covering factor $f_{\mathrm{c}}$ \citep[see~][]{Hansen2006,Gronke2016}. \cite{Gronke2016,Gronke2017} showed that there exists a critical value of $f_{\mathrm{c}}$, $f_{\rm c, crit}$, above which clumpy media affect Ly$\alpha$ photons as if they consisted of a single phase (i.e. were homogeneous). The value of $f_{\rm c, crit}$ is of order a few to a few tens, depending on the total HI column density and the kinematics of the clumps \citep{Gronke2017}. The polarization properties of Ly$\alpha$ radiation that scatters through `very clumpy' media (i.e. $f_{\rm c} \gg f_{\rm c, crit}$) are therefore well captured by our previous models, in which the gas was homogeneously distributed. The polarization properties of Ly$\alpha$ radiation in models with $f_{\rm c} \ll f_{\rm c, crit}$ have been explored in \cite{Dijkstra2012a}, where this regime was associated with few (or no) scatterings and consequently high degrees of polarization. In this section, we focus on the `transition regime', $f_{\rm c} \sim f_{\rm c, crit}$, and contrast a central \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi emitting source surrounded by a uniform distribution of randomly moving clumps (which can represent a central star-forming galaxy surrounded by a clumpy circumgalactic medium) with a setup where the \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi radiation emerges from the clumps (which can represent the same galaxy and circumgalactic medium, but in which Ly$\alpha$ arises as fluorescent emission powered by ionizing radiation that leaked from the central galaxy; see \citealt{Mas-Ribas2016,Mas-Ribas2017}).
While the numerical value of $f_{\mathrm{c, crit}}$ depends on the neutral hydrogen column density of the clumps, $N_{\mathrm{HI, cl}}$, and their kinematics \citep{Gronke2017}, we stress that the characteristics described in this section apply generally to systems with $f_\mathrm{c} \sim f_{\mathrm{c,crit}}$. In our models, we choose the clumps' column densities to be $N_{\rm HI,cl} = 10^{18}$ cm$^{-2}$ with a gas temperature of $T=10^4$ K \citep[motivated by the `shattering' theory of][]{McCourt2016}, and a random velocity with each component drawn from a Gaussian distribution with standard deviation $\sigma_{\rm cl} = 200$ km s$^{-1}$. This yields a critical covering factor of $f_{\mathrm{c,crit}}\approx 5$ \citep{Gronke2017}, which we adopt for $f_{\mathrm{c}}$. Furthermore, we choose the clumps' radii to be $r_{\mathrm{cl}} = 1\,$pc and fix the radius of the (spherical) system to $1\,$kpc. We note, however, that these parameters (given that the others are fixed) do not influence the radiative transfer process \citep{Hansen2006}. We set the intrinsic spectrum to have $\sigma_i=12.85\,\mathrm{km}\,\mathrm{s}^{-1}$ (in the reference frame of the emitting gas), which corresponds to the thermal velocity of the gas. The {\it upper left panel} of Figure~\ref{fig:clumpy_clouds} shows the spectra from a multiphase medium where \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi is emitted either (a) by a central source or (b) extendedly, throughout the medium, by sources residing in each clump. The difference between the two setups is particularly visible in the intrinsic spectra, plotted with dashed and dotted lines. For a central source, the photons are emitted close to the line center; for the extended source, the motion of the clumps must also be accounted for. Both emergent spectra are broad and single-peaked, which is characteristic of media with $f_{\rm c} \sim f_{\rm c, crit}$ \citep[see Figure~3 of~][]{Gronke2017}.
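The model parameters above can be tied together with the standard estimate that the mean number of clumps a radial sightline intersects is $f_{\rm c} \approx n_{\rm cl}\,\pi r_{\rm cl}^2\,R$ for uniformly distributed clumps. A rough sketch under that assumption (our bookkeeping, not the simulation code):

```python
import math

# Rough sketch under the standard assumption (Hansen & Oh 2006-type models)
# that the mean number of clumps per radial sightline is
#   f_c ~ n_cl * pi * r_cl^2 * R,
# so for a target f_c the required clump number density follows directly.


def clump_density(f_c, r_cl_pc, R_pc):
    """Clump number density [pc^-3] giving covering factor f_c."""
    return f_c / (math.pi * r_cl_pc ** 2 * R_pc)


n_cl = clump_density(5.0, 1.0, 1000.0)  # f_c,crit ~ 5, r_cl = 1 pc, R = 1 kpc
n_clumps = n_cl * (4.0 / 3.0) * math.pi * 1000.0 ** 3  # clumps in the sphere
print(n_cl, n_clumps)
```

With $f_{\rm c}\approx 5$, $r_{\rm cl}=1$ pc and $R=1$ kpc, this implies of order a few million clumps in the system, which illustrates why only $f_{\rm c}$ (and not $r_{\rm cl}$ or $R$ separately) controls the radiative transfer.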
Both models also predict a degree of spatially averaged polarization that is consistent with zero over most frequencies. This is a direct consequence of our symmetric scattering geometry---even if there exist patches that are tangentially polarized with respect to the center, the overall geometry cancels the global signal out, as illustrated in Figure~\ref{fig:polarization_sketch}. The apparent rise in polarization toward the far wings occurs at frequencies with near-zero intensity. In the {\it right panel} of Figure~\ref{fig:clumpy_clouds}, we plot the normalized surface brightness profiles (rendered unitless by dividing by $I_{\rm max}$, the maximum surface brightness of the model with a central source) and the polarization profiles of the sources as a function of impact parameter $\alpha$ in kpc, in agreement with previous studies \citep[see~][]{Dijkstra2006}. We also plot the intrinsic, unscattered surface brightness profiles with dotted/dashed lines. In addition, the degree of polarization differs between the two setups. For the central source, $P$ rises to $\sim 10\%$ at $\sim 0.1$ kpc, before it eventually rises to $\sim 15\%$. For extended sources of Ly$\alpha$ emission, the polarization is consistent with $P<5\%$ out to the most distant impact parameters. The degree of polarization is lower than that obtained for scattering off clumps at low $f_c < 1$ and greater $N_{\mathrm{HI, cl}}$ in \cite{Dijkstra2012a}, where photons that scattered only once in a clumpy outflow would produce a spatial polarization signal up to $P \approx 60$\% at large impact radii. As we have a higher number of clumps along the line of sight (and thereby a higher total optical depth) and a lower clump optical depth, the photons scatter several times per clump, which reduces their polarization. This explains the lower $P$ we obtain in the case of a central source as compared to \citet{Dijkstra2012a}.
However, we still obtain a similar increase in $P$ with impact radius as in other models with a central source, since the radiation field is anisotropic, being stronger in the direction of the source, and the photons that escape at large radii must do so by scattering increasingly close to $90^\circ$. This means that, even though the exact degree of polarization depends on other parameters such as $N_{\mathrm{HI, cl}}$ and the clump placement, the central source shows a rising $P(r)$ signal -- while the ``fluorescent'' clumps, i.e., the extended source, do not. This is a clear observational signature for distinguishing different \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi powering mechanisms, and we will explore it further in future work. \section{Discussion} \label{sec:discussion} In this section, we discuss the origins of polarization, from quantum mechanical (\S~\ref{ssec:disc_QM}) to astrophysical scales (\S~\ref{ssec:disc_intuitive}). We discuss how \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi polarization can break degeneracies between models for spectra and/or surface brightness profiles, when used \textit{in concert} with these other observables (\S~\ref{ssec:disc_degeneracies}). \subsection{Polarization: the Quantum Mechanical Origins} \label{ssec:disc_QM} \begin{deluxetable}{l c c | c c } \tablecaption{Polarization through single scatterings.
\label{tab:QM_polarization} } \tablecolumns{5} \tablehead{ \colhead{} \vspace{-0.4cm} & \multicolumn{2}{c}{\textbf{Core}} & \multicolumn{2}{c}{\textbf{Wing}} \\ \colhead{\bfseries Init.~polarization} \vspace{-0.4cm} & & & & \\ \colhead{} & $90^\circ$ & $0$ / $180^\circ$ & $90^\circ$ & $0$ / $180^\circ$ \\ \vspace{-0.4cm} } \startdata \sidehead{} \vspace{-1.3cm} \\ \textit{Unpolarized \tablenotemark{a}} \vspace{-0.9cm} & 43\% & 0\% & 100\% & \\ \\ \vspace{-0.25cm} &&&& \textit{unchanged} \\ \textit{Polarized \tablenotemark{b}} & 43\% & 60\% & 100\% & \\[-0.75cm] \sidehead{} \enddata \tablecomments{\textit{Core} scattering in this regard includes only the anisotropic K transition, and not the depolarizing H transition.} \tablenotetext{a}{An \textit{initially} unpolarized photon has $P=0\%$.} \tablenotetext{b}{An initially polarized photon conversely has $P=100\%$.} \end{deluxetable} For classical electron scattering---which applies to Ly$\alpha$ wing scattering---unpolarized radiation that scatters at right angles becomes maximally polarized. In addition, the polarization properties of a single photon impose restrictions on the scattering angles: the photon cannot scatter in the direction in which it is fully polarized. For core scattering, however, the `shape' of the quantum-mechanical wavefunctions plays a role. Unpolarized radiation still obtains the highest degree of polarization when scattering at right angles, but only up to $P=43\%$ via K scatterings (i.e. through the 2P$_{3/2}$ state). For H scatterings (i.e. through the 2P$_{1/2}$ state), polarization is destroyed, as the wavefunction of the 2P$_{1/2}$ state is spherically symmetric. For fully polarized radiation, i.e. with $P=100\%$, a photon that is K scattered will only retain $P=43\%$ when scattering at right angles, whereas for forward or backward scatterings the retained degree of polarization can be as high as $P=60\%$.
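For initially unpolarized radiation, the numbers quoted above can be reproduced with the standard decomposition of the scattering phase matrix into a pure-Rayleigh part with weight $W_2$ plus an isotropic part with weight $1-W_2$, where $W_2=1$ for wing scattering, $W_2=1/2$ for the K transition and $W_2=0$ for the H transition. A short sketch of this (our illustration, using that decomposition rather than the full density-matrix calculation):

```python
import math

# Sketch reproducing the single-scattering polarization of initially
# unpolarized radiation, assuming the standard decomposition of the phase
# matrix: weight W2 of pure Rayleigh scattering (W2 = 1 for wing scattering,
# W2 = 1/2 for the K transition, W2 = 0 for the depolarizing H transition)
# plus weight 1 - W2 of isotropic scattering.


def P_single(theta, W2):
    """Degree of polarization after one scattering of unpolarized light."""
    s2, c2 = math.sin(theta) ** 2, math.cos(theta) ** 2
    return 3.0 * W2 * s2 / (3.0 * W2 * (1.0 + c2) + 4.0 * (1.0 - W2))


print(P_single(math.pi / 2, 1.0))  # wing, 90 deg: 1.0 (100%)
print(P_single(math.pi / 2, 0.5))  # K core, 90 deg: 3/7, i.e. ~43%
print(P_single(math.pi / 2, 0.0))  # H core: 0.0 (depolarized)
```

The maximum, at $\theta=90^\circ$, is $P_{\rm max}=3W_2/(4-W_2)$, giving $100\%$ for wing scattering and $3/7\approx 43\%$ for the K transition, in agreement with the table entries.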
This effect is known as \textit{depolarization}: a photon can only obtain $P=100\%$ through wing scatterings, and if it is scattered through the core, it will retain, at best, $P=60\%$. Depolarization is not possible through wing scatterings: a partially polarized photon that is forward or backward scattered will retain its polarization, or have it boosted when scattering at inclined angles. We obtain these polarization magnitudes by evaluating the density matrices of the various transitions at the relevant scattering angles. These values agree perfectly with the results for Rayleigh and core scatterings as described, e.g., in \cite{Chandrasekhar1960} or \cite{Dijkstra2008}. We summarize this discussion in Table~\ref{tab:QM_polarization}, which provides an overview of the polarization obtained through single scattering. \subsection{Polarization: the Astrophysical Origins} \label{ssec:disc_intuitive} There is a difference between the probability of measuring the polarization state of an individual photon---which, as we have just shown, can come to favor highly polarized photons after multiple scatterings---and observationally detecting polarization. The observable Stokes parameters describe the polarization properties of an \textit{ensemble} of photons. While individual photons may obtain high levels of linear polarization through scattering, the ensemble-averaged polarization can still be zero -- if their polarization angles are not well aligned. There are two main mechanisms behind the alignment of the polarization vectors, and thus behind the generation of observable polarization: \textit{natural asymmetries in the scattering geometry}, or \textit{introduced asymmetries} from e.g.~finite slit widths in spectropolarimeters, foregrounds or instrumental artefacts. We discuss these next.\\ \textbf{I.
Natural asymmetries}: On scales where the properties of the scattering medium appear constant (local scales), any process that induces some preference in scattering direction also introduces a preferential polarization direction. The polarization cannot be oriented along the direction the photon had before scattering, and it must also be perpendicular to the post-scattering propagation direction. Such a process can be an alignment of the atoms in the medium either from an external magnetic field or pumped by scatterings \citep[see e.g.~][]{Zhang2017}, or, as in our case, an anisotropic radiation field, as also realized by \cite{Dijkstra2008}. A small, local patch of the scattering medium is in general unevenly illuminated, with the strongest illumination in the direction of the source. This is the origin of the tangential polarization patterns one would obtain from scatterings in the expanding IGM \citep{Rybicki1999}, in spherical shells \citep{Dijkstra2008}, or in any of the symmetric regions of our scattering models, including the spherical ellipsoid ($\varepsilon=1/1$) in \S~\ref{ssec:ellipsoid}, the centrally illuminated clumpy medium in \S~\ref{ssec:clouds} with radial polarization profile given in Figure~\ref{fig:bipolar_spectrum}, or the IGM (radial profile in Figure~\ref{fig:polardegree_tests}). Observationally, such a polarization pattern of concentric circles was observed in LAB1 by \cite{Hayes2011}, illustrated in \cite{Bower2011}. At larger impact radii, these models possess a larger degree of polarization. This is due to the larger fraction of photons escaping at right angles with increasing impact parameter. As was shown in the previous section (also see Table~\ref{tab:QM_polarization}), this is accompanied by higher degrees of polarization. However, when the systems are symmetrical, their net polarization will cancel out, as illustrated in the lower row of panel a) in Figure~\ref{fig:polarization_sketch}.
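The cancellation of such a symmetric, tangential pattern can be made concrete with a toy Stokes-parameter sum (a schematic sketch, not the Monte Carlo machinery used in this paper; the ring geometry and photon numbers are illustrative assumptions):

```python
import math

def net_polarization(phis, p_local=1.0):
    """Aperture-summed polarization of photons escaping at azimuthal
    positions `phis` on a ring around a central source, each polarized
    tangentially (polarization angle chi = phi + 90 deg) with local
    degree of polarization p_local."""
    # Linear-polarization Stokes parameters of one photon packet:
    # Q = P cos(2*chi), U = P sin(2*chi); the ensemble sums these.
    q = sum(p_local * math.cos(2 * (phi + math.pi / 2)) for phi in phis)
    u = sum(p_local * math.sin(2 * (phi + math.pi / 2)) for phi in phis)
    return math.hypot(q, u) / len(phis)

n = 1000
full_ring = [2 * math.pi * k / n for k in range(n)]
slit_only = [phi for phi in full_ring if phi < math.pi / 8]  # a narrow slit

print(net_polarization(full_ring))  # ~0: the symmetric pattern cancels
print(net_polarization(slit_only))  # ~1: masking restores a net signal
```

Each photon in this toy ensemble is fully polarized, yet the full-ring Stokes sum vanishes; keeping only a sector (as a narrow slit would) removes the cancelling contributions and yields a strong net polarization, anticipating the introduced asymmetries discussed under point II.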
This brings us to our next important realization: on a macroscopic, global level, scattering through a geometrically \textit{asymmetric} system can result in polarization, as also found by \cite{Angel1969} and \cite{Lee1998}. This polarization is detectable even \textit{without} spatially resolving the system. We have shown this by introducing ellipticity, biconical outflows and central disk-like obscuration. In panels b), c) and d) of Figure~\ref{fig:polarization_sketch}, we display examples of such asymmetric systems and their overall polarization signatures. A spatially-averaged polarization signal requires, in the first place, that the local radiation field is polarized. The global polarization orientation is then tangential to the source and reflects the location of the asymmetric regions that provide the surplus of polarized photons. For example, in the bipolar outflow, they emerge from the cones, and the polarization direction is consequently perpendicular to the outflow direction (see \S~\ref{ssec:outflows}). This is similar to the observations of the Egg nebula by \cite{You2017}. When the local radiation fields were isotropized by a high number of core scatterings, we found, as did \cite{Lee1994a} and \cite{Dijkstra2008}, that this reduces the emergent polarization. An example of this is an outflowing, oblate system, in which an increased column density could reduce the polarization as well as flip the polarization vector (see~Figure~\ref{fig:matrix_polarization_expanding_ellipsoids}). The same effect occurs in the multiphase systems studied (\S~\ref{ssec:clouds}), i.e., several core scatterings lead to a decrease in polarization -- as also identified by \cite{Dijkstra2008} in the context of intergalactic propagation. The above asymmetries are purely \textit{geometrical}.
It is also possible to introduce asymmetries in \textit{velocity space} since a velocity field can lower the (frequency dependent) optical depth of a system -- with similar effects as described above. Examples are the ellipsoids presented in \S~\ref{ssec:ellipsoid} that became more strongly polarized in the presence of global outflows, and the biconical structures in \S~\ref{ssec:outflows}, whose outflows allowed for scattering within the cones.\\ \textbf{II. Introduced asymmetries}: These occur when observing a patch of a larger geometry (intentionally or unintentionally), that is, by effectively masking out regions that would otherwise alter the observable. This is illustrated by the sketch in Figure~\ref{fig:polarization_sketch}a. With a slit or aperture covering the entire system, one would detect no polarization as the symmetric polarization vectors cancel out (illustrated in {\it the lower panel}). However, if we observed part of the system through a narrow slit, then polarization contributions outside the slit are removed, which breaks the symmetry and yields a surplus of polarization perpendicular to the slit alignment direction. This would result in a global (but possibly misleading) polarization signal. Of course, these issues are less important in imaging polarimetry when one can obtain Stokes parameters on a per-pixel basis (as in \citealt{Hayes2011}, \citealt{Prescott2011} and \citealt{You2017}). \subsection{Polarization: Breaking Degeneracies} \label{ssec:disc_degeneracies} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{polarization_observability_models_7.pdf} \caption{Detectability of models studied in this work. We show the degree of polarization at those frequencies/radii where $P$ differs among them (with $f_{\rm rel}$ giving the fraction of the total flux for these, see \S~\ref{ssec:disc_degeneracies}). The solid and dashed lines show the $1\sigma$ detection limit for a $z \sim 3$ source observed for 1 hour with a VLT-like telescope.
We show the impact of a 1\% (10\%) systematic error, and a different luminosity, both shown as labels on the right-hand side of the plot.} \label{fig:observability} \end{figure} We have shown in the results that polarization signals themselves can be degenerate, i.e., several setups can produce similar polarization signatures. A prime example of this is the global polarization signals obtained from non-static asymmetric scattering geometries (our ellipsoids, see Figure~\ref{fig:matrix_polarization_expanding_ellipsoids}). Here, the polarization angle \textit{flips} from being perpendicular to the major axes of the system, to being parallel to it, when the column density is increased. Also, when further thinning the medium by introducing outflows, the degree of polarization \textit{decreases} at higher column densities, compared to an \textit{increase} with lower column densities. The origin of this flip is the change of apparent geometry at different column densities (the transition from column \textit{b} to \textit{c} in Figure~\ref{fig:polarization_sketch}): by lowering the optical depth, one also transitions from escape and scattering mainly along the minor axis to scattering and escape throughout the full system. In the latter case, a surplus of photons escape from the extended lobes, being polarized tangentially to the source, and perpendicular to the major axes. Without knowledge of the apparent geometry of a system (which is generally the case), we cannot use the degree and direction of polarization alone to constrain the major axis of the system (compare the case $N_{\text{H\MakeUppercase{\romannumeral 1}}}\xspace^{(c)} = 10^{19}$ cm$^{-2}$ and $v_{\rm exp} = 0$ km s$^{-1}$ to that of a rotated system with $N_{\text{H\MakeUppercase{\romannumeral 1}}}\xspace^{(c)} = 10^{21}$ cm$^{-2}$ and $v_{\rm exp} = 200$ km s$^{-1}$ in Figure~\ref{fig:matrix_polarization_expanding_ellipsoids}).
Similarly, we cannot differentiate between a strong bipolar flow-like geometry in which the polarization arises due to scattering in lobes, and a more compact, slightly asymmetric system with obscuration of the core (our \S~\ref{ssec:ellipsoid} compared to \S~\ref{ssec:outflows}): In both cases, we would have a polarization signal aligned with the major axis of the system. However, similar degeneracies also exist when using \textit{other observables}: The spectrum is most sensitive to the properties of the scattering medium along the path of least resistance. The ellipsoids explored in \S~\ref{ssec:ellipsoid} and \S~\ref{ssec:exp_ellipsoid} show examples of this: the spectra do not change when the system changes from being viewed face-on to edge-on. In addition, Ly$\alpha$ spectra do not necessarily reveal intrinsic dynamics of the scattering gas. Scattering through outflows gives rise to asymmetric spectra, often with a negligible blue peak as in Figure~\ref{fig:spectra_expanding} and \ref{fig:bipolar_spectrum}. In the same figures, and in Figure~\ref{fig:spectra_ellipsoids}, we have plotted vertical dashed gray lines, which mark the range of frequencies that could be suppressed by scattering in the intergalactic medium \citep{Dijkstra2007,Laursen2011}, leaving a spectral signature virtually identical to that generated by scattering through an optically thick, outflowing medium. This illustrates that degeneracies can exist when using spectra alone. A joint analysis of \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi observables can break the mentioned degeneracies: the spectrum can constrain the column density of the optically thinner minor axis. We can then use polarization to constrain the orientation of the system. This allows us, for instance, to differentiate between the polarization signals of static, asymmetric systems and those of dynamic, possibly geometrically symmetrical, systems.
In static gas geometries, scattering gives rise to symmetric spectra, whereas scattering through dynamic gas geometries generally gives rise to asymmetries in the spectra. Another example relates to which process removes flux blueward of a galaxy's systemic velocity in \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi spectra: the IGM or outflows. The IGM can transform an intrinsically double peaked profile emerging from a static medium into a spectrum with a dominant red peak, and can therefore mimic the effect of scattering through a galactic outflow. However, with only outflows, the polarization increases with offset from the line center (as seen in Figures~\ref{fig:spectra_expanding} and \ref{fig:bipolar_spectrum}) while it does not in static systems (see Figure~\ref{fig:spectra_ellipsoids}). A static system with the IGM processing away the blue peak would hence give rise to a different polarization signature. In the case of \textit{both} IGM processing and outflows, one can attempt to reconcile the \textit{degree} of polarization with the observed spectral shape. Finally, in Figure~\ref{fig:observability} we quantify the ability of present-day telescopes to differentiate between polarization signatures of all our explored models. In order to do this, we define the fraction of the total flux $f_{\rm rel}$ where the polarization signal differs. This fraction can be defined spatially (e.g., only photons arriving in the outer regions for the multiphase media in \S~\ref{ssec:clouds}) or in frequency space. This corresponds to an optimally designed experiment where, for instance, the slit position has been chosen so that only photons with a positive net polarization are recorded. Specifically, we then show the degree of polarization for this fraction of photons $P(f_{\rm rel})$ versus $f_{\rm rel}$ in Figure~\ref{fig:observability}.
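The detection limits in Figure~\ref{fig:observability} follow a shot-noise plus systematic-error approximation, $\sigma_P = \sqrt{1/\left((N_{\rm HWP}/2)\,{\rm SNR}\right)^2 + \sigma_{\rm syst}^2}$ with $N_{\rm HWP}=4$ half-wave plate rotations and ${\rm SNR}=\sqrt{N_{\rm phot}}$. As a quick sketch of the orders of magnitude involved (the photon counts are illustrative assumptions, not taken from our models):

```python
import math

def polarization_sensitivity(n_phot, sigma_syst, n_hwp=4):
    """1-sigma polarization uncertainty in the shot-noise plus
    systematic-error approximation:
    sigma_P = sqrt(1/((N_HWP/2) * SNR)^2 + sigma_syst^2),
    with SNR = sqrt(N_phot) for pure shot noise."""
    snr = math.sqrt(n_phot)
    return math.sqrt((1.0 / ((n_hwp / 2) * snr)) ** 2 + sigma_syst ** 2)

# A polarization signal P is detectable at 1 sigma once sigma_P < P.
# With 10^4 detected photons the budget is systematics-dominated:
print(polarization_sensitivity(1e4, sigma_syst=0.01))  # ~1%
print(polarization_sensitivity(1e4, sigma_syst=0.10))  # ~10%
```

With a 10\% systematic floor, only signals with $P\gtrsim 10\%$ remain detectable regardless of the photon count, which is why the high-systematics case in Figure~\ref{fig:observability} separates only the most strongly polarized models.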
For the outflowing ellipsoids and bipolar outflows (unobscured and obscured), $f_{\rm rel}$ is obtained for $v > 160$ km s$^{-1}$ (also indicated by vertical dashed gray lines in the spectra in Figures~\ref{fig:spectra_expanding} and \ref{fig:bipolar_spectrum}), where the polarization differs the most. For the static ellipsoids, the polarization differs similarly across the spectrum, and all frequencies are included, hence $f_{\rm rel} = 1$ for those. We used photons arriving from $r>0.1 R_{\rm max}$ for the multiphase media with either a central source of \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi or sources extending throughout the medium, as the frequency-dependent polarization for both those models was near zero. With the solid lines, we show the sensitivity\footnote{We restrict ourselves to a shot-noise/systematics-limited approximation $\sigma_P = \sqrt{1/\left((N_{\rm HWP}/2) {\rm SNR} \right)^2 + \sigma_{\rm syst}^2}$ where $N_{\rm HWP}=4$ is the number of half-wave plate rotations, ${\rm SNR} = \sqrt{N_{\rm phot}}$ is the signal-to-noise ratio in the case of shot noise only, given the number of photons $N_{\rm phot}$ arriving at the sensor, and $\sigma_{\rm syst}$ is a systematic error. Based on \cite{Patat2006}.} of FORS2 at VLT. We see that it would be able to differentiate between most models if these were $L=10^{43}$ erg s$^{-1}$, or even $L=10^{41}$ erg s$^{-1}$, \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi emitters at $z=3$ that were observed for one hour. Separating models becomes harder if one assumes that the systematic error of the instrument is as high as 10\%, except for those models where $P(f_{\rm rel})>20\%$. Detecting and differentiating $P$, and thereby breaking the degeneracies between the models we have explored, is thus viable already today. \begin{deluxetable*}{l c c | c c | c c | c c } \tablecaption{Intensity and polarization properties of \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi systems explored in this paper.
\label{tab:summary} } \tablecolumns{9} \tabletypesize{\footnotesize} \tablehead{ \colhead{} & \multicolumn{2}{c}{\S~\ref{ssec:ellipsoid} \textbf{Ellipsoid}\tablenotemark{a}} & \multicolumn{2}{c}{\S~\ref{ssec:exp_ellipsoid} \textbf{Ellipsoidal outflow}\tablenotemark{a}} & \multicolumn{2}{c}{\S~\ref{ssec:outflows} \textbf{Bipolar outflow}\tablenotemark{a}} & \multicolumn{2}{c}{\S~\ref{ssec:clouds} \textbf{Multiphase medium}\tablenotemark{b}} \\ \colhead{} & \colhead{Spherical} & \colhead{Ellipsoidal} & \colhead{Low $v_{\rm flow}$} & \colhead{High $v_{\rm flow}$} & \colhead{Small $\theta_{\rm flow}$} & \colhead{Large $\theta_{\rm flow}$} & \colhead{Central source} & \colhead{Extended source} \\ \vspace{-0.4cm} } \startdata \sidehead{} \vspace{-1.1cm} \\ \textbf{$I(v)$} & \multicolumn{2}{c|}{Symmetric double-peaked} & \multicolumn{2}{c|}{Redshifted single-peaked} & \multicolumn{2}{c|}{Redshifted double-peaked} & \multicolumn{2}{c}{Broad, single peak} \\ \textbf{$\langle P \rangle$}\tablenotemark{1} & Zero\tablenotemark{2} & $\sim 5\%$ & \multicolumn{2}{c|}{Depends on $N_{\rm HI}$} & 1\% & 10\% & Zero\tablenotemark{2} & $<5\%$ also locally \\ \textbf{$P(v)$} & Flat, nil & Flat, non-zero & \multicolumn{2}{c|}{Rises\tablenotemark{3} to $\sim 30\%$} & Flat, low & Rises up to $60\%$ & \multicolumn{2}{c}{Flat, zero} \\ \textbf{$P(r)$} & \multicolumn{2}{c|}{Rising} & \multicolumn{2}{c|}{Rising} & \multicolumn{2}{c|}{Rising} & Rising & Flat, zero \\ Figures & \multicolumn{2}{c|}{\ref{fig:spectra_ellipsoids}, \ref{fig:matrix_polarization_ellipsoids}} & \multicolumn{2}{c|}{\ref{fig:spectra_expanding}, \ref{fig:matrix_polarization_expanding_ellipsoids}} & \multicolumn{2}{c|}{\ref{fig:bipolar_polarimage}, \ref{fig:bipolar_matrix}, \ref{fig:bipolar_spectrum}} & \multicolumn{2}{c}{\ref{fig:clumpy_clouds}} \\[-0.65cm] \sidehead{} \enddata \tablecomments{The exact numerical values are model dependent.
} \tablenotetext{a}{Viewed edge-on.} \tablenotetext{b}{Our clumpy clouds have a covering fraction close to the critical value, $f_{\rm c} \sim f_{\rm c, crit}$. Other $f_{\rm c}$ produce different spectra.} \tablenotetext{1}{The global, frequency- and spatially-integrated polarization values are given as the extremes obtained for the models.} \tablenotetext{2}{The polarization is \textit{locally} non-zero, however. Global symmetries cancel it out as illustrated in panel a) of Figure~\ref{fig:polarization_sketch}.} \tablenotetext{3}{These increases in $P$ with $v$ depend on the column density of the system and are here given for $N_{\rm HI} = 10^{19}$ cm$^{-2}$ along the minor axes.} \end{deluxetable*} \section{Summary and Conclusions} \label{sec:conclusion} A major challenge in extragalactic astrophysics is to decode and reveal the properties of systems with only a limited set of observables. Interpreting observations of \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi requires us to understand the transport of this radiation. Spectra and/or surface brightness profiles provide constraints on this scattering process, though sometimes not uniquely so. The polarization properties of Ly$\alpha$ provide additional constraints on the scattering process, but this has been explored much less in the literature, even though it has been demonstrated that extragalactic Ly$\alpha$ sources can reach significant degrees of polarization (both theoretically and observationally). This motivated us to implement polarization into the \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi radiative transfer code \texttt{tlac} of \cite{Gronke2014}, providing us with the ability to do a joint analysis of emergent observables. To this end, we used the \textit{density matrix formalism} of \cite{Lee1994}.
Through a $2 \times 2$ matrix, it properly describes the probability of measuring a \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi photon in either of its two helicity states, as well as in linear superpositions of these. The elements of this matrix are modified through (\ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi) \textit{core scatterings} near the line center and \textit{wing scatterings} in the damping wing. As photons escape from an arbitrary three-dimensional {\text{H\MakeUppercase{\romannumeral 1}}}\xspace scattering medium that contains a single \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi source or a distribution of sources, we convert the density matrix coefficients of each individual photon to observable Stokes parameters. This approach allowed us to treat polarization both on a quantum mechanical level as well as on a statistical, observable level, setting this work apart from earlier works where only the latter could be achieved (as in~\citealt{Rybicki1999}; \citealt{Dijkstra2008} and \citealt{Trebitsch2016}). We have explored scattering through a suite of simplified geometries with idealized dynamics, such as static and expanding ellipsoids, biconical outflows, and multiphase (clumpy) outflows. We summarize some of their observable properties in Table~\ref{tab:summary}. These idealized models help in understanding the physical origins of the polarization signal, and correspond to simplified setups for which other \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi observables have been studied previously in the literature. We have shown how the global signal from an unresolved source, be it its degree and angle of polarization or its polarization spectrum, depends on its scattering symmetry.
A symmetric system would appear to have zero polarization, just as one that either is sufficiently optically thick to isotropize the emergent radiation, or that emits \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi extensively \textit{throughout} itself (as in cooling systems, or in the case of recombinations/fluorescence) would have. Introducing asymmetries in the scattering geometry, from the smallest to the largest scales, we showed how polarization is generated. The polarization is a measure of any surplus, or lack, of scattered photons at locations for which the scattering geometry is not fully symmetric. We explored ellipsoids and bipolar outflows as examples of this. The polarization signal cannot -- just like other observables -- be used alone to describe the physical state of a source and its environment. We have shown that this is only possible when it is used in conjunction with other observables. As an example, we have shown that we obtain tangential polarization patterns around central sources. In asymmetric geometries, the polarization direction may be used to reveal the alignment of the system. But this only works if the intensity spectrum of the system is known, as the polarization direction is degenerate between several geometries and dynamics. Likewise, other observables should be used with caution. Knowing an intensity spectrum or luminosity of a source, one may misinterpret these as being intrinsic to the source, although in an asymmetric system with anisotropic \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi escape, this is not necessarily so. The emergent intensity only reveals the properties of the medium along the path of least resistance. With polarization arising due to asymmetries, a global polarized \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi signal of an unresolved source would be a strong indicator of possible anisotropies in the \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi escape.
In systems with anisotropic \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi escape, it is easy to misinterpret the observed flux as indicating a low \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi (and in some cases, LyC) escape fraction. Moreover, IGM absorption manifests itself through an attenuation of the blue part of the intensity spectra of sources, but this attenuation could mistakenly be attributed to outflows, which imprint the same spectral signature. We have shown that it is possible to break this degeneracy with polarization measurements. Currently, none of the next generation extremely large telescopes plan to include polarimeters intended for extragalactic use \citep[see discussion in][]{Hayes2011a}. However, new, dedicated observations are being undertaken \citep[see e.g.][]{Beck2016,You2017}, promising a bright future. We have also shown that present-day telescopes would be able to differentiate between the polarization signals of most of the models we have explored. We are currently exploring a realistic, multiphase medium in an upcoming paper, comparing it to recently obtained observations. Continued work is needed, both theoretically \citep[see e.g.~][]{Chang2017}, numerically and observationally, as we have shown in this paper that polarized \ifmmode{\mathrm{Ly}\alpha}\else Ly$\alpha$\xspace\fi can be a powerful, degeneracy-breaking probe into an otherwise secretive Universe. \acknowledgments We all thank B.~Ciardi for helpful comments. We thank the referee for the highly constructive feedback. MBE is grateful to H.-W.~Lee, S.-J.~Chang, C.~Scarlata, C.~You and Ll.-M.~Ribas along with other members of the observational and theoretical Ly$\alpha$ community (you know who you are) for rewarding discussions. MBE thanks the Institute of Theoretical Astrophysics at UiO and the Astronomy and Astrophysics Department at UCSC for their kind hospitality. MD thanks the physics department at UCSB for their kind hospitality.
MH acknowledges the support of the Swedish Research Council (Vetenskapsr{\aa}det) and the Swedish National Space Board (SNSB), and is a Fellow of the Knut and Alice Wallenberg Foundation. \software{\texttt{tlac} \citep{Gronke2014}, \texttt{numpy} \citep{VanderWalt2011}, \texttt{Cython} \citep{Behnel2011}, \texttt{Matplotlib} \citep{Hunter2007}.} \bibliographystyle{yahapj}
\section{Introduction} A function that assigns sets to all vertices of a graph is a \emph{set coloring} if the sets assigned to adjacent vertices are disjoint. For positive integers $a$ and $b\le a$, an {\em $(a:b)$-coloring} of a graph $G$ is a set coloring with range $\binom {\{1,\ldots, a\}}{b}$, i.e., a set coloring that to each vertex assigns a $b$-element subset of $\{1,\ldots, a\}$. The concept of $(a:b)$-coloring is a generalization of conventional vertex coloring. In fact, an $(a:1)$-coloring is exactly an ordinary proper $a$-coloring. The {\em fractional chromatic number} of $G$, denoted by $\chi_f(G)$, is the infimum of the fractions $a/b$ such that $G$ admits an $(a:b)$-coloring. Note that $\chi_f(G)\leq \chi(G)$ for any graph $G$, where $\chi(G)$ is the chromatic number of $G$. Fractional coloring was first introduced in 1973 \cite{planfr5} in seeking a proof of the Four Color Problem. Since then, it has been the focus of many intensive research efforts, see \cite{ScheinermanUllman2011}. In particular, fractional coloring of planar graphs without cycles of certain lengths is widely studied. Pirnazar and Ullman~\cite{PU02} showed that the fractional chromatic number of a planar graph with girth at least $8k-4$ is at most $2+\frac{1}{k}$. Dvo\v{r}\'{a}k {\em et al.}~\cite{DSV08} showed that every planar graph of odd-girth at least 9 is $(5:2)$-colorable. Recently, Dvo\v{r}\'{a}k {\em et al.} \cite{frpltr} showed that every planar triangle-free graph on $n$ vertices is $(9n:3n+1)$-colorable, and thus it has fractional chromatic number at most $3-\frac{3}{3n+1}$. The well-known Steinberg Conjecture asserts that every planar graph without cycles of length 4 or 5 is 3-colorable. Recently, Steinberg's conjecture was disproved \cite{CohenAddad2016}. This conjecture, though disproved, had motivated a lot of research, see \cite{borsurvey}.
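As a concrete, machine-checkable instance of these definitions (our illustration, not part of the proofs below), the five-cycle $C_5$ has $\chi(C_5)=3$ but admits a $(5:2)$-coloring, certifying $\chi_f(C_5)\le 5/2$; a brute-force search:

```python
from itertools import combinations, product

def has_ab_coloring(edges, n_vertices, a, b):
    """Brute-force test whether a graph admits an (a:b)-coloring,
    i.e. an assignment of b-element subsets of {1,...,a} to vertices
    such that adjacent vertices receive disjoint sets."""
    b_subsets = list(combinations(range(1, a + 1), b))
    for assignment in product(b_subsets, repeat=n_vertices):
        if all(set(assignment[u]).isdisjoint(assignment[v])
               for u, v in edges):
            return True
    return False

# The five-cycle C_5: chromatic number 3, fractional chromatic number 5/2.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(has_ab_coloring(c5, 5, a=5, b=2))  # True: C_5 is (5:2)-colorable
print(has_ab_coloring(c5, 5, a=4, b=2))  # False: ratio 4/2 < chi_f(C_5)
```

Indeed $\chi_f(C_5)=5/2$, so the same search confirms that no $(4:2)$-coloring exists, while the witness for $(5:2)$ is the familiar assignment $\{1,2\},\{3,4\},\{5,1\},\{2,3\},\{4,5\}$ around the cycle.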
Since $\chi_f(G)\leq \chi(G)$ for any graph $G$, it is natural to ask whether there exists a constant $c<4$ such that $\chi_f(G)\leq c$ for all planar graphs without cycles of length 4 or 5. In this paper, we confirm this is the case for $c=\frac{11}{3}$. In fact, we prove the following stronger theorem. \begin{theorem} \label{MT} Every planar graph without cycles of length 4 or 5 is $(11:3)$-colorable, and thus its fractional chromatic number is at most $\frac{11}{3}$. \end{theorem} The \emph{independence number} $\alpha(G)$ of a graph $G$ is the size of a largest independent set in $G$. The \emph{independence ratio} of $G$ is the quantity $\frac{\alpha(G)}{|V(G)|}$. The famous Four Color Theorem \cite{AppHak1} implies that every planar graph has independence ratio at least $\frac{1}{4}$. In 1976, Albertson \cite{Albertson76} proved a weaker result that every planar graph has independence ratio at least $\frac{2}{9}$ without using the Four Color Theorem. In 2016, Cranston and Rabern \cite{CR16} improved this constant to $\frac{3}{13}$. If $G$ is a triangle-free planar graph, a classical theorem of Gr\"{o}tzsch \cite{grotzsch1959} says that $G$ is 3-colorable, and thus $G$ has independence ratio at least $\frac{1}{3}$. This bound can be slightly improved---Steinberg and Tovey~\cite{SteinbergTovey1993} proved that the independence ratio is at least $\frac{1}{3}+\frac{1}{3|V(G)|}$, and gave an infinite family of planar triangle-free graphs for which this bound is tight. Steinberg's Conjecture would imply that every planar graph without cycles of length 4 or 5 has independence ratio at least $\frac{1}{3}$, and it is not known whether this weaker statement holds or not. Since $\alpha(G)\geq \frac{|V(G)|}{\chi_f(G)}$ for any graph $G$, we have the following corollary by Theorem~\ref{MT}. \begin{corollary}\label{ratio} Every planar graph without cycles of length 4 or 5 has independence ratio at least $\frac{3}{11}$.
\end{corollary} It is not clear whether the constant $\frac{11}{3}$ from Theorem~\ref{MT} is the best possible, and we suspect this is not the case. Hence, the following question is of interest. \begin{problem} What is the infimum of fractional chromatic numbers of planar graphs without cycles of length 4 or 5? \end{problem} Let us remark that the counterexample to Steinberg's conjecture constructed in \cite{CohenAddad2016} is $(6:2)$-colorable, and thus we cannot even exclude the possibility that the answer is $3$. The proof of Theorem~\ref{MT} naturally proceeds in list coloring setting. A \emph{list assignment} for a graph $G$ is a function $L$ that to each vertex $v$ of $G$ assigns a set $L(v)$ of colors. A set coloring $\varphi$ of $G$ is an \emph{$L$-set coloring} if $\varphi(v)\subseteq L(v)$ for all $v\in V(G)$. For a positive integer $b$, we say that $\varphi$ is an \emph{$(L:b)$-coloring} of $G$ if $\varphi$ is an $L$-set coloring and $|\varphi(v)|=b$ for all $v\in V(G)$. If such an $(L:b)$-coloring exists, we say that $G$ is \emph{$(L:b)$-colorable}. For an integer $a\ge b$, we say that $G$ is \emph{$(a:b)$-choosable} if $G$ is $(L:b)$-colorable from any assignment $L$ of lists of size $a$. We actually prove the following strengthening of Theorem~\ref{MT}. \begin{theorem}\label{MTl} Every planar graph without cycles of length 4 or 5 is $(11:3)$-choosable. \end{theorem} \section{Colorability of small graphs} Let us start with some technical results on list-colorability of small graphs, especially paths and cycles. In the proofs, it is convenient to work with a non-uniform version of set coloring. Let $f:V(G)\to\mathbf{Z}_0^+$ be an arbitrary function. An \emph{$(L:f)$-coloring} of a graph $G$ is an $L$-set coloring $\varphi$ such that $|\varphi(v)|=f(v)$ for all $v\in V(G)$. If such an $(L:f)$-coloring exists, we say that $G$ is \emph{$(L:f)$-colorable}. We repeatedly use the following simple observation. 
\begin{lemma}\label{lemma-redulist} Let $L$ be an assignment of lists to vertices of a graph $G$, let $f$ assign non-negative integers to vertices of $G$, and let $\psi$ be an $L$-set coloring of $G$ such that $|\psi(v)|\le f(v)$ for all $v\in V(G)$. Let $L'$ be the list assignment defined by $$L'(v)=L(v)\setminus \Big(\psi(v)\cup\bigcup_{u\in N_G(v)} \psi(u)\Big)$$ for all $v\in V(G)$, and let $f'(v)=f(v)-|\psi(v)|$ for all $v\in V(G)$. If $G$ is $(L':f')$-colorable, then $G$ has an $(L:f)$-coloring $\varphi$ such that $\psi(v)\subseteq \varphi(v)$ for all $v\in V(G)$. \end{lemma} \begin{proof} If $\varphi'$ is an $(L':f')$-coloring of $G$, it suffices to set $\varphi(v)=\psi(v)\cup\varphi'(v)$ for all $v\in V(G)$. \end{proof} We also use the following observation. \begin{lemma}\label{lemma-greedy} Let $L$ be an assignment of lists to vertices of a graph $G$, let $f$ assign non-negative integers to vertices of $G$, and let $v_1$, \ldots, $v_n$ be an ordering of vertices of $G$. If \begin{equation}\label{eq:assgreedy} |L(v_i)|\ge f(v_i)+\sum_{v_jv_i\in E(G), j<i} f(v_j) \end{equation} holds for $1\le i\le n$, then $G$ has an $(L:f)$-coloring. \end{lemma} \begin{proof} We prove the claim by induction on $n$. The base case $n=0$ is trivial. If $n\ge 1$, then $|L(v_1)|\ge f(v_1)$ by the assumptions, and thus there exists a subset $A$ of $L(v_1)$ of size $f(v_1)$. Let $L'(v_i)=L(v_i)\setminus A$ for all $i$ such that $v_1v_i\in E(G)$, and $L'(v_i)=L(v_i)$ for all $i\ge 2$ such that $v_1v_i\not\in E(G)$. Since $|L'(v_i)|\ge |L(v_i)|-f(v_1)$ in the former case and $|L'(v_i)|=|L(v_i)|$ in the latter case, it is easy to verify that the assumption (\ref{eq:assgreedy}) holds for $G-v_1$ with the vertex ordering $v_2$, \ldots, $v_n$ and the list assignment $L'$. Hence, by the induction hypothesis, $G-v_1$ has an $(L':f)$-coloring. Assigning $A$ to $v_1$ turns this coloring into an $(L:f)$-coloring of $G$.
\end{proof} When Lemma~\ref{lemma-greedy} applies, we say that we \emph{color vertices of $G$ greedily in order $v_1$, \ldots, $v_n$}. Finally, let us make another simple observation, which we will often (implicitly) apply. Let $G$ be a graph, let $G_0$ be a subgraph of $G$, and let $f,g:V(G)\to\mathbf{Z}_0^+$ be functions such that $f(v)\le g(v)$ for all $v\in V(G)$. Let us consider the situation where we need to prove that a graph is $(L:f)$-colorable for every list assignment $L$ such that $|L(v)|\ge g(v)$ for all $v\in V(G)$, under the assumption that $G_0$ is $(L:f)$-colorable. Then it suffices to prove this for all list assignments $L(v)$ such that $|L(v)|=g(v)$ for all $v\in V(G)$: if $|L(v)|>g(v)$, then we can without loss of generality throw away any color in $L(v)$ not used in the $(L:f)$-coloring of $G_0$ when $v\in V(G_0)$. \begin{lemma}\label{3-3-3} Let $L$ be a list assignment for a path $P=v_1v_2v_3$. If $|L(v_1)|=|L(v_3)|=5$ and $|L(v_2)|=8$, then $P$ is $(L:3)$-colorable. Moreover, for any colors $\alpha_1,\alpha_2\in L(v_1)$ and $\beta\in L(v_3)$, there exists an $(L:3)$-coloring $\varphi$ of $P$ such that $\alpha_1,\alpha_2\in \varphi(v_1)$ and $\beta\in \varphi(v_3)$. \end{lemma} \begin{proof} Consider arbitrary colors $\alpha_1,\alpha_2\in L(v_1)$ and $\beta\in L(v_3)$. Let $f'(v_1)=1$, $f'(v_2)=3$, and $f'(v_3)=2$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $P$ has an $(L':f')$-coloring for any list assignment $L'$ such that $|L'(v_1)|=3$, $|L'(v_2)|=5$, and $|L'(v_3)|=4$. Choose colors $\gamma_1\in L'(v_2)\setminus L'(v_3)$ and $\gamma_2\in L'(v_2)\setminus (\{\gamma_1\}\cup L'(v_1))$. Choose $\varphi'(v_2)$ as any $3$-element subset of $L'(v_2)$ containing $\gamma_1$ and $\gamma_2$.
Then $|L'(v_1)\setminus \varphi'(v_2)|\ge 1$ and $|L'(v_3)\setminus \varphi'(v_2)|\ge 2$, and thus we can choose $\varphi'(v_1)$ as a $1$-element subset of $L'(v_1)\setminus \varphi'(v_2)$ and $\varphi'(v_3)$ as a $2$-element subset of $L'(v_3)\setminus \varphi'(v_2)$. Clearly, $\varphi'$ is an $(L':f')$-coloring of $P$. \end{proof} \begin{lemma}\label{3-3-4-3} Let $L$ be a list assignment for a path $P=v_1v_2v_3v_4$ such that $|L(v_1)|=|L(v_3)|=|L(v_4)|=5$, $|L(v_2)|=8$ and the subpath $v_3v_4$ of $P$ has an $(L:3)$-coloring. Then $P$ is $(L:3)$-colorable. Moreover, for any colors $\alpha\in L(v_1)$ and $\beta\in L(v_4)$, the path $P$ has an $(L:3)$-coloring $\varphi$ such that $\alpha\in \varphi(v_1)$ and $\beta\in \varphi(v_4)$. \end{lemma} \begin{proof} Since the path $v_3v_4$ is $(L:3)$-colorable, we have $L(v_3)\neq L(v_4)$. Consider arbitrary colors $\alpha\in L(v_1)$ and $\beta\in L(v_4)$, and choose $\beta'\in L(v_4)\setminus\{\beta\}$ so that at most one of the colors $\beta$ and $\beta'$ belongs to $L(v_3)$; this is possible, since $L(v_3)\neq L(v_4)$. Let $f'(v_1)=2$, $f'(v_2)=f'(v_3)=3$ and $f'(v_4)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $P$ has an $(L':f')$-coloring for any list assignment $L'$ such that $|L'(v_1)|=4$, $|L'(v_2)|=7$, $|L'(v_3)|=4$, and $|L'(v_4)|=3$. Let $\gamma_3$ be any color in $L'(v_3)\setminus L'(v_4)$ and let $\gamma_2$ be any color in $L'(v_2)\setminus (\{\gamma_3\}\cup L'(v_1))$. Let $f''(v_1)=f''(v_2)=f''(v_3)=2$ and $f''(v_4)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $P$ has an $(L'':f'')$-coloring for any list assignment $L''$ such that $|L''(v_1)|=4$, $|L''(v_2)|=5$, $|L''(v_3)|=2$, and $|L''(v_4)|=3$. This is the case by coloring the vertices of $P$ greedily in order $v_3$, $v_4$, $v_2$, and $v_1$. \end{proof} \begin{lemma} \label{3-3-4-4-3} Let $L$ be a list assignment for a path $P=v_1\ldots v_5$ such that $|L(v_1)|=|L(v_3)|=|L(v_4)|=|L(v_5)|=5$, $|L(v_2)|=8$, and the subpath $v_3v_4v_5$ has an $(L:3)$-coloring. Then $P$ is $(L:3)$-colorable.
Moreover, for any colors $\alpha\in L(v_1)$ and $\beta\in L(v_5)$ such that $\{\beta\}\neq L(v_4)\setminus L(v_3)$, the path $P$ has an $(L:3)$-coloring $\varphi$ such that $\alpha\in \varphi(v_1)$ and $\beta\in \varphi(v_5)$. \end{lemma} \begin{proof} Since the path $v_3v_4v_5$ is $(L:3)$-colorable, we have $L(v_3)\neq L(v_4)\neq L(v_5)$. Consider arbitrary colors $\alpha\in L(v_1)$, $\varepsilon\in L(v_3)\setminus L(v_4)$, and $\beta\in L(v_5)$ such that $\{\beta\}\neq L(v_4)\setminus L(v_3)$. There exists a color $\gamma\in L(v_4)\setminus L(v_3)$ such that $\gamma\neq \beta$. If $\beta\not\in L(v_4)$, then choose $\beta'\in L(v_5)\setminus\{\beta,\gamma\}$ arbitrarily; otherwise, choose $\beta'\in L(v_5)\setminus L(v_4)$ arbitrarily. In either case, assigning sets $\{\alpha\}$, $\emptyset$, $\{\varepsilon\}$, $\{\gamma\}$, $\{\beta,\beta'\}$ to vertices of $P$ in order gives an $L$-set coloring. Let $f'(v_1)=f'(v_3)=f'(v_4)=2$, $f'(v_2)=3$, and $f'(v_5)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $P$ has an $(L':f')$-coloring for any list assignment $L'$ such that $|L'(v_1)|=4$, $|L'(v_2)|=6$, $|L'(v_3)|=4$, $|L'(v_4)|=3$, and $|L'(v_5)|=2$. Choose $\kappa_3\in L'(v_3)\setminus L'(v_4)$, $\kappa_4\in L'(v_4)\setminus L'(v_5)$, and $\kappa_2\in L'(v_2)\setminus (\{\kappa_3\}\cup L'(v_1))$. Let $f''(v_1)=f''(v_2)=2$ and $f''(v_3)=f''(v_4)=f''(v_5)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $P$ has an $(L'':f'')$-coloring for any list assignment $L''$ such that $|L''(v_1)|=|L''(v_2)|=4$, $|L''(v_3)|=1$, and $|L''(v_4)|=|L''(v_5)|=2$. This is the case by coloring the vertices of $P$ greedily in order $v_3$, $v_4$, $v_5$, $v_2$, and $v_1$. \end{proof} \begin{lemma} \label{3-3-4-4-4-3} Let $L$ be a list assignment for a path $P=v_1\ldots v_6$ such that $|L(v_1)|=|L(v_3)|=\ldots=|L(v_6)|=5$, $|L(v_2)|=8$, and the subpath $v_3\ldots v_6$ has an $(L:3)$-coloring. Then $P$ is $(L:3)$-colorable. 
Moreover, for any color $\alpha\in L(v_1)$, the path $P$ has an $(L:3)$-coloring $\varphi$ such that $\alpha\in \varphi(v_1)$. \end{lemma} \begin{proof} Since $v_3v_4v_5v_6$ has an $(L:3)$-coloring $\psi$, we have $L(v_3)\neq L(v_4)\neq L(v_5)\neq L(v_6)$. Furthermore, if $|L(v_4)\setminus L(v_3)|=1$, then $\psi(v_4)$ contains the unique color $\gamma\in L(v_4)\setminus L(v_3)$, and thus $L(v_5)\setminus \{\gamma\}\not\subseteq L(v_6)$; in this case, let $\beta$ be an arbitrary color in $L(v_5)\setminus (\{\gamma\}\cup L(v_6))$. Otherwise, let $\beta$ be an arbitrary color in $L(v_5)\setminus L(v_6)$. In either case, we have $\{\beta\}\neq L(v_4)\setminus L(v_3)$; hence, considering any $\alpha\in L(v_1)$, $P-v_6$ has an $(L:3)$-coloring $\varphi$ such that $\alpha\in \varphi(v_1)$ and $\beta\in \varphi(v_5)$ by Lemma~\ref{3-3-4-4-3}. Since $\beta\not\in L(v_6)$, we have $|L(v_6)\setminus\varphi(v_5)|\ge 3$, and thus $\varphi$ can be extended to an $(L:3)$-coloring of $P$ by choosing $\varphi(v_6)$ as an arbitrary $3$-element subset of $L(v_6)\setminus\varphi(v_5)$. \end{proof} \begin{lemma}\label{3-4-3---3} Let $L$ be a list assignment for a path $P=v_1\ldots v_k$ with $5\le k\le 7$ such that $|L(v_1)|=|L(v_2)|=|L(v_4)|=\ldots=|L(v_k)|=5$, $|L(v_3)|=8$, and $P-v_3$ has an $(L:3)$-coloring. Then $P$ is $(L:3)$-colorable. \end{lemma} \begin{proof} Since $v_1v_2$ has an $(L:3)$-coloring, we have $L(v_1)\neq L(v_2)$, and thus there exists a color $\alpha\in L(v_2)\setminus L(v_1)$. By Lemmas~\ref{3-3-4-3}, \ref{3-3-4-4-3}, and \ref{3-3-4-4-4-3}, there exists an $(L:3)$-coloring $\varphi$ of $P-v_1$ such that $\alpha\in \varphi(v_2)$. Since $\alpha\not\in L(v_1)$, we have $|L(v_1)\setminus\varphi(v_2)|\ge 3$, and thus $\varphi$ can be extended to an $(L:3)$-coloring of $P$ by choosing $\varphi(v_1)$ as an arbitrary $3$-element subset of $L(v_1)\setminus\varphi(v_2)$. \end{proof} \begin{lemma}\label{triangle} Let $L$ be a list assignment for a triangle $C=v_1v_2v_3$.
Then $C$ is $(L:3)$-colorable if and only if $|L(v_i)|\ge 3$ for $1\le i\le 3$, $|L(v_i)\cup L(v_j)|\ge 6$ for $1\le i < j\le 3$, and $|L(v_1)\cup L(v_2)\cup L(v_3)|\ge 9$. \end{lemma} \begin{proof} If $\varphi$ is an $(L:3)$-coloring of $C$ and $S$ is a subset of $V(C)$, then $\varphi$ assigns pairwise disjoint sets to vertices of $S$, and thus $\big|\bigcup_{v\in S} L(v)\big|\ge \big|\bigcup_{v\in S} \varphi(v)\big|=3|S|$, proving that the conditions from the statement of the lemma are necessary. Consider an auxiliary bipartite graph $H$ with one part $U$ consisting of $L(v_1)\cup L(v_2)\cup L(v_3)$ and the other part $V$ consisting of vertices $v_{i,k}$ for $1\le i,k\le 3$, with $c\in U$ adjacent to $v_{i,k}$ if and only if $c\in L(v_i)$. Using Hall's theorem, the assumptions of the lemma imply that $H$ has a matching saturating the vertices of $V$. Letting $\varphi(v_i)$ consist of the colors joined to $v_{i,1}$, $v_{i,2}$, and $v_{i,3}$ in this matching for $1\le i\le 3$ gives an $(L:3)$-coloring of $C$. \end{proof} \begin{lemma}\label{lollipop} Let $L$ be a list assignment for the graph $H$ consisting of a path $v_1v_2v_3v_4$ and an edge $v_1v_3$, such that $|L(v_1)|=|L(v_4)|=5$, $|L(v_2)|=|L(v_3)|=8$, and the triangle $v_1v_2v_3$ has an $(L:3)$-coloring. Then $H$ is $(L:3)$-colorable. \end{lemma} \begin{proof} Since $v_1v_2v_3$ is $(L:3)$-colorable, we have $|L(v_1)\cup L(v_2)\cup L(v_3)|\ge 9$ by Lemma~\ref{triangle}, and thus there exists a color $\alpha\in (L(v_1)\cup L(v_3))\setminus L(v_2)$. Let $\beta$ be a color in $L(v_3)\setminus L(v_1)$. Let $\varphi(v_4)$ be any $3$-element subset of $L(v_4)\setminus\{\alpha,\beta\}$. Let $L'(v_3)=L(v_3)\setminus \varphi(v_4)$, $L'(v_1)=L(v_1)$ and $L'(v_2)=L(v_2)$. Note that $\beta\in L'(v_3)\setminus L'(v_1)$, and thus $|L'(v_1)\cup L'(v_3)|\ge 6$. Furthermore, $\alpha\in (L'(v_1)\cup L'(v_3))\setminus L'(v_2)$, and thus $|L'(v_1)\cup L'(v_2)\cup L'(v_3)|\ge |L'(v_2)|+1=9$. 
By Lemma~\ref{triangle}, $v_1v_2v_3$ has an $(L':3)$-coloring, and this coloring extends $\varphi$ to an $(L:3)$-coloring of $H$. \end{proof} \begin{lemma}\label{l6cycle} Let $L$ be a list assignment for a $6$-cycle $C=v_1\ldots v_6$, such that $|L(v_i)|\ge 5$ for $1\le i\le 6$. Suppose that there exists $S\subseteq V(C)$ such that $|S|=2$, $|L(u)|=8$ for all $u\in S$, and $C-S$ is $(L:3)$-colorable. Then $C$ is $(L:3)$-colorable. \end{lemma} \begin{proof} Without loss of generality, we can assume $|L(v)|=5$ for $v\in V(C)\setminus S$ and $S=\{v_1,v_t\}$ for some $t\in\{2,3,4\}$. Let us discuss the possible values of $t$ separately. \begin{itemize} \item Suppose first that $t=2$. Since $C-S$ is $(L:3)$-colorable, we have $L(v_3)\neq L(v_4)\neq L(v_5)\neq L(v_6)$, and furthermore, if $|L(v_4)\setminus L(v_3)|=1$ and $|L(v_5)\setminus L(v_6)|=1$, then $L(v_4)\setminus L(v_3)\neq L(v_5)\setminus L(v_6)$. Select $\beta\in L(v_4)\setminus L(v_3)$ such that $|L(v_5)\setminus (\{\beta\}\cup L(v_6))|\ge 1$, and let $\gamma\in L(v_5)\setminus (\{\beta\}\cup L(v_6))$ be arbitrary. Then, select $\beta'\in L(v_4)\setminus\{\beta,\gamma\}$ so that at most one of $\beta$ and $\beta'$ belongs to $L(v_5)$, and $\gamma'\in L(v_5)\setminus \{\beta,\beta',\gamma\}$ so that at most one of $\gamma$ and $\gamma'$ belongs to $L(v_4)$. Furthermore, arbitrarily select $\alpha\in L(v_3)\setminus L(v_4)$ and $\varepsilon\in L(v_6)\setminus L(v_5)$. Note that assignment of sets $\emptyset$, $\emptyset$, $\{\alpha\}$, $\{\beta,\beta'\}$, $\{\gamma,\gamma'\}$, $\{\varepsilon\}$ to vertices of $C$ in order is a set coloring. Let $f'(v_1)=f'(v_2)=3$, $f'(v_3)=f'(v_6)=2$, and $f'(v_4)=f'(v_5)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $C$ has an $(L':f')$-coloring for any list assignment $L'$ such that $|L'(v_1)|=|L'(v_2)|=7$, $|L'(v_3)|=|L'(v_6)|=3$, and $|L'(v_4)|=|L'(v_5)|=2$. Choose $\alpha'\in L'(v_3)\setminus L'(v_4)$ and $\varepsilon'\in L'(v_6)\setminus L'(v_5)$. 
Let $f''(v_1)=f''(v_2)=3$ and $f''(v_3)=\ldots=f''(v_6)=1$. Applying Lemma~\ref{lemma-redulist} again, it suffices to prove that $C$ has an $(L'':f'')$-coloring for any list assignment $L''$ such that $|L''(v_1)|=|L''(v_2)|=6$ and $|L''(v_3)|=\ldots=|L''(v_6)|=2$. If $L''(v_1)\neq L''(v_2)$, then let $\kappa$ be a color in $L''(v_1)\setminus L''(v_2)$, and let $\varphi$ be an $(L'':f'')$-coloring of the path $v_6v_5v_4v_3$ such that $\varphi(v_6)\neq \{\kappa\}$, obtained greedily. If $L''(v_1)=L''(v_2)$, then let $\varphi$ be an $(L'':f'')$-coloring of the path $v_6v_5v_4v_3$ such that $\varphi(v_3)\neq\varphi(v_6)$, which exists, since a $4$-cycle is $(2:1)$-choosable~\cite{erdosrubintaylor1979}. In either case, the choice of $\varphi$ ensures that if $L''(v_1)\setminus\varphi(v_6)$ and $L''(v_2)\setminus \varphi(v_3)$ both have size $5$, then they are different. Hence, we can choose $\varphi(v_1)$ and $\varphi(v_2)$ as disjoint $3$-element subsets of $L''(v_1)\setminus\varphi(v_6)$ and $L''(v_2)\setminus \varphi(v_3)$, respectively. This gives an $(L'':f'')$-coloring of $C$, as required. \item Next, suppose that $t=3$. If $L(v_2)\not\subseteq L(v_1)$, then choose $\alpha\in L(v_2)\setminus L(v_1)$ and $\beta\in L(v_6)$ arbitrarily so that $L(v_5)\setminus L(v_4)\neq\{\beta\}$. If $L(v_2)\subseteq L(v_1)$, then note that $|L(v_6)\setminus (L(v_1)\setminus L(v_2))|\ge 2$, and thus we can choose $\beta\in L(v_6)\setminus (L(v_1)\setminus L(v_2))$ so that $L(v_5)\setminus L(v_4)\neq\{\beta\}$. In this case, if $\beta\in L(v_2)$ then let $\alpha=\beta$, otherwise choose $\alpha\in L(v_2)$ arbitrarily. By Lemma~\ref{3-3-4-4-3}, the path $v_2v_3\ldots v_6$ has an $(L:3)$-coloring $\varphi$ such that $\alpha\in \varphi(v_2)$ and $\beta\in \varphi(v_6)$. 
By the choice of $\alpha$ and $\beta$ we have $|L(v_1)\setminus (\varphi(v_2)\cup \varphi(v_6))|\ge 3$, and thus we can extend $\varphi$ to an $(L:3)$-coloring of $C$ by choosing $\varphi(v_1)$ as a $3$-element subset of $L(v_1)\setminus (\varphi(v_2)\cup \varphi(v_6))$. \item Finally, suppose that $t=4$. Since $C-S$ is $(L:3)$-colorable, we have $L(v_2)\neq L(v_3)$ and $L(v_5)\neq L(v_6)$. Hence, there exist $\alpha\in L(v_2)\setminus L(v_3)$, $\beta\in L(v_3)\setminus L(v_2)$, $\gamma\in L(v_5)\setminus L(v_6)$, and $\varepsilon\in L(v_6)\setminus L(v_5)$. Let $f'(v_1)=f'(v_4)=3$ and $f'(v_2)=f'(v_3)=f'(v_5)=f'(v_6)=2$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $C$ has an $(L':f')$-coloring for any list assignment $L'$ such that $|L'(v_1)|=|L'(v_4)|=6$ and $|L'(v_2)|=|L'(v_3)|=|L'(v_5)|=|L'(v_6)|=4$. Suppose first that there exists a color $\alpha'\in L'(v_2)\setminus L'(v_1)$. Since $|L'(v_5)|+|L'(v_3)\setminus\{\alpha'\}|>|L'(v_4)|$, there exist colors $\beta'\in L'(v_3)\setminus\{\alpha'\}$ and $\gamma'\in L'(v_5)$ such that either $\beta'=\gamma'$ or at most one of $\beta'$ and $\gamma'$ belongs to $L'(v_4)$. Let $f''(v_1)=f''(v_4)=3$, $f''(v_2)=f''(v_3)=f''(v_5)=1$, and $f''(v_6)=2$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $C$ has an $(L'':f'')$-coloring for any list assignment $L''$ such that $|L''(v_1)|=6$, $|L''(v_4)|=5$, $|L''(v_2)|=|L''(v_3)|=2$, and $|L''(v_5)|=|L''(v_6)|=3$. This is the case by coloring the vertices of $C$ greedily in order $v_2$, $v_3$, $v_5$, $v_6$, $v_1$, and $v_4$. Hence, we can assume that $L'(v_2)\subset L'(v_1)$, and by symmetry also $L'(v_6)\subset L'(v_1)$ and $L'(v_3),L'(v_5)\subset L'(v_4)$. Since $|L'(v_2)|+|L'(v_6)|-|L'(v_1)|=2$, it follows that $|L'(v_2)\cap L'(v_6)|\ge 2$, and symmetrically $|L'(v_3)\cap L'(v_5)|\ge 2$. Hence, there exist $\alpha'\in L'(v_2)\cap L'(v_6)$ and $\beta'\in L'(v_3)\cap L'(v_5)$ such that $\alpha'\neq\beta'$. 
Let $f''(v_1)=f''(v_4)=3$ and $f''(v_2)=f''(v_3)=f''(v_5)=f''(v_6)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $C$ has an $(L'':f'')$-coloring for any list assignment $L''$ such that $|L''(v_1)|=|L''(v_4)|=5$ and $|L''(v_2)|=|L''(v_3)|=|L''(v_5)|=|L''(v_6)|=2$. This is again the case by coloring the vertices of $C$ greedily in order $v_2$, $v_3$, $v_5$, $v_6$, $v_1$, and $v_4$. \end{itemize} \end{proof} \begin{lemma}\label{claw} Let $L$ be a list assignment for the graph $H$ consisting of a vertex $v$ with three neighbors $v_1$, $v_2$, and $v_3$, such that $|L(v_i)|=5$ for $1\le i\le 3$ and $|L(v)|=8$. Then $H$ is $(L:3)$-colorable. \end{lemma} \begin{proof} For $1\le i\le 3$, there exists $\alpha_i\in L(v)\setminus L(v_i)$. Fix $\varphi(v)$ as a $3$-element subset of $L(v)$ containing $\alpha_1$, $\alpha_2$, and $\alpha_3$. Then $|L(v_i)\setminus\varphi(v)|\ge 3$ for $1\le i\le 3$, and thus $\varphi$ extends to an $(L:3)$-coloring of $H$. \end{proof} \begin{lemma}\label{claw5} Let $L$ be a list assignment for the graph $H$ consisting of a vertex $v$ with four neighbors $v_1$, \ldots, $v_4$, and possibly the edge $v_3v_4$, such that $|L(v_1)|=|L(v_2)|=5$, $|L(v)|=8$ and either \begin{itemize} \item $v_3v_4\not\in E(H)$ and $|L(v_3)|=|L(v_4)|=5$, or \item $v_3v_4\in E(H)$, $|L(v_3)|=|L(v_4)|=8$, and the triangle $vv_3v_4$ is $(L:3)$-colorable. \end{itemize} Then $H$ is $(L:3)$-colorable. \end{lemma} \begin{proof} If $v_3v_4\not\in E(H)$, then let $A_i$ be a $3$-element subset of $L(v)\setminus L(v_i)$ for $1\le i\le 4$. Since $\sum_{i=1}^4 |A_i|>|L(v)|$, there exists a color in $L(v)$ belonging to at least two of the sets $A_1$, \ldots, $A_4$. Hence, there exists a $3$-element set $\varphi(v)\subset L(v)$ such that $\varphi(v)\cap A_i\neq \emptyset$ for $1\le i\le 4$. Then $|L(v_i)\setminus \varphi(v)|\ge 3$, and thus $\varphi$ extends to an $(L:3)$-coloring of $H$.
If $v_3v_4\in E(H)$, then since $vv_3v_4$ is $(L:3)$-colorable, there exists a color $\alpha\in L(v)$ such that $|\{\alpha\}\cup L(v_3)\cup L(v_4)|\ge 9$. Since $|L(v_1)\setminus \{\alpha\}|+|L(v_2)\setminus \{\alpha\}|>|L(v)\setminus\{\alpha\}|$, there exist colors $\beta_1\in L(v_1)\setminus\{\alpha\}$ and $\beta_2\in L(v_2)\setminus\{\alpha\}$ such that either $\beta_1=\beta_2$ or at most one of $\beta_1$ and $\beta_2$ belongs to $L(v)$. For $i\in\{1,2\}$, let $\varphi(v_i)$ be any $3$-element subset of $L(v_i)\setminus\{\alpha\}$ containing $\beta_i$. Then $L(v)\setminus (\varphi(v_1)\cup\varphi(v_2))$ has size at least $3$ and contains $\alpha$, and thus $\varphi$ extends to an $(L:3)$-coloring of $H$ by Lemma~\ref{triangle}. \end{proof} \begin{lemma}\label{claw53} Let $L$ be a list assignment for the graph $H$ consisting of a vertex $v$ with three neighbors $v_1$, $v_2$, and $v_3$, a vertex $u_1$ adjacent to $v_1$, and possibly one edge between the vertices $v_1$, $v_2$, and $v_3$, such that $|L(u_1)|=|L(v)|=5$ and $|L(v_i)|=2+3\deg_H(v_i)$ for $1\le i\le 3$. If $H-v_1$ is $(L:3)$-colorable, then $H$ is $(L:3)$-colorable. \end{lemma} \begin{proof} If $v_1v_2\in E(H)$, then let $\varphi(v_2)$ be a $3$-element subset of $L(v_2)\setminus L(v)$. Let $L'(v)=L(v)$, $L'(v_3)=L(v_3)$, $L'(u_1)=L(u_1)$, and $L'(v_1)=L(v_1)\setminus\varphi(v_2)$. Since $H-v_1$ is $(L:3)$-colorable, $H-\{v_1,v_2\}$ is $(L':3)$-colorable, and thus $H-v_2$ is $(L':3)$-colorable by Lemma~\ref{3-3-4-3}. Hence, $\varphi$ extends to an $(L:3)$-coloring of $H$. Therefore, assume that $v_1v_2\not\in E(H)$, and by symmetry, $v_1v_3\not\in E(H)$. If $v_2v_3\in E(H)$, then since $vv_2v_3$ is $(L:3)$-colorable, there exists $\alpha\in L(v)$ such that $|\{\alpha\}\cup L(v_2)\cup L(v_3)|\ge 9$. Choose distinct $\beta,\beta'\in L(v_1)\setminus\{\alpha\}$ such that $\beta\not\in L(u_1)$ and $\beta'\not\in L(v)$.
Let $\varphi(v_1)$ be an arbitrary $3$-element subset of $L(v_1)\setminus\{\alpha\}$ containing $\beta$ and $\beta'$. Then $\varphi$ extends to an $(L:3)$-coloring of $H$ (using Lemma~\ref{triangle}), since $|L(u_1)\setminus\varphi(v_1)|\ge 3$, $|L(v)\setminus\varphi(v_1)|\ge 3$, and $\alpha\in L(v)\setminus\varphi(v_1)$. Finally, suppose that $\{v_1, v_2,v_3\}$ is an independent set. Since the path $v_2vv_3$ is $(L:3)$-colorable, we have $L(v_2)\neq L(v)\neq L(v_3)$. Choose colors $\beta\in L(v)\setminus L(v_2)$ and $\beta'\in L(v)\setminus L(v_3)$ arbitrarily. By Lemma~\ref{3-3-3}, there exists an $(L:3)$-coloring $\varphi$ of $vv_1u_1$ such that $\beta,\beta'\in \varphi(v)$. Then $|L(v_i)\setminus\varphi(v)|\ge 3$ for $i\in \{2,3\}$, and thus $\varphi$ extends to an $(L:3)$-coloring of $H$. \end{proof} \begin{lemma}\label{claw543} Let $L$ be a list assignment for the graph $H$ consisting of a path $u_1v_1vv_2u_2$, a vertex $v_3$ adjacent to $v$, and possibly the edge $v_1v_3$, such that $|L(u_1)|=|L(v)|=|L(u_2)|=5$, $|L(v_2)|=8$, $|L(v_3)|=2+3\deg_H(v_3)$, and $|L(v_1)|=3\deg_H(v_1)-1$. If $H-v_2$ is $(L:3)$-colorable, then $H$ is $(L:3)$-colorable. \end{lemma} \begin{proof} Let us first consider the case that $v_1v_3\in E(H)$. Since the triangle $v_1vv_3$ is $(L:3)$-colorable, there exists $\alpha\in (L(v)\cup L(v_1))\setminus L(v_3)$. Choose $\beta\in L(v_2)\setminus (\{\alpha\}\cup L(u_2))$ and distinct colors $\gamma,\gamma'\in L(v_2)\setminus L(v)$ arbitrarily. Let $\varphi(v_2)$ be a $3$-element subset of $L(v_2)$ containing $\beta$, $\gamma$, and $\gamma'$. Note that $|L(u_2)\setminus\varphi(v_2)|\ge 3$ by the choice of $\beta$, and thus we can choose $\varphi(u_2)$ as a $3$-element subset of $L(u_2)\setminus\varphi(v_2)$. By the choice of $\gamma$ and $\gamma'$, there exists a $4$-element subset $A$ of $L(v)\setminus\varphi(v_2)$, containing $\alpha$ if $\alpha\in L(v)$.
Choose distinct colors $\kappa,\kappa'\in L(v_1)\setminus A$, such that $\kappa=\alpha$ if $\alpha\not\in A$. Let $\varphi(u_1)$ be a $3$-element subset of $L(u_1)\setminus \{\kappa,\kappa'\}$, and let $B=L(v_1)\setminus \varphi(u_1)$. Note that $|B|\ge 5$, $|A\cup B|\ge |A\cup \{\kappa,\kappa'\}|=6$, and $|A\cup B\cup L(v_3)|\ge |L(v_3)\cup \{\alpha\}|=9$, and thus by Lemma~\ref{triangle}, $\varphi$ extends to an $(L:3)$-coloring of $H$. Suppose now that $v_1v_3\not\in E(H)$. Since the path $u_1v_1vv_3$ is $(L:3)$-colorable, we have $L(u_1)\neq L(v_1)\neq L(v)\neq L(v_3)$, and furthermore, if $|L(v_1)\setminus L(u_1)|=1$, then $L(v_1)\setminus L(u_1)\neq L(v)\setminus L(v_3)$. Hence, there exists a color $\alpha\in L(v)\setminus L(v_3)$ such that $|L(v_1)\setminus (\{\alpha\}\cup L(u_1))|\ge 1$. Let $\beta$ be any color in $L(v_1)\setminus (\{\alpha\}\cup L(u_1))$. If $\alpha\in L(v_1)$, then let $\alpha'$ be any color in $L(v)\setminus L(v_1)$, otherwise let $\alpha'$ be any color in $L(v)\setminus \{\alpha,\beta\}$. If $\beta\in L(v)$, then let $\beta'$ be any color in $L(v_1)\setminus L(v)$, otherwise let $\beta'$ be any color in $L(v_1)\setminus\{\alpha,\alpha',\beta\}$. Let $\gamma$ be any color in $L(u_1)\setminus L(v_1)$, let $\varepsilon$ be any color in $L(v_3)\setminus L(v)$, and let $\kappa$ be any color in $L(v_2)\setminus (\{\alpha,\alpha'\}\cup L(u_2))$. Let $f'(u_1)=f'(v_3)=f'(v_2)=2$, $f'(v_1)=f'(v)=1$, and $f'(u_2)=3$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $H$ has an $(L':f')$-coloring for any list assignment $L'$ such that $|L'(u_1)|=|L'(v_3)|=3$, $|L'(v_1)|=2$, $|L'(v)|=1$, and $|L'(v_2)|=|L'(u_2)|=5$. This is the case by coloring the vertices of $H$ greedily in order $v$, $v_3$, $v_1$, $u_1$, $v_2$, and $u_2$. \end{proof} \section{Properties of a minimal counterexample} We are going to prove a mild strengthening of Theorem~\ref{MTl} where a clique (one vertex, two adjacent vertices, or a triangle) is precolored.
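Let us note what precoloring a clique amounts to in this setting: for example, if $Z$ consists of two adjacent vertices $z_1$ and $z_2$, then $L(z_1)$ and $L(z_2)$ are disjoint lists of size $3$, and thus any $(L:3)$-coloring of $G$ must assign $\varphi(z_1)=L(z_1)$ and $\varphi(z_2)=L(z_2)$; that is, the (proper) coloring of the clique induced by $Z$ is completely prescribed by the lists.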
A (hypothetical) \emph{counterexample} (to this strengthening) is a triple $(G,L,Z)$, where $G$ is a plane graph without $4$- or $5$-cycles, $Z$ is the vertex set of a clique of $G$, and $L$ is an assignment of lists of size $11$ to the vertices of $V(G)\setminus Z$ and pairwise disjoint lists of size $3$ to the vertices of $Z$, such that $G$ is not $(L:3)$-colorable. The \emph{order} of the counterexample is the number of vertices of $G$. A counterexample is \emph{minimal} if there exists no counterexample of smaller order. \begin{lemma}\label{conn} If $(G,L,Z)$ is a minimal counterexample, then $G$ is $2$-connected and every triangle in $G$ bounds a face. \end{lemma} \begin{proof} If $G$ is not $2$-connected, then there exist proper induced subgraphs $G_1$ and $G_2$ of $G$ and a vertex $z\in V(G_2)$ such that $G=G_1\cup G_2$, $V(G_1\cap G_2)\subseteq\{z\}$, and $Z\subseteq V(G_1)$. By the minimality of the counterexample, there exists an $(L:3)$-coloring $\varphi_1$ of $G_1$. If $z\in V(G_1)$, then let $L'(z)=\varphi_1(z)$, otherwise let $L'(z)$ be any $3$-element subset of $L(z)$. Let $L'(v)=L(v)$ for all $v\in V(G_2)\setminus \{z\}$. Note that $(G_2,L',\{z\})$ is not a counterexample since it has smaller order than $(G,L,Z)$, and thus there exists an $(L':3)$-coloring $\varphi_2$ of $G_2$. However, then $\varphi_1$ and $\varphi_2$ combine to an $(L:3)$-coloring of $G$, which is a contradiction. Similarly, if $G$ contains a non-facial triangle $T$, then there exist proper induced subgraphs $G_1$ and $G_2$ of $G$ such that $G=G_1\cup G_2$, $T=G_1\cap G_2$, and $Z\subseteq V(G_1)$. By the minimality of the counterexample, there exists an $(L:3)$-coloring $\varphi_1$ of $G_1$. Let $L'(z)=\varphi_1(z)$ for all $z\in V(T)$ and $L'(v)=L(v)$ for all $v\in V(G_2)\setminus V(T)$. Note that $(G_2,L',V(T))$ is not a counterexample since it has smaller order than $(G,L,Z)$, and thus there exists an $(L':3)$-coloring $\varphi_2$ of $G_2$.
However, then $\varphi_1$ and $\varphi_2$ combine to an $(L:3)$-coloring of $G$, which is a contradiction. \end{proof} Let $L$ be a list assignment for a graph $G$, let $H$ be an induced subgraph of $G$, and let $\psi$ be an $(L:3)$-coloring of $G-V(H)$. Let $L_\psi$ denote the list assignment for $H$ defined by $$L_\psi(v)=L(v)\setminus\bigcup_{uv\in E(G), u\not\in V(H)} \psi(u)$$ for all $v\in V(H)$. Note that \begin{equation}\label{eq:size} |L_\psi(v)|\ge |L(v)|-3(\deg_G(v)-\deg_H(v)). \end{equation} Furthermore, any $(L_\psi:3)$-coloring of $H$ combines with $\psi$ to an $(L:3)$-coloring of $G$. Hence, the following claim holds. \begin{proposition}\label{obs:extend} Let $(G,L,Z)$ be a minimal counterexample and let $H$ be an induced subgraph of $G$ disjoint from $Z$. If $\psi$ is an $(L:3)$-coloring of $G-V(H)$, then $H$ is not $(L_\psi:3)$-colorable. \end{proposition} In a counterexample $(G,L,Z)$, a vertex $v\in V(G)$ is \emph{internal} if $v\not\in Z$. \begin{lemma}\label{MD} If $(G,L,Z)$ is a minimal counterexample, then every internal vertex of $G$ has degree at least $3$. \end{lemma} \begin{proof} Suppose for a contradiction that there exists a vertex $v\in V(G)\setminus Z$ of degree at most two. By the minimality of the counterexample, the graph $G-v$ has an $(L:3)$-coloring $\psi$. By Proposition~\ref{obs:extend}, $v$ is not $(L_\psi:3)$-colorable. However, this is a contradiction, since $|L_\psi(v)|\ge 11-2\cdot 3=5$ by (\ref{eq:size}). \end{proof} \begin{lemma} \label{33-path} Let $(G,L,Z)$ be a minimal counterexample. Let $P=v_1\ldots v_k$ be a path in $G$ disjoint from $Z$ such that $3\le k\le 6$, $\deg (v_1) = \deg (v_2) = \deg (v_k) = 3$ and $\deg(v_i)=4$ for $3\le i\le k-1$. Then $k=3$ and $v_1v_3\in E(G)$. \end{lemma} \begin{proof} Suppose for a contradiction that either $k\ge 4$, or $k=3$ and $v_1v_3\not\in E(G)$. Choose such a path $P$ with $k$ minimum. 
Note that $G$ contains at most one of the edges $v_1v_k$ and $v_2v_k$; this follows by the assumptions if $k=3$ and by the fact that $G$ does not contain $4$- or $5$-cycles otherwise. By the minimality of $k$, we conclude that $G$ contains neither of these edges, with the exception of the case $k=3$ and the edge $v_2v_3$ (since otherwise we can consider a path $v_1v_2v_k$ or $v_2v_1v_k$ instead of $P$). Consequently, by the minimality of $k$, it follows that the path $P$ is induced. By the minimality of $G$, the graph $G-\{v_1,v_2\}$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(P)$, and consider the list assignment $L_\psi$ for $P$. By the existence of $\psi_0$, we conclude that $P-\{v_1,v_2\}$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v_2)|\ge 8$ and $|L_\psi(v_i)|\ge 5$ for $i=1,\ldots, k$ by (\ref{eq:size}). By Lemmas~\ref{3-3-3}, \ref{3-3-4-3}, \ref{3-3-4-4-3}, and \ref{3-3-4-4-4-3}, the path $P$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof} \begin{lemma}\label{6-cycle} Let $(G,L,Z)$ be a minimal counterexample. Let $C=v_1\ldots v_6$ be a $6$-cycle in $G$ disjoint from $Z$ such that all vertices of $C$ have degree at most $4$. Then at most one vertex of $C$ has degree three. \end{lemma} \begin{proof} Suppose for a contradiction that $C$ contains at least two vertices of degree three, and let $S$ be the set of two such vertices. Note that since $G$ does not contain $4$- or $5$-cycles, the cycle $C$ is induced. By the minimality of $G$, the graph $G-S$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(C)$, and consider the list assignment $L_\psi$ for $C$. By the existence of $\psi_0$, we conclude that $C-S$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v)|\ge 8$ for all $v\in S$ and $|L_\psi(v)|\ge 5$ for all $v\in V(C)\setminus S$ by (\ref{eq:size}). By Lemma~\ref{l6cycle}, the cycle $C$ is $(L_\psi:3)$-colorable.
However, this contradicts Proposition~\ref{obs:extend}. \end{proof} \begin{lemma}\label{tria3} Let $(G,L,Z)$ be a minimal counterexample. Let $v_1v_2v_3$ be a triangle in $G$ disjoint from $Z$ such that $\deg(v_1),\deg(v_3)\le 4$ and $\deg(v_2)=3$. Then $v_3$ has no neighbor $v_4\not\in \{v_1,v_2\}\cup Z$ of degree $3$. \end{lemma} \begin{proof} Suppose for a contradiction that $v_3$ has such a neighbor $v_4$. Note that $v_1v_4, v_2v_4\not\in E(G)$, since $G$ does not contain $4$-cycles. Let $H$ be the subgraph of $G$ induced by $\{v_1,v_2,v_3,v_4\}$. By the minimality of $G$, the graph $G-v_4$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(H)$, and consider the list assignment $L_\psi$ for $H$. By the existence of $\psi_0$, we conclude that the triangle $v_1v_2v_3$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v_2)|,|L_\psi(v_3)|\ge 8$ and $|L_\psi(v_1)|,|L_\psi(v_4)|\ge 5$ by (\ref{eq:size}). By Lemma~\ref{lollipop}, the graph $H$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof} \begin{lemma} \label{34-path} Let $(G,L,Z)$ be a minimal counterexample. Then $G$ does not contain a path $P=v_1\ldots v_k$ disjoint from $Z$ such that $5\le k\le 7$, $\deg (v_1) = \deg (v_3) = \deg (v_k) = 3$, $\deg(v_2)=4$, and $\deg(v_i)=4$ for $4\le i\le k-1$. \end{lemma} \begin{proof} Suppose for a contradiction that $G$ contains such a path $P$. By Lemma~\ref{33-path}, the vertices $v_1$, $v_3$, and $v_k$ form an independent set. By considering $P$ as short as possible, we can assume that $v_3\ldots v_k$ is an induced path. Note that $v_2v_j\not\in E(G)$ for $5\le j\le k$ and $v_1v_i\not\in E(G)$ for $4\le i\le k-1$ by the absence of $4$- and $5$-cycles and by Lemma~\ref{6-cycle}. Furthermore, $v_2v_4\not\in E(G)$ by Lemma~\ref{tria3}. Hence, $P$ is an induced path. By the minimality of $G$, the graph $G-v_3$ has an $(L:3)$-coloring $\psi_0$. 
Let $\psi$ be the restriction of $\psi_0$ to $G-V(P)$, and consider the list assignment $L_\psi$ for $P$. By the existence of $\psi_0$, we conclude that $P-v_3$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v_3)|\ge 8$ and $|L_\psi(v_i)|\ge 5$ for $1\le i\le k$ by (\ref{eq:size}). By Lemma~\ref{3-4-3---3}, the path $P$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof} We now consider the neighborhoods of vertices of degree 4. \begin{lemma} \label{4-vertex} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be an internal vertex of $G$ of degree four. Then $v$ has at most two internal neighbors of degree three. \end{lemma} \begin{proof} Suppose for a contradiction that $v$ has three such neighbors $v_1, v_2,v_3\in V(G)\setminus Z$. Note that $\{v_1,v_2,v_3\}$ is an independent set by Lemma~\ref{tria3}. By the minimality of $G$, the graph $G-\{v,v_1,v_2,v_3\}$ has an $(L:3)$-coloring $\psi$. Note that $|L_\psi(v)|\ge 8$ and $|L_\psi(v_i)|\ge 5$ for $1\le i\le 3$ by (\ref{eq:size}). By Lemma~\ref{claw}, the subgraph $G[\{v,v_1,v_2,v_3\}]$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof} Next, let us consider the neighborhoods of vertices of degree 5. \begin{lemma} \label{5-vertex} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be an internal vertex of $G$ of degree five. If $v$ has four internal neighbors $v_1$, \ldots, $v_4$ of degree three, then $G[\{v_1,\ldots,v_4\}]$ is a perfect matching. \end{lemma} \begin{proof} Suppose for a contradiction that $G[\{v_1,\ldots,v_4\}]$ is not a perfect matching. Since $G$ does not contain $4$-cycles, it follows that $G[\{v_1,\ldots,v_4\}]$ has at most one edge; we can assume that it contains no edge other than $v_3v_4$. Let $H=G[\{v,v_1,v_2,v_3,v_4\}]$. By the minimality of $G$, the graph $G-\{v_1,v_2\}$ has an $(L:3)$-coloring $\psi_0$. 
Let $\psi$ be the restriction of $\psi_0$ to $G-V(H)$, and consider the list assignment $L_\psi$ for $H$. By the existence of $\psi_0$, we conclude that $H-\{v_1,v_2\}$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v)|\ge 8$, $|L_\psi(v_i)|\ge 5$ for $1\leq i\leq 2$ and $|L_\psi(v_i)|\ge 2+3\deg_H(v_i)$ for $3\leq i\leq 4$ by (\ref{eq:size}). By Lemma~\ref{claw5}, the graph $H$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof} \begin{lemma} \label{5-vertex3} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be an internal vertex of $G$ of degree five. If $v$ has three internal neighbors $v_1$, $v_2$, and $v_3$ of degree three, then $v_1$ has no neighbor of degree three not belonging to $Z$ and not adjacent to $v$. \end{lemma} \begin{proof} Suppose for a contradiction that $v_1$ has a neighbor $u_1\not\in Z\cup N_G(v)$ of degree three. Since $G$ does not contain $4$-cycles, it follows that $u_1v_2,u_1v_3\not\in E(G)$ and that $G$ contains at most one of the edges $v_1v_2$, $v_2v_3$, and $v_1v_3$. Let $H=G[\{v,v_1,v_2,v_3,u_1\}]$. By the minimality of $G$, the graph $G-v_1$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(H)$, and consider the list assignment $L_\psi$ for $H$. By the existence of $\psi_0$, we conclude that $H-v_1$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v)|\ge 5$, $|L_\psi(u_1)|\ge 5$ and $|L_\psi(v_i)|\ge 2+3\deg_H(v_i)$ for $1\leq i\leq 3$ by (\ref{eq:size}). By Lemma~\ref{claw53}, the graph $H$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof} \begin{lemma}\label{5-vertex43} Let $(G,L,Z)$ be a minimal counterexample, and let $P=u_1v_1vv_2u_2$ be a path in $G$ vertex-disjoint from $Z$. If $vu_2\not\in E(G)$, $\deg(v)=5$, $\deg(u_1)=\deg(u_2)=\deg(v_2)=3$, and $\deg(v_1)=4$, then $v$ has no internal neighbors of degree three distinct from $v_2$ and $u_1$.
\end{lemma} \begin{proof} Suppose for a contradiction that $v$ has a neighbor $v_3\not\in \{v_2,u_1\}\cup Z$ of degree three. Note that $G$ does not contain the edge $v_1v_2$ by Lemma~\ref{33-path} and the edge $vu_1$ by Lemma~\ref{5-vertex3}. Since $G$ does not contain $4$- or $5$-cycles, $P$ is an induced path. By Lemma~\ref{33-path} and the absence of $4$- and $5$-cycles, $v_3$ has no neighbors among $u_1$, $v_2$, and $u_2$. Consequently, since $G$ does not contain $4$- or $5$-cycles, $H=G[\{u_1,v_1,v,v_2,u_2,v_3\}]$ consists of the path $P$, the edge $vv_3$, and possibly the edge $v_1v_3$. By the minimality of $G$, the graph $G-v_2$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(H)$, and consider the list assignment $L_\psi$ for $H$. By the existence of $\psi_0$, we conclude that $H-v_2$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v)|\ge 5$, $|L_\psi(u_i)|\ge 5$ for $1\leq i\leq 2$, $|L_\psi(v_1)|\ge 3\deg_H(v_1)-1$, $|L_\psi(v_2)|\ge 8$ and $|L_\psi(v_3)|\ge 2+3\deg_H(v_3)$ by (\ref{eq:size}). By Lemma~\ref{claw543}, the graph $H$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof} \section{Discharging}\label{sec-discharge} \subsection{Notation} Consider a minimal counterexample $(G,L,Z)$. We say that the faces of $G$ of length at least $6$ are \emph{$6^+$-faces}. Since $G$ is $2$-connected by Lemma~\ref{conn}, every face of $G$ is bounded by a cycle, and in particular, every face of $G$ is either a $3$-face or a $6^+$-face. A vertex $v\in V(G)$ is a \emph{$k$-vertex} if $v$ is internal and $\deg(v)=k$. We say that $v$ is a \emph{$k^+$-vertex} if either $v\in Z$ or $\deg(v)\ge k$. Let $v_1vv_2$ be a part of the cycle bounding a $6^+$-face $f$ of $G$, and for $i\in\{1,2\}$, let $f_i\neq f$ be the face incident with the edge $vv_i$. If both $f_1$ and $f_2$ are $3$-faces, we say that $v$ is \emph{type-II incident} with $f$.
If exactly one of $f_1$ and $f_2$ is a $3$-face, we say that $v$ is \emph{type-I incident} with $f$. If neither $f_1$ nor $f_2$ is a $3$-face, we say that $v$ is \emph{type-0 incident} with $f$. See Figure~\ref{fig-incid} for an illustration. \begin{figure}[!htb] \centering {\includegraphics[height=0.27\textwidth]{Face}} \caption{Type-II, type-I, and type-0 incidences.}\label{fig-incid} \end{figure} Suppose that in the situation described in the previous paragraph, $v$ is a 4-vertex type-I incident with $f$, where $f_1$ is a $3$-face, $v_1$ is a $4^+$-vertex and $v_2$ is a $5^+$-vertex. Let $v_2vx$ be the subpath of the cycle bounding $f_2$ centered at $v$. If $x$ is a $3$-vertex, then we say $v$ is {\em type-I-1 incident} with $f$. If $x$ is a $4$-vertex, $f_2$ is bounded by a $6$-cycle $xvv_2w_1w_2w_3$, $w_1$ and $w_3$ are $3$-vertices and $w_2$ is a $4$-vertex type-II incident with $f_2$, then we say $v$ is {\em type-I-2 incident} with $f$. See Figure~\ref{fig-type-Ia} for an illustration. \begin{figure}[!htb] \centering {\includegraphics[height=0.31\textwidth]{F-4}} \caption{Type-I-1 and type-I-2 incidences.}\label{fig-type-Ia} \end{figure} Let $v_0v_1vv_2v_3$ be a subpath of the cycle bounding a $6^+$-face $f$, where $vv_1$ is incident with a $3$-face, $vv_2$ is not incident with a $3$-face, $v$ is a $5$-vertex and $v_1$ is a $3$-vertex. Let $v_1$, $v_2$, $x_1$, $x_2$, $x_3$ be the neighbors of $v$ listed in cyclic order according to their drawing in $G$. If both $v_2$ and $v_3$ are $3$-vertices, then we say $v$ is {\em type-I-3 incident} with $f$. If $v_0$ and $x_1$ are $3$-vertices and $x_1$ is contained in a triangle $x_1yz$ for $4^+$-vertices $y$ and $z$ distinct from $x_2$ and $x_3$, then we say $v$ is {\em type-I-4 incident} with $f$. See Figure~\ref{fig-type-Ib} for an illustration. 
\begin{figure}[!htb] \centering {\includegraphics[height=0.39\textwidth]{F-5-1}} \caption{Type-I-3 and type-I-4 incidences.}\label{fig-type-Ib} \end{figure} Let $v_1vv_2v_3$ be a part of the cycle bounding a $6^+$-face $f$, where $v$ is a $5$-vertex type-0 incident with $f$. Let $v_1$, $v_2$, $x_1$, $x_2$, $x_3$ be the neighbors of $v$ listed in cyclic order according to their drawing in $G$. If $v_1$ and $v_2$ are $3$-vertices, then we say $v$ is {\em type-0-1 incident} with $f$. If $v_2$ and $v_3$ are $3$-vertices, then we say $v$ is {\em type-0-2 incident} with $f$. If both $x_1$ and $x_3$ belong to triangles containing only $3$-vertices distinct from $x_2$, then we say $v$ is {\em type-0-3 incident} with $f$. See Figure~\ref{fig-type-0} for an illustration. \begin{figure}[!htb] \centering {\includegraphics[height=0.35\textwidth]{F-5-2}} \caption{Type-0-1, type-0-2, and type-0-3 incidences.}\label{fig-type-0} \end{figure} \subsection{Initial charge and discharging rules} Now we proceed by the discharging method. Consider a minimal counterexample $(G,L,Z)$. Set the initial charge of every vertex $v$ of $G$ to be $\text{ch}_0(v)=2\deg (v)-6$, and the initial charge of every face $f$ of $G$ to be $\text{ch}_0(f)=|f|-6$. By Euler's formula, \begin{align} \sum_{v\in V(G)}\text{ch}_0(v)+\sum_{f\in F(G)}\text{ch}_0(f)&=\sum_{v\in V(G)}(2\deg (v)-6)+\sum_{f\in F(G)}(|f|-6)\nonumber\\ &=6(|E(G)|-|V(G)|-|F(G)|)=-12.\label{eq:sum} \end{align} We redistribute the charges according to the following rules: \begin{description} \item[Rt] If a $6^+$-face $f$ shares an edge with a $3$-face $f'$, then $f$ sends 1 to $f'$. \item[R4] Suppose $v$ is a 4-vertex and $f$ is a $6^+$-face incident with $v$. \begin{itemize} \item[(II)] If $v$ is type-II incident with $f$, then $v$ sends $1$ to $f$. \item[(I)] Suppose $v$ is type-I incident with $f$. If $v$ is type-I-1 or type-I-2 incident with $f$, then $v$ sends $1/2$ to $f$, otherwise $v$ sends $1$ to $f$. 
\item[(0)] Suppose $v$ is type-0 incident with $f$. If either $v$ is not incident with any $3$-faces or $v$ is type-I-1 or type-I-2 incident with another $6^+$-face, then $v$ sends $1/2$ to $f$. \end{itemize} \item[R5] Suppose $v$ is a 5-vertex and $f$ is a $6^+$-face incident with $v$. \begin{itemize} \item[(II)] Suppose $v$ is type-II incident with $f$. If $v$ is type-I-3 incident with another $6^+$-face, then $v$ sends $1$ to $f$, otherwise $v$ sends $2$ to $f$. \item[(I)] Suppose $v$ is type-I incident with $f$. If $v$ is type-I-3 or type-I-4 incident with $f$, then $v$ sends $3/2$ to $f$, otherwise $v$ sends $1$ to $f$. \item[(0)] Suppose $v$ is type-0 incident with $f$. If $v$ is type-0-1 or type-0-2 incident with $f$, then $v$ sends $1$ to $f$; otherwise, if $v$ is not type-0-3 incident with $f$, then $v$ sends $1/2$ to $f$. \end{itemize} \item[R6] Suppose $v$ is a $6^+$-vertex and $f$ is a $6^+$-face incident with $v$. \begin{itemize} \item[(II)] If $v$ is type-II incident with $f$, then $v$ sends $2$ to $f$. \item[(I)] If $v$ is type-I incident with $f$, then $v$ sends $3/2$ to $f$. \item[(0)] If $v$ is type-0 incident with $f$, then $v$ sends $1$ to $f$. \end{itemize} \end{description} In the situations of rules R4, R5, and R6, we write $\text{ch}(v\to f)$ for the amount of charge sent from $v$ to $f$. \subsection{Final charges of vertices} Let $\text{ch}$ denote the charge assignment after performing the charge redistribution using the rules Rt, R4, R5, and R6. \begin{lemma}\label{charge-4vertex} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be a vertex of $G$. If $v$ is a $4$-vertex, then $\text{ch}(v)\ge 0$. \end{lemma} \begin{proof} Note that $\text{ch}_0(v)=2$. Let $v_1$, \ldots, $v_4$ be the neighbors of $v$ listed in cyclic order according to their drawing in $G$. For $1\le i\le 4$, let $f_i$ be the face whose boundary contains the path $v_ivv_{i+1}$ (where $v_5=v_1$). 
Since $G$ contains no $4$-cycles, we can without loss of generality assume that $f_2$ and $f_4$ are $6^+$-faces. If $f_1$ and $f_3$ are $3$-faces, then $v$ is type-II incident with $f_2$ and $f_4$, and thus $\text{ch}(v\to f_2)=\text{ch}(v\to f_4)=1$ by R4(II), and $\text{ch}(v)=2-2\times 1=0$. Hence, suppose that $f_3$ is a $6^+$-face. Suppose now that $f_1$ is a $3$-face. If $v$ is neither type-I-1 nor type-I-2 incident with $f_2$ or $f_4$, then $\text{ch}(v\to f_3)=0$ by R4(0) and $\text{ch}(v\to f_2)=\text{ch}(v\to f_4)=1$ by R4(I), and $\text{ch}(v)=2-2\times 1=0$. If $v$ is type-I-1 or type-I-2 incident with, say, $f_2$, then $\text{ch}(v\to f_3)=1/2$ by R4(0) and $\text{ch}(v\to f_2)=1/2$ and $\text{ch}(v\to f_4)\le 1$ by R4(I), and $\text{ch}(v)\ge 2-1-2\times 1/2=0$. Finally, if $v$ is not incident with any $3$-faces, then $\text{ch}(v\to f_i)=1/2$ by R4(0) for $1\le i\le 4$ and $\text{ch}(v)=2-4\times 1/2=0$. \end{proof} \begin{lemma}\label{charge-5vertex} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be a vertex of $G$. If $v$ is a $5$-vertex, then $\text{ch}(v)\ge 0$. \end{lemma} \begin{proof} Note that $\text{ch}_0(v)=4$. Let $v_1$, \ldots, $v_5$ be the neighbors of $v$ listed in cyclic order according to their drawing in $G$. For $1\le i\le 5$, let $f_i$ be the face whose boundary contains the path $v_ivv_{i+1}$ (where $v_6=v_1$). Since $G$ contains no $4$-cycles, we can without loss of generality assume that $f_2$, $f_4$, and $f_5$ are $6^+$-faces. Suppose first that $f_1$ and $f_3$ are $3$-faces. Then $v$ is type-II incident with $f_2$ and type-I incident with $f_4$ and $f_5$. Note that $v$ cannot be type-I-4 incident with $f_4$ or $f_5$. If $v$ is type-I-3 incident with neither $f_4$ nor $f_5$, then $\text{ch}(v\to f_2)=2$ by R5(II) and $\text{ch}(v\to f_4)=\text{ch}(v\to f_5)=1$ by R5(I), and thus $\text{ch}(v)=4-2-2\times 1=0$.
If $v$ is type-I-3 incident with $f_4$ or $f_5$, then $\text{ch}(v\to f_2)=1$ by R5(II) and $\text{ch}(v\to f_4),\text{ch}(v\to f_5)\le 3/2$ by R5(I), and thus $\text{ch}(v)\ge 4-1-2\times 3/2=0$. Hence, we can assume that $f_3$ is a $6^+$-face. Suppose now that $f_1$ is a $3$-face. If $v$ is type-I-3 or type-I-4 incident with neither $f_2$ nor $f_5$, then $\text{ch}(v\to f_i)\le 1$ by R5(I) and R5(0) for $2\le i\le 5$, and $\text{ch}(v)\ge 4-4\times 1=0$. If $v$ is type-I-3 incident with $f_2$, then $v_4$, $v_5$, and $v_1$ are $4^+$-vertices by Lemma~\ref{5-vertex3}, and thus $v$ is neither type-I-3 nor type-I-4 incident with $f_5$ and $v$ is neither type-0-1 nor type-0-2 incident with $f_4$. If $v$ is type-I-4 incident with $f_2$, then $v_3$, $v_5$, and $v_1$ are $4^+$-vertices by Lemma~\ref{5-vertex3}, and thus $v$ is neither type-I-3 nor type-I-4 incident with $f_5$ and $v$ is neither type-0-1 nor type-0-2 incident with $f_3$ and $f_4$. In either case, $\text{ch}(v\to f_2)=3/2$ and $\text{ch}(v\to f_5)=1$ by R5(I) and $\text{ch}(v\to f_4)\le 1/2$ and $\text{ch}(v\to f_3)\le 1$ by R5(0), and $\text{ch}(v)\ge 4-3/2-2\times 1-1/2=0$. Finally, let us consider the case that $v$ is incident with no $3$-faces. If $v$ is type-0-1 or type-0-2 incident with at most three faces, then $\text{ch}(v)\ge 4-3\times 1-2\times 1/2=0$ by R5(0). Hence, suppose that $v$ is type-0-1 or type-0-2 incident with at least 4 faces. By Lemma~\ref{5-vertex}, $v$ is adjacent to at most three $3$-vertices. If $v$ is adjacent to three $3$-vertices, then by Lemma~\ref{5-vertex3} $v$ is not type-0-2 incident with any faces, and clearly $v$ is type-0-1 incident with at most two faces, which is a contradiction. Hence, $v$ is adjacent to at most two $3$-vertices, and by symmetry, we can assume that $v_5$ and $v_1$ are $4^+$-vertices. Then $v$ is neither type-0-1 nor type-0-2 incident with $f_5$, and thus it is type-0-1 or type-0-2 incident with $f_1$, \ldots, $f_4$. 
It cannot be type-0-1 incident with $f_1$ or $f_4$, and thus it is type-0-2 incident with these faces; i.e., $v_2$ and $v_4$ are $3$-vertices and have $3$-vertex neighbors $x_2$ and $x_4$ incident with $f_1$ and $f_4$. By Lemma~\ref{5-vertex3}, $v_3$ is a $4^+$-vertex. Consequently, $v$ is also type-0-2 incident with $f_2$ and $f_3$, and thus $v_2$ and $v_4$ have $3$-vertex neighbors $x'_2$ and $x'_4$ incident with $f_2$ and $f_3$. By Lemma~\ref{33-path}, $x_2x'_2\in E(G)$ and $x_4x'_4\in E(G)$. But then $v$ is type-0-3 incident with $f_5$ and $\text{ch}(v\to f_5)=0$ by R5(0), and thus $\text{ch}(v)=4-4\times 1=0$. \end{proof} \begin{lemma} \label{vertex} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be a vertex of $G$. If $v$ is internal, then $\text{ch}(v)\ge 0$. If $v\in Z$, then $\text{ch}(v)=\deg(v)-6$. \end{lemma} \begin{proof} By Lemma~\ref{MD}, if $v$ is internal then $\deg (v)\geq 3$. If $v$ is a $3$-vertex, then $\text{ch}(v)=\text{ch}_0(v)=0$. If $v$ is a $4$- or $5$-vertex, then $\text{ch}(v)\ge 0$ by Lemmas~\ref{charge-4vertex} and \ref{charge-5vertex}. Hence, suppose that $v$ is a $6^+$-vertex, incident with $t$ $3$-faces, and type-II, type-I and type-0 incident with $d_{II}$, $d_I$, and $d_0$ $6^+$-faces, respectively. Note that $d_{II}+d_I/2=t$. By R6, we have \begin{align*} \text{ch}(v)&=\text{ch}_0(v)-d_{II}\times 2-d_I\times 3/2-d_0\\ &=\text{ch}_0(v)-(d_{II}+d_I+d_0+t)=\text{ch}_0(v)-\deg(v)=\deg(v)-6. \end{align*} If $v$ is internal, then $\deg(v)\ge 6$, and thus $\text{ch}(v)\ge 0$. \end{proof} \subsection{Final charge of faces} Let $f$ be a $6^+$-face. A subpath $S=u_0u_1\ldots u_t$ of the cycle bounding $f$ with at least two vertices is called a \emph{segment} of $f$ if $u_1$, \ldots, $u_{t-1}$ are type-II incident with $f$, and $u_0$ and $u_t$ are type-I incident with $f$. In particular, for $1\le i\le t$, the edge $u_{i-1}u_i$ is incident with a $3$-face. Note that the segments of $f$ are pairwise vertex-disjoint.
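The segment charge $\text{ch}(S)$ defined below combines the charge that $f$ receives from the vertices of the segment with the charge $f$ pays by rule Rt for the $t$ incident $3$-faces. The following Python sketch records this bookkeeping; the function name is ours, and the per-vertex amounts $\text{ch}(u_i\to f)$ are taken as given.

```python
def segment_charge(contributions):
    """ch(S) for a segment S = u_0 ... u_t of a 6+-face f.

    contributions[i] is ch(u_i -> f), the charge u_i sends to f by
    rules R4, R5, or R6.  The segment has t = len(contributions) - 1
    edges, each shared with a 3-face that receives 1 from f by Rt.
    """
    t = len(contributions) - 1
    return -t + sum(contributions)
```

For example, a $1$-segment whose ends are internal $3$-vertices (which send nothing) has charge $-t+0+0=-1$, and a $2$-segment consisting of a $3$-vertex end, a type-II incident $4$-vertex sending $1$, and a type-I-1 or type-I-2 incident $4$-vertex end sending $1/2$ has charge $-2+3/2=-1/2$; these are exactly the two shapes of negative segments in Proposition~\ref{segment}.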
Let us define $\text{ch}(S)=-t+\sum_{i=0}^t \text{ch}(u_i\to f)$; note that $\text{ch}(S)$ denotes the amount of charge received by $f$ from vertices of the segment, minus the amount sent by the rule Rt to $3$-faces incident with edges of $S$. If $\text{ch}(S)<0$, then we say $S$ is a \emph{negative segment}. A \emph{$t$-segment} is a segment with $t$ edges. \begin{proposition} \label{segment} Let $(G,L,Z)$ be a minimal counterexample. Let $S=u_0\ldots u_t$ be a negative $t$-segment of a $6^+$-face $f$ of $G$, where $\deg(u_0)\le \deg(u_t)$. Then all vertices of $S$ are internal, and either \begin{itemize} \item both $u_0$ and $u_t$ are $3$-vertices and $\text{ch}(S)=-1$, or \item $t\ge 2$, $u_0$ is a $3$-vertex, $u_t$ is a $4$-vertex type-I-1 or type-I-2 incident with $f$, and $\text{ch}(S)=-1/2$. \end{itemize} Furthermore, if either $t\le 3$, or $t\le 5$ and $\text{ch}(S)=-1$, then $u_1$, \ldots, $u_{t-1}$ are $4$-vertices. \end{proposition} \begin{proof} Let $\beta_0=\beta_t=0$ and $\beta_i=1$ for $1\le i \le t-1$. Note that $\text{ch}(S)=-1+\sum_{i=0}^t (\text{ch}(u_i\to f)-\beta_i)$. For $1\le i\le t-1$, the edges $u_{i-1}u_i$ and $u_iu_{i+1}$ are incident with $3$-faces, and thus $u_i$ is a $4^+$-vertex type-II incident with $f$; hence, we have $\text{ch}(u_i\to f)\ge 1$ by R4(II), R5(II), and R6(II). Consequently, $\text{ch}(u_i\to f)-\beta_i\ge 0$ for $0\le i\le t$ and $\text{ch}(S)\ge -1$. Since $S$ is negative, we have $\text{ch}(u_i\to f)<\beta_i+1$ for $0\le i\le t$, and by R6(II), R6(I), R5(I), and R4(I), we conclude that all vertices of $S$ are internal of degree at most $5$, both $u_0$ and $u_t$ have degree at most $4$, and if they are $4$-vertices, then they are type-I-1 or type-I-2 incident with $f$. Furthermore, by R5(II), if $u_i$ is a $5$-vertex for some $i\in\{1,\ldots, t-1\}$, then $u_i$ is type-I-3 incident with another $6^+$-face, so that $\text{ch}(u_i\to f)=\beta_i$. 
By R4(I), if $u_i$ is a $4$-vertex for some $i\in\{0,t\}$, then $\text{ch}(u_i\to f)=\beta_i+1/2$, and thus either both $u_0$ and $u_t$ are $3$-vertices and $\text{ch}(S)=-1$, or $u_0$ is a $3$-vertex and $u_t$ is a $4$-vertex and $\text{ch}(S)=-1/2$. In the latter case, $u_t$ is type-I-1 or type-I-2 incident with $f$, and in particular $u_{t-1}$ is a $4^+$-vertex, and consequently $t\ge 2$. Suppose now that some vertex $u_i$ with $i\in\{1,\ldots,t-1\}$ is a $5$-vertex; as we observed, $u_i$ is type-I-3 incident with another $6^+$-face. By Lemma~\ref{5-vertex3}, we have $i\ge 2$, and by Lemma~\ref{5-vertex43}, we have $i\ge 3$. If $\text{ch}(S)=-1$, then $u_t$ is a $3$-vertex, and a symmetric argument shows that neither $u_{t-1}$ nor $u_{t-2}$ is a $5$-vertex. Consequently, if either $t\le 3$, or $t\le 5$ and $\text{ch}(S)=-1$, then $u_1$, \ldots, $u_{t-1}$ are $4$-vertices. \end{proof} We say two segments of the same $6^+$-face $f$ are \emph{adjacent} if an edge of the cycle bounding $f$ joins their ends. \begin{proposition} \label{charge} Let $(G,L,Z)$ be a minimal counterexample and let $f$ be a $6^+$-face of $G$. The following propositions hold. \begin{itemize} \item[(1)] If $S$ is a segment of $f$ adjacent to a negative $1$-segment, then $\text{ch}(S)\ge 0$. Additionally, if $S$ is a $1$-segment, then $\text{ch}(S)\ge 1/2$. \item[(2)] Suppose $uvw$ is a subpath of the cycle bounding $f$, where $uv$ is incident with a $3$-face. If $w$ is type-0 incident with $f$ and $v$ is a $4^+$-vertex, then $\text{ch}(v\to f)+\text{ch}(w\to f)\geq 1$ (see Figure~\ref{fig-charge}(a)). \item[(3)] Suppose $uvw$ is a subpath of the cycle bounding $f$, where both $v$ and $w$ are type-0 incident with $f$ and $v$ is a $4$-vertex. Let $u$, $w$, $x$, $y$ be the neighbors of $v$ listed in cyclic order according to their drawing in $G$. 
If $u$ is a $3$-vertex, $w$ is a $5^+$-vertex, and both $x$ and $y$ are $4^+$-vertices, then $\text{ch}(v\to f)+\text{ch}(w\to f)\geq 1$ (see Figure~\ref{fig-charge}(b)). \item[(4)] Suppose $uvwx$ is a subpath of the cycle bounding $f$, where $uv$ is incident with a $3$-face and $u$ and $v$ are $3$-vertices. If $wx$ is incident with a $3$-face, then let $T=\{w\}$, otherwise let $T=\{w,x\}$. Then either $\sum_{z\in T} \text{ch}(z\to f)\ge 1$ or both $w$ and $x$ are $4$-vertices type-0 incident with $f$. \end{itemize} \end{proposition} \begin{proof} Let us prove the claims separately. \begin{itemize} \item[(1)] Let $S=u_0\ldots u_t$, and let $S'=v_0v_1$ be a negative $1$-segment adjacent to $S$; say $v_1u_0$ is an edge of the cycle bounding $f$. By Proposition~\ref{segment}, both $v_0$ and $v_1$ are $3$-vertices. Note that $u_0$ is not adjacent to $v_0$, since all triangles in $G$ bound faces and $\deg(v_1)>2$. By Lemma~\ref{33-path}, $u_0$ is a $4^+$-vertex. Since $v_1$ is a $3$-vertex, $u_0$ is neither type-I-1 nor type-I-2 incident with $f$, and thus $\text{ch}(S)\ge 0$ by Proposition~\ref{segment}. Suppose now that $t=1$. If $\text{ch}(u_0\to f)\ge 3/2$, then $\text{ch}(S)\ge \text{ch}(u_0\to f)-1=1/2$. Hence, we can assume that $\text{ch}(u_0\to f)<3/2$, and thus by R6(I), $u_0$ is not a $6^+$-vertex. Since $u_0$ is neither type-I-1 nor type-I-2 incident with $f$, R4(I) and R5(I) imply $\text{ch}(u_0\to f)=1$. If $u_0$ is a $5$-vertex, this by R5(I) implies that $u_0$ is not type-I-3 incident with $f$, and thus $u_1$ is a $4^+$-vertex. If $u_0$ is a $4$-vertex, then by Lemma~\ref{33-path}, we again conclude that $u_1$ is a $4^+$-vertex. In either case, R4(I), R5(I), and R6(I) imply $\text{ch}(u_1\to f)\ge 1/2$, and thus $\text{ch}(S)=\text{ch}(u_0\to f)+\text{ch}(u_1\to f)-1\ge 1/2$. \item[(2)] By R4(I), R5(I), and R6(I), we have $\text{ch}(v\to f)\ge 1$, unless $v$ is a $4$-vertex type-I-1 or type-I-2 incident with $f$. 
If $v$ is type-I-1 or type-I-2 incident with $f$, then $\text{ch}(v\to f)=1/2$ and $w$ is a $5^+$-vertex. Note that in this case $w$ is not type-0-3 incident with $f$ (this is clear if $v$ is type-I-2 incident with $f$, and follows by Lemma~\ref{5-vertex43} if $v$ is type-I-1 incident with $f$). Hence, $\text{ch}(w\to f)\ge 1/2$ by R5(0) and R6(0), and $\text{ch}(v\to f)+\text{ch}(w\to f)\geq 1$. \item[(3)] If $w$ is a $6^+$-vertex, then $\text{ch}(w\to f)=1$ by R6(0). Hence, assume that $w$ is a $5$-vertex. By Lemma~\ref{5-vertex43}, $w$ is not type-0-3 incident with $f$, and thus $\text{ch}(w\to f)\ge 1/2$ by R5(0). If $xy\in E(G)$, then consider the face $g$ whose boundary contains the path $wvx$, and observe that $v$ is type-I-1 incident with $g$. If $xy\not\in E(G)$, then $v$ is not incident with any $3$-faces. In either case, $\text{ch}(v\to f)=1/2$ by R4(0), and thus $\text{ch}(v\to f)+\text{ch}(w\to f)\ge 1$. \item[(4)] By Lemma~\ref{33-path}, $w$ is a $4^+$-vertex. If $w$ is a $5$-vertex, then it is either type-I or type-0-2 incident with $f$. Hence, if $w$ is a $5^+$-vertex, then $\text{ch}(w\to f)\ge 1$ by R5(I), R5(0), and R6. Consequently, we can assume that $w$ is a $4$-vertex. If $wx$ is incident with a $3$-face, then note that $w$ is neither type-I-1 nor type-I-2 incident with $f$, and $\text{ch}(w\to f)=1$ by R4(I). Hence, assume that $wx$ is not incident with a $3$-face. By Lemma~\ref{33-path}, $x$ is a $4^+$-vertex. If $x$ is a $5^+$-vertex, then note that $w$ has no $3$-vertex neighbors other than $v$ by Lemma~\ref{33-path}, and thus $\text{ch}(w\to f)+\text{ch}(x\to f)\geq 1$ by (3). If $x$ is a $4$-vertex type-I incident with $f$, then note that $x$ is neither type-I-1 nor type-I-2 incident with $f$, and thus $\text{ch}(x\to f)=1$ by R4(I). Therefore, either $\sum_{z\in T} \text{ch}(z\to f)\ge 1$ or both $w$ and $x$ are $4$-vertices type-0 incident with $f$. 
\end{itemize} \end{proof} \begin{figure}[!htb] \centering {\includegraphics[height=0.35\textwidth]{charge}} (a) \hspace{3.0cm} (b) \hspace{1.0cm} \caption{Configurations from Proposition~\ref{charge}.}\label{fig-charge} \end{figure} \begin{lemma}\label{6face} Let $(G,L,Z)$ be a minimal counterexample and let $f$ be a face of $G$. If $f$ has length $6$, then $\text{ch}(f)\ge 0$. \end{lemma} \begin{proof} Note that $\text{ch}_0(f)=0$. Let $v_1\ldots v_6$ be the cycle bounding $f$. If all faces that share edges with $f$ are $3$-faces, then $v_1$, \ldots, $v_6$ are $4^+$-vertices type-II incident with $f$, and thus $\text{ch}(f)\ge 6\times 1-6\times 1=0$ by R4(II), R5(II), R6(II), and Rt. Hence, we can assume that $f$ shares an edge with a $6^+$-face. Then each edge that $f$ shares with a $3$-face is contained in a segment. Let $S_1$, \ldots, $S_k$ be the segments of $f$. By Rt, we have $\text{ch}(f)\ge \sum_{i=1}^k \text{ch}(S_i)$, and thus we can assume that, say, $S_1$ is a negative segment. We can label the vertices so that $S_1=v_1\ldots v_m$ for some $m\ge 2$. Let us first consider the case that $\text{ch}(S)\ge -1/2$ for every segment $S$ of $f$. By Proposition~\ref{segment}, we have $\text{ch}(S_1)=-1/2$, $m\ge 3$, and we can assume that $v_1$ is a $3$-vertex and $v_m$ is a $4$-vertex type-I-1 or type-I-2 incident with $f$. Consequently, $v_{m+1}$ is a $5^+$-vertex (and in particular $m\le 5$). If $v_{m+1}$ is type-0 incident with $f$, then $f$ cannot have a negative segment other than $S_1$ (as it would be a $1$-segment, which cannot have charge at least $-1/2$ by Proposition~\ref{segment}). Furthermore, $\text{ch}(v_m\to f)+\text{ch}(v_{m+1}\to f)\ge 1$ by Proposition~\ref{charge}(2), and thus $\text{ch}(f)\ge (\text{ch}(S_1)-\text{ch}(v_m\to f))+(\text{ch}(v_m\to f)+\text{ch}(v_{m+1} \to f))\ge -1+1=0$. Hence, we can assume that $v_{m+1}$ is type-I incident with $f$ and starts a segment $S_2=v_{m+1}\ldots v_i$ for some $i\in \{5,6\}$.
We have $\text{ch}(v_{m+1}\to f)\ge 1$ by R5(I) and R6(I), and if $v_i$ is a $4^+$-vertex, then $\text{ch}(v_i\to f)\ge 1/2$ by R4(I), R5(I), and R6(I), and thus $\text{ch}(S_2)\ge 1/2$, and $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(S_2)\ge 0$. Hence, suppose that $v_i$ is a $3$-vertex. Note that $m\le 4$, and thus $v_2$, \ldots, $v_{m-1}$ are $4$-vertices by Proposition~\ref{segment}. If $m=3$, then by Lemmas~\ref{33-path} and \ref{34-path}, we conclude that $i=5$ and $v_6$ is a $5^+$-vertex, not type-0-3 incident with $f$ by Lemma~\ref{5-vertex3}. Hence, $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(S_2)+\text{ch}(v_6\to f)\ge -1/2+0+1/2=0$ by R5(0) and R6(0). Therefore, we can assume that $m=4$, and thus $i=6$. By Lemma~\ref{33-path}, $v_4$ is not type-I-1 incident with $f$, and thus it is type-I-2 incident; i.e., $f$ shares the edge $v_4v_5$ with a $6$-face bounded by a cycle $v_4v_5w_1yx_1x$, where the edge $w_1y$ is incident with a $3$-face bounded by cycle $w_1yy_2$, $x_1$ and $w_1$ are $3$-vertices and $y$ is a $4$-vertex. Note that $y_2$ is a $4^+$-vertex by Lemma~\ref{4-vertex}. Consequently, $v_5$ is either a $6^+$-vertex, or a $5$-vertex type-I-4 incident with $f$, and $\text{ch}(v_5\to f)=3/2$ by R5(I) and R6(I). Consequently, $\text{ch}(S_2)=1/2$ and $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(S_2)=0$. Let us now consider the case that $f$ is incident with a segment $S$ with $\text{ch}(S)<-1/2$, say $S=S_1$. By Proposition~\ref{segment}, $\text{ch}(S_1)=-1$ and both $v_1$ and $v_m$ are $3$-vertices. Since $m\le 6$, Proposition~\ref{segment} also implies that $v_2$, \ldots, $v_{m-1}$ are $4$-vertices. By Lemma~\ref{6-cycle}, we have $m\le 5$. If $m=5$, then by Lemmas~\ref{33-path} and \ref{6-cycle}, $v_6$ is a $5^+$-vertex. Note that if $v_6$ is a $5$-vertex, then it is type-0-1 incident with $f$. Hence, $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(v_6\to f)=-1+1=0$ by R5(0) and R6(0). Let us distinguish cases depending on whether $m$ is $2$, $3$, or $4$. 
\smallskip \textbf{Case $m=4$:} By Lemmas~\ref{33-path} and \ref{6-cycle}, we can assume that $v_5$ is a $4^+$-vertex and $v_6$ is a $5^+$-vertex. If $v_5v_6$ is incident with a $3$-face (and thus $v_5v_6$ is a segment $S_2$), then note that $v_5$ is not type-I-1 or type-I-2 incident with $f$, and thus $\text{ch}(v_i\to f)\ge 1$ for $i\in\{5,6\}$ by R4(I), R5(I), and R6(I), $\text{ch}(S_2)\ge 1$, and $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(S_2)\ge -1+1=0$. Hence, suppose that $v_5$ and $v_6$ are type-0 incident with $f$. Note that neither $v_5$ nor $v_6$ is type-0-3 incident with $f$ by Lemma~\ref{5-vertex3}, and thus $\text{ch}(v_5\to f)+\text{ch}(v_6\to f)\ge 1$ by R5(0) and R6(0) unless $v_5$ is a $4$-vertex. If $v_5$ is a $4$-vertex, then note that $v_4$ is the only $3$-vertex neighbor of $v_5$ by Lemma~\ref{34-path}, and thus $\text{ch}(v_5\to f)+\text{ch}(v_6\to f)\ge 1$ by Proposition~\ref{charge}(3). Hence, $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(v_5\to f)+\text{ch}(v_6\to f)\ge -1+1=0$. \smallskip \textbf{Case $m=3$:} By Lemma~\ref{33-path}, $v_4$ and $v_6$ are $4^+$-vertices. If $v_5$ is a $3$-vertex, then $v_4$ and $v_6$ are $5^+$-vertices by Lemma~\ref{33-path} and by symmetry we can assume that $v_4$ is type-0 incident with $f$. Then $S_1$ is the only negative segment of $f$ ($v_5v_6$ could be a $1$-segment, but by Proposition~\ref{charge}(1), a negative $1$-segment cannot be adjacent to $S_1$) and if $v_4$ is a $5$-vertex, then it is type-0-1 incident with $f$. Hence, $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(v_4\to f)\ge -1+1=0$ by R5(0) and R6(0). Consequently, we can assume that $v_5$ is also a $4^+$-vertex. Suppose that $f$ is incident with a segment $S_2\neq S_1$. If $\text{ch}(S_2)\ge 1$, then $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(S_2)\ge -1+1=0$. Hence, we can assume that $\text{ch}(S_2)<1$. 
Note that neither $v_4$ nor $v_6$ is type-I-1 or type-I-2 incident with $f$, and thus by R4(I), R5(I), and R6(I), this is only possible if $v_5$ is an end of $S_2$ (say $S_2=v_5v_6$), $v_5$ is a $4$-vertex, and $v_5$ is type-I-1 or type-I-2 incident with $f$. Then $\text{ch}(S_2)=1/2$ and $v_4$ is a $5^+$-vertex. By Lemma~\ref{5-vertex3}, $v_4$ is not type-0-3 incident with $f$, and thus $\text{ch}(v_4\to f)\ge 1/2$ by R5(0) and R6(0). Hence, $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(S_2)+\text{ch}(v_4\to f)\ge -1+1/2+1/2=0$. Hence, we can assume that $S_1$ is the only segment of $f$. If $\text{ch}(v_4\to f)\ge 1/2$ and $\text{ch}(v_6\to f)\ge 1/2$, then $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(v_4\to f)+\text{ch}(v_6\to f)\ge -1+2\times 1/2=0$. Hence, by symmetry we can assume that $\text{ch}(v_4\to f)<1/2$. By Lemma~\ref{5-vertex3}, $v_4$ is not type-0-3 incident with $f$, and thus by R4(0), R5(0), and R6(0), we conclude that $v_4$ is a $4$-vertex. By Lemma~\ref{34-path}, $v_3$ is the only $3$-vertex neighbor of $v_4$. If $v_5$ is a $5^+$-vertex, then $\text{ch}(v_4\to f)+\text{ch}(v_5\to f)\ge 1$ by Proposition~\ref{charge}(3), and thus $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(v_4\to f)+\text{ch}(v_5\to f)\ge -1+1=0$. Hence, we can assume that $v_5$ is a $4$-vertex, and thus $v_6$ is a $5^+$-vertex by Lemma~\ref{6-cycle}. By Lemma~\ref{5-vertex3}, $v_6$ is not type-0-3 incident with $f$, and thus $\text{ch}(v_6\to f)\ge 1/2$ by R5(0) and R6(0). Let $f'$ denote the face with which $f$ shares the edge $v_5v_6$, and let $z\neq v_6$ be the neighbor of $v_5$ in the boundary cycle of $f'$. By Lemma~\ref{34-path}, $z$ is a $4^+$-vertex, and thus if $v_5$ is incident with a $3$-face, then $v_5$ is type-I-2 incident with $f'$. Consequently, $\text{ch}(v_5\to f)=1/2$ by R4(0), and $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(v_5\to f)+\text{ch}(v_6\to f)\ge -1+2\times 1/2=0$.
\smallskip \textbf{Case $m=2$:} If $k=3$, then the segments of $f$ are $S_1$, $S_2=v_3v_4$ and $S_3=v_5v_6$. By Proposition~\ref{charge}(1), $\text{ch}(S_2),\text{ch}(S_3)\ge 1/2$, and thus $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(S_2)+\text{ch}(S_3)\ge -1+2\times 1/2=0$. Hence, we can assume that $k\le 2$. If $v_3$ is contained in a segment, then let $T_3=\{v_3\}$, otherwise let $T_3=\{v_3,v_4\}$. If $v_6$ is contained in a segment, then let $T_6=\{v_6\}$, otherwise let $T_6=\{v_5,v_6\}$. For $i\in\{3,6\}$, let $\gamma_i=\sum_{x\in T_i} \text{ch}(x\to f)$. By Lemma~\ref{6-cycle}, $v_3$, \ldots, $v_6$ cannot all be $4$-vertices, and thus by Proposition~\ref{charge}(4), we have $\max(\gamma_3,\gamma_6)\ge 1$. If $k=1$, then $\text{ch}(f)=\text{ch}(S_1)+\gamma_3+\gamma_6\ge -1+0+1=0$. Hence, we can assume that $k=2$. Let $S_2\neq S_1$ be the other segment of $f$, with ends $x$ and $y$, and let $\beta=\text{ch}(S_2)-\text{ch}(x\to f)-\text{ch}(y \to f)$. Observe that $\beta\ge -1$, and $\text{ch}(f)\ge \text{ch}(S_1)+\beta+\gamma_3+\gamma_6$. If $\gamma_3,\gamma_6\ge 1$, we have $\text{ch}(f)\ge 0$. Hence, by symmetry we can assume that $\gamma_3<1$, and by Proposition~\ref{charge}(4), $v_3$ and $v_4$ are $4$-vertices type-0 incident with $f$, and thus $S_2=v_5v_6$. By Lemma~\ref{33-path}, $v_5$ and $v_6$ are $4^+$-vertices, and clearly neither of them is type-I-1 or type-I-2 incident with $f$. By R4(I), R5(I), and R6(I), we have $\text{ch}(v_i\to f)\ge 1$ for $i\in \{5,6\}$, and thus $\text{ch}(S_2)\ge 1$. Consequently, $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(S_2)\ge 0$. \end{proof} \begin{lemma}\label{7face} Let $(G,L,Z)$ be a minimal counterexample and let $f$ be a face of $G$. If $f$ has length at least $7$, then $\text{ch}(f)\ge 0$. \end{lemma} \begin{proof} Let $C=v_1\ldots v_m$ be the cycle bounding $f$.
If all faces that share edges with $f$ are $3$-faces, then $v_1$, \ldots, $v_m$ are $4^+$-vertices type-II incident with $f$, and thus $\text{ch}(f)\ge \text{ch}_0(f)+m\times 1-m\times 1\ge 0$ by R4(II), R5(II), R6(II), and Rt. Hence, we can assume that $f$ shares an edge with a $6^+$-face. Then each edge that $f$ shares with a $3$-face is contained in a segment. Let $S_1$, \ldots, $S_k$ be the segments of $f$, and let $n$ denote the number of them that are negative. For a negative segment $S=v_iv_{i+1}\ldots v_s$, we say that $S$ \emph{owns} the edges of the path $S$, the edge $v_sv_{s+1}$, and if $S$ is a $1$-segment, then also the edge $v_{i-1}v_i$ (with all indices taken cyclically modulo $m$). Note that by Proposition~\ref{segment} and Lemma~\ref{33-path}, each edge of $C$ is owned by at most one negative segment, and each negative segment owns at least three edges. Consequently, $n\le \lfloor m/3\rfloor$, and $$\text{ch}(f)\ge \text{ch}_0(f)+\sum_{i=1}^k \text{ch}(S_i)\ge \text{ch}_0(f)-n\ge m-6-\lfloor m/3\rfloor.$$ It follows that if $m\ge 8$, then $\text{ch}(f)\ge 0$. Hence, suppose that $|f|=7$, and thus $n\le 2$. If $f$ has at most one negative segment, or two negative segments of charge at least $-1/2$, then $\text{ch}(f)\ge \text{ch}_0(f)-1=0$. Hence, we can assume that $f$ has two negative segments $S_1$ and $S_2$ and $\text{ch}(S_1)<-1/2$. By Proposition~\ref{segment}, $\text{ch}(S_1)=-1$, and we can assume $S_1=v_1\ldots v_s$ for some $s\le 5$, $v_1$ and $v_s$ are $3$-vertices, and $v_2$, \ldots, $v_{s-1}$ are $4$-vertices. By Lemma~\ref{33-path}, $v_{s+1}$ and $v_7$ are $4^+$-vertices; furthermore, they are clearly neither type-I-1 nor type-I-2 incident with $f$. Since $S_2$ is negative, Proposition~\ref{segment} implies that $v_{s+1},v_7\not\in V(S_2)$, and thus $s\le 3$.
If $\text{ch}(S_2)>-1$, then by Proposition~\ref{segment} we conclude that $s=2$ and $S_2$ is the $2$-segment $v_4v_5v_6$, and one end of $S_2$ is a $3$-vertex; otherwise, Proposition~\ref{segment} implies that both ends of $S_2$ are $3$-vertices. By symmetry, we can assume that $v_6$ is a $3$-vertex belonging to $S_2$. By Lemma~\ref{34-path}, $v_7$ is a $5^+$-vertex. Note that if $v_7$ is a $5$-vertex, then it is type-0-1 incident with $f$, and thus $\text{ch}(v_7\to f)=1$ by R5(0) and R6(0). Hence, $\text{ch}(f)\ge \text{ch}_0(f)+\text{ch}(S_1)+\text{ch}(S_2)+\text{ch}(v_7\to f)\ge 1-2\times 1+1=0$. \end{proof} \begin{lemma} \label{face} Let $(G,L,Z)$ be a minimal counterexample. Then every face $f$ of $G$ satisfies $\text{ch}(f)\ge 0$. \end{lemma} \begin{proof} If $f$ is a $6^+$-face, this follows from Lemmas~\ref{6face} and \ref{7face}. If $f$ is a $3$-face, then note that $f$ only shares edges with $6^+$-faces by the absence of $4$- and $5$-cycles, and thus $\text{ch}(f)=\text{ch}_0(f)+3\times 1=0$ by Rt. \end{proof} \section{$(11:3)$-colorability of planar graphs} We are now ready to prove our main result. \begin{proof}[Proof of Theorem~\ref{MTl}] Suppose for a contradiction that there exists a plane graph $G_0$ without $4$- or $5$-cycles and an assignment $L_0$ of lists of size 11 to vertices of $G_0$ such that $G_0$ is not $(L_0:3)$-colorable. Let $z$ be any vertex of $G_0$, let $L_0'(z)$ be any $3$-element subset of $L_0(z)$, and let $L'_0(v)=L_0(v)$ for all $v\in V(G_0)\setminus\{z\}$. Then $G_0$ is not $(L'_0:3)$-colorable, and thus $(G_0, L'_0,\{z\})$ is a counterexample. Therefore, there exists a minimal counterexample $(G,L,Z)$. Let $\text{ch}$ be the assignment of charges to vertices and faces of $G$ obtained from the initial charge $\text{ch}_0$ as described in Section~\ref{sec-discharge}.
By (\ref{eq:sum}), the fact that the total amount of charge does not change by its redistribution, and Lemmas~\ref{vertex} and \ref{face}, we have $$-12=\sum_{v\in V(G)}\text{ch}_0(v)+\sum_{f\in F(G)} \text{ch}_0(f)=\sum_{v\in V(G)}\text{ch}(v)+\sum_{f\in F(G)} \text{ch}(f)\ge \sum_{z\in Z} (\deg(z)-6).$$ Since $|Z|\le 3$ and $\deg(z)\ge 2$ for all $z\in Z$ by Lemma~\ref{conn}, we conclude that $|Z|=3$ and all vertices of $Z$ have degree two. But since $G$ is connected and $G[Z]$ is a triangle, this implies that $V(G)=Z$, and thus $G$ is $(L:3)$-colorable. This is a contradiction. \end{proof} \section*{Acknowledgments} The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement n.616787. Xiaolan Hu is partially supported by NSFC under grant number 11601176 and NSF of Hubei Province under grant number 2016CFB146. \bibliographystyle{siam}
\section{Measure derivatives of Lions, associated It\^o formula and examples} \label{sec measure derivatives and ito} For the construction of the measure derivative in the sense of Lions we follow the approach from~\cite[Section 6]{cardaliaguet2012}. There are three main differences. The first is that we define the measure derivative in a domain. More precisely, we define the measure derivative for any measure as long as it has support in $D_k \subset D$ for some $k\in \mathbb N$ (recall that $D_k \subset D_{k+1}$, that $\bigcup_k D_k = D$ and that every $D_k$ is compact). This is precisely what is needed for the analysis in this paper. The second difference is that we make explicit why the measure derivative depends neither on the probability space used to realise the measure nor on the random variable used. The third difference is in proving the ``Structure of the gradient'', see~\cite[Theorem 6.5]{cardaliaguet2012}. Thanks to an observation by Sandy Davie (University of Edinburgh), we can show as part iii) of Proposition~\ref{propn L deriv} that the measure derivative has the right structure even if it only exists at the point $\mu$ instead of for every square integrable measure, as is required in~\cite{cardaliaguet2012}. The method of Sandy Davie also conveniently results in a much shorter proof. \subsection{Construction of first-order Lions' measure derivative on $D_k\subset D \subseteq \mathbb{R}^d$} Consider $u:\mathcal{P}_2(D) \to \mathbb R$. Here $\mathcal{P}_2(D)$ is the space of probability measures on $D$ that have finite second moments, i.e. $\int_D |x|^2 \mu (dx) < \infty$ for $\mu \in \mathcal{P}_2(D)$. We want to define the derivative at points $\mu \in \mathcal P_2(D)$ such that $\text{supp}(\mu)\subseteq D_k$.
We shall write $\mu \in \mathcal P(D_k)$ if $\mu$ is a probability measure on $D$ with support in $D_k$. \begin{definition}[L-differentiability at $\mu \in \mathcal{P}(D_k)$] \label{def L differentiability} We say that $u$ is {\em L-differentiable at} $\mu \in \mathcal{P}(D_k)$ if there is an atomless\footnote{ Given $(\Omega, \mathcal F, \mathbb P)$, an {\em atom} is $E\in \mathcal F$ s.t. $\mathbb P(E)>0$ and for any $G\in \mathcal F$ with $G\subset E$ and $\mathbb P(E) > \mathbb P(G)$ we have $\mathbb P(G) = 0$.}, Polish probability space $(\Omega, \mathcal F, \mathbb P)$ and an $X \in L^2(\Omega)$ such that $\mu = \mathscr{L}(X)$ and the function $U:L^2(\Omega) \to \mathbb R$ given by $U(Y) := u(\mathscr{L}(Y))$ is Fr\'echet differentiable at $X$. We will call $U$ the {\em lift} of $u$. \end{definition} Clearly, $X$ s.t. $\mu = \mathscr L(X)$ can always be chosen so that $\text{supp}(X) \subseteq D_k$ for $\mu \in \mathcal P(D_k)$. We recall that saying $U:L^2(\Omega) \to \mathbb R$ is Fr\'echet differentiable at $X$ with $\text{supp}(X)\subseteq D_k$ means that there exists a bounded linear operator $A:L^2(\Omega) \to \mathbb R$ such that \[ \lim_{\substack{|Y|_2 \to 0 \\ \text{supp}(X+Y)\subseteq D} } \bigg| \frac{U(X+Y)-U(X)}{|Y|_2} - \frac{AY}{|Y|_2} \bigg| = 0\, . \] Since $L^2(\Omega)$ is a Hilbert space with the inner product $(X,Y) := \mathbb E [XY]$ we can identify $L^2(\Omega)$ with its dual $L^2(\Omega)^*$ via this inner product. Then the bounded linear operator $A$ defines an element $DU(X) \in L^2(\Omega)$ through \[(DU(X),Y) := AY \quad \forall Y \in L^2(\Omega).\] \begin{proposition} \label{propn L deriv} Let $u$ be L-differentiable at $\mu \in \mathcal P(D_k)$, with some atomless $(\Omega, \mathcal F, \mathbb P)$, lift $U$ and $X\in L^2(\Omega)$.
Let $(\bar \Omega, \bar{\mathcal F}, \bar {\mathbb P})$ be an arbitrary atomless, Polish probability space which supports $\bar{X} \in L^2(\bar \Omega)$ with $\mathscr{L}(\bar X) = \mu$, and on which we have the lift $\bar{U}(Y) := u(\mathscr{L}(Y))$. Then \begin{enumerate}[i)] \item The lift $\bar U$ is Fr\'echet differentiable at $\bar X$ with derivative $D\bar U(\bar X)\in L^2(\bar \Omega)$. \item The joint law of $(X,DU(X))$ equals that of $( \bar X ,D\bar U(\bar X))$. \item There is $\xi:D_k \to D_k$ measurable such that $\int_{D_k} |\xi(x)|^2 \mu (dx) < \infty$ and \[ \xi(X) = DU(X), \quad \xi(\bar X) = D\bar U(\bar X)\,.\] \end{enumerate} \end{proposition} Once this is proved we will know that the notion of L-differentiability depends neither on the probability space used nor on the random variable used. Moreover, the function $\xi$ given by this proposition is again independent of the probability space and random variable used. \begin{definition}[L-derivative of $u$ at $\mu$] \label{def L derivative} If $u$ is L-differentiable at $\mu$ then we write $\partial_\mu u(\mu):= \xi$, where $\xi$ is given by Proposition~\ref{propn L deriv}. Moreover, we have $\partial_\mu u: \mathcal P_2(D_k) \times D_k \to D_k$ given by \[ \partial_\mu u(\mu, y) := [\partial_\mu u(\mu)](y)\,. \] \end{definition} To prove Proposition~\ref{propn L deriv} we will need the following result: \begin{lemma} \label{lemma tau} Let $(\Omega, \mathcal F, \mathbb P)$ and $(\bar \Omega, \bar{\mathcal F}, \bar {\mathbb P})$ be two atomless, Polish probability spaces supporting $D_k$-valued random variables $X$ and $\bar X$ such that $\mathscr{L}(X) =\mathscr{L}(\bar X)$. Then for any $\epsilon > 0$ there exists $\tau:\Omega \to \bar \Omega$ which is bijective, such that both $\tau$ and $\tau^{-1}$ are measurable and measure preserving and moreover \[ |X - \bar X \circ \tau |_\infty < \epsilon \,\,\, \text{ and }\,\,\, |X \circ \tau^{-1} - \bar X|_\infty < \epsilon\,.
\] \end{lemma} \begin{proof} Let $(A_n)_n$ be a measurable partition of $D_k$ such that $\text{diam}(A_n) < \epsilon$. Let \[ B_n := \{X \in A_n\},\quad \bar B_n := \{\bar X \in A_n\}\,. \] These form measurable partitions of $\Omega$ and $\bar \Omega$ respectively and moreover $\mathbb P(B_n) = \bar{\mathbb P}(\bar B_n)$. As the probability spaces are atomless, there exist $\tau_n :B_n \to \bar B_n$ bijective, such that $\tau_n$ and $\tau_n^{-1}$ are measurable and measure preserving. See~\cite[Sec. 41, Theorem C]{hamlos1950} for details\footnote{The theorem in fact provides the required isomorphism between a measure on a separable atomless probability space and the unit interval.}. Let \[ \tau(\omega) := \tau_n(\omega)\,\,\, \text{if $\omega \in B_n$},\quad \tau^{-1}(\bar \omega) := \tau_n^{-1}(\bar \omega)\,\,\, \text{if $\bar \omega \in \bar B_n$} \,. \] We can see that these are measurable, measure preserving bijections. Now consider $\omega \in B_n$. Then $\tau(\omega) = \tau_n(\omega) \in \bar B_n$. But then $X(\omega) \in A_n$ and $\bar{X}(\tau(\omega)) \in A_n$ too. Hence \[ |X(\omega) - \bar{X}(\tau(\omega))| < \epsilon \quad \forall \omega \in \Omega\,. \] The estimate for the inverse is proved analogously. \end{proof} We use the notation $L^2:=L^2(\Omega)$ and $\bar L^2:= L^2(\bar \Omega)$. \begin{proof}[Proof of Proposition~\ref{propn L deriv}, part i)] For any $h>0$ we have $\tau_h, \tau_h^{-1}$ given by Lemma~\ref{lemma tau}, measure preserving and such that $|X - \bar{X}\circ\tau_h |_\infty < h$. This means that $|X - \bar{X}\circ\tau_h |_2 < h$ and we have the analogous estimate with $\tau_h^{-1}$. Our first aim is to show that $(DU(X)\circ \tau_h^{-1})_{h>0}$ is a Cauchy sequence in $\bar L^2$. Fix $\epsilon > 0$.
Then $\exists \,\delta > 0$ such that we have \[ |U(X+Y) - U(X) - (DU(X),Y)| < \frac{\epsilon}{2}|Y|_2 \quad \text{for all}\,\,\,|Y|_2 < \delta\,\,\, \text{and}\,\,\,\text{supp}(X+Y)\subseteq D \,, \] since $U$ is Fr\'echet differentiable at $X$. Fix $h, h' < \delta/2$ and consider $|\bar Y|_2 < \delta / 2$ with $\text{supp}(\bar X+\bar Y)\subseteq D$\,. Then, since the maps $\tau_h^{-1}$ are measure preserving, we have \[ (DU(X)\circ \tau_h^{-1}, \bar Y) = (DU(X), \bar Y \circ \tau_h)\,. \] Note that the inner product on the left is in $\bar L^2$ but the one on the right is in $L^2$. This will not be distinguished in our notation. Let $Z_h := \bar Y \circ \tau_h - X + \bar X\circ \tau_h$. Then $|Z_h|_2 \leq |\bar Y|_2 + |\bar X\circ \tau_h - X|_2 <\delta$ and since $\text{supp}(\bar X + \bar Y) \subseteq D$, we have $\text{supp}(X+ Z_h )\subseteq D$. Moreover \begin{equation*} \begin{split} (DU(X)\circ \tau_h^{-1} & - DU(X)\circ \tau_{h'}^{-1},\bar Y) = (DU(X), Z_h) - (DU(X),Z_{h'})\\ & + (DU(X), X - \bar X \circ \tau_h) + (DU(X), \bar X \circ \tau_{h'} - X)\\ = & - \left[U(X+Z_h) - U(X) - (DU(X),Z_h)\right] + \left[U(X+Z_h) - U(X)\right]\\ & + \left[U(X+Z_{h'}) - U(X) - (DU(X),Z_{h'})\right] - \left[U(X+Z_{h'}) - U(X)\right]\\ & + (DU(X), X - \bar X \circ \tau_h) + (DU(X), \bar X \circ \tau_{h'} - X)\,. \end{split} \end{equation*} But as $\tau_h$ is measure preserving and $U$ and $\bar U$ only depend on the law, we have \[ U(X+Z_h) = U(\bar Y \circ \tau_h + \bar X\circ \tau_h) = \bar U(\bar Y + \bar X) = U(X+Z_{h'}). \] Hence \[ \begin{split} |(DU(X)\circ \tau_h^{-1} & - DU(X)\circ \tau_{h'}^{-1},\bar Y)| \leq \frac{\epsilon}{2}|Z_{h'}|_2 + \frac{\epsilon}{2}|Z_{h}|_2 + 2|DU(X)|_2\max(h,h') \, \\ & \leq \epsilon |\bar Y|_2 + \epsilon \max(h,h') + 2|DU(X)|_2\max(h,h') \,.
\end{split} \] This means that \begin{equation*} \begin{split} & |DU(X)\circ \tau_h^{-1} - DU(X)\circ \tau_{h'}^{-1}|_2\\ & = \sup_{|\bar Y|_2 = \delta/2} \frac{|(DU(X)\circ \tau_h^{-1} - DU(X)\circ \tau_{h'}^{-1},\bar Y)|}{|\bar Y|_2}\leq \epsilon + (2 \epsilon + 4|DU(X)|_2) \frac{\max(h,h')}{\delta}\,. \end{split} \end{equation*} Since we can choose $h,h' < \tfrac{\delta}{2}$ and also $h,h' < \tfrac{\epsilon \delta}{4|DU(X)|_2}$ we have the required estimate and see that $(DU(X)\circ \tau_h^{-1})_{h>0}$ is a Cauchy sequence in $\bar L^2$. Thus, there is $\psi \in \bar L^2$ such that \[DU(X)\circ \tau_h^{-1}\to \psi \,\,\, \text{as} \,\,\, h \searrow 0.\] The next step is to show that $\bar U$ is Fr\'echet differentiable at $\bar X$ and $\psi = D\bar U(\bar X)$. To that end we note that $\bar U(\bar X + \bar Y) = U(X+Z_h)$ and \[ (DU(X),\bar Y\circ \tau_h) = (DU(X),Z_h) + (DU(X),X- \bar X \circ \tau_h ). \] Hence \begin{equation*} \begin{split} & |\bar U(\bar X + \bar Y) - \bar U(\bar X) - (\psi,\bar Y)|\\ & = |U(X+Z_h) - U(X) - (DU(X),\bar Y\circ \tau_h) + (DU(X),\bar Y \circ \tau_h) - (\psi,\bar Y)|\\ & \leq \epsilon|Z_h|_2 + |DU(X)|_2 h + |DU(X)\circ \tau_h^{-1} - \psi|_2|\bar Y|_2 \leq 4 \epsilon |\bar Y|_2, \end{split} \end{equation*} for $h$ sufficiently small. Thus $\bar U$ is Fr\'echet differentiable at $\bar X$ and $\psi = D\bar U(\bar X)\in \bar L^2$. \end{proof} \begin{proof}[Proof of Proposition~\ref{propn L deriv}, part ii)] We first note that \[ \mathscr{L}(X\circ \tau_h^{-1}, DU(X) \circ \tau_h^{-1}) = \mathscr{L}(X, DU(X)) \] since the mapping $\tau_h^{-1}$ is measure preserving. Moreover \[ \text{$(X\circ \tau_h^{-1}, DU(X) \circ \tau_h^{-1}) \to (\bar X, D \bar U(\bar X))$ in $L^2(\bar \Omega; \mathbb R^{2d})$ as $h\searrow 0$.} \] Hence we get that $\mathscr{L}(X,DU(X))= \mathscr{L}(\bar X,D\bar U(\bar X))$. \end{proof} \begin{proof}[Proof of Proposition~\ref{propn L deriv}, part iii)] Note that $\mu$ is not necessarily atomless.
We take $\lambda$, the translation-invariant probability measure on $\mathscr{B}(S^1)$, with $S^1$ denoting the unit circle. The probability space $( D_k \times S^1, \mathscr{B}(D_k)\otimes \mathscr{B}(S^1), \mu \otimes \lambda)$ is atomless. Let $\tilde L^2$ denote the space of square integrable random variables on this probability space. The random variable $\tilde X(x,s):=x$ is in $\tilde L^2$ and has law $\mu$. With the usual lift $\tilde U$ we know, from part i), that $D\tilde U(\tilde X)$ exists in $\tilde L^2$. Let $\xi(x,s) := D\tilde U(\tilde X)(x,s)$. We see that \[ \mathscr{L}(D\tilde U(\tilde X))(B) = \mu \otimes \lambda(D\tilde U(\tilde X) \in B) \] depends only on the law of $\tilde X$, which is $\mu$. Hence $D\tilde U(\tilde X)(x,s)$ cannot change with $s$, and thus $\xi(x,s) = \xi(x)$. Then \[ 1 = \mu\left(x\in D_k:\xi(x) = D\tilde U(\tilde X)(x)\right) = \mathbb P\left(\xi(X) = D U (X)\right) \] since $\mathscr{L}(X,DU(X))= \mathscr{L}(\tilde X ,D\tilde U(\tilde X))$ due to part ii). Hence $\xi(X) = DU(X)$ $\mathbb P$-a.s. \end{proof} \subsection{Higher-order derivatives} We observe that if $\mu$ is fixed then $\partial_\mu u(\mu)$ is a function from $D_k$ to $D_k$. If, for $y \in D_k$, $\partial_y \left[ \partial_\mu u(\mu)(y)_j \right]$ exists for each $j=1,\ldots,d$ then $\partial_y \partial_\mu u:\mathcal P(D_k) \times D_k \to D_k \times D_k$ is the matrix \[ \partial_y \partial_\mu u(\mu,y) := \big(\partial_y \left[ \partial_\mu u(\mu)(y)_j \right]\big)_{j=1,\ldots,d} \,\,. \] If we fix $y\in D_k$ then $\partial_\mu u(\cdot)(y)$ is a function from $\mathcal P(D_k)$ to $D_k$. Fixing $j=1,\ldots,d$, if $\partial_\mu u(\cdot)(y)_j : \mathcal P(D_k) \to \mathbb R$ is L-differentiable at some $\mu$ then its L-derivative is the function given by part iii) of Proposition~\ref{propn L deriv}, namely $\partial_\mu \big(\partial_\mu u(\mu)(y)_j \big):D_k \to D_k$.
The second-order derivative in measure thus constructed is $\partial_\mu^2 u : \mathcal P(D_k) \times D_k \times D_k \to D_k \times D_k$ given by \[ \partial_\mu^2 u(\mu, y, \bar y) := \Big( \partial_\mu \big(\partial_\mu u(\mu,y)_j \big)(\bar y) \Big)_{j=1,\ldots,d}\,\,. \] \subsection{It\^o formula for functions of measures} Assume we have a filtered probability space $(\Omega, \mathcal F, \mathbb P)$ with filtration $(\mathcal F_t)_{t\geq 0}$ satisfying the usual conditions, supporting an $(\mathcal F_t)_{t\geq 0}$-Brownian motion $w$ and adapted processes $b$ and $\sigma$ satisfying appropriate integrability conditions. We consider the It\^o process \[ dx_t = b_t \, dt + \sigma_t dw_t, \,\,\,x_0 \in L^2(\mathcal F_0) \] which satisfies $x_t \in D_k$ for all $t$ a.s. \begin{definition} We say that $u:\mathcal P_2(D) \to \mathbb R$ is in $\mathcal C^{(1,1)}(\mathcal P_2(D))$ if there is a continuous version of $y \mapsto \partial_\mu u(\mu)(y)$ such that the mapping $\partial_\mu u : \mathcal P_2(D) \times D \to D$ is jointly continuous at any $(\mu,y)$ s.t. $y\in \text{supp}(\mu)$, and such that $y\mapsto \partial_\mu u(\mu,y)$ is continuously differentiable and its derivative $\partial_y \partial_\mu u:\mathcal P_2(D) \times D \to D \times D$ is jointly continuous at any $(\mu,y)$ s.t. $y\in \text{supp}(\mu)$. \end{definition} The notation $\mathcal C^{(1,1)}$ is chosen to emphasise that we can take one measure derivative which is again differentiable (in the usual sense) with respect to the new free variable that arises. Note that in~\cite{chassagneux2014classical} such functions are called partially $\mathcal C^2$.
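The L-derivative can be probed numerically through the link between measure derivatives and ordinary partial derivatives of the empirical projection $u_N(x_1,\ldots,x_N) := u\big(\tfrac1N\sum_{i=1}^N \delta_{x_i}\big)$, namely $\partial_{x_i} u_N = \tfrac1N\,\partial_\mu u(\mu^N)(x_i)$, see~\cite[Proposition 3.1]{chassagneux2014classical}. A minimal sketch in dimension $d=1$ for the test function $u(\mu)=\int x^2\,\mu(dx)$, whose L-derivative is $\partial_\mu u(\mu)(y)=2y$ (the function and the particle locations are illustrative choices, not taken from the paper):

```python
import numpy as np

# u(mu) = int x^2 mu(dx) has Lions derivative d_mu u(mu)(y) = 2y.
# For the empirical measure mu^N of x_1, ..., x_N, the projection
# u_N(x_1, ..., x_N) := u(mu^N) satisfies d/dx_i u_N = (1/N) * 2 x_i.

def u_emp(x):
    # u evaluated at the empirical measure of the particle vector x
    return np.mean(x ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, size=50)   # particles in an illustrative "domain"

# central finite differences of the projection u_N
eps = 1e-6
grad = np.array([(u_emp(x + eps * e) - u_emp(x - eps * e)) / (2 * eps)
                 for e in np.eye(len(x))])

# compare with (1/N) * d_mu u(mu^N)(x_i) = 2 x_i / N
assert np.allclose(grad, 2 * x / len(x), atol=1e-5)
```

Any $u$ with a known L-derivative can be tested the same way; the identity is exact here because $u_N$ is a polynomial in the particle positions.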
\begin{proposition} \label{propn ito for meas only} Assume that \[ \mathbb E \int_0^\infty |b_t|^2 + |\sigma_t|^4 \, dt < \infty\,. \] Let $u$ be in $\mathcal C^{(1,1)}(\mathcal P_2(D))$ such that for any compact subset $\mathcal K \subset \mathcal P_2(D)$ \begin{equation} \label{eq ito integrability condition} \sup_{\mu \in \mathcal K} \int_{D} \left[|\partial_\mu u(\mu)(y)|^2 + |\partial_y \partial_\mu u(\mu)(y)|^2 \right]\mu(dy) < \infty \,. \end{equation} Then, for $\mu_t := \mathscr L(x_t)$, \[ \begin{split} u( \mu_t) = & u(\mu_0) + \int_0^t \mathbb E\left[ b_s \partial_\mu u(\mu_s)(x_s) + \frac{1}{2}\text{tr}\left[ \sigma_s \sigma_s^* \partial_y \partial_\mu u(\mu_s)(x_s)\right]\right]\,ds\,. \end{split} \] \end{proposition} Note that since we are assuming that the process $x$ never leaves some $D_k$, we have $\text{supp}(\mu_t) \subset D_k$ for all times $t$. The proof relies on replacing $\mu_t$ by an approximation arising as the empirical measure of $N$ independent copies of the process $x$. For marginal empirical measures there is a direct link between measure derivatives and partial derivatives, see~\cite[Proposition 3.1]{chassagneux2014classical}. One can then apply the classical It\^o formula to the approximating system of independent copies of $x$ and take the limit. This is done in~\cite[Theorem 3.5]{chassagneux2014classical}. Proposition~\ref{propn ito for meas only} can be used to derive an It\^o formula for a function which depends on $(t,x,\mu)$. \begin{definition} \label{def c122} By $\mathcal C^{1,2,(1,1)}([0,\infty)\times D \times \mathcal P_2(D))$ we denote the functions $v=v(t,x,\mu)$ such that $v(\cdot,\cdot,\mu) \in C^{1,2}([0,\infty)\times D)$ for each $\mu$, and such that $v(t,x,\cdot)$ is in $\mathcal C^{(1,1)}(\mathcal P_2(D))$ for each $(t,x)$. Moreover, all the resulting (partial) derivatives must be jointly continuous in $(t,x,\mu)$ or $(t,x,\mu,y)$ as appropriate.
Finally, by $\mathcal C^{2,(1,1)}(D \times \mathcal P_2(D))$ we denote the subspace of $\mathcal C^{1,2,(1,1)}([0,\infty)\times D \times \mathcal P_2(D))$ of functions $v$ that are constant in $t$. \end{definition} To conveniently express integrals with respect to the laws of the process taken \textit{only} over the ``new'' variables arising in the measure derivative, we introduce another probability space $(\tilde \Omega, \tilde{\mathcal F}, \tilde{\mathbb P})$, a filtration $(\tilde{\mathcal F}_t)_{t\geq 0}$ and processes $\tilde w$, $\tilde b$, $\tilde \sigma$ and a random variable $\tilde x_0$ on this probability space such that they have the same laws as $w$, $b$, $\sigma$ and $x_0$. We assume $\tilde w$ is a Wiener process. Then \[ d\tilde x_t = \tilde b_t \, dt + \tilde \sigma_t d\tilde w_t, \,\,\,\tilde x_0 \in L^2(\tilde{\mathcal F}_0) \] is another It\^o process which satisfies $\tilde x_t \in D_k$ for all $t$ a.s. Moreover, if we now consider the probability space $(\Omega\times \tilde \Omega, \mathcal F \otimes \tilde{\mathcal F}, \mathbb P \otimes \tilde{\mathbb P})$ then we see that the processes with and without tilde are independent on this new space. \begin{proposition}[It\^o formula] \label{propn ito formula full} Assume that \[ \mathbb E \int_0^\infty |b_t|^2 + |\sigma_t|^4 \, dt < \infty\,. \] Let $v \in \mathcal C^{1,2,(1,1)}([0,\infty)\times D \times \mathcal P_2(D))$ such that for any compact subset $\mathcal K \subset \mathcal P_2(D)$ \begin{equation} \label{eq ito integrability condition tx} \sup_{t\geq 0,\, x\in D,\, \mu \in \mathcal K} \int_{D} \left[|\partial_\mu v(t,x,\mu)(y)|^2 + |\partial_y \partial_\mu v(t,x,\mu)(y)|^2 \right]\mu(dy) < \infty \,.
\end{equation} Then, for $\mu_t := \mathscr L(\tilde x_t)$, \[ \begin{split} v(t,x_t, \mu_t) - v(0,x_0,\mu_0) = & \int_0^t \left[\partial_t v(s,x_s,\mu_s) + b_s \partial_x v(s,x_s,\mu_s) + \frac{1}{2}\text{tr}\left[\sigma_s\sigma_s^* \partial_x^2 v(s,x_s,\mu_s)\right] \right]\,ds \\ & + \int_0^t \partial_x v(s,x_s,\mu_s)\, \sigma_s \, dw_s \\ & + \int_0^t \tilde{\mathbb E}\left[ \tilde b_s \partial_\mu v(s,x_s,\mu_s)(\tilde x_s) + \frac{1}{2}\text{tr}\left[\tilde{\sigma}_s\tilde{\sigma}_s^* \partial_y \partial_\mu v(s,x_s,\mu_s)(\tilde x_s)\right]\right]\,ds\,. \end{split} \] \end{proposition} Here we follow the argument from~\cite{buckdahn2017mean} explaining how to go from an It\^o formula for functions of measures only, i.e. from Proposition~\ref{propn ito for meas only}, to the general case. Note that it is possible to assume that $\tilde w$, $\tilde b$, $\tilde \sigma$ and $\tilde x_0$ have the same laws as $w$, $b$, $\sigma$ and $x_0$ above, but in fact this is not necessary. In this paper this generality is needed in the proof of Lemma~\ref{lemma:uniform}. \begin{proof}[Outline of proof for Proposition~\ref{propn ito formula full}] Fix $(\bar t,\bar x)$ and apply Proposition~\ref{propn ito for meas only} to the function $u(\mu):=v(\bar t,\bar x,\mu)$ and the law $\mu_t:=\mathscr L(\tilde x_t)$. Then \[ \begin{split} v(\bar t,\bar x,\mu_t) - v(\bar t,\bar x,\mu_0) & = \int_0^t \tilde{\mathbb E}\left[ \tilde b_s \partial_\mu v(\bar t,\bar x,\mu_s)(\tilde x_s) + \frac{1}{2}\text{tr}\left[\tilde{\sigma}_s\tilde{\sigma}_s^* \partial_y \partial_\mu v(\bar t,\bar x,\mu_s)(\tilde x_s)\right]\right]\,ds \\ & =: \int_0^t M(\bar t,\bar x, \mu_s)\,ds\,. \end{split} \] We thus see that the map $t\mapsto v(\bar t, \bar x, \mu_t)$ is absolutely continuous for all $(\bar t, \bar x)$ and so for almost all $t$ we have $\partial_t v(\bar t, \bar x, \mu_t) = M(\bar t, \bar x, \mu_t)$.
Note that for completeness we would need to use the definition of $\mathcal C^{1,2,(1,1)}$ functions and a limiting argument to get the partial derivative for all $t$. See the proof of the corresponding It\^o formula in~\cite{chassagneux2014classical}. We now consider $\bar v$ given by $\bar v(t, x) := v(t,x,\mu_t)$. Then $\partial_t \bar v(t,x) = (\partial_t v)(t,x,\mu_t) + M(t,x,\mu_t)$. Using the usual It\^o formula we then have \[ \begin{split} \bar v(t,x_t) - \bar v(0,x_0) = & \int_0^t \left[\partial_t v(s,x_s,\mu_s) + M(s,x_s,\mu_s) + b_s \partial_x v(s,x_s,\mu_s) + \frac{1}{2}\text{tr}\left[\sigma_s\sigma_s^* \partial_x^2 v(s,x_s,\mu_s)\right] \right]\,ds \\ & + \int_0^t \partial_x v(s,x_s,\mu_s)\, \sigma_s\, dw_s\,. \end{split} \] \end{proof} \section{Introduction} We will consider either the time interval $I=[0,T]$ for some fixed $T>0$ or $I=[0,\infty)$. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $(\mathcal{F}_t)_{t\in I}$ a filtration such that $\mathcal{F}_0$ contains all sets of $\mathcal{F}$ that have probability zero and such that the filtration is right-continuous. Let $w=(w_t)_{t\in I}$ be an $\mathbb{R}^{d'}$-valued Wiener process which is an $(\mathcal{F}_t)_{t\in I}$-martingale. We consider the McKean--Vlasov stochastic differential equation (SDE) \begin{equation} \label{eq mkvsde} x_t = x_0 + \int_0^t b(s,x_s,\mathscr{L}(x_s))\,ds + \int_0^t \sigma(s,x_s,\mathscr{L}(x_s))\,dw_s\,,\,\,\, t\in I\,. \end{equation} Here we use the notation $\mathscr{L}(x)$ to denote the law of the random variable $x$.
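The law dependence in~\eqref{eq mkvsde} can be made concrete by replacing $\mathscr{L}(x_s)$ with the empirical measure of $N$ interacting particles, which turns the equation into a system of classical SDEs. The following Euler--Maruyama sketch uses the illustrative coefficients $b(x,\mu) = -x\int y^2\,\mu(dy)$ and constant $\sigma$, chosen for this example only (they are not the coefficients studied in the paper):

```python
import numpy as np

# Interacting particle approximation of a McKean--Vlasov SDE:
# the law L(x_t) is replaced by the empirical measure of N particles.
# Illustrative coefficients: b(x, mu) = -x * int y^2 mu(dy), sigma = 0.3.
rng = np.random.default_rng(1)
N, T, n_steps = 2000, 1.0, 200
dt = T / n_steps
x = rng.normal(0.0, 1.0, size=N)      # x_0 ~ N(0, 1), so E[x_0^2] = 1

for _ in range(n_steps):
    m2 = np.mean(x ** 2)              # empirical second moment of the particles
    x = x - x * m2 * dt + 0.3 * np.sqrt(dt) * rng.normal(size=N)

m2_final = float(np.mean(x ** 2))
# The drift is contracting, so the second moment decreases from 1
# towards the stationary level sigma / sqrt(2) ~ 0.21.
assert 0.2 < m2_final < 0.6
```

Under suitable assumptions the empirical measure of the particles converges, as $N\to\infty$, to the law of the solution; this is the particle-system approach discussed below.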
The law of the solution of such an SDE satisfies a nonlinear Fokker--Planck--Kolmogorov equation (see also~\cite{bogachev2016distances} and more generally~\cite{BogachevKrylovRocknerShaposhnikov}): writing $\mu_t := \mathscr L(x_t)$ and $a:= \frac12\sigma \sigma^*$ we have, for $t\in I$, \begin{equation} \label{eq fwd kolmogorov} \langle \mu_t, \varphi \rangle = \langle \mu_0, \varphi \rangle + \int_0^t \left \langle \mu_s, b(s,\cdot, \mu_s)\partial_x \varphi + \text{tr}\left(a(s,\cdot,\mu_s) \partial_x^2 \varphi\right) \right \rangle \, ds\,\,\,\, \forall \varphi \in C^2_0( D )\,. \end{equation} The aim of this article is to study the existence and uniqueness of solutions to the equation~\eqref{eq mkvsde}. We will show that a weak solution to~\eqref{eq mkvsde} exists for unbounded and continuous coefficients, provided that we can find an appropriate measure-dependent Lyapunov function which ensures integrability of the solution. This generalises the results of~\cite{funaki1984certain} and~\cite{gyongy1996existence}. The work on SDEs with coefficients that depend on the law was initiated by~\cite{McKean66}, who was inspired by Kac's programme in kinetic theory~\cite{Kac56}. An excellent and thorough account of the general theory of McKean--Vlasov SDEs and their particle approximations can be found in~\cite{sznitman1991topics}. Sznitman showed that if the coefficients of \eqref{eq mkvsde} are globally Lipschitz continuous, a fixed point argument on Wasserstein space can be carried out, and consequently a solution to \eqref{eq mkvsde} is obtained as the limit of a sequence of classical SDEs. This means that existence and uniqueness results for classical SDEs allow one to establish existence and uniqueness for~\eqref{eq mkvsde}. If Lipschitz continuity does not hold, the fixed point argument typically fails.
However, in the setting of SDEs with non-degenerate diffusion coefficient, the regularisation effect of the noise allowed Zvonkin and Krylov~\cite{Zvonkin75} and later Veretennikov~\cite{Veretennikov80}, in a general multidimensional case, to show that the fixed point argument works, assuming only that the drift coefficient is H\"{o}lder continuous. This result has recently been generalised to McKean--Vlasov SDEs in~\cite{de2015strong}. The key step of the proof is to establish smoothness properties of the corresponding PDE on $D \times \mathcal{P}(D)$. To go beyond H\"{o}lder continuity one typically uses a compactness argument to establish the existence of a solution to stochastic differential equations. In the context of McKean--Vlasov SDEs, this has been done by Funaki, who was interested in probabilistic representations of Boltzmann equations~\cite{funaki1984certain}. Funaki formulated a non-linear martingale problem for McKean--Vlasov SDEs that allowed him to establish existence of a solution to~\eqref{eq mkvsde} by studying the limiting law of an Euler discretisation. His proof of existence holds for continuous coefficients satisfying a Lyapunov type condition in the state variable $x\in \mathbb{R}^d$ with polynomial Lyapunov functions. Whilst we also assume continuity of the coefficients, we allow for a much more general Lyapunov condition that depends on a measure. Furthermore, Funaki uses Lyapunov functions to establish integrability of the Euler scheme, which is problematic if one wants to depart from polynomial functions, \cite{SzpruchZhang18}. Recently, \cite{mishura2016existence} obtained existence results for~\eqref{eq mkvsde} assuming only a linear growth condition in the space variable, boundedness in the measure argument, and non-degeneracy of the diffusion. This novel result was achieved through the use of Krylov's estimates~\cite[Ch. 2, Sec. 2]{MR601776} in the context of McKean--Vlasov SDEs.
An alternative approach to establishing existence of solutions to McKean--Vlasov equations is to approximate the equation with a particle system (a system of classical SDEs that interact with each other through the empirical measure) and show that the limiting law solves the martingale problem. In this approach, one works with laws of empirical measures, i.e. on the space $\mathcal{P}(\mathcal{P}(D))$, and proves their convergence to a (weak) solution of~\eqref{eq mkvsde} by studying the corresponding non-linear martingale problem. We refer to~\cite{meleard1996asymptotic} for a general overview of that approach and to~\cite{bossy2011conditional,FournierJourdain17} and references therein for recent results exploring this approach. A general approach to establishing the existence of martingale solutions has also been presented in \cite{MR3609379}. Here, inspired by~\cite{mishura2016existence}, we tackle the problem using the Skorokhod representation theorem and convergence lemma~\cite{skorokhod1965}. For classical SDEs (equations with no dependence on the law), the lack of sufficient regularity of the coefficients, say Lipschitz continuity, proves to be the main challenge in establishing existence and uniqueness of solutions. Lack of boundedness of the coefficients typically does not lead to significant difficulty, provided these are at least locally bounded. In that case one can work with local solutions and the only concern is the possible explosion. The conditions that ensure that the solution does not explode can be formulated by using Lyapunov function techniques, as pioneered in \cite{Khasminskii80}. The key observation is that if one considers two SDEs with coefficients that agree on some bounded open domain, then the solutions, if unique, also agree until the first time the solution leaves the domain, see, for example~\cite[Ch. 10]{StroockVaradhan2006}.
This classical localisation procedure does not carry over, at least directly, from the setting of classical SDEs to McKean--Vlasov SDEs. Indeed, if we stop a classical SDE then until the stopping time the stopped process satisfies the same equation. If we take~\eqref{eq mkvsde} and consider the stopped process $y_t := x_{t\wedge \tau}$, with some stopping time $\tau$, then the equation this satisfies is \[ y_t = y_0 + \int_0^{t\wedge \tau} b(s,y_s,\mathscr{L}(x_s))\,ds + \int_0^{t\wedge \tau} \sigma(s,y_s,\mathscr{L}(x_s))\,dw_s\,,\,\,\, t\in I\,. \] Clearly, even for $t\leq \tau$ this is not the same equation since $\mathscr L(x_s) \neq \mathscr L(y_s)$. Furthermore, this is not a McKean--Vlasov SDE. This could be problematic if one would like to obtain a solution to McKean--Vlasov SDEs through a limiting procedure of stopped processes. Furthermore, let $D_k\subseteq D_{k+1}$ be a sequence of nested domains, and consider functions $\bar b$ and $\bar \sigma$ such that $\bar b = b $ and $\bar \sigma = \sigma$ on $D_k$. The equation \begin{equation*} \bar x_t =\bar x_0 + \int_0^t \bar b(s,\bar x_s,\mathscr{L}(\bar x_s))\,ds + \int_0^t \bar \sigma(s,\bar x_s,\mathscr{L}(\bar x_s))\,dw_s\,,\,\,\, t\in I\,, \end{equation*} is a McKean--Vlasov SDE, but $x_t \neq \bar x_t$ even for $t\leq \bar \tau^k$, where $\bar \tau^k = \inf\{t \geq 0: \bar x_t \notin D_k \}$. This implies that if one considers a sequence of SDEs with coefficients that agree on these subdomains, one no longer has monotonicity for the corresponding stopping times. We show that despite these difficulties it is still possible to establish the existence of weak solutions to the McKean--Vlasov SDE \eqref{eq mkvsde} using the idea of localisation, but extra care is needed. \subsection{Main Contributions} Our first main contribution is the generalisation of Lyapunov function techniques to the setting of McKean--Vlasov SDEs.
The coefficients of the equation~\eqref{eq mkvsde} depend on $(x,\mu)\in D \times \mathcal{P}(D)$ for $D\subseteq \mathbb{R}^d$. Hence the class of Lyapunov functions considered in this paper also depends on $(x,\mu)\in D \times \mathcal{P}(D)$. See Assumption~\ref{a-nonint}. Furthermore, it is natural to formulate the {\em integrated Lyapunov} condition, in which the key stability assumption is required to hold only on $\mathcal P(D)$, see Assumption~\ref{a-int} and Section~\ref{sec:motivating examples} for motivating examples. Note that it is not immediately clear how one can obtain tightness estimates for the particle approximation under the integrated conditions we propose. To work with Lyapunov functions on $\mathcal P(D)$, we take advantage of the recently developed analysis on Wasserstein spaces, and in particular of derivatives with respect to a measure as introduced by Lions in his lectures at Coll\`ege de France, see~\cite{cardaliaguet2012} and~\cite[Ch. 5]{carmona2017probabilistic}. This analysis is presented in the appendix, adapted so as to define the measure derivative on a domain. Our second main contribution is a probabilistic proof of the existence of a stationary solution to the nonlinear Fokker--Planck--Kolmogorov equation~\eqref{eq fwd kolmogorov}. Furthermore, the calculus on Wasserstein spaces allows one to study a Fokker--Planck--Kolmogorov-type equation on $\mathcal P_2(D)$. Indeed, for $\phi\in \mathcal C^{(1,1)}(\mathcal P_2(D))$ and $t\in I$, \begin{equation} \label{eq fwd measure} \begin{split} \phi(\mathscr L(x^\mu_t))=\phi(\mathscr L(x^\mu_0)) \, + & \int_0^t\langle \mathscr L(x^\mu_s), b(s,\cdot,\mathscr L(x^\mu_s)) \partial_\mu \phi(\mathscr L(x^\mu_s)) + \text{tr}\left[a(s,\cdot,\mathscr L( x^\mu_s)) \partial_y \partial_\mu \phi(\mathscr L(x^\mu_s))\right]\rangle\,ds. \end{split} \end{equation} Following the remark made by Lions in his lectures at Coll\`ege de France, the equation~\eqref{eq fwd measure} can be interpreted as a non-local transport equation on the space of measures.
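For orientation, consider the simplest case (a sketch in our notation; we write $a=\tfrac12\sigma\sigma^*$, consistent with the generator~\eqref{eq Lmu}): if $\phi$ is a linear functional, $\phi(\mu)=\langle \mu, f\rangle$ for a sufficiently smooth $f$, then $\partial_\mu \phi(\mu)(y)=\partial_y f(y)$ and $\partial_y\partial_\mu \phi(\mu)(y)=\partial_y^2 f(y)$, and \eqref{eq fwd measure} reduces to

```latex
\langle \mathscr L(x^\mu_t), f\rangle
  = \langle \mathscr L(x^\mu_0), f\rangle
  + \int_0^t \Big\langle \mathscr L(x^\mu_s),\;
      b(s,\cdot,\mathscr L(x^\mu_s))\,\partial_y f
      + \operatorname{tr}\!\big[a(s,\cdot,\mathscr L(x^\mu_s))\,\partial_y^2 f\big]
    \Big\rangle\, ds\,,
```

which is the weak form of the nonlinear Fokker--Planck--Kolmogorov equation~\eqref{eq fwd kolmogorov}.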
The reader may consult~\cite[Ch. 5 Sec. 7.4]{carmona2017probabilistic} for further details. Another way to look at this is to notice that while~\eqref{eq fwd kolmogorov} gives an equation for linear functionals of the measure, equation~\eqref{eq fwd measure} is an equation for nonlinear functionals of the measure. The existence results obtained in this paper imply the existence of a stationary solution to \eqref{eq fwd measure} in the case where $b$ and $\sigma$ do not depend on time. Finally, we formulate uniqueness results under a Lyapunov-type condition and an {\em integrated Lyapunov-type condition} that is required to hold only on $\mathcal P(D)$. This extends the standard monotone-type conditions studied in the literature, e.g.~\cite{MR2860672,MR3226169,DosReis2017LDP}. Interestingly, in some special cases we are able to obtain uniqueness under only local monotonicity conditions. Again, we do not require a non-degeneracy condition on the diffusion coefficient. We support our results with an example inspired by Scheutzow~\cite{Scheutzow87}, who showed that, in general, uniqueness of solutions to McKean--Vlasov SDEs does not hold if the coefficients are only locally Lipschitz. Again, we would like to highlight that, since the classical localisation techniques used for SDEs seem not to work in our setting, we cannot simply obtain global uniqueness results from local uniqueness and suitable estimates on the stopping times. \subsection{Motivating Examples} \label{sec:motivating examples} Let us now present some example equations to motivate the choice of the Lyapunov condition. Consider first the McKean--Vlasov stochastic differential equation \begin{equation} \label{eq example1} d x_t = -x_t \bigg[\int_\mathbb{R} y^4 \mathscr L(x_t)(dy)\bigg]\, dt + \frac{1}{\sqrt{2}}x_t\, d w_t\,,\,\,\, x_0 \in L^4(\mathcal F_0, \mathbb R^+)\,.
\end{equation} The diffusion generator for~\eqref{eq example1} is \begin{equation} \label{eq ex diff gen} L(x,\mu) v(x) := \frac{1}{4}x^2 v''(x) - x \bigg[ \int_{\mathbb R} y^4 \, \mu(dy) \bigg] v'(x)\,. \end{equation} It is not clear whether one can find a Lyapunov function such that the classical Lyapunov condition holds, i.e. $L(x,\mu)v(x)\leq m_1 v(x) + m_2$, for $m_1<0$ and $m_2\in \mathbb{R}$. However, with the Lyapunov function given by $v(x) = x^4$ we can establish that \begin{equation} \label{eq lyapunov example} \int_{\mathbb{R}}L(x,\mu) v(x)\mu(dx) \leq -\int_\mathbb{R} v(x) \mu(dx) + 4 \end{equation} holds. See Example~\ref{example integrated lyapunov} for details. We will see that this is sufficient to establish integrability of~\eqref{eq example1} on $I=[0,\infty)$. See Theorem~\ref{thm:weakexistence} and the condition~\eqref{eq b1}. Another way to proceed is to work directly with $v(\mu):=\int_\mathbb{R} x^4 \, \mu(dx)$ as a Lyapunov function on the measure space $\mathcal{P}_4(\mathbb{R})$. This requires the use of derivatives with respect to a measure as introduced by Lions in his lectures at Coll\`ege de France, see~\cite{cardaliaguet2012} or Appendix~\ref{sec measure derivatives and ito}\footnote{Derivatives with respect to a measure are defined on $\mathcal{P}_2(\mathbb{R})$, and therefore one cannot directly apply the It\^{o} formula to $v(\mu):=\int_\mathbb{R} x^4 \, \mu(dx)$. However, in this paper we will only apply the It\^{o} formula to measures supported on compact subsets of $\mathbb R^d$.}. Then \[ \partial_\mu v(\mu)(y) = 4 y^3, \quad \partial_y\partial_\mu v(\mu)(y) = 12 y^2, \,\, y \in \mathbb{R}\,. \] The generator corresponding to the appropriate It\^o formula, see e.g.
Proposition~\ref{propn ito for meas only}, is \begin{align*} & L^{\mu} v(\mu) \\ & := \int_\mathbb{R} \left( -x \int_\mathbb{R} y^4\, \mu(dy) \partial_\mu v(\mu)(x) +\frac{1}{4}x^2 \partial_y \partial_\mu v(\mu)(x) \right) \mu(d x) = \int_\mathbb{R} \left( - 4 x^4 \int_\mathbb{R} y^4 \mu(dy) + 3x^{4} \right) \mu(d x)\,. \end{align*} We note that this is the same expression as is found by taking $v(x) = x^4$ in~\eqref{eq ex diff gen} and integrating over $\mu$ (and so~\eqref{eq lyapunov example} again holds). In this case using the It\^o formula for measure derivatives brings no advantage. However, working with a Lyapunov function on the measure space becomes advantageous when the dependence of the Lyapunov function on the measure is nonlinear. Consider the following McKean--Vlasov stochastic differential equation \begin{equation} \label{eq example2} dx_t = - \left( \int_\mathbb{R} (x_t - \alpha y)\mathscr{L}{(x_t)}(dy) \right)^3\, dt + \left( \int_\mathbb{R} (x_t - \alpha y)\mathscr{L}{(x_t)}(dy) \right)^2 \sigma \, dw_t\,, \end{equation} for $t\in I$, where $\alpha$ and $\sigma$ are constants and $x_0 \in L^4(\mathcal{F}_0,\mathbb R)$. Assume that $m:= -(6\sigma^2 - 4 + 4\alpha) >0$. Since the drift and diffusion are non-linear functions of the law and the state of the process, it is natural to seek a Lyapunov function $v \in \mathcal C^{2,(1,1)}(\mathbb{R} \times \mathcal{P}(\mathbb{R}))$. See Definition~\ref{def c122}. The generator corresponding to the appropriate It\^o formula, see e.g. Proposition~\ref{propn ito formula full}, is then given by~\eqref{eq Lmu} and we will show that for the Lyapunov function \[ v(x,\mu) = \left( \int_{\mathbb{R}} (x - \alpha y)\mu(dy) \right)^4\,, \] we have \[ \int_{\mathbb{R}}(L^\mu v)(x,\mu)\,\mu(dx) \leq m - m \int_{\mathbb{R}}v(x,\mu)\,\mu(dx)\,. \] See Example~\ref{example 2} for details. Thus the condition~\eqref{eq b1} holds.
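To indicate where this bound comes from, here is a sketch of the computation in the case $\alpha \geq 0$ (the centred-moment notation $M_j$ is ours; the general case is treated in Example~\ref{example 2}). Write $M_j := \int_{\mathbb{R}} \big(x-\alpha\int_{\mathbb{R}} y\,\mu(dy)\big)^j\,\mu(dx)$. Here $\partial_\mu v(x,\mu)(y) = -4\alpha\big(\int_{\mathbb{R}}(x-\alpha z)\,\mu(dz)\big)^3$ does not depend on $y$, so $\partial_y\partial_\mu v = 0$, and the generator~\eqref{eq Lmu} gives

```latex
\int_{\mathbb{R}} (L^\mu v)(x,\mu)\,\mu(dx)
  = (6\sigma^2 - 4)\,M_6 + 4\alpha\,M_3^2
  \leq (6\sigma^2 - 4 + 4\alpha)\,M_6
  = -m\,M_6\,,
```

using $M_3^2 \leq M_6$ (Cauchy--Schwarz, $\mu$ being a probability measure) together with $\alpha\geq 0$. Since $M_6 \geq M_4^{3/2}$ by Jensen's inequality, $M_4 - M_6 \leq \sup_{r\geq 0}(r - r^{3/2}) = 4/27 \leq 1$, and hence $-m M_6 \leq m - m M_4 = m - m\int_{\mathbb{R}} v(x,\mu)\,\mu(dx)$.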
This is sufficient to establish existence of solutions to~\eqref{eq example2} on $I=[0,\infty)$, as Theorem~\ref{thm:weakexistence} will show. Regarding our continuity assumptions for existence of solutions to~\eqref{eq mkvsde}, we note that we only require a type of joint continuity of the coefficients in $(x,\mu) \in \mathbb R^d \times \mathcal P(\mathbb R^d)$ and that this allows us to consider coefficients where the dependence on the measure does not arise via an integral with respect to that measure. An example is \[ S_\alpha(\mu):=\frac{1}{\alpha}\int_0^\alpha \inf\{x\in \mathbb{R}\,:\, \mu((-\infty, x]) \geq s\}\,ds\,, \] for $\alpha > 0$ fixed. This quantity is known as the ``expected shortfall'' and is a type of risk measure. See Example~\ref{example 3} for details. \section{Existence results} For a domain $D\subseteq \mathbb R^d$, we will use the notation $\mathcal{P}(D)$ for the space of probability measures over $( D,\mathcal{B}(D))$. We will consider this as a topological space with the topology induced by the weak convergence of probability measures. We will write $\mu_n \Rightarrow \mu$ if $(\mu_n)_n$ converges to $\mu$ in the sense of weak convergence of probability measures. For $p\geq 1$ we use $\mathcal{P}_p(D)$ to denote the set of probability measures that are $p$-integrable (i.e. $\int_D |x|^p \mu(dx) < \infty$ for $\mu \in \mathcal P_p(D)$). We will consider this as a metric space with the metric given by the Wasserstein distance with exponent $p$, see~\eqref{eq p-wasserstein}. Denote by $C_b(D)$ and $C_0(D)$ the subspaces of continuous functions that are bounded and compactly supported, respectively. We use $\sigma^*$ to denote the transpose of a matrix $\sigma$ and for a square matrix $a$ we use $\text{tr}(a)$ to denote its trace. We use $\partial_x v$ to denote the (column) vector of first order partial derivatives of $v$ with respect to the components of $x$ (i.e.
the gradient of $v$ with respect to $x$) and $\partial_x^2 v$ to denote the square matrix of all the mixed second order partial derivatives with respect to the components of $x$ (i.e. the Hessian matrix of $v$ with respect to $x$). If $a,b \in \mathbb R^d$ then $ab$ denotes their dot product. Recall that we are using the concept of derivatives with respect to a measure as introduced by Lions in his lectures at Coll\`ege de France, see~\cite{cardaliaguet2012}. For convenience, the construction and main definitions are in Appendix~\ref{sec measure derivatives and ito}. In particular, see Definition~\ref{def c122} to clarify what is meant by the space $\mathcal C^{1,2,(1,1)}(I \times D \times \mathcal{P}(D))$. In short, saying that a function $v$ is in such a space means that all the derivatives appearing in~\eqref{eq Lmu} exist and are appropriately jointly continuous, so that we may apply the It\^o formula for a function of a process and a flow of measures, see Proposition~\ref{propn ito formula full}. The use of such an It\^o formula naturally leads to the following form of a diffusion generator. First we note that throughout this paper we assume that for a domain $D\subseteq{\mathbb R^d}$ there is a nested sequence of bounded sub-domains, i.e. bounded, open, connected subsets of $\mathbb R^d$, $(D_k)_k$ such that $D_k \subset D_{k+1}$, $\overline{D_k} \subset D$ and $\bigcup_k D_k = D$.
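As an aside, the expected-shortfall functional $S_\alpha$ introduced above can be evaluated exactly on an empirical measure, since the quantile function of an $N$-point sample is piecewise constant on its order statistics. A minimal sketch (the Python function name and implementation are ours, purely illustrative):

```python
import numpy as np

def expected_shortfall(sample, alpha):
    """S_alpha of the empirical measure of `sample`:
    (1/alpha) * integral over (0, alpha] of the generalized inverse of the CDF.
    For an empirical measure, F^{-1}(s) equals the k-th order statistic
    for s in ((k-1)/N, k/N], so the integral is a weighted sum."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    # weight that the integral over (0, alpha] puts on each order statistic
    grid = np.minimum(np.arange(1, n + 1) / n, alpha)
    weights = np.diff(np.concatenate(([0.0], grid)))
    return float(weights @ xs) / alpha
```

For the sample $\{1,\dots,10\}$ and $\alpha=0.2$ this returns the average of the two smallest points, $1.5$, matching the defining formula; the point is that such functionals depend on $\mu$ continuously without being of the form $\int f\,d\mu$.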
For $(t,x)\in I \times D$, $\mu \in \mathcal P( D_k)$ for some $k\in \mathbb N$, and for $v \in \mathcal C^{1,2,(1,1)}(I \times D \times \mathcal{P}_2(D))$ we define $L^{\mu} = L^\mu(t,x,\mu)$ as \begin{equation} \label{eq Lmu} \begin{split} (L^\mu v)(t,x,\mu) & := \bigg(\partial_t v + \frac{1}{2}\text{tr}\big(\sigma \sigma^*\partial_x^2 v\big) + b \partial_x v\bigg)(t, x,\mu) \\ & + \int_{\mathbb{R}^d}\left( b (t, y, \mu) (\partial_\mu v) (t,x,\mu)(y) + \frac{1}{2}\text{tr}\big((\sigma\sigma^*)(t,y,\mu)(\partial_y \partial_\mu v)(t,x,\mu)(y) \big) \right) \mu(dy). \end{split} \end{equation} We note that in the case $v \in C^{1,2}(I \times D )$, i.e. when $v$ does not depend on the measure, the above generator reduces to \begin{equation*} (L^\mu v)(t,x) = (Lv)(t,x) := \bigg(\partial_t v + \frac{1}{2}\text{tr}\big(\sigma \sigma^*\partial_x^2 v\big) + b \partial_x v\bigg)(t,x) \,. \end{equation*} \subsection{Assumptions and Main Result} We assume that $b:I\times D \times \mathcal{P}(D) \to \mathbb{R}^d$ and $\sigma:I\times D \times \mathcal{P}(D)\to \mathbb{R}^{d\times d'}$ are measurable (later we will add joint continuity and local boundedness assumptions). We require the existence of a Lyapunov function satisfying either of the following conditions. \begin{assumption}[Lyapunov condition] \label{a-nonint} There is $v \in \mathcal C^{1,2,(1,1)}(I \times D \times \mathcal{P}_2(D))$, $v\geq 0$, such that \begin{enumerate}[i)] \item There are locally integrable, non-random functions $m_1= m_1(t)$ and $m_2=m_2(t)$ on $I$ such that for all $t \in I$, all $x\in D$ and all $\mu \in \mathcal P(D_k)$, $k\in \mathbb N$, we have \begin{equation} \label{eq b2} L^\mu(t,x,\mu) v(t,x,\mu) \leq m_1(t) v(t,x,\mu) + m_2(t)\,.
\end{equation} \item There is $V = V(t,x)$ such that for all $\mu \in \mathcal P(D_k)$, $k\in \mathbb N$, we have \begin{equation} \label{eq V ineq} \int_D V(t,x)\,\mu(dx) \leq \int_D v(t,x,\mu)\,\mu(dx)\,\,\,\, \forall t\in I, \, \end{equation} and \begin{equation} \label{eq Vk limit} V_k := \inf_{s \in I, x\in \partial D_k } V(s,x) \,\,\, \text{$\to \infty$ as $k\to \infty$.} \end{equation} \item The initial value $x_0$ is $\mathcal{F}_0$-measurable, $\mathbb P(x_0 \in D) = 1$ and $\mathbb{E} v(0,x_0,\mathscr L(x_0)) < \infty$. \end{enumerate} \end{assumption} \begin{assumption}[Integrated Lyapunov condition] \label{a-int} There is $v \in \mathcal C^{1,2,(1,1)}(I \times D \times \mathcal{P}_2(D))$, $v\geq 0$, such that: \begin{enumerate}[i)] \item There are locally integrable, non-random functions $m_1= m_1(t)$ and $m_2=m_2(t)$ on $I$ such that for all $t \in I$ and for all $\mu \in \mathcal P(D_k)$, $k\in \mathbb N$, we have \begin{equation} \label{eq b1} \int_{ D }L^\mu(t,x,\mu) v(t,x,\mu) \mu(dx)\leq m_1(t) \int_{D } v(t,x,\mu) \mu(dx) + m_2(t)\,. \end{equation} \item There is $V = V(t,x)$ satisfying~\eqref{eq V ineq} and \begin{equation} \label{eq Vk D limit} V_k := \inf_{s \in I, x\in D_k^c } V(s,x) \,\,\, \text{$\to \infty$ as $k\to \infty$.} \end{equation} \item The initial value $x_0$ is $\mathcal{F}_0$-measurable, $\mathbb P(x_0 \in D) = 1$ and $\mathbb{E} v(0,x_0,\mathscr L(x_0)) < \infty$. \end{enumerate} \end{assumption} We make the following observations. \begin{remark} \label{rmk assumption} \hfill{ } \begin{enumerate}[i)] \item We have deliberately not specified the signs of the functions $m_1$ and $m_2$. \item If~\eqref{eq b2} holds then for $L^{\mu,k} := \mathbbm{1}_{x\in D_k} L^\mu$ we have that~\eqref{eq b2} also holds with $L^\mu$ replaced by $L^{\mu,k}$. Indeed, since $v$ is non-negative, we have $ L^\mu v (t,x,\mu) \mathbbm{1}_{x \in D_k} \leq [m_1(t) v(t,x,\mu) + m_2(t)]\mathbbm{1}_{x \in D_k} $.
On the other hand, if only~\eqref{eq b1} holds then, in general, this does not imply that~\eqref{eq b1} holds with $L^\mu$ replaced by $L^{\mu,k}$, unless $ \mu \in \mathcal P(D_k)$. \end{enumerate} \end{remark} Regarding the continuity of the coefficients in~\eqref{eq mkvsde} and their local boundedness, we require the following. \begin{assumption}[Continuity] \label{ass continuity} The functions $b:I\times D \times \mathcal{P}(D) \to \mathbb{R}^d$ and $\sigma:I\times D \times \mathcal{P}(D)\to \mathbb{R}^{d\times d'}$ are jointly continuous in the last two arguments in the following sense: if $(\mu_n) \subset \mathcal P(D)$ are such that for all $n$ \[ \sup_{t \in I} \int_{ D} v(t,x,\mu_n)\,\mu_n(dx) < \infty \] and if $(x_n \rightarrow x, \mu_n \Rightarrow \mu )$ as $n\to \infty$ then $b (t,x_n,\mu_n) \to b(t,x,\mu)$ and $\sigma (t,x_n,\mu_n) \to \sigma(t,x,\mu)$ as $n\to \infty$. \end{assumption} \begin{assumption}[Local boundedness] \label{ass local boundedness} There exist constants $c_k \geq 0$ such that for any $\mu \in \mathcal P(D)$ \[ \sup_{x\in D_k} |b(t,x,\mu)| \leq c_k \left(1+\int_{D} v(t,y,\mu)\mu(dy)\right)\,, \] \[ \sup_{x\in D_k} |\sigma(t,x,\mu)| \leq c_k \left(1+\int_{D} v(t,y,\mu)\mu(dy)\right)\,. \] \end{assumption} \begin{assumption}[Integrated growth condition] \label{ass integrated growth} There exists a constant $c \geq 0$ such that for all $\mu \in \mathcal P(D_k)$, $k\in \mathbb{N}$, we have, \[ \int_{D_k} |b(t,x,\mu)| + |\sigma(t,x,\mu)|^2 \mu(dx) \leq c \left(1+\int_{D_k} v(t,x,\mu)\mu(dx)\right),\,\,\,\, \forall t\in I. \] \end{assumption} The continuity in the measure argument required in Assumption~\ref{ass continuity} is very weak, but it might be hard to verify. In the case of unbounded domains, the property~\eqref{eq V ineq} will often hold for $V(x)=|x|^p$, $p \geq 1$. In that case we have $\mu_n \in \mathcal P_p(D)$ for all the measures $\mu_n$ under consideration for the convergence of the coefficients.
But from~\cite[Theorem 6.9]{villani2009}, we know that for such $\mu_n \in \mathcal P_p(D)$, with uniformly bounded $p$-th moments, weak convergence of measures is equivalent to convergence in the $p'$-th Wasserstein metric for any $p'<p$. Hence, in such a case, it is enough to check that if $x_n \rightarrow x$ and $W_{p'}(\mu_n,\mu) \to 0$ as $n\to \infty$ then $b (x_n,\mu_n) \to b(x,\mu)$ and $\sigma (x_n,\mu_n) \to \sigma(x,\mu)$ as $n\to \infty$. This will be satisfied in particular if \begin{equation} \label{eq wasserstein cont crit} | b(x_n,\mu_n) - b(x,\mu) | + | \sigma(x_n,\mu_n) - \sigma(x,\mu) | \leq \rho(|x-x_n|) + W_{p'}(\mu_n,\mu), \end{equation} for some function $\rho$ such that $\rho(r) \to 0$ as $r \to 0^+$. We note that this is a common assumption, see e.g.~\cite{funaki1984certain}. At this point it may be worth noting that the $p$-Wasserstein distance on $\mathcal P_p(D)$ is \begin{equation} \label{eq p-wasserstein} W_p(\mu,\nu) := \left( \inf_{\pi \in \Pi(\mu,\nu)} \int_{D\times D} |x-y|^p \, \pi(dx,dy) \right)^{\frac{1}{p}}\,, \end{equation} where $\Pi(\mu,\nu)$ denotes the set of {\em couplings} between $\mu$ and $\nu$, i.e. all measures $\pi$ on $\mathscr{B}(D\times D)$ such that $\pi(B\times D) = \mu(B)$ and $\pi(D\times B) = \nu(B)$ for every $B \in \mathscr B(D)$. Note that in the case of McKean--Vlasov SDEs it is often useful to think of the solution as a pair consisting of the process $x$ and its law, i.e. $(x_t,\mathscr{L}(x_t))_{t \in I}$. The coefficients of the McKean--Vlasov SDE depend on the law of the solution and the main focus of this paper is on equations with unbounded coefficients; therefore a condition on the integrability of the law is natural.
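For empirical measures on $\mathbb{R}$ with the same number of atoms, the infimum in~\eqref{eq p-wasserstein} is attained by the monotone coupling that matches order statistics, so $W_p$ can be computed directly (a small illustrative sketch; the function name is ours):

```python
import numpy as np

def wasserstein_1d(xs, ys, p=2):
    """W_p between the empirical measures of two equally sized samples on R.
    In one dimension, for cost |x-y|^p with p >= 1, the optimal coupling
    pairs the i-th smallest atom of one sample with the i-th smallest of the other."""
    xs, ys = np.sort(np.asarray(xs, dtype=float)), np.sort(np.asarray(ys, dtype=float))
    assert xs.shape == ys.shape, "samples must have the same number of atoms"
    return float(np.mean(np.abs(xs - ys) ** p) ** (1.0 / p))
```

For instance, for the two-point samples $\{0,1\}$ and $\{0,3\}$ this gives $W_1 = 1$ and $W_2 = \sqrt{2}$; such a routine gives a cheap way of probing a criterion like~\eqref{eq wasserstein cont crit} numerically for given coefficients.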
\begin{definition}[$v$-integrable weak solution] \label{def soln} A $v$-integrable weak solution to~\eqref{eq mkvsde} on $I$ in $D$ is \[ \big(\Omega, \mathcal F, \mathbb P, (\mathcal F_t)_{t \in I}, (w_t)_{t \in I}, (x_t)_{t \in I}\big), \] where $(\Omega, \mathcal F, \mathbb P)$ is a probability space, $(\mathcal F_t)_{t \in I}$ is a filtration, $(w_t)_{t \in I}$ is a Wiener process that is a martingale w.r.t. the above filtration, $(x_t)_{t \in I}$ is an adapted process satisfying~\eqref{eq mkvsde} such that $x\in C(I; D)$ a.s. and, finally, for all $t\in I$ we have $\mathbb E v(t,x_t,\mathscr L(x_t)) < \infty$. \end{definition} Before we state the main theorem of this paper, we state the conditions on $m_1,m_2$ that allow one to establish the integrability and tightness estimates, which in the case $I=[0,\infty)$ need to be uniform in time. \begin{remark}[On finiteness of $M(t)$] \label{remark on Mt} Define $\gamma(t) := \exp\left(-\int_0^t m_1(s)\,ds\right)$ and \begin{equation} \begin{split} \label{eq: uniform tauk est} M(t):= & \frac{\mathbb E v(0,\tilde x_0,\mathscr L(\tilde x_0))}{\gamma(t)} + \int_{0}^t \frac{\gamma(s)}{\gamma(t)} m_2(s) ds,\\ M^+(t):= & e^{\int_0^t (m_1(s))^+\,ds} \left(\mathbb{E} v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) + \int_0^{t } \gamma(s) m_2^+(s) \, ds\right)\, . \end{split} \end{equation} Note that $M(t)\leq M^+(t)$. \begin{enumerate}[i)] \item If $I=[0,T]$, then $m_1$ and $m_2$ are set to $0$ outside $I$, leading to \[ \sup_{t<\infty} \int_{0}^t \frac{\gamma(s)}{\gamma(t)} m_2(s) ds \leq \int_0^{T} e^{\int_s^{T} m_1(r)\,dr} |m_2(s)|ds < \infty\,. \] \item If $I=[0,\infty)$ and we have \begin{equation} \label{eq extra condition on m1m2} m_1(t) \leq 0 \,\,\,\forall t\geq0 \,\,\, \text{and}\,\,\, \int_0^\infty \gamma(s)|m_2(s)|\,ds<\infty\,, \end{equation} then \[ \sup_{t<\infty} \int_0^t e^{\int_s^{t} m_1(r)\,dr} m_2(s)\,ds \leq \int_0^\infty\gamma(s)|m_2(s)| \, ds < \infty.
\] \end{enumerate} In both of these cases we have $\sup_{t\in I}M(t) <\infty$ and $\sup_{t\in I}M^+(t) <\infty$. \end{remark} \begin{theorem}\label{thm:weakexistence} Let $D\subseteq \mathbb R^d$ and let Assumptions~\ref{ass continuity} and~\ref{ass local boundedness} hold. Then we have the following. \begin{enumerate}[i)] \item If Assumption~\ref{a-nonint} holds and $\sup_{t\in I } M^+(t) < \infty$, then there exists a $v$-integrable weak solution to~\eqref{eq mkvsde} on $I$. \item If Assumptions~\ref{a-int} and~\ref{ass integrated growth} hold and $\sup_{t\in I } M(t) < \infty$, then there exists a $v$-integrable weak solution to~\eqref{eq mkvsde} on $I$. \end{enumerate} Additionally, \[ \sup_{t\in I} \mathbb E v(t, x_t, \mathscr L( x_t)) < \infty\,. \] \end{theorem} We make the following comment. By virtue of Assumption~\ref{ass local boundedness}, under the conditions of Theorem~\ref{thm:weakexistence} the $v$-integrable weak solution to~\eqref{eq mkvsde} obtained by the theorem satisfies the forward nonlinear Fokker--Planck--Kolmogorov equation~\eqref{eq fwd kolmogorov}, where $\mu_t = \mathscr L(x_t)$. \subsection{Proof of the existence results} We will use the convention that the infimum of an empty set is positive infinity. We extend $b$ and $\sigma$ in a measurable but discontinuous way to functions on $\mathbb{R}^+\times \mathbb R^d \times \mathcal{P}(\mathbb R^d)$ by taking \[ b(t,x,\mu) = \sigma(t,x,\mu) = 0 \,\,\, \text{if $x\in \mathbb R^d \setminus D$ or if $t\notin I$}. \] For $t\notin I$ we set $m_1(t) = m_2(t)=0$. We define \[ b^k(t,x,\mu) := \mathds 1_{x\in D_k} b(t,x,\mu)\,\,\,\text{and}\,\,\, \sigma^k(t,x,\mu) := \mathds 1_{x\in D_k} \sigma(t,x,\mu)\,. \] \begin{lemma} \label{lemma:uniform} Let Assumption~\ref{ass local boundedness} hold.
Let $(\tilde \Omega, \tilde{\mathcal F}, \tilde{\mathbb P})$ be a probability space carrying a filtration $(\tilde{\mathcal F}_t)_{t\in I}$, a Wiener process $\tilde w$ and a process $\tilde{x}^k$ that satisfies, for all $t \in I$, \begin{equation} \label{eq mkvsde k} d\tilde x^k_t = b^k(t,\tilde x^k_t,\mathscr L(\tilde x^k_t))\,dt + \sigma^k(t,\tilde x^k_t,\mathscr L(\tilde x^k_t))\,d\tilde w_t\,,\,\,\,\tilde x^k_0 = \tilde x_0\,. \end{equation} For $m\leq k$, let $\tilde{\tau}^k_m := \inf\{t\in I:\tilde{x}^k_t \notin D_m\}$. \begin{enumerate}[i)] \item If either Assumption~\ref{a-nonint} or~\ref{a-int} holds then for any $t\in I$, \begin{equation*} \sup_k \mathbb E v(t,\tilde x^k_t,\mathscr L(\tilde x^k_t)) \leq M(t) \,. \end{equation*} \item If either Assumption~\ref{a-nonint} or~\ref{a-int} holds then for any $t\in I$, \[ \mathbb P(\tilde{\tau}_k^k < t) \leq \mathbb P (\tilde{x}_0 \notin D_k) + M(t) V_k^{-1}\,. \] \item If Assumption~\ref{a-nonint} holds then for any $t\in I$, \[ \sup_k \mathbb P(\tilde{\tau}^k_m < t) \leq \mathbb P (\tilde{x}_0 \notin D_m) + M^+(t)V_m^{-1}\,. \] \item If Assumption~\ref{a-int} holds then for any $t\in I$ \[ \sup_{k}\mathbb P (\tilde x^k_t \notin D_m) \leq M(t)V_m^{-1}\,. \] \end{enumerate} \end{lemma} \begin{proof} For each $k$, $\mathscr{L}(\tilde x^k_t) \in \mathcal{P}_2(D)$ and therefore we can apply the It\^o formula from Proposition~\ref{propn ito formula full} to $\tilde{x}^k$, its law and $\gamma v$. Thus \[ \begin{split} & \gamma(t) v(t,\tilde x^k_t,\mathscr{L}(\tilde x^k_t)) = \gamma(0) v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) \\ & \qquad + \int_0^{t} \gamma(s) [L^\mu v - m_1 v](s,\tilde x^k_s,\mathscr{L}(\tilde x^k_s)) \, ds + \int_0^{t} \gamma(s) [(\partial_x v) \sigma](s,\tilde x^k_s,\mathscr{L}(\tilde x^k_s)) \, d\tilde w_s\,.
\end{split} \] Due to the local boundedness of the coefficients and either Lyapunov condition,~\eqref{eq b2} or~\eqref{eq b1}, we get \begin{equation} \label{eq: bound} \mathbb E \gamma(t) v(t ,\tilde x^k_t,\mathscr{L}(\tilde x^k_t)) \leq \mathbb E \gamma(0) v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) + \int_0^t \gamma(s) m_2(s)ds \,. \end{equation} This proves the first part of the lemma. For the second part we proceed as follows. Since $\tilde{x}^k_t = \tilde{x}^k_{t\wedge \tilde{\tau}_k^k}$ for all $t\in I$, which implies $\mathscr{L}(\tilde x^k_t) = \mathscr{L}(\tilde x^k_{t\wedge \tilde{\tau}^k_k})$ for all $t\in I$, we further observe \[ \begin{split} \mathbb E v\left(t,\tilde x^k_{t},\mathscr{L}(\tilde x^k_{t})\right) = \, \mathbb E v\left(t,\tilde x^k_{t\wedge \tilde{\tau}_k^k},\mathscr{L}(\tilde x^k_{t\wedge\tilde{\tau}^k_k})\right) \geq \mathbb E \left[ V\left(t,\tilde x^k_{t\wedge \tilde{\tau}_k^k}\right) \mathds 1_{0< \tilde{\tau}_k^k < t} \right] = & \, \mathbb E \left[ V\left(t ,\tilde x^k_{\tilde{\tau}_k^k}\right) \mathds 1_{0 < \tilde{\tau}_k^k < t} \right]\\ \geq & V_k\mathbb P (0< \tilde{\tau}_k^k < t)\,. \end{split} \] Hence, \[ \begin{split} & \mathbb P( \tilde{\tau}_k^k < t) = \mathbb P( \tilde{\tau}_k^k < t, \tilde{\tau}_k^k >0) + \mathbb P( \tilde{\tau}_k^k < t, \tilde{\tau}_k^k= 0) \leq \mathbb P(0 < \tilde{\tau}_k^k < t) + \mathbb P (\tilde{x}_0 \notin D_k) \\ & \leq \frac{\mathbb E v\left(t,\tilde x^k_{t\wedge \tilde{\tau}_k^k},\mathscr{L}(\tilde x^k_t)\right)}{V_k} + \mathbb P (\tilde{x}_0 \notin D_k)\,. \end{split} \] This completes the proof of the second statement. To prove the third statement we first note that for $m>k$ we have $\mathbb P(\tilde{\tau}^k_m < t) = \mathbb P(\tilde x_0 \notin D_m )$. Thus we may assume that $m\leq k$. We proceed similarly as above, but with the crucial difference that $\tilde{x}^k_t$ is no longer equal to $\tilde{x}^k_{t\wedge \tilde{\tau}^k_m}$.
Our aim is to apply the It\^{o} formula to the function $v$, the process $(\tilde x^k_{t\wedge \tau^k_m})_{t\in I}$ and the flow of marginal measures $(\mathscr{L}(\tilde x^k_t))_{t\in I}$. Note that $\mathscr{L}(\tilde x^k_{t\wedge \tau^k_m}) \neq \mathscr{L}(\tilde x^k_t)$. Nevertheless, the It\^o formula of Proposition~\ref{propn ito formula full} may be applied. After taking expectations this yields \[ \mathbb{E} \left[ \gamma(t\wedge \tau^k_m) v(t\wedge \tau^k_m,\tilde x^k_{t\wedge \tau^k_m},\mathscr{L}(\tilde x^k_t)) \right] = \mathbb{E} v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) + \mathbb{E} \int_0^{t\wedge \tau^k_m} \gamma(s) \left[ L^{\mu} v - m_1 v\right](s,\tilde x^k_s,\mathscr{L}(\tilde x^k_s)) \, ds. \] We now use~\eqref{eq b2} to see that \[ \begin{split} & \mathbb{E} \left[ \gamma(t\wedge \tau^k_m) v(t\wedge \tau^k_m,\tilde x^k_{t\wedge \tau^k_m},\mathscr{L}(\tilde x^k_t) )\right] \leq \mathbb{E} v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) + \mathbb{E} \int_0^{t\wedge \tau^k_m} \gamma(s) m_2(s) \, ds \\ & \leq \mathbb{E} v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) + \int_0^{t } \gamma(s) m_2^+(s) \, ds =: \bar M(t)\,. \end{split} \] Then \[ \inf_{s\leq t}\gamma(s) \mathbb{E} v(t\wedge \tau^k_m,\tilde x^k_{t\wedge \tau^k_m},\mathscr{L}(\tilde x^k_t) ) \leq \mathbb{E} \gamma(t\wedge \tau^k_m) v(t\wedge \tau^k_m,\tilde x^k_{t\wedge \tau^k_m},\mathscr{L}(\tilde x^k_t) ) \leq \bar M(t) \] and so, following the same argument as in the proof of the second statement of this lemma, \[ \begin{split} & \mathbb P( \tau^k_m < t) \leq \frac{1}{\inf_{s\leq t} \gamma(s)} \frac{\bar M(t)}{V_m} + \mathbb P (\tilde{x}_0 \notin D_m)\,. \end{split} \] We conclude by observing that \[ \inf_{s\leq t} \gamma(s) \geq e^{-\int_0^t (m_1(s))^+\,ds}. \] To prove the fourth statement, first note that for $m>k$, $\mathbb{P} (\tilde x_t^k \notin D_m)=0$ and hence we take $m\leq k$.
Conditions~\eqref{eq V ineq} and~\eqref{eq Vk D limit} imply that \begin{equation*} \begin{split} \mathbb E v\left(t,\tilde x^k_{t},\mathscr{L}(\tilde x^k_t)\right) \geq & \int_{D} V(t,x) \mathscr{L}(\tilde x^k_t)(dx) \geq \int_{D\cap D_m^c} V(t,x) \mathscr{L}(\tilde x^k_t)(dx) \geq V_m \mathbb{P}( \tilde x^k_t \notin D_m)\,. \end{split} \end{equation*} \end{proof} \begin{remark} \label{remark on x_0} Since we are assuming that $\mathbb P(x_0 \in D) = 1$ we have \[ \lim_{k \to \infty} \mathbb P(x_0 \notin D_k) = 1 - \lim_{k\to \infty} \mathbb P(x_0 \in D_k) = 1 - \mathbb P \left(\bigcup_k \{x_0 \in D_k\}\right) = 0\,. \] \end{remark} \begin{corollary} \label{corollary bound for limit process} Let Assumption~\ref{ass local boundedness} hold. Let $(\tilde \Omega, \tilde{\mathcal F}, \tilde{\mathbb P})$ be a probability space carrying a filtration $(\tilde{\mathcal F}_t)_{t\in I}$, a Wiener process $\tilde w$ and a process $\tilde{x}^k$ such that~\eqref{eq mkvsde k} holds for all $t \in I$. Assume that $\tilde x^k \to \tilde x$ in $C(I;\bar D)$. If either Assumption~\ref{a-nonint} or~\ref{a-int} holds then \[ \sup_{t\in I} \mathbb E v(t,\tilde x_t, \mathscr L(\tilde x_t)) \leq \sup_{t\in I} M(t) \,, \] where $M$ is given in~\eqref{eq: uniform tauk est}. \end{corollary} \begin{proof} By Fatou's lemma, the continuity of $v$ and~\eqref{eq: uniform tauk est} we get \[ \mathbb E v(t,\tilde x_t, \mathscr L(\tilde x_t)) \leq \liminf_{k\to \infty} \mathbb E v(t,\tilde x^k_t, \mathscr L(\tilde x^k_t)) \leq \sup_{t\in I} M(t)\,. \] The result follows by taking the supremum over $t$ and invoking Remark~\ref{remark on Mt}. \end{proof} Our aim is to use Skorokhod's arguments to prove the existence of a weak (also known as martingale) solution to the equation~\eqref{eq mkvsde}. Before we proceed to the proof of the main Theorem~\ref{thm:weakexistence} we need to establish tightness of the laws of the processes given by~\eqref{eq mkvsde k}.
\begin{lemma}[Tightness] \label{lemma tightness} Let $\tilde{x}^k$ be the process defined in~\eqref{eq mkvsde k}. \begin{enumerate}[i)] \item Let Assumptions~\ref{a-nonint} and~\ref{ass local boundedness} hold and $\sup_{t\in I } M^+(t) < \infty$. Then the laws of $(\tilde x^k)_k$ are tight on $C(I; \bar D)$. \item Let Assumptions~\ref{a-int},~\ref{ass local boundedness} and~\ref{ass integrated growth} hold and $\sup_{t\in I } M (t) < \infty$. Then the laws of $(\tilde x^k)_k$ are tight on $C(I; \bar D)$. Additionally, for any $\varepsilon >0$ there is $m_\varepsilon$ such that for $m\geq m_{\varepsilon}$ \[ \sup_k \mathbb{P}(\tau^k_m \in I)\leq \varepsilon. \] \end{enumerate} \end{lemma} \begin{proof} $i)$ Under Assumption~\ref{a-nonint}, tightness of the laws of $(\tilde x^k)_k$ on $C(I; \bar D)$ follows from the third statement in Lemma~\ref{lemma:uniform}, together with Remarks~\ref{remark on Mt} and~\ref{remark on x_0}. Indeed, given $\varepsilon > 0$ we can find $m_0$ such that for any $m>m_0$ \[ \mathbb P(\tilde{\tau}^k_m < \infty) \leq \mathbb P(\tilde x_0 \notin D_m) + \sup_{t\in I}M^+(t) V_m^{-1} \leq \varepsilon/2 + \varepsilon/2\,, \] due to, in particular, our assumption that $V_m \to \infty$ as $m \to \infty$. $ii)$ First we observe that for every $\ell$ and $(t_1,\ldots,t_{\ell})$ in $I$, the joint distribution of $(\tilde{x}_{t_1}^k, \ldots, \tilde{x}_{t_\ell}^k )$ is tight. Indeed, statement iv) in Lemma~\ref{lemma:uniform} guarantees tightness of the law of $\tilde{x}_{t}^k$ for any $t\in I$. Given $\varepsilon > 0$, for any $\ell \in \mathbb{N}$ we can find $m_0$ such that for any $m>m_0$ \[ \mathbb P\bigg(\bigcup_{i=1}^{\ell}\big\{\tilde{x}_{t_i}^k\notin D_m\big\} \bigg) \leq \ell \sup_{t\in I}M(t) V_m^{-1} \leq \varepsilon\,, \] due to the assumption that $V_m \to \infty$ as $m \to \infty$. We will use Skorokhod's Theorem (see~\cite[Ch. 1 Sec. 6]{skorokhod1965}).
This will allow us to conclude tightness of the laws of $(\tilde x^k)_k$ on $C(I; \bar D)$ as long as we can show that for any $\varepsilon>0$ \[ \lim_{h\rightarrow 0} \sup_k \sup_{|s_1 - s_2| \leq h } \mathbb{P}(| \tilde{x}_{s_1}^k - \tilde{x}_{s_2}^k | > \varepsilon ) =0\,. \] From~\eqref{eq mkvsde k}, using Assumption~\ref{ass integrated growth}, we get, for $0< s_1 - s_2 < 1$, \begin{equation*} \begin{split} \mathbb{E} |\tilde x^k_{s_1} - \tilde x^k_{s_2} | \leq & \int_{s_2}^{s_1} \mathbb{E} | b^k(r,\tilde x^k_r,\mathscr L(\tilde x^k_r))|\,dr + \left( \mathbb{E} \int_{s_2}^{s_1} | \sigma^k(r,\tilde x^k_r,\mathscr L(\tilde x^k_r))|^2 \,dr \right)^{\frac12} \\ \leq & c \int_{s_2}^{s_1} (1 + \sup_k \mathbb{E} v(r,\tilde x^k_r,\mathscr L(\tilde x^k_r)) ) \,dr + \left( c \int_{s_2}^{s_1} (1 + \sup_k \mathbb{E} v(r,\tilde x^k_r,\mathscr L(\tilde x^k_r)) ) \,dr \right)^{\tfrac12} \\ \leq & c \left(1+\sup_{t\in I}M(t)\right) (s_1-s_2)^{\tfrac12} \,. \end{split} \end{equation*} Markov's inequality then leads to \[ \sup_k \sup_{|s_1 - s_2|\leq h} \mathbb P\left(|\tilde x^k_{s_1} - \tilde x^k_{s_2}| > \varepsilon\right) \leq \frac{c}{\varepsilon} \left(1+\sup_{t\in I}M(t)\right) h^{\frac12}\,, \] which tends to $0$ as $h\to 0$ and thus concludes the proof of tightness. We will now prove the second statement in $ii)$. Note that $C(I,D)$ is an open subset of $C(I,\bar D)$. Note also that $C(I,D_k) \subset C(I,D_{k+1})$ and $\bigcup_k C(I,D_k) = C(I,D)$. We know that for any $\varepsilon>0$ there is a compact set $\mathcal{K}_\varepsilon \subset C(I,D)$ such that \[ \sup_k \mathbb{P}(\tilde x^k \notin \mathcal{K}_{\varepsilon}) \leq \varepsilon. \] Since $\mathcal{K}_\varepsilon \subset C(I;D) = \bigcup_k C(I;D_k)$ is compact and the sets $C(I;D_k)$ are nested and open, there must be some $k^\ast$ such that $\mathcal{K}_\varepsilon \subset C(I;D_{k^\ast})$.
But this means that \[ \mathbb{P}(\tilde x^k \notin C(I;D_{k^\ast})) \leq \mathbb{P}(\tilde x^k \notin \mathcal{K}_\varepsilon) \] and so \[ \mathbb{P}(\tau^k_m \in I )= \mathbb{P}(\tilde x^k \notin C(I;D_m)) \leq \mathbb{P}( \tilde x^k \notin C(I;D_{k^{\ast}})) \leq \mathbb{P}(\tilde x^k \notin \mathcal{K}_\varepsilon)\leq \varepsilon \] for all $m\geq k^{\ast}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:weakexistence}] Recall that we have extended $b$ and $\sigma$ so that they are now defined on $[0, \infty) \times \mathbb R^d \times \mathcal{P}(\mathbb R^d)$. Hence from now on we work with $I=[0,\infty)$. Let us define $t^n_i := \tfrac{i}{n}$, $i=0,1,\ldots$ and $\kappa_n(t) := t^n_i$ for $t\in [t^n_i, t^n_{i+1})$. Fix $k$. We introduce the Euler approximations $x^{k,n}$, $n\in \mathbb{N}$, \[ x^{k,n}_t = x_0 + \int_0^t b^k\left(s,x^{k,n}_{\kappa_n(s)},\mathscr L(x^{k,n}_{\kappa_n(s)})\right)\,ds + \int_0^t \sigma^k\left(s,x^{k,n}_{\kappa_n(s)},\mathscr L(x^{k,n}_{\kappa_n(s)})\right)\,dw_s\,. \] Let us outline the proof. As a first step we fix $k$, show tightness with respect to $n$, and use Skorokhod's theorem to take $n\to \infty$. The second step is then to use Lemma~\ref{lemma:uniform} and Remark~\ref{remark on x_0} to show tightness with respect to $k$. Finally we use Skorokhod's theorem again to show that (along a subsequence) the limit as $k\to \infty$ satisfies~\eqref{eq mkvsde} (on a new probability space). {\em First Step.} Using standard arguments, one can verify that, for fixed $k$, the sequence $(x^{k,n})_n$ is tight (in the sense that the laws induced on $C([0,\infty); \bar D)$ are tight). By Prohorov's theorem (see e.g.~\cite[Ch. 1, Sec. 5]{billingsley}), there is a subsequence (which we do not distinguish in notation) such that $\mathscr{L}(x^{k,n}) \Rightarrow \mathscr{L}(x^k)$ as $n \to \infty$ (convergence in law). Hence we may apply Skorokhod's Representation Theorem (see e.g. \cite[Ch. 1, Sec.
6]{billingsley}) and obtain a new probability space $(\tilde{\Omega}^k, \tilde{\mathcal{F}}^k,\tilde{\mathbb{P}}^k)$ carrying new random variables $(\tilde x_0^n, \tilde x^{k,n}, \tilde w^n)$ and $(\tilde x_0, \tilde x^k, \tilde w)$ such that \[ \mathscr{L}(\tilde{x}_0^n, \tilde{x}^{k,n}, \tilde{w}^n) = \mathscr{L}(x_0^n,x^{k,n},w^n) \quad \forall n \in \mathbb{N}\quad \text{and}\quad \mathscr{L}(\tilde{x}_0, \tilde{x}^{k}, \tilde{w}) = \mathscr{L}(x_0,x^{k},w)\,. \] After taking another subsequence to obtain almost sure convergence from convergence in probability, \[ (\tilde{x}_0^n, \tilde{x}^{k,n}, \tilde{w}^n) \to (\tilde x_0,\tilde{x}^k, \tilde w) \,\,\, \text{as $n\to \infty$ in $C([0,\infty),D\times \bar D \times \mathbb R^{d'})$ a.s.} \] We let \[ \tilde{\mathcal{F}}^k_t := \sigma\{\tilde{x}_0\}\vee \sigma\{\tilde{x}^k_s, \tilde{w}_s : s\leq t\} \] and define $\tilde{\mathcal{F}}^{k,n}_t$ analogously. Then $\tilde{w}^n$ and $\tilde{w}$ are respectively $(\tilde{\mathcal{F}}^{k,n}_t)_{t\geq 0}$- and $(\tilde{\mathcal{F}}^k_t)_{t\geq 0}$-Wiener processes. Define \[ \tilde{\tau}^{k,n}_k := \inf\{ t \geq 0: \tilde{x}^{k,n}_t \notin D_k \} \,\,\,\text{and}\,\,\, \tilde{\tau}_k^k := \inf\{ t \geq 0: \tilde{x}^k_t \notin D_k \}\,. \] These are respectively $\tilde{\mathcal F}^{k,n}$- and $\tilde{\mathcal F}^k$-stopping times. Moreover, due to the a.s. convergence of the trajectories $\tilde{x}^{k,n}$ to $\tilde{x}^k$ we see that \[ \liminf_{n\to \infty} \tilde{\tau}^{k,n}_k \geq \tilde{\tau}^k_k\,. \] Since the laws of the sequences are identical, the Euler approximation still holds on the new probability space: for $t\geq 0$ \[ d\tilde x^{k,n}_t = b^k\left(t,\tilde x^{k,n}_{\kappa_n(t)},\mathscr L(\tilde x^{k,n}_{\kappa_n(t)})\right)\,dt + \sigma^k\left(t,\tilde x^{k,n}_{\kappa_n(t)},\mathscr L(\tilde x^{k,n}_{\kappa_n(t)})\right)\,d\tilde w^n_t\,.
\] Moreover, for all $t\leq \tilde{\tau}^{k,n}_k$ the process $\tilde x^{k,n}$ satisfies the same equation as above but without the cut-off applied to the coefficients: \[ d\tilde x^{k,n}_t = b\left(t,\tilde x^{k,n}_{\kappa_n(t)},\mathscr L(\tilde x^{k,n}_{\kappa_n(t)})\right)\,dt + \sigma\left(t,\tilde x^{k,n}_{\kappa_n(t)},\mathscr L(\tilde x^{k,n}_{\kappa_n(t)})\right)\,d\tilde w^n_t\,. \] Using Skorokhod's lemma, see~\cite[Ch. 2, Sec. 3]{skorokhod1965}, together with the continuity conditions in Assumption~\ref{ass continuity}, we can take $n\to \infty$ and conclude that for all $t\leq \tilde{\tau}_k^k$ we have \begin{equation} \label{eq cut mkvsde} d\tilde x^k_t = b(t,\tilde x^k_t,\mathscr L(\tilde x^k_t))\,dt + \sigma(t,\tilde x^k_t,\mathscr L(\tilde x^k_t))\,d\tilde w_t\,. \end{equation} At this point we remark that the process $\tilde x^k$ is well defined and continuous on $[0,\infty)$ but we only know that it satisfies~\eqref{eq cut mkvsde} up to $\tilde{\tau}^k_k$. {\em Second Step.} Tightness of the law of $(\tilde x^k)_k$ in $C(I; \bar D)$ follows from Lemma \ref{lemma tightness} and Remark~\ref{remark on Mt}. From Prohorov's theorem we thus get that for a subsequence $\mathscr L(\tilde x^k) \Rightarrow \mathscr L(\tilde x)$ as $k\to \infty$ (convergence in law).
From Skorokhod's Representation Theorem we then obtain a new probability space $(\bar \Omega, \bar{\mathcal{F}},\bar{\mathbb{P}})$ carrying new random variables $(\bar x_0^k, \bar x^k, \bar w^k)$ and $(\bar x_0, \bar x, \bar w)$ such that \[ \mathscr{L}(\bar x_0, \bar x, \bar w ) = \mathscr{L}(\tilde{x}_0, \tilde{x}, \tilde{w})\,, \] \[ \mathscr{L}(\bar x_0^k, \bar x^k, \bar w^k) = \mathscr{L}(\tilde{x}_0^k, \tilde{x}^k, \tilde{w}^k ) \quad \forall k \in \mathbb{N}, \] and (after taking a further subsequence to go from convergence in probability to almost sure convergence) \[ (\bar{x}_0^k, \bar{x}^k, \bar{w}^k) \to (\bar x_0,\bar{x}, \bar w) \,\,\, \text{as $k\to \infty$ in $C( I ; D \times \bar D \times \mathbb{R}^{d'})$ a.s.} \] Let $\bar \tau_k^k := \inf\{ t: \bar x^k_t \notin D_k \}$, $\bar \tau^k_m := \inf\{ t: \bar x^k_t \notin D_m \}$ and $\bar \tau^{\infty}_m := \inf\{ t: \bar x_t \notin D_m \}$. Since $\sup_{t < \infty} |\bar x^k_t - \bar x_t| \to 0$ we get $\bar \tau^k_m \to \bar \tau^{\infty}_m$ as $k\to \infty$. Then from Fatou's Lemma, Remark~\ref{remark on x_0} and either part iii) of Lemma \ref{lemma:uniform} or part ii) of Lemma \ref{lemma tightness} we have that \begin{equation}\label{eq tightness limit} \mathbb P(\bar \tau^{\infty}_m \in I) \leq \liminf_{k\to \infty} \mathbb P(\bar \tau^k_m \in I) \leq \sup_k \mathbb P(\bar \tau^k_m \in I) \to 0 \,\,\, \text{as $m \to \infty$.} \end{equation} Hence $\bar \tau^{\infty}_m$ converges in distribution, as $m\to \infty$, to a random variable $\bar \tau$ with $\mathbb P(\bar \tau \leq T) = 0$ for every finite $T$, i.e. $\mathbb P(\bar \tau = \infty) = 1$. In general convergence in distribution does not imply convergence in probability. But in the special case that the limiting distribution corresponds to a random variable taking a single value a.s. we do obtain convergence in probability (see e.g.~\cite[Ch. 11, Sec. 1]{dudleybook}).
Hence $\bar \tau^{\infty}_m \to \infty$ in probability as $m\to \infty$, and we can conclude that there is a subsequence that converges almost surely. Since~\eqref{eq cut mkvsde} holds for $\tilde x^k$ we have the corresponding equation for $\bar x^k$, i.e. for $t\leq \bar \tau^k_k$, \begin{equation} \label{eq cut mkvsde bar} d\bar x^k_t = b(t,\bar x^k_t,\mathscr L(\bar x^k_t))\,dt + \sigma(t,\bar x^k_t,\mathscr L(\bar x^k_t))\,d\bar w^k_t\,. \end{equation} Fix $m < k'$. We will consider $k > k'$. Then~\eqref{eq cut mkvsde bar} holds for all $t\leq \inf_{k\geq k'} \bar \tau^k_m$. We can now consider $\bar x^k_{t\wedge \bar\tau^k_m}$ (these all stay inside $D_m$ for all $k > k' > m$) and use the dominated convergence theorem for the bounded variation integral and Skorokhod's lemma on convergence of stochastic integrals, see~\cite[Ch. 2, Sec. 3]{skorokhod1965}, together with our continuity assumptions on $b$ and $\sigma$, to let $k\to \infty$. We thus obtain, for $t\leq \inf_{k\geq k'} \bar \tau^k_m$, \begin{equation} \label{eq mkvsde bar} d\bar x_t = b(t,\bar x_t,\mathscr L(\bar x_t))\,dt + \sigma(t,\bar x_t,\mathscr L(\bar x_t))\,d\bar w_t\,. \end{equation} Now, for each fixed $m$, \[ \lim_{k'\to \infty} \inf_{k\geq k'} \bar \tau^k_m = \lim_{k\to\infty} \bar \tau^k_m = \bar \tau^{\infty}_m. \] Finally we take $m \to \infty$ and since $\bar \tau^{\infty}_m \to \infty$ we can conclude that~\eqref{eq mkvsde bar} holds for all $t\in I$. The last statement of the theorem follows from Corollary~\ref{corollary bound for limit process}. \end{proof} \subsection{Examples} \begin{example}[Integrated Lyapunov condition] \label{example integrated lyapunov} Consider the McKean--Vlasov stochastic differential equation~\eqref{eq example1}, i.e. \[ d x_t = -x_t \bigg[\int_\mathbb{R} y^4 \mathscr L(x_t)(dy)\bigg]\, dt + \frac{1}{\sqrt{2}} x_t\, d w_t\,,\,\,\, x_0 = \xi >0\,. \] Then for $v(x) = x^4$ we have \[ L(x,\mu) v(x) = 3 x^4 - 4 x^4 \int_{\mathbb R} y^4 \, \mu(dy)\,.
\] We see that the stronger Lyapunov condition~\eqref{eq b2} will not hold with $m_1 < 0$ (at least for the chosen $v$, which seems a natural choice). However, integrating leads to \[ \int_{\mathbb{R}}L(x,\mu) v(x)\mu(dx) = 3 \int_\mathbb{R} x^4 \mu(dx) - 4 \bigg(\int_\mathbb{R} x^4 \mu(dx)\bigg)^2\,. \] Using this we now show that the integrated Lyapunov condition~\eqref{eq b1} holds, i.e. that \[ \int_{\mathbb{R}}L(x,\mu) v(x)\mu(dx) \leq -\int_\mathbb{R} v(x) \mu(dx) + 4 \] is satisfied. To see this, apply the elementary inequality $-x^2 \leq - x +1$ with $x = \int_\mathbb{R} y^4\,\mu(dy)$: multiplying by $4$ gives $-4\big(\int_\mathbb{R} y^4\,\mu(dy)\big)^2 \leq -4\int_\mathbb{R} y^4\,\mu(dy) + 4$, and adding $3\int_\mathbb{R} y^4\,\mu(dy)$ to both sides yields the claim. Moreover, Assumption \ref{ass integrated growth} is satisfied. Condition~\eqref{eq lyapunov example} allows us to obtain uniform in time integrability properties for $(x_t)$ needed to study e.g. ergodic properties. \end{example} \begin{example}[Non-linear dependence of measure and integrated Lyapunov condition] \label{example 2} Consider the McKean--Vlasov stochastic differential equation~\eqref{eq example2}, i.e. \[ dx_t = - \left( \int_\mathbb{R} (x_t - \a y)\mathscr{L}{(x_t)}(dy) \right)^3dt + \left( \int_\mathbb{R} (x_t - \a y)\mathscr{L}{(x_t)}(dy) \right)^2 \sigma \, dw_t\,, \] for $t\in I$ and with $x_0 \in L^4(\mathcal{F}_0,\mathbb R)$. Assume that $m:= -(6\sigma^2 - 4 + 4\alpha) >0$. The diffusion generator given by~\eqref{eq Lmu} is \[ \begin{split} & (L^\mu v)(x,\mu) = \bigg( \frac{\sigma^2}{2}\left( \int_{\mathbb{R}} (x - \a y)\mu(dy) \right)^4 \partial_x^2 v - \left( \int_{\mathbb{R}} (x - \a y)\mu(dy) \right)^3 \partial_x v\bigg)(x,\mu) \\ & + \int_{\mathbb{R}}\left( \frac{\sigma^2}{2} \left( \int_{\mathbb{R}} (z - \a y)\mu(dy) \right)^4 (\partial_z \partial_\mu v)(x,\mu)(z) - \left( \int_{\mathbb{R}} (z - \a y)\mu(dy) \right)^3 (\partial_\mu v)(x,\mu)(z) \right) \mu(dz)\,. \end{split} \] We will show that for the Lyapunov function \[ v(x,\mu) = \left( \int_{\mathbb{R}} (x - \a y)\mu(dy) \right)^4\,, \] we have \[ \int_{\mathbb{R}}(L^\mu v)(x,\mu)\,\mu(dx) \leq m - m \int_{\mathbb{R}}v(x,\mu)\,\mu(dx)\,.
\] Indeed, \[ \partial_x v(x,\mu) = 4 \left( \int_{\mathbb{R}} (x - \a y)\mu(dy) \right)^3\,, \,\,\,\, \partial^2_x v(x,\mu) = 12\left( \int_{\mathbb{R}} (x - \a y)\mu(dy) \right)^2\,, \] \[ \partial_\mu v(x,\mu)(z) = -4\a \left( \int_{\mathbb{R}} (x - \alpha y)\mu(dy) \right)^3 \,,\,\,\,\, \partial_z \partial_\mu v(x,\mu)(z) = 0 \,. \] Hence \[ (L^{\mu}v)(x,\mu) = (6\sigma^2-4) \left(\int_\mathbb{R} (x-\alpha y) \mu(dy)\right)^6 + 4\alpha \int_\mathbb{R} \left[ \left(\int_\mathbb{R}(z-\alpha y)\mu(dy)\right)^3\left(\int_\mathbb{R} (x-\alpha y)\mu(dy)\right)^3 \right]\,\mu(dz)\,. \] Since we need an estimate for the integral of the diffusion generator, we observe that \[ \begin{split} I & := \int_\mathbb{R} \int_\mathbb{R} \left[ \left(\int_\mathbb{R}(z-\alpha y)\,\mu(dy)\right)^3 \left(\int_\mathbb{R} (x-\alpha y)\mu(dy)\right)^3 \right]\,\mu(dz)\,\mu(dx)\\ & = \int_\mathbb{R}\left[ \int_\mathbb{R} \left(\int_\mathbb{R}(z-\alpha y)\,\mu(dy)\right)^3 \,\mu(dz) \left(\int_\mathbb{R} (x-\alpha y)\mu(dy)\right)^3\right]\,\mu(dx)\\ & = \int_\mathbb{R} \left(\int_\mathbb{R}(z-\alpha y)\,\mu(dy)\right)^3 \,\mu(dz)\int_\mathbb{R} \left( \int_\mathbb{R} (x-\alpha y)\mu(dy)\right)^3\,\mu(dx)\\ & \leq \left(\int_\mathbb{R} \left|\int_\mathbb{R}(z-\alpha y)\,\mu(dy)\right|^3 \,\mu(dz)\right)^2\,. \end{split} \] By the Cauchy--Schwarz inequality we obtain \[ I \leq \int_\mathbb{R} \left(\int_\mathbb{R}(x-\alpha y)\,\mu(dy)\right)^6 \,\mu(dx) \,. \] Hence, recalling that $m:= -(6\sigma^2 - 4 + 4\alpha) > 0$ and using the inequality $-x^6 \leq 1-x^4$, we obtain \[ \int_\mathbb{R} (L^\mu v)(x,\mu)\,\mu(dx) \leq \int_\mathbb{R} (6\sigma^2-4+4\alpha) \left(\int_\mathbb{R} (x-\alpha y)\, \mu(dy)\right)^6 \mu(dx) \leq m - m \int_{\mathbb{R}}v(x,\mu)\,\mu(dx)\,. \] Moreover, Assumption \ref{ass integrated growth} is readily satisfied.
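The integrated bound above can be probed numerically. The following sketch (our own illustration; the parameter values $\alpha=0.2$, $\sigma=0.1$ and the particle number, horizon and step size are arbitrary choices satisfying $m>0$) runs a standard Euler--Maruyama interacting-particle approximation of the equation and monitors the empirical counterpart of $\int v(x,\mu)\,\mu(dx)$:

```python
import numpy as np

# Particle approximation of
#   dx = -(x - a*E[x])^3 dt + s*(x - a*E[x])^2 dW,
# with the law L(x_t) replaced by the empirical measure of N particles.
# Parameters are chosen so that m = -(6 s^2 - 4 + 4 a) > 0.
rng = np.random.default_rng(0)
a, s = 0.2, 0.1
N, dt, steps = 1000, 0.002, 1000          # N particles, horizon T = 2

x = rng.standard_normal(N)                # x_0 ~ N(0, 1)

def lyapunov(x):
    # Empirical version of int v d(mu) with v(x, mu) = (x - a*E[x])^4.
    u = x - a * x.mean()
    return float(np.mean(u ** 4))

v0 = lyapunov(x)
for _ in range(steps):
    u = x - a * x.mean()                  # mean-field interaction term
    dw = rng.standard_normal(N) * np.sqrt(dt)
    x = x - u ** 3 * dt + s * u ** 2 * dw

vT = lyapunov(x)
print(v0, vT)
```

Consistently with $\int (L^\mu v)\,d\mu \leq m - m\int v\,d\mu$, the empirical Lyapunov functional decays from its initial value and, up to Monte Carlo and discretization error, settles below the asymptotic level $1$ suggested by the inequality.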
\end{example} \begin{example}[Dependence on measure without an integral] \label{example 3} Let $\mu$ be a law on $(\mathbb R, \mathscr B(\mathbb R))$ and let $F^{-1}_\mu:[0,1]\rightarrow \mathbb{R}$ be the generalized inverse cumulative distribution function for this law. Recall that the $\alpha$-quantile is given by \[ F_\mu^{-1}(\alpha):=\inf\{x\in \mathbb{R}\,:\, \mu((-\infty, x]) \geq \alpha\}\,. \] Define the Expected Shortfall of $\mu$ at level $\alpha$, $ES_\mu(\alpha)$, as \[ ES_\mu(\alpha):=\frac{1}{\alpha}\int_0^\alpha F_\mu^{-1}(s)\,ds\,. \] It is easy to see that, for fixed $\alpha$, Expected Shortfall is a Lipschitz continuous function of the measure w.r.t.\ the $p$-th Wasserstein distance for any $p\geq 1$. Indeed, fix $\mu$, $\nu \in \mathcal P_p(\mathbb R)$ and observe that \[ \begin{split} \left|ES_\mu(\alpha)-ES_\nu(\alpha)\right|\leq\frac{1}{\alpha}\int_0^\alpha |F_\mu^{-1}(s)-F^{-1}_\nu(s)|\, ds \leq \frac{1}{\alpha}\int_0^1 |F_\mu^{-1}(s)-F^{-1}_\nu(s)|\,ds=\frac{1}{\alpha}W_1(\mu,\nu)\leq \frac{1}{\alpha}W_p(\mu,\nu). \end{split} \] We consider the following one-dimensional example, based loosely on a transformed CIR process: \[ d x_t = \frac{\kappa}{2}\big[((ES_{\mathscr{L}(x_t)}(\alpha)\vee \theta)-\frac{\sigma^2}{4\kappa}) x_t^{-1}-x_t\big]\, dt +\frac{1}{2}\sigma \,dw_t. \] Here $x_0$ satisfies $\mathbb P[x_0>0]=1$ and $\kappa \theta \geq \sigma^2$. Note that by defining $D:=(0,\infty)$ and $D_k:=[\frac{1}{k},k]$, we have boundedness of the coefficients on $D_k$, and from the above observations and assumptions one can easily verify that the conditions of Theorem \ref{thm:weakexistence} are satisfied. In particular, consider $v(x)=x^2+x^{-2}$.
Then, \begin{equation}\notag \begin{alignedat} {1} L(x,\mu)v(x) &= \frac{\kappa}{2}\big[((ES_{\mu}(\alpha)\vee \theta)-\frac{\sigma^2}{4\kappa}) x^{-1}-x\big](2(x-x^{-3}))+\frac{1}{8}\sigma^2(2+6x^{-4}) \\ &={\kappa}\big[((ES_{\mu}(\alpha)\vee \theta)-\frac{\sigma^2}{4\kappa})\big] -\kappa x^2 - \big[\kappa (ES_{\mu}(\alpha)\vee \theta)- {\sigma^2} \big]x^{-4}+\kappa x^{-2}+\frac{\sigma^2}{4}\\ &\leq {|\kappa|} |ES_{\mu}(\alpha)|+ \kappa \theta +\kappa x^{-2}\\ &\leq {|\kappa|} \Big|\int_\mathbb{R} x\,\mu(dx)\Big|+ \kappa \theta +\kappa x^{-2} \\ &\leq \frac{1}{2}{\kappa^2} + \frac{1}{2}\int_\mathbb{R} x^2\,\mu(dx) + \kappa \theta +\kappa x^{-2}\,, \end{alignedat} \end{equation} where in the third line we used $\kappa(ES_\mu(\alpha)\vee\theta)\geq\kappa\theta\geq\sigma^2$ to drop the non-positive $x^{-4}$ term, and in the fourth line we used that, for $\mu$ supported in $(0,\infty)$, $F^{-1}_\mu$ is non-negative and non-decreasing, so that $0\leq ES_\mu(\alpha)\leq \int_0^1 F^{-1}_\mu(s)\,ds = \int_\mathbb{R} x\,\mu(dx)$. Integrating with respect to $\mu$ we see that condition \eqref{eq b1} holds. Therefore, due to Theorem~\ref{thm:weakexistence}, we have existence of a weak solution to the above McKean--Vlasov equation. \end{example} \section{Uniqueness}\label{sec uniq} In this section we prove continuous dependence on initial conditions and uniqueness under two types of Lyapunov conditions. For the novel {\em integrated} global Lyapunov condition we provide an example inspired by the work of~\cite{Scheutzow87} on non-uniqueness of solutions to McKean--Vlasov SDEs. \subsection{Assumptions and Results}\label{subsec uniq results} Recall that by $\pi \in \Pi(\mu, \nu)$ we denote a coupling between measures $\mu$ and $\nu$. In this section we work with a subclass of Lyapunov functions $\bar v\in C^{1,2}(I \times \mathbb R^d)$ with the properties: $\bar v\geq 0$, $\text{Ker}\, \bar v = \{0\}$ and $\bar v(x) = \bar v(-x)$ for $x\in \mathbb R^d$.
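Before introducing the associated semi-distance, it may help to keep a canonical member of this class in mind (our own illustration, not taken from the text):

```latex
% A concrete admissible choice is the quadratic function
%   \bar v(x) = |x|^2 ,
% which is smooth, non-negative, even, and vanishes only at the origin.
% For this choice the semi-Wasserstein distance introduced next becomes
\[
  W_{\bar v}(\mu,\nu)
  = \inf_{\pi \in \Pi(\mu,\nu)} \int_{D\times D} |x-y|^2 \,\pi(dx,dy)
  = W_2(\mu,\nu)^2 ,
\]
% the \emph{squared} 2-Wasserstein distance, which satisfies the triangle
% inequality only after taking square roots -- illustrating why one obtains
% a semi-metric rather than a metric in general.
```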
For this class of Lyapunov functions we define the semi-Wasserstein distance on $\mathcal{P}(D)$ as \begin{equation} \label{eq v-wasserstein} W_{ \bar v}(\mu,\nu) := \inf_{\pi \in \Pi(\mu,\nu)} \int_{D\times D} \bar v(x-y) \, \pi(dx,dy) \,. \end{equation} Indeed $W_{\bar v}$ is only a semi-metric: the triangle inequality, in general, does not hold. Note that $\bar v$ does not depend on a measure. For $(t,x,y)\in I \times D \times D$, $(\mu,\nu) \in \mathcal P(D) \times \mathcal P(D)$, we define the generator as follows \begin{align*} L(t,x,y,\mu,\nu)\bar v(t,x-y) := & \partial_t \bar v (t,x-y) + \frac{1}{2}\text{tr}\big((\sigma(t,x,\mu) - \sigma(t,y,\nu))(\sigma(t,x,\mu) - \sigma(t,y,\nu))^* \partial_x^2 \bar v(t,x-y)\big) \\ & + ( b(t,x,\mu) - b(t,y,\nu)) \partial_x \bar v(t,x-y) \,. \end{align*} \begin{assumption}[Global Lyapunov condition] \label{ass gmc} There exist locally integrable, non-random functions $g=g(t)$ and $h=h(t)$ on $I$ such that for all $(t,x,\mu)$ and $(t,y,\nu)$ in $I \times D \times \mathcal{P}(D)$ \begin{equation}\label{eq gmc} L(t,x,y,\mu,\nu)\bar v(t,x-y) \leq g(t) \bar v(x-y) + h(t) W_{\bar v}(\mu,\nu)\,. \end{equation} \end{assumption} \begin{assumption}[Integrated Global Lyapunov condition] \label{ass int gmc} There exists a locally integrable, non-random function $h=h(t)$ on $I$ such that for all $(t,\mu)$, $(t,\nu)$ in $I \times \mathcal{P}(D)$ and for all couplings $\pi \in \Pi(\mu,\nu)$ \begin{equation} \label{eq igmc} \int_{D \times D }L(t,x,y,\mu,\nu)\bar v(t,x-y) \pi(dx, dy) \leq h(t) \int_{D\times D} \bar v(x-y) \, \pi(dx,dy)\,. \end{equation} \end{assumption} Since the left-hand side of~\eqref{eq gmc} does not depend on the coupling, Assumption~\ref{ass gmc} could equivalently be stated for all couplings $\pi\in\Pi(\mu,\nu)$, as in Assumption~\ref{ass int gmc}; the formulation via $W_{\bar v}$ is merely notationally convenient. Theorem~\ref{thm contdep} gives a stability estimate for the solution to \eqref{eq mkvsde} with respect to the initial condition (continuous dependence on the initial conditions). \begin{theorem}[Continuous Dependence on Initial Condition]\label{thm:pathuniq} \label{thm contdep} Let Assumption~\ref{ass local boundedness} hold. Let $x^i$, $i=1,2$, be two solutions to~\eqref{eq mkvsde} on the same probability space such that $\mathbb{E} \bar v(x^1_0 - x^2_0)<\infty$. \begin{enumerate}[i)] \item If Assumption~\ref{ass gmc} holds then for all $t\in I$ \begin{equation} \label{eq cont dep} \mathbb{E} \bar v( x^1_t - x^2_t) \leq \exp\left( \int_0^t \left[g(s) + h(s) + 2|h(s)|\right] \,ds \right) \mathbb{E} \bar v(x^1_0 - x^2_0)\,. \end{equation} \item If Assumption~\ref{ass int gmc} and either of Assumptions~\ref{a-int} or~\ref{a-nonint} hold, and if there are $p,q$ with $1/p+1/q = 1$ and a constant $\kappa$ such that for all $(t,x,\mu)$ and $(t,y,\nu)$ in $I \times D \times \mathcal{P}(D)$ \begin{equation} \label{eq integrability for uniqueness} |\partial_x \bar v(x -y)|^{2p} + |\sigma(t,x,\mu)|^{2q} + |\sigma(t,y,\nu)|^{2q} \leq \kappa (1 + v(t,x,\mu)+v(t,y,\nu))\,, \end{equation} then for all $t\in I$ \begin{equation} \label{eq int cont dep} \mathbb E \bar v(x^1_t - x^2_t) \leq \exp\left( \int_0^t h(s) \,ds \right) \mathbb E \bar v(x^1_0 - x^2_0)\,.
\end{equation} \end{enumerate} \end{theorem} First we note that when $I$ is a finite time interval the sign of the functions $g$ and $h$ plays no significant role. In relation to the study of ergodic SDEs, e.g.~(18) in~\cite{Bolley2007}, we make the following observations. If $I=[0,\infty)$, Assumption~\ref{ass gmc} holds and $g + h + 2|h| < 0$, then $\lim_{t\to \infty} \mathbb E \bar v(x^1_t - x^2_t) = 0$. However, while the spatial dependence of the coefficients can play a positive role for the stability of the equation (if $g$ is negative), the measure dependence never plays such a positive role, regardless of the sign of $h$. If $I=[0,\infty)$ and we are in the second case of Theorem~\ref{thm contdep} then negative $h$ can play a positive role for stability (but, unlike in the first case, we also need condition~\eqref{eq integrability for uniqueness}). \begin{proof} If we are in case ii) then in what follows we set $g(t) = 0$ for all $t\in I$. Let \[ \varphi(t) = \exp\left(-\int_0^t [g(s) + h(s)]\, ds\right) \,. \] Applying the classical It\^o formula to $\varphi(t)\, \bar v(x^1_t-x^2_t)$ we have that for $t\in I$ \begin{equation} \label{eq after ito for uniq} \begin{split} \varphi(t) \bar v(x^1_t-x^2_t) = & \bar v(x^1_0 - x^2_0) \\ & + \int_0^{t} \varphi(s) \big[ L(s,x^1_s,x^2_s,\mathscr L(x^1_s),\mathscr L(x^2_s))\bar v(s,x^1_s-x^2_s) - (g(s) + h(s)) \bar v(x^1_{s}-x^2_{s}) \big]\,ds\\ & + \int_0^{t} \varphi(s) \partial_x \bar v( x^1_s-x^2_s)(\sigma(s,x^1_s,\mathscr L(x^1_s))-\sigma(s,x^2_s,\mathscr L(x^2_s))) dw_s.
\end{split} \end{equation} {\em Case i)} Assumption~\ref{ass gmc} implies \begin{equation*} \begin{alignedat}{1} \varphi(t) \bar v(x^1_{t}-x^2_{t}) \leq \, & \bar v(x^1_0 - x^2_0) + \int_0^t \varphi(s)\big[ h(s) W_{\bar v}(\mathscr L(x^1_s),\mathscr L(x^2_s)) - h(s) \bar v(x^1_{s}-x^2_{s}) \big] \,ds\\ & + \int_0^{t} \varphi(s) \partial_x \bar v( x^1_s-x^2_s)(\sigma(s,x^1_s,\mathscr L(x^1_s))-\sigma(s,x^2_s,\mathscr L(x^2_s))) dw_s. \end{alignedat} \end{equation*} Define the stopping times $\{\tau^i_m\}_{m\geq 1}$, $i=1,2$, and $\{\tau_m\}_{m\geq 1}$ by \[ \tau^i_m := \inf \{t\in I \,:\, x^i_t\notin D_m \}\,,\,\, i = 1,2\,,\,\,\,\text{and}\,\,\, \tau_m := \tau^1_m \wedge \tau^2_m\,. \] By Definition~\ref{def soln} we know that $x^i \in C(I;D)$ a.s., so $\tau^i_m \nearrow \infty$ a.s. and hence $\tau_m \nearrow \infty$ a.s. as $m\to \infty$. The local boundedness of $\sigma$ ensures that the stochastic integral above is a martingale up to time $t\wedge\tau_m$, hence \begin{equation*} \begin{alignedat}{1} & \mathbb{E}[ \varphi(t\wedge\tau_m) \bar v( x^1_{t\wedge\tau_m}-x^2_{t\wedge\tau_m})]\\ & \leq \mathbb{E}[\bar v(x^1_0 - x^2_0)] + \mathbb{E}\left[\int_0^{t\wedge\tau_m} \varphi(s)\big[ h(s)W_{\bar v}(\mathscr L(x^1_s),\mathscr L(x^2_s)) - h(s) \bar v( x^1_{s}-x^2_{s}) \big] \,ds \right]\\ & \leq \mathbb{E}[\bar v( x^1_0 - x^2_0 )] + \int_0^{t} 2|h(s)|\, \mathbb{E}\left[ \varphi(s) \bar v(x^1_{s}-x^2_{s}) \right]\,ds \,, \end{alignedat} \end{equation*} where the last inequality follows from the definition of the semi-Wasserstein distance, which gives $W_{\bar v}(\mathscr L(x^1_s),\mathscr L(x^2_s)) \leq \mathbb E\, \bar v(x^1_s - x^2_s)$. Since $\tau_m\nearrow\infty$ as $m\rightarrow\infty$, an application of Fatou's Lemma gives \[ \mathbb{E}[\varphi(t)\bar v( x^1_t - x^2_t )] \leq \mathbb{E} \bar v(x^1_0 - x^2_0) + \int_0^t 2|h(s)|\,\mathbb{E}[ \varphi(s) \bar v( x^1_s-x^2_s )]\,ds. \] From Gronwall's lemma we get~\eqref{eq cont dep}.
{\em Case ii)} Taking expectations in~\eqref{eq after ito for uniq}, recalling that in this case $g=0$, and then using Assumption~\ref{ass int gmc} (applied with the coupling given by the joint law of $(x^1_s,x^2_s)$) we have \begin{equation*} \mathbb E \left[\varphi(t) \bar v (x^1_{t}-x^2_{t}) \right] \leq \mathbb E\bar v(x^1_0 - x^2_0) + \mathbb E\int_0^{t} \varphi(s) \partial_x \bar v( x^1_s-x^2_s)(\sigma(s,x^1_s,\mathscr L(x^1_s))-\sigma(s,x^2_s,\mathscr L(x^2_s)))\, dw_s. \end{equation*} Corollary~\ref{corollary bound for limit process} together with~\eqref{eq integrability for uniqueness} and the local integrability of $g$ and $h$ ensures that the stochastic integral in the above expression is a martingale and hence has zero expectation. Indeed, by Young's inequality, \[ \begin{split} & \int_0^t \varphi(s)^2\mathbb E\left[ |\partial_x \bar v(x^1_s - x^2_s)|^2 |\sigma(s,x^1_s,\mathscr L(x^1_s))-\sigma(s,x^2_s,\mathscr L(x^2_s))|^2 \right]\,ds \\ & \leq \int_0^t \varphi(s)^2\mathbb E\left[ \frac{1}{p}|\partial_x \bar v(x^1_s - x^2_s)|^{2p} + \frac{1}{q}|\sigma(s,x^1_s,\mathscr L(x^1_s))-\sigma(s,x^2_s,\mathscr L(x^2_s))|^{2q} \right]\,ds \\ & \leq c_{p,q} \int_0^t \varphi(s)^2\mathbb E\left[|\partial_x \bar v(x^1_s - x^2_s)|^{2p} + |\sigma(s,x^1_s,\mathscr L(x^1_s))|^{2q}+|\sigma(s,x^2_s,\mathscr L(x^2_s))|^{2q} \right]\,ds \\ & \leq c_{p,q}\, \kappa \int_0^t \varphi(s)^2 \left( 1 + \mathbb E v(s,x^1_s,\mathscr L(x^1_s)) + \mathbb Ev(s,x^2_s,\mathscr L(x^2_s)) \right)\,ds < \infty\,. \end{split} \] Hence \begin{equation*} \varphi(t)\mathbb{E}[ \bar v(x^1_{t}-x^2_{t})] \leq \mathbb{E}[ \bar v(x^1_0 - x^2_0)]\,, \end{equation*} which is exactly~\eqref{eq int cont dep}. \end{proof} \begin{corollary} \label{corollary uniqueness} If the conditions for either case i) or ii) of Theorem~\ref{thm contdep} hold and if $x^1_0 = x^2_0$ a.s., then the solutions to~\eqref{eq mkvsde} are pathwise unique. \end{corollary} \begin{proof} If $I=[0,T]$ then uniqueness follows immediately from Theorem~\ref{thm contdep}, the fact that $\text{Ker}\, \bar v = \{0\}$ and the local integrability of $g$ and $h$.
If $I=[0,\infty)$ then it is enough to observe that when $x_0^1=x_0^2$ then, due to Theorem~\ref{thm contdep}, uniqueness holds on the interval $[0,s]$ for some $s>0$, and in particular $x^1_s = x^2_s$ a.s. Thus we can continue in a recursive manner to obtain uniqueness on the intervals $[ks,(k+1)s]$ for $k\in \mathbb{N}$. \end{proof} \subsection{Example due to Scheutzow} Consider the McKean--Vlasov SDE of the form \begin{equation} \label{eq:examplescheutzow} x_t=x_0+\int_0^t B(x_s,\mathbb{E}[\bar b(x_s)])\,ds + \int_0^t \Sigma(x_s,\mathbb{E}[\bar \sigma(x_s)])\,dw_s\,. \end{equation} Our study of this more specific form of McKean--Vlasov SDE is inspired by~\cite{Scheutzow87}, where it was shown that, in the case $\sigma=0$, uniqueness may in general fail even when either of the functions $b$ or $\bar b$ is locally Lipschitz. We will show that if we impose some structure on the local behaviour of the functions then this, together with the integrability conditions established in Theorem~\ref{thm:weakexistence}, is enough to obtain a unique solution to~\eqref{eq:examplescheutzow}. To be more specific: we impose a local (in the second variable) monotonicity condition on the functions $B$ and $\Sigma$, which is weaker than a local (in the second variable) Lipschitz condition, and a local Lipschitz condition on the functions $\bar b$ and $\bar \sigma$. \begin{assumption} \label{ass scheutzow} \hfill{} \begin{enumerate}[i)] \item Local monotonicity condition: there exists a locally bounded function $M=M(x',y',x'',y'')$ such that $\forall x,x',x'',y,y',y'' \in D$ \[ 2(x-y)(B(x,x')-B(y,y')) + |\Sigma(x,x'') - \Sigma(y,y'')|^2 \leq M(x',y',x'',y'')(| x - y |^2 + | x' - y' |^2 + |x'' - y''|^2)\,. \] \item There exists $\kappa$ such that $\forall (t,x,\mu) \in I \times D \times \mathcal{P}(D)$ \[ |\bar b(x)| + |\bar \sigma (x)| \leq \kappa(1 + v(t,x,\mu) )\,.
\] \item There exists $\kappa$ such that $\forall (t,\mu) \in I \times \mathcal{P}(D)$ and $x,y\in D$ \[ |\bar b(x)-\bar b(y)|+ |\bar \sigma(x)-\bar \sigma(y)|\leq \kappa(1+\sqrt{v(t,x,\mu) } + \sqrt{v(t,y,\mu)})|x-y|\,. \] \end{enumerate} \end{assumption} \begin{theorem} If Assumption~\ref{a-int} holds, $\sup_{t\in I} M(t) < \infty$, and Assumptions~\ref{ass local boundedness} and \ref{ass scheutzow} hold, then the solution to~\eqref{eq:examplescheutzow} is unique. \end{theorem} We will need the following observation: if $\pi \in \Pi(\mu, \nu)$ then, due to the disintegration theorem (see for example~\cite[Theorem 5.3.1]{ambrosio2008}), there exists a family $(P_{x})_{x\in D} \subset \mathcal P(D)$ such that \[ \int_{D\times D} f(x,y)\,\pi(dx,dy) = \int_D \left(\int_D f(x,y)\,P_x(dy)\right)\,\mu(dx) \] for any $f=f(x,y)$ which is a $\pi$-integrable function on $D\times D$. In particular, if $f=f(x)$ then \[ \int_{D\times D} f(x)\,\pi(dx,dy) = \int_D f(x)\left(\int_D \,P_x(dy)\right)\,\mu(dx) = \int_D f(x)\,\mu(dx)\,. \] \begin{proof} Our aim is to show that Assumption~\ref{ass gmc} holds with $\bar v(x) = |x|^2$, since then uniqueness follows from Corollary~\ref{corollary uniqueness}. We know from Lemma~\ref{lemma:uniform} that for any $t\in I$ we have $\int_D v(t,x,\mathscr L(x_t)) \, \mathscr L(x_t)(dx) \leq \sup_{t\in I}M(t)$, and so it is in fact enough to verify~\eqref{eq gmc} for measures $\mu$, $\nu$ such that $\int_D v(t,x,\mu)\, \mu(dx) \leq \sup_{t\in I}M(t)$ and $\int_D v(t,y,\nu)\, \nu(dy) \leq \sup_{t\in I}M(t)$. From Assumption~\ref{ass scheutzow} i), we have \[ 2(x-y)(b(x,\mu)-b(y,\nu))+|\sigma(x,\mu) - \sigma(y,\nu)|^2 \leq M(x',y',x'',y'')[|x-y|^2 + |x'-y'|^2 + |x''-y''|^2]\,, \] where $x' = \int_D \bar b(z)\mu(dz)$, $y' = \int_D \bar b(z)\nu(dz)$, $x'' = \int_D \bar \sigma(z)\mu(dz)$ and $y'' = \int_D \bar \sigma(z)\nu(dz)$.
We note that each of $x'$, $y'$, $x''$ and $y''$ lies in a bounded subset of $\mathbb R$: due to Assumption~\ref{ass scheutzow} ii) we have \[ |x'| \vee |x''| \leq \kappa\left(1+\int_D v(t,z,\mu)\,\mu(dz)\right) \leq \kappa\left(1+\sup_{t\in I}M(t)\right)\,, \] and similarly for $|y'| \vee |y''|$ with $\nu$ in place of $\mu$. As $M$ maps bounded sets to bounded sets, we can choose a constant $g$ sufficiently large so that $M(x',y',x'',y'')\leq g$ for all such $\mu, \nu$. We apply the remark on disintegration to see that \[ |x'-y'|^2 = \left|\int_D \bar b(\bar x)\mu(d\bar x) - \int_D \bar b(\bar y)\nu(d\bar y)\right|^2 = \left|\int_{D\times D} (\bar b(\bar x) - \bar b(\bar y))\,\pi(d\bar x,d\bar y)\right|^2\,. \] From Assumption~\ref{ass scheutzow} iii) and the Cauchy--Schwarz inequality we get \[ \begin{split} |x'-y'|^2 & \leq \kappa^2 \int_{D \times D} (1+\sqrt{v(t,\bar x,\mu) } + \sqrt{v(t,\bar y,\mu)})^2\, \pi(d\bar x,d\bar y) \int_{D \times D} |\bar x-\bar y|^2 \pi(d\bar x,d\bar y) \\ & \leq 3\kappa^2 \left(1+2\sup_{t\in I}M(t)\right)\int_{D \times D} |\bar x-\bar y|^2 \pi(d\bar x,d\bar y)\,. \end{split} \] Since the calculation for $|x''-y''|^2$ is identical, we finally obtain \[ 2(x-y)(b(x,\mu)-b(y,\nu))+|\sigma(x,\mu) - \sigma(y,\nu)|^2 \leq g|x-y|^2 + 6 g \kappa^2 \left(1+2\sup_{t\in I}M(t)\right)\int_{D \times D} |\bar x-\bar y|^2 \pi(d\bar x,d\bar y)\,; \] as the left-hand side does not depend on the coupling, taking the infimum over $\pi \in \Pi(\mu,\nu)$ shows that Assumption~\ref{ass gmc} is satisfied with $\bar v(x)=|x|^2$. \end{proof} \section{Invariant Measures} \subsection{Semigroups on $C_b(D)$} We will establish the existence of a stationary measure for semigroups associated with solutions to~\eqref{eq mkvsde} via the Krylov--Bogolyubov Theorem (see \cite[Chapter 7]{DaPrato}). Let the conditions of Theorem \ref{thm:weakexistence} hold with suitable assumptions on $m_1$ and $m_2$ such that we are within the regime where $I=[0,\infty )$. For every point $y\in D$ fix a process $(x^y_t)_{t\geq 0}$ that is a $v$-integrable solution to the McKean--Vlasov SDE~\eqref{eq mkvsde} started from $y$.
We then define a semigroup $(P_t)_{t\geq 0}$ by \[ P_t\varphi(y):=\mathbb E[\varphi(x^y_t)]\,\,\,\text{for $t\geq 0$, $\varphi\in C_b(D)$.} \] Clearly $P_t\varphi(y) = \langle \varphi, \mathscr L(x^y_t)\rangle$ and if $\varphi\in C_0^2(D)$ then $\langle \varphi, \mu_t \rangle := \langle \varphi, \mathscr L(x^y_t)\rangle$ is given by~\eqref{eq fwd kolmogorov}. This means that establishing existence of an invariant measure for~\eqref{eq mkvsde} shows that, if $b$ and $\sigma$ are independent of $t$, then there is a stationary solution to~\eqref{eq fwd kolmogorov}. The two main conditions for the Krylov--Bogolyubov theorem to apply are that the semigroup is Feller and that a tightness condition holds. As we are not assuming any non-degeneracy of the diffusion coefficient we cannot always guarantee that the semigroup is Feller. See, however, Lemma~\ref{cor station} for a partial result. \begin{theorem}\label{cor station feller} If the conditions of Theorem \ref{thm:weakexistence} hold with $I=[0,\infty)$ (i.e. we have either $\sup_{t\in [0,\infty) } M^+(t) < \infty$ or $\sup_{t\in [0,\infty) } M(t) < \infty$) and the semigroup $(P_t)_{t\geq 0}$ has the Feller property, then there exists an invariant measure for $(P_t)_{t\geq 0}$ acting on $C_b(D)$. \end{theorem} \begin{proof} Fix $y\in D$ and let $(\mu_t)_{t\geq 0}$ be defined as \[ \mu_t := \frac1t\int_0^t \mathbb P(x^y_s\in \cdot )\, ds = \frac1t\int_0^t\mathscr L(x^y_s) \,ds\,. \] By Fatou's Lemma and Lemma~\ref{lemma tightness} we know that for any $\varepsilon > 0$ there exists a sufficiently large $m_0$ such that for all $m>m_0$ we have $\sup_{t\in I}\mathbb P[x^{y}_t\notin D_m]<\varepsilon$. Therefore $\mu_t(D\setminus D_m)=\frac{1}{t}\int_0^t\mathbb P(x^y_s\notin D_m)\,ds<\varepsilon$ and hence $(\mu_t)_{t\geq 0}$ is tight. Since we are assuming that the Feller property holds, the conclusion now follows from the Krylov--Bogolyubov Theorem (see \cite[Chapter 7]{DaPrato}). \end{proof} \begin{lemma}\label{cor station} If the assumptions of Theorem \ref{thm:weakexistence} hold with $I=[0,\infty)$, along with either Assumption \ref{ass gmc} or \ref{ass int gmc}, and if $\bar v$ is non-decreasing, then the semigroup $(P_t)_{t\geq 0}$ acting on $C_b(D)$ is Feller. \end{lemma} \begin{proof} For $\varepsilon>0$, by continuity of $\varphi$ there exists $\delta_\varphi >0$ such that $|x^{y_1}_t-x^{y_2}_t|<\delta_\varphi\implies |\varphi(x^{y_1}_t)-\varphi(x^{y_2}_t)|<\varepsilon/2$. Then \begin{equation*} \begin{split} |P_t\varphi(y_1)-P_t\varphi(y_2) |& =|\mathbb E[\varphi(x^{y_1}_t)-\varphi(x^{y_2}_t)]| \leq \mathbb E[|\varphi(x^{y_1}_t)-\varphi(x^{y_2}_t)| ]\\ & = \mathbb E[|\varphi(x^{y_1}_t)-\varphi(x^{y_2}_t)|(\mathbbm{1}_{|x^{y_1}_t-x^{y_2}_t|< \delta_\varphi}+\mathbbm{1}_{|x^{y_1}_t-x^{y_2}_t|\geq \delta_\varphi}) ] \\ & <\frac{\varepsilon}{2}\mathbb P[|x^{y_1}_t-x^{y_2}_t|<\delta_\varphi ]+2|\varphi|_\infty\mathbb P[|x^{y_1}_t-x^{y_2}_t|\geq\delta_\varphi ]\\ & \leq \frac{\varepsilon}{2}+2|\varphi|_\infty\mathbb P[|x^{y_1}_t-x^{y_2}_t|\geq\delta_\varphi ]. \end{split} \end{equation*} Via the non-decreasing property of $\bar v$ (first inequality, with $\delta_\varphi$ fixed at this stage) and the continuous dependence on the initial condition,~\eqref{eq cont dep} and~\eqref{eq int cont dep} (second inequality), we have \begin{equation*} \bar{v}(\delta_\varphi)\mathbb P[|x^{y_1}_t-x^{y_2}_t|\geq\delta_\varphi ]\leq \mathbb E[\bar{v}(x^{y_1}_t-x^{y_2}_t)]\leq c_t\mathbb E[\bar v(y_1-y_2)].
\end{equation*} By continuity of $\bar{v}$, for any $\varepsilon_{\bar v}>0$ there exists $\delta_{\bar v}>0$ such that $|y_1-y_2|<\delta_{\bar{v}}$ implies $\bar v(y_1-y_2)<\varepsilon_{\bar v}$. Therefore, choosing $\varepsilon_{\bar v}$ small enough that $\frac{2c_t|\varphi|_\infty}{\bar{v}(\delta_\varphi)} \bar v(y_1-y_2) < \frac{2c_t|\varphi|_\infty}{\bar{v}(\delta_\varphi)} \varepsilon_{\bar{v}} <\varepsilon/2$, we have, for $|y_1-y_2|<\delta_{\bar v}$, that $|P_t\varphi(y_1)-P_t\varphi(y_2)|<\varepsilon$. Boundedness of $P_t\varphi $ is immediate from the definition. \end{proof} \subsection{Semigroups on $C_b(\mathcal P_2(D))$} Now we consider semigroups acting on functions of measures. Define the semigroup $(\mathscr P_t)_{t\geq 0}$ by \begin{equation} \label{eq semigroup for fns of meas} \mathscr P_t\phi(\mu)=\phi(\mathscr L(x^\mu_t)) \,\,\, \text{for $\phi\in C_b(\mathcal{P}_2(D))$ and $t\geq 0$.} \end{equation} Here $x^\mu_t$ denotes a solution to~\eqref{eq mkvsde} started from $\mu$. To ensure that $\mathscr L(x^\mu_t) \in \mathcal P_2(D)$ we assume that the conditions of Theorem~\ref{thm:weakexistence} hold with $V$ satisfying $V(t,x)\geq |x|^2$. If $D=\mathbb R^d$ then we can apply the chain rule for functions of measures from e.g.~\cite{buckdahn2017mean} or~\cite{chassagneux2014classical} to obtain that for $\phi\in \mathcal C^{(1,1)}(\mathcal P_2(D))$ \begin{equation} \label{eq fpkeq meas} \begin{split} & \phi(\mathscr L(x^{\mu}_t)) - \phi(\mathscr L(x^{\mu}_0)) \\ & = \int_0^t\langle \mathscr L(x^{\mu}_s), b(s,\cdot,\mathscr L(x^{\mu}_s)) \partial_\mu \phi(\mathscr L(x^{\mu}_s)) + \text{tr}\left[a(s,\cdot,\mathscr L(x^{\mu}_s)) \partial_y \partial_\mu \phi(\mathscr L(x^{\mu}_s))\right]\rangle\,ds. \end{split} \end{equation} In the case that $D\subseteq \mathbb R^d$ we have to assume that there are $\varepsilon>0$ and $k\in \mathbb N$ such that $V(t,x) \geq |x|^{2+\varepsilon}$ for $x\in D\setminus D_k$.
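As a simple consistency check of~\eqref{eq fpkeq meas}, consider the linear functional $\phi(\mu) = \langle \mu, f\rangle$ for a fixed $f\in C_0^2(D)$; with the standard calculus for functions of measures (see e.g.~\cite{buckdahn2017mean}) one has $\partial_\mu \phi(\mu)(y) = \partial_y f(y)$ and $\partial_y \partial_\mu \phi(\mu)(y) = \partial^2_y f(y)$, so that~\eqref{eq fpkeq meas} reduces to
\[
\langle \mathscr L(x^\mu_t), f\rangle - \langle \mu, f\rangle = \int_0^t \Big\langle \mathscr L(x^\mu_s),\, b(s,\cdot,\mathscr L(x^\mu_s))\, \partial_y f + \text{tr}\big[a(s,\cdot,\mathscr L(x^\mu_s))\, \partial^2_y f\big]\Big\rangle\,ds\,,
\]
which is the weak formulation of the nonlinear Fokker--Planck equation, cf.~\eqref{eq fwd kolmogorov}.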
We consider first $x^{k,\mu}$ given by~\eqref{eq mkvsde k} started from $\mu$. By Proposition~\ref{propn ito for meas only} we have for $\phi\in \mathcal C^{(1,1)}(\mathcal P_2(D))$ that \begin{equation} \label{eq fpkeq meas on k} \begin{split} & \phi(\mathscr L(x^{k,\mu}_t)) - \phi(\mathscr L(x^{k,\mu}_0))\\ & = \int_0^t\left\langle \mathscr L(x^{k,\mu}_s), b(s,\cdot,\mathscr L(x^{k,\mu}_s)) \partial_\mu \phi(\mathscr L(x^{k,\mu}_s)) + \text{tr}\left[a(s,\cdot,\mathscr L(x^{k,\mu}_s)) \partial_y \partial_\mu \phi(\mathscr L(x^{k,\mu}_s))\right]\right\rangle\,ds. \end{split} \end{equation} From Lemma~\ref{lemma:uniform} we get that $\sup_k \sup_t \mathbb E |x^{k,\mu}_t|^{2+\varepsilon} < \infty$. Moreover, Lemma~\ref{lemma tightness} together with Prohorov's theorem implies convergence of a subsequence of the laws, and the proof of Theorem~\ref{thm:weakexistence} identifies the limit as the law of a solution to~\eqref{eq mkvsde}. We thus have convergence $W_2(\mathscr L(x^{k,\mu}_t), \mathscr L(x^\mu_t)) \to 0$ as $k\to \infty$. Due to the continuity of the coefficients $b$, $\sigma$ and since $\phi\in \mathcal C^{(1,1)}(\mathcal P_2(D))$ we can take the limit $k \to \infty$ in~\eqref{eq fpkeq meas on k} to obtain~\eqref{eq fpkeq meas}. \begin{theorem}\label{cor meas station feller} Let the conditions of Theorem~\ref{thm:weakexistence} hold with $I=[0,\infty)$, and $V(t,x)\geq|x|^2$ for $x\in D\setminus D_k$ for some $k\in \mathbb N$. If the semigroup $(\mathscr P_t)_{t\geq 0}$ given by~\eqref{eq semigroup for fns of meas} is Feller then there exists an invariant measure.
\end{theorem} We will need the following fact from~\cite{meleard1996asymptotic} to prove this theorem: Let $S$ be a Polish space and $(m_t)_{t\geq 0} $ be a family of probability measures on $\mathcal{P}(S)$, i.e. $m_t \in \mathcal P(\mathcal P(S))$. Define the intensity measure $I(m_t)$ by \[ \langle I(m_t),f \rangle=\int_{\mathcal{P}(S)} \langle \nu,f \rangle \, m_t(d\nu)\,,\,\,\,\, f \in B(S)\,. \] Here $B(S)$ denotes all the bounded measurable functions from $S$ to $\mathbb R$. Then $(m_t)_{t\geq 0}$ is tight if and only if the family of intensity measures $(I(m_t))_{t\geq 0}\subset \mathcal{P}(S)$ is tight. \begin{proof}[Proof of Theorem~\ref{cor meas station feller}] We recall that $\mathcal{P}_2(D)$ with the Wasserstein distance $W_2$ is Polish~\cite[Theorem 6.18]{villani2009}. Fix $\mu \in \mathcal P_2(D)$ and let $x^\mu$ be a solution to~\eqref{eq mkvsde bar}. We note that with $\pi_t(\mu, B) := \delta_{\mathscr L(x^\mu_t)}(B)$ we have, from~\eqref{eq semigroup for fns of meas}, that \[ \mathscr P_t \phi(\mu) = \phi(\mathscr L(x^\mu_t)) = \int_{\mathcal P_2(D)} \phi(\nu)\,\delta_{\mathscr L(x^\mu_t)}(d\nu) = \int_{\mathcal P_2(D)} \phi(\nu)\,\pi_t(\mu,d\nu)\,. \] Define the family of measures $(m^\mu_t)_{t\geq 0} \subset \mathcal P(\mathcal P_2(D))$ by \[ m^\mu_t(B) := \frac1t \int_0^t \pi_s(\mu, B)\,ds = \frac1t \int_0^t \delta_{\mathscr L(x_s^\mu)}(B)\,ds\,,\,\,\,\, B \in \mathscr B(\mathcal P_2(D))\,. \] To apply the Krylov--Bogolyubov Theorem we need to show that the family $(m^\mu_t)_{t\geq 0}$ is tight. We observe that for all $f\in B(D)$ we have \[ \begin{split} \int_{\mathcal P(D)} \langle \nu,f \rangle \, m^\mu_t(d\nu) = & \int_{\mathcal{P}(D)} \langle \nu,f \rangle \, \frac{1}{t}\int_0^t\delta_{\mathscr L (x^\mu_s)}\,(d \nu)\,ds = \frac{1}{t}\int_0^t\langle \mathscr L (x^\mu_s),f \rangle ds = \left\langle \frac{1}{t}\int_0^t \mathscr L (x^\mu_s)\,ds, f \right\rangle.
\end{split} \] Therefore $I(m^\mu_t) = \frac{1}{t}\int_0^t \mathscr L (x^\mu_s)\,ds$. It remains to show that the family of intensity measures $(I(m^\mu_t))_{t\geq 0}\subset \mathcal{P}(D)$ is tight. For $B \in \mathscr B(D)$ we have \[ I(m^\mu_t)(B) = \frac{1}{t}\int_0^t\langle \mathscr L (x^\mu_s), \mathds 1_B \rangle \,ds = \frac{1}{t}\int_0^t \mathscr L (x^\mu_s)(B) \, ds = \frac{1}{t}\int_0^t \mathbb P(x^\mu_s \in B)\, ds\,. \] By Fatou's Lemma and Lemma~\ref{lemma tightness} we know that for any $\varepsilon > 0$ there exists a sufficiently large $m_0$ such that for all $m>m_0$ we have $\sup_{t\in I}\mathbb P[x^\mu_t\notin D_m]<\varepsilon$. Therefore $I(m^\mu_t)(D\setminus D_m)=\frac{1}{t}\int_0^t\mathbb P(x^\mu_s\notin D_m)\,ds<\varepsilon$ and hence $(I(m^\mu_t))_{t\geq 0}$ is tight. \end{proof} We do not assume non-degeneracy of the diffusion; thus, in general, the semigroup $\mathscr P_t$ is not expected to be Feller. However, Lemma~\ref{cor station meas} gives a partial result. \begin{lemma}\label{cor station meas} Let the assumptions of Theorem \ref{thm:weakexistence} hold for $I=[0,\infty)$ along with either Assumption~\ref{ass gmc} or~\ref{ass int gmc}. Assume further that \[ W_{\bar v}(\mu,\nu)<\infty \,\,\,\text{ for $\mu,\nu$ in }\,\,\, \mathcal P_{v}(D):=\bigg\{\mu\in \mathcal P(D) : \int_D v(0,x,\mu)\,\mu(dx)<\infty\bigg\}\,. \] Then the semigroup $(\mathscr P_t)_{t\geq 0}$ acting on $C_b(\mathcal{P}_v(D) )$ and defined as in~\eqref{eq semigroup for fns of meas} is Feller. \end{lemma} Note that here we are considering a semigroup acting on a space of measures possibly different from the one previously considered.
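Since $W_{\bar v}(\mu,\nu)$ is obtained by taking the infimum of $\int_{D\times D} \bar v(x-y)\,\pi(dx,dy)$ over couplings $\pi$ of $(\mu,\nu)$, a concrete instance in which the extra assumption of Lemma~\ref{cor station meas} holds automatically is the quadratic case: if $\bar v(x) = |x|^2$ and $v(0,x,\mu) \geq |x|^2$, then $\mathcal P_v(D) \subseteq \mathcal P_2(D)$ and
\[
W_{\bar v}(\mu,\nu) = \inf_{\pi} \int_{D\times D} |x-y|^2\,\pi(dx,dy) = W_2(\mu,\nu)^2 < \infty \quad\text{for all } \mu,\nu \in \mathcal P_v(D)\,.
\]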
In the case where $v$ and $\bar v$ are polynomials it is often a simple matter to let the assumptions of Lemma \ref{cor station meas} replace the Feller-property assumption in Theorem \ref{cor meas station feller}, and the condition that $W_{\bar v}(\mu,\nu)< \infty$ for any $\mu,\nu\in \mathcal{P}_v(D)$ is then no longer required. \begin{proof} Fix $t\in I$ and $\mu_1, \mu_2 \in \mathcal P_{v}(D)$. From the continuous dependence on the initial condition, Theorem~\ref{thm contdep}, we have \begin{equation}\notag \begin{alignedat}{1} W_{\bar v}(\mathscr L(x^{\mu_1}_t),\mathscr L(x^{\mu_2}_t))\leq \mathbb E [ {\bar v}(x^{\mu_1}_t - x^{\mu_2}_t) ] \leq c_t \mathbb E [ {\bar v}(x^{\mu_1}_0 - x^{\mu_2}_0)] = c_t \int_{D\times D} {\bar v}(x-y) \pi(dx,dy) \,. \end{alignedat} \end{equation} Taking the infimum over all possible couplings yields \begin{equation} \label{eq feller for meas 1} \begin{alignedat}{1} W_{\bar v}(\mathscr L(x^{\mu_1}_t),\mathscr L(x^{\mu_2}_t))\leq c_t W_{\bar v}(\mu_1,\mu_2). \end{alignedat} \end{equation} Let $\varepsilon > 0$ be given. For any $\phi \in C_b(\mathcal{P}_v(D) )$ there is $\delta_\phi$ such that $W_{\bar v}(\mathscr L(x^{\mu_1}_t),\mathscr L(x^{\mu_2}_t)) < \delta_\phi$ implies that $|\phi(\mathscr L(x^{\mu_1}_t)) - \phi(\mathscr L(x^{\mu_2}_t))|< \varepsilon$. Now take $\delta := \delta_\phi / c_t$. Then, due to~\eqref{eq feller for meas 1}, if $W_{\bar v}(\mu_1,\mu_2) \leq \delta$ then $W_{\bar v}(\mathscr L(x^{\mu_1}_t),\mathscr L(x^{\mu_2}_t))<\delta_\phi$ and we get $|\mathscr P_t \phi(\mu_1) - \mathscr P_t \phi(\mu_2)| < \varepsilon$ as required. \end{proof} \section*{Acknowledgements} We are grateful to Sandy Davie and X\={i}l\'{i}ng Zh\={a}ng, both from the University of Edinburgh, for numerous discussions on the topic of this work and many helpful suggestions.
William Hammersley was supported by The Maxwell Institute Graduate School in Analysis and its Applications, a Centre for Doctoral Training funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016508/01), the Scottish Funding Council, Heriot-Watt University and the University of Edinburgh.
\section{Introduction} Reducible gauge field theories require ghosts, ghosts-for-ghosts, and higher ghosts as far as necessary, and their gauge algebras may close only on the equations of motion. The Batalin-Vilkovisky (BV) formalism gives a convenient way to describe such gauge theories, in which the action is expressed in terms of fields and antifields \cite{bv papers, bv reviews}. A string field is an assembly of such spacetime fields, and string field theory is an infinitely reducible gauge theory, in which the BV master action is expressed in terms of \textit{string fields} and \textit{string antifields} \cite{Witten:1985cc, bosonic bv, Zwiebach:1992ie, A infinity and bv}. \vspace{1mm} In bosonic string field theory, the classical BV master action is constructed by just relaxing the ghost number restriction of the classical action---there is a ready-made procedure. Unfortunately, however, one cannot apply this procedure to superstring field theory in general; how to construct its BV master action has not been clarified yet, except for very limited cases. A successful formulation of open superstring field theory was first proposed by Berkovits \cite{Berkovits:1995ab}, which is characterized by a string field living in the \textit{large} Hilbert space \cite{Friedan:1985ge} and a Wess-Zumino-Witten-like (WZW-like) action having the large gauge invariance.
Using a real parameter $t \in [0,1]$, this WZW-like action is often written in the following condensed form \begin{subequations} \begin{align} \label{S} S [\Phi ] & = - \int_{0}^{1} dt \, \Big{\langle } \, A_{t} [ t \Phi ] \, , \, Q \, A_{\eta } [t \Phi ] \, \Big{\rangle } _{\textsf{bpz}} = - \frac{1}{2} \, \big{\langle } \, \Phi \, , \, Q \, \eta \, \Phi \, \big{\rangle } _{\textsf{bpz}} + \cdots \, , \end{align} where $Q$ is the BRST operator, $\eta $ denotes the zero-mode of the eta ghost, $\langle A , B \rangle _{\textsf{bpz}}$ is the BPZ inner product of $A,B$ in the large Hilbert space, and $A_{\eta } [\Phi ] = \eta \, \Phi + \cdots $ is a nonlinear functional\footnote{A functional $A_{t} [t \Phi ] = \partial_{t} ( t \Phi ) + \cdots $ is determined by a given $A_{\eta } [\Phi ]$; for example, $A_{t} [t \Phi _{\textsf{B}}] = (\partial _{t} e^{t\Phi _{\textsf{B}}}) e^{-t \Phi _{\textsf{B}}} $. } of the dynamical string field $\Phi$ defined by a solution of the Maurer-Cartan equation \begin{align} \label{pure} 0 & \equiv \eta \, A_{\eta } [ \Phi ] - A_{\eta } [ \Phi ] \ast A_{\eta } [ \Phi ] \, . \end{align} The dots denote the nonlinear interaction terms. The symbol $\ast $ denotes Witten's associative star product \cite{Witten:1985cc}; using it, a solution of (\ref{pure}) is given by $A_{\eta } [ \Phi _{\textsf{B}} ] = (\eta \, e^{\Phi _{\textsf{B}}} ) e^{- \Phi _{\textsf{B}}}$. Since $Q$ and $\eta$ are nilpotent and graded commutative, the action is invariant under the \textit{large} gauge transformations \begin{align} \label{Phi} \delta \Phi = Q \, \Lambda + \eta \, \Omega + \dots , \end{align} \end{subequations} where $\Lambda$ and $\Omega$ are appropriate string fields of gauge parameters. This \textit{large} gauge symmetry complicates the gauge-fixing problem.
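At the linearised level, the invariance of (\ref{S}) under (\ref{Phi}) can be checked directly. Keeping only the kinetic term and the linear gauge transformation, the variation reduces (up to signs fixed by the BPZ conventions) to \begin{align*} \delta \Big( - \frac{1}{2} \, \big{\langle } \, \Phi \, , \, Q \, \eta \, \Phi \, \big{\rangle } _{\textsf{bpz}} \Big) = - \big{\langle } \, Q \, \Lambda + \eta \, \Omega \, , \, Q \, \eta \, \Phi \, \big{\rangle } _{\textsf{bpz}} \, , \end{align*} which vanishes after moving $Q$ or $\eta$ onto the gauge parameters and using $Q^2 = \eta^2 = 0$ together with $Q \, \eta = - \eta \, Q$.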
A master action for the free theory can be constructed in a simple manner \cite{Torii:2011zz, Kroyter:2012ni, Torii:2012nj}; however, its ghost--antifield parts and BV transformations take somewhat different forms from the kinetic term of (\ref{S}) and the linear part of (\ref{Phi}), respectively. By contrast, a nonlinear BV master action has proven difficult to find, even a perturbative one, and it has remained an unsolved problem up until now. \vspace{1mm} There is a more practical option: One can ignore the large Hilbert space and consider superstring field theory within the small Hilbert space \cite{Erler:2013xta}; if and only if all fields and their gauge algebras are strictly restricted to be small, one can apply the ready-made BV procedure.\footnote{Higher string ghosts and their gauge algebra can be large while keeping the dynamical string field small, in which case the ready-made procedure is not applicable \cite{Matsunaga:2017phm}: It requires the BV formalism in the large Hilbert space. } Quantum aspects of superstring fields have been studied by utilizing such a gauge-fixable formulation. It is however expected that this gauge-fixable theory is obtained from superstring field theory in the large Hilbert space via partial gauge-fixing \cite{Iimori:2013kha, Iimori:2015aea, Erler:2015rra}.\footnote{Partial gauge fixing is an operation omitting some part $\Phi ^{\eta }$ of the dynamical field $\Phi = \Phi ^{\xi } + \Phi ^{\eta }$ \textit{by hand}; at the same time, the corresponding gauge degrees of freedom of (\ref{Phi}) are appropriately omitted \textit{by hand}---just like gauge fixing. } This seems intuitively clear at the classical level, but its validity has remained unclear because we have not yet succeeded in constructing a BV master action in the large Hilbert space.
A lack of understanding of the BV formalism of superstring field theory in the large Hilbert space is not just an issue for the Berkovits formulation but a matter for all other formulations as well.\footnote{For including the Ramond sector, see \cite{Kunitomo:2015usa, Matsunaga:2015kra} for the large; see \cite{Erler:2016ybs, Konopka:2016grr} for the small. For closed superstring field theories, see \cite{Berkovits:2004xh, Matsunaga:2014wpa} for the large; see \cite{Erler:2014eba, Sen:2015uaa} for the small; see \cite{Goto:2015pqv} for intermediation. } \vspace{-1mm} \subsubsection*{BV master action in the large Hilbert space} Several differently-looking formulations of superstring field theory are currently established; all of these can be embedded into the \textit{large} Hilbert space and their gauge structures are understood in a unified manner---at least, at the classical level. In this process, even for a very trivial embedding of a gauge-fixable theory based on the small Hilbert space, the original gauge symmetry is enlarged as (\ref{Phi}) and the Wess-Zumino-Witten-like gauge structure arises \cite{Matsunaga:2017phm, Erler:2015uoa, Matsunaga:2016zsu, Erler:2017onq}. Since the gauge-fixing of WZW-like string field theory has been an unsolved problem, this result implies that a gauge-fixable theory turns into a gauge-unfixable theory after the embedding---so, the BV master action should exist even in the large Hilbert space. To see this, we consider the simplest situation, the \textit{large} $A_{\infty }/L_{\infty }$ theory: An action in the large Hilbert space which is defined by the trivial embedding of a gauge-fixable action in the small Hilbert space \cite{Erler:2015rra, Erler:2015uoa}. \vspace{1mm} In this paper, we construct BV master actions for open superstring field theory in the large Hilbert space on the basis of several different approaches.
Through these constructions, we would like to clarify the following questions, which are our motivations: ``How can we apply the BV formalism to the \textit{large} theory?'', ``Why does our ready-made BV procedure not work in the large Hilbert space?'' and ``What should we take into account to treat \textit{large} gauge symmetries?''. In most of this paper, we focus on the Neveu-Schwarz (NS) sector of open superstring field theory, the \textit{large} $A_{\infty }$ theory. One can apply exactly the same prescriptions to the Ramond sector or to the \textit{large} $L_{\infty }$ action for closed superstring field theory. \vspace{-1mm} \subsubsection*{As an example of the WZW-like action} The large $A_{\infty }$ theory is the simplest but nontrivial example of the WZW-like open string field theory \cite{Erler:2017onq}: It is completely described by a pair of mutually commutative $A_{\infty }$ algebras $(\Eta \,; \mathbf{M} )$; alternatively, it is also described by the equivalent $A_{\infty }$ pair $(\Eta - \boldsymbol{\ast } \,; \mathbf{Q} )$. The large $A_{\infty }$ action just equals the WZW-like action (\ref{S}) based on another solution of (\ref{pure}) given by \begin{align} \label{another sol} A_{\eta } [ t \Phi ] \equiv \pi _{1} \widehat{\textbf{G}} \frac{1}{1 - t \, \eta \, \Phi } \, , \end{align} where $\widehat{\mathbf{G}}$ is an $A_{\infty }$ morphism satisfying $\widehat{\mathbf{G}} \, \Eta = (\Eta - \boldsymbol{\ast }) \widehat{\mathbf{G}}$ and $\widehat{\mathbf{G}} \, \mathbf{M} = \mathbf{Q} \, \widehat{\mathbf{G}}$ given in \cite{Erler:2015rra, Erler:2015uoa}. Hence, our construction of BV master actions for the large $A_{\infty }$ theory provides evidence for the existence of BV master actions in the WZW-like formulation; it will be a first step toward clarifying the BV formalism of WZW-like superstring field theory in the large Hilbert space.
\vspace{-1mm} \subsubsection*{Organization of the article} To apply the BV formalism, we have to analyse the gauge reducibility of the theory and find the minimal set of fields--antifields, which we first explain in section 2. We also explain that the conventional BV approach provides an elegant string field representation of the BV antibracket, which is very useful for perturbative constructions. In section 3, we show that although one can construct a lower-order BV master action up to the second order in the antifield number, there exists no proper solution at higher order within the (naive) conventional BV approach. Section 4 is devoted to explaining how we can avoid the no-go result of section 3 by using the constrained BV approach \cite{Batalin:1992mk}. In this approach, the construction of a master action is equivalent to specifying the form of an (unconstrained) action $S_{\textsf{bv} }$ and constraints $\widehat{\Gamma }$. In addition, we have to specify how to assemble \textit{extra string fields} $\varphi _{\textrm{ex}}$ in string field theory. So, we have to find an appropriate triple $( S_{\textsf{bv}} , \widehat{\Gamma } , \varphi _{\textrm{ex} } )$ giving a proper solution of the constrained BV master equation, which we explain in section 5. First, we show that Berkovits' prescription \cite{Berkovits:2012np} works well and gives a correct constrained BV master action for \textit{partially gauge-fixed} superstring field theory in the large Hilbert space. In order to remove this partial-gauge-fixing assumption, one has to impose further constraints, reassemble the extra string fields, or replace the starting (unconstrained) BV action. Then, we construct appropriate constrained BV actions in the large Hilbert space (without the partial-gauge-fixing assumption) on the basis of several different prescriptions.
In particular, a constrained BV master action obtained in section 5.4 resembles the canonical transformations switching $Q$- and $\eta$-gauge symmetries \cite{Matsunaga:2017phm}, and these properties may help to see what happens in the large theory. In section 6, we revisit the conventional BV approach. On the basis of remediations inspired by the results of the constrained BV approach, we construct a BV master action within the conventional BV approach. Notations, basic identities, and some elementary facts are collected in appendix A. \section{Minimal set: String fields--antifields} In this paper, we clarify how to apply the BV formalism to superstring field theory in the large Hilbert space by using the large $A_{\infty }$ theory---the simplest example of the WZW-like formulation. As we will see, the classical action of the large $A_{\infty }$ theory \begin{subequations} \begin{align} \label{original action a} S [\Phi ] = - \frac{1}{2} \big{\langle } \, \Phi \, , \, Q \, \eta \, \Phi \, \big{\rangle } _{\textsf{bpz}} - \frac{1}{3} \big{\langle } \, \Phi \, , \, M_{2} \big{(} \eta \, \Phi , \eta \, \Phi \big{)} \big{\rangle } _{\textsf{bpz}} - \frac{1}{4} \big{\langle } \, \Phi , \, M_{3} \big{(} \eta \, \Phi , \eta \, \Phi , \eta \, \Phi \big{)} \big{\rangle } _{\textsf{bpz}} + \cdots \end{align} is given (or defined) by the trivial embedding of the \textit{small} $A_{\infty }$ theory. In this section, we analyse its gauge reducibility and give the minimal set of string fields--antifields. Let $\Phi $ be a Neveu-Schwarz (NS) open superstring field living in the large Hilbert space, which carries world-sheet ghost number $0$ and picture number $0$. The \textit{large} string field $\Phi $ reduces to a \textit{small} string field $\Psi \equiv \eta \, \Phi $ by acting with $\eta$ on it; the \textit{small} string field $\Psi$ satisfies $\eta \, \Psi = 0$ and carries world-sheet ghost number $1$ and picture number $-1$, and it lives in the small Hilbert space.
We write $\mathbf{M} = \mathbf{Q} + \mathbf{M} _{2} + \cdots $ for the NS open superstring products given by \cite{Erler:2013xta}: The $g$-th product $M_{g}$ carries world-sheet ghost number $2-g$ and picture number $g-1$. As a functional of the \textit{small} dynamical string field $\Psi$, the \textit{small} $A_{\infty }$ action $S' [\Psi ]$ is given by \begin{align*} & S' [\Psi ] = - \frac{1}{2} \lla \, \Psi \, , \, Q \, \Psi \, \rra _{\textsf{bpz}} - \frac{1}{3} \lla \, \Psi \, , \, M_{2} ( \Psi , \Psi ) \rra _{\textsf{bpz}} - \frac{1}{4} \lla \, \Psi \, , \, M_{3} \big{(} \Psi , \Psi , \Psi \big{)} \rra _{\textsf{bpz}} + \cdots \, . \end{align*} We write $\slla \eta A, \eta B \srra _{\textsf{bpz}}$ for the BPZ inner product of $\eta A$ and $\eta B$ in the small Hilbert space, which equals the BPZ inner product $\langle A , \eta B \rangle _{\textsf{bpz}} = -(-)^{A} \langle \eta A , B \rangle _{\textsf{bpz}}$ in the large Hilbert space. This small $A_{\infty }$ theory is easily gauge-fixable if and only if all string fields of gauge parameters and their gauge algebras are also restricted to the small Hilbert space: One can construct its BV master action $S'_{\textsf{bv}}$ by just relaxing the ghost number constraint as $S'_{\textsf{bv}} \equiv S' [\psi ]$, where $\psi $ carries all spacetime and world-sheet ghost numbers. By contrast, one cannot construct the BV master action $S_{\textsf{bv}}$ for the large $A_{\infty }$ action (\ref{original action a}) in a similar manner because of its WZW-like large gauge symmetries, although $S[\Phi ]$ is obtained from the trivial embedding of the gauge-fixable $S'[\Psi ]$, as we now explain. \subsection{Gauge reducibility and string ghosts} For simplicity, we use the coalgebraic and suspended notation; see appendix A.
With a real parameter $t \in [0,1]$, the large $A_{\infty }$ action (\ref{original action a}) has the following compact expression \begin{align} \label{original action} S [\Phi ] & = \int_{0}^{1} dt \, \Big{\langle } \, \Phi \, , \, \mathbf{M} \frac{1}{1- t \, \eta \, \Phi } \, \Big{\rangle } \, , \end{align} \end{subequations} where $\mathbf{M} = \mathbf{Q} + \mathbf{M} _{2} + \mathbf{M} _{3} + \cdots $ denotes the $A_{\infty }$ superstring products and $\langle A , B \rangle $ is the graded symplectic form---it is the suspended BPZ inner product (\ref{susp}), but we will simply call it ``the BPZ inner product''. This action is invariant under the following large gauge transformations \begin{align} \label{gauge invariance} \delta \Phi = \pi _{1} \big[ \hspace{-1.1mm} \big[ \mathbf{M} , \, \bLambda _{-1,0} \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \Phi } + \Eta \, \Lambda _{-1,1} \, , \end{align} where $ [ \hspace{-0.6mm} [ \mathbf{C} , \mathbf{D} ] \hspace{-0.6mm} ] $ denotes the graded commutator of coderivations $\mathbf{C}$ and $\mathbf{D}$; see appendix A. The gauge symmetry (\ref{gauge invariance}) has the following gauge reducibility \begin{align*} \delta _{g+1} \big( \delta _{g} \Lambda _{-g,p} \big) = 0 \, , \hspace{5mm} \delta _{g} \Lambda _{-g,p} = \pi _{1} \big[ \hspace{-1.1mm} \big[ \mathbf{M} , \, \bLambda _{-g-1,p} \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \Phi } + \Eta \, \Lambda _{-g-1,p+1} \, , \end{align*} where $\Lambda _{-g,p}$ denotes a $g$-th string gauge-parameter field; defining $\Lambda _{0,0} \equiv \Phi $ may be helpful. While the $g$-label runs from $0$ to infinity, the $p$-label runs from $0$ to $g$, as shown by \cite{Torii:2011zz, Kroyter:2012ni, Torii:2012nj}. Hence, the large $A_{\infty }$ theory is infinitely reducible, just as the Berkovits theory \cite{Berkovits:1995ab} is.
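At the linearised level this reducibility becomes particularly transparent: dropping the interaction terms, the tower collapses (up to suspension signs) to \begin{align*} \delta \Phi = Q \, \Lambda _{-1,0} + \eta \, \Lambda _{-1,1} \, , \hspace{5mm} \delta _{g} \Lambda _{-g,p} = Q \, \Lambda _{-g-1,p} + \eta \, \Lambda _{-g-1,p+1} \, , \end{align*} and $\delta _{g+1} \big( \delta _{g} \Lambda _{-g,p} \big) = 0$ follows from $Q^2 = \eta^2 = 0$ because the cross terms combine into $( Q \, \eta + \eta \, Q ) \, \Lambda _{-g-2,p+1} = 0$. At the $g$-th stage there are thus $g+1$ independent gauge-parameter fields $\Lambda _{-g,0}, \dots , \Lambda _{-g,g}$, in accordance with the range $0 \leq p \leq g$ of the $p$-label.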
\vspace{1mm} As well as the string field $\Phi $, these string gauge-parameter fields $\Lambda _{-g,p}$ can be expanded in terms of spacetime gauge-parameter fields $\lambda _{g,p}^{\, r}$ and world-sheet bases $| \cZ _{-g,p}^{\, r} \rangle $; see appendix A for these bases. The BV formalism implies that when the string gauge-parameter fields are given by $\Lambda _{-g,p} = \sum_{r} \lambda _{g,p}^{\, r} | \cZ _{-g,p}^{\, r} \rangle$, the corresponding \textit{string ghost fields} are obtained by replacing each spacetime parameter field $\lambda _{g,p}^{\, r}$ with the corresponding spacetime ghost $\phi _{g,p}^{\, r}$ as follows \begin{align} \label{ghost string field} \Phi _{-g,p} = \sum_{r} \phi _{g,p}^{\, r} \, \big{|} \cZ _{-g,p}^{\, r} \big{\rangle } \, . \end{align} The $r$-label distinguishes different bases carrying the same world-sheet ghost and picture numbers. We sometimes write $\Phi _{0,0} \equiv \Phi $ for simplicity. Therefore, the following set of the dynamical string field $\Phi $ and string ghosts $\Phi _{-g,p}$ arises in the theory \begin{align} \label{string ghost set} \Big{\{ } \, \Phi _{-g,p} \, \Big{|} \, \Phi _{0,0} \equiv \Phi \, ; \, 0 \leq g \, , \,\, 0 \leq p \leq g \, \Big{\} } \, . \end{align} More precisely, all of the spacetime fields which are coefficients of these string fields (\ref{string ghost set}) are needed to fix the gauge symmetries (\ref{gauge invariance}). In other words, a set of spacetime ghost fields $A'_{\textrm{min}} \equiv \{ \, \phi _{g,p}^{r} \, | \, 0<g , \, 0 \leq p \leq g \, ; r \in \mathbb{N} \, \}$ is required. We write $A_{0}$ for the set of spacetime dynamical fields, $A_{0} \equiv \{ \phi _{0,0}^{r} \}_{r\in \mathbb{N}}$.
The pair $A_{\textrm{min}} \equiv A_{0} \oplus A'_{\textrm{min}}$ of dynamical fields $A_{0}$ and these ghosts $A'_{\textrm{min}}$ requires their spacetime antifields $A_{\textrm{min}}^{\ast } = \{ (\phi _{g,p}^{r})^{\ast } | \, 0 \leq g , \, 0 \leq p \leq g \, ; r \in \mathbb{N} \, \}$ in the BV formalism. Hence, the minimal set of spacetime fields--antifields is given by \begin{align} \label{minimal set} \cA _{\textrm{min}} \equiv A_{\textrm{min}} \oplus A_{\textrm{min}}^{\ast } = \Big{\{ } \, \phi _{g,p}^{r} \, , \, (\phi _{g,p}^{r} )^{\ast } \, \Big{|} \, 0 \leq g \, , \,\, 0 \leq p \leq g \,\, ; \, r \in \mathbb{N} \, \Big{\} } \, . \end{align} On this minimal set, one can define a non-degenerate antibracket \begin{align} \label{minimal antibracket} \big{(} \, F \, , \, G \, \big{)}_{\textrm{min}} \equiv \sum_{g \geq 0} \sum_{p, r} \bigg[ \, \frac{\overset{\leftarrow }{\partial } F}{\partial \phi _{g,p}^{\, r} } \, \frac{\overset{\rightarrow }{\partial } G}{\partial (\phi _{g,p}^{\, r})^{\ast } } \, - \, \frac{\overset{\leftarrow }{\partial } F}{\partial (\phi _{g,p}^{\, r})^{\ast } } \, \frac{\overset{\rightarrow }{\partial } G}{\partial \phi _{g,p}^{\, r} } \, \bigg] \, , \end{align} where $\frac{\overset{\rightarrow }{\partial } F}{\partial \phi }$ is the left-derivative, $\frac{\overset{\leftarrow }{\partial } F}{\partial \phi }$ is the right-derivative, and $\frac{\overset{\rightarrow }{\partial } F}{\partial \phi } = (-)^{\phi (F+1)} \frac{\overset{\leftarrow }{\partial } F}{\partial \phi }$ holds. One can quickly find $( F ,G )_{\textrm{min}} = -(-)^{(F+1)(G+1)} ( G , F )_{\textrm{min}}$ in this expression. 
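For instance, on the elementary fields--antifields, the antibracket (\ref{minimal antibracket}) takes the canonical Darboux form \begin{align*} \big{(} \, \phi _{g,p}^{\, r} \, , \, (\phi _{h,q}^{\, s})^{\ast } \, \big{)}_{\textrm{min}} = \delta _{g,h} \, \delta _{p,q} \, \delta ^{r,s} \, , \hspace{5mm} \big{(} \, \phi _{g,p}^{\, r} \, , \, \phi _{h,q}^{\, s} \, \big{)}_{\textrm{min}} = \big{(} \, (\phi _{g,p}^{\, r})^{\ast } \, , \, (\phi _{h,q}^{\, s})^{\ast } \, \big{)}_{\textrm{min}} = 0 \, , \end{align*} which follows immediately from the definition since each field or antifield appears in exactly one of the two derivatives.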
\subsection{String antifields and string antibracket} In the conventional BV approach, for a given string (ghost) field $\Phi _{-g,p}$ of (\ref{ghost string field}), its string antifield $(\Phi _{-g,p})^{\ast }$ is introduced by assigning $|\cZ _{-g,p}^{\, r \, \ast }\rangle $ to each spacetime antifield $( \phi _{g,p}^{\, r} )^{\ast }$, \begin{align} \label{naive string antifield} (\Phi _{-g,p})^{\ast } = \sum_{r} (\phi _{g,p}^{\, r})^{\ast } \, \big{|} \cZ _{-g,p}^{\, r \, \ast } \big{\rangle } \, , \end{align} where $|\cZ _{-g,p}^{\, r \, \ast }\rangle $ is the dual basis for $|\cZ _{-g,p}^{\, r}\rangle $ such that $\langle \cZ _{-g,p}^{\, r \, \ast } , \cZ _{-h,q}^{\, s} \rangle = - \delta ^{r,s} \, \delta _{g,h} \, \delta _{p,q}$\,. Since dual bases are uniquely determined for given bases, this type of string antifield seems to be most natural. See appendix A for details of these bases. The set of string fields (\ref{string ghost set}) and their string antifields defined by (\ref{naive string antifield}) gives the minimal set of \textit{string fields--antifields} \begin{align} \label{string fields--antifields} \cA _{\Phi } |_{\textrm{min}} \equiv \Big{\{ } \, \Phi _{-g,p} \, , (\Phi _{-g,p} )^{\ast } \, \Big{|} \, 0 \leq g \, , \, 0 \leq p \leq g \, \Big{\} } \, . 
\end{align} As shown by \cite{Kroyter:2012ni}, this type of string antifield (\ref{naive string antifield}) provides an elegant string field representation of the BV antibracket (\ref{minimal antibracket}) in the Darboux form, \begin{align} \label{string field rep} \big{(} \, F \, ,\, G \, \big{)} _{\textrm{min}} = \sum_{g,p} \bigg[ \, F \frac{\overset{\leftarrow }{\partial } }{\partial \Phi _{-g,p}} \cdot \frac{\overset{\rightarrow }{\partial } }{\partial (\Phi _{-g,p})^{\ast }} G - F \frac{\overset{\leftarrow }{\partial } }{\partial (\Phi _{-g,p})^{\ast }} \cdot \frac{\overset{\rightarrow }{\partial } }{\partial \Phi _{-g,p}} G \, \bigg] \, , \end{align} where $A \cdot B$ denotes the BPZ inner product in the large Hilbert space $A \cdot B \equiv \langle A , B \rangle$\,. Note that $F$ and $G$ are functionals of (\ref{string fields--antifields}), which can be identified with functionals of (\ref{minimal set}). One can define \textit{string-field derivatives} of a functional $F=F[\Phi _{\alpha }]$ of string fields $\Phi _{\alpha }$ by utilizing its total derivative $\delta F $. 
When a given string field $\Phi _{\alpha }$ consists of spacetime fields $\{ \phi _{\alpha }^{r} \} _{r}$ as $\Phi _{\alpha } = \sum_{r} \phi _{\alpha }^{r} | \cZ_{\alpha }^{r} \rangle $, we require that the total derivative of $F = F[\Phi _{\alpha }] = F [\{ \phi _{\alpha }^{r} \} _{r}]$ has the following expression \begin{align*} \bigg{\langle } \, \delta \Phi _{\alpha } \, , \, \frac{\overset{\rightarrow }{\partial } F}{\partial \Phi _{\alpha }} \, \bigg{\rangle } \equiv \sum_{r} \delta \phi _{\alpha }^{r} \, \frac{\overset{\rightarrow }{\partial } }{\partial \phi _{\alpha }^{r}} F \, , \hspace{5mm} \bigg{\langle } \, \frac{\overset{\leftarrow }{\partial } F}{\partial \Phi _{\alpha }} \, , \, \delta \Phi _{\alpha } \, \bigg{\rangle } \equiv F \sum_{r} \frac{\overset{\leftarrow }{\partial } }{\partial \phi _{\alpha }^{r}} \, \delta \phi _{\alpha }^{r} \, , \end{align*} which provides the string-field derivatives. Note that the relation of the left- and right-derivatives $\frac{\overset{\rightarrow }{\partial } F}{\partial \phi } = (-)^{\phi (F+1)} \frac{\overset{\leftarrow }{\partial } F}{\partial \phi }$ determines that of string-field derivatives. We assume that the variations of string fields (\ref{ghost string field}) and string antifields (\ref{naive string antifield}) are given by \begin{align*} \delta \Phi _{-g,p} \equiv \sum_{r} \delta \phi_{g,p}^{r} \, \big{|} \cZ _{-g,p}^{\, r} \big{\rangle } \, , \hspace{5mm} \delta (\Phi _{-g,p})^{\ast } = \sum_{r} \delta (\phi _{g,p}^{r})^{\ast } \, \big{|} \cZ _{-g,p}^{\, r \, \ast } \big{\rangle } \, . \end{align*} Then, by using the relations (\ref{1>g}) and (\ref{g>1}), the BV antibracket (\ref{minimal antibracket}) reduces to (\ref{string field rep}). \vspace{1mm} Let us consider the free action $K[\Phi ]$ and its gauge variation---the kinetic term of (\ref{S}) and the linear part of (\ref{Phi}). Its master action $K_{\textsf{bv}}$ gives the kinetic term of the master action $S_{\textsf{bv}}$. 
As shown in \cite{Kroyter:2012ni}, a master action for the free theory is given by \begin{align} \label{free} K_{\textsf{bv}}[\Phi , \Phi ^{\ast }] = \frac{1}{2} \big{\langle } \, \Phi \, , \, Q \, \eta \, \Phi \, \big{\rangle } + \sum_{g \geq 0} \sum_{p=0}^{g} \big{\langle } ( \Phi _{-g,p} )^{\ast } , \, Q \, \Phi _{-1-g,p} + \eta \, \Phi _{-1-g,p+1} \, \big{\rangle } \, , \end{align} which is indeed a functional of the string fields (\ref{ghost string field}) and the string antifields (\ref{naive string antifield}). \section{Conventional BV approach} In the conventional BV approach, we require the following three properties to obtain a master action $S_{\textsf{bv}} = S_{\textsf{bv}} [\varphi , \varphi ^{\ast }]$ as a functional of \textit{string fields} $\varphi $ and \textit{string antifields} $\varphi ^{\ast }$: \begin{enumerate}[label=\textbf{\roman*)}, leftmargin=!] \item Regarding the states: The master action $S_{\textsf{bv}}$ consists of the dynamical string field, the string ghost fields introduced in (\ref{ghost string field}), and the string antifields given by (\ref{naive string antifield}). \item Regarding the operators and products: The master action $S_{\textsf{bv}}$ consists only of the operators and products which appear in the original action (\ref{original action}) and its gauge symmetry algebra (\ref{gauge invariance}), namely, $\mathbf{M}$, $\Eta$, and the large BPZ inner product. \item The master action $S_{\textsf{bv}}$ does not include explicit insertions of $\bxi $ or $\mathbf{M} ^{-1}$: such insertions would effectively circumvent requirement (i) or (ii). \end{enumerate} However, although a perturbative master action $S_{\textsf{bv}} = S^{(0)} + S^{(1)} + S^{(2)} + \dots $ can be obtained up to second order, this (naive) conventional BV approach breaks down at the third order $S^{(3)}$ of the antifield number expansion: there is no solution satisfying the above three requirements, as we explain in this section.
A reader interested only in constructing BV master actions can skip this section, which is independent of the rest of the paper. \vspace{1mm} As expected, if one uses $\bxi $ or $\mathbf{M} ^{-1}$ insertions explicitly, a master action can be constructed.\footnote{Then, some of the higher gauge tensors have to include $\bxi $ or $\mathbf{M} ^{-1}$ explicitly, although neither appears in (\ref{original action}) or (\ref{gauge invariance}). The results of sections 5 and 6 imply that there may be a difference between the gauge tensors based on string fields--antifields and those based on spacetime fields--antifields. } This implies that either the string ghost fields or the string antifields must be reassembled, or new products which appear neither in the action nor in its gauge invariance must be used, to obtain the master action. In section 5, keeping the forms of the string ghost fields and the requirement on operators and products, we construct the master action by just reassembling the (physical) string antifields. \subsection{Naive construction breaks down} We perturbatively solve the master equation using the antifield number expansion. We write $\textrm{afn}[\phi ]$ for the antifield number of the spacetime field or antifield $\phi$. It is assigned to the spacetime antifields only: $\textrm{afn}[ \phi ]=0$ if $\phi $ is not an antifield. In particular, $\textrm{afn}[c]=0$ for $c \in \mathbb{C}$, and a world-sheet basis carries no antifield number. The antifield number is additive with respect to multiplication, $\textrm{afn}[\phi \psi ]=\textrm{afn}[\phi ] + \textrm{afn}[\psi ]$, and thus $\textrm{afn}[\phi ] + \textrm{afn}\big[ \frac{\partial }{\partial \phi }\big]=0$\,.
We find \begin{align*} \textrm{afn} [ \phi _{g,p} ] = - \textrm{afn} \bigg[ \frac{\overset{\rightarrow }{\partial } }{\partial \phi _{g,p}} \bigg] = 0 \, , \hspace{3mm} \textrm{afn} \big[ (\phi _{g,p} )^{\ast } \big] = - \textrm{afn} \bigg[ \frac{\overset{\rightarrow }{\partial } }{\partial (\phi _{g,p})^{\ast }} \bigg] = g + 1 \, , \end{align*} where $\phi _{g,p}$ denotes a $g$-th ghost. A master action $S_{\textsf{bv}}$ is a functional of all fields--antifields appearing in the minimal set and one can expand it with respect to the antifield number \begin{align*} S_{\textsf{bv}} = S + \sum_{a=1}^{\infty } S^{(a)} \, , \end{align*} where $S^{(a)}$ denotes the antifield number $a$ part of the master action $S_{\textsf{bv}}$, namely, $\textrm{afn}[S^{(a)}] = a$\,. The original action is the antifield number zero part $S^{(0)} \equiv S$, which is the initial condition of the BV formalism. Because of $\textrm{afn} \big[ \frac{\partial S^{(a)} }{\partial (\phi _{g,p} )^{\ast } } \big] = a-g-1$ and $\textrm{afn} \big[ \frac{\partial S^{(a)} }{\partial \phi _{g,p} } \big] = a$\,, the antifield number $a$ part of the master equation is given by \begin{subequations} \begin{align} \label{a-part of BV eq} \frac{1}{2} \big{(} \, S_{\textsf{bv}} \, , \, S_{\textsf{bv}} \, \big{)} \big{|}^{(a)}_{\textrm{min}} \equiv \sum_{b=0}^{a} \sum_{g=0}^{b} \bigg[ \sum_{p=0}^{g} S^{(a-[b-g])} \frac{\overset{\leftarrow }{\partial }}{\partial \phi _{g,p} } \frac{\overset{\rightarrow }{\partial }}{\partial (\phi _{g,p})^{\ast } } S^{(1+b)} \bigg] = 0 \, . \end{align} Note that $(S_{\textsf{bv}} , S_{\textsf{bv}})|_{\textrm{min}}^{(a)}$ consists of $S^{(0)}, \cdots , S^{(a+1)}$ because of $\frac{\partial S^{(a)}}{\partial \phi _{g,p}}= \frac{\partial S^{(a)}}{\partial (\phi _{g,p})^{\ast }} = 0$ for $a \leq g$\,. 
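For instance, these assignments fix the antifield number of each term of the master action by simple additivity; a worked example, using $\textrm{afn}[(\phi _{g,p})^{\ast }]=g+1$ and $\textrm{afn}[\phi _{g,p}]=0$: \begin{align*} \textrm{afn} \Big[ \big{\langle } \, (\Phi _{-1,p} )^{\ast } \, , \, \eta \, \Phi _{-2,p+1} \, \big{\rangle } \Big] = \textrm{afn} \big[ (\phi _{1,p})^{\ast } \big] + \textrm{afn} \big[ \phi _{2,p+1} \big] = 2 + 0 = 2 \, . \end{align*} This bookkeeping constrains which fields--antifields can appear in each $S^{(a)}$.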
By solving these, one can construct a solution $S_{\textsf{bv}}$ of the master equation \begin{align*} \big{(} \, S_{\textsf{bv}} \, , \, S_{\textsf{bv}} \, \big{)}_{\textrm{min}} = \sum_{a=0}^{\infty } \big{(} \, S_{\textsf{bv}} \, , \, S_{\textsf{bv}} \, \big{)} \big{|}^{(a)}_{\textrm{min}} = 0 \, . \end{align*} In the conventional BV approach, (\ref{a-part of BV eq}) has the following string field representation \begin{align} \frac{1}{2} \big{(} \, S_{\textsf{bv}} \, , \, S_{\textsf{bv}} \, \big{)} \big{|}^{(a)}_{\textrm{min}} \equiv \sum_{b=0}^{a} \sum_{g=0}^{b} \bigg[ \sum_{p=0}^{g} S^{(a-[b-g])} \bigg{\langle } \frac{\overset{\leftarrow }{\partial }}{\partial \Phi _{-g,p} } \, , \, \frac{\overset{\rightarrow }{\partial }}{\partial (\Phi _{-g,p})^{\ast } } \bigg{\rangle } S^{(1+b)} \bigg] = 0 \, . \end{align} \end{subequations} Note that the antifield number expansion of $S_{\textsf{bv}}$ defines the following odd vector field $\overset{\rightarrow }{\Delta }$ lowering the antifield number by one, $\textrm{afn}[\overset{\rightarrow }{\Delta }]=-1$, \begin{align*} \overset{\rightarrow }{\Delta } \equiv \sum_{g=0}^{\infty } \sum_{p=0}^{g} S^{(g)} \frac{\overset{\leftarrow }{\partial }}{\partial \phi _{g,p} } \, \frac{\overset{\rightarrow }{\partial }}{\partial (\phi _{g,p})^{\ast } } = \sum_{g=0}^{\infty } \sum_{p=0}^{g} \frac{\overset{\leftarrow }{\partial } S^{(g)} }{\partial \Phi _{-g,p} } \cdot \frac{\overset{\rightarrow }{\partial }}{\partial (\Phi _{-g,p})^{\ast } } \, , \end{align*} where the dot denotes the BPZ inner product in the large Hilbert space. The odd vector field $\overset{\rightarrow }{\Delta }$ acting on $S^{(a+1)}$ is uniquely determined by given lower parts $S^{(0)}, \cdots , S^{(a)}$. 
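For orientation (this is just the $g=0$ term of the definition above, no further input), the lowest-order action of $\overset{\rightarrow }{\Delta }$ on any functional $F$ reads \begin{align*} \overset{\rightarrow }{\Delta } \, F \, \Big{|}_{g=0} = \frac{\overset{\leftarrow }{\partial } S }{\partial \Phi } \cdot \frac{\overset{\rightarrow }{\partial } F}{\partial (\Phi )^{\ast } } \, , \end{align*} i.e.\ it pairs the equation of motion of the original action $S$ with the $(\Phi )^{\ast }$-dependence of $F$; this is the origin of the $(\textrm{e.o.m.})$-terms encountered below.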
The first equation $( S_{\textsf{bv}} , S_{\textsf{bv}} )|^{(0)}_{\textrm{min}} = 0$ reduces to $\overset{\rightarrow }{\Delta } S^{(1)} = 0$ because of $\frac{\partial S^{(a)}}{\partial (\phi _{g,p})^{\ast }}=0$ for $a \leq g$\,; a solution is given by \begin{align} \label{S^{(1)}} S^{(1)} = \Big{\langle } \, (\Phi )^{\ast } \, , \, \textrm{M} (\Phi _{-1,0} ) + \eta \, \Phi _{-1,1} \Big{\rangle } \, , \end{align} where we wrote $\textrm{M} ( \Phi _{0} ) \equiv \pi _{1} \, \big[ \hspace{-1.1mm} \big[ \mathbf{M} , \bPhi _{0} \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \Phi }$ for brevity. The antifield number 2 part $S^{(2)}$ is determined by the second equation $(S^{(1)} , S^{(1)})_{\textrm{min}} + \overset{\rightarrow }{\Delta } S^{(2)} = 0$\,. To be proper, $S^{(2)}$ has to include $\Phi _{-2,p}$ and $(\Phi _{-1,p})^{\ast }$ in addition to $\Phi _{-1,p}$ and $(\Phi )^{\ast }$. We find a solution \begin{align} \label{S^{(2)}} S^{(2)} & = \Big{\langle } \, (\Phi _{-1,0} )^{\ast } \, , \, \textrm{M} ( \Phi _{-2,0} ) + \frac{1}{2} \textrm{M} ( \Phi _{-1,0} , \eta \, \Phi _{-1,0} ) + \eta \, \Phi _{-2,1} \Big{\rangle } \, \nonumber\\ & \hspace{5mm} + \Big{\langle } (\Phi _{-1,1} )^{\ast } \, , \, \textrm{M} ( \Phi _{-2,1} ) - \frac{1}{2} \textrm{M} \big{(} \Phi _{-1,0} , \textrm{M} ( \Phi _{-1,0} ) \big{)} + \eta \, \Phi _{-2,2} \Big{\rangle } \, \nonumber\\ & \hspace{5mm} + \Big{\langle } (\Phi )^{\ast } \, , \, \frac{1}{2} \textrm{M} \big{(} \Phi _{-2,0} , ( \Phi )^{\ast } \big{)} + \frac{1}{4} \textrm{M} \big{(} \Phi _{-1,0} , \eta \, \Phi _{-1,0} , ( \Phi )^{\ast } \big{)} \Big{\rangle } \, , \end{align} where we defined $\textrm{M} ( \Phi _{1} , \Phi _{2} ) \equiv \pi _{1} \, \big[ \hspace{-1.1mm} \big[ [ \hspace{-0.6mm} [ \mathbf{M} , \bPhi _{1} ] \hspace{-0.6mm} ] , \bPhi _{2} \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \Phi } = (-)^{\Phi _{1} \Phi _{2} } \textrm{M} (\Phi _{2} , \Phi _{1} )$ for brevity. 
The second line includes a double $\mathbf{M}$-term, and the $\mathbf{M}$-terms and $\Eta$-terms appear in a symmetric manner. We introduce the following graded symmetric function of $n$ inputs \begin{align*} \textrm{M} ( \Phi _{1} , \dots , \Phi _{n} ) & \equiv \pi _{1} \, \Big[ \hspace{-1.3mm} \Big[ ... \big[ \hspace{-1.1mm} \big[ [ \hspace{-0.6mm} [ \mathbf{M} , \bPhi _{1} ] \hspace{-0.6mm} ] , \bPhi _{2} \big] \hspace{-1.09mm} \big] , \dots \big] \hspace{-1.09mm} \big] , \bPhi _{n} \Big] \hspace{-1.3mm} \Big] \frac{1}{1-\eta \, \Phi } \, , \end{align*} which satisfies $\textrm{M}( ... , A , B , ... ) = (-)^{AB} \textrm{M} ( ... , B , A , ... )$\,. As with (\ref{S^{(1)}}) and (\ref{S^{(2)}}), we would like to construct the next correction $S^{(3)}$ satisfying the antifield number $2$ part of the master equation (\ref{a-part of BV eq}). However, there is no solution based on the (naive) conventional BV approach: One cannot construct higher $S^{(a \geq 3)}$ as a functional of $\Phi _{-g,p}$ and $(\Phi _{-g,p})^{\ast }$ unless using projectors acting on string fields--antifields.\footnote{For a given string field $\varphi $, we split it as $\varphi = \varphi _{1} + \cdots + \varphi _{n}$; for each split-part $\varphi _{a}$, we introduce its (split) string antifield $(\varphi _{a} )^{\ast }$, which may satisfy $\varphi ^{\ast } = (\varphi _{1})^{\ast } + \cdots + (\varphi _{n})^{\ast }$. As we will see in sections 5 and 6, the master action $S_{\textsf{bv}}$ can be constructed as a functional of these split-parts $\varphi _{1} , ... , \varphi _{n}$ of the string (anti-)field, $S_{\textsf{bv}} = S_{\textsf{bv}} [ \varphi _{a} , (\varphi _{a})^{\ast } ]$. It is not a functional of the sum $\varphi = \varphi _{1} + \cdots + \varphi _{n}$ or $\varphi ^{\ast } = (\varphi _{1} )^{\ast } + \cdots + (\varphi _{n} )^{\ast }$. So we need projectors $\cP _{a}$ such that $\cP _{a} \varphi = \varphi _{a}$.} The above lower order solutions $S^{(1)}$ and $S^{(2)}$ uniquely determine the following quantity, \begin{align*} F ^{(2)} & \equiv \bigg{\langle } \frac{ \overset{\leftarrow }{\partial } S^{(2)} }{\partial \Phi } \, , \, \frac{\overset{\rightarrow }{\partial } S^{(1)} }{\partial (\Phi )^{\ast } } \bigg{\rangle } + \bigg{\langle } \frac{\overset{\leftarrow }{\partial } S^{(1)} }{\partial \Phi } \, , \, \frac{\overset{\rightarrow }{\partial } S^{(2)} }{\partial (\Phi )^{\ast } } \bigg{\rangle } + \sum_{p=0}^{1} \bigg{\langle } \frac{\overset{\leftarrow }{\partial } S^{(2)} }{\partial \Phi _{-1,p} } \, , \, \frac{\overset{\rightarrow }{\partial } S^{(2)} }{\partial (\Phi _{-1,p})^{\ast } } \bigg{\rangle } \, . \end{align*} The antifield number 2 part of the master equation (\ref{a-part of BV eq}) is equivalent to \begin{align} \label{broken eq} F^{(2)} + \overset{\rightarrow }{\Delta } \, S^{(3)} = 0 \, . \end{align} The odd vector field $\overset{\rightarrow }{\Delta }$ acting on $S^{(3)}$ is uniquely determined by $S^{(0)}$, $S^{(1)}$, and $S^{(2)}$. Note that a proper $S^{(3)}$ must include $\Phi _{-3,p}$ and $(\Phi _{-2,p})^{\ast }$. The equation (\ref{broken eq}) should hold for each combination of string field variables, and one can find solutions for generic combinations; unfortunately, however, (\ref{broken eq}) has no solution for the combination of string field variables $(\Phi _{-1,0} , \Phi _{-1,0} , \Phi _{-1,0} , (\Phi _{-1,1})^{\ast })$.
We find that $\overset{\rightarrow }{\Delta } S^{(3)}$ has the following form, \begin{align*} \overset{\rightarrow }{\Delta } S^{(3)} = \bigg{\langle } (\textrm{e.o.m.}) \, , \, \frac{\overset{\rightarrow }{\partial } S^{(3)} }{\partial (\Phi )^{\ast } } \bigg{\rangle } + \bigg{\langle } \eta \, (\Phi _{-1,1} )^{\ast } , \, \frac{\overset{\rightarrow }{\partial } S^{(3)} }{\partial (\Phi _{-2,2})^{\ast } } \bigg{\rangle } + \bigg{\langle } \textrm{M} \big{(} ( \Phi _{-1,1} )^{\ast } \big{)} , \, \frac{\overset{\rightarrow }{\partial } S^{(3)} }{\partial (\Phi _{-2,1})^{\ast } } \bigg{\rangle } + \cdots , \end{align*} where the last dots denote the terms which consist of the other pairs of variables. To satisfy (\ref{broken eq}), the corresponding terms of $F^{(2)}$ must be rewritable in the same form. Unfortunately, we find \begin{align*} F^{(2)} = \Big{\langle } \, (\textrm{e.o.m.}) , \, F \, \Big{\rangle } + \Big{\langle } \, \eta \, (\Phi _{-1,1} )^{\ast } , \, F_{\eta } \, \Big{\rangle } + \Big{\langle } \, \textrm{M} \big{(} ( \Phi _{-1,1} )^{\ast } \big{)} , \, F_{\textrm{M}} \, \Big{\rangle } + \Big{\langle } ( \Phi _{-1,1})^{\ast } , \, E \, \Big{\rangle } + \dots \, , \end{align*} where the dots denote the terms which consist of the other pairs of variables and the explicit forms of $F$, $F_{\textrm{M}}$, $F_{\eta }$, and $E$ are given by \begin{align*} F & \equiv - \frac{1}{4} \Big[ \textrm{M} \Big{(} \Phi _{-1,0} , \eta \, \Phi _{-1,0} , \textrm{M} \big{(} \Phi _{-1,0} , (\Phi _{-1,1})^{\ast } \big{)} \Big{)} + \textrm{M} \Big{(} \Phi _{-1,0} , \textrm{M} \big{(} \Phi _{-1,0} , \eta \, \Phi _{-1,0} , (\Phi _{-1,1})^{\ast } \big{)} \Big{)} \Big] \, , \\ F_{\eta } & \equiv - \frac{1}{4} \Big[ \textrm{M} \big{(} \Phi _{-1,0} , \textrm{M} (\Phi _{-1,0}) , \textrm{M} (\Phi _{-1,0}) \big{)} + \textrm{M} \big{(} \Phi _{-1,0} , \eta \, \Phi _{-1,0} , \textrm{M} (\Phi _{-1,0} ) \big{)} \Big] \, , \\ F_{\textrm{M}} & \equiv \frac{1}{4} \textrm{M} \big{(}
\Phi _{-1,0} , \eta \, \Phi _{-1,0} , \textrm{M} ( \Phi _{-1,0} ) \big{)} \, , \hspace{15mm} E \equiv \frac{1}{4} \textrm{M} \big{(} \Phi _{-1,0} , \textrm{M} ( \Phi _{-1,0} , \eta \, \Phi _{-1,0} ) \big{)} \, . \end{align*} The nonzero fourth term is extra: it breaks (\ref{broken eq}) and cannot be absorbed by the first three terms. Hence, although one can construct a lower-order solution $S_{\textsf{bv}} = S + S^{(1)} + S^{(2)} + O(3)$, there is no solution for higher $S^{(a>2)}$ based on the (naive) conventional BV approach. \subsection{On the gauge tensor formulae} The BV master equation is a generating function of the identities satisfied by the gauge tensors---what does the breakdown of (\ref{broken eq}) mean? Let us consider the gauge tensors arising from (\ref{gauge invariance}). We write $\cR^{\alpha }_{\beta }$, $\cT ^{\alpha }_{\beta \gamma }$, or $\cE ^{\alpha \beta }_{\gamma \delta }$ for gauge tensors in the sense of \begin{align*} \delta \Phi = \cR ^{\Phi }_{\alpha } (\Lambda _{\alpha } ) \, , \hspace{3mm} [ \hspace{-0.6mm} [ \delta _{1} , \delta _{2} ] \hspace{-0.6mm} ] \Phi = \cR ^{\Phi }_{\gamma } \, \cT ^{\gamma }_{\alpha _{1} \alpha _{2} } ( \Lambda _{\alpha _{1}}, \Lambda _{\alpha _{2}} ) - \frac{\partial S}{\partial \Phi '} \, \cE ^{\Phi ' \Phi }_{\alpha _{2} \alpha _{1}} (\Lambda _{\alpha _{1}} , \Lambda _{\alpha _{2}} ) \, , \end{align*} where $\cR ^{\Phi } \equiv \cR ^{(0,0)}$ and the Greek indices denote appropriate world-sheet ghost and picture numbers: $\cR ^{\Phi }_{\alpha } (\Lambda _{\alpha } ) = \cR ^{\Phi }_{(-1,0)}( \Lambda _{-1,0} ) + \cR ^{\Phi }_{(-1,1)} (\Lambda _{-1,1} )$.
These $\cR $, $\cT$, and $\cE$ define the following gauge tensors, which include terms corresponding to $(\Phi _{-1,0} , \Phi _{-1,0} , \Phi _{-1,0} , (\Phi _{-1,1})^{\ast })$, \begin{align*} \cA _{\alpha \beta \gamma }^{\delta } & \equiv \frac{1}{3} \sum _{\textrm{cyclic}} \bigg[ \frac{\partial \cT ^{\delta }_{\alpha \beta } }{\partial \Phi '} \, \cR ^{\Phi '}_{\gamma } - \cT ^{\delta }_{\alpha \iota } \, \cT ^{\iota }_{\beta \gamma } \bigg] \, , \\ \cB ^{\delta \iota }_{\alpha \beta \gamma } & \equiv \frac{1}{3} \sum_{\textrm{cyclic}} \bigg[ \frac{\partial \cE ^{\delta \iota }_{\alpha \beta } }{\partial \Phi '} \, \cR ^{\Phi '}_{\gamma } - \cE ^{\delta \iota }_{\alpha \beta } \, \cT^{\delta }_{\beta \gamma } - \frac{\partial \cR ^{\delta }_{\alpha }}{\partial \Phi '} \, \cE ^{\Phi' \iota }_{\beta \gamma } + \frac{\partial \cR ^{\iota }_{\alpha }}{\partial \Phi ' } \, \cE^{\Phi ' \delta }_{\beta \gamma } \bigg] \, . \end{align*} We find the following relation expressing the on-shell Jacobi identity of the gauge transformations \begin{align*} \cR ^{\Phi }_{\delta } \, \cA ^{\delta } _{\alpha \beta \gamma } = \frac{\partial S}{\partial \Phi '} \, \cB ^{\Phi ' \Phi }_{\alpha \beta \gamma } \, \hspace{3mm} \Longleftrightarrow \hspace{3mm} \sum_{\textrm{cyclic}} \big[ \hspace{-1.1mm} \big[ [ \hspace{-0.6mm} [ \delta _{1} , \delta _{2} ] \hspace{-0.6mm} ] , \delta _{3} \big] \hspace{-1.09mm} \big] \Phi = 3 \frac{\partial S}{\partial \Phi '} \, \cB ^{\Phi ' \Phi }_{\alpha _{3} \alpha _{2} \alpha _{1} } \big{(} \Lambda _{\alpha _{1} } , \Lambda _{\alpha _{2} } , \Lambda _{\alpha _{3} } \big{)} \, .
\end{align*} Then, the master equation (\ref{broken eq}) is equivalent to the existence of the higher gauge tensors $\cF _{\alpha \beta \gamma }^{\delta }$ and $\cD _{\alpha \beta \gamma }^{\delta \iota }$ satisfying \begin{align} \label{gauge tensor eq} \cA ^{\delta }_{\alpha \beta \gamma } = - \cR ^{\delta }_{\iota } \, \cF ^{\iota }_{\alpha \beta \gamma } + \frac{\partial S}{\partial \Phi '} \, \cD ^{\Phi ' \delta }_{\alpha \beta \gamma } \, . \end{align} The right-hand side has the same form as $\overset{\rightarrow }{\Delta }S^{(3)}$ of (\ref{broken eq}); the left-hand side provides the same kind of extra terms as $F^{(2)}$ of (\ref{broken eq}). Note, however, that the relation (\ref{gauge tensor eq}) should hold automatically when the set of independent gauge generators is complete. Since these gauge tensors $\cR$, $\cT$, $\cE$, ... are naively defined as functionals of \textit{string fields} $\varphi $, this result implies that one has to consider them as functionals of \textit{spacetime fields}, or rather of finer parts $\varphi _{a}$ of the total string fields $\varphi = \varphi _{1} + \cdots + \varphi _{n}$. The master action $S_{\textsf{bv}}$ will consist of these finer gauge tensors. \section{Nonminimal set: Constrained string fields--antifields} The previous no-go result implies that we should not naively regard $S_{\textsf{bv}}$ as a functional of string fields--antifields. We should instead consider finer parts $\varphi _{a}$ of the string fields--antifields $\varphi = \sum_{a} \varphi _{a}$, such as the spacetime fields; or equivalently, we have to reassemble the string fields--antifields in order to obtain $S_{\textsf{bv}}$ as a functional of string fields--antifields themselves.
\subsection{How to assemble string antifields} Unlike for string ghost fields, the BV formalism provides no criterion or rule for how to assemble string antifields: it only suggests how, and what kind of, spacetime ghost fields must be introduced from the gauge invariance, after which one can introduce their spacetime antifields such that the antibracket takes the Darboux form. Likewise, it only tells us whether a given master action, a functional of \textit{spacetime} fields--antifields, is proper or not. In general, the string antifield $(\Phi _{g,p})^{\ast }$ for the string field $\Phi _{g,p}$ can take the following form, \begin{align} \label{p-relax} (\Phi _{g,p} )^{\ast } = \sum_{r} (\phi _{g,p}^{r} )^{\ast } \, \big{|} \, g , p \, ; r \, \big{\rangle } \, , \hspace{5mm} \big{|} \, g,p \, ; r \, \big{\rangle } = \sum_{h,q} (a_{g,p}^{r})_{h,q} \, \big{|} \cZ _{2+h,-1+q}^{\, r} \big{\rangle } \, , \end{align} where $(a_{g,p}^{r})_{h,q}$ is some constant. As we saw in section 2, the relation $\langle \Phi _{g,p} , (\Phi _{g,p})^{\ast } \rangle =1$ gives the simplest assembly. For a generic assembly of string antifields, the ``string field representation'' of the antibracket cannot take the Darboux form: When $(a_{g,p}^{r})_{h,q} \not= 0$ for $h \not= 0$ or $q \not= 0$, we find $( F ,G )_{\textrm{min}} = \sum F \overset{\leftarrow}{\partial }_{a} E^{ab} \overset{\rightarrow }{\partial }_{b} G$, where $\partial _{a}$ denotes a string (anti-)field derivative and $E^{ab}$ is not an orthogonal antisymmetric matrix. In this paper, we consider the case $(a_{g,p}^{r})_{h,q}=0$ for $h \not= 0$, while $(a_{g,p}^{r})_{0,q}$ may be nonzero, depending on the construction. Then, all components of $E^{ab}$ have the same Grassmann parity, but the string antibracket may not be Darboux for the $p$-label.\footnote{The spacetime antibracket can always take the Darboux form even if its string field representation cannot.
} In the large Hilbert space, one can split a given state into its $\eta $- and $\xi $-exact components as \begin{subequations} \begin{align} \label{decomposition} \Phi _{-g,p} = \sum_{r} \phi _{g,p}^{r \, \eta } \, \eta \, \xi \, \big{|} \cZ _{-g,p}^{\, r} \big{\rangle } + \sum_{r} \phi _{g,p}^{r \, \xi } \, \xi \, \eta \, \big{|} \cZ _{-g,p}^{\, r} \big{\rangle } \, . \end{align} The new label $\eta $ or $\xi $ on $\phi $ indicates that it is the coefficient spacetime field multiplying an $\eta$- or $\xi $-exact world-sheet basis state, respectively. Inspired by the formal relation $\langle \Phi _{g,p} , \, (\Phi _{g,p})^{\ast } \rangle \not= 0$, we require that the string antifield $(\Phi _{-g,p})^{\ast }$ for (\ref{decomposition}) takes the following form \begin{align} (\Phi _{-g,p} )^{\ast } = \sum_{r} ( \phi _{g,p}^{r \, \xi } )^{\ast } \, \eta \, \xi \, \big{|} \, g,p \, ; r \, \big{\rangle } + \sum_{r} ( \phi _{g,p}^{r \, \eta } )^{\ast } \, \xi \, \eta \, \big{|} \, g,p \, ; r \, \big{\rangle } \, . \end{align} In other words, the $\eta $-exact components $(\Phi ^{\ast })^{\eta }$ of the string antifield $\Phi ^{\ast }= (\Phi ^{\ast })^{\eta } + (\Phi ^{\ast })^{\xi }$ correspond to the $\xi $-components $\Phi ^{\xi }$ of the string field $\Phi = \Phi ^{\eta } + \Phi ^{\xi }$ because of $\langle \Phi ^{\eta } , (\Phi ^{\ast })^{\eta } \rangle = \langle \Phi ^{\xi } , (\Phi ^{\ast })^{\xi } \rangle = 0$\,. In terms of spacetime fields, we assume \begin{align} \big{(} (\phi _{g,p} )^{\ast } \big{)}^{\eta } = \big{(} \phi _{g,p}^{\,\, \xi } \big{)}^{\ast } \, , \hspace{5mm} \big{(} (\phi _{g,p} )^{\ast } \big{)}^{\xi } = \big{(} \phi _{g,p}^{\,\, \eta } \big{)}^{\ast } \, . \end{align} \end{subequations} As we will see, this requirement simplifies our analysis and computations.
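The vanishing inner products used above follow from the BPZ-oddness of the zero modes alone; schematically, suppressing Grassmann signs, \begin{align*} \big{\langle } \, \Phi ^{\eta } \, , \, (\Phi ^{\ast })^{\eta } \, \big{\rangle } \sim \big{\langle } \, \eta \, ( \cdots ) \, , \, \eta \, ( \cdots ) \, \big{\rangle } = \pm \, \big{\langle } \, ( \cdots ) \, , \, \eta ^{2} \, ( \cdots ) \, \big{\rangle } = 0 \, , \end{align*} and likewise $\langle \Phi ^{\xi } , (\Phi ^{\ast })^{\xi } \rangle = 0$ follows with $\xi $ in place of $\eta $, using $\xi ^{2} = 0$.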
\vspace{1mm} How should we assemble the string antifields? To find out, we take the constrained BV approach, in which a solution of the constrained master equation determines how the physical string antifields should be assembled. We first consider extra string fields as in \cite{Berkovits:2012np} and introduce their string antifields as in the (naive) conventional BV approach. Then, we impose appropriate second-class constraints and consider the Dirac antibracket, which determines the physical string antifields on the constrained state space---the appropriately assembled string antifields. In other words, we translate the problem of finding the assembly of string antifields into that of finding appropriate constraints such that the Dirac antibracket is well defined and the constrained master action becomes the generator of appropriate BV transformations. As will be discussed in section 5.3, for example, one can introduce the string antifield $(\Phi _{-g,p})^{\ast }|_{\Gamma }$ for (\ref{decomposition}) via appropriate constraints $\Gamma $ as \begin{align} \label{example} (\Phi _{-g,p} )^{\ast } \big{|}_{\Gamma } = \sum_{r} \eta \, \bigg[ \big{(}\phi _{g,p}^{r \, \xi } \big{)}^{\ast } \big{|} \cZ _{1+g ,-p}^{\, r} \big{\rangle } + \big{(} \phi _{g,p}^{r \, \eta } \big{)}^{\ast } \big{|} \cZ _{1+g, -1-p}^{\, r} \big{\rangle } \bigg] \, , \end{align} which gives a master action on the constrained string field--antifield space. Then, the physical string antifields $\widetilde{\Phi } = \widetilde{\Phi } [ \Phi , \Phi ^{\ast } ; \Gamma ]$ are given by functionals of these string fields--antifields which satisfy the canonical relation $( \Phi _{a} , \widetilde{\Phi }_{b} )_{\Gamma } = \delta _{a,b}$ with respect to the Dirac antibracket defined by (\ref{Dirac antibracket}). \subsection{Extra fields and constraints} Berkovits' constrained BV approach \cite{Berkovits:2012np} is a specific case of the BV formalism based on a redundant non-minimal set \cite{Batalin:1992mk}.
We introduce as many \textit{extra} spacetime ghost fields, carrying negative spacetime ghost number, as there are antifields in the minimal set, \begin{align} \label{spacetime extra fields} A_{\textrm{ex}} = \Big{\{ } \, \phi _{-1-g,-p}^{\, r} \, \Big{|} \,\, 0 \leq g \, , \,\, 0 \leq p \leq g \,\, ; \, r \in\mathbb{N} \, \Big{\} } \, . \end{align} For these extra spacetime ghosts, we introduce their spacetime antifields $A_{\textrm{ex}}^{\ast} = \{ (\phi _{-1-g,-p} )^{\ast } |\, 0 \leq g , \, 0 \leq p \leq g \}$. These give the set of extra fields--antifields $\cA _{\textrm{ex}} = A_{\textrm{ex}} \oplus A_{\textrm{ex}}^{\ast }$\,. We consider the \textit{non-minimal set} of fields--antifields \begin{align} \label{nonminimal set} \cA = \cA _{\textrm{min}} \oplus \cA _{\textrm{ex}} = \Big{\{ } \phi _{g,p}^{r} \, , \, (\phi _{g,p}^{r} )^{\ast } \, ; \phi _{-1-g,-p}^{\, r} \, , \, (\phi _{-1-g,-p}^{\, r} )^{\ast} \, \Big{|} \, 0 \leq g \, , \,\, 0 \leq p \leq g \,\, ; r \in \mathbb{N} \, \Big{\} } \, , \end{align} and define an antibracket acting on this $\cA$ by \begin{align} \label{antibracket} \big{(} \, F \, , \, G \, \big{)} \equiv \sum_{g \in \mathbb{Z}} \sum_{p,r} \bigg[ \, \frac{\overset{\leftarrow }{\partial } F}{\partial \phi _{g,p}^{r} } \, \frac{\overset{\rightarrow }{\partial } G}{\partial (\phi _{g,p}^{r})^{\ast } } - \frac{\overset{\leftarrow }{\partial } F}{\partial (\phi _{g,p}^{r})^{\ast } } \, \frac{\overset{\rightarrow }{\partial } G}{\partial \phi _{g,p}^{r} } \, \bigg] \, . \end{align} We introduce a set of constraint equations $\Gamma $ which has the same degrees of freedom as the set of extra fields--antifields $\cA _{\textrm{ex}}$\,. These $\Gamma $ split into first- and second-class constraints.
For any functionals $F,G$ which are invariant under the first-class constraints in $\Gamma$, one can define a non-degenerate Dirac antibracket on $\cA / \Gamma $ using the second-class constraints in $\Gamma $ by \begin{align} \label{Dirac antibracket} \big{(} \, F \, , \, G \, \big{)}_{\Gamma } \equiv \big{(} \, F \, , \, G \, \big{)} - \sum_{a,b} \big{(} \, F \, , \, \Gamma _{a} \, \big{)} \, \big{[} ( \Gamma , \Gamma )^{-1} \big{]}_{ab} \, \big{(} \, \Gamma _{b} \, , \, G \, \big{)} \, . \end{align} It enables us to consider a master equation on the constrained space $\cA / \Gamma $. See \cite{Batalin:1992mk} for details. \vspace{1mm} In this paper, we just consider extra spacetime ghosts which have the same labels as the original spacetime antifields appearing in the minimal set, as in \cite{Berkovits:2012np}. Hence, our non-minimal set of fields--antifields (\ref{nonminimal set}) is twice the size of the minimal set (\ref{minimal set}). We construct constrained master actions $S_{\textsf{bv}}$ based on this redundant set (\ref{nonminimal set}) and the constrained bracket (\ref{Dirac antibracket}), instead of (\ref{minimal set}) and (\ref{minimal antibracket}). In string field theory, by using these extra spacetime fields, we can introduce \textit{extra string fields} \begin{align*} \Phi _{1+g,-p} = \sum_{r} \phi _{-1-g,-p}^{\, r} \, \big{|} \, 1+g , -p \, ; r \, \big{\rangle } \, \end{align*} and consider a set of \textit{extra} string fields--antifields. In principle, there is no restriction on their assembly as long as it gives a solution of the constrained BV master equation. \section{Constrained BV approach} Let us consider the ghost string fields $\Phi _{-g,p}$ of (\ref{ghost string field}), which are naturally determined by the gauge reducibility. For each spacetime field $\phi _{g,p}^{r}$ of $\Phi _{-g,p} = \sum _{r} \phi _{g,p}^{r} \, | \cZ _{-g,p}^{r} \rangle $\,, its spacetime antifield $(\phi _{g,p}^{r})^{\ast }$ is introduced.
Now we add the extra ghosts and their antifields, so our set of fields--antifields (\ref{nonminimal set}) is non-minimal and twice the size of (\ref{minimal set}). The pairs of spacetime fields and their antifields $\{ \phi _{g,p}^{r} , (\phi _{g,p}^{r} )^{\ast } \} _{g,p,r}$ define a non-degenerate antibracket (\ref{antibracket}) on functions of fields--antifields. \vspace{2mm} We consider a set $\{ \Phi _{1+g,-p} | 0 \leq g , \,\, 0 \leq p \leq g \}$ of \textit{extra string fields}, i.e.\ string fields consisting of extra spacetime ghosts, and assume that, like the string ghost fields (\ref{ghost string field}), they are assembled as \begin{align} \label{extra ghost string field} \Phi _{1+g,-p} = \sum_{r} \phi _{-1-g,-p}^{r} \, \big{|} \cZ _{1+g,-p}^{r} \big{\rangle } \, . \end{align} This type of extra string field has the \textit{same} Grassmann parity, or total grading, as the original string fields. As we will see, $\{ \eta \, \Phi _{1+g,-p} \} _{g,p}$ will correspond to half of the conventional string antifields. \vspace{2mm} Let $\cA _{\varphi } \equiv \{ \Phi _{-g,p} , \Phi _{1+g,-p} \} _{0 \leq p \leq g}$ be the set of \textit{all string fields}\,: the dynamical string field, the string ghost fields, and the extra string fields. We write $\varphi $ for the sum of all string fields, \begin{align} \label{all string fields} \varphi \equiv \varphi _{-} + \varphi _{\textsf{ex}} \, , \hspace{5mm} \varphi _{-} \equiv \Phi + \sum _{g > 0} \sum_{p=0}^{g} \Phi _{-g,p} \, , \hspace{3mm} \varphi _{\textsf{ex}} \equiv \sum _{g \geq 0} \sum_{p=0}^{g} \Phi _{1+g,-p} \, , \end{align} where $\varphi _{-}$ denotes the sum of the original string fields and $\varphi _{\textsf{ex}}$ denotes the sum of the extra string fields.
As proposed by Berkovits \cite{Berkovits:2012np}, we take the following constrained BV action \begin{align} \label{Berkovits BV} S_{\textsf{bv}} [\varphi ] = \int _{0}^{1} dt \, \Big{\langle } \, \varphi \, , \, \mathbf{M} \frac{1}{1 - t \, \eta \, \varphi } \, \Big{\rangle } \, , \end{align} which has the same form as the original action (\ref{original action}). Clearly, this $S_{\textsf{bv}}[\varphi ]$ is not proper on (\ref{nonminimal set}); it trivially satisfies $( S_{\textsf{bv}} , S_{\textsf{bv}} ) = 0$ because it consists of fields only. We introduce antifields into $S_{\textsf{bv}}$ by imposing appropriate constraints $\widehat{\Gamma }$, such that the result is a proper master action. Note that, just like the original action $S[\Phi ]$, the action $S_{\textsf{bv}}[\varphi ]$ has a special property. Recall that in the large Hilbert space, one can decompose the string field $\Phi _{-g,p}$ as (\ref{decomposition}), in which a spacetime field $\phi _{g,p}^{\, r \, \eta }$ is multiplied by an $\eta$-exact world-sheet basis. Then, for any pair $(g,p)$, we find the following relation \begin{align} \label{kernel} \frac{\partial }{\partial \phi _{g,p}^{\, r \, \eta } } S_{\textsf{bv}} [ \varphi ] = \Big{\langle } \eta \, \xi \, \cZ _{-g,p}^{\, r} , \, \mathbf{M} \frac{1}{1- \eta \, \varphi } \Big{\rangle } = 0 \, . \end{align} This kind of property plays a crucial role in the constrained BV approach. \vspace{1mm} In the rest of this paper, we often omit the $r$-label and use the following shorthand: \begin{align} \label{half} \big{|} \cZ _{-g,p}^{\,\, \eta } \big{\rangle } \equiv \eta \, \xi \, \big{|} \cZ _{-g,p} \big{\rangle } \, , \hspace{5mm} \big{|} \cZ _{-g,p}^{\,\, \xi } \big{\rangle } \equiv \xi \, \eta \, \big{|} \cZ _{-g,p} \big{\rangle } \, .
\end{align} Likewise, we often use $| \cZ _{-g,p}^{\, \ast \, \eta } \rangle \equiv \eta \, \xi \, | \cZ _{-g,p}^{\, \ast } \rangle $ and $| \cZ _{-g,p}^{\, \ast \, \xi } \rangle \equiv \xi \, \eta \, | \cZ _{-g,p}^{\, \ast } \rangle $, analogously to (\ref{half}). See appendix A for their properties. We write $\Phi _{-g,p} = \phi _{g,p}^{\, \eta } \, | \cZ _{-g,p}^{\,\, \eta } \rangle + \phi _{g,p}^{\, \xi } \, | \cZ _{-g,p}^{\,\, \xi } \rangle $ for the decomposition (\ref{decomposition}) of the string field $\Phi _{-g,p}$. Then, one can expand the antibracket (\ref{antibracket}) using \begin{align*} \frac{\overset{\leftarrow }{\partial }}{\partial \phi _{g,p} } \frac{\overset{\rightarrow }{\partial }}{\partial (\phi _{g,p} )^{\ast } } = \frac{\overset{\leftarrow }{\partial }}{\partial \phi _{g,p}^{\,\, \xi } } \frac{\overset{\rightarrow }{\partial }}{\partial (\phi _{g,p}^{\,\, \xi })^{\ast } } + \frac{\overset{\leftarrow }{\partial }}{\partial \phi _{g,p}^{\,\, \eta } } \frac{\overset{\rightarrow }{\partial }}{\partial (\phi _{g,p}^{\,\, \eta } )^{\ast } } \, . \end{align*} \subsection{Preliminary: Constrained BV for a partially gauge-fixed theory} We write $(\varphi )^{\ast }$ for the sum of all string antifields. Utilizing $\varphi $ of (\ref{all string fields}) and the corresponding $(\varphi )^{\ast }$, we impose the following constraint on the space of string fields \begin{subequations} \begin{align} \label{total constraint} \widehat{\Gamma } \equiv ( \varphi )^{\ast } - \eta \, \varphi \, . \end{align} This $\widehat{\Gamma }$ provides constraint equations on each string field, which introduce the spacetime antifields $(\phi _{g,p}^{r} )^{\ast }$ into the constrained master action $S_{\textsf{bv}} [\varphi ] |_{\widehat{\Gamma }}$\,.
One can decompose this constraint $\widehat{\Gamma }$ with respect to its spacetime ghost number; for $g \geq 0$, we find \begin{align} \label{total constraint b} \Gamma _{g,p} & \equiv \frac{1}{\sqrt{2} } \Big[ (\Phi _{1+g,-p} )^{\ast } - \eta \, \Phi _{-g,p} \Big] \, , \hspace{5mm} \Gamma _{-1-g,-p} \equiv \frac{1}{\sqrt{2} } \Big[ ( \Phi _{-g,p} )^{\ast } - \eta \, \Phi _{1+g,-p} \Big] \, . \end{align} \end{subequations} Note that the $g$-label of $\Gamma $ denotes its spacetime ghost number, and these give equations for spacetime fields--antifields on a fixed world-sheet basis. The first class constraints are given by the $\xi$-exact components $(\Phi ^{\ast })^{\xi }$ of the string antifields $\Phi ^{\ast}= (\Phi ^{\ast })^{\eta } + (\Phi ^{\ast })^{\xi }$. The second class constraints relate the $\xi$-exact components $\Phi ^{\xi }$ of the string fields $\Phi = \Phi ^{\eta } +\Phi ^{\xi }$ to the $\eta $-exact components $(\Phi ^{\ast } )^{\eta } = ( \Phi ^{\xi } )^{\ast }$ of the string antifields. Since our master action (\ref{simplest bv}) is invariant under these first class $\Gamma $, we focus on the second class $\Gamma $. The independent second class constraints give the following nonzero antibracket \begin{align} \label{simplest} \big{(} \, \Gamma _{g,p}^{(1)} \, , \, \Gamma _{g',p'}^{(2)} \, \big{)} & = \delta _{g+g',-1} \, \delta _{p+p',0} \, (-)^{g+1} \Big( \eta \, \big{|} \cZ _{-g,p}^{\,\, \xi} \big{\rangle } \Big)^{(1)} \big{|} \cZ _{-g,p}^{\, \ast \, \eta } \big{\rangle } ^{(2)} \, . \end{align} We used the relation (\ref{half basis}) and the short notations $| \cZ _{-g,p}^{\, \ast \, \eta } \rangle \equiv \eta \, \xi \, | \cZ _{-g,p}^{\, \ast } \rangle $ and $| \cZ _{-g,p}^{\, \ast \, \xi } \rangle \equiv \xi \, \eta \, | \cZ _{-g,p}^{\, \ast } \rangle $ introduced in (\ref{half}).
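To see the mechanism in a stripped-down setting (a toy model of ours, not part of the construction; $\eta $ and all grading factors are suppressed), take a single pair $\{ \phi , \phi ^{\ast } \}$ plus one extra ghost $c$ with antifield $c^{\ast }$, and impose the analogue of (\ref{total constraint b}):
\begin{align*}
\Gamma _{1} = c^{\ast } - \phi = 0 \, , \hspace{5mm} \Gamma _{2} = \phi ^{\ast } - c = 0 \hspace{5mm} \Longrightarrow \hspace{5mm} S[\phi , c] \, \big{|}_{\Gamma } = S[\phi , \phi ^{\ast }] \, .
\end{align*}
A fields-only functional of $\phi $ and the extra ghost $c$ restricts, on the constraint surface, to a functional of the field and its antifield; this is precisely how the constraints introduce the spacetime antifields $(\phi _{g,p}^{r})^{\ast }$ into $S_{\textsf{bv}} [\varphi ] |_{\widehat{\Gamma }}$\,.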
We quickly find that the following matrix \begin{align} \label{inverse of simplest} \big{(} \, \Gamma _{g,p}^{(1)} \, , \, \Gamma _{g',p'}^{(2)} \, \big{)}^{-1} & = - \delta _{g+g',-1} \, \delta _{p+p',0} \, \big{|} \cZ _{-g',p'}^{\,\, \xi } \big{\rangle } ^{(1)} \Big{(} \xi \, \big{|} \cZ _{-g',p'}^{\, \ast \, \eta } \big{\rangle } \Big{)}^{(2)} \, \end{align} gives the inverse of (\ref{simplest}) in the sense of \begin{align} \label{def of inv} \sum_{h,q} \sum_{h',q'} \big{(} \, \Gamma _{g,p}^{(1)} \, , \, \Gamma _{h,q}^{(1')} \, \big{)} \cdot \big{(} \, \Gamma _{h,q}^{(1')} \, , \, \Gamma _{h',q'}^{(2')} \, \big{)}^{-1} \cdot \big{(} \, \Gamma _{h',q'}^{(2')} \, , \, \Gamma _{g',p'}^{(2)} \, \big{)} = \big{(} \, \Gamma _{g,p}^{(1)} \, , \, \Gamma _{g',p'}^{(2)} \, \big{)} \, , \end{align} where the dot denotes the inner product of two states: $| A \rangle ^{(1)} \cdot | B \rangle ^{(1)} = \langle A,B \rangle = \langle A | ^{(1)} \cdot \langle B |^{(1)}$ and $| A \rangle ^{(1)} \cdot | B \rangle ^{(2)} = \langle A | ^{(1)} \cdot \langle B |^{(2)} = 0$\,. In particular, in the inner product of (\ref{def of inv}), the matrix (\ref{inverse of simplest}) works as a projector onto fixed world-sheet ghost and picture numbers, and switches the label of the state space. By taking inner products on both sides of (\ref{inverse of simplest}), we find \begin{subequations} \begin{align} \label{rel} (-)^{g} \, \big{|} \cZ _{-g,p}^{\, \ast \, \eta } \big{\rangle } ^{(1)} \cdot \Big( \, \Gamma _{-1-g,-p}^{(1)} \, , \, \Gamma _{g',p'}^{(2)} \, \Big) ^{-1} \cdot \big{|} \cZ _{1+g',-p'}^{\, \ast \, \eta } \big{\rangle } ^{(2)} = - \delta _{g,g'} \, \delta _{p,p'} \, . \end{align} Note that the half bases satisfy $\langle \cZ _{g,p}^{\, \ast \, \xi } , \cZ _{h,q}^{\, \eta } \rangle = (-)^{g} \langle \cZ _{g,p}^{\, \ast \, \eta } , \cZ _{h,q}^{\, \xi } \rangle = - \delta _{g,h} \, \delta _{p,q}$\,; see appendix A.
The antibrackets of these second class $\Gamma $ with $S_{\textsf{bv}} [\varphi ]$ are \begin{align} \label{rel b} \big{(} \, \Gamma _{g,p} \, , \, S_{\textsf{bv}} [\varphi ] \, \big{)} & = - \Gamma _{g,p} \frac{\overset{\leftarrow }{\partial } }{\partial (\phi ^{\xi })^{\ast } } \, \frac{\overset{\rightarrow }{\partial } }{\partial \phi ^{\xi }} S_{\textsf{bv}} [\varphi ] = - \big{|} \cZ _{1+g,-p}^{\, \ast \, \eta } \big{\rangle } \, \Big{\langle } \frac{\partial \varphi }{\partial \phi _{-1-g,-p}^{\,\, \xi } } , \, \mathbf{M} \frac{1}{1-\eta \, \varphi } \Big{\rangle } \nonumber\\ & = - \big{|} \cZ _{1+g,-p}^{\, \ast \, \eta } \big{\rangle } \, \big{\langle } \cZ _{1+g,-p}^{\,\, \xi } \big{|} \, \eta \, \xi \, \big{|} \, \mathbf{M} \frac{1}{1-\eta \, \varphi } \big{\rangle } \, , \\ \big{(} \, S_{\textsf{bv}} [\varphi ] \, , \Gamma _{-1-g,-p} \, \big{)} & = S_{\textsf{bv}} [\varphi ] \frac{\overset{\leftarrow }{\partial } }{\partial \phi ^{\xi } } \, \frac{\overset{\rightarrow }{\partial } }{\partial (\phi ^{\xi })^{\ast }} \Gamma _{-1-g,-p} = (-)^{g} \Big{\langle } \mathbf{M} \frac{1}{1-\eta \, \varphi } , \, \frac{\partial \varphi }{\partial \phi _{g,p}^{\,\, \xi } } \Big{\rangle } \, \big{|} \cZ _{-g,p}^{\, \ast \, \eta } \big{\rangle } \nonumber\\ & = (-)^{g} \big{\langle } \mathbf{M} \frac{1}{1-\eta \, \varphi } \big{|} \cZ _{-g,p}^{\,\, \xi } \big{\rangle } \, \big{|} \cZ _{-g,p}^{\, \ast \, \eta } \big{\rangle } \, . \end{align} \end{subequations} Note that $- \langle \cZ _{1+g,-p}^{\,\, \xi } | \, \eta = (-)^{g} \langle \, \eta \, \cZ _{1+g,-p}^{\,\, \xi } | = (-)^{gp} \langle \cZ _{2+g,-p-1}^{\,\, \eta } | = \langle \cZ _{-g,p}^{\, \ast \, \eta } |$ holds.
Using these relations (\ref{rel}-c), the definition of dual bases (\ref{dual BPZ basis}), and defining properties (\ref{half basis}), we obtain \begin{align} \label{constrained master eq} \big{(} \, S_{\textsf{bv}} \, , \, S_{\textsf{bv}} \, \big{)}_{\Gamma } & \equiv \big{(} \, S_{\textsf{bv}} \, , \, S_{\textsf{bv}} \, \big{)} - \sum_{g,p} \sum_{g' , p' } \big{(} \, S_{\textsf{bv}} \, , \, \Gamma _{g,p} \, \big{)} \cdot \big{(} \, \Gamma _{g,p} \, , \, \Gamma _{g',p'} \, \big{)}^{-1} \cdot \big{(} \, \Gamma _{g',p'} \, , \, S_{\textsf{bv}} \, \big{)} \nonumber\\ & = \sum_{g,p} \sum_{g',p'} \big{\langle } \mathbf{M} \frac{1}{1-\eta \, \varphi } \big{|} \cZ _{-g,p}^{\,\, \xi } \big{\rangle } \, \delta _{g,g'} \, \delta _{p,p'} \Big{[} - \big{\langle } \cZ _{1+g',-p'}^{\,\, \xi } \big{|} \, \eta \, \Big{]} \, \xi \, \big{|} \mathbf{M} \frac{1}{1-\eta \, \varphi } \big{\rangle } \nonumber\\ & = - \Big{\langle } \mathbf{M} \frac{1}{1-\eta \, \varphi } , \, \xi \, \mathbf{M} \frac{1}{1-\eta \, \varphi } \Big{\rangle } = 0 \, . \end{align} The action $S_{\textsf{bv}}$ and constraint $\widehat{\Gamma }$ give a solution of the constrained BV master equation. 
Here, we used the mutual commutativity $ [ \hspace{-0.6mm} [ \Eta , \, \mathbf{M} ] \hspace{-0.6mm} ] = 0$ and the cyclic $A_{\infty }$ relation of $\mathbf{M}$, \begin{align*} \Big{\langle } \mathbf{M} \frac{1}{1-\eta \, \varphi } , \, \xi \, \mathbf{M} \frac{1}{1-\eta \, \varphi } \Big{\rangle } & = \sum_{k=1}^{\infty } \sum_{l=1}^{\infty } \Big[ \frac{k}{k+l} + \frac{l}{k+l} \Big] \Big{\langle } \mathbf{M}_{k} \frac{1}{1-\eta \, \varphi } , \, \xi \, \mathbf{M}_{l} \frac{1}{1-\eta \, \varphi } \Big{\rangle } \nonumber\\ & = \sum_{k=1}^{\infty } \sum_{l=1}^{\infty } \frac{2}{k+l} \Big{\langle } \, \eta \, \varphi , \, \mathbf{M}_{k} \frac{1}{1-\eta \, \varphi } \otimes \xi \Big[ \pi _{1} \, \mathbf{M}_{l} \frac{1}{1-\eta \, \varphi } \Big] \otimes \frac{1}{1-\eta \, \varphi } \Big{\rangle } \, \nonumber\\ & = \sum_{n=1}^{\infty } \frac{1}{n+1} \sum_{m=0}^{n-1} \Big{\langle } \, \varphi , \, \big[ \hspace{-1.1mm} \big[ \, \mathbf{M}_{m+1} , \, \mathbf{M}_{n-m} \, \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \varphi } \Big{\rangle } \, . \end{align*} Note however that, as can be seen in \cite{Berkovits:2012np}, the constrained BV approach based on (\ref{extra ghost string field}), (\ref{Berkovits BV}), and (\ref{total constraint}) works only for a \textit{partially gauge-fixed} theory. Although it indeed gives a solution of the constrained master equation, as in (\ref{constrained master eq}), we find that $S_{\textsf{bv}} [ \varphi ] |_{\Gamma }$ has an undesirable property as a proper master action for the \textit{large} theory: $\Phi _{-g,p=g}$ for $g>0$ behaves as a nontrivial auxiliary ghost field. We write $(\varphi _{-} )^{\ast }$ for the sum of the (original) string antifields $\{ (\Phi _{-g,p} )^{\ast } | g \geq 0 , \, 0 \leq p \leq g \}$ corresponding to the (original) string fields $\{ \Phi _{-g,p} \, | \, g\geq 0, \, 0 \leq p \leq g \} $.
On the constraint surface defined by (\ref{total constraint}), the above $S_{\textsf{bv}} [\varphi ]$ takes the following form \begin{align} \label{simplest bv} S_{\textsf{bv} } [\varphi ] |_{\Gamma } & = \int _{0}^{1} dt \, \big{\langle } \xi \, ( \eta \, \varphi _{-} + (\varphi _{-})^{\ast } ), \, \mathbf{M} \frac{1}{1 - t \, ( \eta \, \varphi _{-} - (\varphi _{-} )^{\ast } ) } \big{\rangle } \nonumber\\ & = \frac{1}{2} \big{\langle } \Phi , \, Q \, \eta \, \Phi \big{\rangle } + \sum_{g\geq 0} \sum_{p=0}^{g} \big{\langle } (\Phi _{-g,p})^{\ast } , \, Q \, \Phi _{-1-g,p} \big{\rangle } \nonumber\\ & \hspace{20mm} + \sum_{g\geq 0} \sum_{p=0}^{g} \big{\langle } (\Phi _{-g,p} )^{\ast } , \, \sum_{n>1} \mathbf{M} _{n} \frac{1}{1- \eta \, \varphi _{-} - ( \varphi _{-} )^{\ast } } \big{\rangle } \, . \end{align} The ghost string field $\Phi _{-g,p=g}$ for $g>0$ has no kinetic term. This line of ghosts is absent from the beginning in a partially gauge-fixed theory $S[\Phi ^{\xi } ]$, in which case (\ref{simplest bv}) gives a correct proper master action. By contrast, the large theory $S[\Phi ]$ requires this line in order to describe the gauge invariance generated by shifts of $\Phi ^{\eta }$ in $S[\Phi = \Phi ^{\xi } + \Phi ^{\eta }]$, which exists precisely because $S[\Phi ] = S[\Phi ^{\xi } ]$. \vspace{1mm} We would like to emphasise that this kind of problem (or ambiguity) occurs in every superstring field theory based on the large Hilbert space, even for the conventional BV approach, when we focus on the \textit{spacetime} fields or fine parts $\varphi _{a}$ of string fields $\varphi = \varphi _{1} + \cdots + \varphi _{n}$. Let us consider the kinetic term of (\ref{S}). Although string fields live in the large Hilbert space, the kinetic term $K[\Phi ]$ satisfies the property (\ref{kernel}), namely, $K[\Phi ] = K[\Phi ^{\xi } ]$ holds for $\Phi = \Phi ^{\xi } + \Phi ^{\eta }$.
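To make this hidden invariance explicit in the simplest case (a check of ours; overall BPZ sign factors suppressed), shift the $\eta$-exact part of the free field, $\delta \Phi = \eta \, \Omega $, in the kinetic term $K[\Phi ] = \frac{1}{2} \langle \Phi , Q \, \eta \, \Phi \rangle $:
\begin{align*}
\delta K = \big{\langle } \, \eta \, \Omega \, , \, Q \, \eta \, \Phi \, \big{\rangle } \propto \big{\langle } \, \Omega \, , \, \eta \, Q \, \eta \, \Phi \, \big{\rangle } = - \big{\langle } \, \Omega \, , \, Q \, \eta ^{2} \, \Phi \, \big{\rangle } = 0 \, ,
\end{align*}
using $\eta ^{2} = 0$ and the anticommutativity of $Q$ and $\eta $. The line of ghosts $\Phi _{-g,p=g}$ is required precisely to account for this shift symmetry of $\Phi ^{\eta }$ in a proper master action for the large theory.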
This implies that the $(\phi _{0,0}^{\eta })^{\ast }$-dependence becomes irrelevant in the master equation \begin{align*} \bigg[ K_{\textsf{bv}} \frac{\overset{\leftarrow }{\partial }}{\partial (\phi _{0,0}^{\eta })^{\ast }} \bigg] \, \frac{\overset{\rightarrow }{\partial }}{\partial \phi _{0,0}^{\eta }} K_{\textsf{bv}} = 0 \, . \end{align*} The master action $K_{\textsf{bv}}$ can thus be any functional of the spacetime antifields $(\phi _{0,0}^{\eta })^{\ast }$; for example, one could choose $\langle (\Phi ^{\eta })^{\ast } , \eta \, \Phi _{-1,1} \rangle = 0$\,. Then, the kinetic term of the master action (\ref{free}), which was proposed in \cite{Kroyter:2012ni} on the basis of the conventional BV approach, reduces to \begin{align} \label{partially gauge-fixed free} K_{\textsf{bv}} = \frac{1}{2} \big{\langle } \Phi ^{\xi } , \, Q \, \eta \, \Phi ^{\xi } \big{\rangle } + \big{\langle } \Phi ^{\ast } , \, Q \, \Phi _{-1,0} \big{\rangle } + \sum_{g \geq 1} \sum_{p=0}^{g-1} \big{\langle } ( \Phi _{-g,p} )^{\ast } , \, Q \, \Phi _{-1-g,p} + \eta \, \Phi _{-1-g,p+1} \big{\rangle } \, , \end{align} where the string antifield is defined by (\ref{naive string antifield}) and $\langle (\Phi ^{\xi })^{\ast } , \eta \, \Phi _{-1,1} \rangle = 0$\,. This is nothing but the kinetic term of a proper BV master action for the partially gauge-fixed theory $K = K[\Phi ^{\xi }]$, but it is not appropriate for the large theory $K=K[\Phi ]$. Hence, the above pair $(S_{\textsf{bv}} , \varphi , \widehat{\Gamma } )$ gives a correct proper BV master action only for a \textit{partially gauge-fixed} theory in the large Hilbert space. \vspace{1mm} We would like to construct a constrained BV master action for the \textit{large} theory, which we explain in the rest of this section.
We investigate appropriate pairs of $(S_{\textsf{bv}} , \varphi , \widehat{\Gamma } )$ which give kinetic terms for these $\Phi _{-g,p=g}$ on the basis of three different approaches. \subsection{Constrained BV master action based on improved constraints} We show that the kinetic terms of $S_{\textsf{bv}} [\varphi ]|_{\Gamma }$ in (\ref{simplest bv}) can be remedied by improving constraint equations, keeping the form of $S_{\textsf{bv}} [\varphi ]$ and the assembly (\ref{extra ghost string field}) of extra string fields. The key property is (\ref{kernel}). Note that for $0\leq p \leq g$, the above constraints can be written as \begin{subequations} \begin{align} \label{improved constraint a} \Gamma _{g,p} & \equiv \frac{1}{\sqrt{2}} \Big[ (\Phi _{1+g,-p}^{\xi } )^{\ast } - \eta \, \Phi _{-g,p} \Big] \, , \hspace{5mm} \Gamma _{-1-g,-p} \equiv \frac{1}{\sqrt{2}} \Big[ (\Phi _{-g,p}^{\xi } )^{\ast } - \eta \, \Phi _{1+g,-p} \Big] \, , \end{align} in which $(\Phi ^{\eta })^{\ast }$ does not appear. In addition to these, for $0 \leq p \leq g$, we impose the following (nonlinear) constraint equations\footnote{These constraints are weaker than the following type of linear constraint, \begin{align*} \widetilde{\gamma }_{-1-g,-p} & \equiv \big[ \delta (\phi _{g,p}^{\, \eta } )^{\ast } - \delta \phi _{-1-g,-1-p} \big] \, \mathbf{M} _{\eta } \, \big{|} \cZ _{1+g,-1-p} \big{\rangle } \, , \hspace{3mm} \mathbf{M} _{\eta } \, | \cZ _{g,p} \rangle \equiv \big[ \pi _{1} \, \mathbf{M} \frac{1}{1-\eta \, \varphi } \otimes | \cZ _{g,p} \rangle \otimes \frac{1}{1-\eta \, \varphi } \big]_{g,p} \, . 
\end{align*} Roughly, the constraints (\ref{improved constraint}) are written as $\delta \gamma _{-1-g,-p} = \big[ \delta (\phi _{g,p}^{\,\, \eta } )^{\ast } - \sum _{h,q} \delta \phi _{h,q}^{\,\, \eta } A_{g-h,p-q}^{(2+g,-1-p)} \big] | \cZ _{2+g,-1-p}^{\,\, \xi } \rangle $, where $A_{s}^{(g,p)}$ denotes that of $\mathbf{M} _{\eta } A = \sum_{s,g,p} A_{s}^{(g,p)} | \cZ _{g,p} \rangle $ and has spacetime ghost number $s$. } \begin{align} \label{improved constraint} \gamma _{g,p} \equiv (\Phi _{1+g,-p}^{\,\, \eta } )^{\ast } - \bigg[ \xi \, \mathbf{M} \frac{1}{1- \eta \, \varphi } \bigg] _{1-g,p-1} \, , \hspace{5mm} \gamma _{-1-g,-p} \equiv (\Phi _{-g,p}^{\,\, \eta } )^{\ast } - \bigg[ \xi \, \mathbf{M} \frac{1}{1- \eta \, \varphi } \bigg{]}_{2+g,-1-p} \, , \end{align} \end{subequations} where $[A]_{g,p}$ denotes the component of $A$ with world-sheet ghost number $g$ and picture number $p$\,. For example, $[\varphi ]_{-g,p} = \Phi_{-g,p}$ for (\ref{all string fields}). As we will see, the above constraints $\gamma $ provide the kinetic terms $\langle (\Phi _{-g,p}^{\,\, \eta })^{\ast } , \, \eta \, \Phi _{-g,p}^{\,\, \xi } \rangle $ in $S_{\textsf{bv}}[\varphi ] |_{\Gamma , \gamma }$. Since these constraints are second class, we have to consider the Dirac antibracket based on $\Gamma $ and $\gamma $\,: \begin{align*} \big{(} \, S_{\textsf{bv}} , \, S_{\textsf{bv}} \, \big{)}_{\Gamma , \gamma } & \equiv \big{(} \, S_{\textsf{bv}} , \, S_{\textsf{bv}} \, \big{)} - \big{(} \, S_{\textsf{bv}} , \, \Gamma \, \big{)} \! \cdot \! \big{(} \, \Gamma \, , \, \Gamma ' \, \big{)}^{-1} \! \! \cdot \big{(} \, \Gamma ' \, , \, S_{\textsf{bv}} \big{)} - \big{(} \, S_{\textsf{bv}} , \, \gamma \, \big{)} \! \cdot \! \big{(} \, \gamma \, , \, \Gamma \, \big{)}^{-1} \! \! \cdot \big{(} \, \Gamma \, , \, S_{\textsf{bv}} \big{)} \nonumber\\ & \hspace{10mm} - \big{(} \, S_{\textsf{bv}} , \, \Gamma \, \big{)} \! \cdot \! \big{(} \, \Gamma \, , \, \gamma \, \big{)}^{-1} \! \!
\cdot \big{(} \, \gamma \, , \, S_{\textsf{bv}} \big{)} - \big{(} \, S_{\textsf{bv}} \, , \, \gamma \, \big{)} \! \cdot \! \big{(} \, \gamma , \, \gamma ' \, \big{)}^{-1} \! \! \cdot \big{(} \, \gamma ' \, , \, S_{\textsf{bv}} \big{)} \, . \end{align*} The inverse matrices labeled by $\gamma $ take complicated forms because of $\gamma $'s nonlinearity. However, by construction of $S_{\textsf{bv}} [\varphi ]$, we do not have to know the explicit form of $(\gamma , \Gamma )^{-1}$, $(\Gamma , \gamma )^{-1}$, or $(\gamma , \gamma ')^{-1}$ to solve the master equation. Because of $\frac{\partial }{\partial \phi ^{\ast }} S_{\textsf{bv}} [\varphi ] = \frac{\partial }{\partial \phi ^{\eta }} S_{\textsf{bv}} [\varphi ] =0$ and $\frac{\partial }{\partial (\phi ^{\xi })^{\ast }} \gamma = 0$, we find \begin{align} \label{invariant constraint} \big{(} \, \gamma _{g,p} \, , \, S_{\textsf{bv}} [\varphi ] \, \big{)} & = - \gamma _{g,p} \frac{\overset{\leftarrow }{\partial }}{\partial (\phi ^{\eta })^{\ast }} \frac{\overset{\rightarrow }{\partial }}{\partial \phi ^{\eta } } S_{\textsf{bv}} [\varphi ] = 0 \, , \end{align} and thus the constrained master action (\ref{Berkovits BV}) is invariant\footnote{As another option, one could introduce $\gamma $ as the first class constraints which preserve $S_{\textsf{bv}} [\varphi ]$.} under new constraints $\gamma$. Hence, these new constraints $\gamma $ give no contribution to $(S_{\textsf{bv}} , S_{\textsf{bv}} )_{\Gamma , \gamma }$ and we find \begin{align*} \big{(} \, S_{\textsf{bv}} \, , \, S_{\textsf{bv}} \, \big{)}_{\Gamma , \gamma } = - \big{(} \, S_{\textsf{bv}} \, , \, \Gamma \, \big{)} \big{(} \, \Gamma \, , \, \Gamma ' \, \big{)}^{-1} \big{(} \, \Gamma ' \, , \, S_{\textsf{bv}} \, \big{)} = \big{(} \, S_{\textsf{bv}} \, , \, S_{\textsf{bv}} \, \big{)}_{\Gamma } = 0 \, . 
\end{align*} On the constrained subspace, we can rewrite the master action as follows, \begin{align*} S_{\textsf{bv}} [\varphi ] |_{\Gamma , \gamma } & = \int_{0}^{1} dt \, \Big{\langle } \xi \, ( \eta \, \varphi _{-}^{\,\, \xi } + \eta \, \varphi _{\textsf{ex}}^{\,\, \xi } ) , \, \mathbf{M} \frac{1}{1- t \, \eta \, \varphi } \Big{\rangle } \nonumber\\ & = \sum_{g\geq 0} \sum_{p=0}^{g} \bigg[ \Big{\langle } \eta \, \Phi _{-g,p}^{\,\, \xi } , \, \xi \, \mathbf{M} \frac{1}{1- t \, \eta \, \varphi } \Big{\rangle } + \Big{\langle } \eta \, \Phi _{1+g,-p}^{\,\, \xi } , \, \bxi \, \mathbf{M} \frac{1}{1- t \, \eta \, \varphi _{-} - t \, \eta \, \varphi _{\textsf{ex}} } \Big{\rangle } \bigg] \nonumber\\ & = \sum_{g \geq 0} \sum_{p=0}^{g} \Big{\langle } (\Phi _{-g,p}^{\,\, \eta } )^{\ast } , \, \Eta \, \Phi _{-g,p}^{\,\, \xi } \Big{\rangle } + \sum_{g \geq 0} \sum_{p=0}^{g} \Big{\langle } ( \Phi _{-g,p}^{\,\, \xi } )^{\ast } , \, \bxi \, \mathbf{M} \frac{1}{1- t \, \eta \, \varphi _{-} - t \, ( \varphi _{-}^{\,\, \xi } )^{\ast } } \Big{\rangle } \, . \end{align*} In contrast to (\ref{simplest bv}), it includes string antifields for $\Phi _{-g,p}^{\,\, \eta }$ and has the kinetic terms for all original string fields and their string antifields; it is proper for the \textit{large} theory. \vspace{1mm} In the constrained BV approach, the BV transformation of $\varphi $ is given by $\delta _{\textsf{bv}} \varphi = ( \varphi , S_{\textsf{bv}} )_{\Gamma ,\gamma }$\,. By construction of the constraints, $\delta _{\textsf{bv}} \varphi $ has an orthogonally decomposed form. Note that while $( \varphi ^{\xi } , \Gamma )$ is the antibracket of $\xi$-exact $\varphi ^{\xi }$ and $\eta $-exact $\Gamma $, $(\varphi ^{\eta } , \gamma )$ is the antibracket of $\eta $-exact $\varphi ^{\eta }$ and $\xi$-exact $\gamma $\,. We set $\Omega \equiv \xi \, ( \varphi ^{\eta } , \, \gamma \, ) \cdot ( \gamma , \, \Gamma )^{-1} \cdot ( \Gamma , \, S_{\textsf{bv}} )$ using the unknown inverse. 
We find that the BV transformation $\delta _{\textsf{bv}}\varphi = \delta _{\textsf{bv}} \varphi ^{\xi } + \delta _{\textsf{bv}} \varphi ^{\eta }$ is given by \begin{align*} \delta _{\textsf{bv}} \varphi ^{\xi } = \big{(} \, \varphi ^{\xi } \, , \, S_{\textsf{bv}} \, \big{)}_{\Gamma , \gamma } = \pi _{1} \, \bxi \, \mathbf{M} \frac{1}{1-\eta \, \varphi } \, , \hspace{5mm} \delta _{\textsf{bv}} \varphi ^{\eta } = \big{(} \, \varphi ^{\eta } \, , \, S_{\textsf{bv}} \, \big{)}_{\Gamma , \gamma } = \Eta \, \Omega \, . \end{align*} \subsection{Alternative: Modifying extra string fields} There is another option to obtain a proper BV master action for the large theory. Let us consider the extra string fields of (\ref{extra ghost string field}). We decompose them as follows \begin{subequations} \begin{align} \label{modified extra} \Phi _{1+g,-p} \equiv \phi _{-1-g,-p}^{\,\, \eta } \, \big{|} \cZ _{1+g,-p}^{\,\, \eta } \big{\rangle } + \phi _{-1-g,-p}^{\,\, \xi } \, \big{|} \cZ _{1+g,-p}^{\,\, \xi } \big{\rangle } \,\, . \end{align} For these (\ref{modified extra}), we introduce additional extra string fields $\bar{\Phi }$ assembled as \begin{align} \bar{\Phi }_{1+g,-1-p} \equiv \phi _{-1-g,-p}^{\,\, \xi } \, \big{|} \cZ _{1+g,-1-p}^{\,\, \eta } \big{\rangle } + \phi _{-1-g,-p}^{\,\, \eta } \, \big{|} \cZ _{1+g,-1-p}^{\,\, \xi } \big{\rangle } \, . \end{align} \end{subequations} While the set of extra spacetime ghost fields is not changed, the assemblies of extra ghost string fields are modified by considering additional world-sheet bases. For simplicity, we consider the sum of these extra ghost string fields as follows \begin{align*} \varphi '_{\textsf{ex}} \equiv \sum_{g = 0}^{\infty } \sum_{p=0}^{g+1} \Phi _{1+g,-p}' \, , \hspace{5mm} \Phi '_{1+g,-p} \equiv ( 1 - \delta _{p,g+1} ) \, \Phi _{1+g,-p} + (1 - \delta _{p,0} ) \, \bar{\Phi }_{1+g,-p} \, . 
\end{align*} Note that the $p$-label of $\Phi ^{\prime }_{1+g,-p}$ runs from $0$ to $g+1$, unlike that of $\Phi _{1+g,-p}$\,. Now, the sum of all string fields (\ref{all string fields}) is replaced by $\varphi ' \equiv \varphi _{-} + \varphi '_{\textsf{ex}}$, and the constrained BV master action $S_{\textsf{bv}} = S_{\textsf{bv}} [\varphi ']$ is a functional of $\varphi '$\,. Namely, $S_{\textsf{bv}}$ is changed by replacing the extra ghost string fields, i.e., $\varphi \to \varphi '$\,. Since this $S_{\textsf{bv}}[\varphi ']$ includes kinetic terms for all string fields, \begin{align*} S_{\textsf{bv}} [\varphi '] = \frac{1}{2} \big{\langle } \, \Phi , \, Q \, \eta \, \Phi \, \big{\rangle } + \sum_{g=0}^{\infty } \sum_{p=0}^{g+1} \big{\langle } \, \Phi _{1+g,-p}' , \, Q \, \eta \, \Phi _{-1-g,p} \, \big{\rangle } + \cdots \, , \end{align*} the resultant $S_{\textsf{bv}} [ \varphi ' ] |_{\Gamma '}$ has corresponding terms under appropriate constraints $\Gamma '$\,. Instead of (\ref{total constraint}), for instance, we can impose the same type of \textit{second} class constraint \begin{align*} \widehat{\Gamma }' \equiv (\varphi ')^{\ast } - \eta \, \varphi ' \, , \end{align*} where $(\varphi ' )^{\ast }$ denotes the sum of (the $\eta $-exact parts of) all string antifields \begin{align*} ( \varphi ' )^{\ast } = \bigg{[} \Phi ^{\ast } + \sum_{g > 0} \sum_{p=0}^{g} (\Phi _{-g,p})^{\ast } + \sum_{g \geq 0} \sum_{p=0}^{g+1} ( \Phi _{1+g,-p}^{\prime } )^{\ast } \bigg{]}^{\eta } \, . \end{align*} The above $\widehat{\Gamma }^{\prime }$ provides the same type of second class constraint as (\ref{total constraint b}) for $0 \leq p \leq g$.
By contrast, for $p = g+1$, we set $\Phi _{-g,g+1} = (\Phi _{-g,g+1})^{\ast } =0$ and the constraints reduce to simpler ones: \begin{align*} \Gamma _{g,p}^{\prime } = \frac{1}{\sqrt{2}} \Big[ (\Phi _{1+g,-p}^{\prime })^{\ast } - \underbrace{\eta \, \Phi _{-g,p} }_{\underset{(g<p)}{\longrightarrow } 0} \Big] \, , \hspace{5mm} \Gamma _{-g-1,-p}^{\prime } = \frac{1}{\sqrt{2}} \Big[ \underbrace{(\Phi _{-g,p})^{\ast }}_{\underset{(g<p)}{\longrightarrow } 0} - \eta \, \Phi _{1+g,-p}^{\prime } \Big] \, . \end{align*} These constraints assign (physical) antifields to world-sheet bases labeled by different picture numbers, and the constrained string antifields are given by (\ref{example}). In this case, the string field representation of the antibracket does not take the Darboux form. Note, however, that the spacetime antibracket itself can always take the Darboux form and thus, as we will see, the constrained master equation holds in the same manner as (\ref{constrained master eq}). \vspace{1mm} Note that this time we impose only the second class constraints on the extra string fields, not the first class ones. While the $\eta$-acted extra string field is given by \begin{subequations} \begin{align} \label{ex' a} \eta \, \Phi _{1+g,-p}^{\prime } = \Big{[} \phi _{-1-g,-p}^{\, \xi } + \phi _{-1-g,1-p}^{\, \eta } \Big{]} (-)^{1+g} \eta \, \big{|} \cZ _{1+g,-p}^{\, \xi } \big{\rangle } \, , \end{align} the string antifield for the extra string field $\Phi _{1+g,-p}^{\prime }$ is given by \begin{align} \label{ex' b} \big{(} \Phi _{1+g,-p}^{\prime } \big{)}^{\ast } = \Big[ (\phi _{-1-g,-p}^{\, \xi })^{\ast } + (\phi _{-1-g,1-p}^{\, \eta })^{\ast } \Big] \big{|} \cZ _{1+g,-p}^{\, \ast \, \eta } \big{\rangle } \, . \end{align} \end{subequations} The second class constraints are imposed on these $\eta $-exact states (\ref{ex' a}-b). Let us check that the pair $(S_{\textsf{bv}} [ \varphi ' ] , \, \Gamma ' )$ solves the constrained BV master equation.
We consider the antibracket of constraints, whose $\phi ^{\xi }$-part is the same as (\ref{simplest}). We find \begin{align} \label{2'=2+1} \big{(} \, \Gamma _{g,p}^{\prime \, (1)} \, , \, \Gamma _{g',p'}^{\prime \, (2)} \, \big{)} = \big{(} \, \Gamma _{g,p}^{(1)} \, , \, \Gamma _{g',p'}^{(2)} \, \big{)} + \Gamma _{g,p}^{\prime \, (1)} \bigg[ \frac{\overset{\leftarrow }{\partial }}{\partial \phi ^{\eta } } \frac{\overset{\rightarrow }{\partial }}{\partial (\phi ^{\eta })^{\ast }} - \frac{\overset{\leftarrow }{\partial }}{\partial (\phi ^{\eta })^{\ast } } \frac{\overset{\rightarrow }{\partial }}{\partial \phi ^{\eta } } \bigg] \Gamma _{g',p'}^{\prime \, (2)} \, . \end{align} In particular, the second term of (\ref{2'=2+1}) vanishes for $p=0$. By construction of extra string fields--antifields (\ref{ex' a}-b), one can quickly find that (\ref{2'=2+1}) is given by \begin{align*} \big{(} \, \Gamma _{g,p}^{\prime \, (1)} \, , \, \Gamma _{g',p'}^{\prime \, (2)} \, \big{)} = \delta _{g+g',-1} \, \delta _{p+p',0} (-)^{g+1} \sum_{k=0,1} \Big{(} \eta \, \big{|} \cZ _{-g,p+k}^{\, \xi } \big{\rangle } \Big{)}^{(1)} \big{|} \cZ _{-g,p+k}^{\, \ast \, \eta } \big{\rangle } ^{(2)} \, \end{align*} for $0 < p \leq g$. The $k=0$ parts arise from the first part of (\ref{2'=2+1}); the $k=1$ parts arise from the second part of (\ref{2'=2+1}). Likewise, the following relation holds for $p=g+1$, \begin{align*} \big{(} \, \Gamma _{g,p}^{\prime \, (1)} \, , \, \Gamma _{g',g'+1}^{\prime \, (2)} \, \big{)} = \delta _{g+g',-1} \, \delta _{p+(g'+1),0} (-)^{g+1} \Big{(} \eta \, \big{|} \cZ _{-g,p+1}^{\, \xi } \big{\rangle } \Big{)}^{(1)} \big{|} \cZ _{-g,p+1}^{\, \ast \, \eta } \big{\rangle } ^{(2)} \, . 
\end{align*} In the sense of (\ref{def of inv}), the inverse matrix of (\ref{2'=2+1}) is given by \begin{align} \label{inv'} \big{(} \, \Gamma _{g,p}^{\prime \, (1)} \, , \, \Gamma _{g',p'}^{\prime \, (2)} \, \big{)} ^{-1} & = \delta _{g+g',-1} \bigg[ \delta _{p,0} \big{|} \cZ _{-g',1}^{\, \xi } \big{\rangle } ^{(1)} \Big{(} \xi \, \big{|} \cZ _{-g',1}^{\, \ast \, \eta } \big{\rangle } \Big{)}^{(2)} + \delta _{p,g+1} \big{|} \cZ _{-g',g'}^{\, \xi } \big{\rangle } ^{(1)} \Big{(} \xi \, \big{|} \cZ _{-g',g'}^{\, \ast \, \eta } \big{\rangle } \Big{)}^{(2)} \bigg] \nonumber\\ & \hspace{15mm} - \delta _{g+g',-1} \, \delta _{p+p',0} \sum_{k=0,1} \big{|} \cZ _{-g',p'+k}^{\, \xi } \big{\rangle } ^{(1)} \Big{(} \xi \, \big{|} \cZ _{-g',p'+k}^{\, \ast \, \eta } \big{\rangle } \Big{)}^{(2)} \, . \end{align} Note that $| \cZ _{g,p+k}^{\, \ast } \rangle \cdot | \cZ_{g,p+l} \rangle = - \delta _{k,l}$ holds. The Dirac antibracket is defined by using (\ref{inv'}). Let us consider the antibracket of $\Gamma _{g,p}^{\prime }$ and $S_{\textsf{bv}} [\varphi ^{\prime }]$, whose $\phi ^{\xi }$-part is the same as (\ref{rel b}). We find \begin{align*} \big{(} \, \Gamma _{g,p}^{\prime } \, , S_{\textsf{bv}} [\varphi ^{\prime }] \, \big{)} & = \big{(} \, \Gamma _{g,p} \, , S_{\textsf{bv}} [\varphi ^{\prime }]\, \big{)} + \Gamma _{g,p}^{\prime } \bigg[ \frac{\overset{\leftarrow }{\partial }}{\partial \phi ^{\eta } } \frac{\overset{\rightarrow }{\partial }}{\partial (\phi ^{\eta })^{\ast }} - \frac{\overset{\leftarrow }{\partial }}{\partial (\phi ^{\eta })^{\ast } } \frac{\overset{\rightarrow }{\partial }}{\partial \phi ^{\eta } } \bigg] S_{\textsf{bv}} [\varphi ^{\prime } ] \nonumber\\ & = \big{(} \, \Gamma _{g,p} \, , \, S_{\textsf{bv}} [\varphi ] \, \big{)} - \big{|} \cZ _{1+g,1-p}^{\, \ast \, \eta } \big{\rangle } \, \big{\langle } \cZ _{1+g,1-p}^{\,\, \xi } \big{|} \, \eta \, \xi \, \big{|} \, \mathbf{M} \frac{1}{1-\eta \, \varphi } \big{\rangle } \, . 
\end{align*} The second term is not zero because, unlike (\ref{kernel}), $\frac{\partial }{\partial \phi ^{\eta }} S_{\textsf{bv}} [\varphi ^{\prime } ] \not= 0$\,; in the Dirac antibracket it contracts with the $k=1$ parts of (\ref{inv'}) only, which gives the same contribution as (\ref{constrained master eq}) after the sum. The first term contracts with the $k=0$ parts only and there are no other contractions in the Dirac antibracket. Hence, we obtain $(S_{\textsf{bv}} , S_{\textsf{bv}} )_{\Gamma ^{\prime } } = (S_{\textsf{bv}} , S_{\textsf{bv}} )_{\Gamma } + (S_{\textsf{bv}} , S_{\textsf{bv}} )_{\Gamma } = 0$\,. \clearpage \subsection{Other constrained BV master actions: Switching $\mathbf{M}$ to $\Eta$} Can we simplify the nonlinear constraints (\ref{improved constraint}) by choosing the constrained BV action $S_{\textsf{bv}}$ differently from (\ref{Berkovits BV})? It is indeed possible. The construction of the Dirac antibracket suggests that such a constrained BV action can be obtained by switching the part of (\ref{Berkovits BV}) which generates nonlinear $\mathbf{M}$-gauge transformations to terms which generate linear $\Eta $-gauge transformations, as in \cite{Matsunaga:2017phm}. We introduce the following extra string fields $\Psi _{2+g,-1-p}$ and their sum $\psi $, \begin{align} \label{shifted extra ghost} \psi \equiv \sum_{g \geq 0} \sum_{p=0}^{g} \Psi _{2+g,-1-p} \, , \hspace{5mm} \Psi _{2+g,-1-p} \equiv \sum \phi _{-1-g,p} \, \big{|} \cZ _{2+g,-1-p} \big{\rangle } \, . \end{align} It has the same form as the original string antifield $(\Phi _{-g,p})^{\ast }$. Note that the $\eta $-exact component of $\Psi _{2+g,-1-p}$ equals $\eta \, \Phi _{1+g,-p}$\,, and thus these new extra string fields are Grassmann odd: $(-)^{\varphi } = (-)^{\psi +1}$. We split the sum of all string fields (\ref{all string fields}) into two parts \begin{align*} \varphi = \varphi _{1} + \varphi _{2} \, .
\end{align*} One can consider any splitting as long as $\varphi _{1}$ includes the dynamical field $\Phi $\,. As a functional of these $\varphi _{1}$, $\varphi _{2}$ and $\psi $, we consider the following action, \begin{align} \label{switched BV} S_{\textsf{bv}} [\varphi _{1} ; \varphi _{2} , \psi ] = \int_{0}^{1} dt \, \bigg{\langle } \, \varphi _{1} \, , \, \mathbf{M} \frac{1}{1- t \, \eta \, \varphi _{1} } \, \bigg{\rangle } + \big{\langle } \, \psi \, , \, \Eta \, \varphi _{2} \, \big{\rangle } \, . \end{align} It reduces to the original action (\ref{original action}) if we set all extra fields to zero. We write $S_{1}$ for the first term and $S_{2}$ for the second term: $S_{\textsf{bv}} [\varphi _{1} ; \varphi _{2} , \psi ]= S_{1}[\varphi _{1}] + S_{2}[\varphi _{2} , \psi ]$\,. Note that the $\eta $-exact components of $\psi$ do not appear in the second term. The variation of $S_{\textsf{bv}}$ is given by \begin{align*} \delta S_{\textsf{bv}} & = \Big{\langle } \, \delta \varphi _{1} , \, \mathbf{M} \frac{1}{1 - \eta \, \varphi _{1}} \, \Big{\rangle } + \big{\langle } \, \delta \varphi _{2} \, , \, \Eta \, \psi \, \big{\rangle } + \big{\langle } \, \delta \psi \, , \, \Eta \, \varphi _{2} \, \big{\rangle } \, . \end{align*} We find that the action $S_{\textsf{bv}}=S_{1}[\varphi _{1}] + S_{2}[\varphi _{2} , \psi ]$ is invariant under the gauge transformations \begin{align*} \delta \varphi _{1} = \pi _{1} \, \big[ \hspace{-1.1mm} \big[ \mathbf{M} , \bLambda _{\varphi } \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \varphi _{1} } + \Eta \, \Omega_{\varphi _{1}} \, , \hspace{5mm} \delta \varphi _{2} = \Eta \, \Omega_{\varphi _{2}} \, , \hspace{5mm} \delta \psi = \Eta \, \Omega _{\psi } \, , \end{align*} where $\Lambda _{\varphi }$, $\Omega _{\varphi _{a}}$, and $\Omega _{\psi }$ denote appropriate gauge parameters.
Hence, one can obtain the gauge-invariant action (\ref{switched BV}) by replacing a part of (\ref{Berkovits BV}) by $S_{2}$, in which $\mathbf{M}$-terms turn into $\Eta$-terms while the gauge invariance is preserved. We impose the constraint equations $\widehat{\gamma } = \{ \Gamma _{g,p} , \gamma _{g,p} \} _{g,p}$, \begin{align} \label{simple constraint} \Gamma _{-1-g,-p} \equiv (\Phi _{-g,p}^{\, \xi } )^{\ast } - \eta \, \Phi _{1+g, -p} \, , \hspace{5mm} \gamma _{-1-g,-p} \equiv (\Phi _{-g,p}^{\, \eta } )^{\ast } - \xi \, \eta \, \Psi _{2+g,-1-p} \, , \end{align} for each spacetime ghost number. Note that $(\Gamma , \gamma )=0$ by construction. We find $( \Gamma , \Gamma )^{-1} = \xi $ and $(\gamma , \gamma )^{-1}= \eta \, \xi $\,, and thus $( S_{1} , S_{1})_{\Gamma } = 0$ holds as (\ref{constrained master eq}). Since $(S_{2} , S_{2} )_{\gamma } = 0$ holds as (\ref{invariant constraint}), we find that $S_{\textsf{bv}}=S_{1} + S_{2}$ gives a solution of the constrained master equation, \begin{align*} \big{(} \, S_{\textsf{bv}} \, , \, S_{\textsf{bv}} \, \big{)}_{\Gamma , \gamma } = \big{(} \, S_{1} \, , \, S_{1} \, \big{)}_{\Gamma } + \big{(} \, S_{2} \, , \, S_{2} \, \big{)}_{\gamma } = 0 \, . \end{align*} The relation between (\ref{Berkovits BV}) and (\ref{switched BV}) may be understood as a BV canonical transformation.\footnote{In the context of the conventional BV approach for the free theory, this type of $Q$-$\eta$ switching operation is just a result of BV canonical transformations. See \cite{Matsunaga:2017phm} for details. } \section{Conventional BV approach revisited} We gave several solutions of constrained master equations in the previous section, among which the constrained action $S_{\textsf{bv}}|_{\widehat{\gamma }}$ of (\ref{switched BV}) takes a rather simple form.
We can rewrite the constraints (\ref{simple constraint}) into the simple form \begin{align} \widehat{\gamma } = \varphi ^{\ast } - \psi \, , \end{align} where (\ref{shifted extra ghost}) is extended for all $g$ by using $\psi ^{\eta } \equiv \eta \, \varphi $\,. This simple expression of the constraints is reminiscent of the conventional BV approach, and it suggests that one could construct a BV master action $S_{\textsf{bv}}$ based on the minimal set within the conventional BV approach. Besides reassembling the string antifields, splitting the string fields $\varphi = \varphi _{1} + \varphi _{2}$ and utilising each $\varphi _{a}$ as an argument of $S_{\textsf{bv}}$ play a crucial role in the constrained BV approach. As we show in this section, one can construct a conventional BV master action $S_{\textsf{bv}}$ as a functional of $(\Phi ^{\xi })^{\ast }$ and $(\Phi ^{\eta })^{\ast }$, not a functional of the sum $(\Phi )^{\ast } = (\Phi ^{\xi })^{\ast } + (\Phi ^{\eta })^{\ast }$\,. While we introduce the string antifield $(\Phi _{-g,p})^{\ast }$ for the string field $\Phi _{-g,p}$ as in the usual conventional BV approach, we consider their $\xi $- or $\eta$-exact components separately. Note that $(\Phi ^{\ast })^{\eta } = (\Phi ^{\xi })^{\ast }$ and $(\Phi ^{\ast })^{\xi } = (\Phi ^{\eta })^{\ast }$ because of \begin{align*} \big{\langle } (\Phi _{-g,p}^{\,\, \xi } )^{\ast } , \, \Phi _{-g',p'}^{\,\, \xi } \big{\rangle } = \delta _{g,g'} \, \delta _{p,p'} \, , \hspace{5mm} \big{\langle } (\Phi _{-g,p}^{\,\, \eta } )^{\ast } , \, \Phi _{-g',p'}^{\,\, \eta } \big{\rangle } = \delta _{g,g'} \, \delta _{p,p'} \, . \end{align*} \subsection{Orthogonal decomposition} We consider the orthogonal decomposition of the gauge transformation $\delta \Phi = \delta \Phi ^{\xi } + \delta \Phi ^{\eta }$.
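For orientation, the $\xi $- and $\eta $-exact components used throughout can be extracted with the complementary projectors $\cP ^{\xi }$ and $\cP ^{\eta }$; a schematic reminder, assuming only the standard zero-mode relations $\xi ^{2} = \eta ^{2} = 0$ and $\{ \xi , \eta \} = 1$:
\begin{align*}
\cP ^{\xi } \equiv \xi \, \eta \, , \hspace{5mm} \cP ^{\eta } \equiv \eta \, \xi \, , \hspace{5mm} \cP ^{\xi } + \cP ^{\eta } = 1 \, , \hspace{5mm} \big{(} \cP ^{\xi } \big{)} ^{2} = \cP ^{\xi } \, , \hspace{5mm} \big{(} \cP ^{\eta } \big{)} ^{2} = \cP ^{\eta } \, ,
\end{align*}
so that $\Phi ^{\xi } = \cP ^{\xi } \, \Phi \in \textrm{Im} [\xi ]$ and $\Phi ^{\eta } = \cP ^{\eta } \, \Phi \in \textrm{Im} [\eta ]$ for any state $\Phi $, which is the decomposition applied below to the gauge transformation.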
By redefining the gauge parameters as follows \begin{align} \label{redef} \Lambda _{-1,0}^{\textrm{new}} \equiv - \Lambda _{-1,0}^{\textrm{old}} \, , \hspace{5mm} \Lambda _{-1,1}^{\textrm{new}} \equiv \pi _{1} \, \bxi \, \big[ \hspace{-1.1mm} \big[ \, \mathbf{M} , \, \bLambda _{-1,0}^{\textrm{old}} \, \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \Phi } + \Lambda _{-1,1}^{\textrm{old}} \, , \end{align} we make the $\eta $-exact component of the gauge transformations, $\delta \Phi ^{\eta }$, linear \begin{subequations} \begin{align} \label{pgf a} \delta \Phi ^{\xi } & = \pi _{1} \, \bxi \, \big[ \hspace{-1.1mm} \big[ \, \mathbf{M} , \, \Eta \, \bLambda _{-1,0}^{\textrm{new}} \, \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \Phi } \, , \\ \label{pgf} \delta \Phi ^{\eta } & = \Eta \, \Lambda _{-1,1}^{\textrm{new}} \, . \end{align} \end{subequations} As we will see, this enables us to simplify the other higher gauge transformations: except for the gauge parameters $\{ \Lambda _{-g,0} \} _{g>0}$ carrying picture number $0$, the $\xi $-components of $\delta \Lambda _{-g,p} $ are proportional to the equations of motion and thus become trivial transformations; the $\eta$-exact components of all $\delta \Lambda _{-g,p}$ can be linearised by redefining the gauge parameters.
We find \begin{align*} \delta \Lambda _{-1,1}^{\textrm{new}} & = \pi _{1} \, \bxi \, \big[ \hspace{-1.1mm} \big[ \, \mathbf{M} , \, \delta \bLambda _{-1,0}^{\textrm{old}} \, \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \Phi } + \delta \Lambda _{-1,1}^{\textrm{old}} \nonumber\\ & = \pi _{1} \, \bxi \, \Big[ \hspace{-1.3mm} \Big[ \, \mathbf{M} , \, \pi _{1} \big[ \hspace{-1.1mm} \big[ \mathbf{M} , \bLambda _{-2,1}^{\textrm{old}} \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \Phi } \Big] \hspace{-1.3mm} \Big] \frac{1}{1-\eta \, \Phi } + \Eta \, \bigg[ \pi _{1} \, \bxi \, \big[ \hspace{-1.1mm} \big[ \mathbf{M} , \bLambda _{-2,1}^{\textrm{old}} \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \Phi } + \Lambda _{-2,2}^{\textrm{old}} \bigg] \, \nonumber\\ & = \bxi \, T (\Lambda _{-2,1}^{\textrm{new}} ) + \Eta \, \Lambda _{-2,2}^{\textrm{new}} \, , \end{align*} where $T(\Lambda ) \equiv \pi _{1} \big[ \hspace{-1.1mm} \big[ [ \hspace{-0.6mm} [ \mathbf{M} , \bLambda ] \hspace{-0.6mm} ] , (\textrm{e.o.m}) \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \Phi }$ denotes a trivial transformation. Hence, for any $0<p\leq g$, the higher gauge transformations can be rewritten as follows \begin{subequations} \begin{align} \delta \Lambda _{-g,0}^{\textrm{new}} & = \pi _{1} \, \bxi \, [ \hspace{-0.6mm} [ \mathbf{M} , \Eta \, \bLambda _{-g-1,0}^{\textrm{new}} ] \hspace{-0.6mm} ] \frac{1}{1 - \eta \, \Phi } + \Eta \, \Lambda _{-g-1,1}^{\textrm{new}} \, , \\ \label{pgf higher} \delta \Lambda _{-g,p}^{\textrm{new}} & = \Eta \, \Lambda _{-g-1,p+1}^{\textrm{new}} \, . \end{align} \end{subequations} While the orthogonal decomposition $\delta\Phi = \delta \Phi ^{\xi } + \delta \Phi ^{\eta }$ makes $\delta \Lambda ^{\xi }$ trivial for $p>0$, redefinitions of $\Lambda $ make $\delta \Lambda ^{\eta }$ linear. These operations enable us to obtain a simple BV master action. \vspace{1mm} Note that partial gauge fixing is an operation omitting $\Phi ^{\eta }$ and (\ref{pgf}) at the classical level. 
Then, the line of $\Lambda _{-g,p=g}$ of (\ref{pgf higher}) does not appear in its higher gauge transformations. This gives the gauge reducibility of partially gauge-fixed superstring field theory in the large Hilbert space, in which the reassembled string fields--antifields corresponding to (\ref{redef}) are rather natural. \subsection{BV master action} Let $F$ be a functional of the minimal set of spacetime fields--antifields, which may be a functional of string fields or string antifields. We perturbatively construct $S_{\textsf{bv}}$ satisfying \begin{align*} \delta _{\textsf{bv}} F = \big{(} \, F \, , \, S_{\textsf{bv}} \, \big{)}_{\textrm{min}} \, , \end{align*} whose nilpotency is our guiding principle. The initial condition is $S_{\textsf{bv}}|^{(0)} = S[\Phi ]$. We write $(\Phi ^{\xi })^{\ast }$ or $(\Phi ^{\eta } )^{\ast }$ for the string antifield for the $\xi $- or $\eta$-exact component of the dynamical string field $\Phi = \Phi ^{\xi }+\Phi ^{\eta }$, respectively. As we will see, $S_{\textsf{bv}}$ becomes a functional of $(\Phi ^{\xi })^{\ast }$ and $(\Phi ^{\eta })^{\ast }$\,. We require that, as well as $\Phi ^{\xi } \in \textrm{Im} [\xi ]$ and $\Phi ^{\eta } \in \textrm{Im}[\eta ]$, their BV transformations satisfy \begin{align*} \delta _{\textsf{bv}} \Phi ^{\xi } = \big{(} \, \Phi ^{\xi } , \, S_{\textsf{bv}} \, \big{)}_{\textrm{min}} = \frac{\partial S_{\textsf{bv}} }{\partial (\Phi ^{\xi })^{\ast } } \in \textrm{Im} [ \xi ] \, , \hspace{5mm} \delta _{\textsf{bv}} \Phi ^{\eta } = \big{(} \, \Phi ^{\eta } , \, S_{\textsf{bv}} \, \big{)}_{\textrm{min}} = \frac{\partial S_{\textsf{bv}} }{\partial (\Phi ^{\eta })^{\ast } } \in \textrm{Im} [ \eta ] \, . \end{align*} Further, we require that the $\eta $-exact components of the BV transformations are linear \begin{align} \label{linear} \delta _{\textsf{bv}} \Phi _{-g,p}^{\, \eta } = \big{(} \, \Phi _{-g,p}^{\, \eta } \, , \, S_{\textsf{bv}} \, \big{)}_{\textrm{min}} = \Eta \, \Phi _{-g-1,p+1} \, .
\end{align} In other words, we consider redefinitions of gauge parameter fields given in section 6.1 and focus on the gauge algebra of the orthogonally decomposed gauge transformations. In general, $\delta \Phi _{-g,p}^{\eta }$ could be a nonlinear function of fields--antifields. This requirement (\ref{linear}) is too restrictive and should be removed to find a more general form of the BV master action in the large Hilbert space. However, as we will see, this requirement prohibits any interacting terms of $\Phi _{-g,p \not= 0}$ or $(\Phi _{-g,p\not= 0})^{\ast }$, and it enables us to construct a simple BV master action. We find that string-antifield derivatives of $S^{(1)}$ are given by \begin{subequations} \begin{align} \label{afd a} \delta _{\textsf{bv}} \Phi ^{\xi } |^{(0)} = \big{(} \, \Phi ^{\xi } \, , \, S_{\textsf{bv}} \, \big{)} |^{(0)} = \frac{\partial S^{(1)} }{\partial (\Phi ^{\xi } )^{\ast } } & = \pi _{1} \, \bxi \, \big[ \hspace{-1.1mm} \big[ \mathbf{M} , \Eta \, \bPhi _{-1,0} \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \Phi } \, , \\ \delta _{\textsf{bv}} \Phi ^{\eta } |^{(0)} = \big{(} \, \Phi ^{\eta } \, , \, S_{\textsf{bv}} \, \big{)} |^{(0)} = \frac{\partial S^{(1)} }{\partial (\Phi ^{\eta } )^{\ast } } & = \Eta \, \Phi _{-1,1} \, , \end{align} \end{subequations} which are determined from the gauge transformations (\ref{pgf a}-b) and their gauge algebra. Note that $(\Phi ^{\xi })^{\ast }$ is $\eta $-exact $(\Phi ^{\xi} )^{\ast } = \cP ^{\eta } (\Phi ^{\xi} )^{\ast }$ and $(\Phi ^{\eta })^{\ast }$ is $\xi$-exact $(\Phi ^{\eta } )^{\ast } = \cP ^{\xi } (\Phi ^{\eta } )^{\ast }$\,. 
These string-antifield derivatives (\ref{afd a}-b) determine the antifield number $1$ part of $S_{\textsf{bv}}$ as follows \begin{align} \label{afn 1 part} S^{(1)} = \Big{\langle } ( \Phi ^{\xi } )^{\ast } , \, \bxi \, \big[ \hspace{-1.1mm} \big[ \mathbf{M} , \Eta \, \bPhi _{-1,0} \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \Phi } \Big{\rangle } + \big{\langle } ( \Phi ^{\eta } )^{\ast } , \, \Eta \, \Phi _{-1,1} \big{\rangle } \, . \end{align} Note that this $S^{(1)}$ is not a functional of $\Phi ^{\ast } = (\Phi ^{\xi })^{\ast } + (\Phi ^{\eta })^{\ast }$ but a functional of $(\Phi ^{\xi })^{\ast }$ and $(\Phi ^{\eta })^{\ast }$. Clearly, string-field derivatives of (\ref{afn 1 part}) become $\eta $-exact states as follows \begin{align*} & \hspace{15mm} \frac{\partial S^{(1)} }{\partial \Phi ^{\xi } } = \pi _{1} \, \big[ \hspace{-1.1mm} \big[ [ \hspace{-0.6mm} [ \mathbf{M} , \Eta \, \bPhi _{-1,0} ] \hspace{-0.6mm} ] , (\bPhi ^{\xi })^{\ast } \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \Phi } \, , \\ & \frac{\partial S^{(1)} }{\partial \Phi _{-1,0}^{\,\, \xi } } = \pi _{1} \, \big[ \hspace{-1.1mm} \big[ \mathbf{M} , (\bPhi ^{\xi })^{\ast } \big] \hspace{-1.09mm} \big] \frac{1}{1-\eta \, \Phi } \, , \hspace{12mm} \frac{\partial S^{(1)} }{\partial \Phi _{-1,1}^{\,\, \xi } } = \Eta \, ( \Phi ^{\eta } )^{\ast } \, . \end{align*} In other words, as for the original action, half of the string-field derivatives vanish: \begin{align*} \frac{\partial S^{(1)} }{\partial \Phi ^{\eta } } = \frac{\partial S^{(1)} }{\partial \Phi _{-1,0}^{\,\, \eta } } = \frac{\partial S^{(1)} }{\partial \Phi _{-1,1}^{\,\, \eta } } = 0 \hspace{8mm} \textrm{as} \hspace{5mm} \frac{\partial S}{\partial \Phi ^{\eta } }= 0 \, .
\end{align*} By construction of (\ref{linear}), half of the string-antifield derivatives of $S^{(2)}$ are given by \begin{align*} \delta _{\textsf{bv}} \Phi ^{\eta } |^{(1)} & = \frac{\partial S^{(2)} }{\partial (\Phi ^{\eta })^{\ast } } = 0 \, , \hspace{8mm} \delta _{\textsf{bv}} \Phi _{-1,p}^{\,\, \eta } |^{(0)} = \frac{\partial S^{(2)} }{\partial (\Phi _{-1,p}^{\,\, \eta })^{\ast } } =\Eta \, \Phi _{-2,1+p} \, \hspace{3mm} (p = 0,1 ) \, . \end{align*} To solve the master equation, the other string-antifield derivatives of $S^{(2)}$ have to take the form \begin{align*} \delta _{\textsf{bv}} \Phi ^{\xi } |^{(1)} & = \frac{\partial S^{(2)} }{\partial (\Phi ^{\xi })^{\ast } } = \pi _{1} \, \bxi \Big[ \hspace{-1.3mm} \Big[ \big[ \hspace{-1.1mm} \big[ [ \hspace{-0.6mm} [ \mathbf{M} , \Eta \, \bPhi _{-1,0} ] \hspace{-0.6mm} ] , \Eta \, \bPhi _{-1,0} \big] \hspace{-1.09mm} \big] + [ \hspace{-0.6mm} [ \mathbf{M} , \Eta \, \bPhi _{-2,0} ] \hspace{-0.6mm} ] , (\bPhi ^{\xi })^{\ast } \Big] \hspace{-1.3mm} \Big] \frac{1}{1-\eta \, \Phi } \, , \\ \delta _{\textsf{bv}} \Phi _{-1,0}^{\,\, \xi } |^{(0)} & = \frac{\partial S^{(2)} }{\partial (\Phi _{-1,0}^{\,\, \xi })^{\ast } } = \pi _{1} \, \bxi \, \bigg[ \big[ \hspace{-1.1mm} \big[ [ \hspace{-0.6mm} [ \mathbf{M} , \Eta \, \bPhi _{-1,0} ] \hspace{-0.6mm} ] , \Eta \, \bPhi _{-1,0} \big] \hspace{-1.09mm} \big] + [ \hspace{-0.6mm} [ \mathbf{M} , \Eta \, \bPhi _{-2,0} ] \hspace{-0.6mm} ] \bigg] \frac{1}{1-\eta \, \Phi } \, , \\ \delta _{\textsf{bv}} \Phi _{-1,1}^{\,\, \xi } |^{(0)} & = \frac{\partial S^{(2)} }{\partial (\Phi _{-1,1}^{\,\, \xi })^{\ast } } = 0 \, . \end{align*} Note that the requirement (\ref{linear}) prohibits not only nonlinear $\eta$-transformations but also the interacting terms of $\Phi _{-1,1}$\,. These derivatives determine the antifield number $2$ part of the master action $S^{(2)}$ satisfying $\big{(} S^{(0)} + S^{(1)} + S^{(2)} + \cdots , \, S^{(0)} + S^{(1)} + S^{(2)} + \cdots \big{)}_{\textrm{min}} = 0$.
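Order by order in the antifield number, this master equation unpacks into the usual recursive chain of conditions; schematically, since the antibracket carries antifield number $-1$, the antifield-number-$n$ part collects all $\big{(} S^{(k)} , S^{(l)} \big{)}_{\textrm{min}}$ with $k+l=n+1$:
\begin{align*}
\big{(} \, S^{(0)} \, , \, S^{(1)} \, \big{)}_{\textrm{min}} = 0 \, , \hspace{5mm} 2 \, \big{(} \, S^{(0)} \, , \, S^{(2)} \, \big{)}_{\textrm{min}} + \big{(} \, S^{(1)} \, , \, S^{(1)} \, \big{)}_{\textrm{min}} = 0 \, , \hspace{5mm} \ldots \, ,
\end{align*}
so each $S^{(n+1)}$ is determined from the lower orders, up to canonically trivial pieces.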
Likewise, one can construct $S^{(3)}$, $S^{(4)}$, and higher $S^{(n)}$ on the basis of the antifield number expansion. These are functionals of $\Phi _{-g,p}$, $(\Phi _{-g,p}^{\, \xi })^{\ast }$, and $(\Phi _{-g,p}^{\, \eta })^{\ast }$ as expected. \vspace{1mm} Let $\varphi _{p}$ be the sum of all string fields carrying world-sheet picture number $p$, which can be decomposed as $\varphi _{p} = \varphi _{p}^{\xi } + \varphi _{p}^{\eta }$\,. We write $(\varphi _{p}^{\xi } )^{\ast }$ or $(\varphi _{p}^{\eta })^{\ast }$ for the string antifield for the $\xi $- or $\eta $-exact component of $\varphi _{p}$ respectively as follows, \begin{align*} \varphi _{p} \equiv \sum_{g=p}^{\infty } \Phi _{-g,p} \, , \hspace{5mm} (\varphi _{p}^{\xi } )^{\ast } = \sum_{g=p}^{\infty } ( \Phi _{-g,p}^{\, \xi } )^{\ast } \, , \hspace{5mm} (\varphi _{p}^{\eta } )^{\ast } = \sum_{g=p}^{\infty } ( \Phi _{-g,p}^{\, \eta } )^{\ast } \, . \end{align*} The dynamical string field $\Phi$ is included in $\varphi _{0}$ and the sum $\varphi $ of all string fields is given by $\varphi = \varphi _{0} + \sum_{p>0} \varphi _{p}$\,. We find that the BV master action $S_{\textsf{bv}} = S_{\textsf{bv}} [ \, \varphi , ( \varphi ^{\xi })^{\ast } , ( \varphi ^{\eta } )^{\ast } ]$ is \begin{align} \label{revisited} S_{\textsf{bv}} = \int_{0}^{1} dt \, \bigg{\langle } \varphi _{0} + \xi \, ( \varphi _{0}^{\xi } )^{\ast } , \, \mathbf{M} \frac{1}{1- t \, \eta \, ( \varphi _{0} + \xi \, ( \varphi _{0}^{\xi } )^{\ast } ) } \bigg{\rangle } + \sum_{p > 0} \Big{\langle } ( \varphi _{p-1}^{\eta } )^{\ast } , \, \Eta \, \varphi _{p} \Big{\rangle } \, . \end{align} While the first term is a functional of $\varphi _{0}$ and $(\varphi _{0}^{\xi })^{\ast}$, the second term is a functional of $\varphi _{p>0}$, $(\varphi _{0}^{\eta })^{\ast }$, and $(\varphi _{p}^{\eta })^{\ast }$.
The variation of $S_{\textsf{bv}}$ takes the following form \begin{align*} \delta S_{\textsf{bv}} & = \bigg{\langle } \, \delta \varphi _{0} , \, \mathbf{M} \frac{1}{1- \eta \, ( \varphi _{0} + \xi \, ( \varphi _{0}^{\xi } )^{\ast } ) } \bigg{\rangle } + \sum_{p>0} \Big{\langle } \delta \varphi _{p} , \, \Eta \, ( \varphi _{p-1}^{\, \eta } )^{\ast } \Big{\rangle } \nonumber\\ & \hspace{10mm} + \bigg{\langle } \, \delta ( \varphi _{0}^{\xi } )^{\ast } , \, \bxi \, \mathbf{M} \frac{1}{1-\eta \, ( \varphi _{0} + \xi \, ( \varphi _{0}^{\xi } )^{\ast } ) } \bigg{\rangle } + \sum _{p>0} \Big{\langle } \delta ( \varphi _{p-1}^{\, \eta } )^{\ast } , \, \Eta \, \varphi _{p} \Big{\rangle } \, . \end{align*} Note that $\Phi _{-g,p}$ for $p>0$ has no interacting term, and thus the third term has no contraction with the second or fourth term in the master equation. Clearly, our master action (\ref{revisited}) satisfies \begin{align*} \frac{1}{2} \big{(} \, S_{\textsf{bv}} , \, S_{\textsf{bv}} \, \big{)}_{\textrm{min}} = \frac{\overset{\leftarrow }{\partial } S_{\textsf{bv}} }{\partial \varphi ^{\xi }} \cdot \frac{\overset{\rightarrow }{\partial } S_{\textsf{bv}} }{\partial (\varphi ^{\xi } )^{\ast }} + \frac{\overset{\leftarrow }{\partial } S_{\textsf{bv}} }{\partial \varphi ^{\eta }} \cdot \frac{\overset{\rightarrow }{\partial } S_{\textsf{bv}} }{\partial ( \varphi ^{\eta } )^{\ast }} = 0 \, . 
\end{align*} While the BV transformations of string fields take the following forms, \begin{subequations} \begin{align} \delta \varphi _{0} & = \big{(} \, \varphi _{0}^{\xi } + \varphi _{0}^{\eta } \, , \, S_{\textsf{bv}} \, \big{)}_{\textrm{min}} = \pi _{1} \, \bxi \, \mathbf{M} \frac{1}{1 - \eta \, ( \varphi _{0} + \xi \, (\varphi _{0} )^{\ast }) } + \Eta \, \varphi _{1} \, , \\ \delta \varphi _{p} & = \big{(} \, \varphi _{p}^{\xi } + \varphi _{p}^{\eta } \, , \, S_{\textsf{bv}} \, \big{)}_{\textrm{min}} = \Eta \, \varphi _{p+1} \, , \end{align} \end{subequations} the BV transformations of string antifields are given by \begin{subequations} \begin{align} \delta (\varphi _{0} )^{\ast } & = \big{(} \, (\varphi _{0}^{\xi })^{\ast } + (\varphi _{0}^{\eta })^{\ast } \, , \, S_{\textsf{bv}} \, \big{)}_{\textrm{min}} = \pi _{1} \, \mathbf{M} \frac{1}{1 - \eta \, ( \varphi _{0} + \xi \, (\varphi _{0} )^{\ast }) } \, , \\ \delta (\varphi _{p} )^{\ast } & = \big{(} \, (\varphi _{p}^{\xi })^{\ast } + (\varphi _{p}^{\eta })^{\ast } \, , \, S_{\textsf{bv}} \, \big{)}_{\textrm{min}} = \Eta \, ( \varphi _{p-1}^{\, \eta } )^{\ast } \, . \end{align} \end{subequations} The role of the second term of (\ref{revisited}) in perturbation theory depends on the gauge-fixing condition: for example, it is integrated out and trivially decouples in the Siegel gauge; however, it will provide nontrivial contributions to loop amplitudes in the $d_{0}$-gauge. See \cite{Kroyter:2012ni, Torii:2012nj}. \section{Concluding remarks} In this paper, we developed the Batalin-Vilkovisky formalism of superstring field theory in the large Hilbert space with the goal of understanding how to construct large master actions for the interacting theory. We first showed that the constrained BV approach \cite{Batalin:1992mk} is indeed applicable, and that Berkovits' simple prescription \cite{Berkovits:2012np} is rather suitable for the large but partially gauge-fixed theory.
By modifying its constraints, its extra string fields, or the starting unconstrained action, we constructed several constrained BV master actions in the large Hilbert space. We next showed that the conventional BV approach is also applicable if and only if we give up constructing master actions as naive functionals of string fields--antifields. We constructed a BV master action as a functional of fine parts $\{ \varphi _{a} \}_{a=1}^{n}$ of the string fields--antifields $\varphi = \varphi _{1} + \cdots + \varphi _{n}$, not a functional of the string fields--antifields themselves. It is worth mentioning that our analysis is readily applicable to the \textit{large} theory which is obtained by embedding \cite{Erler:2016ybs} or \cite{Erler:2014eba}, and thus BV master actions for the \textit{large} $A_{\infty }$ theory including the Ramond sector or the \textit{large} $L_{\infty }$ theory are constructed in exactly the same manner. Also, now that BV master actions in the large Hilbert space have been obtained, one can discuss the validity of partial gauge fixing. We conclude with some remarks. \vspace{-2.5mm} \subsubsection*{BV formalism in the large Hilbert space} \vspace{-1.5mm} First, it is worth revisiting and connecting different pairs of $(S_{\textsf{bv}} , \widehat{\Gamma } , \varphi _{\textsf{ex}} )$ to obtain a better understanding of the BV formalism in the large Hilbert space. While we gave several constrained BV master actions in section 5, there should exist canonical transformations connecting them. Next, it is desirable to find a more general form of the master action in the large Hilbert space on the basis of the conventional BV approach. Our master action (\ref{revisited}) has a simple form but is constructed based on the requirement (\ref{linear}), which may be too restrictive.
\vspace{-2.5mm} \subsubsection*{WZW-like formulation} \vspace{-1.5mm} Our results give a simple example of constructing BV master actions for WZW-like superstring field theory (\ref{S}), which is based on the parametrisation (\ref{another sol}). It is important to clarify whether one can construct a BV master action for other parametrisations of the WZW-like functional (\ref{pure}), such as $A_{\eta }[\Phi ] = ( \eta e ^{\Phi } ) e^{- \Phi }$. One may be able to apply the constrained BV approach to the Berkovits theory in a similar manner, in which partial gauge fixing may take a nonlinear form such that $(\delta e^{\Phi }) e^{-\Phi }= Q \Lambda + \eta \, \Omega - [ \hspace{-0.6mm} [ A_{\eta } , \Omega ] \hspace{-0.6mm} ] $ is orthogonally decomposed as (\ref{pgf a}-b). A procedure which does not depend on these parametrisations would be necessary in order to apply the BV formalism to the general WZW-like formulation \cite{Matsunaga:2016zsu, Erler:2017onq}. \vspace{-2.5mm} \subsubsection*{Gauge tensors' formulae} \vspace{-1.5mm} Since our master actions are not naive functionals of string fields--antifields, it is interesting to clarify the difference between the gauge tensors based on spacetime fields and those based on string fields. This would reveal why a ready-made BV procedure does not work in the large Hilbert space from the point of view of the gauge algebra, and would clarify how ``partial gauge fixing'' actually fixes the gauge. \section*{Acknowledgements} The authors would like to thank Mitsuhiro Kato and the organizers of ``Strings, Fields, and Particles 2017 at Komaba''. H.M. also thanks Ted Erler, Hiroshi Kunitomo, Yuji Okawa, Martin Schnabl, and Shingo Torii. M.N. is grateful to Yuji Tachikawa. This research has been supported by the Grant Agency of the Czech Republic, under the grant P201/12/G028.
\section{Introduction} The cataclysmic variable AR~Scorpii (AR~Sco) shows large-amplitude, highly periodic pulsations across the electromagnetic spectrum every 1.97 minutes, superimposed upon a strong waveform at the system's 3.56-h orbital period \citep{marsh16}. The system's low X-ray luminosity rules out the presence of significant accretion by the white dwarf (WD) primary from its M-dwarf companion, so neither of these signals is powered by accretion \citep{marsh16, takata}. Instead, AR~Sco has been called a white-dwarf pulsar because its pulsations consist of synchrotron radiation and are apparently powered by the spin-down of its highly magnetized ($\lesssim$~500~MG) WD, similar to neutron-star pulsars \citep{marsh16, buckley17}. The spin-down of the WD is a foundational conclusion from \citet{marsh16}, who calculated that the spin-down rate they detected is large enough to power AR Sco's pulsations. However, \citet{pb18} contested the significance of this spin-down after finding that the \citet{marsh16} spin-down ephemeris did not accurately predict the frequencies of the spin and orbital periods in their optical photometry. Although \citet{pb18} concluded that a linear spin ephemeris accurately described their data, they were also careful to note that their result constrained---but did not rule out---the WD spin-down. An unambiguous detection of the slowing spin rate, they wrote, would require additional observations. AR Sco's light curve contains a number of remarkable features at different timescales. The orbital waveform is brightest at phase $\sim$0.4 and has a peak-to-peak amplitude of $\sim$1.5-2 mag in the optical, depending on the bandpass. \citet{katz17} proposes two alternative models to explain the orbital modulation and why it does not peak at superior conjunction. 
To provide observational constraints for these models, \citet{littlefield17} analyzed archival, ground-based photometry as well as 79 days of continuous photometry by the Kepler \textit{K2} mission. They reported that while the system's overall brightness has remained relatively stable since 2005, the orbital waveform peaked at a different phase and had a slightly lower amplitude between 2005-2007. Moreover, the \textit{K2} photometry showed aperiodic brightness fluctuations at the level of a few percent on a timescale of days \citep{littlefield17}. \begin{figure*} \centering \includegraphics[width=\textwidth]{sample_lightcurve.pdf} \caption{Sample SLKT light curve of AR Sco. The lower panel replots one section of the upper panel so that the pulsations may be seen more distinctly.} \label{sample_lightcurve} \end{figure*} The 1.97-min pulsations are arguably AR Sco's defining observational characteristic and are remarkable for their speed, amplitude (a factor of $\sim4$ in the optical), and phase coherence across a wide range of wavelengths, including radio, near-infrared, optical, ultraviolet \citep{marsh16, stanway} and even soft X-rays \citep{takata}. Their period corresponds with the beat period ({\it i.e.}, the orbital sideband) between the binary orbital period and 1.95-min WD spin period, and they are thought to originate on the inner hemisphere of the M5-class companion star \citep{marsh16, takata}. \citet{geng16} proposes that the WD's magnetic axis is inclined with respect to its rotational axis and that the pulses are caused by the interaction of the WD's magnetosphere with the secondary's wind. The 1.95-min spin period of the white dwarf in AR~Sco is extremely short when compared with the system's orbit. White dwarfs are not born spinning so rapidly, and it is thought that a phase of high accretion powered AR~Sco's rapid spin-up, followed by the current epoch of little or no mass transfer \citep{marsh16, buckley17}. 
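The correspondence between the 1.97-min pulsation period and the spin-orbit beat period quoted above is simple arithmetic. A quick check in Python, using only the rounded values given in the text (1.95-min spin, 3.56-h orbit), so the result is approximate:

```python
# Beat (orbital-sideband) period of AR Sco from the rounded periods in the text.
# The beat frequency is the difference of the spin and orbital frequencies:
#   1/P_beat = 1/P_spin - 1/P_orb

P_spin = 1.95 * 60.0    # WD spin period in seconds (~117 s)
P_orb = 3.56 * 3600.0   # orbital period in seconds (~12816 s)

f_beat = 1.0 / P_spin - 1.0 / P_orb   # beat frequency in Hz
P_beat = 1.0 / f_beat                 # beat period in seconds

print(P_beat / 60.0)  # ~1.97 min, matching the observed pulsation period
```

With these rounded inputs the beat period comes out near 1.97 minutes, consistent with the pulsation period reported by \citet{marsh16}.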
Here, we present high-cadence photometry with the twin objectives of (1) searching for a spin-down and (2) disentangling the spin and beat periods. \section{Data} \subsection{SLKT observations} We obtained 39 hours of high-time-resolution photometry of AR Sco using the 80-cm Sarah L. Krizmanich Telescope (SLKT) and an unfiltered Santa Barbara Instrument Group STL-1001 CCD camera at the University of Notre Dame in 2016, 2017, and 2018. As indicated in Table~\ref{log}, which lists details of each time series, the exposure time was 2~s, and factoring in the overhead between images, the typical cadence was 5-6~s, and each time series usually spanned 1-3~h. Fig.~\ref{sample_lightcurve} plots a representative, 2.5-hour-long light curve and zooms in on one segment during which the pulsations were especially prominent. \begin{figure} \centering \includegraphics[width=\columnwidth]{power_spectra.pdf} \caption{Lomb-Scargle power spectra of the SLKT data, focused on the beat and spin frequencies. The Lomb-Scargle model for both power spectra used two harmonic terms. The light contour gives the $1\sigma$ confidence interval from \citet{pb18}, while the dark contour is the projected confidence interval from the spin-down ephemeris in \citet{marsh16}.} \label{power_spectra} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{orbital_phase.pdf} \caption{{\bf Top:} The light curve of AR~Sco phased on the system's orbital period. The scatter results from the $\sim$2-min periodic flashes that are incoherent at this phasing. The green line is a Fourier series with 3 harmonics that traces the base of these pulsations, and it represents the orbital modulation in the absence of pulsations. {\bf Bottom:} The residuals after subtraction of the orbital modulation. 
The amplitude of the pulsed variation peaks at orbital phases $\sim$0.25 and $\sim$0.75.} \label{orbital_phaseplot} \end{figure} The system's optical pulsations are extremely fast and well-defined, so CCD data must be well-timed in order to be useful. Consequently, we synchronized the clock of the CCD control computer to an atomic clock prior to the start of each time series. Additionally, we measured the shutter lag of the detector (the time offset between the shutter actuation and the timestamp recorded in the FITS header), found it to be stable from night-to-night, and applied an appropriate correction to the image timestamps. Finally, we applied a BJD$_{TDB}$ correction to all observations using routines in Astropy \citep{astropy}. We used aperture photometry to extract AR Sco's light curve, but there were no optimal comparison stars within the field of view. According to APASS photometry, each of the field stars is quite red, probably because there is a dark nebula associated with the $\rho$ Ophiuchi complex along the line of sight to AR Sco. Faced with this paucity of choices, we selected UCAC4 336-082341 ($\alpha_{2000}$ = 16h 22m 06.481s, $\delta_{2000} = -22^{\circ}$ 53' 27.84'', $g'-r' = 2.17$) as our comparison star. Since the spectral properties of such a reddened star are likely a poor match for those of AR Sco, any attempt to infer standard $V$ magnitudes from the unfiltered photometry would probably suffer from serious systematic errors. Consequently, we do not place our photometry on a standard magnitude scale. \subsection{AAVSO observations} As indicated in Table~\ref{aavso_log}, coauthors FJH and GM observed AR Sco and submitted their observations to the AAVSO International Database\footnote{https://www.aavso.org} under AAVSO observer codes HMB and MGW, respectively. Their observations had cadences that ranged from 14-35~s. 
\begin{figure}[h] \includegraphics[width=\columnwidth]{beat_bins.pdf} \caption{The beat pulse shape as a function of orbital phase. After subtracting the orbital modulation, the light curve was divided into ten orbital phase bins, and each bin is phased on the beat period. Two beat cycles are shown in each panel. Two unequal-amplitude peaks are seen in each beat cycle, suggesting that both poles of the WD magnetic field are interacting with the secondary star. The amplitude and pulse shape are seen to vary with orbital phase. The pulse shape is asymmetric during the first half of an orbit, resulting in its centroid shifting in phase by over 10~s over the course of an orbit.} \label{beat_bins} \end{figure} \begin{table} \centering \caption{Log of SLKT observations} \label{log} \begin{tabular}{ccc} \hline UT Start Date & Length (hr) & Cadence (s) \\ \hline 2016-07-28 & 2.0 & 6 \\ 2016-08-03 & 1.6 & 5 \\ 2016-08-22 & 0.9 & 5 \\ 2016-08-23 & 1.1 & 5 \\ 2016-09-01 & 1.0 & 5 \\ 2016-09-02 & 0.9 & 5 \\ 2016-09-03 & 0.7 & 4 \\ 2016-09-04 & 0.7 & 4 \\ 2017-04-23 & 2.9 & 5 \\ 2017-05-07 & 2.3 & 5 \\ 2017-05-08 & 1.8 & 5 \\ 2017-05-15 & 1.0 & 5 \\ 2017-05-17 & 1.9 & 5 \\ 2017-06-01 & 3.1 & 5 \\ 2017-06-02 & 2.1 & 5 \\ 2017-06-03 & 0.8 & 4 \\ 2017-07-07 & 1.6 & 5 \\ 2017-08-12 & 0.9 & 5 \\ 2018-02-26 & 1.4 & 5 \\ 2018-03-18 & 1.0 & 5 \\ 2018-03-25 & 1.7 & 7 \\ 2018-03-26 & 2.8 & 5 \\ 2018-04-18 & 1.3 & 5 \\ 2018-04-21 & 0.8 & 5 \\ 2018-05-24 & 2.5 & 5 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Log of AAVSO observations. 
The ``Obs.'' column contains the AAVSO code of the observer.} \label{aavso_log} \begin{tabular}{cccc} \hline Observer & UT Start Date & Length (hr) & Cadence (s) \\ \hline HMB & 2015-07-24 & 1.7 & 35 \\ HMB & 2015-07-25 & 4.2 & 35 \\ HMB & 2015-07-26 & 4.3 & 35 \\ HMB & 2015-07-27 & 4.2 & 35 \\ HMB & 2015-08-07 & 2.7 & 35 \\ HMB & 2015-08-08 & 2.3 & 35 \\ HMB & 2015-08-10 & 2.1 & 35 \\ HMB & 2016-04-28 & 4.8 & 34 \\ HMB & 2016-04-29 & 5.7 & 24 \\ HMB & 2016-04-30 & 5.8 & 24 \\ HMB & 2016-05-01 & 5.9 & 24 \\ HMB & 2016-05-02 & 5.2 & 24 \\ HMB & 2016-05-03 & 5.7 & 24 \\ HMB & 2016-05-04 & 5.7 & 24 \\ HMB & 2016-05-05 & 5.8 & 14 \\ HMB & 2016-07-31 & 2.6 & 35 \\ MGW & 2016-08-05 & 4.1 & 24 \\ MGW & 2016-08-16 & 2.3 & 19 \\ MGW & 2016-09-04 & 3.7 & 23 \\ HMB & 2016-09-09 & 2.7 & 35 \\ HMB & 2016-09-10 & 2.6 & 35 \\ \hline \end{tabular} \end{table} \section{Analysis} \label{sec:analysis} Lomb-Scargle power spectra of the SLKT data, presented in Fig.~\ref{power_spectra}, show that the measured spin and beat frequencies agree with those reported in \citet{pb18}. However, the predicted spin and beat frequencies from the \citet{marsh16} spin-down ephemeris are in poor agreement with the peaks observed in our data, confirming the result in \citet{pb18}. We explore this issue in depth in Sec.~\ref{sec:spin_down}. After eliminating a small number of SLKT observations with a signal-to-noise ratio of less than 5, we display our photometry in Fig.~\ref{orbital_phaseplot}. The data have been phased to the orbital period based on the ephemeris in \citet{marsh16}. The zero point of the orbital phase is defined as the moment that the red secondary star is at inferior conjunction. The overall light curve shape is similar to the slower cadence data analyzed by \citet{littlefield17}. The light curve is asymmetric, with a quick rise to maximum brightness at an orbital phase of 0.4 and a second peak at an orbital phase of $\sim$0.75. 
\begin{figure} \includegraphics[width=\columnwidth]{drifting_beat.pdf} \caption{The AR~Sco light curve phased on the beat period as it varies over a binary orbit. The orbital waveform has been removed. The major beat pulse peaks at orbital phase $\sim$0.25, then shifts to earlier beat phases before reaching a minimum at orbital phase $\sim$0.55.} \label{drifting_beat} \end{figure} We assume that the observed light curve can be represented as the sum of a slowly varying orbital modulation and the high-frequency, pulsed emission. To remove the variations associated with the orbit, we divided the orbit into phase bins and identified the faintest 5\%\ of the points in each bin. Because contamination by the pulsed emission inflates the amplitude of the orbital waveform in a simple Lomb-Scargle power spectrum, the faintest observations in these bins more accurately describe the underlying orbital modulation; the exact threshold was selected by trial and error. We then represented these points with a Fourier series so that we could predict the strength of the orbital modulation as a function of orbital phase. As seen in Fig.~\ref{orbital_phaseplot}, the orbital variation is moderately asymmetric, with the peak brightness just before phase 0.5 and the minimum at phase 0.0. \begin{figure} \includegraphics[width=\columnwidth]{O-C.pdf} \caption{An O$-$C diagram of the beat pulses reveals an obvious orbital-phase dependence. The increased scatter in the residuals near orbital phase 0.5 corresponds with a dropoff in the amplitude of the beat pulse. Decreased SNR near orbital phase 0.0 contributes to the noisy timings near that phase.} \label{O-C} \end{figure} Subtracting the orbital-modulation function from the phased orbital light curve yields the pulsed light curve as a function of orbital phase (bottom panel in Fig.~\ref{orbital_phaseplot}). 
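The per-bin "faintest 5\%" extraction described above can be sketched on synthetic data. The sketch below (an illustration, not the actual pipeline; the amplitudes and pulse frequency are invented) exploits the fact that the pulsed emission only ever adds flux, so the floor of each phase bin traces the orbital waveform:

```python
import math, random

random.seed(1)
# Synthetic light curve: smooth orbital modulation plus strictly positive
# pulsed emission (the pulses only ever add flux), loosely mimicking AR Sco.
n = 20000
phase = [random.random() for _ in range(n)]                       # orbital phase
orb = [1.0 + 0.3 * math.sin(2 * math.pi * p) for p in phase]      # orbital term
pulse = [0.4 * max(0.0, math.sin(2 * math.pi * 30 * p)) for p in phase]
flux = [o + s for o, s in zip(orb, pulse)]

# Estimate the orbital waveform from the faintest 5% of points per phase bin.
nbins = 20
est = []
for b in range(nbins):
    vals = sorted(f for p, f in zip(phase, flux)
                  if b / nbins <= p < (b + 1) / nbins)
    floor = vals[: max(1, len(vals) // 20)]      # faintest 5% of the bin
    est.append(sum(floor) / len(floor))

# Compare the recovered floor against the injected modulation at bin centres.
err = max(abs(e - (1.0 + 0.3 * math.sin(2 * math.pi * (b + 0.5) / nbins)))
          for b, e in enumerate(est))
print(err)   # small compared with the 0.4 pulse amplitude
```

In the real analysis these floor points are then fit with a Fourier series to give a smooth orbital-modulation function.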
The pulse amplitude displays a strong orbital-phase dependence, peaking at orbital phase $\sim$0.25, reaching minimum amplitude near phase $\sim$0.6, and rebounding around phase 0.75. \begin{figure*} \includegraphics[width=\textwidth]{model.pdf} \caption{{\bf Top:} An example of our data covering nearly half a binary orbit (points connected by a gray line). The thick line shows the light curve model combining a double-peaked beat pulse with a double-peaked spin pulse. {\bf Bottom:} The three components used to build the model are an orbital modulation, the beat pulse, and the spin pulse. The spin and beat components each have two harmonic terms. The superposition of the spin and beat pulses causes the combined pulse profile to vary across the orbital period. This simple model accounts for the pulse amplitude and phase variations over the orbit, but tends to underestimate the heights of the peaks over the brightest portion of the light curve. } \label{model} \end{figure*} To more closely investigate the changes in the pulse amplitude and morphology, we split the data into ten equally-sized, non-overlapping orbital bins and phased each bin to the beat period. The resulting plots (Fig.~\ref{beat_bins}) show that the beat pulse has two unequal maxima per cycle, as has been noted by \citet{marsh16}. Our beat phaseplots show that the shape, amplitude, and phase of both maxima vary as a function of orbital phase. At orbital phase 0.0, the major pulse is broad and symmetric, but between orbital phases 0.1-0.3, its shape changes, with the rise to beat maximum becoming longer than the decline. This is also the orbital phase when the major pulse has the largest amplitude. Around orbital phase 0.35, the major peak is again symmetric but soon skews in the other direction. Eventually, the major peak nearly disappears around phase 0.55. For the second half of the orbit, the major pulse remains symmetrical, but is weaker than in the first half of the orbit. 
The minor pulse reaches its largest amplitude between orbital phases 0.3 and 0.4, significantly later than phase 0.25 for the major pulse. The beat pulse undergoes a significant phase shift, depending on the orbital phase. This is most easily seen in a heatmap of the pulse brightness versus both the orbital and beat phase (see Fig.~\ref{drifting_beat}). This figure clearly shows that between orbital phases 0.1 and 0.5, the major peak shifts by 10\%\ in beat phase. The shift in the beat pulse over the second half of the orbit is smaller than in the first half, but the pulse is generally seen to be broader and fainter. \citet{takata} reported similar findings from their analysis of 39~ks of data obtained on 2016 September 19 with the XMM-Newton satellite's Optical/UV Monitor Telescope, and our results imply that this behavior is stable on timescales of years. The amplitude and phase shifts of the beat pulse are consistent with the addition of a second periodic signal with a slightly different frequency \citep[e.g., as observed in FO Aqr;][]{om89}. For example, if the spin pulse were to be isolated and plotted in the heat map in Fig.~\ref{drifting_beat}, it would run diagonally since the figure phases the data to the beat period. To explore this possibility, we modeled the full light curve as the superposition of three periodic signals: the spin, beat, and orbital periods (similar to the light curve model for FO~Aqr in its low state; \citealt{littlefield16}). We found the best-fit trigonometric function at each of those frequencies by a simple least-squares fit, with each term consisting of three harmonics. We also attempted to fit additional frequencies detected in the power spectrum, but adding these terms did not significantly improve the quality of the fit. The results of this fit (Fig.~\ref{model}) reveal that the beat and spin models are both double-peaked, with the beat pulse having two unequal maxima. The maxima of the spin pulse, by contrast, are roughly equal. 
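The interference picture invoked here can be illustrated with a toy model (illustrative amplitudes only, not fitted values): two sinusoids at the spin and beat frequencies, separated by exactly the orbital frequency, produce a combined pulse amplitude that waxes and wanes once per orbit.

```python
import math

nu_spin = 0.008538220                       # Hz
nu_orb = 1.0 / (0.14853528 * 86400.0)       # Hz
nu_beat = nu_spin - nu_orb
A_beat, A_spin = 1.0, 0.5   # illustrative: spin pulse ~half the beat amplitude

def envelope(orb_phase):
    """Peak-to-peak amplitude of the summed signal in a ~300-s window
    (a few pulse cycles) centred on the given orbital phase."""
    t0 = orb_phase / nu_orb
    ts = [t0 + 0.5 * k for k in range(600)]
    s = [A_beat * math.cos(2 * math.pi * nu_beat * t)
         + A_spin * math.cos(2 * math.pi * nu_spin * t) for t in ts]
    return max(s) - min(s)

constructive = envelope(0.0)   # spin and beat pulses in phase
destructive = envelope(0.5)    # half an orbit later, out of phase
print(constructive, destructive)
```

The constructive window shows a peak-to-peak swing near $2(A_{beat}+A_{spin})$, the destructive window near $2(A_{beat}-A_{spin})$, qualitatively reproducing the orbital-phase dependence of the pulse amplitude.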
Our model predicts that the amplitude of the spin pulse is about 50\%\ of the amplitude of the beat pulse, while the minor pulses are comparable in brightness. However, one limitation of our model is that it underestimates the amplitude of the highest-amplitude beat pulses. This might be a consequence of the fact that the model does not account for the changing visibility of the secondary's inner hemisphere across the orbital period. As a test of this model, we computed the O$-$C for the blended spin-beat pulse, and as shown in Fig.~\ref{O-C}, it offers a reasonably accurate prediction of the actual O$-$C. The simulated O$-$C includes a correction for light-travel delays caused by the secondary's orbital motion. The light-travel delay for pulsations originating from material orbiting in the plane of the binary is given by \begin{equation} \Delta t = -\frac{d}{c}\sin(i)\cos(2\pi[\phi_{orb}-\phi_0]),\label{light-travel}\end{equation} where $d$ is the distance of the emission from the binary center of mass, $i$ is the orbital inclination, $c$ is the speed of light, and $\phi_0$ is the orbital phase of inferior conjunction for the emitting material. Given the requirement that the emission originate on the donor \citep{marsh16}, the smallest possible light-travel delay would occur for emission arising at the first Lagrangian point (L$_1$). If we adopt M$_{1}$ = 0.8 M$_{\odot}$ and M$_{2}$ = 0.3 M$_{\odot}$, as did \citet{marsh16}, and assume an orbital inclination of $i = 60^{\circ}$, then the semi-amplitude of the light-travel delay would be 0.6~s. Although the effect of light-travel delays is relatively minor at our $\sim$5-s cadence, a higher time resolution might be able to discern light-travel delays from emission from different regions of the secondary (\textit{i.e.}, across a range of values of $d$ and $\phi_{0}$). 
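Eq.~\ref{light-travel} is straightforward to evaluate. The sketch below uses a hypothetical emission distance $d \approx 2\times10^{8}$~m (chosen for illustration; the L$_1$ distance is not derived here) at $i = 60^{\circ}$, which gives a semi-amplitude of the same order as the $\sim$0.6-s value quoted above:

```python
import math

C = 2.998e8   # speed of light, m/s

def light_travel_delay(phi_orb, d, inc, phi0=0.0):
    """Light-travel delay (s) per the equation in the text, for emission at
    distance d (m) from the binary centre of mass, inclination inc (rad)."""
    return -(d / C) * math.sin(inc) * math.cos(2.0 * math.pi * (phi_orb - phi0))

# Hypothetical emission distance, chosen for illustration only.
d, inc = 2.1e8, math.radians(60.0)
amp = max(abs(light_travel_delay(p / 100.0, d, inc)) for p in range(100))
print(amp)   # semi-amplitude ~0.6 s, comparable to the L1 estimate in the text
```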
Although there might be a light-travel delay associated with the spin pulse, the region in which the optical spin pulse is generated is unknown, making it impossible to calculate its light-travel delay. \citet{buckley17} provide evidence that it might originate in a ``striped wind'' outside the light cylinder of the magnetosphere, which is almost an order of magnitude larger than the binary orbital separation. \section{Spin-down ephemeris}\label{sec:spin_down} \subsection{Measuring the spin-down} \citet{marsh16} detected a significant spin-frequency derivative of $\dot{\nu} = -(2.86\pm0.36)\times10^{-17}$ Hz s$^{-1}$, implying a spin-down of sufficient magnitude to power the optical pulsations. However, \citet{pb18} found that the \citet{marsh16} spin-down ephemeris did not accurately predict the frequencies observed in their power spectra and concluded their photometry was consistent with a constant spin frequency of 0.008538220(3)~Hz, where the number in parentheses is the uncertainty on the final digit. While \citet{pb18} showed that the spin-down ephemeris from \citet{marsh16} was inaccurate, they noted that their results could still be reconciled with a nonzero $\dot{\nu}$ and that a longer baseline of observations was necessary to investigate this possibility. Our results in Sec.~\ref{sec:analysis} show that the beat pulse is more readily measured than the lower-amplitude spin pulse, so we use O$-$C measurements of the beat pulse to search for a change in the spin period. Assuming that any change in the orbital period is small over the baseline of observations, the derivative of the beat frequency will be a direct measure of the derivative of the spin frequency. We measured 1,077 beat-pulse timings\footnote{These timings are available as an online table, and in Table~\ref{pulse_timings}, we provide a sample of them to illustrate the format of the data.} from our dataset by fitting a Gaussian to each well-observed beat pulse. 
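A pulse timing of this kind can be obtained by fitting a Gaussian plus a constant baseline to each pulse. The following is a minimal stand-in for such a fit (the paper does not specify its fitting machinery; here the width is held fixed and the centre is grid-searched, with the baseline and amplitude solved linearly at each trial centre, on synthetic data):

```python
import math, random

random.seed(0)
# Synthetic beat pulse: Gaussian of known centre on a flat baseline plus noise.
true_t0, width = 40.0, 12.0           # seconds; width held fixed (an assumption)
ts = [2.0 * i for i in range(60)]     # 2-s cadence, as for the SLKT exposures
flux = [1.0 + 0.8 * math.exp(-0.5 * ((t - true_t0) / width) ** 2)
        + random.gauss(0.0, 0.02) for t in ts]

def fit_centre(ts, flux, width):
    """Grid-search the Gaussian centre; at each trial centre solve the
    2x2 linear least squares for the baseline and amplitude."""
    best_chi2, best_t0 = float("inf"), None
    for k in range(400):
        t0 = min(ts) + k * (max(ts) - min(ts)) / 399.0
        g = [math.exp(-0.5 * ((t - t0) / width) ** 2) for t in ts]
        n, sg = len(ts), sum(g)
        sgg = sum(x * x for x in g)
        sf = sum(flux)
        sgf = sum(x * y for x, y in zip(g, flux))
        det = n * sgg - sg * sg
        if abs(det) < 1e-12:
            continue
        base = (sf * sgg - sg * sgf) / det
        amp = (n * sgf - sg * sf) / det
        chi2 = sum((y - base - amp * x) ** 2 for x, y in zip(g, flux))
        if chi2 < best_chi2:
            best_chi2, best_t0 = chi2, t0
    return best_t0

t0_fit = fit_centre(ts, flux, width)
print(t0_fit)   # recovers a centre close to true_t0 = 40.0 s
```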
The timings, which span two years and three observing seasons, have a sufficiently long baseline to search for $\dot{\nu}$. We applied a correction to all pulse timings to compensate for the orbital-phase dependence of their arrival times, using the empirical fit from Fig.~\ref{O-C}. We then tested the \citet{pb18} linear ephemeris by computing an O$-$C diagram, using the beat period implied by their spin period. To improve the signal-to-noise ratio, we averaged the O$-$C residuals for each night and used the standard error of their mean as the 1$\sigma$ uncertainty for each night. The residuals in this plot (Fig.~\ref{O-C_PB18}) show a rising trend, suggesting that the beat period inferred from \citet{pb18} is not a good match to our data. Further, the O$-$C measurements show significant curvature, consistent with the presence of a period derivative. \begin{table} \centering \caption{Beat-pulse timings. The full table is available online as a machine-readable table.} \label{pulse_timings} \begin{tabular}{cccc} \hline Epoch\tablenotemark{a} & T$_{max}$ [BJD]\tablenotemark{b} & T$_{max,corr}$ [BJD]\tablenotemark{c} & $\pm$ [d] \\ \hline 0 & 2457941.668881 & 2457941.668860 & 0.000008 \\ 1 & 2457941.670226 & 2457941.670210 & 0.000009 \\ 2 & 2457941.671584 & 2457941.671572 & 0.000011 \\ 3 & 2457941.672936 & 2457941.672930 & 0.000008 \\ 4 & 2457941.674330 & 2457941.674329 & 0.000010 \\ 5 & 2457941.675686 & 2457941.675691 & 0.000011 \\ 6 & 2457941.677076 & 2457941.677088 & 0.000008 \\ 7 & 2457941.678407 & 2457941.678427 & 0.000010 \\ 8 & 2457941.679758 & 2457941.679786 & 0.000010 \\ 9 & 2457941.681116 & 2457941.681153 & 0.000012 \\ \hline \end{tabular} \raggedright \tablenotetext{a}{Relative to Eq.~\ref{ephem}.} \tablenotetext{b}{Raw pulse timings, uncorrected for orbital-phase dependence of arrival times.} \tablenotetext{c}{Pulse timings corrected for orbital phase.} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{O-C_PB18.pdf} 
\caption{An O$-$C diagram of the SLKT beat-pulse timings using the \citet{pb18} beat period. Each point is the mean residual from a given observing run, and each errorbar is the standard error of that mean. A quadratic fit to the residuals yields a noticeably improved fit to the data compared to a simple linear fit, indicating that the measured O$-$C values are the result of a period derivative and not simply an inaccurate period.} \label{O-C_PB18} \end{figure} We employed two independent fitting procedures---a bootstrap fit and an affine-invariant Markov Chain Monte Carlo algorithm \citep{emcee}---to generate linear and quadratic ephemerides for the pulse maxima. We inspected the residuals from the linear and quadratic ephemerides (Fig.~\ref{ephemeris_OC}) to determine which best described the data. While the measurements for each season systematically deviated from the linear fit, this trend vanished with the inclusion of a quadratic term, and the $\chi^{2}_{red}$ statistic dropped from 13.1 to 2.6. Based on the quadratic fit, we calculate a beat ephemeris of \begin{multline} T_{max}[BJD] = 4.91(31)\times10^{-16}E^2 +\\ 0.0013680458481(46)E + \\2457941.6688507(36). \label{ephem} \end{multline} The quadratic coefficient is equivalent to $\frac{1}{2}\bar{P}_{beat}\dot{P}_{beat},$ where $\bar{P}_{beat}$ is the average beat period, yielding a dimensionless period derivative of $\dot{P}_{beat} = 7.18(45)\times10^{-13}.$ To convert this to a frequency derivative, we start with the definition $\nu = P^{-1}$ and differentiate with respect to $P$, obtaining $d\nu = -P^{-2}\,dP.$ Dividing by $dt$ yields \begin{equation} \dot{\nu} = -\frac{\dot{P}}{P^{2}}.\label{nu-dot}\end{equation} Thus, our $\dot{P}_{beat}$ is equivalent to $\dot{\nu}_{beat} = -5.14(32) \times 10^{-17}$ Hz s$^{-1}$. 
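The chain of conversions from the quadratic ephemeris coefficient to $\dot{\nu}_{beat}$ can be reproduced directly from the numbers in Eq.~\ref{ephem}:

```python
# Reproduce the period- and frequency-derivative values quoted in the text.
c2 = 4.91e-16                 # quadratic coefficient of the ephemeris, d/cycle^2
P_beat_d = 0.0013680458481    # linear coefficient: mean beat period, days

# c2 = (1/2) * P_beat * Pdot  =>  dimensionless period derivative
P_dot = 2.0 * c2 / P_beat_d

# nu = 1/P  =>  nu_dot = -P_dot / P^2, with P expressed in seconds
P_beat_s = P_beat_d * 86400.0
nu_dot = -P_dot / P_beat_s ** 2
print(P_dot, nu_dot)          # ~7.18e-13 and ~-5.14e-17 Hz/s
```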
\begin{figure} \includegraphics[width=\columnwidth]{ephemeris_OC.pdf} \caption{A comparison of the residuals from the best-fit linear and quadratic ephemerides to beat-pulse timings from the SLKT (red markers) and AAVSO data (blue markers). Only the SLKT data were used to generate the ephemerides; the AAVSO data are shown as an independent test of both fits. Each SLKT point gives the mean and standard error of the residuals from one observing run. As described in the text, a constant offset of 0.7~s was added to each AAVSO measurement to compensate for uncorrected shutter lag.} \label{ephemeris_OC} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{aavso.pdf} \caption{Top: a representative 1.2-hr section from a 5.7-hr AAVSO light curve whose image-to-image cadence was 24~s. Individual beat pulses are insufficiently resolved at this cadence for standard O$-$C analysis. The `CV' bandpass is unfiltered with a V zeropoint. Bottom: a phase plot of the beat pulse using the full 5.7-hr light curve. The waveform of the beat pulse is sufficiently well-sampled that its phase of maximum light can be measured. The red markers are phase bins.} \label{fig:aavso} \end{figure} \subsection{Testing the spin-down with AAVSO photometry} Fig.~\ref{ephemeris_OC} includes O$-$C values from beat pulses in AAVSO photometry, but in order to provide an independent test of the spin-down ephemeris, these values were not included in the calculation of Eq.~\ref{ephem}. Because the cadences of these time series were too slow for O$-$C analysis of individual beat maxima, we took each AAVSO light curve and phased it to the beat period using our linear and quadratic ephemerides. The resulting beat-phase plots, an example of which is shown in Fig.~\ref{fig:aavso}, can be used to measure the average phase of the beat-pulse maxima provided that enough beat cycles were observed. 
We filtered the AAVSO photometry to exclude any time series with a cadence slower than 35~s or a duration shorter than 1~hr, and we applied a correction for the orbital-phase dependence of the pulse arrival times. We further excluded any time series afflicted by aliasing between the beat period and the observing cadence. With the benefit of the extended baseline of observations, it is immediately obvious in Fig.~\ref{ephemeris_OC} that the linear ephemeris leads to strong curvature in the residuals. By contrast, the quadratic residuals do not show any systematic trend. No shutter-lag correction was applied to the AAVSO data, and the raw AAVSO quadratic residuals showed an offset of -0.7~s with respect to contemporaneous SLKT residuals. An uncorrected shutter lag will cause timestamps to be earlier than the data that they describe, imparting a small, negative O$-$C to the pulse timings. We therefore attribute the constant, negative offset of the AAVSO residuals to uncorrected shutter lag, and in Fig.~\ref{ephemeris_OC}, we add an offset of 0.7~s to each AAVSO residual to compensate for this effect. Regardless of the cause of this offset, the lack of a systematic trend in the AAVSO quadratic residuals supports our measurement of the spin-down. \subsection{The nature of $\dot{\nu}_{beat}$} \begin{figure} \centering \includegraphics[width=\columnwidth]{PDM.pdf} \caption{PDM analysis showing the effect of an increase in the orbital frequency across 13 years of CRTS and ASAS-SN photometry (thick black line). For each value of $\dot{\nu}_{orb}$, we calculated a quadratic orbital ephemeris, computed the PDM statistic of the resulting phase plot, and normalized it to the PDM statistic of the linear-ephemeris phase plot. A value greater than 1 (dotted horizontal line) indicates that adding $\dot{\nu}_{orb}$ increases the scatter in the phase plot, while values less than 1 indicate that $\dot{\nu}_{orb}$ reduces the scatter. 
The colored lines show the results of simulations in which an artificial $\dot{\nu}_{orb}$ was injected into a model of the orbital light curve. For $\dot{\nu}_{orb} \geq 2 \times 10^{-18}$ Hz s$^{-1}$, our technique successfully recovers the simulated $\dot{\nu}_{orb}$ at the global minimum of a given curve. We therefore conclude that $\dot{\nu}_{orb} \lesssim 2 \times 10^{-18}$ Hz s$^{-1}$ and is too small to account for the observed $\dot{\nu}_{beat}$ (dashed vertical line). } \label{orbital_nu-dot} \end{figure} Because the beat frequency is defined as $\nu_{beat} = \nu_{spin} - \nu_{orb}$, its time derivative is $\dot{\nu}_{beat} = \dot{\nu}_{spin} - \dot{\nu}_{orb}$, meaning that the observed $\dot{\nu}_{beat}$ could be caused either by a decrease of the spin frequency or an increase in the orbital frequency. If the orbital period were changing rapidly enough to produce the observed $\dot{\nu}_{beat}$, there would be detectable consequences in long-term photometry. To explore this possibility, we used phase-dispersion minimization \citep[PDM;][]{PDM} to determine whether an increasing orbital frequency could reduce the scatter in orbital phase plots using 13 years of survey photometry from the Catalina Real-Time Transient Survey \citep[CRTS;][]{drake} and All-Sky Automated Survey for Supernovae \citep[ASAS-SN;][]{shappee, kochanek}. For a range of $\dot{\nu}_{orb}$ between $1 \times 10^{-19}$ Hz s$^{-1}$ and $1 \times 10^{-16}$ Hz s$^{-1}$, we phased the survey photometry using a quadratic orbital ephemeris in which we assumed an initial orbital period of $P_0 = 0.14853528$~d \citep{marsh16} at an epoch of JD = 2455000.5 \citep{pb18}.\footnote{This is the epoch of the CRTS photometry used by \citet{marsh16} to measure the orbital period. It differs from the epoch in the orbital ephemeris in \citet{marsh16}, which reported a spectroscopically determined epoch and assumed---correctly---that the orbital period did not change appreciably in the intervening 5~years since the CRTS epoch. 
Phase plots using the CRTS epoch will have a uniform horizontal offset with respect to the \citet{marsh16} epoch, but this does not impact our PDM analysis.} For each quadratic phase plot, we computed the PDM statistic and normalized it to the PDM statistic for the phase plot constructed with the linear ephemeris from \citet{marsh16}. The results of this analysis are shown in Fig.~\ref{orbital_nu-dot}. For $\dot{\nu}_{orb} > 2 \times 10^{-18}$ Hz s$^{-1}$, the phase plots show obviously increased scatter with respect to the linear ephemeris; below that value, the effect of $\dot{\nu}_{orb}$ becomes negligible, so we constrain $\dot{\nu}_{orb} \lesssim 2 \times 10^{-18}$ Hz s$^{-1}$. To verify this constraint, we created a simulated orbital light curve of AR Sco, injected an artificial $\dot{\nu}_{orb}$, and matched the sampling of the simulated data to the actual observations. As Fig.~\ref{orbital_nu-dot} shows, our algorithm successfully recovered values of $\dot{\nu}_{orb}$ in excess of $2 \times 10^{-18}$ Hz s$^{-1}$, but below that threshold, it could not discern the simulated $\dot{\nu}_{orb}$, bolstering our constraint. The non-detection of $\dot{\nu}_{orb}$ is consistent with the measurements of the orbital phase of maximum light from \citet{littlefield17}. An increasing orbital frequency (\textit{i.e.,} $\dot{P}_{orb} < 0$) would induce concave-down curvature in their Fig.~4, but this is not seen. We conclude, therefore, that any $\dot{\nu}_{orb}$ contributes negligibly to $\dot{\nu}_{beat}$, such that $\dot{\nu}_{beat} = \dot{\nu}_{spin}$ to within our measurement uncertainty. Because $\dot{\nu}$ is thus the same for the spin and beat frequencies, we drop the subscript and can directly compare our measurement of $\dot{\nu}$ with its counterpart from \citet{marsh16}. 
Our estimate is larger by a factor of $\sim$1.8, but it still satisfies the constraint from \citet{pb18} that $-2\times10^{-16} \lesssim \dot{\nu} \lesssim 1\times10^{-16}$ Hz s$^{-1}$. \subsection{Reconciling \citet{marsh16} with \citet{pb18}} \begin{figure} \centering \includegraphics[width=\columnwidth]{projected_pbeat.pdf} \caption{Projected beat period using Eq.~\ref{ephem}. The two markers indicate the beat periods reported by \citet{marsh16} and \citet{pb18}. Since \citet{pb18} did not explicitly provide a beat period, we calculated it from their spin period and the \citet{marsh16} orbital period. As indicated in the legend, \citet{marsh16} reported a 90\% confidence interval for their result, while \citet{pb18} provided a 1$\sigma$ interval. Even though these two points were not considered in the fitting procedure for Eq.~\ref{ephem}, the projected beat period is in excellent agreement with both measurements, substantiating our measurement of the WD's spin-down.} \label{fig:beat_period} \end{figure} The disagreement between the \citet{marsh16} spin-down ephemeris and the measured spin frequency in \citet{pb18} provides a stringent test of our spin-down ephemeris. In Fig.~\ref{fig:beat_period}, we plot the extrapolated beat period from Eq.~\ref{ephem} as a function of beat-cycle count, overlaying the beat periods from both \citet{marsh16} and \citet{pb18} at the appropriate epochs.\footnote{\citet{pb18} reported only $\nu_{spin}$, so we used $\nu_{orb}$ from \citet{marsh16} to calculate the corresponding $\nu_{beat}.$ } Even though our beat ephemeris was calculated without regard to either of these two measurements, the projected beat period is in excellent agreement with both. 
Furthermore, the difference between the \citet{pb18} and \citet{marsh16} spin frequencies, when divided by the difference in epochs, yields $\dot{\nu} = -(5.9\pm1.5)\times10^{-17}$ Hz s$^{-1}$, consistent with both our measurement ($\dot{\nu} = -5.14(32) \times 10^{-17}$ Hz s$^{-1}$) and the constraints from \citet{pb18}. This suggests that the inability of the \citet{marsh16} ephemeris to correctly predict the spin and beat frequencies in \citet{pb18} is a consequence of an underestimated $\dot{\nu}$ and can be remedied by using our measurement of the spin-down. Despite the error in their estimate of $\dot{\nu}$, it is remarkable, given the sparse sampling and comparatively low time resolution of the survey photometry available to them, that \citet{marsh16} were able to accurately measure $\nu_{orb}$, $\nu_{beat}$, and $\nu_{spin}$ while also estimating $\dot{\nu}$ to within a factor of 2. \subsection{Spin-down luminosity} Our precise measurement of the frequency decay rate allows us to improve the estimate of the spin power available for conversion into the observed electromagnetic (EM) energy. The spin-down luminosity is given by $L_{\dot{\nu}} =-4\pi^{2}I\nu_{spin}\dot{\nu},$ where $I$ is the WD's moment of inertia \citep{marsh16}. As did \citet{marsh16}, we assume a 0.8~M$_\odot$ WD with a radius of 0.01~R$_\odot$. The mass-radius relation for non-relativistic WD stars means that the moment of inertia scales as $I\propto M^{1/3}$ and is rather insensitive to variations in the assumed mass. In the non-relativistic regime, WDs are predicted to have a density structure like that of a polytrope with an index of 1.5, and we calculate a moment of inertia of $I = 0.25MR^2 = 2\times 10^{43}$ kg$\;$m$^2$. However, the strong magnetic fields and the rapid spin rate of the WD may have an effect on the precise value of the moment of inertia \citep{fs17}. Thus, we find the spin-down power is $3\times 10^{26}$~W. 
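The spin-down power quoted above follows directly from these quantities; a short numerical check, using standard values of the solar mass and radius:

```python
import math

# Reproduce the spin-down power estimate from the quantities in the text.
M_sun, R_sun = 1.989e30, 6.957e8        # kg, m
M, R = 0.8 * M_sun, 0.01 * R_sun        # assumed WD mass and radius
I = 0.25 * M * R ** 2                   # n = 1.5 polytrope moment of inertia

nu_spin = 0.008538220                   # Hz
nu_dot = -5.14e-17                      # Hz/s (this work)
L = -4.0 * math.pi ** 2 * I * nu_spin * nu_dot
print(I, L)                             # ~2e43 kg m^2 and ~3e26 W
```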
The Gaia DR2 parallax \citep{gaia16, gaia18} to AR~Sco ($8.492\pm0.041$~mas) is equivalent to a distance of $117.8\pm0.6$~pc, which is close enough to the \citet{marsh16} value that it is unnecessary to correct their measurement of the average EM power of the pulsations. We find the efficiency of converting the spin energy to detected EM emission to be $\sim$4\%. \section{Conclusion} The complex morphological variations seen in the optical light curve of AR Sco are explained as the superposition of the spin and beat pulses, as well as a slower orbital modulation. As noted in earlier studies, a beat cycle consists of a major and minor pulse. Here, we show that the minor pulse has half the amplitude of the brighter signal after removing contamination from the spin pulse. We find that the spin cycle also consists of two distinct pulses, but their amplitudes are comparable. The major spin pulse is half the amplitude of the major beat pulse and, therefore, about the same amplitude as the minor beat pulse. Over an orbit, the major beat and spin pulses add constructively between phases 0.2 and 0.3, resulting in the highest-amplitude optical variations observed in the system. Half an orbit later, the major beat and minor spin pulses add together, along with the minor beat and major spin pulses. This combination results in lower-amplitude peaks when compared with orbital phases around 0.25. The smallest amplitudes are seen when the beat and spin pulses are out of phase and destructively interfere at orbital phases between 0.5 and 0.6. This model provides a good overall fit to the rapid variations seen in AR~Sco. We also show that this model explains the O$-$C variations seen in the timings of the beat pulses. Our simple model assumes constant beat and spin amplitudes over an orbit, and this does not fully match the largest-amplitude fluctuations observed around orbital phases 0.2 to 0.4. 
A more complete model would take into account the changing viewing angle of the secondary star, which is thought to be the origin of the beat emission. The major beat and spin pulses coincide at orbital phase 0.3, resulting in the highest-amplitude peaks in the AR~Sco light curve. The beat pulse likely comes from near the surface of the red secondary star \citep{marsh16}, while the spin pulse likely originates in the magnetic field of the WD. If we assume that beat maxima occur when one of the WD magnetic poles is pointing toward the secondary, then the geometry of the system suggests that we see the peak spin pulse when the WD magnetic pole is nearly perpendicular to our line of sight. Perhaps most importantly, our results establish that the WD is indeed spinning down \citep[as originally proposed by][]{marsh16}, but the frequency derivative that we measure is almost twice as large as their estimate. Our updated spin-down ephemeris passes two tests: it accurately predicts the evolution of the beat period between the \citet{marsh16} and \citet{pb18} epochs, and it accounts for the pulse-arrival times in AAVSO photometry. It is also consistent with the constraints on the spin-down rate established in \citet{pb18}. Our measurement of the spin-down of the WD confirms the conclusion of \citet{marsh16} that the pulsed EM emission from AR~Sco can be powered by the rotational energy of the WD. \acknowledgments The Sarah L. Krizmanich Telescope is a generous donation by the Krizmanich family to the University of Notre Dame, named in honor of their daughter. We thank David Buckley for showing us several figures from an early draft of the \citet{pb18} manuscript; we avoided using any of this advance information in our paper. We are grateful to our anonymous referee for urging us to measure the spin-down rate of the white dwarf using the SLKT data.
This research uses observations of AR Sco from the AAVSO International Database contributed by GM and FJH, both of whom also participate in the Center for Backyard Astrophysics collaboration \citep{cba}. \software{\\Astropy \citep{astropy}, emcee \citep{emcee}} \facility{AAVSO}
\section{#1} \vspace{-0.01in} } \newcommand{\Subsection}[1]{ \vspace{-0.15in} \subsection{#1} \vspace{-0.015in} } \newcommand{\Subsubsection}[1]{ \vspace{-0.15in} \subsubsection{#1} \vspace{-0.015in} } \begin{document} \pagenumbering{gobble} \title{The Coupled TuFF-BFF Algorithm for Automatic 3D Segmentation of Microglia} \name{Tiffany Ly$^{\dagger}$, Jeremy Thompson$^{\ddagger}$, Tajie Harris$^{\ddagger}$, and Scott T. Acton$^{\dagger}$, \textit{Fellow, IEEE}} \address{$^{\dagger}$C.L. Brown Department of Electrical \& Computer Engineering \\ $^\ddagger$ Center for Brain Immunology and Glia, Department of Neuroscience, University of Virginia\\ Charlottesville, Virginia USA} \maketitle \begin{abstract} We propose an automatic 3D segmentation algorithm for multiphoton microscopy images of microglia. Our method is capable of segmenting tubular and blob-like structures from noisy images. Current segmentation techniques and software fail to capture the fine processes and soma of microglia cells, which are essential for studying the role of microglia in the brain during healthy and diseased states. Our coupled tubularity flow field (TuFF)-blob flow field (BFF) method evolves a level set toward the object boundary using directional tubularity and blobness measures of 3D images. Our method achieved a 20$\%$ performance increase over state-of-the-art segmentation methods on a dataset of 3D images of microglia, even in images with intensity heterogeneity throughout the object. The coupled TuFF-BFF segmentation results also yielded a 40$\%$ improvement in accuracy for the ramification index of the processes, demonstrating the efficacy of our method. \end{abstract} \begin{keywords} microglia, 3D segmentation, level set, active contour \end{keywords} \section{Introduction} \label{sec:intro} Despite the fact that glia occupy some 80$\%$ of the human brain, the automated segmentation of microglia cells remains an open problem.
Recent studies in the field of neuroscience have shown that studying the morphology of microglia in different scenarios may give significant insight into neurological diseases and brain injury. Microglia are the tissue-resident macrophages of the brain parenchyma and have diverse roles in brain development, homeostasis, and in injury and disease \cite{colonna2017microglia, schafer2015microglia}. Pioneering in vivo studies demonstrated that microglia processes are constantly in motion even in the healthy brain and were therefore ascribed a surveillant function \cite{nimmerjahn2005resting, davalos2012fibrinogen}. Microglial morphology and behavior are known to be indicative of the physiologic state of the brain and are likely intricately linked with their functions in the healthy brain \cite{perry2010microglia}. The constant motion of microglial processes is postulated to be important for allowing microglia to sense and respond rapidly to their environment, including monitoring synaptic activity, sensing invading pathogens and dying cells, and responding to injury \cite{wake2009resting, tremblay2010microglial, davalos2012fibrinogen,madry2017microglial}. During brain injury and disease this constant movement is altered: microglia retract their processes and take on a more amoeboid morphology. However, little is known about how the decrease in microglia process movement affects their ability to perform their functions in the brain. Microglia morphology and behavior are complex, and methods for their automatic segmentation and analysis are lacking in the field of image processing. Through multiphoton microscopy imaging, it is apparent that the morphology and movement of microglia differ significantly between the brains of healthy mice and the brains of infected mice. Microglia of infected mice show decreased sampling of tissue, in part due to the reduced ramification of their processes.
With high-throughput imaging, we may be able to build a quantitative model of these differences. However, there are currently no segmentation tools specific to microglia. \begin{figure}[t!] \centering \renewcommand{\tabcolsep}{0.05cm} \setlength{\belowcaptionskip}{-10pt} \begin{adjustbox}{width=\linewidth} { \begin{tabular}{cc} \includegraphics[width=.18\linewidth, height = 0.13\linewidth]{images/orig_maygreat.png}& \includegraphics[width=.18\linewidth, height = 0.13\linewidth]{images/bff_maygreat.png} \end{tabular} } \end{adjustbox} \caption{\small{3D microglia image from multiphoton microscopy. Segmentation of microglia using coupled TuFF-BFF.}} \vspace{-0.5cm} \label{fig:onecell} \end{figure} Nimmerjahn \textit{et al.} manually traced the ends of the processes to obtain a rough estimate of the velocity of length change and drew microglia by hand for other measurements \cite{nimmerjahn2005resting}. This manual approach does not give accurate measurements for the fine processes and is not feasible for high-throughput data. Others quantified microglia size and process movement by thresholding the foreground and background \cite{davalos2005atp}, manually outlining the cell, and manually counting primary branches using ImageJ software (National Institutes of Health) \cite{gyoneva2014systemic}. The most automated segmentation effort for microglia images was reported by Madry \textit{et al.}, in which the Vaa3D software is used to trace microglia \cite{madry2017microglial}. These methods were usually applied in 2D and are not a good fit for images with high intensity inhomogeneity and background noise. As discussed later in Section \ref{sec: exp}, imaging microglia from healthy and infected mice with multiphoton microscopy results in images with varying intensity contrast throughout the cell, which makes it difficult to threshold and separate the object from the background.
In this paper, we propose an automatic method for segmentation of 3D images of microglia. Our method can capture the fine processes and soma in noisy images without prior processing. We compare our method to state-of-the-art segmentation techniques that are generally used for processing biological images. \section{Method: Coupled TuFF-BFF} \label{method} A flow-field technique is an approach to segmentation that uses a vector field to drive the evolution of the segmented region. Coupled TuFF-BFF is an automatic microglia segmentation algorithm that optimally combines the tubularity flow field (TuFF) technique \cite{mukherjee2015tubularity} with a blob flow field (BFF) technique. The TuFF algorithm is specific to neuron dendritic trees because it only searches for tubular structures in an image. The fine processes of microglia do have tubular shapes, but the TuFF algorithm does not account for the microglia soma. Our coupled TuFF-BFF algorithm segments both the processes and the soma while minimizing the overlap of their segmentations. Coupled TuFF-BFF belongs to the family of active contour models that pull a contour, or snake, toward the edges or lines of the object in an image \cite{kass1988snakes, malladi1995shape, li2007active, mansouri2004constraining, ray2002active, goobic2005image, cui2006monte}. The snake is evolved by minimizing an energy functional, $\varepsilon(\phi)$, subject to constraints, until it converges to the object boundary, the zero level set. Here $\phi$ is the level set function, which is positive inside the zero level set and negative outside. \subsection{Tubularity Flow Field algorithm} TuFF \cite{mukherjee2015tubularity} uses the tubular structure of vessel-like objects to evolve a level set toward the object's boundary. The evolution of the contour relies on the tubular vector field of the image \cite{li2007active}, which is obtained from the orthonormal eigenvectors $\textbf{e}_i(\textbf{x})$, where \textbf{x} is the pixel position within the image domain $\Omega$.
The eigenvectors are ordered by increasing magnitude of the eigenvalues, $|\lambda_1| \leq |\lambda_2| \leq |\lambda_3|$; for tubular structures $|\lambda_1|$ is small while $|\lambda_2|, |\lambda_3| \gg 0$. These eigenvalues are obtained by computing the Hessian matrix of the Gaussian-smoothed image. The algorithm uses Frangi's vessel enhancement technique \cite{frangi1998multiscale} to distinguish and enhance tubular structures in an image by means of a multiscale vesselness function defined along the three directions $\textbf{e}_i(\textbf{x})$. The segmentation is achieved by minimizing an energy functional $\varepsilon(\phi)$: \vspace{-0.2cm} \begin{equation} \varepsilon(\phi)=\varepsilon_{reg}(\phi)+\varepsilon_{evolve}(\phi)+\varepsilon_{attr}(\phi) \end{equation} \label{eq:energy} \vspace{-1cm} \begin{equation} \varepsilon_{reg}(\phi)= v_1\int_{\Omega} |\nabla \textit{H}(\phi)|\textit{d}\textbf{x} \end{equation} \label{eq:energyreg} \vspace{-1cm} \begin{equation} \varepsilon_{evolve}(\phi)= -\int_{\Omega}\sum_{i=1}^d \alpha_i(\textbf{x})\langle\textbf{e}_i(\textbf{x}),\textbf{n}(\textbf{x})\rangle^2 \textit{H}(\phi)\textit{d}\textbf{x} \end{equation} \label{eq:energyevolve} \vspace{-0.2cm} where $\varepsilon_{reg}(\phi)$ is the smoothness energy, $\varepsilon_{evolve}(\phi)$ is the curve evolution energy, and $\varepsilon_{attr}(\phi)$ is the attraction energy. The smoothness weight, $v_1$, controls the smoothness of the level set curve. $\varepsilon_{reg}(\phi)$ constrains the length of the zero level set through the gradient of the Heaviside function of $\phi$. The vector $\textbf{n}(\textbf{x})$ is the outward normal to the zero level set of $\phi$, which affects the evolution along the vessel width. $\varepsilon_{attr}(\phi)$ is the attraction energy, which uses the vector field to connect smaller disjoint fragments to larger fragments during the segmentation. The energy functional $\varepsilon(\phi)$ is minimized by iteratively updating $\phi$ using gradient descent \cite{mukherjee2015tubularity}.
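The eigenvalue analysis described above can be sketched as follows (an illustration assuming NumPy/SciPy; the helper \texttt{hessian\_eigenvalues} and the toy tube image are not part of the TuFF implementation). For a bright tube, the eigenvalue of smallest magnitude corresponds to the direction along the tube axis:

```python
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(img, sigma):
    """Eigenvalues of the Gaussian-smoothed Hessian at every voxel,
    sorted by increasing magnitude |l1| <= |l2| <= |l3| as in the text."""
    n = img.ndim
    H = np.empty(img.shape + (n, n))
    for i in range(n):
        for j in range(n):
            order = [0] * n
            order[i] += 1
            order[j] += 1   # mixed second derivative d^2/dx_i dx_j
            H[..., i, j] = ndimage.gaussian_filter(img, sigma, order=order)
    lam = np.linalg.eigvalsh(H)              # ascending by value
    idx = np.argsort(np.abs(lam), axis=-1)   # re-sort by magnitude
    return np.take_along_axis(lam, idx, axis=-1)

# Toy example: a bright tube along z has |l1| << |l2|, |l3| at its axis,
# with l2, l3 negative (intensity maximum in the cross-section).
z, y, x = np.mgrid[0:32, 0:32, 0:32]
tube = np.exp(-((y - 16) ** 2 + (x - 16) ** 2) / 8.0)
l1, l2, l3 = hessian_eigenvalues(tube, sigma=1.5)[16, 16, 16]
print(l1, l2, l3)
```

The blobness case described in the next subsection differs only in that all three eigenvalues have comparably large magnitude.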
\subsection{Coupled TuFF-BFF for reconstruction of microglia} Similar to the tubularity measure, the proposed method uses a blobness vector field to account for the soma of the cell. Since the soma and the processes have different thicknesses, we scale the width of the Gaussian to their sizes, with the width used for the soma much larger than that used for the fine processes. The blobness measure is calculated by again ordering the eigenvalues of the Hessian matrix by increasing magnitude, $|\lambda_1| \leq |\lambda_2| \leq |\lambda_3|$, to detect structures with high-magnitude $\lambda$ in all three orthonormal directions \cite{frangi1998multiscale, antiga2007generalizing}. \begin{figure*}[t!] \centering \renewcommand{\tabcolsep}{0.05cm} \setlength{\belowcaptionskip}{-10pt} \begin{adjustbox}{width=.95\textwidth} { \begin{tabular}{ccccc} {Original} & {Ground truth} & {Coupled TuFF-BFF} & {L2S\cite{mukherjee2015region}} & {Chan-Vese\cite{chan2001active}} \\ \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/orig_maygreat.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/gt_maygreat.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/bff_maygreat.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/l2S_maygreat.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/cv_maygreat.png} \\ \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/orig_may_100_400.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/gt_may100_400_700_1000.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/bff_may100_400_700_1000.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/l2s_may100_400_700_1000.png} & \includegraphics[width=.16\linewidth, height =
0.12\linewidth, scale=0.1]{images/cv_may_100_400.png} \\ \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/orig_may350_650.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/gt_may350_650.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/bff_may350_650.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/l2s_may350_650.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/cv_may350_650.png} \\ \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/orig_may450_750.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/gt_may450_750.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/bff_may450_750.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/l2s_may450_750.png} & \includegraphics[width=.16\linewidth, height = 0.12\linewidth, scale=0.1]{images/cv_may450_750.png} \end{tabular} } \end{adjustbox} \caption{\small{Segmentation results of 3D microglia images.}} \vspace{-0.5cm} \label{fig:results} \end{figure*} After computing the tubularity and blobness information, the initial level set is obtained from the 3D stack. The level set contours $\phi_1$ (to capture the processes) and $\phi_2$ (to capture the soma) are separately initialized by Otsu thresholding \cite{otsu1979threshold} the vessel- and blob-enhanced versions of the image.
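The Otsu initialization step above picks the threshold that maximizes the between-class variance of the intensity histogram; a minimal NumPy sketch (the helper and the bimodal toy data are illustrative, not the authors' code):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Threshold maximizing the between-class variance (Otsu, 1979)."""
    hist, edges = np.histogram(np.ravel(img), bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)              # class-0 probability
    mu = np.cumsum(p * centers)       # class-0 cumulative mean
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return centers[np.nanargmax(sigma_b)]

# Bimodal toy data: the threshold should fall between the two modes.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.05, 5000),
                      rng.normal(0.8, 0.05, 5000)])
t = otsu_threshold(img)
print(t)
```

In the pipeline described above, this thresholding is applied once to the vessel-enhanced image and once to the blob-enhanced image, giving independent initializations for $\phi_1$ and $\phi_2$.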
The processes and soma of microglia are simultaneously segmented by evolving their level sets and minimizing their respective energy functionals, $\varepsilon_{TuFF}(\phi_1)$ and $\varepsilon_{BFF}(\phi_2)$: \vspace{-0.2cm} \begin{equation} \varepsilon_{TuFF}(\phi_1)=\varepsilon_{reg}(\phi_1)+\varepsilon_{evolve}(\phi_1)+\varepsilon_{attr}(\phi_1)+\varepsilon_{repel}(\phi_2) \end{equation} \label{eq:energytuff} \vspace{-1cm} \begin{equation} \varepsilon_{BFF}(\phi_2)=\varepsilon_{reg}(\phi_2)+\varepsilon_{evolve}(\phi_2)+\varepsilon_{attr}(\phi_2)+\varepsilon_{repel}(\phi_1) \end{equation} \label{eq:energybff} \vspace{-1cm} \begin{equation} \varepsilon_{repel}(\phi_{i})=\int_\Omega \textit{H}(\phi_{1})\textit{H}(\phi_{2})\,\textit{d}\textbf{x} \end{equation} \label{eq:repel} Although the vesselness and blobness segmentations are separate, they are linked through the $\varepsilon_{repel}(\phi)$ term, which uses the result of both level sets. $\varepsilon_{repel}(\phi_{i})$ penalizes the regions of overlap between the two level sets. The level set functions $\phi$ are iteratively updated by gradient descent on $\varepsilon$, i.e. $\frac{\partial \phi}{\partial t} = -\frac{\partial \varepsilon}{\partial \phi}$, where \textit{t} denotes the iteration \cite{mukherjee2015tubularity}. We call the resulting update \textit{F}, the velocity of the level set implementation: \begin{equation} F = \frac{\partial \phi_{reg}}{\partial t}+ \alpha \frac{\partial \phi_{evolve}}{\partial t} + v_1 \frac{\partial \phi_{attr}}{\partial t} + r\,\frac{\partial \phi_{repel}}{\partial t} \end{equation}\label{eq:bffvel} The regions of overlap between the two level sets determine $r\,\frac{\partial \phi_{repel}}{\partial t}$, where the repel weight $r=0$ when there is no overlap. This term changes the velocity \textit{F} within the overlapping regions so that each level set is repelled from the other.
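The effect of the repel term can be illustrated numerically. The sketch below (an assumed arctan-smoothed Heaviside and toy circular level sets, not the authors' implementation) shows that the gradient-descent contribution of $\varepsilon_{repel}$ is concentrated where the two regions overlap:

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Smoothed Heaviside commonly used in level-set methods."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def repel_velocity(phi_self, phi_other, r=1.0, eps=1.0):
    """Gradient-descent contribution of e_repel = int H(phi1) H(phi2) dx
    with respect to phi_self: dphi/dt = -r * delta(phi_self) * H(phi_other)."""
    delta = (eps / np.pi) / (phi_self**2 + eps**2)   # derivative of H
    return -r * delta * heaviside(phi_other, eps)

# Two overlapping discs on a grid (signed distance-like level sets).
y, x = np.mgrid[0:64, 0:64]
phi1 = 12.0 - np.hypot(y - 32, x - 26)   # positive inside disc 1
phi2 = 12.0 - np.hypot(y - 32, x - 38)   # positive inside disc 2
v = repel_velocity(phi1, phi2)
# The repulsion is strongest near the zero level set of phi1 inside disc 2
# and vanishes far from the overlap region.
print(v[32, 38], v[5, 5])
```

Because the smoothed delta function localizes the update to the interface, only the portion of each contour lying inside the other region is pushed back, which is what keeps the process and soma segmentations disjoint.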
Thus, the repel-force energy functional $\varepsilon_{repel}(\phi)$ minimizes the overlap between the segmentations of the processes and soma to attain a joint segmentation. \vspace{-.3cm} \section{Experimental Results and Analysis} \label{sec: exp} The dataset consists of 3D images of microglia imaged from the brains of healthy mice using multiphoton microscopy. \begin{figure} \centering \includegraphics[width=.8\linewidth]{images/Dice.png} \caption{\small{Dice index of the segmentation using Coupled TuFF-BFF, L2S\cite{mukherjee2015region}, and Chan-Vese\cite{chan2001active}.}} \label{fig:dice} \vspace{-0.5cm} \end{figure} \vspace{-.5cm} \subsection{Imaging and fluorescence technique} To label microglia in the mouse brain, we used mice with an inducible Cre recombinase under the control of the CX3CR1 promoter crossed to the Ai6 fluorescent reporter mouse (Jackson Laboratories, Bar Harbor, ME) to generate CX3CR1creERT2/+ X Ai6ZsGreen mice \cite{yona2013fate, madisen2010robust}. At post-natal day 23 (P23), mice were given 10~$\mu$L/g body weight of a 20~mg/mL Tamoxifen (Sigma) solution in corn oil to induce recombination of the floxed stop codon, leading to ZsGreen expression in microglia. All procedures adhered to the guidelines of the Institutional Animal Care and Use Committee (ACUC) at the University of Virginia. Microglia of adult mice (7-10 weeks old) were imaged using a Leica TCS SP8 multiphoton microscopy system equipped with a Coherent Chameleon Ti:Sapphire laser and a 25x 0.95 NA immersion lens. ZsGreen was excited at a wavelength of 880 nm. \vspace{-.3cm} \subsection{Dataset} The 3D movies of microglia were imaged over 20 minutes, with z-stacks taken at one-minute intervals, containing single or multiple microglia per field of view.
Some of the images were cropped from a larger field of view containing about 10 different cells, and two images were taken from a zoomed-in view of one individual cell. The pixel widths ranged from 0.01~$\mu$m to 0.2~$\mu$m in both the horizontal and vertical directions. In the 3D images, there is variation in intensity contrast throughout the cell, non-structural noise, and fluorescence bleeding through the z-stack due to the lengthy imaging procedure, which makes the data difficult to visualize and process. The images were pre-processed using histogram equalization, which increased the intensity throughout the cell but also amplified noise in the background. The parameter for the width of the Gaussian filter depends on the imaging depth. For our experiments we used $\sigma = 0.5$ to $1$ to find the processes and $\sigma = 4$ to $7$ to attain the soma structure. The smoothness parameter was set from $v_1 = 0.02$ to $0.09$ to attain the best segmentation results. All experiments required fewer than 50 iterations. \begin{figure} \centering \includegraphics[width=.9\linewidth]{images/dice_CH.png} \caption{\small{Dice index of surveyed area from the segmentation using Coupled TuFF-BFF, L2S\cite{mukherjee2015region}, and Chan-Vese\cite{chan2001active}.}} \label{fig:diceCH} \vspace{-.5cm} \end{figure} \subsection{Performance evaluation} In our experiments, we compare the coupled TuFF-BFF microglia segmentation results with those given by L2S \cite{mukherjee2015region} and the Chan-Vese segmentation method \cite{chan2001active}. The ground truth in 3D was attained by manually tracing the object slice by slice through the z-stack. It must be noted that this was done by eye and may contain some error. Figure \ref{fig:results} shows a visual comparison of the segmentation results for our dataset. Our result, shown in the third column, captures both the soma and the processes.
Figure \ref{fig:dice} shows the Dice coefficient comparison of each segmentation method against the ground truth. Since the soma is much larger than the fine processes of the microglia, the processes have less volumetric impact on the similarity score. As explained in Section \ref{sec:intro}, segmenting the processes is important for quantifying their extension from the soma and the volume of surveillance. We therefore use the Dice coefficient to quantitatively compare the ramification by taking the convex hull of the resulting segmentation. The Dice coefficient is a similarity measure computed as $\frac{2\,|A \cap B|}{|A| + |B|}$, where $A$ is the ground truth and $B$ is the compared segmentation. From Figure \ref{fig:diceCH}, the average Dice score for coupled TuFF-BFF was 0.77, compared to 0.53 for L2S \cite{mukherjee2015region} and 0.58 for Chan-Vese \cite{chan2001active}. It must be noted that L2S required manual user initialization for each 2D image in the stack. While the Chan-Vese method has automatic seed selection, our coupled TuFF-BFF method was the only true 3D segmentation algorithm among the three. L2S could not consistently capture the processes in their entirety due to the intensity inhomogeneity throughout the object and background noise. The Chan-Vese segmentation could capture the extensions of the processes but did not cope well with noise and produced false positives in the reconstruction. Since our method uses the tubular and blob information of the object to separate foreground and background, the segmentation only evolved within the object boundaries. From the segmentation of microglia in 3D multiphoton images, we obtained a quantification of the ramification of the microglia processes using the index provided by Madry \textit{et al.} \cite{madry2017microglial}. The ramification index in Table 1 quantifies the extension of the processes from the soma. A ramification index of 1 corresponds to the soma with no ramification, and a larger index denotes greater ramification.
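The Dice computation described above reduces to a few lines (a sketch with illustrative toy masks):

```python
import numpy as np

def dice(a, b):
    """Dice similarity 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt = np.zeros((4, 4), bool); gt[1:3, 1:3] = True     # 4 voxels
seg = np.zeros((4, 4), bool); seg[1:3, 1:4] = True   # 6 voxels, 4 overlap
print(dice(gt, seg))  # 2*4/(4+6) = 0.8
```

For the convex-hull comparison of ramification, the same function is applied to the hulls of the two masks rather than to the masks themselves.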
We compare the ramification index attained from the segmentation result of each method with that attained from the ground truth. The mean absolute error (MAE) for coupled TuFF-BFF was 1.49, compared with 3.92 and 3.78 for L2S \cite{mukherjee2015region} and Chan-Vese \cite{chan2001active}, respectively. \begin{center} \textbf{Table 1} \ \ Ramification Index\\ \begin{tabular}{ccccc} \hline {No.} & Ground truth & TuFF-BFF & L2S & Chan-Vese\\ \hline \#1 & 8.88 & 7.88 & 4.0 & 7.46 \\ \#2 & 7.69 & 10.14 & 2.1 & 9.89 \\ \#3 & 6.54 & 5.98 & 4.34 & 8.76\\ \#4 & 9.02 & 13.6 & 5.48 & 12.4\\ \#5 & 6.44 & 7.22 & 5.26 & 18.3\\ \#6 & 8.60 & 8.74 & 3.57 & 11.0\\ \#7 & 9.09 & 7.70 & 4.78 & 12.86\\ \#8 & 8.88 & 7.88 & 4.0 & 7.46\\ \#9 & 11.18 & 12.7 & 7.56 & 16.48\\ \hline {MAE:} & -- & 1.49 & 3.92 & 3.78\\ \hline \\ \label{table: ram} \end{tabular} \vspace{-0.5cm} \end{center} \section{Conclusion} In this paper, we proposed an automated method that segments microglia in 3D images. No smoothing or enhancement was applied to the images prior to running our algorithm. Coupled TuFF-BFF was able to segment the processes and soma from 3D images of microglia from the mouse brain, simultaneously capturing the object of interest despite intensity inhomogeneity throughout the cell and background noise. While our method performed better than the state of the art, it could be further improved to attain a more accurate thickness of the cell and to capture the low-intensity areas of the branches. We plan to apply our method to images of microglia from mice in other states that significantly alter the microglia morphology. Another planned extension involves using coupled TuFF-BFF to extend existing cell tracking algorithms \cite{mansouri2004constraining, goobic2005image, cui2006monte}. \clearpage \bibliographystyle{IEEEtran} {\small
\section{Introduction} \label{sec:intro} The need for improved energy resolution and sensitivity to smaller energy depositions led to the proposal to use cryogenic techniques as an instrument for detecting low-energy particles.\\ In 1984 three different groups suggested low temperature detectors as an instrument to investigate fundamental problems in nuclear and astroparticle physics. Fiorini and Niinikoski \cite{Fiorini:1983yj} proposed cryogenic calorimeters for the study of double beta decay and for the neutrino mass measurement. Drukier and Stodolsky \cite{drukier1} studied the use of superconducting detectors in the search for coherent neutrino scattering off nuclei. Finally, McCammon and coworkers \cite{McCammon} indicated X-ray astrophysics as a possible field where cryogenic devices might have an important impact.\\ In the last 30 years, Low Temperature Detectors (LTDs) have experienced very rapid growth and reached such a level of maturity and versatility that they nowadays represent one of the leading technologies in several research fields. Examples are: dark matter \cite{pirroreview}, neutrinoless double beta decay \cite{Poda:2017jnl}, neutrino mass measurement \cite{Nucciotti:2015rsl}, rare nuclear decays and processes \cite{demarcillac}, X-ray astrophysics \cite{Ullom}, and precision measurements of the cosmic microwave background \cite{pirroreview}.\\ Nevertheless, to face the challenges posed by the aforementioned searches and to increase the sensitivity of current experiments, further and substantial technological developments are necessary.\\ This review describes the potential of future technical improvements in the search for rare nuclear decays with bolometers. Although the term bolometer originally denoted a detector measuring the intensity of electromagnetic radiation through the heating of the detector itself, throughout the text it will be used as a synonym for cryogenic calorimeter.
This paper is organized as follows: in section \ref{sec:herc} the basic principles of cryogenic detectors are summarized, together with their main advantages and drawbacks compared to traditional devices. In section \ref{DBD} the double beta decay physics case is analyzed: current limits and future developments are discussed. In particular, section \ref{BAPI} describes the effort toward the implementation of active background identification techniques, while the need for reduction of environmental radioactivity is discussed in section \ref{err}. Finally, section \ref{rare} highlights the use of bolometers for other rare $\alpha$\ and $\beta$\ nuclear decays as well as electron capture processes. \section{High Energy Resolution Calorimeters} \label{sec:herc} \subsection{Conventional Calorimeters} \label{sec:conv} A calorimeter is a device sensitive to the energy deposited by a single particle. \\ All conventional calorimeters share the same principle: an ionizing particle interacting with a solid medium deposits part of its energy into the medium itself. The released energy $E$ produces out-of-equilibrium excitation quanta: electron-hole pairs or photons. The quanta are collected as completely as possible before they decay into undetectable channels. To obtain good energy resolution, the detector response must be uniform throughout the detection volume, so that the fraction of energy released into the desired channel and the collection efficiency are the same for all events. The number of excited quanta is proportional to $E$ and inversely proportional to the mean energy $w$ necessary to produce each of them; Poissonian statistical fluctuations in the number of created quanta represent the ultimate limit of the technology.\\ When energy resolution matters, the choice in conventional detectors is limited to germanium or silicon devices.
For example, in silicon $w = 3.6$ eV and the best energy resolution obtained with semiconductor X-ray detectors is about 125 eV Full Width at Half Maximum (FWHM) at 6 keV \cite{quaglia}. The excitation energy $w$ is about three times the band gap $E_g$. The presence of several modes with excitation energy below $E_g$, together with momentum conservation, which requires the excitation of lattice vibrations (phonons), implies that about 70\% of the energy goes into undetectable channels. The statistical contribution to the energy resolution $\Delta E_{rms}$ is: \begin{equation} \Delta E_{rms} = \sqrt{wFE}, \label{risoSi} \end{equation} where $F$ is the Fano factor \cite{Fano,Klein}, which reduces the overall random spread when multiple excitation mechanisms play a role. On the other hand, the maximum phonon energy in Si is only 60 meV; many more phonons than electron-hole pairs are therefore produced. The possibility of detecting such phonons overcomes the limit imposed on the energy resolution by Poissonian fluctuations and allows much smaller energy depositions to be detected.\\ This is the rationale behind the cryogenic calorimeter technique.\\ \subsection{Bolometers} \label{sec:basic} A bolometer is a solid-state device composed of an absorber, connected through a thermal link to a heat sink, and equipped with a temperature sensor (thermometer) for the conversion of phonons into an electrical signal. Different thermometers are commonly used depending on the specific application: high resistivity doped semiconductors (Neutron Transmutation Doped Thermistors \cite{Hallerf, Haller}), paramagnetic sensors (Metallic Magnetic Calorimeters \cite{porst}), and superconducting sensors (Kinetic Inductance Detectors \cite{day}, Transition Edge Sensors \cite{Ullom}, Superconducting Tunnel Junctions \cite{Enns}, Superheated Superconducting Granules \cite{Enns}).
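Eq.~(\ref{risoSi}) can be evaluated directly for the silicon example above (a sketch; the Fano factor $F\approx0.115$ for Si is an assumed literature value, not quoted in the text):

```python
import math

# Fano-limited resolution for 6 keV X-rays in silicon.
# w = 3.6 eV is quoted in the text; F ~ 0.115 for Si is an assumed value.
w, F, E = 3.6, 0.115, 6000.0            # all in eV
dE_rms = math.sqrt(w * F * E)           # Eq. (risoSi)
fwhm = 2.355 * dE_rms                   # Gaussian rms -> FWHM
print(f"{dE_rms:.1f} eV rms, {fwhm:.0f} eV FWHM")
```

The result, roughly 117 eV FWHM, shows that the best measured resolution of about 125 eV quoted above is already close to the Fano limit.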
Several reviews on bolometers have been published \cite{Enns,bolo-review1,bolo-review2,Twerenbold,McCammon200411,Ullom} which include very general descriptions of the different LTDs making use of the various sensor approaches. Here we recall only the basic principles. Bolometers can be operated as equilibrium or non-equilibrium devices. In the former all the released energy $E$ degrades into heat, while in the latter out-of-equilibrium (ballistic) phonons are collected. In the simplest model, $E$ is fully thermalized and the temperature variation is $\Delta T=E/C(T)$, where $C(T)$ is the detector heat capacity at the working temperature $T$. In order to obtain the largest temperature variation, $C$ must be as low as possible. This leads to the necessity of operating the detector at temperatures well below one kelvin and of selecting materials so as to avoid contributions that increase the heat capacity. Several material-related characteristics contribute to the specific heat: the lattice contribution, proportional to $({T/T_D})^3$ where $T_D$ is the Debye temperature of the material; the electron contribution, which depends on $T/T_F$ where $T_F$ is the Fermi temperature; and the paramagnetic component, proportional to $1/T$. It is clear that the paramagnetic contribution is very dangerous, but the use of conductors can also be limited by the electronic specific heat. For a superconductor the electronic heat capacity decreases as $\exp(-2T_c/T)$, where $T_c$ is the superconducting critical temperature. For specific applications, superconductors can represent absorbers of interest, in which the excited quanta are the quasi-particles induced by the breaking of Cooper pairs. Typical phonon excitations are limited by the Debye cutoff frequency and lie in the range of tens of meV for most materials. The ultimate energy resolution can therefore be very high, limited only by thermodynamic fluctuations due to the random exchange of phonons with the thermal bath.
It has been shown in Ref. \citen{bolo-review2} that $\Delta E_{rms}$ is: \begin{equation} \Delta E_{rms} = \sqrt{\xi C_0K_BT_0^2}, \label{risoBo} \end{equation} where $K_B$ is the Boltzmann constant, $T_0$ is the heat sink temperature and $\xi$ is a parameter that depends on the thermometer characteristics; it is one in the ideal case but can reach values up to ten. Using Eq. \ref{risoBo}, a resolution of the order of a few eV can be reached. In reality several contributions can deteriorate the resolution: Johnson noise of the sensor and of the polarization network, phonon noise due to the temperature gradient, electronic noise of the amplifier, microphonic noise, metastable electron-hole states or long lived non-thermal phonons. Anyhow, using a suitable thermometer and an appropriate electronic readout, energy resolutions of a few eV are reachable. \\ It is remarkable that $\Delta E_{rms}$ does not depend on $E$, on the thermal conductance $G$ of the thermal link or on the detector time constant $\tau=C/G$. This feature remains valid in more refined analyses including signal and noise power spectra, provided the assumption of full thermalization holds \cite{bolo-review2}. This opens the window to operating very massive detectors with different absorber materials, provided they are kept at sufficiently low temperature.\\ On the other hand, cryogenic particle detectors sensitive to ballistic phonons are faster than equilibrium devices, since thermal equilibrium often takes a very long time to establish at low temperatures; with no restrictions on the equilibration time, they offer even more flexibility in the choice of materials. They are subject to branching statistics, like ionization detectors, but the number of excited quanta is much larger.
They may suffer from position dependence and/or from the lifetime and detection efficiency of the excitations, but for applications that require large volumes of dielectric material and do not need exceptionally good energy resolution, the speed advantage may outweigh other considerations. \subsubsection{Strengths and Weaknesses} \label{sec:limitation} To summarize, the main advantages of bolometers compared to the well established semiconductor ionization technology are: \begin{itemlist} \item better energy resolution; \item enhanced sensitivity to low energy releases; \item wide flexibility in the materials usable for the absorber. This characteristic is of primary importance when a particular isotope is needed as the absorber or as the source of particles. \end{itemlist} \noindent Despite these advantages, bolometers show limitations and practical challenges. \begin{itemlist} \item They need a very complicated apparatus to maintain very low temperatures. Despite the improvements in cryogenic techniques, the size of the experimental volume is limited to about a cubic meter. Moreover, the cryostat must be very stable, since the signal is a very tiny temperature rise (hundreds of $\mathrm{\mu}$K for one MeV of released energy $E$) with respect to the thermal bath, and temperature fluctuations may limit the detector response. Furthermore, unwanted noise introduced by the cryogenic apparatus itself could jeopardize the excellent energy resolution and affect the energy threshold. \item The slowness of the bolometric response, which can extend up to several seconds for equilibrium devices, represents a concern in the search for rare processes. Pile-up events, induced by cosmic rays and by natural radioactivity in the bolometer itself and in the surrounding materials, require operation in deep underground sites, protection against external radioactivity and a careful selection of radio-pure materials for the detector itself.
\item In rare nuclear processes a small signal must be resolved over a large background. Fully thermalized bolometers are almost equally sensitive to any kind of particle, regardless of the way the energy is released. Electrons, $\alpha$\ particles, and nuclear recoils depositing the same amount of energy in the detector produce a pulse with the same amplitude and shape.\\ In addition, the response is unaffected by the impact point of the event. While this feature allows for an excellent energy resolution, it makes it impossible to distinguish bulk from near-surface particle interactions. In other words, bolometers do not have a dead layer at the surface: they are fully sensitive throughout their volume.\\ The lack of particle identification and the impossibility to tag surface events make the external background reduction a paramount concern. This represents the most serious limitation that searches carried out with bolometric techniques are facing. \end{itemlist} \subsubsection{Hybrid bolometers} \label{sec:hybryd} To overcome the aforementioned limits, hybrid bolometers were developed, in which a double readout is exploited. The temperature increase is measured in parallel with ionization, scintillation or Cherenkov light detection. This permits the discrimination between events that release energy with different efficiencies in the different detectable channels. Typical examples are neutrons, which interact mainly by nuclear recoils, with only a very small fraction of the energy going into the ionization channel, or $\alpha$\ particles, which are quenched in the scintillation channel. This idea was initially developed for dark matter searches and led to the realization of very sensitive detectors for both heat-ionization and heat-scintillation devices \cite{cdms,Agnese:2014aze,Agnese:2013ixa,Armengaud:2016cvl,Angloher:2015ewa,Angloher:2014myn,Angloher:2016ooq,Angloher:2016hbv}.
Recently, hybrid bolometers have come to play an important role in the search for Majorana neutrinos through the neutrinoless double beta decay. The discovery potential of future experiments is closely related to the successful implementation of this technology. \section{Double Beta Decay} \label{DBD} \subsection{The ${0\nu\beta \beta}$\ physics case} The neutrinoless double beta decay ${0\nu\beta \beta}$\ \cite{Furry} is a transition in which a nucleus (A,Z) decays into its isobar (A,Z+2) with the simultaneous emission of two electrons. Both the parent and the daughter nucleus must be more bound than the intermediate one (A,Z+1) in order to avoid the occurrence of the sequence of two single beta decays. Such a condition, due to the pairing term, is fulfilled in nature for 35 even-even nuclei \cite{giuntibook}. \\ This process violates the lepton number by two units; it is not allowed by the Standard Model of interactions but is envisaged in many of its extensions in which neutrinos are their own antiparticles \cite{giuntibook}. Its discovery would unambiguously ascertain the nature of neutrinos as Majorana fermions \cite{PhysRevD.25.2951}, would constrain the absolute neutrino mass scale and would provide support to leptogenesis theories \cite{Luty:1992un}. In the standard paradigm \cite{giuntibook,pdg} the decay is mediated only by the exchange of the three light virtual neutrinos between two charged weak interaction vertices. The chirality mismatch imposed by the V-A structure of the electroweak theory leads to an amplitude proportional to a linear combination of the three neutrino masses. The absolute value of the neutrino masses is still unknown, but their sum is constrained to be less than 0.66 eV at 95\% C.L. by cosmological observations \cite{pdg,Cremonesi:2013vla}.
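The "linear combination of the three neutrino masses" mentioned above is the effective Majorana mass $m_{\beta\beta} = |\sum_i U_{ei}^2 m_i|$; a minimal sketch, with assumed illustrative oscillation parameters (not quoted in the text):

```python
import math, cmath

# Sketch of the effective Majorana mass m_bb = |sum_i U_ei^2 m_i| that drives
# the 0vbb amplitude, for normal ordering. The oscillation parameters below
# are illustrative assumed values: sin^2(th12) ~ 0.31, sin^2(th13) ~ 0.022,
# dm21^2 ~ 7.5e-5 eV^2, dm31^2 ~ 2.5e-3 eV^2. alpha21, alpha31 are the
# unknown Majorana phases.
S12, S13 = 0.31, 0.022
DM21, DM31 = 7.5e-5, 2.5e-3

def m_bb(m1, alpha21=0.0, alpha31=0.0):
    """Effective Majorana mass (eV) for normal ordering, lightest mass m1."""
    m2 = math.sqrt(m1**2 + DM21)
    m3 = math.sqrt(m1**2 + DM31)
    c13 = 1.0 - S13
    term = ((1.0 - S12) * c13 * m1
            + S12 * c13 * m2 * cmath.exp(1j * alpha21)
            + S13 * m3 * cmath.exp(1j * alpha31))
    return abs(term)

# Quasi-degenerate example (m1 = 0.2 eV) vs. fully hierarchical (m1 ~ 0):
print(m_bb(0.2), m_bb(0.0))
```

For quasi-degenerate masses $m_{\beta\beta}$ tracks the common mass scale, while for a vanishing lightest mass it drops to the few-meV level, which is why the experimental reach is usually discussed per mass ordering.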
On the other hand, the squared mass differences are well measured by neutrino oscillation experiments.\\ Three possible orderings are therefore conceivable: normal hierarchy (NH), in which $m_{\nu1} < m_{\nu2} < m_{\nu3}$; inverted hierarchy (IH), where $m_{\nu3} < m_{\nu1} < m_{\nu2}$; and the quasi-degenerate hierarchy (QD), for which the mass differences are tiny compared to the absolute values. \\ Being a second-order weak interaction process, and due to the smallness of the neutrino masses, extraordinarily long lifetimes ($\tau > 10^{25}$ yr) are expected for the ${0\nu\beta \beta}$\ decay.\\ Despite decades of experimental searches it has not been observed so far. Current limits on the mean lifetimes $\tau$ are in the range of $10^{24-26}$ yr \cite{Cremonesi:2013vla,DellOro:2016tmg}; running experiments are deeply probing the QD parameter space and have the possibility to start to scan the IH region \cite{Cremonesi:2013vla}. \\ The goal of the next generation experiments is to completely cover the IH mass scheme and to have a high chance of assessing the neutrino nature in their first operational stages \cite{Agostini:2017jim}.\\ The main signature of the ${0\nu\beta \beta}$\ decay is a peak in the sum energy spectrum of the electrons at the transition energy of the reaction (commonly referred to as ${Q_{\beta\beta}}$). A typical ${Q_{\beta\beta}}$\ for nuclei of experimental interest lies in the 2--3 MeV energy range. The signal peak must be resolved on top of a continuum background induced by natural and anthropogenic radioactive decay chains and by cosmogenically induced activity.
Consequently, the main task in ${0\nu\beta \beta}$\ searches is to decrease the background in the Region Of Interest (ROI).\\ The requirements to achieve the IH coverage ($\tau \sim 10^{27-28}$ yr) follow consequently: about a few thousand moles of the isotope under study (several hundred kg) must be measured, in combination with a background close to zero at the ton $\times$ year exposure scale and a FWHM energy resolution better than $0.5$\% \cite{Cremonesi:2013vla,Artusa:2014wnl}.\\ Bolometers are natural candidate detectors for this purpose. They show excellent energy resolution, high detection efficiency thanks to the source-equals-detector approach, scalability to the ton scale, and can be made of different materials, allowing the search in several candidate nuclei.\\ The state of the art in the bolometric search for the ${0\nu\beta \beta}$\ is represented by the CUORE (Cryogenic Underground Observatory for Rare Events) experiment \cite{Artusa:2014lgv}. CUORE recently demonstrated \cite{Alduino:2017ehq} that a thousand TeO$_2$ bolometers can be successfully operated at a temperature of about ten mK, studying the decay of $^{130}$Te ($Q_{\beta\beta}$ $\sim$ 2527 keV \cite{Redshaw:2009zz, scielzo09,Rahaman:2011zz}). Despite the improvement compared to its predecessors CUORICINO \cite{Andreotti:2010vj} and CUORE-0 \cite{Aguirre:2014lua, Alfonso:2015wka,Alduino:2016vjd}, the CUORE background is currently of the order of $10^{-2}$ counts keV$^{-1}$kg$^{-1}$ y$^{-1}$ \cite{Alduino:2017ehq}, as expected from simulations \cite{Alduino:2017qet}.\\ Its sensitivity is mainly limited by energy-degraded $\alpha$'s, emitted by surface contaminations of the crystals and of the copper supporting structure \cite{Bucci:2009fk,Alduino:2017qet}. High energy (4--6 MeV) $\alpha$\ particles, in fact, lose only part of their energy in the crystal or in the surrounding materials and give rise to a continuum of events in the ROI.
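The exposure scale quoted above ($\tau \sim 10^{27-28}$ yr from a few thousand moles of isotope) can be checked with a back-of-the-envelope estimate in the zero-background limit; a minimal sketch with assumed illustrative parameters:

```python
import math

# Zero-background half-life sensitivity estimate:
#   T_half > ln(2) * N_isotope * efficiency * livetime / 2.44,
# where 2.44 is the Feldman-Cousins 90% C.L. upper limit for zero observed
# events and zero background. The parameter values (2000 moles, 80%
# efficiency, 5 yr livetime) are illustrative assumptions, not from the text.
N_A = 6.022e23  # Avogadro's number

def halflife_sensitivity_zero_bkg(moles, efficiency, livetime_yr, n_limit=2.44):
    """90% C.L. half-life sensitivity (yr) in the zero-background limit."""
    return math.log(2) * moles * N_A * efficiency * livetime_yr / n_limit

tau = halflife_sensitivity_zero_bkg(moles=2000, efficiency=0.8, livetime_yr=5)
print(f"{tau:.1e} yr")
```

The result is of order $10^{27}$ yr, consistent with the quoted IH target, and shows why a background close to zero at the ton $\times$ year scale is essential: any residual counts in the ROI make the sensitivity grow only as the square root of the exposure.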
The amount of surface contamination is less than a few 10$^{-8}$ counts h$^{-1}$ keV$^{-1}$ cm$^{-2}$ and cannot be measured or screened with any standard technique.\\ As stated in section \ref{sec:limitation}, the impossibility to disentangle particles interacting on the crystal surface or external to it and the lack of particle identification represent the main limitations of bolometers in the search for the ${0\nu\beta \beta}$\ decay. \\ Given the enormous effort already devoted to surface treatment, it is unlikely that the required reduction in the background level can be achieved by improving the radio-purity of the detector materials alone. The main route to overcome this limit is the development of new technologies for active background suppression. \\ This is the goal of the CUPID project (CUORE Upgrade with Particle ID) \cite{Wang:2015raa,Wang:2015taa}, which aims at enhancing the sensitivity of a bolometric experiment by two orders of magnitude, increasing the source mass and reducing the backgrounds by using isotopically enriched bolometers with particle identification. The pursued approaches are based on different physical principles and different techniques and will be detailed in section \ref{BAPI}. The reduction of the background induced by sources other than surface ones is discussed in section \ref{err}. \subsection{Other second order weak processes} In the ${0\nu\beta \beta}$\ case discussed so far, the decay was assumed to proceed to the ground state of the final nucleus. Given the short range of MeV electrons in a solid medium (a few mm), the majority of the candidate events consist of a monochromatic energy release contained in a single crystal. The decay can also proceed to the excited states of the final nucleus. Despite the longer predicted half-lives for these cases, they could be of experimental interest because the de-excitation $\gamma$\ of the final nucleus gives rise to a multi-crystal signature and therefore to a strongly reduced background.
\\ The exchange of Majorana neutrinos between the two weak vertices can also occur in transitions with a nuclear charge change of $\Delta$Z=-2, through the $0\nu \beta^+ \beta^+$, $0\nu$EC$\beta^+$ and $0\nu$ECEC decay modes, where EC stands for electron capture. In the last mode no leptons are available to carry away the released energy, and the decay can happen through radiative \cite{Sujkowski} or resonant decay \cite{Bernabeu}. The ECEC mode is therefore typically suppressed, but enhancements (up to a factor 10$^6$) may happen when there is degeneracy of the initial and final excited atomic states (resonance condition). The $\Delta$Z=-2 processes are interesting because they can provide insight into the ${0\nu\beta \beta}$\ mechanism, since they are dominated by right-handed weak currents. \\ The most sensitive probe is represented by the ${0\nu\beta \beta}$\ process to the ground state, and in the rest of the review only this decay is considered. In any case, the potential of future experiments is tied to their background reduction capability, and this aspect leads to significant improvements for all modes, irrespective of their signature. The ${0\nu\beta \beta}$\ process is always accompanied by the ${2\nu\beta \beta}$\ decay, which is allowed by the Standard Model since two neutrinos are emitted in the final state. This second order weak process has been observed for 11 nuclei \cite{Barabash:2015eza}; these are the rarest nuclear decays ever measured. The experimental signature is weaker than that of the ${0\nu\beta \beta}$\ mode and consists in a broad spectrum from zero up to the ${Q_{\beta\beta}}$\ value (neutrinos emitted at rest) and, as for the ${0\nu\beta \beta}$\ case, most of the events are contained in a single crystal.
The extraction of the signal is therefore challenging, because it must be disentangled from several background sources, and particle identification is not going to play a fundamental role since the backgrounds are mainly $\beta$/$\gamma$ , i.e. the same particle type as the signal. On the other hand, the limited size of current cryogenic apparatuses imposes the use of bolometers enriched in the isotope under investigation; the ${2\nu\beta \beta}$\ signal to background ratio will therefore dramatically increase, allowing better measurements of the ${2\nu\beta \beta}$\ shape and intensity. Finally, it must be stressed that the ${2\nu\beta \beta}$\ decay represents the irreducible background for the ${0\nu\beta \beta}$\ search, as discussed in section \ref{err}. \section{Bolometers with Active Background Suppression} \label{BAPI} \subsection{Scintillating bolometers} \label{sb} The use of heat-scintillation hybrid bolometers for the ${0\nu\beta \beta}$\ search was proposed in 2005 \cite{Pirro:2005ar}.\\ In a scintillating bolometer \cite{nsvecchioarticoloCaF2,Pirro:2005ar,Giuliani2012} a small fraction (between one per cent and one per mille) of the released energy $E$ is converted into scintillation light. This eventually escapes the crystal and is absorbed by a thin bolometer operated as a light detector.\\ The ratio between the two signals (scintillation/heat) depends on the particle type: $\beta$/$\gamma$\ particles have a light yield (LY) which is typically different from the LY of $\alpha$\ interactions or of neutrons, which are quenched. Consequently, the dual readout allows particle identification. In addition, the freedom in the choice of the absorber provides the unique opportunity of selecting ${0\nu\beta \beta}$\ isotopes with high ${Q_{\beta\beta}}$ \cite{Arnaboldi:2010tt,Gironi:2009ay, Arnaboldi:2010jx}. \\ This is a very important aspect.
The most prominent natural high-energy $\gamma$ s, induced by the $^{208}\mathrm{Tl}$\ decay, are distributed up to 2615 keV, while only rare $\gamma$\ decays coming from $^{214}$Bi populate the region above, up to 3270 keV. Selecting a ${0\nu\beta \beta}$\ candidate with a ${Q_{\beta\beta}}$\ larger than 2615 keV thus leads to a $\gamma$\ background reduction in the ROI by about one order of magnitude, as can be inferred from the $\gamma$\ spectrum measured at the Laboratori Nazionali del Gran Sasso \cite{Bucci:2009fk}.\\ With a proper absorber choice, scintillating bolometers can simultaneously get rid of both the $\alpha$-induced background and the most intense natural $\gamma$\ radioactivity.\\ In the last ten years several scintillating bolometers were operated with remarkable results using as ${0\nu\beta \beta}$\ emitters: $^{82}$Se (${Q_{\beta\beta}}$ = 2998 keV \cite{wang2016}), $^{100}$Mo (${Q_{\beta\beta}}$ = 3034 keV \cite{wang2016}) and $^{116}$Cd (${Q_{\beta\beta}}$ = 2813 keV \cite{wang2016}). The effort was devoted not only to the development of the light detector technology but also to the production of very pure enriched crystals (see section \ref{err}). Three small scale pilot experiments using scintillating and isotopically enriched crystals are being (or are close to being) operated as final demonstrators in view of a next-generation experiment. CUPID-0 \cite{Artusa:2016maw}, formerly LUCIFER \cite{Beeman:2013sba}, is currently running with an array of 24 enriched Zn$^{82}$Se crystals; LUMINEU/CUPID-0-Mo \cite{Armengaud:2016dqg, Armengaud:2017hit} will start data taking in January 2018 with an array of 20 Li$_2$$^{100}$MoO$_4$ crystals; AMoRE \cite{Kim:2015pua} is operating 5 $^{48-dep}$Ca$^{100}$MoO$_4$ bolometers but foresees the use of other molybdates, such as Zn$^{100}$MoO$_4$ or Li$_2$$^{100}$MoO$_4$, for the final experiment. The first two mentioned experiments are part of the CUPID R\&D.
No demonstrators for $^{116}$CdWO$_4$ bolometers are ongoing or planned, despite the good results obtained \cite{Danevich:2016xcm}; hence only results for $^{100}$Mo and $^{82}$Se are discussed. One of the greatest advantages of scintillating bolometers is that the amount of collected light is high enough to be recorded using a germanium slab operated as a bolometer and equipped with a standard Neutron Transmutation Doped (NTD) germanium thermistor. NTDs are heavily doped semiconductors, obtained by thermal neutron irradiation \cite{Hallerf, Haller}, with impurity concentrations slightly below the metal-insulator transition. In this variable-range-hopping regime, their resistivity depends exponentially on the temperature. They are high impedance devices (1--100 M$\Omega$), read out in constant current biasing mode and matched to room temperature JFET amplifiers. They are commonly used for the heat channel readout. The possibility to use the same sensor, with minimal modifications, also for the light channel represents a clear advantage in terms of reliability, robustness and impact on the cryogenic infrastructure, in view of a reuse of the existing CUORE infrastructure for a future experiment. These light detectors have been extensively characterized \cite{Beeman:2013zva} and the technology can be considered mature. Light detectors show a baseline RMS noise of hundreds of eV and allow particle identification even in the case of the worst scintillators (LY $\sim$ 1 keV/MeV) \cite{Artusa:2016maw,Armengaud:2017hit,Beeman:2013zva}. The ratio of the light signal associated with an $\alpha$\ interaction to that of a $\beta$/$\gamma$\ one, for events with the same heat energy release, is defined as the Quenching Factor (QF). It is of the order of 0.2 for most of the studied compounds, with the only exception of ZnSe, which exhibits a QF of 3--6 \cite{Arnaboldi:2010jx,Beeman:2013vda,Beeman:2013sba,Artusa:2016maw}.
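How a given discrimination power (as defined in Eq.~(\ref{dp}) below) translates into $\alpha$ rejection at fixed $\beta$/$\gamma$\ acceptance can be sketched assuming Gaussian $\alpha$ and $\beta$/$\gamma$\ bands of equal width, an illustrative simplification rather than a statement about real data:

```python
from statistics import NormalDist

# Toy model: alpha leakage past a one-sided cut keeping a chosen beta/gamma
# efficiency, for Gaussian alpha and beta/gamma bands of equal width sigma.
# With equal widths, the DP definition implies a band separation of
# DP * sqrt(2) in units of sigma. Illustrative assumption only.
def alpha_leakage(dp, signal_eff=0.90):
    nd = NormalDist()
    sep = dp * 2 ** 0.5              # band separation in units of sigma
    z = nd.inv_cdf(signal_eff)       # cut position below the beta/gamma mean
    return 1.0 - nd.cdf(sep - z)     # alpha fraction passing the cut

print(alpha_leakage(3.1))
```

In this toy model a DP of 3.1 with 90\% signal efficiency leaves roughly $10^{-3}$ of the $\alpha$ events, i.e. a rejection of about 99.9\%, which matches the requirement quoted later in the text.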
The discrimination capability of a scintillating bolometer is usually parametrized by the Discrimination Power (DP), defined as \begin{equation} \mathrm{DP} = \left|\mu_{\beta /\gamma}-\mu_{\alpha}\right|/\sqrt{\sigma_{\beta/\gamma}^2+\sigma_\alpha^2}, \label{dp} \end{equation} where $\mu$ and $\sigma$ denote the mean value and the standard deviation of the $\alpha$ and $\beta$/$\gamma$\ distributions, respectively; all quantities are computed at ${Q_{\beta\beta}}$\ since they may depend on the energy. The DP can be computed on the light/heat ratio, but also on any pulse shape variable. ZnSe and some molybdates, in fact, show a peculiar feature: the thermal pulse induced by an $\alpha$\ particle has a slightly faster decay time than that induced by $\beta$/$\gamma$\ interactions \cite{Gironi:2009ay,Beeman:2012jd,Armengaud:2017hit,Artusa:2016maw,Gironi:2010hs,Beeman:2013vda,Arnaboldi:2010jx,Arnaboldi:2010gj,Beeman:2012gg,ZnMo4poda,Casali:2017zvs,Kim:2017xrs,Kim:2015pua}. \begin{figure}[htb] \centerline{\includegraphics[width=8.0cm]{bw_ZnSe}} \caption{Shape parameter of a light detector as a function of the energy released in a Zn$^{82}$Se bolometer. The energy scale is calibrated on $\beta$/$\gamma$\ events and thus referred to as keVee. The lines indicate the 2$\sigma$ (continuous) and 3$\sigma$ (dashed) $\beta$/$\gamma$\ and $\alpha$\ bands. $\alpha$\ events produced by a smeared Sm source (below 3 MeVee) and by contaminations of the crystal bulk (peaks above 5 MeVee) can easily be rejected, in particular in the region of interest for the $^{82}$Se ${0\nu\beta \beta}$ (dashed vertical green line). Inset: time development of light pulses produced by $\beta$/$\gamma$\ and $\alpha$\ interactions with an energy of about 2.6 MeV. A DP of 12 is obtained at the $^{82}$Se ${Q_{\beta\beta}}$. Figure adapted from Ref.
\citen{Artusa:2016maw} with kind permission of the European Physical Journal (EPJ).} \label{znse} \end{figure} \begin{figure}[hbt] \centerline{\includegraphics[width=8.0cm]{bw_Limo}} \caption{Light yield vs. heat scatter plot obtained with an AmBe neutron source and a 151 g Li$_2$MoO$_4$ scintillating bolometer. A clear separation between $\beta$/$\gamma$\ and $\alpha$\ interactions is visible. The $\beta$/$\gamma$\ band exceeds the natural $^{208}$Tl end-point because of the prompt de-excitation $\gamma$ s following the $^9$Be($\alpha$,n)$^{12}$C$^{\star}$ reaction. The cluster of events in the $\alpha$\ region is caused by the reaction $^6$Li(n,t)$\alpha$. Figure adapted from Ref. \citen{Armengaud:2017hit} with kind permission of the European Physical Journal (EPJ).} \label{limo} \end{figure} This is a tiny (few percent) effect that allows one to discern the nature of the interacting particle without light detection, thus greatly simplifying the detector assembly and readout. The effect could be ascribed to the long scintillation decay time (of the order of hundreds of microseconds) and to the high percentage of non-radiative de-excitations of the scintillation channels, which produce delayed phonons \cite{GironiPSD}. However, a very good signal-to-noise ratio on all the channels is required because of the smallness of the effect, and the discrimination power was not always reproducible across different experimental measurements. Further developments are necessary before this can be considered a reliable technology.\\ In the case of ZnSe bolometers, a pulse shape difference, more discriminating than that on the heat channel, is seen on the light channel \cite{Beeman:2013vda}. This is currently used by the CUPID-0 collaboration \cite{Artusa:2016maw} to avoid the leakage of the $\alpha$\ band of the LY into the $\beta$/$\gamma$\ band observed in the light-vs-heat scatter plot \cite{Beeman:2013vda}. Examples of distributions of discriminating variables are reported in Fig.
\ref{znse} and Fig. \ref{limo}. \\ The requirements for a bolometric experiment to probe the IH mass region imply a background level in the ROI of $10^{-4}$ counts kg$^{-1}$ y$^{-1}$ \cite{Beeman:2011bg}. This requires a rejection factor on $\alpha$ s better than 99.9\% while preserving a signal efficiency greater than 90\% \cite{Beeman:2011bg}. A DP of 3.1 or greater is necessary to satisfy this criterion. Table \ref{tab:PID} reports the $\beta$/$\gamma$\ LY, QF and DP for all the compounds for which a pilot experiment is running or in the construction phase; all of them exhibit a DP exceeding 9, well above the requested threshold. \begin{table}[hbt] \tbl{Light yield (LY), quenching factor (QF), and discrimination power (DP) for the enriched scintillating bolometer demonstrators quoted in the text.} {\begin{tabular}{@{}lllll@{}} \toprule Bolometer & $LY_{\beta/\gamma}$(keV/MeV) & $QF$ & $DP$ & Refs. \\ \colrule ZnSe & 2.6--6.4 & 3--4.6 & 9--17 & \cite{Beeman:2013vda,Arnaboldi:2010jx} \\ Zn$^{82}$Se & 3.3--5.2 & 2.7 & 10--12 & \cite{Artusa:2016maw} \\ \hline $^{40}$Ca$^{100}$MoO$_4$ & n.a. & 0.19--0.33 & 9--11 & \cite{kim-ieee} \\ \hline Li$_2$$^{100}$MoO$_4$ & 0.73--0.77 & 0.15--0.22 & 12--18 & \cite{Armengaud:2017hit,Poda:2017jnl} \\ \botrule \end{tabular} \label{tab:PID}} \end{table} \subsection{Cherenkov light in TeO$_2$ bolometers} \label{nonsb} The TeO$_2$ bolometers used by the CUORE collaboration do not show any significant scintillation. A tiny light signal was observed in 2004 \cite{coron} and seems to have been confirmed recently \cite{Berge:2017nys}. In any case the light yield is low ($\sim$20 eV) and negligible compared to another, more important process that takes place: the emission of light through the Cherenkov effect. \begin{figure}[hbt] \centerline{\includegraphics[width=9cm]{Fig4d}} \caption{Detected light versus calibrated heat in a CUORE-like TeO$_2$ bolometer read out with a CUPID-0 NTD Ge thermometer.
The mean light is clearly energy dependent for the $\beta$/$\gamma$\ peaks (circles below 3 MeV) and compatible with zero for the $\alpha$\ decay of $^{210}$Po (triangle). TeO$_2$ does not scintillate; however, Cherenkov light is produced by $\beta$/$\gamma$\ interactions (circles) and not by $\alpha$\ ones (triangle). Figure adapted from Ref. \citen{Casali:2014vvt} with kind permission of the European Physical Journal (EPJ). } \label{Cherenkov} \end{figure} In 2010 the use of the Cherenkov light in TeO$_2$ bolometers was suggested as a tool for particle identification \cite{TabarellideFatis:2009zz}. The threshold for Cherenkov emission in TeO$_2$ is around 50 keV for electrons and around 400 MeV for $\alpha$ s. At the energy scale of interest for ${0\nu\beta \beta}$ , the signal electrons emit light while $\alpha$\ particles do not. Several tests were done on small \cite{Bellini:2012rc,Willers:2014eoa,Gironi:2016nae} and large crystals \cite{Beeman:2011yc,Schaffner:2014caa,Casali:2014vvt,Casali:2015gya,Artusa:2016mat, Berge:2017nys} to characterize the discrimination power.\\ The challenge of this method is the detection of the extremely small amount of light emitted by electrons at the $^{130}$Te ${0\nu\beta \beta}$\ energy (${Q_{\beta\beta}}$ $\sim$2.5 MeV), which is of the order of 100 eV \cite{Casali:2013bva,Casali:2014vvt,Casali:2016luq}, i.e. comparable to the noise resolution of the standard NTD-based light detectors used in scintillating bolometers (see Fig. \ref{Cherenkov}). A signal-to-noise ratio greater than 5 is needed to reach an $\alpha$/($\beta$/$\gamma$) separation allowing for a 99.9\% rejection of the $\alpha$\ background \cite{Casali:2014vvt}. Attempts to increase the light collection \cite{Casali:2014vvt} did not lead to significant results; this implies that a light detector technology with a noise level below 20 eV RMS is mandatory.
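The Cherenkov thresholds quoted above follow from the condition $\beta > 1/n$; a minimal sketch, assuming a refractive index $n \approx 2.4$ for TeO$_2$ (an illustrative value not quoted in the text):

```python
import math

# Cherenkov threshold kinetic energy: emission starts when beta = 1/n.
# The refractive index n ~ 2.4 for TeO2 is an assumed illustrative value.
def cherenkov_threshold_keV(rest_mass_keV, n=2.4):
    """Kinetic energy (keV) at which a particle reaches beta = 1/n."""
    beta = 1.0 / n
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * rest_mass_keV

e_thr = cherenkov_threshold_keV(511.0)        # electron rest mass 511 keV
a_thr = cherenkov_threshold_keV(3.7274e6)     # alpha rest mass ~3.727 GeV
print(f"electron: {e_thr:.0f} keV, alpha: {a_thr/1e6:.2f} GeV")
```

This reproduces the figures in the text: roughly 50 keV for electrons and a few hundred MeV for $\alpha$ particles, far above any natural $\alpha$ energy, so $\alpha$ events are dark in the Cherenkov channel.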
Furthermore, the light detectors must be robust and reproducible in view of a ton-scale experiment with about 1000 bolometers, must be made of radio-pure materials, and should possibly have a multiplexed readout to avoid a large heat load on the cryogenic apparatus. Finally, the light detector must have an active area comparable to the top bolometer face (about 20 cm$^2$) in order to maximize the light collection. A carefully optimized Ge bolometer with an NTD-Ge sensor \cite{Coron2004} achieved the required performance in terms of resolution, but reproducibility and robustness are far from being demonstrated. The next three sections are dedicated to the status and perspectives of the light detector technologies under development. \subsubsection{Transition edge sensors and metallic magnetic calorimeters} \label{tes} A technology able to reach the desired energy resolution on a large area light detector does exist and is currently implemented by the CRESST dark matter experiment \cite{Angloher:2011uu,Angloher:2014myn,Angloher:2015ewa}. A half mm thick sapphire disc, with a one micron layer of silicon on it, is equipped with a thin tungsten Transition Edge Sensor (TES) coupled to an aluminum absorber. A TES is a resistive device that operates at the critical temperature $T_c$ of the superconductor, where the resistivity changes sharply from zero to a finite value in a very narrow temperature interval. TESs are biased at a constant voltage, and their low impedance (in the few m$\Omega$--$\Omega$ range) imposes the use of Superconducting Quantum Interference Device (SQUID) amplifiers. They are intrinsically fast devices, with a bandwidth of MHz or more. This offers, in addition to the excellent energy resolution, two advantages: the pulse shape sensitivity is significantly improved and a time resolution better than one ms can be achieved.
A CRESST light detector, coupled to a CUORE-style bolometer, demonstrated event-by-event $\alpha$/($\beta$/$\gamma$) separation \cite{Schaffner:2014caa} (see Fig. \ref{Che_tes}), but scaling the technology to a thousand detectors requires a dedicated development on the reproducibility of the technology (e.g. uniformity of the transition temperature across many channels) at temperatures of about ten mK and on the readout multiplexing capability, to reduce the wiring complexity and the heat load. Solutions exist in the astrophysics community \cite{Nucciotti:2015rsl}, but their portability to the ${0\nu\beta \beta}$\ research field is not trivial. They are based on the use of RF-SQUIDs coupled to superconducting coplanar waveguide (CPW) GHz resonators and homodyne detection. By tuning the resonators at different frequencies, it is possible to multiplex several RF carriers (see section \ref{mkid}). This approach, called Microwave Multiplexing ($\mu$MUX), has been demonstrated for two channels \cite{Noroozian} and is quickly developing \cite{Dicker}, but it has been shown to work up to now only for compact arrays of micro-calorimeters with masses much smaller than one milligram. \begin{figure}[h] \centerline{\includegraphics[width=9cm]{bw_Ch_tes}} \caption{Light yield versus energy for background data obtained with a massive (285 g) TeO$_2$ bolometer and a TES-equipped Silicon-on-Sapphire light detector. Two distributions can be noted: a band due to $\beta$/$\gamma$\ interactions as well as a less populated band at zero light yield due to $\alpha$\ particles from a degraded $\alpha$\ source. The bands which indicate the region expected for $\beta$/$\gamma$\ events are shown as central probability bands. The dotted lines are $\pm 1.28 \sigma$ contours whereas the solid lines are $\pm 3 \sigma$ contours, thus 99.8\% of all $\beta$/$\gamma$\ events are expected to be contained within the two solid contour lines.
A discrimination power (DP) of 3.7 is achieved. The $\alpha$\ particle distribution appears at a light yield of zero, separated from the populated $\beta$/$\gamma$\ band. The dashed vertical line indicates the Q-value of $^{130}$Te of 2530 keV. Figure adapted from Ref. \citen{Schaffner:2014caa} with permission from Elsevier. } \label{Che_tes} \end{figure} Other TES implementations are under study in the ${0\nu\beta \beta}$\ community \cite{Wang:2015taa}. The first aims at reaching a lower $T_c$ by making use of the proximity effect in bilayer films of a superconductor and a normal conductor (e.g. Ir-Au, Ir-Pt, Mo-Au, etc.). The second makes use of NbSi, a superconductor which, for an appropriate stoichiometric ratio, has an intrinsically high resistivity in the normal state (1-5~M$\Omega$). This would allow the use of the same conventional electronics, based on JFETs, as for NTDs, when the sensors are operated within the transition. This solution does not provide all the advantages related to low-impedance TESs, but it is possible to get a temperature sensitivity up to ten times higher than that achieved by NTDs while keeping the same front-end electronics, and thus with a minimal impact on the readout structure. \\ Another class of sensors, which share similar characteristics with TESs, is represented by Metallic Magnetic Calorimeters (MMC). Their principle is based on the strong temperature dependence of the magnetization in paramagnetic sensors, typically made of Au:Er. A variation of the magnetic moment can be read out with high sensitivity using meander-shaped thin-film pickup coils and SQUID magnetometers. This effect, already exploited with outstanding results in X-ray spectroscopy \cite{porst}, can be used to develop exceptionally sensitive thermometers. They are very fast sensors (rise-time below 50~$\mu$s) and can reach an energy resolution better than ten eV. 
Because of these two features, their multiplexed readout is even more demanding than that of TESs and the only feasible approach is $\mu$MUX \cite{kempfs}. MMCs are adopted by the AMoRE collaboration \cite{Kim:2017xrs}, although the amount of scintillation light produced by the $^{48-dep}$Ca$^{100}$MoO$_4$ crystal does not require a very sensitive light detector. No multiplexed readout is applied in this case. \subsubsection{Kinetic inductance detectors} \label{mkid} The working principle of a Kinetic Inductance Detector (KID) is based on the change of its kinetic inductance when the density of Cooper pairs is modified \cite{day}. In superconducting materials the Cooper pairs, characterized by a binding energy smaller than 1$\,$meV, move through the lattice without scattering. If an RF electromagnetic field is applied, the pairs oscillate and acquire a kinetic inductance. The inductor is inserted into a high quality factor ($Q>10^3$) RLC circuit, giving rise to a resonator with a resonant frequency $f_0=1/(2\pi\sqrt{LC})$. An energy release $E$, able to break Cooper pairs into quasi-particles, changes the kinetic inductance and thus the transfer function, and can be inferred from the variations in phase and amplitude of the transmitted signal. 
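As a rough numerical illustration of the resonator working point, the relation $f_0=1/(2\pi\sqrt{LC})$ can be evaluated for lumped-element values of the order typically quoted for KIDs (the inductance and capacitance below are illustrative assumptions, not taken from the text):

```python
import math

def resonant_frequency(inductance, capacitance):
    """Resonant frequency f0 = 1/(2*pi*sqrt(L*C)) of an LC resonator, in Hz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance * capacitance))

# Illustrative lumped-element values: L = 10 nH, C = 1 pF
f0 = resonant_frequency(10e-9, 1e-12)
print(f"f0 = {f0 / 1e9:.2f} GHz")  # ~1.59 GHz, within a GHz-range readout band
```

Values of this order place the resonance in the GHz range, consistent with the 1-4 GHz readout electronics mentioned below.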
KID detectors are a leading technology in astroparticle physics \cite{Monfardini:2011yh,Mazin:2013wvi} and their use in the ${0\nu\beta \beta}$\ field was proposed by the CALDER project \cite{Battistelli:2015vha}.\\ Their strengths are: several KIDs can be coupled to the same feedline and multiplexed by making them resonate at slightly different frequencies, since $f_0$ can be easily changed by slightly modifying the layout of the capacitor and/or inductor of the circuit; the readout electronics is quite simple and operated at room temperature, with the exception of a low-noise cryogenic amplifier; the performance does not depend critically on the working temperature, provided it is well below the critical temperature of the superconductor.\\ The main drawback is that the dimensions must be smaller than the wavelength of the excitation signal, so that the current in the inductor is uniform and the signal does not depend on the position of the energy release. Their size is limited to a few mm$^2$ by the optimal range (1-4 GHz) of already available electronics and by the number of resonators that can be coupled to the same line. To reach a large-surface light detector, KIDs are deposited on a silicon substrate as in CRESST light detectors~\cite{Angloher:2011uu}. Photons impinging on the back side of the chip produce ballistic phonons which scatter through the substrate and reach the KIDs on the opposite surface \cite{Moore:2012au,Swenson:2010yf}. To compensate the efficiency loss with respect to direct absorption, a few KIDs per light detector are needed. In the last three years the CALDER \cite{Battistelli:2015vha} project has developed and tested several KID detectors using aluminum as the superconducting material \cite{Cardani:2015tqa,Casali:2015bhk,Colantoni:2016alu,Martinez:2016rks,Vignati:2016adb,Colantoni:2016tpk,Bellini:2016lgg,Casali:2017yro}. 
With a 4 mm$^2$ single KID resonator on a 2x2 cm$^2$ substrate, an energy resolution of about 80 eV has been achieved \cite{Bellini:2016lgg}. The energy resolution of KIDs scales as T$_c/\sqrt{QL}$ \cite{Zmui, McCammon}. To boost it to the desired level, different superconductors with optimized T$_c$ and L were investigated. The first large-area (25 cm$^2$) detector made with Al+Ti+Al KIDs is being measured and results will be published soon. \subsubsection{Neganov-Trofimov-Luke effect} \label{nl} If a light detector comprises a semiconductor substrate, its baseline noise resolution can be enhanced by exploiting the Neganov-Trofimov-Luke (NTL) effect \cite{luke,neganov}. An electric field applied to the device, in fact, accelerates the electrons and holes generated by an energy release $E$ inside the detector itself. The work done by the field on the charges produces an enhancement of the thermal signal recorded by the thermometer attached to the semiconductor wafer. The total energy $E_t$ dissipated is \begin{equation} E_{t} = E(1+\frac{eV}{w}), \label{NL} \end{equation} where $e$ is the electron charge, $V$ is the applied drift voltage across the electrodes, and $w$ is the mean energy needed to create an electron-hole pair. The amplification is independent of any other source of noise and makes it possible to lower the baseline noise resolution and decrease the energy threshold. This mechanism is well known and used in dark matter searches \cite{stark,Isaila:2011kp,cdms,edel} and in the last two years has been successfully applied to detect the Cherenkov light in TeO$_2$ bolometers with different devices, with both germanium and silicon absorbers \cite{Willers:2014eoa,Casali:2015gya, Artusa:2016mat,Gironi:2016nae,Biassoni:2015eij}. \\ Recently, complete event-by-event $\alpha$/($\beta$/$\gamma$) separation in a full-size TeO$_2$ CUORE bolometer coupled to an NTD-based germanium light detector with NTL amplification has been achieved \cite{Berge:2017nys}. 
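The size of the NTL amplification in Eq. (\ref{NL}) can be illustrated with a short numerical sketch (the drift voltage and pair-creation energy below are illustrative assumptions; $w$ of a few eV is the typical order of magnitude for a semiconductor absorber):

```python
def ntl_gain(drift_voltage_volts, pair_energy_ev):
    """NTL amplification factor E_t/E = 1 + e*V/w.

    With V given in volts, the product e*V expressed in eV is numerically
    equal to V, so the gain reduces to 1 + V/w with w in eV.
    """
    return 1.0 + drift_voltage_volts / pair_energy_ev

# Illustrative values: V = 60 V across the electrodes, w ~ 3 eV
gain = ntl_gain(60.0, 3.0)
print(gain)  # 21.0: a 100 eV energy release dissipates ~2.1 keV in the wafer
```

The sketch shows why even a modest drift voltage can raise the signal well above the baseline noise, which is unaffected by the amplification.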
In this case the electrodes, a set of concentric Al rings on one side, generate an electric field parallel to the surface that decreases the charge-trapping probability thanks to the short path length of the charges to the electrodes. This represents a fundamental result in view of an application in CUPID \cite{Wang:2015raa}, since it could be adopted with minimal modification of the entire readout with respect to the one currently in use in the CUORE experiment. Devices with silicon absorbers and TES \cite{Willers:2014eoa} and NTD \cite{Gironi:2016nae,Biassoni:2015eij} sensors were also developed. In the case of NTD sensors, the advantages of silicon compared to a germanium absorber hinge on the wider range of processing technologies for silicon, which potentially allows the integration of thermal sensors and mechanical suspension structures. A further advantage of silicon over germanium is the fact that the specific heat of silicon is a factor of four smaller than that of germanium, opening the possibility of building substantially larger detectors without compromising the signal amplitude, which is inversely proportional to the heat capacity of the device. Promising results have been obtained with 2x2 cm$^2$ area detectors \cite{Biassoni:2015eij} and the first 5x5 cm$^2$ samples are under measurement. Plans to use the NTL effect with KID sensors for single photon counting are also under investigation. \subsection{Surface sensitive bolometers} \label{shape} Tagging surface events is difficult, as bolometers are fully sensitive devices in their whole volume and often present a single response to any type of fast energy deposition, irrespective of its nature and location. 
Even if the $\alpha$\ background could be reduced to the desired level, a non-negligible contribution could be represented by single $\beta$\ particles emitted in decays of $^{214}$Bi, as well as by $^{210,208}$Tl decays that emit electrons and $\gamma$ s in coincidence, producing a single event which escapes delayed-coincidence tagging (see section \ref{err}). An alternative approach to those based on hybrid bolometers consists in achieving impact-point sensitivity by making use of superconducting Al films (about ten $\mu$m thick) deposited on the detector surface, which can modify the signal shape of surface events \cite{Schnagl:2000}. The physical principle is the following: athermal phonons generated by a particle that releases its energy within a few mm from the surface ($\alpha$\ or $\beta$ s) break Cooper pairs in the superconducting film and produce quasi-particles, which have a long lifetime (on the order of milliseconds) in high-purity aluminum. The quasi-particle recombination produces ballistic phonons that add a delayed component to the leading edge of the signal read out by the sensor on the main bolometric absorber. For bulk events, instead, the athermal phonon population reaching the Al film is more degraded in energy and less efficient in producing quasi-particles. Surface events will therefore have longer rise-times compared to bulk interactions. This mechanism has been demonstrated in Ref. \citen{Oli2008}, and the proof of principle of this technique for a ${0\nu\beta \beta}$\ decay detector has been given with a TeO$_2$ bolometer with a deposited Al film, using fast phonon sensors based on NbSi films with rise times on the order of one ms \cite{Nones:2012}. Unfortunately, the current NbSi sensor technology is unsuitable for the ${0\nu\beta \beta}$\ search because the important component of athermal phonons in the signal induces a position-dependent amplitude and thus deteriorates the energy resolution. 
The recently ERC-approved CROSS project \cite{Giuliani:2017} aims at achieving surface-to-bulk signal separation with the use of NTD sensors, i.e. with a heat pulse rise-time on the order of tens of milliseconds. This might be possible as the excellent signal-to-noise ratio characterizing the typical CUORE readout has the potential to highlight even tiny pulse-shape differences. This technique could be applied also to scintillating bolometers since, once the surface $\alpha$\ background is rejected, the dominant contribution arises from surface $\beta$\ contaminations \cite{Artusa:2014wnl}. \section{Environmental Radioactivity Reduction} \label{err} The particle identification techniques, on which detector developments are mainly focused, aim at reducing to negligible levels the effect of surface contaminations of detector materials, which represents the dominant background in CUORE. However, the reduction of surface contamination effects cannot by itself ensure the two-order-of-magnitude background reduction foreseen in CUPID. Other sources, such as bulk contaminations of crystals, the copper supporting structure, lead shields, and ``small parts'' such as glue, bonding wires, or readout cables and pads, can contribute to the ROI counting rate at levels of $\sim 10^{-3}$ counts kg$^{-1}$ y$^{-1}$ \cite{Artusa:2014wnl}.\\ All of this results in serious restrictions in the use of materials. Stringent purification protocols for crystal production must be developed, and all the materials close to the detectors have to be fabricated from radio-pure materials and assembled in a radon-free environment with dedicated radio-pure tools. Special attention must also be paid to avoiding cosmogenic activation. An exhaustive list of low-background techniques exploited in this research field can be found in Ref. \citen{Poda:2017jnl}.\\ The bulk activity of the crystal absorber must be controlled to a level such as not to spoil the background index in the ROI. 
Internal $\alpha$\ decays from the U/Th chains cannot contribute to the background since they give rise to sharp peaks with energies Q$_{\alpha} >$ 4 MeV, i.e. far above the ROI. Internal $\beta$\ decays with Q$_{\beta} >$ 3 MeV could instead represent a worrisome background due to their continuum spectrum. They are generated by $^{214}$Bi ($^{238}$U chain) and its daughter $^{210}$Tl, and by $^{208}$Tl ($^{232}$Th chain), as reported in the scheme of Fig. \ref{BiPo}. \begin{figure}[h] \centerline{\includegraphics[width=6.5 cm]{214BiDecay.pdf}\includegraphics[width=6.5cm]{208Tl_decay.pdf}} \caption{Left: decay scheme of $^{214}$Bi. The short life of its daughter, $^{214}$Po, causes the pile-up of the $\beta$\ emitted by $^{214}$Bi and the $\alpha$\ particle produced by $^{214}$Po. Right: decay scheme of $^{208}$Tl. The delayed coincidence with the $\alpha$\ particle produced by its mother, $^{212}$Bi, allows the suppression of this background source. Illustration courtesy of Laura Cardani. } \label{BiPo} \end{figure} Given the slowness of the bolometric response, the $^{214}$Bi $\beta$\ decay is followed by the fast $^{214}$Po $\alpha$\ decay and their energies sum up far from the ROI (Bi-Po events). $^{208}$Tl and $^{210}$Tl decay instead to $^{208}$Pb and $^{210}$Pb respectively; however, they can be tagged by a delayed coincidence with the primary $\alpha$\ decay. In order to limit the dead time introduced with this technique ($\beta$\ half-lives of the order of minutes), the total activity of U/Th contamination in the bulk must be kept at the $\mu$Bq/kg level.\\ Another important aspect which must be taken into account is that the increase of the sensitive mass demands the production of high-quality, radio-pure enriched crystals. The difficulty in operating cryogenic systems with an experimental volume larger than the existing CUORE one makes isotope enrichment the only viable way of enhancing the mass of the isotope of interest. 
The use of enriched material has consequences on the purification/crystallization chain. The enriched material could have residual chemical impurities which may require additional purification stages to get high-quality crystals. Enrichment is generally done by gas centrifugation in facilities used to separate different isotopes without special radio-purity concerns, and therefore an additional purification is required. Furthermore, the enriched material is expensive, and the growth procedure must be adapted in order to reduce as much as possible the irrecoverable losses of the initial charge. Crystal bulk contaminations \cite{Poda:2017jnl} are currently at the level of a few to ten $\mu$Bq/kg and are approaching the target value to reduce the internal background to a harmless level.\\ Even if it were possible to get rid of U and Th contaminations, the ${2\nu\beta \beta}$\ decay induces an irreducible background for the ${0\nu\beta \beta}$\ search. The end-point of the ${2\nu\beta \beta}$\ spectrum (when neutrinos are emitted at rest) contributes to the background in the ROI, since all the available energy is carried away by the two electrons except for the two negligible neutrino masses. The ratio of ${2\nu\beta \beta}$\ to ${0\nu\beta \beta}$\ event rates depends on the ${2\nu\beta \beta}$\ half-life and on the energy resolution, and can be assumed negligible in the case of the IH region for an energy resolution better than one per cent \cite{Artusa:2014wnl}. On the other hand, accidental pile-up of two ${2\nu\beta \beta}$\ events in the same detector within a time window smaller than the typical time response of the detector can produce a signal that mimics a ${0\nu\beta \beta}$\ decay. This contribution can be suppressed using the leading edge of the thermal response, which ranges from microseconds (athermal sensors) to milliseconds (thermal sensors). 
This turns out to be problematic only in the case of $^{100}$Mo, which has the fastest observed ${2\nu\beta \beta}$\ decay and can contribute to the ROI with a background ranging from the level of 10$^{-2}$ counts keV$^{-1}$kg$^{-1}$ y$^{-1}$ \cite{Beeman:2011bg,Chernyak:2012zz,Chernyak:2014ska} in the case of slow NTD thermal sensors, down to 10$^{-4}$ counts keV$^{-1}$kg$^{-1}$ y$^{-1}$ in the case of fast sensors sensitive to athermal phonons such as MMCs \cite{Luqman:2016okt}. The discrimination capability depends on the slope-to-noise ratio \cite{spieler} and it has been demonstrated that the use of a light detector with NTL signal amplification could lower this background down to 6$\times$10$^{-5}$ counts keV$^{-1}$kg$^{-1}$ y$^{-1}$ \cite{Chernyak:2016aps}. In the CUORE background budget \cite{Alduino:2017qet}, no positive indication of bulk contaminations in the other detector elements has been obtained. However, the current upper limits could translate into potentially dangerous counting rates for the CUPID background target. A one-order-of-magnitude improvement in the sensitivities of present screening technologies is therefore mandatory.\\ The screening techniques commonly used are: HPGe (High Purity Germanium detector), ICPMS (Inductively Coupled Plasma Mass Spectrometry) and NAA (Neutron Activation Analysis). NAA and HPGe measurements can reach a sensitivity of the order of $\mu$Bq/kg on $^{232}$Th in copper, the material used as the supporting structure for the crystal absorber. The sensitivity is limited by the mass of the copper sample, which cannot be increased {\it ad libitum} due to the self-absorption of the $\gamma$\ lines inside the sample. It could be increased by making use of pre-concentration of contaminants through chemical treatment of materials, which is equivalent to a mass increase of the sample. The technique is used in ICPMS measurements but can also be applied to NAA or HPGe spectroscopy. 
However, it requires a dedicated study for each material as well as a very careful control of systematics.\\ Some detector parts used in the form of foils, such as super-insulation or flat cables, are not suitable for HPGe screening due to their small mass, nor for NAA or ICPMS, which have restrictive conditions on the materials that can be analyzed. In these cases, surface alpha spectroscopy through Si surface-barrier diodes proved to reach competitive sensitivities. An alternative technique consists of using a bolometric detector for the measurement of surface/bulk contamination; TeO$_2$ slabs can be used to realize a sandwich-like detector where samples are inserted in-between thin bolometers. Given the better energy resolution, the lower energy threshold and the superior radio-purity \cite{Cardani:2012xq}, this approach could reach a sensitivity up to a factor 100 higher than the current ones and could provide information on the X-ray emission of the samples, a complementary input for contamination identification. \section{Other Rare Nuclear Decays and Processes} \label{rare} \subsection{$\alpha$\ decays} The discovery in 2003 of the $\alpha$\ decay of $^{209}$Bi, with a half-life of 1.9$\times$10$^{19}$ yr \cite{demarcillac}, renewed the interest in the field of rare $\alpha$\ decays as a fundamental tool for the study of the structure of nuclei and for a better understanding of the theoretical framework of nuclear models \cite{XuRen}. The possibility of producing massive bolometers with a wide choice of materials has very clear advantages. A significant amount of the nucleus of interest can be embedded in the detector itself, i.e. the detector and the decay source coincide. As a consequence the decay is fully contained in the detector, thus resulting in excellent detection efficiency. This aspect is of primary importance for rare $\alpha$\ decay searches also because of the short (few microns) range of a MeV $\alpha$\ particle in a solid medium. 
In addition, the high energy resolution and the capability to identify the nature of the particle interacting in the detector, as discussed in section \ref{sb}, lead to tremendous background suppression, especially for rare decays with transition energy lower than 2.6 MeV (as for some lead isotopes), which would otherwise be overwhelmed by the nearby background from the $\gamma$\ emission of $^{208}\mathrm{Tl}$. Bolometers allow the measurement of half-lives much longer than the age of the Universe. They led to the conclusive test on the identification of the $\alpha$\ decay of $^{209}$Bi \cite{demarcillac} and the discovery of its decay to the first excited level \cite{Beeman:2011kv,cardanirari}, to the discovery of the $^{180}$W~\cite{Cozzini} and $^{151}$Eu~\cite{Casali:2013zzr} $\alpha$\ decays, and to the most stringent lower limits on the half-lives of the $\alpha$\ decays of lead isotopes \cite{Beeman:2013Pb}.\\ An alternative approach consists in the use of a well-known scintillating bolometer doped with the isotope under investigation. This makes it possible to select a very radio-pure crystal with high light yield and excellent $\alpha$/($\beta$/$\gamma$) separation, and to study more elements, such as Sm, Nd, Os, Hf, Pt. The drawback is that the mass of the candidate isotope is limited to a few grams. This technique was recently proposed and used for a precise measurement of the half-life and transition energy of the $^{148}$Sm $\alpha$\ decay in a ZnWO$_4$ crystal \cite{Casali:2016vbw}. \subsection{$\beta$\ decays} The improvement of the sensitivity in the search for rare $\alpha$\ decays had an impact also on the study of extreme $\beta$\ decays, such as those characterized by small decay energies or by a large angular momentum difference between the initial and final nuclear states. \\ The knowledge of the existence of rare $\beta$\ decays and of their spectral shape is fundamental, since they can represent a background for the ${0\nu\beta \beta}$\ search. 
$^{214}$Bi, for example, has $Q_\beta$= 3270 keV, thus exceeding the ${Q_{\beta\beta}}$\ of the most studied ${0\nu\beta \beta}$\ isotopes (see section \ref{err}). In about 19\% of the cases it decays to the ground state of $^{214}$Po with a change in angular momentum J and parity $\pi$ of $\Delta J^{\Delta \pi}=1^-$ (first forbidden non-unique transition). Its shape is neither well measured experimentally nor theoretically predicted; moreover, it must be taken into account that forbidden $\beta$\ spectra can significantly deviate from known allowed spectra. But there is a more important feature related to forbidden $\beta$\ decays: their shape could be used to infer the ratio of the weak axial to vector coupling constants g$_A$/g$_V$ in nuclear decays.\\ The ${0\nu\beta \beta}$\ half-life is proportional to the fourth power of g$_A$. Recent analyses of nuclear models in $\beta$\ and ${2\nu\beta \beta}$\ decays indicate that the value of g$_A$ could be quenched, up to a ratio g$_{free}$/g$_A$ $\sim$4, where g$_{free}$ = 1.27 is the free value of g$_A$ inferred from neutron decay. This could potentially translate into a two-order-of-magnitude difference in the sensitivity of ${0\nu\beta \beta}$\ experiments. This {\it naive} expectation has very recently been scaled back to a factor between two and six if a consistent approach is used for the calculation of the ${2\nu\beta \beta}$\ and ${0\nu\beta \beta}$\ decays \cite{Suhonen:2017rjf}. Nevertheless, the measurement of g$_A$ is of pivotal importance. It could be inferred from the shape of $\Delta J^{\Delta \pi}=4^+$ non-unique forbidden $\beta$\ decays, as for $^{113}$Cd and $^{115}$In \cite{Haaranen:2016rzs,Haaranen:2017ovc,Kostensalo:2017xxq,Kostensalo:2017jgw,Suhonen:2017krv}. 
For such decays the shape of the energy spectrum depends on the sum of different nuclear matrix elements with different phase-space factors, which include $g_A$ and $g_V$, and their values can be extracted from the comparison between theoretical and experimental spectra. While in the case of $^{113}$Cd the spectral shape has been characterized \cite{Belli}, only an old measurement \cite{Pfe} exists for the shape of $^{115}$In. To perform a clean and reliable measurement, a 10 g LiInSe$_2$ scintillating bolometer is currently taking data at the Modane underground laboratory \cite{Tretyak:2017zqd}. \subsection{Electron capture processes} Bolometers can play an important role also in the search for other rare nuclear processes, such as rare electron captures. When the source of the decay is embedded in the bolometer, in fact, a signal corresponding to the total binding energy of the captured electron can be measured with very high efficiency, because the X-rays/Auger electrons following the atomic de-excitation are fully contained. Moreover, the excellent energy resolution is a powerful tool to discern externally generated $\gamma$ s from X-ray and electron cascades. \\ As an example, the electron capture of $^{123}$Te is predicted but not yet observed. The best limit, obtained with a TeO$_2$ bolometer, is $T_{1/2}>5.0\times10^{19}$ y \cite{Alessandrello:2002ag}. A previous observation \cite{Alessandrello:1996zz} was refuted and explained as the electron capture in $^{121}$Te, an isotope created at sea level by neutron capture on the 0.09\% naturally abundant $^{120}$Te isotope. The importance of this measurement relies on the fact that it could be used to constrain and test nuclear models used to estimate intensities for rare electroweak decays \cite{civitarese}, models that in some cases foresee a suppression of the rate by up to six orders of magnitude \cite{broglia97}. 
The CUORE experiment, with its huge mass compared to its predecessors, will be able to improve these results by orders of magnitude and possibly to discover the electron capture of $^{123}$Te. \section{Conclusions} Bolometers are cryogenic calorimeters whose principle is based on phonon detection. They exhibit excellent energy resolution, a low energy threshold, high detection efficiency, and a wide choice of materials for the calorimeter absorber. These characteristics make them one of the best performing instruments in several fields: double beta decay searches, neutrino mass measurements, dark matter searches, CMB precision measurements, high-resolution X-ray detection, and rare nuclear process detection. \\ This review is focused on the bolometric applications in the field of rare nuclear processes, in particular on the neutrinoless double beta decay search. The demand for increased experimental sensitivity imposes a series of technical challenges and improvements of the current technology. In particular, experiments aiming at covering the inverted hierarchy region of the neutrino mass scheme, and possibly at discovering the Majorana nature of the neutrino, need to lower the background in the region of interest to the level of 10$^{-4}$ counts kg$^{-1}$ y$^{-1}$ and to increase the source mass. This implies a manifold effort: the development of passive methods for background reduction and new screening techniques, the growth of very radio-pure enriched crystals, and the implementation of reliable active background rejection techniques. \\ The first pilot demonstrators, using enriched scintillating bolometers for particle identification, are already taking data, while the development of detectors for the Cherenkov light detection in TeO$_2$ bolometers is rapidly growing and entering its final phase. 
On the other hand, the successful operation of the CUORE experiment (988 massive bolometers) ensures that hundreds of kilograms of isotope can be studied in a stable and reliable cryogenic system.\\ Although three decades have passed since they were conceived, bolometers are still a very active field and their performance is continuously improving. The viability of a next-generation experiment in an almost background-free environment is within reach if the current R\&D is successful.\\ Moreover, in the last five years the superior bolometric features have renewed the interest in rare nuclear processes as a tool for the comprehension of nuclear models. The improvements in terms of performance and radio-purity of materials, required by the neutrinoless double beta decay search, are beneficial for all rare nuclear process searches and will boost their sensitivity to unprecedented levels. \\ \bibliographystyle{ws-ijmpa}
\section{Introduction} The problem of index coding was introduced by Birk and Kol in \cite{BiK}. The index coding problem consists of a single sender with a set of \textit{M} independent messages \begin{displaymath} \mathcal{X}=\lbrace x_1,x_2,\dots,x_M\rbrace, \end{displaymath} and a set of $ N $ users \begin{displaymath} \mathcal{D}=\lbrace D_1,D_2,\dots,D_N \rbrace, \end{displaymath}connected to the sender by a single shared error-free link, with the $ k^{th} $ user $ D_k $ identified as \begin{displaymath} D_k=(\mathcal{X}_k,\mathcal{A}_k), \end{displaymath}where $ \mathcal{X}_k \subseteq \mathcal{X}$ is the set of messages desired by $ D_k $, the set $ \mathcal{A}_k \subset \mathcal{X}$ is comprised of the messages available to user $ D_k $ as side-information. The set of side-informations $ \mathcal{A}_k $ satisfies $ \mathcal{X}_k \cap \mathcal{A}_k=\phi$, i.e., a user does not desire a message that is already available to it. An $ (\mathcal{S},n,\mathcal{R}) $ index coding scheme \cite{MCJ} corresponds to the choice of a finite alphabet $ \mathcal{S} $ of cardinality $ |\mathcal{S}| > 1 $, a coding function, $ f $, and a decoding function $ g_{k,i} $, for each desired message $ x_i $ at each user $ D_k $. The coding function maps all the messages to the sequence of transmitted symbols \begin{displaymath} f(x_1,x_2,\dots,x_M)=S^n \end{displaymath}where $ S^n \in \mathcal{S}^n $ is the sequence of symbols transmitted over $ n $ channel uses. Here $ \forall m \in \lbrace1,2,\dots,M\rbrace $, message $ x_m $ is a random variable uniformly distributed over the set \begin{displaymath} x_m \in \lbrace1,2,\dots,|\mathcal{S}|^{nR_m}\rbrace, \end{displaymath}and $ \mathcal{R} \in \mathbb{R}^M_+ $ is simply a rate vector \begin{displaymath} \mathcal{R}=(R_1,R_2,\dots,R_M) \end{displaymath}that satisfies the condition that $ |\mathcal{S}|^{nR_m} $ is an integer for every $ m \in \lbrace1,2,\dots,M\rbrace $. 
At each user, $ D_k $, there is a decoding function for each desired message \begin{displaymath} g_{k,i}(S^n,\mathcal{A}_k)=x_i, \end{displaymath}for all $ i $ such that $ x_i\in \mathcal{X}_k $.\\An index coding scheme is said to be a linear index coding scheme if the coding and the decoding functions are linear and the alphabet $ \mathcal{S} $ is a finite field. An index coding scheme is said to be a scalar index coding scheme if \begin{displaymath} \mathcal{R}=\left(\frac{1}{n},\frac{1}{n},\dots,\frac{1}{n}\right). \end{displaymath}In other words, in a scalar index coding scheme, the sender sends one symbol for each message over $ n $ channel uses. $ n $ is referred to as the length of the index code. An index coding problem is said to be unicast \cite{OnH} if $ \mathcal{X}_k \cap \mathcal{X}_j = \phi $ for $ k\not= j $ and $ k,j\in \lbrace1,2,\dots,N\rbrace $, i.e., no message is desired by more than one user. The problem is said to be single unicast if the problem is unicast and $ |\mathcal{X}_k|=1$ for all $ k\in \lbrace1,2,\dots,N\rbrace $. A unicast index coding problem can be reduced to a single unicast index coding problem by splitting each user demanding more than one message into several users, each demanding one message and having the same side-information as the original user. For example, suppose there are $ 5 $ messages at the sender, $ \lbrace x_1,x_2,x_3,x_4,x_5\rbrace $. A user demanding the three messages $ x_1 $, $ x_2 $ and $ x_3 $ and with side-information $ x_4 $ and $ x_5 $ is split into three users, each with side-information $ x_4 $ and $ x_5 $ and demanding one message, $ x_1 $, $ x_2 $ and $ x_3 $ respectively. 
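The unicast-to-single-unicast reduction described above can be sketched in a few lines (a minimal illustration; the data representation by message indices is an assumption, not from the original):

```python
def split_to_single_unicast(users):
    """Split each user (wants, side_info) into one user per wanted message.

    Each resulting user demands exactly one message and keeps the same
    side-information set as the original user.
    """
    return [({x}, set(side_info))
            for wants, side_info in users
            for x in sorted(wants)]

# Example from the text: one user demands x1, x2, x3 and knows x4, x5.
users = [({1, 2, 3}, {4, 5})]
print(split_to_single_unicast(users))
# three single-demand users, each with side-information {4, 5}
```

The reduction preserves feasibility: any code decodable by the split users is decodable by the original user, and vice versa.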
Single unicast index coding problems can be described by a directed graph called a side-information graph \cite{BBJK}, in which the vertices represent the indices of the messages $ \lbrace x_1,x_2,\dots,x_M\rbrace $ and there is a directed edge from vertex $ i $ to vertex $ j $ if and only if the user requesting $ x_i $ has $ x_j $ as side-information. The set of vertices in a directed graph $ \mathcal{G} $ is denoted by $ V(\mathcal{G}) $ and the set of vertices in the out-neighbourhood of a vertex $q$ in $\mathcal{G}$ is denoted by $N^{+}_{\mathcal{G}}(q)$. The Interlinked Cycle Cover (ICC) scheme was proposed by Thapa et al. \cite{TOJ} as a scalar linear index coding scheme to solve unicast index coding problems, based on a graph structure called an Interlinked Cycle (IC) structure. \begin{defn}[\textbf{IC Structure} \cite{TOJ}] A side-information graph $\mathcal{G}$ is called a $K$-IC structure with inner vertex set $V_{I} \subseteq V(\mathcal{G})$, such that $|V_{I}| = K$, if $ \mathcal{G} $ satisfies the following three conditions. \begin{enumerate} \item There is no I-cycle in $\mathcal{G}$, where an I-cycle is defined as a cycle which contains only one inner vertex. \item There is a unique I-path between any two different inner vertices in $\mathcal{G}$, where an I-path is defined as a path from one inner vertex to another inner vertex without passing through any other inner vertex (as a result, $K$ rooted trees can be drawn, where each rooted tree is rooted at an inner vertex and has the remaining inner vertices as the leaves). \item $\mathcal{G}$ is the union of the $K$ rooted trees. \end{enumerate} The set of vertices $ V(\mathcal{G})\backslash V_I $ is called the set of non-inner vertices, denoted by $ V_{NI} $. Let the $K$-IC structure, $\mathcal{G}$, have inner vertex set $V_I=\lbrace 1,2,\dots,K\rbrace$ and non-inner vertices $ V_{NI}=\lbrace K+1, K+2,\dots,N \rbrace $. 
Let $T_i$ be the rooted tree corresponding to the inner vertex $i$, where $i \in \lbrace 1,2,\dots,K\rbrace$, and let $V_{NI}(i)$ be the set of non-inner vertices in $\mathcal{G}$ which appear in the rooted tree $T_i$ of an inner vertex $ i $. \end{defn} The ICC scheme finds disjoint IC structures in a given side-information graph and then constructs an index code for each IC structure using the following construction proposed by Thapa et al. \cite{TOJ} (stated below as \textit{Construction} $1$). \begin{cons}[An index code construction for IC structures] Let the $K$-IC structure be denoted by $\mathcal{G}$ and let $|V(\mathcal{G})|=N$. Let $ V(\mathcal{G})=\lbrace1,2,\dots,N\rbrace$, let $V_I=\lbrace1,2,\dots,K\rbrace$ be the set of the $K$ inner vertices, and hence $ V_{NI}=\lbrace K+1, K+2,\dots,N\rbrace $. Let $x_n \in \mathbb{F}_q$ be the message corresponding to the vertex $n \in V(\mathcal{G})$, where $ \mathbb{F}_q $ is a finite field of characteristic $ 2 $ to which all the $ N $ messages at the sender belong (note that in the single unicast setting, the number of messages equals the number of users). \begin{enumerate} \item An index code symbol $W_I$, obtained by the XOR of the messages corresponding to the inner vertices, is transmitted, where \begin{equation} W_I=\underset{i=1}{\overset{K}{\bigoplus}} x_i. \end{equation} \item An index code symbol corresponding to each non-inner vertex, obtained by the XOR of the message corresponding to the non-inner vertex with the messages corresponding to the vertices in its out-neighbourhood, is transmitted, i.e., for $j \in V_{NI}$, $W_j$ is transmitted, where \begin{equation} W_j=x_j \underset{q \in N^{+}_{\mathcal{G}}(j)}{\bigoplus} x_q, \end{equation} \end{enumerate}where $ \oplus $ denotes addition over $ \mathbb{F}_q $. \end{cons} \begin{algo} The following is the algorithm proposed in \cite{TOJ} to decode an index code obtained by using \textit{Construction} $ 1 $ on an IC structure, $ \mathcal{G} $. 
\begin{itemize} \item The message $x_j$ corresponding to a non-inner vertex $j$ is decoded directly using the transmission $W_j$, and \item the message $x_i$ corresponding to an inner vertex $i$ is decoded using \begin{displaymath} Z_i=W_I \underset{q \in V_{NI}(i)}{\bigoplus}W_q \implies Z_i=x_i \underset{k:k \in N^{+}_{T_{i}}(i)}{\bigoplus}x_k. \end{displaymath} \end{itemize} \end{algo} Recently, in \cite{VaR} it has been shown that the index codes obtained from \textit{Construction} $1$ are not necessarily decodable using \textit{Algorithm} $1$ for some IC structures. The contributions of this paper are as follows. \begin{itemize} \item The cases where the index code obtained from \textit{Construction} $ 1 $ on a given IC structure is decodable using \textit{Algorithm} $ 1 $ are identified and presented in \textit{Theorem} $ 1 $. \item It is shown in \textit{Theorem} $2$ that an IC structure which has no cycles containing only non-inner vertices satisfies the conditions presented in \textit{Theorem} $ 1 $. Thus the proof of optimality of the IC structures of Case $ 1 $ of Theorem $ 3 $ in \cite{TOJ} holds. \item Examples of IC structures for which the index code given by \textit{Construction} $ 1 $ is decodable using some other decoding algorithm are presented in the following section. \item An example of an IC structure for which the index code given by \textit{Construction} $ 1 $ is not decodable using any decoding algorithm employing only linear combinations of the index code symbols is presented (Example \ref{exam8}). \end{itemize} The rest of the paper is organized as follows. Section $ 2 $ presents the examples that motivate the results of this paper. Section $ 3 $ discusses the main results along with some illustrative examples. Section $ 4 $ provides the conclusion and the open problems arising from the results obtained in this paper. 
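Since the messages live in a field of characteristic $2$, each index code symbol of \textit{Construction} $1$ can be modelled as the set of message indices it XORs, so that XOR of symbols becomes symmetric difference of index sets. The following sketch of \textit{Construction} $1$ and \textit{Algorithm} $1$ uses this encoding; the function names and the tiny IC structure in the usage example (inner vertices $1,2,3$, one non-inner vertex $4$) are ours, purely for illustration:

```python
def construction1(inner, non_inner, out_nbrs):
    """Construction 1: return the index code symbols as index sets.

    inner     : list of inner vertices
    non_inner : list of non-inner vertices
    out_nbrs  : dict mapping each vertex to its out-neighbourhood in G
    """
    symbols = {"I": frozenset(inner)}              # W_I = XOR of inner messages
    for j in non_inner:                            # W_j = x_j XOR x_q, q in N+(j)
        symbols[j] = frozenset({j} | out_nbrs[j])
    return symbols

def algorithm1(i, v_ni_of_i, symbols):
    """Algorithm 1 for an inner vertex i: Z_i = W_I XOR (W_q, q in V_NI(i)).
    XOR of symbols is the symmetric difference of their index sets."""
    z = set(symbols["I"])
    for q in v_ni_of_i:
        z ^= symbols[q]
    return z

# A hypothetical 3-IC structure: inner vertices 1, 2, 3, with the I-path
# from 1 to 2 routed through the single non-inner vertex 4.
out_nbrs = {1: {3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {2}}
symbols = construction1([1, 2, 3], [4], out_nbrs)
# Z_1 contains x_1 plus only messages in user 1's side-information {3, 4}.
print(sorted(algorithm1(1, {4}, symbols)))   # [1, 3, 4]
```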
\section{Motivating Examples} In \cite{VaR}, it is shown through an example that index codes obtained from \textit{Construction} $1$ are not necessarily decodable using \textit{Algorithm} $1$ for some IC structures. For a class of such structures, the code construction is modified and a decoding algorithm is presented. In this section we present two more examples to show that the codes from \textit{Construction} $1$ are not decodable using \textit{Algorithm} $1$. However, in the following section, after presenting the main results, these two codes are revisited and shown to be decodable with another algorithm employing only linear combinations of the index code symbols. \begin{figure}[!t] \includegraphics[height=\columnwidth,width=\columnwidth,angle=0]{Fig_1} \caption{$6$-IC structure $\mathcal{G}_1$ with inner vertex set, $V_I=\lbrace1,2,3,4,5,6\rbrace$.} \label{f1} \end{figure} \begin{figure*}[!t] \centering \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_1_1} \caption{} \label{rt11} \end{subfigure}% \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_1_2} \caption{} \label{rt12} \end{subfigure} \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_1_3} \caption{} \label{rt13} \end{subfigure} \centering \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_1_4} \caption{} \label{rt14} \end{subfigure}% \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_1_5} \caption{} \label{rt15} \end{subfigure} \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_1_6} \caption{} \label{rt16} \end{subfigure} \caption{Figures showing rooted trees of inner vertices $ 1,2,3,4,5,6 $ of $ \mathcal{G}_1 $, respectively.} \end{figure*} \begin{ex} \label{exam1} Consider $\mathcal{G}_1$, a side-information graph which is a $6$-IC structure with inner vertex set $ V_I = \lbrace1,2,3,4,5,6\rbrace$, given in Fig. \ref{f1}. 
It can be easily verified that \begin{enumerate} \item there are no cycles containing only one vertex from the set $\lbrace1,2,3,4,5,6\rbrace$ in $\mathcal{G}_1$ (i.e., no I-cycles), \item using the rooted trees for each vertex in the set $\lbrace1,2,3,4,5,6\rbrace$, which are given in Fig. \ref{rt11}, \ref{rt12}, \ref{rt13}, \ref{rt14}, \ref{rt15} and \ref{rt16} respectively, there exists a unique path between any two different vertices in $ V_I $ in $ \mathcal{G}_1 $ that does not contain any other vertex in $ V_I $ (i.e., a unique I-path between any pair of inner vertices), \item $\mathcal{G}_1$ is the union of all the $6$ rooted trees. \end{enumerate} Using \textit{Construction} $1$, the transmitted index code symbols are \begin{eqnarray*} W_I &=&x_1\oplus x_2\oplus x_3\oplus x_4\oplus x_5\oplus x_6\\ W_7&=&x_7 \oplus x_2 \oplus x_6\\ W_8&=&x_8 \oplus x_3 \oplus x_5\\ W_9&=&x_9\oplus x_{10}\\ W_{10}&=&x_{10}\oplus x_{11}\\ W_{11}&=&x_{11}\oplus x_{12}\\ W_{12}&=&x_{12}\oplus x_{4} \oplus x_{13}\\ W_{13}&=&x_{13}\oplus x_{5} \oplus x_{9}\\ W_{14}&=&x_{14}\oplus x_{9}. \end{eqnarray*} Now consider the rooted tree of the inner vertex $2$ shown in Fig. \ref{rt12}. Applying \textit{Algorithm} $1$ to decode $x_2$, we get \begin{displaymath} Z_2 =W_I \oplus W_{10} \oplus W_{11} \oplus W_{12} \oplus W_{13} \end{displaymath} which results in \begin{displaymath} Z_2 =x_1 \oplus x_2 \oplus x_3 \oplus x_6 \oplus x_{10} \oplus x_9, \end{displaymath} using which the message $x_2$ cannot be decoded by the user requesting it, since $x_9$ is not available at that user as side-information. \end{ex} \begin{ex} \label{exam2} Consider $\mathcal{G}_2$, a side-information graph which is a $5$-IC structure with inner vertex set $ V_I=\lbrace1,2,3,4,5\rbrace$, given in Fig. \ref{f2}. 
\begin{figure}[!t]\centering \includegraphics[height=\columnwidth, width=\columnwidth,angle=0]{Fig_2} \caption{$5$-IC structure, $\mathcal{G}_2$ with $V_I=\lbrace1,2,3,4,5\rbrace$.} \label{f2} \end{figure} It can be easily verified that \begin{enumerate} \item there are no cycles containing only one vertex from the set $\lbrace1,2,3,4,5\rbrace$ in $\mathcal{G}_2$ (i.e., no I-cycles), \item using the rooted trees for each vertex in the set $\lbrace1,2,3,4,5\rbrace$, which are given in Fig. \ref{rt21}, \ref{rt22}, \ref{rt23}, \ref{rt24} and \ref{rt25}, respectively, there exists a unique path between any two different vertices in $ V_I $ in $ \mathcal{G}_2 $ that does not contain any other vertex in $ V_I $ (i.e., a unique I-path between any pair of inner vertices), \item $\mathcal{G}_2$ is the union of all the $5$ rooted trees. \end{enumerate} \begin{figure*}[!t] \centering \begin{subfigure}{.31\textwidth} \hspace{-8mm} \includegraphics[width=15pc]{RT_2_1} \caption{} \label{rt21} \end{subfigure}% \begin{subfigure}{.31\textwidth} \hspace{-8mm} \includegraphics[width=15pc]{RT_2_2} \caption{} \label{rt22} \end{subfigure} \begin{subfigure}{.31\textwidth} \hspace{-8mm} \includegraphics[width=15pc]{RT_2_3} \caption{} \label{rt23} \end{subfigure} \centering \begin{subfigure}{.31\textwidth} \hspace{-8mm} \includegraphics[width=15pc]{RT_2_4} \caption{} \label{rt24} \end{subfigure}% \begin{subfigure}{.31\textwidth} \hspace{-8mm} \includegraphics[width=15pc]{RT_2_5} \caption{} \label{rt25} \end{subfigure} \caption{Figures showing rooted trees of inner vertices $ 1,2,3,4,5 $ of $ \mathcal{G}_2 $, respectively.} \end{figure*} Using \textit{Construction} $1$, the index code symbols are \begin{eqnarray*} W_I &=& x_1 \oplus x_2 \oplus x_3 \oplus x_4 \oplus x_5 \\ W_6 &=& x_6 \oplus x_3 \\ W_7 &=& x_7 \oplus x_8 \\ W_8 &=& x_8 \oplus x_4 \oplus x_9\\ W_9 &=& x_9 \oplus x_5 \oplus x_8\\ W_{10} &=& x_{10} \oplus x_3 \oplus x_{11}\\ W_{11} &=& x_{11} \oplus x_1. 
\end{eqnarray*} Now, consider the rooted tree for the inner vertex $1$, shown in Fig. \ref{rt21}. By applying \textit{Algorithm} $1$ to decode $x_1$, we get \begin{displaymath} Z_1=W_I \oplus W_6 \oplus W_7 \oplus W_8\oplus W_9 \end{displaymath} which results in \begin{displaymath} Z_1=x_1 \oplus x_2 \oplus x_6 \oplus x_7 \oplus x_8, \end{displaymath} using which the message $x_1$ is not decodable by the user requesting it, because $ x_8 $ is not available at that user as side-information. \end{ex} \section{Main Results} The two examples in the previous section motivate \textit{Theorem} $1$, which imposes a set of necessary and sufficient conditions on a given IC structure for an index code obtained by using \textit{Construction} $1$ for that IC structure to be decodable using \textit{Algorithm} $1$. Recall that, for the $K$-IC structure $\mathcal{G}$ having inner vertex set $V_I=\lbrace 1,2,\dots,K\rbrace$ and non-inner vertices $ V_{NI}=\lbrace K+1, K+2,\dots,N \rbrace $, $T_i$ is the rooted tree corresponding to the inner vertex $i$, where $i \in \lbrace 1,2,\dots,K\rbrace$, and $V_{NI}(i)$ is the set of non-inner vertices in $\mathcal{G}$ which appear in the rooted tree $T_i$ of an inner vertex $ i $. For each $ i \in \lbrace1,2,\dots,K\rbrace $ and for a non-inner vertex $ j $ which is at a depth $ \geq2 $ in the rooted tree $ T_i $, define $ a_{i,j} $ as the number of vertices in $V_{NI}(i)$ that have $ j $ in their out-neighbourhood in $ \mathcal{G} $, i.e., for each $ i \in \lbrace1,2,\dots,K\rbrace $ and for $j \in {V_{NI}(i) \backslash N^+_{T_i}(i)} $, \begin{displaymath} a_{i,j} \triangleq |\lbrace v:v\in V_{NI}(i), j\in N^{+}_{\mathcal{G}}(v)\rbrace|. 
\end{displaymath} Also, for each $ i \in \lbrace1,2,\dots,K\rbrace $ and for a non-inner vertex $ j $ not in the rooted tree $ T_i $, define $ b_{i,j} $ as the number of vertices in $ V_{NI}(i) $ that have $ j $ in their out-neighbourhood in $ \mathcal{G}$, i.e., for each $ i \in \lbrace1,2,\dots,K\rbrace $ and $ j \in V(\mathcal{G}) \backslash V(T_i) $, \begin{displaymath} b_{i,j} \triangleq |\lbrace v:v\in V_{NI}(i),~j\in N^{+}_{\mathcal{G}}(v)\rbrace|. \end{displaymath} First, the following \textit{Lemma} is proved. \begin{lm} \label{lem1} Given an IC structure $ \mathcal{G} $ with inner vertex set $ V_I=\lbrace1,2,\dots,K \rbrace $, $ b_{i,j} \in \lbrace 0,1\rbrace $ for each $ i \in \lbrace1,2,\dots,K\rbrace $ and $ j \in V(\mathcal{G}) \backslash V(T_i) $. \end{lm} \begin{proof} Suppose, for some $ i \in \lbrace 1,2,\dots,K \rbrace $ and some $ j \in V(\mathcal{G})\backslash V(T_i)$, that $ b_{i,j}=a $ for some integer $ a \geq2 $. Let $ p $ and $ q $ be any two different vertices in the set $ \lbrace v:v\in V_{NI}(i), j\in N^{+}_{\mathcal{G}}(v)\rbrace $. Then $ j \in N^+_{\mathcal{G}}(p) $ and $ j \in N^+_{\mathcal{G}}(q) $. In $ T_i $, $ p $ and $ q $ can be predecessors of a single inner vertex or of two different inner vertices. \begin{case}[i] Consider the case where $ p $ and $ q $ are predecessors of a single inner vertex $ n \in V_I \backslash \lbrace i \rbrace $, i.e., the non-inner vertices $ p $ and $ q $ are on the I-path from the inner vertex $ i $ to the inner vertex $ n $. Also, let $ q $ be reached from $ p $, i.e., on the I-path from $ i $ to $ n $, the vertex $ p $ is reached first and $ q $ is reached from $ p $. Since every non-inner vertex has to be a predecessor of at least one inner vertex (by the definition of an IC structure), the non-inner vertex $ j $ must also be a predecessor of an inner vertex. Let $ j $ be a predecessor of an inner vertex $ m \in V_I\backslash \lbrace i,n \rbrace $. 
Since the arc from $ p $ to $ j $ does not exist in $ T_i $ but exists in $ \mathcal{G} $, there exists an I-path from an inner vertex $ s\in V_I\backslash\lbrace i,n \rbrace $ to the inner vertex $ m $, passing through the non-inner vertex $ p $ and then through $ j $. But then there exist two I-paths in $ \mathcal{G} $ from $ s $ to $ m $: one that has a direct arc between $ p $ and $ j $, \begin{displaymath} s \rightarrow \dots \rightarrow p \rightarrow j \dots \rightarrow m, \end{displaymath} and one in which $ j $ is reached from $ p $ through $ q $, \begin{displaymath} s \rightarrow \dots \rightarrow p \dots \rightarrow q\rightarrow j \dots \rightarrow m, \end{displaymath} which is not allowed in an IC structure; hence $ p $ and $ q $ cannot be predecessors of a single inner vertex. \end{case} \begin{case}[ii] The other case is where $ p $ and $ q $ are predecessors of different inner vertices. Let $ p $ be a predecessor of an inner vertex $ n \in V_I \backslash \lbrace i \rbrace $ and let $ q $ be a predecessor of an inner vertex $ m \in V_I \backslash \lbrace i,n\rbrace $. Let the set of inner vertices that are reached from the non-inner vertex $ j $ through some path in $ \mathcal{G} $ be $ V_I(j) $. Define $ S_j $ as the set of vertices that are successors of $ j $ and predecessors of vertices in $ V_I(j) $. We have $ V_I(j) \subset V(T_i) $. The paths from $ i $ to some vertices in $ V_I(j) $ pass through some vertices in $ S_j $, and the paths from $ i $ to the remaining vertices in $ V_I(j) $ do not pass through any vertex in $ S_j $. The subset of vertices in $ S_j $ which are also successors of the inner vertex $ i $ is denoted by $ S_{i,j} $. \begin{figure*}[!t] \centering \includegraphics[height=\columnwidth,width=\columnwidth,angle=0]{sij} \caption{Illustration of $ S_j $ and $ S_{i,j}$.} \label{pf2} \end{figure*}Fig. \ref{pf2} illustrates $ S_j $ and $ S_{i,j} $. Dotted arrows indicate the presence of some vertices in the path. 
Now \begin{clm} $ p,q\in S_{i,j} $. \end{clm} \begin{proof} Since there is an edge from $ p $ to $ j $ in $ \mathcal{G} $, $ V_I(j) \subseteq V_I(p) $. Let $ t_p $ be a vertex in $ V_I(j) $. Then $ p $ will be a predecessor of $ t_p $. Fig. \ref{pf2} also illustrates all the possible positions of $ p $. In the cases when $ p \not\in S_{i,j} $, the corresponding vertices are shown dotted and $ p $ is marked as $ p' $ and $ p'' $. It is seen that there exist two I-paths from $ i $ to $ t_p $ in the two cases when $ p \not \in S_{i,j} $, which is not allowed in an IC structure. So $ p \in S_{i,j} $, which means that there exists a path from $ j $ to $ p $. Similarly, there exists an inner vertex $ t_q \in V_I(j) $ to which $ q $ is a predecessor. This implies that $ q \in S_{i,j} $. Hence there exists a path from $ j $ to $ q $. \end{proof}Since both $ p$ and $ q $ are in $ S_{i,j}$, i.e., $ j $ is a predecessor of both $ p $ and $ q $, there exists a path from $ p $ to $ q $ through $ j $ in $ \mathcal{G} $, which implies that there exists an I-path from $ i $ to $ m $ through $ p $ and $ q $. This contradicts the assumption that $ p $ and $ q $ are predecessors of different inner vertices. \end{case}Thus $ b_{i,j} $ cannot exceed $1$. Hence $ b_{i,j} \in \lbrace0,1\rbrace $. \end{proof} After Theorem \ref{thm2}, several examples are discussed, for some of which $b_{i,j}=0$ and for the remaining ones $b_{i,j}=1$. \begin{thm} \label{thm1} The index code obtained from \textit{Construction} $1$ on $ \mathcal{G} $ is decodable using \textit{Algorithm} $1$ if and only if the IC structure, $ \mathcal{G} $, satisfies the following two conditions \textit{c}$1$ and \textit{c}$2$. \begin{cond}[\textit{c}$1$] $ a_{i,j} $ must be an odd number for each $ i \in \lbrace1,2,\dots,K\rbrace $ and $ j \in V_{NI}(i)\backslash N^+_{T_i}(i)$. 
\end{cond} \begin{cond}[\textit{c}$2$] $ b_{i,j} $ must be zero for each $ i \in \lbrace1,2,\dots,K\rbrace $ and $ j \in V(\mathcal{G}) \backslash V(T_i)$. \end{cond} \end{thm} \begin{proof} The proof of the \textit{if} part is as follows. Let the inner vertex of interest be $i$ and its rooted tree be $T_i$. Using \textit{Algorithm} $1$ to decode $x_i$, $Z_i$ is computed as \begin{displaymath} Z_i=W_I \underset{j\in V_{NI}(i)}{\oplus}W_j. \end{displaymath} In $Z_i$, the messages corresponding to the inner vertices that are not directly connected to $i$ will be cancelled, since each such message appears exactly twice: once in $W_I$ and once in the index code symbol corresponding to the non-inner vertex which is the immediate predecessor of that inner vertex in $T_i$. The message $x_j$ corresponding to a non-inner vertex $j$ at a depth $\geq 2$ in $T_i$ appears an even number of times in $Z_i$: once in $W_j$, and an odd number of times in the index code symbols corresponding to the non-inner vertices in $T_i$, since, by hypothesis, $j$ is in the out-neighbourhood of an odd number of vertices in $ V_{NI}(i) $ in $ \mathcal{G} $; hence it is also cancelled. Finally, the message corresponding to a non-inner vertex that is not in $T_i$ is not present in $Z_i$, as it appears in none of the index code symbols corresponding to the vertices in $V_{NI}(i)$, since, by hypothesis, a non-inner vertex not in $T_i$ is in the out-neighbourhood of no vertex in $ V_{NI}(i) $ in $ \mathcal{G} $. So, $Z_i$ will be of the form \begin{displaymath} Z_i=x_i \underset{j \in S \subseteq N^{+}_{T_{i}}(i)}{\oplus}x_j \end{displaymath} and hence $x_i$ is decodable by the user requesting $ x_i $, as the user has the messages corresponding to the vertices in $ N^+_{T_i}(i) $ as side-information. The proof of the \textit{only if} part follows. 
Let the condition \textit{c}$ 1 $ be violated and \textit{c}$2$ be true, and let, in the rooted tree $T_i$ of an inner vertex $i$, a non-inner vertex $j$ at depth $\geq 2$ be in the out-neighbourhood of an even number of vertices in $ V_{NI}(i) $ in $ \mathcal{G} $, i.e., $ a_{i,j} $ is even. It is evident that $x_j$ is not cancelled in $Z_i$, because $x_j$ appears once in $W_j$ and an even number of times in the index code symbols corresponding to the vertices in $ V_{NI}(i)\backslash \lbrace j \rbrace$. As a result, $x_i$ is not decodable by the user requesting it, since $ x_j $ is not available to that user as side-information.\\ Now, let \textit{c}$1$ be true and \textit{c}$2$ be false. Let the non-inner vertex $j$ violate \textit{c}$2$ for a rooted tree $T_i$ corresponding to an inner vertex $i$, i.e., $ b_{i,j} = 1 $. Then $x_j$ is not cancelled in $Z_i$, because it appears only once in the index code symbols corresponding to the vertices in $ V_{NI}(i)$, thus preventing the user requesting $x_i$ from decoding it, since that user does not have $ x_j $ as side-information. \end{proof} \begin{thm} \label{thm2} An IC structure which has no cycles containing only non-inner vertices satisfies both the conditions \textit{c}$ 1 $ and \textit{c}$ 2 $. \end{thm} \begin{proof} The proof is in two parts. The condition \textit{c}$ 1 $ is shown to be satisfied in Part $ 1 $ and the condition \textit{c}$ 2 $ in Part $ 2 $. \begin{part} Let $ \mathcal{G} $ be an IC structure which has no cycles containing only non-inner vertices. It will be shown that $ a_{i,j} \geq 2 $ is not possible for any inner vertex $ i $ and a non-inner vertex $j$ at depth $ \geq2 $ in $ T_i $. Suppose that, for such $ i $ and $ j $, $ a_{i,j} = z $ for some integer $ z\geq2 $. Since $ j $ is at depth $ \geq2 $, there exists a non-inner vertex $ p $ which is a successor of $ i $, a predecessor of $ j $, and has $ j $ in its out-neighbourhood. So, $ a_{i,j} \geq1$. 
Let $ q $ be a non-inner vertex in the set $ \lbrace v:v\in V_{NI}(i), j\in N^{+}_{\mathcal{G}}(v)\rbrace $ with $ q\not = p $. Let $ V_I(j) $ be the set of inner vertices reached from $ j $ in $ T_i $. Let $ S_j $ denote the set of non-inner vertices that are successors of $ j $ and predecessors of the vertices in $ V_I(j) $. \begin{clm} $ q \in S_j $. \end{clm} \begin{proof} Suppose not. Then there exist two I-paths from $ i $ to vertices in $ V_I(j) $ (one passing through $ p $ and the other passing through $ q $; see Fig. \ref{imag}). \begin{figure}[!t] \centering \includegraphics[height=2in,width=3in,angle=0]{image} \caption{Figure illustrating $ q \not \in S_j $.} \label{imag} \end{figure} Hence we have a contradiction. \end{proof} Now, $ q\in S_j $ means that there is a path from $ j $ to $ q $. As there exists an edge from $ q $ to $ j $, by the definition of $ q $, a cycle containing only non-inner vertices, which includes $ j $ and $ q $, is formed. This is a contradiction to the assumption that $ \mathcal{G} $ does not have any cycles containing only non-inner vertices. Hence $ a_{i,j} \leq1 $. As a result, $ a_{i,j} = 1 $ (an odd number) for any inner vertex $ i $ and a non-inner vertex $ j $ which is at a depth $ \geq2 $ in $ T_i $. \end{part} \begin{part} Let $ \mathcal{G} $ be an IC structure which has no cycles containing only non-inner vertices. Suppose $ b_{i,j}=1 $ for an inner vertex $ i $ and a non-inner vertex $ j $ which is not in the rooted tree $ T_i $. This implies that there exists a non-inner vertex $ p $ in $ T_i $ which has $ j $ in its out-neighbourhood. Let $ V_I(j)$ be the set of inner vertices that are successors of the non-inner vertex $ j $. Let $ S_j $ be the set of non-inner vertices that are successors of $ j $ and predecessors of vertices in $ V_I(j) $, and let $ S_{i,j} $ be the subset of non-inner vertices in $ S_j $ which are successors of $ i $ (see Fig. \ref{pf2}). \begin{clm} $ p \in S_{i,j} $. \end{clm} \begin{proof} Suppose not. 
Then there exist two I-paths from $ i $ to $ t_p $, which is not allowed in an IC structure; see Fig. \ref{pf2}. \end{proof} Since $p \in S_{i,j}$, there exists a path from $ j $ to $ p $. Since an edge exists from $ p $ to $ j $, by the definition of $ p $, a cycle containing only non-inner vertices, which includes $ p $ and $ j $, exists in $ \mathcal{G} $. This is a contradiction. \end{part}Hence $ b_{i,j} \not =1 $ for any inner vertex $ i $ and any non-inner vertex $ j $ not in the rooted tree $ T_i $. As $ b_{i,j} \in \lbrace0,1\rbrace $ (as proved in \textit{Lemma} $ 1 $), $ b_{i,j} = 0 $ for any inner vertex $ i $ and any non-inner vertex $ j $ which is not present in the rooted tree $ T_i $. \end{proof} The conditions of Theorem \ref{thm1} are now illustrated for Example \ref{exam1} and Example \ref{exam2} discussed in the previous section. Also, it is shown that, though the constructed codes for these two examples are not decodable using \textit{Algorithm} $1$, they are decodable using only linear combinations of the index code symbols. \begin{exmp}[continued] Table \ref{table1} shows that $ \mathcal{G}_1 $ violates \textit{c}$ 1 $ and Table \ref{table2} shows that \textit{c}$ 2 $ is also violated by $ \mathcal{G}_1 $. Since the conditions are violated in the rooted trees of the inner vertices $ 1 $, $ 2 $ and $ 3 $, the messages $ x_1 $, $ x_2 $ and $ x_3 $ are not decodable using \textit{Algorithm} $ 1 $, whereas the messages $ x_4 $, $ x_5 $ and $ x_6 $ are decodable using \textit{Algorithm} $ 1 $. \begin{remark} However, the messages $ x_1 $, $ x_2 $ and $ x_3 $ are decodable by using other linear combinations of the index code symbols, as shown. \begin{itemize} \item $ x_1 $ is decoded using $ Z'_1= W_I \oplus W_7 \oplus W_9 \oplus W_{10} \oplus W_{11} \oplus W_{12} \oplus W_{13} $ which results in $ Z'_1= x_1 \oplus x_3 \oplus x_7 $. 
\item $ x_2 $ and $ x_3 $ are decoded using $ Z'_2 = W_I \oplus W_9 \oplus W_{10} \oplus W_{11} \oplus W_{12} \oplus W_{13} $ which results in $ Z'_2= x_1 \oplus x_2 \oplus x_3 \oplus x_6 $. \end{itemize} \end{remark} \begin{table}[!t] \centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V_{NI}(i)\backslash N^+_{T_i}(i) $ & $ a_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace7,9,10,11,12,13,14\rbrace $ & $ \lbrace9,10,11,12,13\rbrace $ & $ 2 $,$ 1 $,$ 1 $,$ 1 $,$ 1 $ \\ \hline $ T_2 $ & $ \lbrace10,11,12,13\rbrace $ & $ \lbrace11,12,13\rbrace $ & $ 1 $,$ 1 $,$ 1 $ \\ \hline $ T_3 $ & $ \lbrace11,12,13\rbrace $ & $ \lbrace12,13\rbrace $ & $ 1 $,$ 1 $ \\ \hline $ T_4 $ & $ \lbrace7,8\rbrace $ & $ \phi $ & $ - $ \\ \hline $ T_5 $ & $ \phi $ & $ \phi $ & $ - $ \\ \hline $ T_6 $ & $ \lbrace9,10,11,12,13\rbrace $ & $ \lbrace9,10,11,12\rbrace $ & $ 1 $,$ 1 $,$ 1 $,$ 1 $ \\ \hline \end{tabular}\\ \caption{Table that verifies \textit{c}$ 1 $ for $ \mathcal{G}_1 $} \label{table1} \end{table} \begin{table*}[!t] \centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V(\mathcal{G}_1) \backslash V(T_i) $ & $ b_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace7,9,10,11,12,13,14\rbrace $ & $ \lbrace8\rbrace $ & $ 0 $ \\ \hline $ T_2 $ & $ \lbrace10,11,12,13\rbrace $ & $ \lbrace7,8,9,14\rbrace $ & $ 0 $,$ 0 $,$ 1 $,$ 0 $ \\ \hline $ T_3 $ & $ \lbrace11,12,13\rbrace $ & $ \lbrace7,8,9,10,14\rbrace $ & $ 0 $,$ 0 $,$ 1 $,$ 0 $,$ 0 $ \\ \hline $ T_4 $ & $ \lbrace7,8\rbrace $ & $ \lbrace9,10,11,12,13,14\rbrace $ & $ 0 $,$ 0 $,$ 0 $,$ 0 $,$ 0 $,$ 0 $\\ \hline $ T_5 $ & $ \phi $ & $ \lbrace7,8,9,10,11,12,13,14\rbrace $ & $ - $ \\ \hline $ T_6 $ & $ \lbrace9,10,11,12,13\rbrace $ & $\lbrace7,8,14\rbrace$ & $ 0 $,$ 0 $,$ 0 $\\ \hline \end{tabular}\\ \caption{Table that verifies \textit{c}$ 2 $ for $ \mathcal{G}_1 $} \label{table2} \end{table*} \end{exmp} \begin{exmp}[continued] Table \ref{table3} shows that $ \mathcal{G}_2 $ violates \textit{c}$ 1 $ and Table \ref{table4} shows that \textit{c}$ 2 $ is 
satisfied by $ \mathcal{G}_2 $. Since \textit{c}$ 1 $ is violated in the rooted tree of the inner vertex $ 1 $, the message $ x_1 $ is not decodable by using \textit{Algorithm} $ 1 $, while the rest of the messages corresponding to inner vertices are decodable by using \textit{Algorithm} $ 1 $. \begin{remark} However, $ x_1 $ is decodable using the linear combination $ Z''= W_I \oplus W_6 \oplus W_8 \oplus W_9 $ which results in $ Z''= x_1 \oplus x_2 \oplus x_6 $. \end{remark} \begin{table}[!t]\centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V_{NI}(i)\backslash N^+_{T_i}(i) $ & $ a_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace6,7,8,9\rbrace $ & $ \lbrace8,9\rbrace $ & $ 2 $,$ 1 $ \\ \hline $ T_2 $ & $ \phi $ & $ \phi $ & $ - $\\ \hline $ T_3 $ & $ \lbrace8,9,11\rbrace $ & $ \lbrace8\rbrace $ & $ 1 $\\ \hline $ T_4 $ & $ \lbrace10,11\rbrace $ & $ \lbrace11\rbrace $ & $ 1 $ \\ \hline $ T_5 $ & $ \lbrace10,11\rbrace $ & $ \lbrace11\rbrace $ & $ 1 $ \\ \hline \end{tabular}\\ \caption{Table that verifies \textit{c}$ 1 $ for $ \mathcal{G}_2 $} \label{table3} \end{table} \begin{table}[!t] \centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V(\mathcal{G}_2) \backslash V(T_i) $ & $ b_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace6,7,8,9\rbrace $ & $ \lbrace10,11\rbrace $ & $ 0 $,$ 0 $ \\ \hline $ T_2 $ & $ \phi $ & $ \lbrace6,7,8,9,10,11\rbrace $ & $ - $\\ \hline $ T_3 $ & $ \lbrace8,9,11\rbrace $ & $ \lbrace6,7,10\rbrace $ & $ 0 $,$ 0 $,$ 0 $ \\ \hline $ T_4 $ & $ \lbrace10,11\rbrace $ & $ \lbrace6,7,8,9\rbrace $ & $ 0 $,$ 0 $,$ 0 $,$ 0 $\\ \hline $ T_5 $ & $ \lbrace10,11\rbrace $ & $ \lbrace6,7,8,9\rbrace $ & $ 0 $,$ 0 $,$ 0 $,$ 0 $ \\ \hline \end{tabular}\\ \caption{Table that verifies \textit{c}$ 2 $ for $ \mathcal{G}_2 $} \label{table4} \end{table} \end{exmp} Example \ref{exam3} and Example \ref{exam4} discussed below illustrate Theorem \ref{thm2}. 
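The bookkeeping of $a_{i,j}$ and $b_{i,j}$ in the tables above reduces to one counting formula, $|\lbrace v \in V_{NI}(i) : j\in N^{+}_{\mathcal{G}}(v)\rbrace|$, and can be automated. A minimal sketch (function name ours), using the out-neighbourhoods of the non-inner vertices of $\mathcal{G}_1$ as read off from the index code symbols $W_7,\dots,W_{14}$ of Example \ref{exam1}:

```python
def count_predecessors(j, v_ni_i, out_nbrs):
    """|{v in V_NI(i) : j in N+_G(v)}| -- the common counting formula behind
    both a_{i,j} (j inside T_i at depth >= 2) and b_{i,j} (j outside T_i)."""
    return sum(1 for v in v_ni_i if j in out_nbrs[v])

# Out-neighbourhoods of the non-inner vertices of G_1, read off from the
# index code symbols W_7..W_14 of Example 1 (W_j = x_j XOR x_q, q in N+(j)).
out_nbrs = {7: {2, 6}, 8: {3, 5}, 9: {10}, 10: {11}, 11: {12},
            12: {4, 13}, 13: {5, 9}, 14: {9}}

v_ni_1 = {7, 9, 10, 11, 12, 13, 14}   # V_NI(1)
v_ni_2 = {10, 11, 12, 13}             # V_NI(2)

print(count_predecessors(9, v_ni_1, out_nbrs))   # a_{1,9} = 2 (even): c1 violated
print(count_predecessors(9, v_ni_2, out_nbrs))   # b_{2,9} = 1 (nonzero): c2 violated
```

These two values reproduce the entries $a_{1,9}=2$ of Table \ref{table1} and $b_{2,9}=1$ of Table \ref{table2}.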
\begin{ex} \label{exam3} Consider $\mathcal{G}_3$, a side-information graph which is a $3$-IC structure shown in Fig. \ref{f4}. Notice that $\mathcal{G}_3$ does not have any cycles consisting of only non-inner vertices and that it is \begin{figure}[!t] \centering \includegraphics[height=\columnwidth,width=\columnwidth,angle=0]{Fig_4} \caption{$ 3 $-IC structure $\mathcal{G}_3$ with $ V_I=\lbrace1,2,3\rbrace$.} \label{f4} \end{figure} indeed a 3-IC structure with inner vertex set $V_I=\lbrace1,2,3\rbrace$ since \begin{enumerate} \item there are no cycles containing only one vertex from the set $\lbrace1,2,3\rbrace $ in $\mathcal{G}_3$ (i.e., no I-cycles), \item using the rooted trees for each vertex in the set $\lbrace1,2,3\rbrace$, which are given in Fig. \ref{rt41}, \ref{rt42} and \ref{rt43} respectively, it is verified that there exists a unique path between any two different vertices in $ V_I $ in $ \mathcal{G}_3 $ that does not contain any other vertex in $ V_I $ (i.e., a unique I-path between any pair of inner vertices), \item $\mathcal{G}_3$ is the union of all the $3$ rooted trees. \end{enumerate} \begin{figure*}[!t] \centering \begin{subfigure}{.31\textwidth} \centering \includegraphics[width=15pc]{RT_4_1} \caption{} \label{rt41} \end{subfigure}% \begin{subfigure}{.31\textwidth} \centering \includegraphics[width=15pc]{RT_4_2} \caption{} \label{rt42} \end{subfigure} \begin{subfigure}{.31\textwidth} \centering \includegraphics[width=15pc]{RT_4_3} \caption{} \label{rt43} \end{subfigure} \caption{Figures showing rooted trees of inner vertices $ 1,2,3$ of $ \mathcal{G}_3 $, respectively.} \end{figure*} Conditions \textit{c}$ 1 $ and \textit{c}$ 2 $ are illustrated for $ \mathcal{G}_3 $ as follows. The rooted trees $T_1$, $ T_2 $ and $ T_3 $ have no non-inner vertices at depth $ \geq2 $ and hence \textit{c}$ 1 $ need not be verified. 
From Table \ref{table5}, it is clear that $ b_{i,j} = 0$ for each $ i \in \lbrace1,2,3\rbrace $ and $ j \in V(\mathcal{G}_3)\backslash V(T_i) $. \begin{table}[h!] \centering \hspace{5mm} \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V(\mathcal{G}_3) \backslash V(T_i) $ & $ b_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace 5,6 \rbrace $ & $ \lbrace4\rbrace $ & $ 0 $ \\ \hline $ T_2$ & $ \lbrace 4,6 \rbrace $ & $ \lbrace5\rbrace $ & $ 0 $ \\ \hline $ T_3$ & $ \lbrace 4,5 \rbrace $ & $ \lbrace6\rbrace $ & $ 0 $\\ \hline \end{tabular}\\ \caption{Table that illustrates \textit{c}$ 2 $ for $ \mathcal{G}_3 $.} \label{table5} \end{table} It is thus verified that \textit{c}$ 1 $ and \textit{c}$ 2 $ are satisfied by $ \mathcal{G}_3 $. \end{ex} \begin{ex} \label{exam4} Consider $\mathcal{G}_4$, a side-information graph which is a $4$-IC structure shown in Fig. \ref{f5}. \begin{figure}[h!] \includegraphics[height=\columnwidth,width=\columnwidth,angle=0]{Fig_5} \caption{$ 4 $-IC structure $\mathcal{G}_4$ with $ V_I=\lbrace1,2,3,4\rbrace$.} \label{f5} \end{figure} It is a $ 4 $-IC structure with inner vertex set $V_I=\lbrace1,2,3,4\rbrace$ since \begin{enumerate} \item there are no cycles with only one vertex from the set $ \lbrace1,2,3,4\rbrace $ in $\mathcal{G}_4$ (i.e., no I-cycles), \item using the rooted trees for each vertex in the set $\lbrace1,2,3,4\rbrace$, which are given in Fig. \ref{rt51}, \ref{rt52}, \ref{rt53} and \ref{rt54} respectively, it is verified that there exists a unique path in $ \mathcal{G}_4 $ between any two different vertices in $ V_I $ and that this path does not contain any other vertex in $ V_I $ (i.e., a unique I-path between any pair of inner vertices), \item $\mathcal{G}_4$ is the union of all the $4$ rooted trees. \end{enumerate} In addition, it has no cycles consisting of only non-inner vertices. 
\begin{figure*}[!t] \centering \begin{subfigure}{.31\textwidth} \hspace{-4mm} \includegraphics[width=15pc]{RT_5_1} \caption{} \label{rt51} \end{subfigure}% \begin{subfigure}{.31\textwidth} \hspace{-4mm} \includegraphics[width=15pc]{RT_5_2} \caption{} \label{rt52} \end{subfigure} \begin{subfigure}{.31\textwidth} \hspace{-4mm} \includegraphics[width=15pc]{RT_5_3} \caption{} \label{rt53} \end{subfigure} \begin{subfigure}{.31\textwidth} \hspace{-4mm} \includegraphics[width=15pc]{RT_5_4} \caption{} \label{rt54} \end{subfigure}% \caption{Figures showing rooted trees of inner vertices $ 1,2,3,4 $ of $ \mathcal{G}_4 $, respectively.} \end{figure*} Conditions \textit{c}$ 1 $ and \textit{c}$ 2 $ are illustrated for $ \mathcal{G}_4 $ as follows. The rooted trees $T_1$, $ T_2 $, $ T_3 $ and $ T_4 $ have no non-inner vertices at depth $ \geq2 $ and hence \textit{c}$ 1 $ need not be verified. Verification of \textit{c}$ 2 $ is done using Table \ref{table6}. \begin{table}[h!] \centering \hspace{7mm} \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI} $ & $ j \in V(\mathcal{G}_4)\backslash V(T_i) $ & $ b_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace 5\rbrace $ & $ \lbrace6\rbrace $ & $ 0 $ \\ \hline $ T_2$ & $ \lbrace 6 \rbrace $ & $ \lbrace5\rbrace $ & $ 0 $ \\ \hline $ T_3$ & $ \phi $ & $ \lbrace5,6\rbrace $ & $ - $\\ \hline $ T_4$ & $ \phi $ & $ \lbrace 5,6 \rbrace $ & $ - $\\ \hline \end{tabular}\\ \caption{Table that illustrates \textit{c}$ 2 $ for $ \mathcal{G}_4 $.} \label{table6} \end{table}It is thus verified that \textit{c}$ 1 $ and \textit{c}$ 2 $ are satisfied by $ \mathcal{G}_4$. \end{ex} The following three examples (Examples \ref{exam5}, \ref{exam6} and \ref{exam7}) illustrate Theorem \ref{thm1} for some IC structures having at least one cycle consisting of only non-inner vertices. \begin{ex} \label{exam5} Consider $\mathcal{G}_5$, a side-information graph which is a $5$-IC structure, shown in Fig. \ref{f3}. \begin{figure}[h!] 
\includegraphics[height=\columnwidth,width=\columnwidth,angle=0]{Fig_3} \caption{$5$-IC structure $\mathcal{G}_5$ with $V_I=\lbrace1,2,3,4,5\rbrace$.} \label{f3} \end{figure} It can easily be verified that it is a $ 5 $-IC structure with inner vertex set $V_I=\lbrace1,2,3,4,5\rbrace$ because \begin{enumerate} \item there are no cycles with only one vertex from the set $\lbrace1,2,3,4,5\rbrace$ in $\mathcal{G}_5$ (i.e., no I-cycles), \item using the rooted trees for each vertex in the set $\lbrace1,2,3,4,5\rbrace$, which are given in Fig. \ref{rt31}, \ref{rt32}, \ref{rt33}, \ref{rt34} and \ref{rt35} respectively, it is verified that there exists a unique path in $ \mathcal{G}_5 $ between any two different vertices in $ V_I $ and that this path does not contain any other vertex in $ V_I $ (i.e., a unique I-path between any pair of inner vertices), \item $\mathcal{G}_5$ is the union of all the $5$ rooted trees. \end{enumerate} Note that the vertices 7 and 8 form a cycle consisting of non-inner vertices. \begin{figure*}[!t] \centering \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_3_1} \caption{} \label{rt31} \end{subfigure}% \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_3_2} \caption{} \label{rt32} \end{subfigure} \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_3_3} \caption{} \label{rt33} \end{subfigure} \centering \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_3_4} \caption{} \label{rt34} \end{subfigure}% \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_3_5} \caption{} \label{rt35} \end{subfigure} \caption{Figures showing rooted trees of inner vertices $ 1,2,3,4,5 $ of $ \mathcal{G}_5 $, respectively.} \end{figure*} \begin{table}[h!] 
\centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V_{NI}(T_i)\backslash N^+_{T_i}(i) $ & $ a_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace6,7,8\rbrace $ & $ \lbrace8\rbrace $ & $ 1 $ \\ \hline $ T_2 $ & $ \phi $ & $ \phi $ & $ - $ \\ \hline $ T_3 $ & $ \lbrace7,8,10\rbrace $ & $ \lbrace7\rbrace $ & $ 1 $ \\ \hline $ T_4 $ & $ \lbrace9,10\rbrace $ & $ \lbrace10\rbrace $ & $ 1 $ \\ \hline $ T_5 $ & $ \lbrace9,10\rbrace $ & $ \lbrace10\rbrace $ & $ 1 $ \\ \hline \end{tabular}\\ \caption{Table that illustrates \textit{c}$ 1 $ for $ \mathcal{G}_5 $.} \label{table7} \end{table} \begin{table}[h!] \centering \hspace{4mm} \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V(\mathcal{G}_5)\backslash V(T_i) $ & $ b_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace6,7,8\rbrace $ & $ \lbrace9,10\rbrace $ & $0$, $ 0 $\\ \hline $ T_2 $ & $ \phi $ & $ \lbrace6,7,8,9,10\rbrace $ & $ - $ \\ \hline $ T_3$ & $ \lbrace7,8,10\rbrace $ & $ \lbrace6,9\rbrace $ & $ 0 $, $ 0 $\\ \hline $ T_4 $ & $ \lbrace9,10\rbrace $ & $ \lbrace6,7,8\rbrace $ & $ 0 $, $ 0 $, $ 0 $\\ \hline $ T_5 $ & $ \lbrace9,10\rbrace $ & $ \lbrace6,7,8\rbrace $ & $0$, $ 0 $, $ 0 $\\ \hline \end{tabular}\\ \caption{Table that illustrates \textit{c}$ 2 $ for $ \mathcal{G}_5 $.} \label{tab4} \end{table} Conditions \textit{c}$ 1 $ and \textit{c}$ 2 $ are illustrated for $ \mathcal{G}_5 $ in Table \ref{table7} and Table \ref{tab4}, respectively. As a result, \textit{Algorithm} $ 1 $ can be used to decode an index code obtained by using \textit{Construction} $ 1 $ on the IC structure $ \mathcal{G}_5 $. \end{ex} \begin{ex} \label{exam6} Consider $\mathcal{G}_6$, a side-information graph which is a $6$-IC structure, shown in Fig. \ref{f7}. \begin{figure}[h!] 
\includegraphics[height=\columnwidth,width=\columnwidth,angle=0]{Fig_7} \caption{$6$-IC structure $\mathcal{G}_6$ with $V_I=\lbrace1,2,3,4,5,6\rbrace$.} \label{f7} \end{figure}$\mathcal{G}_6$ is a $ 6 $-IC structure with inner vertex set $V_I=\lbrace1,2,3,4,5,6\rbrace$ because \begin{enumerate} \item there are no cycles with only one vertex from the set $\lbrace1,2,3,4,5,6\rbrace$ in $\mathcal{G}_6$ (i.e., no I-cycles), \item using the rooted trees for each vertex in the set $\lbrace1,2,3,4,5,6\rbrace$, which are given in Fig. \ref{rt71}, \ref{rt72}, \ref{rt73}, \ref{rt74}, \ref{rt75} and \ref{rt76} respectively, it is verified that there exists a unique path in $ \mathcal{G}_6 $ between any two different vertices in $ V_I $ and that this path does not contain any other vertex in $ V_I $ (i.e., a unique I-path between any pair of inner vertices), \item $\mathcal{G}_6$ is the union of all the $6$ rooted trees. \end{enumerate} Also, notice that there are three disjoint cycles, each consisting of only non-inner vertices. 
\begin{figure*}[!t] \centering \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_7_1} \caption{} \label{rt71} \end{subfigure}% \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_7_2} \caption{} \label{rt72} \end{subfigure} \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_7_3} \caption{} \label{rt73} \end{subfigure} \centering \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_1_4} \caption{} \label{rt74} \end{subfigure}% \begin{subfigure}{.37\textwidth} \hspace{-2mm} \includegraphics[width=15pc]{RT_7_5} \caption{} \label{rt75} \end{subfigure} \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_7_6} \caption{} \label{rt76} \end{subfigure} \caption{Figures showing rooted trees of inner vertices $ 1,2,3,4,5,6 $ of $ \mathcal{G}_6 $, respectively.} \end{figure*} \begin{table} \centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V(T_i)\backslash\lbrace V_I \cup N^+_{T_i}(i) \rbrace $ & $ a_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace8,9,10,11\rbrace $ & $ \lbrace9,11\rbrace $ & $ 1 $, $ 1 $ \\ \hline $ T_2 $ & $ \lbrace7,8,9,12\rbrace $ & $ \lbrace8,12\rbrace $ & $ 1 $, $ 1 $ \\ \hline $ T_3 $ & $ \lbrace10,11\rbrace $ & $ \lbrace10\rbrace $ & $ 1 $ \\ \hline $ T_4 $ & $ \lbrace7,12\rbrace $ & $ \lbrace7\rbrace $ & $ 1 $ \\ \hline $ T_5 $ & $ \lbrace7,12\rbrace $ & $ \lbrace7\rbrace $ & $ 1 $ \\ \hline $ T_6 $ & $\phi$ & $ \phi $ & $ - $ \\ \hline \end{tabular}\\ \caption{Table that verifies \textit{c}$ 1 $ for $ \mathcal{G}_6 $.} \label{table11} \end{table} \begin{table}[h!] 
\centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V(\mathcal{G}_6)\backslash V(T_i) $ & $ b_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace8,9,10,11\rbrace $ & $ \lbrace7,12\rbrace $ & $0$, $ 0 $\\ \hline $ T_2 $ & $ \lbrace7,8,9,12\rbrace $ & $ \lbrace10,11\rbrace $ & $ 0 $, $ 0 $ \\ \hline $ T_3$ & $ \lbrace10,11\rbrace $ & $ \lbrace7,8,9,12\rbrace $ & $ 0 $, $ 0 $, $ 0 $, $ 0 $\\ \hline $ T_4 $ & $ \lbrace7,12\rbrace $ & $ \lbrace8,9,10,11\rbrace $ & $ 0 $, $ 0 $, $ 0 $, $ 0 $\\ \hline $ T_5 $ & $ \lbrace7,12\rbrace $ & $ \lbrace8,9,10,11\rbrace $ & $ 0 $, $ 0 $, $ 0 $, $ 0 $\\ \hline $ T_6 $ & $ \phi $ & $ \lbrace7,8,9,10,11,12\rbrace $ & $ - $\\ \hline \end{tabular}\\ \caption{Table that verifies \textit{c}$ 2 $ for $ \mathcal{G}_6 $.} \label{table12} \end{table} \begin{v1} From Table \ref{table11} and Table \ref{table12}, it is observed that \textit{c}$ 1 $ and \textit{c}$ 2 $ are satisfied by $ \mathcal{G}_6 $. As a result, \textit{Algorithm} $ 1 $ can be used to decode an index code obtained by using \textit{Construction} $ 1$ on the IC structure $ \mathcal{G}_6 $.\end{v1} The index code obtained is $W_I = x_1 \oplus x_2 \oplus x_3 \oplus x_4 \oplus x_5 \oplus x_6; ~~W_7 = x_7 \oplus x_1 \oplus x_{12}; ~~ W_8 = x_8 \oplus x_3 \oplus x_9; ~~ W_9 = x_9 \oplus x_4 \oplus x_8; ~~ W_{10} = x_{10} \oplus x_5 \oplus x_{11}; W_{11} = x_{11} \oplus x_6 \oplus x_{10}; W_{12} = x_{12} \oplus x_6 \oplus x_7.$ Messages $x_7$, $x_8$, $x_9$, $x_{10}$, $ x_{11} $ and $ x_{12} $ are decoded directly using $W_7$, $W_8$, $W_9$, $W_{10}$, $ W_{11} $ and $ W_{12} $ respectively. The computation of $ Z_i $, for $ i=1,2,\dots,6 $ using \textit{Algorithm} $ 1 $ is shown in Table \ref{table13} and the decoding of messages $ x_1 $, $ x_2 $, $ x_3 $, $ x_4 $, $ x_5 $ and $ x_6 $ is shown in Table \ref{table14}. \begin{table}[h!] 
\hspace{1cm} \begin{tabular}{|c|c|} \hline Message $ x_i $ & Computation of $ Z_i$\\ \hline\hline $x_1$ & $ W_I \oplus W_8 \oplus W_9 \oplus W_{10}\oplus W_{11}$\\ \hline $x_2$ & $ W_I \oplus W_7 \oplus W_8 \oplus W_9 \oplus W_{12} $\\ \hline $ x_3 $ & $ W_I \oplus W_{10} \oplus W_{11} $\\ \hline $ x_4 $ & $W_I \oplus W_7 \oplus W_{12} $\\ \hline $ x_5 $ & $W_I \oplus W_7 \oplus W_{12} $\\ \hline $ x_6 $ & $ W_I $\\ \hline \end{tabular}\\ \caption{Table that shows the working of algorithm $ 1 $ on index code obtained from construction $ 1 $ on $ \mathcal{G}_6 $.} \label{table13} \end{table} \begin{table}[h!] \begin{tabular}{|c|c|c|} \hline Message $ x_i $ & $ Z_i$ & $ N^+_{\mathcal{G}_6}(i) $\\ \hline\hline $x_1$ & $x_1\oplus x_2 \oplus x_8 \oplus x_{10} $ & $ x_2 $, $x_8 $, $ x_{10} $\\ \hline $x_2$ & $ x_2 \oplus x_5 \oplus x_7 \oplus x_9$ & $ x_5 $, $ x_7 $, $ x_9 $\\ \hline $ x_3 $ & $ x_3 \oplus x_1 \oplus x_2 \oplus x_4\oplus x_{11} $ &$ x_1 $, $ x_2 $, $ x_4 $, $ x_{11} $\\ \hline $ x_4 $ & $ x_4 \oplus x_2 \oplus x_3 \oplus x_5\oplus x_{12} $ &$ x_2 $, $ x_3 $, $ x_5 $, $ x_{12} $\\ \hline $ x_5 $ & $ x_5 \oplus x_2 \oplus x_3 \oplus x_4\oplus x_{12} $ &$ x_2 $, $ x_3 $, $ x_4 $, $ x_{12} $\\ \hline $ x_6 $ & $ x_1 \oplus x_2 \oplus x_3 \oplus x_4 \oplus x_5 \oplus x_6$ & $ x_1 $, $ x_2 $, $ x_3 $, $ x_4 $, $ x_5 $\\ \hline \end{tabular}\\ \caption{Table showing the decoding of messages using algorithm $ 1 $ on index code obtained from construction $ 1 $ on $ \mathcal{G}_6$.} \label{table14} \end{table}Thus, \textit{Algorithm} $ 1 $ is used to decode the index code obtained by using \textit{Construction}$ 1 $ on $ \mathcal{G}_6 $. \end{ex} \begin{ex} \label{exam7} Consider $\mathcal{G}_7$, a side-information graph which is a $5$-IC structure, shown in Fig. \ref{f8}. \begin{figure}[h!] 
\includegraphics[height=\columnwidth,width=\columnwidth,angle=0]{Fig_8} \caption{$5$-IC structure $\mathcal{G}_7$ with $V_I=\lbrace1,2,3,4,5\rbrace$.} \label{f8} \end{figure}$\mathcal{G}_7$ is a $ 5$-IC structure with inner vertex set $V_I=\lbrace1,2,3,4,5\rbrace$ since \begin{enumerate} \item there are no cycles with only one vertex from the set $\lbrace1,2,3,4,5\rbrace $ in $\mathcal{G}_7$ (i.e., no I-cycles), \item using the rooted trees for each vertex in the set $\lbrace1,2,3,4,5\rbrace$, which are given in Fig. \ref{rt81}, \ref{rt82}, \ref{rt83}, \ref{rt84} and \ref{rt85} respectively, it is verified that there exists a unique path in $ \mathcal{G}_7 $ between any two different vertices in $ V_I $ and that this path does not contain any other vertex in $ V_I $ (i.e., a unique I-path between any pair of inner vertices), \item $\mathcal{G}_7$ is the union of all the $5$ rooted trees. \end{enumerate} There are two cycles consisting of only non-inner vertices. \begin{figure*}[!t] \centering \begin{subfigure}{.31\textwidth} \centering \includegraphics[width=15pc]{RT_8_1} \caption{} \label{rt81} \end{subfigure}% \begin{subfigure}{.31\textwidth} \centering \includegraphics[width=15pc]{RT_8_2} \caption{} \label{rt82} \end{subfigure} \begin{subfigure}{.31\textwidth} \centering \includegraphics[width=15pc]{RT_8_3} \caption{} \label{rt83} \end{subfigure} \centering \begin{subfigure}{.31\textwidth} \centering \includegraphics[width=15pc]{RT_8_4} \caption{} \label{rt84} \end{subfigure}% \begin{subfigure}{.31\textwidth} \centering \includegraphics[width=15pc]{RT_8_5} \caption{} \label{rt85} \end{subfigure} \caption{Figures showing rooted trees of inner vertices $ 1,2,3,4,5 $ of $ \mathcal{G}_7 $, respectively.} \end{figure*} \begin{table} \centering \hspace{2mm} \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V_{NI}(T_i)\backslash N^+_{T_i}(i) $ & $ a_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace6,7,8\rbrace $ & $ \lbrace7,8\rbrace $ & $ 1 $, $ 1 $ \\ \hline $ T_2 $ & 
$ \lbrace6,7,8\rbrace $ & $ \lbrace6,7\rbrace $ & $ 1 $, $ 1 $ \\ \hline $ T_3 $ & $ \phi $ & $ \phi $ & $ - $ \\ \hline $ T_4 $ & $ \lbrace9,10\rbrace $ & $ \lbrace9\rbrace $ & $ 1 $ \\ \hline $ T_5 $ & $ \lbrace9,10\rbrace $ & $ \lbrace10\rbrace $ & $ 1 $ \\ \hline \end{tabular}\\ \caption{Table that verifies \textit{c}$ 1 $ for $ \mathcal{G}_7 $.} \label{table15} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V(\mathcal{G}_7)\backslash V(T_i) $ & $ b_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace6,7,8\rbrace $ & $ \lbrace9,10\rbrace $ & $0$, $ 0 $\\ \hline $ T_2 $ & $ \lbrace6,7,8\rbrace $ & $ \lbrace9,10\rbrace $ & $ 0 $, $ 0 $ \\ \hline $ T_3$ & $ \phi $ & $ \lbrace6,7,8,9,10\rbrace $ & $ - $\\ \hline $ T_4 $ & $ \lbrace9,10\rbrace $ & $ \lbrace6,7,8\rbrace $ & $ 0 $, $ 0 $, $ 0 $\\ \hline $ T_5 $ & $ \lbrace9,10\rbrace $ & $ \lbrace6,7,8\rbrace $ & $ 0 $, $ 0 $, $ 0 $\\ \hline \end{tabular}\\ \caption{Table that verifies \textit{c}$ 2 $ for $ \mathcal{G}_7 $.} \label{table16} \end{table} \begin{v1} From Table \ref{table15} and Table \ref{table16}, it is observed that \textit{c}$ 1 $ and \textit{c}$ 2 $ are satisfied by $ \mathcal{G}_7 $. As a result, \textit{Algorithm} $ 1 $ can be used to decode an index code obtained by using \textit{Construction} $ 1 $ on the IC structure $ \mathcal{G}_7 $.\end{v1} The index code obtained is $W_I = x_1 \oplus x_2 \oplus x_3 \oplus x_4 \oplus x_5; ~~W_6 = x_6 \oplus x_3 \oplus x_7; ~~W_7 = x_7 \oplus x_8 \oplus x_4; ~~ W_8 = x_8 \oplus x_5 \oplus x_6; ~~ W_9 = x_9 \oplus x_1 \oplus x_2 \oplus x_{10}; ~~ W_{10} = x_{10} \oplus x_3 \oplus x_9.$ Messages $ x_6 $, $x_7$, $x_8$, $x_9$ and $x_{10}$ are decoded directly using $ W_6 $, $W_7$, $W_8$, $W_9$ and $W_{10}$ respectively. 
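The remaining sums can be cross-checked mechanically. Over GF(2), XOR-ing codewords amounts to taking the symmetric difference of the sets of message indices they combine; the following Python sketch (the set representation is our own illustration, not part of the construction) verifies, for the codewords of $ \mathcal{G}_7 $ stated above, the combinations used by \textit{Algorithm} $ 1 $ for users $1$, $3$ and $4$.

```python
# Sketch: each index code symbol of G_7 is modelled as the set of message
# indices it XORs together, so XOR of codewords over GF(2) becomes the
# symmetric difference of index sets.
from functools import reduce

W_I  = {1, 2, 3, 4, 5}    # W_I  = x1 + x2 + x3 + x4 + x5
W_6  = {6, 3, 7}          # W_6  = x6 + x3 + x7
W_7  = {7, 8, 4}          # W_7  = x7 + x8 + x4
W_8  = {8, 5, 6}          # W_8  = x8 + x5 + x6
W_9  = {9, 1, 2, 10}      # W_9  = x9 + x1 + x2 + x10
W_10 = {10, 3, 9}         # W_10 = x10 + x3 + x9

def xor(*codewords):
    """XOR of codewords = symmetric difference of their index sets."""
    return reduce(lambda a, b: a ^ b, codewords, set())

# Combinations used by Algorithm 1 for users 1, 3 and 4:
assert xor(W_I, W_6, W_7, W_8) == {1, 2}      # Z_1 = x1 + x2
assert xor(W_I) == {1, 2, 3, 4, 5}            # Z_3 = x1 + ... + x5
assert xor(W_I, W_9, W_10) == {4, 5}          # Z_4 = x4 + x5
```

User $1$ then cancels $ x_2 $ (which is in its side-information) from $ Z_1 $ to recover $ x_1 $, and similarly for the other inner vertices.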
The computation of $ Z_i $, for $ i=1,2,\dots,5 $ using \textit{Algorithm} $ 1 $ is shown in Table \ref{table17} and the decoding of messages $ x_1 $, $ x_2 $, $ x_3 $, $ x_4 $ and $ x_5 $ is shown in Table \ref{table18}. \begin{table}[h!] \centering \hspace{1.3cm} \begin{tabular}{|c|c|} \hline Message $ x_i $ & Computation of $ Z_i$\\ \hline\hline $x_1$ & $ W_I \oplus W_6 \oplus W_7 \oplus W_8$\\ \hline $x_2$ & $ W_I \oplus W_6 \oplus W_7 \oplus W_8$\\ \hline $ x_3 $ & $ W_I$ \\ \hline $ x_4 $ & $W_I \oplus W_9 \oplus W_{10} $\\ \hline $ x_5 $ & $W_I \oplus W_9 \oplus W_{10} $\\ \hline \end{tabular}\\ \caption{Table that shows the working of algorithm $ 1 $ on index code obtained from construction $ 1 $ on $ \mathcal{G}_7 $.} \label{table17} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c|c|c|} \hline Message $ x_i $ & $ Z_i$ & $ N^+_{\mathcal{G}_7}(i) $\\ \hline\hline $x_1$ & $x_1\oplus x_2 $ & $ x_2 $, $x_6 $\\ \hline $x_2$ & $ x_2 \oplus x_1 $ & $ x_1 $, $ x_8 $\\ \hline $ x_3 $ & $ x_3 \oplus x_1 \oplus x_2 \oplus x_4\oplus x_5 $ &$ x_1 $, $ x_2 $, $ x_4 $, $ x_5 $\\ \hline $ x_4 $ & $ x_4 \oplus x_5 $ &$ x_5 $, $ x_{10} $\\ \hline $ x_5 $ & $ x_5 \oplus x_4 $ &$ x_4 $, $ x_{10} $\\ \hline \end{tabular}\\ \caption{Table that shows the decoding of messages using algorithm $ 1 $ on index code obtained from construction $ 1 $ on $ \mathcal{G}_7$.} \label{table18} \end{table}Thus, \textit{Algorithm} $ 1 $ is used to decode the index code obtained by using \textit{Construction} $ 1 $ on $ \mathcal{G}_7$. \end{ex} In the following last example, it is shown that for some IC structures the code constructed using \textit{Construction} $ 1 $ is not decodable using any linear combination of the index code symbols. \begin{ex} \label{exam8} Consider $\mathcal{G}_8$, a side-information graph which is a $5$-IC structure, shown in Fig. \ref{f6}. \begin{figure}[h!] 
\includegraphics[height=\columnwidth,width=\columnwidth,angle=0]{Fig_6} \caption{$5$-IC structure $\mathcal{G}_8$ with $V_I=\lbrace1,2,3,4,5\rbrace$.} \label{f6} \end{figure}$ \mathcal{G}_8$ is a $ 5 $-IC structure with inner vertex set $ V_I=\lbrace1,2,3,4,5\rbrace$ since \begin{enumerate} \item there are no cycles with only one vertex from the set $\lbrace1,2,3,4,5\rbrace $ in $\mathcal{G}_8$ (i.e., no I-cycles), \item using the rooted trees for each vertex in the set $\lbrace1,2,3,4,5\rbrace$, which are given in Fig. \ref{rt61}, \ref{rt62}, \ref{rt63}, \ref{rt64} and \ref{rt65} respectively, it is verified that there exists a unique path in $ \mathcal{G}_8 $ between any two different vertices in $ V_I $ and that this path does not contain any other vertex in $ V_I $ (i.e., a unique I-path between any pair of inner vertices), \item $\mathcal{G}_8$ is the union of all the $5$ rooted trees. \end{enumerate} \begin{figure*}[!t] \centering \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_6_1} \caption{} \label{rt61} \end{subfigure}% \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_2_2} \caption{} \label{rt62} \end{subfigure} \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_2_3} \caption{} \label{rt63} \end{subfigure} \centering \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_2_4} \caption{} \label{rt64} \end{subfigure}% \begin{subfigure}{.31\textwidth} \hspace{-6mm} \includegraphics[width=15pc]{RT_2_5} \caption{} \label{rt65} \end{subfigure} \caption{Figures showing rooted trees of inner vertices $ 1,2,3,4,5$ of $ \mathcal{G}_8 $, respectively.} \end{figure*} \begin{table}[h!] 
\centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V_{NI}(T_i)\backslash N^+_{T_i}(i) $ & $ a_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace6,7,8,9\rbrace $ & $ \lbrace7,8,9\rbrace $ & $ 1 $, $ 2 $, $ 1 $ \\ \hline \end{tabular}\\ \caption{Table that verifies \textit{c}$ 1 $ for $ \mathcal{G}_8 $.} \label{table17a} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V(\mathcal{G}_8)\backslash V(T_i) $ & $ b_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace6,7,8,9\rbrace $ & $ \lbrace10,11\rbrace $ & $0$, $ 0 $\\ \hline $ T_2 $ & $ \lbrace10,11\rbrace $ & $ \lbrace6,7,8,9\rbrace $ & $ 0 $, $ 0 $, $0$, $ 0 $\\ \hline $ T_3$ & $ \lbrace8,9,11\rbrace $ & $ \lbrace6,7,10\rbrace $ & $ 0 $, $ 0 $, $ 0 $\\ \hline $ T_4 $ & $ \lbrace10,11\rbrace $ & $ \lbrace6,7,8,9\rbrace $ & $ 0 $, $ 0 $, $ 0 $, $ 0 $\\ \hline $ T_5 $ & $ \lbrace10,11\rbrace $ & $ \lbrace6,7,8,9\rbrace $ & $ 0 $, $0$, $ 0 $, $ 0 $\\ \hline \end{tabular}\\ \caption{Table that verifies \textit{c}$ 2 $ for $ \mathcal{G}_8 $.} \label{table8} \end{table} \begin{v1} From Table \ref{table17a} and Table \ref{table8}, it is observed that \textit{c}$ 1 $ is not satisfied ($ a_{1,8}=2 $, an even number) and \textit{c}$ 2 $ is satisfied by $ \mathcal{G}_8 $. As a result, \textit{Algorithm} $ 1 $ fails to decode an index code obtained by using \textit{Construction} $ 1$ on the IC structure $ \mathcal{G}_8 $. It is verified as follows. \end{v1} The index code obtained is $W_I = x_1 \oplus x_2 \oplus x_3 \oplus x_4 \oplus x_5; ~~ W_6 = x_6 \oplus x_7 \oplus x_8; ~~W_7 = x_7 \oplus x_3; W_8 = x_8 \oplus x_4 \oplus x_9; ~~ W_9 = x_9 \oplus x_5 \oplus x_8; ~~ W_{10} = x_{10} \oplus x_3 \oplus x_{11}; ~~ W_{11} = x_{11} \oplus x_1.$ Messages $ x_6 $, $x_7$, $x_8$, $x_9$, $x_{10}$ and $ x_{11} $ are decoded directly using $ W_6 $, $W_7$, $W_8$, $W_9$, $W_{10}$ and $ W_{11} $ respectively. 
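The failure for $ x_1 $ can again be cross-checked by modelling the GF(2) sums as symmetric differences of index sets (a sketch of our own, using the codewords of $ \mathcal{G}_8 $ stated above):

```python
# Sketch: model each codeword of G_8 as the set of message indices it XORs.
from functools import reduce

W_I = {1, 2, 3, 4, 5}
W_6 = {6, 7, 8}
W_7 = {7, 3}
W_8 = {8, 4, 9}
W_9 = {9, 5, 8}

# The combination Algorithm 1 uses for x_1:
Z_1 = reduce(lambda a, b: a ^ b, [W_I, W_6, W_7, W_8, W_9])
assert Z_1 == {1, 2, 6, 8}     # Z_1 = x1 + x2 + x6 + x8

side_info_1 = {2, 6}           # user 1 knows only x_2 and x_6
leftover = Z_1 - {1} - side_info_1
assert leftover == {8}         # x_8 is not side-information: x_1 fails
```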
The computation of $ Z_i $, for $ i=1,2,\dots,5 $, using \textit{Algorithm} $ 1 $ is shown in Table \ref{table9}, and Table \ref{table10} illustrates the inability of \textit{Algorithm} $ 1 $ to decode $ x_1 $. \begin{table}[h!] \centering \hspace{1cm} \begin{tabular}{|c|c|} \hline Message $ x_i $ & Computation of $ Z_i$\\ \hline\hline $x_1$ & $ W_I \oplus W_6 \oplus W_7 \oplus W_8 \oplus W_9$\\ \hline $x_2$ & $ W_I $\\ \hline $ x_3 $ & $ W_I \oplus W_8 \oplus W_9 \oplus W_{11} $\\ \hline $ x_4 $ &$ W_I \oplus W_{10} \oplus W_{11} $\\ \hline $ x_5 $ & $ W_I \oplus W_{10} \oplus W_{11} $\\ \hline \end{tabular}\\ \caption{Table that shows the working of algorithm $ 1 $ on index code obtained from construction $ 1 $ on $ \mathcal{G}_8 $.} \label{table9} \end{table} \begin{table}[h!] \centering \hspace{5mm} \begin{tabular}{|c|c|c|} \hline Message $ x_i $ & $ Z_i$ & $ N^+_{\mathcal{G}_8}(i) $\\ \hline\hline $x_1$ & $x_1\oplus x_2 \oplus x_6\oplus x_8$ & $ x_2 $, $x_6 $\\ \hline \end{tabular}\\ \caption{Table that shows failure of algorithm $ 1 $ on index code obtained from construction $ 1 $ on $ \mathcal{G}_8$.} \label{table10} \end{table} Thus, \textit{Algorithm} $ 1 $ fails to decode the index code obtained using \textit{Construction} $ 1 $ on $ \mathcal{G}_8 $, since the user requesting message $ x_1 $ does not have $ x_8 $ in its side-information. It turns out that $ x_1 $ cannot be decoded using any linear combination of the index code symbols. All the possible linear combinations of the index code symbols are listed in Table \ref{tablef} along with the reason for $ x_1 $ not being decodable using that linear combination. 
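The case-by-case listing can also be confirmed by brute force. The following sketch (our own, under the same set model of GF(2) XOR sums) enumerates every nonzero combination of the seven codewords and checks that none yields $ x_1 $ together with only side-information terms of user $1$:

```python
# Sketch: exhaustively check all nonzero GF(2) combinations of the seven
# codewords of G_8. x_1 is decodable from a combination iff the resulting
# sum contains x_1 and every other term lies in user 1's side-information.
from itertools import combinations
from functools import reduce

codewords = [
    {1, 2, 3, 4, 5},   # W_I
    {6, 7, 8},         # W_6
    {7, 3},            # W_7
    {8, 4, 9},         # W_8
    {9, 5, 8},         # W_9
    {10, 3, 11},       # W_10
    {11, 1},           # W_11
]
side_info_1 = {2, 6}   # N^+_{G_8}(1): user 1 knows x_2 and x_6

decodable = any(
    1 in (s := reduce(lambda a, b: a ^ b, combo, set()))
    and s - {1} <= side_info_1
    for r in range(1, len(codewords) + 1)
    for combo in combinations(codewords, r)
)
print(decodable)   # → False: no linear combination decodes x_1
```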
\begin{table*} \centering \begin{tabular}{|c|c|c|c|} \hline S.no & Linear combination & Obtained sum & Reason\\ \hline\hline $ 1 $ & $ 0 $ & $ 0 $ & $ - $ \\ \hline $ 2 $ & $ W_I $ & $ x_1\oplus x_2 \oplus x_3 \oplus x_4 \oplus x_5 $ & $ x_3 $ not in side-information\\ \hline $ 3 $ & $ W_6 $ & $ x_6\oplus x_7\oplus x_8 $ & $ x_1 $ is absent\\ \hline $ 4 $ & $ W_I \oplus W_6 $ & $ x_1 \oplus x_2 \oplus x_3 \oplus x_4 \oplus x_5 \oplus x_6 \oplus x_7\oplus x_8 $ & $ x_3 $ not in side-information\\ \hline $ 5 $ & $ W_7 $ & $x_3\oplus x_7 $ & $ x_1 $ is absent\\ \hline $ 6 $ & $ W_I \oplus W_7 $ & $ x_1 \oplus x_2 \oplus x_4 \oplus x_5 \oplus x_7 $ & $ x_4 $ not in side-information\\ \hline $ 7 $ & $ W_6 \oplus W_7 $ & $ x_3\oplus x_6 \oplus x_8 $ & $ x_1 $ is absent\\ \hline $ 8$ & $ W_I\oplus W_6\oplus W_7 $ & $ x_1 \oplus x_2 \oplus x_4 \oplus x_5 \oplus x_6 \oplus x_8 $ &$ x_4 $ not in side-information\\ \hline $ 9 $ & $ W_8 $ & $ x_4\oplus x_8 \oplus x_9 $ & $ x_1 $ is absent \\ \hline $ 10 $ & $ W_I \oplus W_8 $ & $ x_1 \oplus x_2 \oplus x_3 \oplus x_5 \oplus x_8\oplus x_9 $ & $ x_3 $ not in side-information \\ \hline $ 11 $ & $ W_6\oplus W_8 $ & $ x_4 \oplus x_6 \oplus x_7 \oplus x_9 $ & $ x_1 $ is absent\\ \hline $ 12 $ & $ W_I \oplus W_6\oplus W_8 $ & $ x_1 \oplus x_2 \oplus x_3 \oplus x_5 \oplus x_6 \oplus x_7\oplus x_9 $ & $ x_3 $ not in side-information \\ \hline $ 13 $ & $ W_7 \oplus W_8 $ & $ x_3\oplus x_4 \oplus x_7\oplus x_8\oplus x_9 $ & $ x_1 $ is absent \\ \hline $ 14 $ & $ W_I \oplus W_7 \oplus W_8 $ & $ x_1\oplus x_2 \oplus x_5\oplus x_7\oplus x_8\oplus x_9 $ & $ x_5 $ is not in side-information\\ \hline $ 15 $ & $ W_6 \oplus W_7 \oplus W_8 $ & $ x_3\oplus x_4 \oplus x_6\oplus x_9 $ & $ x_1 $ is absent \\ \hline $ 16 $ & $ W_I \oplus W_6 \oplus W_7 \oplus W_8 $ & $ x_1 \oplus x_2 \oplus x_5 \oplus x_6 \oplus x_9 $ & $ x_5 $ not in side-information \\ \hline $ 17 $ & $ W_9 $ & $ x_5\oplus x_{8}\oplus x_{9} $ & $ x_1 $ is absent \\ \hline $ 18 $ & $ 
W_I \oplus W_9 $ & $ x_1 \oplus x_2 \oplus x_3 \oplus x_4 \oplus x_8\oplus x_{9} $ & $x_3$ not in side-information \\ \hline $ 19 $ & $ W_6\oplus W_9 $ & $x_5 \oplus x_6 \oplus x_7 \oplus x_{9} $ & $ x_1 $ absent \\ \hline $ 20 $ & $ W_I \oplus W_6\oplus W_9 $ & $ x_1 \oplus x_2 \oplus x_3 \oplus x_4 \oplus x_6 \oplus x_7 \oplus x_9 $ & $ x_3 $ not in side-information\\ \hline $ 21 $ & $ W_7 \oplus W_9 $ & $ x_3 \oplus x_5 \oplus x_7 \oplus x_8 \oplus x_{9} $ & $ x_1 $ is absent\\ \hline $ 22 $ & $ W_I \oplus W_7 \oplus W_9 $ & $ x_1 \oplus x_2 \oplus x_4 \oplus x_7 \oplus x_8 \oplus x_9 $ & $ x_4 $ not in side-information\\ \hline $ 23 $ & $ W_6\oplus W_7\oplus W_9 $ & $ x_3 \oplus x_5 \oplus x_6 \oplus x_9 $ & $ x_1 $ is absent \\ \hline $ 24 $ & $ W_I \oplus W_6\oplus W_7\oplus W_9 $ & $ x_1 \oplus x_2 \oplus x_4 \oplus x_6 \oplus x_9 $ & $x_4 $ not in side-information\\ \hline $ 25 $ & $ W_8 \oplus W_9 $ & $x_4 \oplus x_5 $ & $ x_1 $ is absent \\ \hline $26 $ & $ W_I \oplus W_8 \oplus W_9 $ & $ x_1 \oplus x_2 \oplus x_3 $ & $ x_3 $ not in side-information \\ \hline $ 27 $ & $ W_6 \oplus W_8 \oplus W_9 $ & $x_4\oplus x_5 \oplus x_6 \oplus x_7\oplus x_8 $ & $ x_1 $ is absent \\ \hline $ 28 $ & $ W_I \oplus W_6 \oplus W_8 \oplus W_9 $ & $ x_1 \oplus x_2 \oplus x_3 \oplus x_6 \oplus x_7 \oplus x_8 $ &$ x_3 $ not in side-information \\ \hline $ 29 $ & $ W_7\oplus W_8\oplus W_9 $ & $ x_3\oplus x_4\oplus x_5\oplus x_7 $ & $ x_1 $ is absent\\ \hline $ 30 $ & $ W_I\oplus W_7\oplus W_8\oplus W_9 $ & $ x_1\oplus x_2\oplus x_7 $ & $ x_7 $ not in side-information \\ \hline $ 31 $ & $ W_6\oplus W_7\oplus W_8\oplus W_9 $ & $ x_3\oplus x_4 \oplus x_5 \oplus x_6\oplus x_8 $ & $ x_1 $ is absent\\ \hline $ 32 $ & $ W_I\oplus W_6\oplus W_7\oplus W_8\oplus W_9 $ & $ x_1 \oplus x_2 \oplus x_6 \oplus x_8 $ & $ x_8 $ not in side-information\\ \hline $ 33 $ & $ W_{10} $ & $ x_3\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 34 $ & $ W_I\oplus W_{10} $ & $ x_1 \oplus x_2 
\oplus x_4 \oplus x_5\oplus x_{11} $ & $ x_4 $ not in side-information\\ \hline $ 35 $ & $ W_6\oplus W_{10} $ & $ x_3\oplus x_6\oplus x_7\oplus x_{8}\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 36 $ & $ W_I\oplus W_6\oplus W_{10} $ & $ x_1 \oplus x_2 \oplus x_4 \oplus x_5 \oplus x_6\oplus x_7 \oplus x_8 \oplus x_{10} \oplus x_{11} $ & $ x_4 $ not in side-information\\ \hline $ 37 $ & $ W_7\oplus W_{10} $ & $ x_7\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 38 $ & $ W_I \oplus W_7\oplus W_{10} $ & $ x_1\oplus x_2 \oplus x_3 \oplus x_4 \oplus x_5 \oplus x_7 \oplus x_{10} \oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 39 $ & $ W_6\oplus W_7\oplus W_{10} $ & $ x_6 \oplus x_8 \oplus x_{10} \oplus x_{11} $ & $ x_1 $ is absent \\ \hline $ 40 $ & $ W_I\oplus W_6\oplus W_7\oplus W_{10} $ & $x_1\oplus x_2 \oplus x_3 \oplus x_4 \oplus x_5 \oplus x_6 \oplus x_8 \oplus x_{10} \oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 41 $ & $ W_8\oplus W_{10} $ & $ x_3\oplus x_4\oplus x_8\oplus x_9\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 42 $ & $ W_I\oplus W_8\oplus W_{10} $ & $ x_1 \oplus x_2 \oplus x_5 \oplus x_8\oplus x_9 \oplus x_{10} \oplus x_{11} $ & $ x_5 $ not in side-information\\ \hline $ 43 $ & $ W_6\oplus W_8\oplus W_{10} $ & $ x_3 \oplus x_4 \oplus x_6 \oplus x_7 \oplus x_{9} \oplus x_{10} \oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 44 $ & $ W_I\oplus W_6\oplus W_8\oplus W_{10} $ & $x_1 \oplus x_2 \oplus x_5 \oplus x_6 \oplus x_7 \oplus x_9 \oplus x_{10} \oplus x_{11} $ & $ x_5 $ not in side-information\\ \hline $ 45 $ & $ W_7\oplus W_8\oplus W_{10} $ & $ x_4\oplus x_7\oplus x_8\oplus x_9\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent \\ \hline $ 46 $ & $ W_I\oplus W_7\oplus W_8\oplus W_{10} $ & $ x_1\oplus x_2\oplus x_3\oplus x_5\oplus x_7\oplus x_8\oplus x_9\oplus x_{10}\oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 47 $ & $ W_6\oplus W_7\oplus W_8\oplus W_{10} $ & $ x_4 \oplus x_6 
\oplus x_9 \oplus x_{10} \oplus x_{11} $ & $ x_1 $ is absent \\ \hline $ 48 $ & $ W_I\oplus W_6\oplus W_7\oplus W_8\oplus W_{10} $ & $ x_1\oplus x_2 \oplus x_3 \oplus x_5 \oplus x_6 \oplus x_9 \oplus x_{10} \oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 49 $ & $ W_9\oplus W_{10} $ & $ x_3\oplus x_5\oplus x_8\oplus x_9\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 50 $ & $ W_I\oplus W_9\oplus W_{10} $ & $ x_1\oplus x_2\oplus x_4\oplus x_8\oplus x_9\oplus x_{10}\oplus x_{11}$ & $ x_4 $ not in side-information \\ \hline $ 51 $ & $ W_6\oplus W_9\oplus W_{10} $ & $ x_3\oplus x_5\oplus x_6\oplus x_7\oplus x_9\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 52 $ & $ W_I\oplus W_6\oplus W_9\oplus W_{10} $ & $ x_1\oplus x_2\oplus x_4\oplus x_6\oplus x_9\oplus x_{10}\oplus x_{11} $ & $ x_4 $ not in side-information\\ \hline $ 53 $ & $ W_7 \oplus W_9\oplus W_{10} $ & $ x_5\oplus x_7\oplus x_8\oplus x_9\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent \\ \hline $ 54 $ & $ W_I\oplus W_7 \oplus W_9\oplus W_{10} $ & $ x_1\oplus x_2\oplus x_3\oplus x_4\oplus x_7\oplus x_8\oplus x_9\oplus x_{10}\oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 55 $ & $ W_6\oplus W_7 \oplus W_9\oplus W_{10} $ & $ x_5\oplus x_6\oplus x_9\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent \\ \hline $ 56 $ & $ W_I\oplus W_6\oplus W_7 \oplus W_9\oplus W_{10} $ & $x_1\oplus x_2\oplus x_3\oplus x_4\oplus x_6\oplus x_9\oplus x_{10}\oplus x_{11} $ & $ x_3 $ not in side-information \\ \hline $ 57 $ & $ W_8 \oplus W_9\oplus W_{10} $ & $ x_3\oplus x_4\oplus x_5\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 58 $ & $ W_I \oplus W_8 \oplus W_9\oplus W_{10} $ & $ x_1\oplus x_2\oplus x_{10}\oplus x_{11} $ & $ x_{10} $ not in side-information \\ \hline $ 59 $ & $ W_6 \oplus W_8 \oplus W_9\oplus W_{10} $ & $ x_3\oplus x_4\oplus x_5\oplus x_6\oplus x_7\oplus x_8\oplus x_{10}\oplus x_{11} $ &$ x_1 $ is absent \\ \hline $ 60 $ & $ W_I\oplus W_6 \oplus W_8 \oplus 
W_9\oplus W_{10} $ & $ x_1\oplus x_2\oplus x_6\oplus x_7\oplus x_8\oplus x_{10}\oplus x_{11} $ & $ x_7 $ not in side-information\\ \hline $ 61 $ & $ W_7 \oplus W_8 \oplus W_9\oplus W_{10} $ & $ x_4\oplus x_5\oplus x_7\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 62 $ & $ W_I\oplus W_7 \oplus W_8 \oplus W_9\oplus W_{10} $ & $ x_1\oplus x_2\oplus x_3\oplus x_7\oplus x_{10}\oplus x_{11} $ & $ x_3 $ not in side-information \\ \hline $ 63 $ & $ W_6\oplus W_7 \oplus W_8 \oplus W_9\oplus W_{10} $ & $ x_4\oplus x_5\oplus x_6\oplus x_8\oplus x_{10}\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 64 $ & $ W_I\oplus W_6\oplus W_7 \oplus W_8 \oplus W_9\oplus W_{10} $ & $ x_1\oplus x_2\oplus x_3\oplus x_6\oplus x_8\oplus x_{10}\oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 65 $ & $ W_{11} $ & $ x_1\oplus x_{11} $ & $ x_{11} $ not in side-information\\ \hline $ 66 $ & $ W_I\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_4\oplus x_5\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 67 $ & $ W_6\oplus W_{11} $ & $ x_1\oplus\ x_6\oplus x_7\oplus x_8\oplus x_{11} $ & $ x_7 $ not in side-information\\ \hline $ 68 $ & $ W_I\oplus W_6\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_4\oplus x_5\oplus x_6\oplus x_7\oplus x_8\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 69 $ & $ W_7\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_7\oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline \end{tabular} \end{table*} \begin{table*} \centering \begin{tabular}{|c|c|c|c|} \hline $ 70 $ & $W_I\oplus W_7\oplus W_{11} $ & $ x_2\oplus x_4\oplus x_5\oplus x_7\oplus x_{11} $ &$ x_1 $ is absent\\ \hline $ 71 $ & $ W_6\oplus W_7\oplus W_{11} $ & $x_1\oplus x_3\oplus x_6\oplus x_8\oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 72 $ & $ W_I\oplus W_6\oplus W_7\oplus W_{11} $ & $ x_2\oplus x_4\oplus x_5\oplus x_6\oplus x_8\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 73 $ & $ W_8\oplus W_{11} $ & $ x_1\oplus x_4\oplus x_8\oplus x_9\oplus x_{11} $ & $ x_4 $ not in side-information\\ 
\hline $ 74 $ & $ W_I\oplus W_8\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_5\oplus x_8\oplus x_9\oplus x_{11} $ & $ x_1 $ is absent \\ \hline $ 75 $ & $ W_6\oplus W_8\oplus W_{11} $ & $ x_{1} \oplus x_4 \oplus x_{6}\oplus x_7\oplus x_9\oplus x_{11} $ & $ x_4 $ not in side-information\\ \hline $ 76 $ & $ W_I\oplus W_6\oplus W_8\oplus W_{11} $ & $x_2\oplus x_3\oplus x_5\oplus x_6\oplus x_7\oplus x_9\oplus x_{11}$ & $ x_1 $ is absent \\ \hline $ 77 $ & $ W_7\oplus W_8\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_4\oplus x_7\oplus x_8\oplus x_9\oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 78 $ & $W_I\oplus W_7\oplus W_8\oplus W_{11} $ & $ x_2\oplus x_5\oplus x_7\oplus x_8\oplus x_9\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 79 $ & $ W_6\oplus W_7\oplus W_8\oplus W_{11} $ & $x_1\oplus x_3\oplus x_4\oplus x_6\oplus x_9\oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 80 $ & $W_I \oplus W_6\oplus W_7\oplus W_8\oplus W_{11} $ & $ x_2\oplus x_5\oplus x_6\oplus x_9\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 81 $ & $ W_9\oplus W_{11} $ & $ x_1\oplus x_5\oplus x_8\oplus x_{9}\oplus x_{11} $ & $ x_5 $ not in side-information\\ \hline $ 82 $ & $ W_I\oplus W_9\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_4\oplus x_8\oplus x_{9}\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 83 $ & $ W_6\oplus W_9\oplus W_{11} $ & $ x_1\oplus x_5\oplus x_6\oplus x_7 \oplus x_9\oplus x_{11} $ & $ x_5 $ not in side-information\\ \hline $ 84 $ & $ W_I\oplus W_6\oplus W_9\oplus W_{11} $ & $x_2\oplus x_3\oplus x_4\oplus x_6\oplus x_7 \oplus x_9\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 85 $ & $ W_7\oplus W_9\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_5\oplus x_7\oplus x_8\oplus x_{9}\oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 86 $ & $ W_I\oplus W_7\oplus W_9\oplus W_{11} $ & $ x_2\oplus x_4\oplus x_7\oplus x_8\oplus x_{9}\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 87 $ & $ W_6\oplus W_7\oplus W_9\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_5\oplus x_6\oplus 
x_9\oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 88 $ & $ W_I\oplus W_6\oplus W_7\oplus W_9\oplus W_{11} $ & $ x_2\oplus x_4\oplus x_6\oplus x_9\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 89 $ & $ W_8\oplus W_9\oplus W_{11} $ & $ x_1\oplus x_4\oplus x_5\oplus x_{11} $ & $ x_4 $ not in side-information\\ \hline $ 90 $ & $ W_I\oplus W_8\oplus W_9\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 91 $ & $ W_6\oplus W_8\oplus W_9\oplus W_{11} $ & $x_1\oplus x_4\oplus x_5\oplus x_6 \oplus x_7\oplus x_8\oplus x_{11} $ & $ x_4 $ not in side-information\\ \hline $ 92 $ & $ W_I\oplus W_6\oplus W_8\oplus W_9\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_6\oplus x_7\oplus x_8\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 93 $ & $ W_7\oplus W_8\oplus W_9\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_4\oplus x_5\oplus x_7\oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 94 $ & $ W_I\oplus W_7\oplus W_8\oplus W_9\oplus W_{11} $ & $ x_2\oplus x_7\oplus x_{11} $ & $ x_1 $ is absent\\ \hline $ 95 $ & $ W_6\oplus W_7\oplus W_8\oplus W_9\oplus W_{11} $ & $x_1\oplus x_3\oplus x_4\oplus x_5\oplus x_6\oplus x_8\oplus x_{11} $ & $ x_3 $ not in side-information\\ \hline $ 96 $ & $ W_I\oplus W_6\oplus W_7\oplus W_8\oplus W_9\oplus W_{11} $ & $ x_2\oplus x_6\oplus x_8\oplus x_{11} $ & $ x_1 $ is absent \\ \hline $ 97 $ & $ W_{10}\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_{10} $ & $ x_3 $ not in side-information\\ \hline $ 98 $ & $ W_I\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_4\oplus x_5\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 99 $ & $ W_6\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_6\oplus x_7\oplus x_8\oplus x_{10}$ & $ x_3 $ not in side-information\\ \hline $ 100 $ & $ W_I\oplus W_6\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_4\oplus x_5\oplus x_6\oplus x_7\oplus x_8\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 101 $ & $ W_7\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_{7}\oplus x_{10} $ & $ x_7 $ not in side-information\\ 
\hline $ 102 $ & $ W_I\oplus W_7\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_4\oplus x_5\oplus x_7\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 103 $ & $ W_6\oplus W_7\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_6\oplus x_8\oplus x_{10} $ & $ x_8 $ not in side-information\\ \hline $ 104 $ & $ W_I\oplus W_6\oplus W_7\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_4\oplus x_5\oplus x_6\oplus x_{8}\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 105 $ & $ W_8\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_4\oplus x_8\oplus x_9\oplus x_{10} $ & $ x_3 $ not in side-information\\ \hline $ 106 $ & $ W_I\oplus W_8\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_5\oplus x_8\oplus x_9\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 107 $ & $ W_6\oplus W_8\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_4 \oplus x_6\oplus x_7\oplus x_9\oplus x_{10} $ &$ x_3 $ not in side-information \\ \hline $ 108 $ & $ W_I\oplus W_6\oplus W_8\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_5\oplus x_6\oplus x_7\oplus x_9\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 109 $ & $ W_7\oplus W_8\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_4\oplus x_7\oplus x_8\oplus x_{9}\oplus x_{10} $ & $ x_4 $ not in side-information\\ \hline $ 110 $ & $W_I\oplus W_7\oplus W_8\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_5\oplus x_7\oplus x_8\oplus x_{9}\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 111 $ & $ W_6\oplus W_7\oplus W_8\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_4\oplus x_6\oplus x_9\oplus x_{10} $ & $ x_4 $ not in side-information\\ \hline $ 112 $ & $W_I\oplus W_6\oplus W_7\oplus W_8\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_5\oplus x_6\oplus x_{9}\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 113 $ & $ W_9\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_5\oplus x_8\oplus x_9\oplus x_{10} $ &$ x_3 $ not in side-information \\ \hline $ 114 $ & $ W_I\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_4\oplus x_8\oplus x_9\oplus x_{10} $ & $ x_1 $ is
absent\\ \hline $ 115 $ & $ W_6\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_5\oplus x_6\oplus x_7\oplus x_9\oplus x_{10} $ & $ x_3 $ not in side-information\\ \hline $ 116 $ & $ W_I\oplus W_6\oplus W_9\oplus W_{10}\oplus W_{11} $ & $x_2\oplus x_4\oplus x_6\oplus x_7\oplus x_9\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 117 $ & $ W_7\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_5\oplus x_7\oplus x_8\oplus x_9\oplus x_{10} $ &$ x_5 $ not in side-information \\ \hline $ 118 $ & $ W_I\oplus W_7\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_4\oplus x_7\oplus x_8\oplus x_9\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 119 $ & $ W_6\oplus W_7\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_5\oplus x_6\oplus x_9\oplus x_{10} $ &$ x_5 $ not in side-information \\ \hline $ 120 $ & $ W_I\oplus W_6\oplus W_7\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_4\oplus x_6\oplus x_9\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 121 $ & $ W_8\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_4\oplus x_5\oplus x_{10} $ & $ x_3 $ not in side-information\\ \hline $ 122 $ & $ W_I\oplus W_8\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 123 $ & $ W_6\oplus W_8\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_3\oplus x_4\oplus x_5\oplus x_6\oplus x_7\oplus x_8\oplus x_{10} $ & $ x_3 $ not in side-information\\ \hline $ 124 $ & $ W_I\oplus W_6\oplus W_8\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_6\oplus x_7\oplus x_8\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 125 $ & $ W_7\oplus W_8\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_1\oplus x_4\oplus x_5 \oplus x_7\oplus x_{10} $ & $ x_4 $ not in side-information \\ \hline $ 126 $ & $ W_I\oplus W_7\oplus W_8\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_7\oplus x_{10} $ & $ x_1 $ is absent\\ \hline $ 127 $ & $ W_6\oplus W_7\oplus W_8\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ 
x_1\oplus x_4\oplus x_5\oplus x_6 \oplus x_8\oplus x_{10} $ & $ x_4 $ not in side-information \\ \hline $ 128 $ & $ W_I\oplus W_6\oplus W_7\oplus W_8\oplus W_9\oplus W_{10}\oplus W_{11} $ & $ x_2\oplus x_3\oplus x_6\oplus x_8\oplus x_{10} $ & $ x_1 $ is absent\\ \hline \end{tabular}\\ \caption{Table showing that decoding $ x_1 $ is not possible using the code obtained by using \textit{Construction} $ 1 $ on $ \mathcal{G}_8 $.} \label{tablef} \end{table*} \end{ex} \section{Discussion} For the IC structure $ \mathcal{G} $ given in Fig.~2 of \cite{VaR}, it has been shown that the index code obtained by using \textit{Construction} $ 1 $ is not decodable using \textit{Algorithm} $ 1 $. This is supported by the fact that the conditions \textit{c}$ 1 $ and \textit{c}$ 2 $ are violated, as shown in Table \ref{table_a} and Table \ref{table_b} respectively. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V_{NI}(T_i)\backslash N^+_{T_i}(i) $ & $ a_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace7,9,10,11,12,13,14\rbrace $ & $ \lbrace9,10,11,12,13\rbrace $ & $ 2 $, $ 1 $, $ 1 $, $ 1 $, $ 1 $ \\ \hline $ T_2 $ & $ \lbrace10,11,12,13\rbrace $ & $ \lbrace11,12,13\rbrace $ & $ 1 $, $ 1 $, $ 1 $ \\ \hline \end{tabular}\\ \caption{Table checking condition \textit{c}$ 1 $ for $ \mathcal{G} $.} \label{table_a} \end{table} \begin{table}[h!]
\centering \begin{tabular}{|c|c|c|c|} \hline $ T_i $ & $ V_{NI}(i) $ & $ j \in V(\mathcal{G})\backslash V_{T_i}$ & $ b_{i,j} $ \\ \hline\hline $ T_1 $ & $ \lbrace6,7,9,10,11,12,13,14\rbrace $ & $ \lbrace8\rbrace $ & $0$\\ \hline $ T_2 $ & $ \lbrace10,11,12,13\rbrace $ & $ \lbrace7,8,9,14\rbrace $ & $ 0 $, $ 0 $, $ 1 $, $ 0 $ \\ \hline \end{tabular}\\ \caption{Table checking condition \textit{c}$ 2 $ for $ \mathcal{G} $.} \label{table_b} \end{table} The index code obtained by using \textit{Construction} $ 1 $ on $ \mathcal{G} $ is $W_I =x_1\oplus x_2\oplus x_3\oplus x_4\oplus x_5\oplus x_6; ~~W_7=x_7 \oplus x_2 \oplus x_6; ~~ W_8=x_8 \oplus x_3 \oplus x_5; ~~ W_9=x_9\oplus x_{10}; ~~ W_{10}=x_{10}\oplus x_{11}; ~~ W_{11}=x_{11}\oplus x_4 \oplus x_{12}; ~~ W_{12}=x_{12}\oplus x_{13}; ~~ W_{13}=x_{13}\oplus x_{5} \oplus x_{9}; ~~ W_{14}=x_{14}\oplus x_{9}.$ Even though the code is not decodable by \textit{Algorithm} $ 1 $, it is possible for the messages to be decoded using other linear combinations of the index code symbols, as follows: \begin{itemize} \item $ x_1 $ is decoded using \begin{eqnarray*} Z'_1 &=& W_I \oplus W_7\oplus W_9 \oplus W_{10}\oplus W_{11}\oplus W_{12}\oplus W_{13}\\ &=& x_1 \oplus x_3 \oplus x_7. \end{eqnarray*} \item $ x_2 $ is decoded using \begin{eqnarray*} Z'_2 &=& W_I \oplus W_9 \oplus W_{10}\oplus W_{11}\oplus W_{12}\oplus W_{13}\\ &=& x_1 \oplus x_2 \oplus x_3 \oplus x_6. \end{eqnarray*} \item The remaining messages are decodable using \textit{Algorithm} $ 1 $. \end{itemize} In \cite{TOJ}, it is claimed, by proposing \textit{Algorithm} $ 1 $ for decoding, that the index code obtained by \textit{Construction} $1$ for an IC structure is a valid index code. Since \textit{Algorithm} $ 1 $ works only for a class of IC structures (those that satisfy the conditions \textit{c}$1$ and \textit{c}$2$), the validity of the index code obtained by \textit{Construction} $ 1 $ for an arbitrary IC structure is now an open problem.
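The two linear combinations $Z'_1$ and $Z'_2$ above can be verified mechanically over $\mathbb{F}_2$ by representing each index code symbol as the set of message indices it involves, so that XOR of symbols becomes symmetric difference of sets. A short Python sketch (the symbol definitions are those listed above for $\mathcal{G}$):

```python
# Each index code symbol W is an XOR of messages x_i; represent it by the set
# of message indices involved, so XOR of symbols = symmetric difference.
W = {
    "W_I":  {1, 2, 3, 4, 5, 6},
    "W_7":  {7, 2, 6},
    "W_8":  {8, 3, 5},
    "W_9":  {9, 10},
    "W_10": {10, 11},
    "W_11": {11, 4, 12},
    "W_12": {12, 13},
    "W_13": {13, 5, 9},
    "W_14": {14, 9},
}

def xor(*symbols):
    """Combine index code symbols; indices occurring an even number of times cancel."""
    out = set()
    for s in symbols:
        out ^= W[s]
    return out

# Z'_1 = W_I + W_7 + W_9 + W_10 + W_11 + W_12 + W_13
Z1 = xor("W_I", "W_7", "W_9", "W_10", "W_11", "W_12", "W_13")
# Z'_2 = W_I + W_9 + W_10 + W_11 + W_12 + W_13
Z2 = xor("W_I", "W_9", "W_10", "W_11", "W_12", "W_13")

print(sorted(Z1))  # [1, 3, 7]    i.e. Z'_1 = x_1 + x_3 + x_7
print(sorted(Z2))  # [1, 2, 3, 6] i.e. Z'_2 = x_1 + x_2 + x_3 + x_6
```

Receiver $1$ then recovers $x_1$ from $Z'_1$ by cancelling $x_3$ and $x_7$, provided these are in its side information, as the example assumes; similarly for $x_2$ from $Z'_2$.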
Also, from \textit{Theorem} $ 2 $ it is clear that an IC structure which has no cycles containing only non-inner vertices satisfies \textit{c}$ 1 $ and \textit{c}$ 2 $. Hence, together with the proof of \textit{Theorem} $ 3 $ in \cite{TOJ}, the optimality of the index codes obtained by using \textit{Construction} $ 1 $ holds for IC structures which do not contain cycles consisting only of non-inner vertices. From the example of Fig.~2 in \cite{VaR} discussed at the beginning of this section, and Examples 1, 2 and 8 in this paper, the following directions for further research arise: \begin{itemize} \item Characterize the IC structures for which there exists no decoding algorithm that uses only linear combinations of the index code symbols for the codes constructed using \textit{Construction} 1 (as in Example \ref{exam8}). \item Characterize the IC structures for which \textit{Algorithm} 1 does not work for the code constructed using \textit{Construction} 1 but there exists a decoding algorithm for the code that uses only linear combinations of index code symbols (as in Examples 1 and 2 in this paper and Fig.~2 in \cite{VaR}). \item Identify the IC structures, apart from those in Theorem \ref{thm2}, for which the codes obtained by \textit{Construction} 1 are decodable using \textit{Algorithm} 1. \end{itemize} \section*{Acknowledgements} This work was supported in part by the Science and Engineering Research Board (SERB) of the Department of Science and Technology (DST), Government of India, through a J.~C.~Bose National Fellowship to Professor B.~Sundar Rajan.
\section{Introduction} The theory of center manifolds provides a powerful method for the analysis of local bifurcations in infinite-dimensional systems associated with a PDE. Under certain conditions, center manifolds make it possible to reduce the infinite-dimensional dynamics near a bifurcation point to finite-dimensional dynamics, described by a system of ordinary differential equations on the center manifold, which is the submanifold of the space of solutions along which the PDE flow has sub-exponential growth. A novel categorical approach to the study of center manifolds was initiated by the authors in \cite{hkkp_semistability}. This led to the discovery of \begin{enumerate}[1)] \item The presence of \textit{iterated logarithms} in the asymptotics of solutions \item A theory of \textit{weight filtrations} in modular lattices \end{enumerate} While our previous work \cite{hkkp_semistability} provided a detailed study of the minimizing flow on the space of metrics of a quiver representation, the purpose of this work is to explore further generalizations and applications to classical PDEs, as well as to suggest conjectures and directions for future research. All this is part of a more general program, Categorical K\"ahler Geometry, to which we will return in more detail elsewhere. \subsubsection*{Hermitian Yang--Mills flow} A detailed study of the Yang--Mills functional for bundles over a compact Riemann surface $X$ with K\"ahler form $\omega$ was initiated by Atiyah--Bott~\cite{atiyah_bott}. The limiting behavior of the gradient flow of the Yang--Mills functional on the space of connections was determined by Daskalopoulos~\cite{daskalopoulos92} and R\aa de~\cite{rade92} and is given by the associated graded of the Harder--Narasimhan--Seshadri filtration.
On the other hand, since any metric on a holomorphic bundle $E$ over $X$ has an associated metric connection compatible with the complex structure, the Yang--Mills flow can be written as \begin{equation} h^{-1}\partial_t h=-2i(\Lambda F-\lambda) \end{equation} on the space of Hermitian metrics $h$ on $E$, where $F$ is the curvature of the connection associated with $h$. In general, $h$ will grow or decay at different rates on various subbundles in a way determined by a refinement of the Harder--Narasimhan filtration provided by the theory of weight filtrations in modular lattices (see \cite{hkkp_semistability} and Section~\ref{sec_weightfilt}). More precisely, we show the following. \begin{thm} \label{thm_hym_asymp} Let $X$ be a compact Riemann surface with K\"ahler form $\omega$ and $E$ be a holomorphic bundle on $X$. Then there exists a canonical filtration \begin{equation} 0=E_0\subset E_1\subset\ldots\subset E_n=E \end{equation} labelled by $\beta_1<\ldots<\beta_n$ with \begin{equation} \beta_k\in \RR t\oplus\RR\log t\oplus\RR\log\log t\oplus\ldots \end{equation} such that \begin{equation} \left\|\log\left(h\mid_{E_k}(x)\right)\right\|=\beta_k+O(1) \end{equation} where we choose some reference metric on $E$ so that $h$ becomes a positive self-adjoint section of $\mathrm{End}(E)$ and the bounded term $O(1)$ is uniform in $x\in X$. Moreover, $E_k/E_{k-1}$ is a direct sum of stable bundles of some slope $\mu_k\in\RR$ and \begin{equation} \beta_k=4\pi\left(\int_X\omega\right)^{-1}\left(\mu_k-\mu(E)\right)t+\ldots. \end{equation} \end{thm} The main ingredients of the proof are the theory of weight filtrations introduced in our previous work~\cite{hkkp_semistability} and a monotonicity property of the HYM flow (Theorem~\ref{prop_mono_hym}).
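For orientation, the leading linear-in-$t$ growth asserted by the theorem can be illustrated in a trivial decoupled toy model in Python. All numerical values below are invented, and nothing here engages with the actual PDE; the sketch only confirms that constant rates $4\pi(\int_X\omega)^{-1}(\mu_k-\mu(E))$ produce the stated slopes of $\log h$:

```python
import numpy as np

# Schematic toy model (sample values only, not a discretization of the flow):
# take h diagonal on a direct sum of summands, h = diag(e^{u_1}, ...), and
# evolve each u_k at the constant rate appearing as the leading coefficient
# of beta_k in the theorem: c * (mu_k - mu(E)) with c = 4*pi / int_X omega.
vol = 2.0                                # stand-in for int_X omega
mus = np.array([3.0, 1.0, -2.0])         # invented slopes mu_k of the summands
mu_E = mus.mean()                        # slope of E (equal ranks assumed)
c = 4 * np.pi / vol

u = np.zeros_like(mus)
dt, T = 0.01, 10.0
for _ in range(int(T / dt)):             # forward Euler (exact here: constant RHS)
    u += dt * c * (mus - mu_E)

slopes = u / T                           # measured growth rates of log h
print(np.allclose(slopes, c * (mus - mu_E)))  # True
```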
In the case when $X$ is a compact K\"ahler manifold of higher dimension, results on the limit of the Hermitian--Yang--Mills flow on the space of connections were obtained by Daskalopoulos--Wentworth~\cite{daskalopoulos_wentworth}, Jacob~\cite{jacob15}, Collins--Jacob~\cite{collins_jacob}, and Sibley--Wentworth~\cite{sibley_wentworth}. We anticipate that our result can be extended to higher dimensional $X$, but expect that it will describe the behavior of $h$ only outside a set of complex codimension two. \subsubsection*{Modified curve shortening flow} In Section~\ref{sec_curve_shortening} we explore the A-side analog of the HYM flow. The example considered is a type of curve shortening flow on a punctured cylinder. In explicit terms, the PDE is \begin{equation}\label{flow_pde} \partial_tf=\rho(x,f)\partial_{xx}f \end{equation} which is a non-linear modification of the standard 1-dimensional heat equation. The function $\rho$ is assumed to be positive except for a finite set of quadratic zeros along the $x$-axis which are the punctures of the cylinder. We provide a heuristic argument to show that this PDE reduces to a system of ODEs in variables $y_i=|f(x_i)|/\pi$, $i\in\ZZ/n$, of the form \begin{equation}\label{system_y} \frac{\dot{y}_i}{y_i}=\frac{\epsilon_{i-1}\epsilon_i}{m_i}y_{i-1}-\left(\frac{1}{m_i}+\frac{1}{m_{i+1}}\right)y_i+\frac{\epsilon_{i}\epsilon_{i+1}}{m_{i+1}}y_{i+1} \end{equation} The asymptotics of this system (which depend on the $m_i$, the distances between the punctures) were completely determined in~\cite{hkkp_semistability} and are related in this case to the structure of the partially wrapped Fukaya category of the punctured cylinder. \begin{conj} The PDE \eqref{flow_pde} has an $n$-dimensional center manifold on which the flow is approximated by the system \eqref{system_y} in the sense that error terms of solutions are bounded in coordinates $\log(y_i)$. \end{conj} \subsubsection*{Acknowledgements} We would like to thank S.~Donaldson for his encouragement and constant attention to our work.
We are also very grateful to P.~Griffiths, A.~Petkov, T.~Pantev, K.~Fukaya and C.~Simpson for several illuminating discussions we have had over the last months. We are thankful to the UMiami Applied Math seminar members S.~Cantrell, C.~Costner and S.G.~Ruan for useful suggestions and references. The authors were supported by a Simons Investigators Grant, a Simons Collaboration research grant, NSF DMS 150908, ERC Gemis, DMS-1265230, DMS-1201475, OISE-1242272 (PASI), a Simons collaborative Grant (HMS), and an HSE grant (HMS and automorphic forms). The second author is partially supported by the Laboratory of Mirror Symmetry NRU HSE, RF government grant, ag.~14.641.31.000. \section{Weight filtrations in modular lattices} \label{sec_weightfilt} This section presents a brief summary of the theory of stability in modular lattices and the weight--type filtrations introduced in our previous work~\cite{hkkp_semistability}, to which we refer for more examples and details. We begin with some basic definitions. A \textbf{lattice} is a partially ordered set, $L$, in which any two elements $a,b\in L$ have a least upper bound $a\vee b$ and greatest lower bound $a\wedge b$. The following two properties of a lattice will be crucial. \begin{itemize} \item \textbf{modularity}: $(x\wedge b)\vee (a\wedge b)=((x\wedge b)\vee a)\wedge b$ for all $x,a,b\in L$ \item \textbf{finite length}: There is an upper bound on the length, $n$, of any chain $a_0<a_1<\ldots<a_n$ of elements in $L$. \end{itemize} In particular, unless $L=\emptyset$, there are least and greatest elements $0$ and $1$ in any finite length lattice. A lattice is \textbf{artinian} if it is modular and has finite length. A rich class of examples to keep in mind is any collection of subspaces of a given finite-dimensional vector space which is closed under sum and intersection. There is a good theory of slope stability for artinian lattices. First, we need to define the analog of the Grothendieck K-group.
We use the notation $[a,b]:=\{x\in L\mid a\leq x\leq b\}$ for the interval from $a$ to $b$ in a lattice. Given an artinian lattice, $L$, we let $K(L)$ be the abelian group with generators $\overline{[a,b]}$, $a\leq b$, and relations \begin{equation} \overline{[a,b]}+\overline{[b,c]}=\overline{[a,c]},\qquad \overline{[a,a\vee b]}=\overline{[a\wedge b,b]}. \end{equation} Denote by $K^+(L)\subset K(L)$ the sub-semigroup generated by elements $\overline{[a,b]}$, $a<b$. The Jordan--H\"older--Dedekind theorem implies that $K(L)$ (resp. $K^+(L)$) is a free abelian group (resp. semigroup). Stability depends on a choice of \textbf{polarization} (or \textit{central charge}), which is a group homomorphism $Z:K(L)\to\CC$ such that the image of $K^+(L)$ is contained in the right half--plane $\{\mathrm{Re}(z)>0\}$. Then for each $a<b\in L$ we get a well-defined \textit{phase} \begin{equation} \phi([a,b]):=\mathrm{Arg}(Z(\overline{[a,b]}))\in (-\pi/2,\pi/2). \end{equation} A polarized lattice is \textbf{semistable} if $\phi([0,x])\leq\phi(L)$ for any $x\neq 0$. Any polarized artinian lattice breaks up canonically into semistable ones. More precisely, there is a unique chain $0=a_0<a_1<\ldots<a_n=1$, called the \textbf{Harder--Narasimhan filtration}, such that $[a_{k-1},a_k]$ is semistable for $k=1,\ldots,n$ and $\phi([a_0,a_1])>\ldots>\phi([a_{n-1},a_n])$. We remark that semistability imposes no restrictions on the underlying artinian lattice since we could have chosen $Z$ purely real, for instance. \begin{remark} While slope stability is often introduced in the context of abelian categories, it really depends only on the lattice structure. Furthermore, there are many natural examples of modular lattices which do not come from abelian categories. \end{remark} An $\RR$-filtration in $L$ is given by real numbers $\lambda_1<\ldots<\lambda_n$ and a chain $0=a_0<a_1<\ldots<a_n=1$. We think of $\lambda_k$ as being associated with the interval $[a_{k-1},a_k]$.
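As a toy illustration of these notions, consider the Boolean lattice of subsets of a finite set (any such lattice is modular and of finite length) with an additive central charge. The Python sketch below, with made-up charges, tests semistability and extracts the Harder--Narasimhan chain greedily; it is an illustration of the definitions only, not an implementation from \cite{hkkp_semistability}:

```python
import cmath
from itertools import combinations

# Toy polarized lattice: subsets of {0,...,n-1} ordered by inclusion.  The
# polarization is additive, Z([a,b]) = sum of charges z_i for i in b \ a,
# with each z_i in the right half-plane (sample values, an assumption).
z = [1 + 2j, 1 + 0j, 1 - 1j]

def phase(subset):
    """Phase of the interval [0, subset]; defined for nonempty subsets."""
    return cmath.phase(sum(z[i] for i in subset))

full = frozenset(range(len(z)))

def is_semistable():
    # phi([0, x]) <= phi(L) for every proper nonzero x
    return all(phase(frozenset(s)) <= phase(full)
               for r in range(1, len(z))
               for s in combinations(full, r))

def hn_chain():
    """Greedy Harder--Narasimhan chain: repeatedly adjoin the join of the
    atoms realizing the maximal phase among subobjects of the quotient."""
    chain, a = [frozenset()], frozenset()
    while a != full:
        rest = full - a
        best = max(phase(frozenset(s)) for r in range(1, len(rest) + 1)
                   for s in combinations(rest, r))
        step = frozenset(i for i in rest
                         if abs(phase(frozenset([i])) - best) < 1e-12)
        a = a | step
        chain.append(a)
    return chain

print(is_semistable())  # False: the summand of charge 1+2j destabilizes
print(hn_chain())       # phases strictly decrease along the chain
```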
A lattice is \textbf{complemented} if any $a\in L$ has a \textbf{complement}, i.e. an element $b\in L$ with $a\wedge b=0$, $a\vee b=1$. This corresponds to the notion of \textit{semisimplicity} in representation theory. An $\RR$-filtration $0=a_0<a_1<\ldots<a_n=1$ labeled by $\lambda_1<\ldots<\lambda_n$ is \textbf{paracomplemented} if for any $1\leq k\leq l\leq n$ with $\lambda_l-\lambda_k<1$ the interval $[a_{k-1},a_l]$ is complemented. Fixing such an $\RR$-filtration, there is another artinian lattice whose elements are given by choices of $b_k\in[a_{k-1},a_k]$, $k=1,\ldots,n$ such that $[b_k,b_l]$ is complemented for $1\leq k<l\leq n$ with $\lambda_l-\lambda_k\leq 1$. Denote this lattice by $\mathcal M(a,\lambda)$. If furthermore $L$ has an $\RR$-valued polarization $X:K(L)\to\RR$ then $\mathcal M(a,\lambda)$ can be given the polarization \begin{equation} Z([b,c]):=\sum_{k=1}^n(1+i\lambda_k)X([b_k,c_k]). \end{equation} The following theorem, proven in \cite{hkkp_semistability}, provides a canonical way of breaking up an artinian lattice with $\RR$-polarization into complemented ones. \begin{thmdef}\label{balanced_chain_thm} Let $L$ be an artinian lattice and $X:K(L)\to\RR$ an $\RR$-valued polarization. Then there exists a unique paracomplemented $\RR$-filtration $(a,\lambda)$, called the \textbf{weight filtration}, such that $\mathcal M(a,\lambda)$ is semistable with phase $\phi=0$. \end{thmdef} Note that the weight filtration is trivial (i.e. $n=1$, $\lambda_1=0$) if and only if $L$ is complemented. There is a sublattice \begin{equation}\label{sublattice_0} \mathcal M(a,\lambda)^0:=\{x\in \mathcal M(a,\lambda)\mid x=0\text{ or }\phi([0,x])=0\} \end{equation} and $Z$ restricts to an $\RR$--valued polarization on it. If this lattice is not complemented (which corresponds to the case when $L$ is \textit{polystable}) we may apply the theorem again and get a refined filtration of $L$ indexed by $\RR^2$ with the lexicographical order.
Proceeding inductively until reaching a complemented lattice we get the \textbf{iterated weight filtration} indexed by the space of functions \begin{equation} \RR^\infty=\RR\log t\oplus \RR\log\log t\oplus\ldots \end{equation} ordered by growth as $t\to+\infty$. Examples in \cite{hkkp_semistability} show that arbitrarily deep refinement can occur. \section{K\"ahler type DG--algebras} \label{sec_lozenge} In this section we introduce an algebraic framework which unifies both \begin{enumerate}[1)] \item $U(n)$ Yang--Mills bundles over Riemann surfaces and \item Quiver representations with harmonic metric. \end{enumerate} This framework will be used to construct asymptotic solutions to certain gradient flows by an iterative procedure. It is a generalization and reinterpretation of the $*$-bimodule formalism used in \cite{hkkp_semistability}. \subsection{Motivation} Let $X$ be a compact Riemann surface with K\"ahler form $\omega$ and $E$ a holomorphic bundle which is a direct sum of stable bundles of various slopes. According to a theorem of Narasimhan--Seshadri~\cite{narasimhan_seshadri}, there exists a Hermitian metric on $E$ such that the corresponding compatible connection has constant central curvature $F$, i.e. satisfies the Yang--Mills equation $d^*F=0$. A detailed study of the Yang--Mills functional for bundles over a Riemann surface was initiated by Atiyah--Bott~\cite{atiyah_bott}. The deformation theory of $E$ is controlled by the DG-algebra \begin{equation} A:=\mathcal A^\bullet(X,\mathrm{End}(E)) \end{equation} of forms with values in the endomorphism bundle of $E$. The differential on $A$ is induced by the connection on $E$ and the product is a combination of the wedge product of forms and the composition of endomorphisms. 
Moreover $A$ has a trace \begin{equation} \tau:A^2\to\mathbb C,\qquad \tau(\alpha)=\int_X\mathrm{tr}(\alpha), \end{equation} a $*$-structure $\alpha\mapsto\alpha^*$ coming from the metric on $E$, and a splitting $A^1=A^{1,0}\oplus A^{0,1}$ coming from the complex structure on $X$. The K\"ahler form $\omega$ on $X$ gives an isomorphism $L:A^0\to A^2$, $\alpha\mapsto \omega\wedge\alpha$. The constant central curvature condition on the metric on $E$ implies the relation \begin{equation} \Delta=2\Delta_\partial=2\Delta_{\overline{\partial}} \end{equation} among the Laplacians for the differentials $d,\partial,\overline{\partial}$ on $A$. We will encounter finite--dimensional instances of algebras with these kinds of structures in our analysis of the Yang--Mills flow. For this reason we summarize the relevant axiomatics below. \subsection{Lozenge algebras} A \textbf{curved DG-algebra} over a field $\mathbf{k}$ is a $\ZZ$-graded associative algebra, $A$, with unit $1$, $\mathbf{k}$-linear derivation $d:A^\bullet\to A^{\bullet+1}$, and element $\theta\in A^2$, the \textit{curvature}, such that $d\theta=0$ and $d^2a=[\theta,a]$ for any $a\in A$. An element $\alpha\in A^1$ gives rise to a deformation $(A,\widetilde{d},\widetilde{\theta})$ of $(A,d,\theta)$ where \begin{equation} \widetilde{d}a:=da+[\alpha,a], \qquad \widetilde{\theta}:=\theta+d\alpha+\alpha^2. \end{equation} In the following we will always have $d^2=0$, or equivalently that $\theta$ is central. In this case $A$ can be considered as an ordinary DG-algebra, but $\theta$ provides additional data. A \textbf{Calabi--Yau structure} of dimension $n$ on a DG-algebra $A$ over a field $\mathbf{k}$ with finite dimensional total cohomology $H^\bullet(A)$ is given by a linear functional \begin{equation} \tau:A^n\to \mathbf{k} \end{equation} called the \textit{trace} so that $\tau([a,b])=0$, $\tau(da)=0$ for all $a,b\in A$, and $(a,b)\mapsto \tau(ab)$ induces perfect pairings $H^k(A)\otimes H^{n-k}(A)\to\mathbf{k}$.
Here and throughout, $[a,b]$ denotes the supercommutator which is $ab-(-1)^{|a||b|}ba$ for homogeneous elements $a,b$ of degrees $|a|,|b|$ respectively. A \textbf{$*$--structure} on a curved DG-algebra $A$ over $\CC$ is given by a $\CC$-antilinear involution $a\mapsto a^*$ such that \begin{equation} (ab)^*=(-1)^{|a||b|}b^*a^*,\qquad (da)^*=da^*,\qquad 1^*=1, \qquad \theta^*=-\theta. \end{equation} See~\cite{deligne_morgan_susy} for the categorical justification for this convention. If $A$ has a trace then we require $\tau(a^*)=\overline{\tau(a)}$. \begin{df} A \textbf{lozenge algebra} is a curved DG-algebra $A$ concentrated in degrees $0,1,2$ with $d^2=0$, trace $\tau:A^2\to\CC$ giving a Calabi--Yau structure of dimension $2$, a direct sum decomposition $A^1=A^{1,0}\oplus A^{0,1}$ as an $A^0$ bimodule such that \begin{equation} (A^{1,0})^*=A^{0,1},\qquad A^{1,0}A^{1,0}=0 \end{equation} and an element $\omega\in A^2$ with \begin{equation} \omega^*=\omega,\qquad [\omega,a]=0 \end{equation} inducing an isomorphism $L:A^0\to A^2$, $a\mapsto \omega\wedge a$ with inverse $\Lambda:A^2\to A^0$. Furthermore, we require that the following bilinear maps \begin{gather} \overline{A^{1,0}}\otimes A^{1,0}\to\CC,\qquad a\otimes b\mapsto -i\tau(a^*b) \\ \overline{A^{0,1}}\otimes A^{0,1}\to\CC,\qquad a\otimes b\mapsto i\tau(a^*b) \\ \overline{A^0}\otimes A^0\to\CC,\qquad a\otimes b\mapsto \tau(\omega a^*b) \end{gather} are positive definite, thus providing scalar products on $A^0$ and $A^1$. Finally, the Yang--Mills condition on the curvature \begin{equation}\label{abstract_ym_condition} d\Lambda\theta=0 \end{equation} should hold. \end{df} Suppose $A$ is a lozenge algebra, then the differential $d:A^0\to A^1$ is a sum of $\partial: A^0\to A^{1,0}$ and $\overline{\partial}: A^0\to A^{0,1}$, and similarly $d:A^1\to A^2$ is a sum of $\partial: A^{0,1}\to A^2$ and $\overline{\partial}: A^{1,0}\to A^2$. 
\begin{equation} \begin{tikzcd} & A^{1,0} \arrow{dr}{\overline{\partial}} \\ A^0\arrow{ur}{\partial}\arrow{dr}{\overline{\partial}} & & A^2 \\ & A^{0,1} \arrow{ur}{\partial} \end{tikzcd} \end{equation} The usual first-order K\"ahler identities can be used to define adjoints of these differentials. \begin{lem} Adjoints of $\partial$, $\overline{\partial}$, and $d$ are given by \begin{equation}\label{kaehler_ident1} \partial^*=i[\Lambda,\overline{\partial}],\qquad \overline{\partial}^*=-i[\Lambda,\partial],\qquad d^*=i[\Lambda,\overline{\partial}-\partial] \end{equation} respectively. \end{lem} \begin{proof} We consider $\partial:A^0\to A^{1,0}$, the other cases are entirely similar. Let $a\in A^0$ and $b\in A^{1,0}$, then \begin{equation} \langle \partial a,b\rangle = -i\tau\left((\partial a)^*b\right)=-i\tau(\overline{\partial} a^*b)=i\tau(a^* \overline{\partial} b)=\tau(\omega a^*\partial^*b)=\langle a,\partial^*b\rangle \end{equation} where we used $\tau(\overline{\partial}(a^*b))=\tau(d(a^*b))=0$. \end{proof} The three Laplacians \begin{equation} \Delta=dd^*+d^*d,\qquad \Delta_{\overline{\partial}}=\overline{\partial}\overline{\partial}^*+\overline{\partial}^*\overline{\partial},\qquad \Delta_{\partial}=\partial\partial^*+\partial^*\partial \end{equation} are related by the following lemma. \begin{lem} \begin{equation}\label{kaehler_ident2} \Delta=2\Delta_{\overline{\partial}}=2\Delta_{\partial} \end{equation} \end{lem} \begin{proof} First, \begin{equation} \Delta=(\partial^*+\overline{\partial}^*)(\partial+\overline{\partial})+(\partial+\overline{\partial})(\partial^*+\overline{\partial}^*)=\Delta_\partial+\Delta_{\overline{\partial}}. 
\end{equation} Since $d^2=0$ by assumption we get $[\partial,\overline{\partial}]=0$ (anticommutator) and by \eqref{kaehler_ident1}: \begin{equation} \Delta_\partial=[\partial^*,\partial]=i[[\Lambda,\overline{\partial}],\partial]=-i[[\Lambda,\partial],\overline{\partial}]=[\overline{\partial}^*,\overline{\partial}]=\Delta_{\overline{\partial}} \end{equation} \end{proof} \begin{lem}\label{lem_lozenge_semisimple} If $A$ is a lozenge algebra with $A^0$ finite--dimensional, then $A^0$ is a $C^*$-algebra, in particular semisimple, i.e. a product of matrix algebras. \end{lem} \begin{proof} By assumption, $A^0$ has an inner product $\langle a,b\rangle=\tau(\omega a^*b)$. The (faithful) regular representation $A^0\to\mathrm{End}(A^0)$ by left multiplication of $A^0$ on itself is a $*$-representation since \begin{equation} \langle l_ab,c\rangle=\tau(\omega b^*a^*c)=\langle b,l_{a^*}c\rangle \end{equation} i.e. $(l_a)^*=l_{a^*}$. Thus, if $A^0$ is finite--dimensional, then $A^0$ is a $*$-subalgebra of the finite--dimensional $C^*$-algebra $\mathrm{End}(A^0)$. Any finite--dimensional $C^*$-algebra is a product of matrix algebras. \end{proof} The lemma can be used to classify finite--dimensional lozenge algebras. We have \begin{equation} A^0\cong\mathrm{End}(\mathcal E_1)\oplus\ldots\oplus\mathrm{End}(\mathcal E_n) \end{equation} for some finite--dimensional Hermitian vector spaces $\mathcal E_i$. Any bimodule, in particular $A^{0,1}$, is a direct sum of simple bimodules $\mathrm{Hom}(\mathcal E_i,\mathcal E_j)$. If we associate an arrow $i\to j$ to each such simple summand we get a quiver with vertices $\{1,\ldots,n\}$. Furthermore, since $A^0$ is semisimple, the derivation $\overline{\partial}:A^0\to A^{0,1}$ must be inner, i.e. of the form $a\mapsto [\alpha,a]$ for some $\alpha\in A^{0,1}$. The $\mathcal E_i$ together with $\alpha$ are exactly the data of a representation of the quiver.
This shows that in the finite--dimensional case we recover precisely the setup considered in the previous paper \cite{hkkp_semistability} but using a slightly different formalism. The table below describes the translation between the two formalisms. \begin{center} \begin{tabular}{|c|c|} \hline Bimodule formalism & Lozenge algebra \\ \hline \hline $B$ & $A^0\cong A^2$ via $\omega$ \\ \hline $\overline{M}\oplus M$ & $A^1=A^{1,0}\oplus A^{0,1}$ \\ \hline $M\otimes\overline{M}\to B$ & $(a,b)\mapsto -i\Lambda(ab)$ \\ \hline $\overline{M}\otimes M\to B$ & $(a,b)\mapsto i\Lambda(ab)$ \\ \hline $\rho$ & $i\theta/\omega$ \\ \hline $\tau$ & $b\mapsto \tau(\omega b)$ \\ \hline $b\mapsto [\phi_0,b]$ & $\overline{\partial}$ \\ \hline \end{tabular} \end{center} \subsection{The harmonic part of the algebra} Let $A$ be a lozenge algebra. In particular, the cohomology $H(A)$ is assumed to be finite--dimensional. This implies that the subspace of harmonic chains \begin{equation}\label{harmonic_part} \mathcal H=\mathrm{Ker}(\Delta)\subset A \end{equation} is finite--dimensional, and as a consequence there exists an orthogonal projection $P:A\to \mathcal H$. The restricted operator \begin{equation} \Delta\mid_{\mathcal H^\perp}:\mathcal H^\perp\to\mathcal H^\perp \end{equation} is then injective, but could fail to be surjective. We assume henceforth that it is surjective, which holds automatically if $A$ is finite--dimensional, and also in the vector bundle case by harmonic theory. Then there is a unique \textit{Green's operator} $G:A\to A$ with \begin{equation}\label{greens} PG=GP=0,\qquad P+\Delta G=P+G\Delta=\mathrm{id}_A. \end{equation} Since $\Delta$ commutes with $d$ and $d^*$, the same is also true for $G$. Under the assumption of existence of a Green's operator the cohomology of $A$ is isomorphic to $\mathcal H$ as a vector space; however, $\mathcal H$ is in general not a subalgebra of $A$.
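In the finite--dimensional case the Green's operator is simply the Moore--Penrose pseudoinverse of $\Delta$. The following numerical toy (an illustration added here, not part of the construction; the random matrices are arbitrary) checks the defining identities \eqref{greens} for a positive semidefinite "Laplacian" with nontrivial kernel:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random self-adjoint positive semidefinite "Laplacian" with kernel:
# Delta = B^T B with B of rank 3 on a 5-dimensional space, mimicking
# Delta = d d^* + d^* d with finite-dimensional harmonic subspace.
B = rng.standard_normal((3, 5))
Delta = B.T @ B

# Green's operator G = pseudoinverse of Delta; P = projection onto Ker(Delta).
G = np.linalg.pinv(Delta)
P = np.eye(5) - Delta @ G

# Defining identities (greens): PG = GP = 0 and P + Delta G = P + G Delta = id.
assert np.allclose(P @ G, 0) and np.allclose(G @ P, 0)
assert np.allclose(P + Delta @ G, np.eye(5))
assert np.allclose(P + G @ Delta, np.eye(5))
# P is indeed an orthogonal projection.
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
```

Uniqueness of $G$ corresponds to the uniqueness of the Moore--Penrose pseudoinverse of a symmetric matrix.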
\begin{lem} Let $A$ be a lozenge algebra with Green's operator, then $\mathcal H=\mathrm{Ker}(\Delta)$ is a lozenge algebra with $d=0$, the same curvature $\theta\in A^2$, $\omega\in A^2$, restricted trace $\tau\mid_{\mathcal H}$, and product $\mathcal H^1\otimes \mathcal H^1\to \mathcal H^2$ the composition \begin{equation} \mathcal H^1\otimes \mathcal H^1\longrightarrow A^2 \xrightarrow{P} \mathcal H^2 \end{equation} of the product on $A$ and the projection to the harmonic subspace. Moreover, $\mathcal H$ is isomorphic to $H(A)$ and quasi-isomorphic to $A$ (i.e. $A$ is formal). \end{lem} \begin{proof} Taking into account that $A$ is concentrated in degrees $0,1,2$ and the K\"ahler identities \eqref{kaehler_ident1}, \eqref{kaehler_ident2} we see that harmonicity can be characterized by \begin{gather} a\in A^0: \Delta a=0 \Leftrightarrow \partial a=0 \Leftrightarrow \overline{\partial}a=0 \\ a\in A^{1,0}: \Delta a=0 \Leftrightarrow \partial^*a=0 \Leftrightarrow \overline{\partial}a=0 \\ a\in A^{0,1}: \Delta a=0 \Leftrightarrow \partial a=0 \Leftrightarrow \overline{\partial}^*a=0 \\ a\in A^2: \Delta a=0 \Leftrightarrow \partial^*a=0 \Leftrightarrow \overline{\partial}^*a=0 \end{gather} This shows that $\mathcal H^0\subset A^0$ is a subalgebra, $\mathcal H^1\subset A^1$ is a sub-bimodule over $\mathcal H^0$, and $\mathcal H^2\subset A^2$ is also a sub-bimodule over $\mathcal H^0$ since if $a\in\mathcal H^0$, $b\in\mathcal H^2$ then \begin{equation} \partial^*(ab)=-i\overline{\partial}\Lambda(ab)=-i\overline{\partial}a\Lambda(b)=-ia\overline{\partial}\Lambda(b)=a\partial^*b=0 \end{equation} thus $ab\in\mathcal H^2$. Furthermore $L$ and $\Lambda$ restrict to inverse isomorphisms between $\mathcal H^0$ and $\mathcal H^2$ since $d^*\omega=0$. The YM condition \eqref{abstract_ym_condition} is precisely that $\theta\in \mathcal H^2$. One can show directly that $\mathcal H$ is an algebra or deduce this from the isomorphism with $H(A)$. 
The point is that there are quasi--isomorphisms of DG-algebras \begin{equation} (A,d)\hookleftarrow (\mathrm{Ker}(d^c),d) \rightarrow (H(A),0) \end{equation} where $d^c=i(\overline{\partial}-\partial)$. This follows from the $dd^c$-lemma as in Deligne--Griffiths--Morgan--Sullivan~\cite{dgms}. \end{proof} \subsection{Gauge group action and flow} By \textit{gauge group} we mean here the group $\mathcal G$ of invertible elements in $A^0$. If $A^0$ is finite--dimensional, then $\mathcal G$ is isomorphic to a product of general linear groups, and if $A$ comes from a vector bundle $E$ then $\mathcal G$ is the group of automorphisms of $E$ as a complex vector bundle. If $\alpha''\in A^{0,1}$ and $g\in\mathcal G$ then the action by gauge transformations is defined by \begin{equation} g\cdot\alpha''=g\alpha''g^{-1}-\overline{\partial}gg^{-1}. \end{equation} We extend this action to $A^1$ so that if $\alpha^*=-\alpha$ then $(g\cdot\alpha)^*=-g\cdot\alpha$. Explicitly, we get the formula \begin{equation} g\cdot \alpha:=g^{*-1}\alpha'g^*+g^{*-1}\partial g^{*} + g\alpha''g^{-1}-\overline{\partial} gg^{-1} \end{equation} for $\alpha=\alpha'+\alpha''\in A^{1,0}\oplus A^{0,1}$ and $g\in \mathcal G$. Given $\alpha\in A^1$ the associated curvature is \begin{equation} F:=\theta+d\alpha+\alpha^2 \end{equation} and under the condition $\alpha^*=-\alpha$ we have $F^*=-F$. Given a fixed $\alpha\in A^1$ we can consider at least formally the flow \begin{gather} \dot{g}g^{-1}=-i(\Lambda F-\lambda) \\ F=\theta+d(g\cdot\alpha)+(g\cdot\alpha)^2 \end{gather} where $\dot{g}=dg/dt$ and $\lambda$ is chosen so that $\tau(\omega(\Lambda F-\lambda))=0$, i.e. $\lambda:=\tau(\theta)/\tau(\omega)$. If $\alpha^*=-\alpha$ then the right hand side is $*$-invariant and we replace the left hand side by the Hermitian part: \begin{equation}\label{flow_abstract_gauge} \frac{1}{2}\left(\dot{g}g^{-1}+(\dot{g}g^{-1})^*\right)=-i(\Lambda F-\lambda) \end{equation} This allows $g$ to be multiplied by arbitrary unitary elements.
In terms of $h=g^*g$ the curvature is \begin{gather} g^{-1}\left(\theta+d(g\cdot\alpha)+(g\cdot\alpha)^2\right)g=\theta +d\left(A_{\alpha,h}\right)+\left(A_{\alpha,h}\right)^2 \\ A_{\alpha,h}:=\alpha''+h^{-1}\alpha'h+h^{-1}\partial h \end{gather} so the flow \eqref{flow_abstract_gauge} becomes \begin{equation}\label{flow_abstract_metric} h^{-1}\dot{h}=-2i\left(\Lambda\left(\theta+d\left(A_{\alpha,h}\right)+\left(A_{\alpha,h}\right)^2\right)-\lambda\right). \end{equation} If $A$ is finite--dimensional then this is the flow considered in \cite{hkkp_semistability} up to a factor of 2. Note that in this case there is a partial order on self-adjoint elements of $A^0$ such that positive elements are those with non-negative spectrum. For the rest of this subsection assume that $A$ is finite--dimensional unless otherwise stated. The following results are established in \cite{hkkp_semistability}. In Section~\ref{sec_hym} we will prove versions of these results in the Hermitian vector bundle case. \begin{prop}[Monotonicity] \label{prop_mono_quiver} Let $g_t,h_t$ be solutions of \eqref{flow_abstract_metric} for $t\geq 0$ with $g_0\leq h_0$, then $g_t\leq h_t$ for all $t\geq 0$. \end{prop} \begin{coro}[Uniqueness of asymptotics] Let $g_t,h_t$ be solutions of \eqref{flow_abstract_metric} for $t\geq 0$. Then there is a constant $C\geq 1$ such that \begin{equation} \frac{1}{C}g_t\leq h_t\leq Cg_t \end{equation} for $t\geq 0$. \end{coro} We call $h$ an \textbf{asymptotic solution} of \eqref{flow_abstract_metric} if for some (hence any) exact solution $k$ there is a constant $C\geq 1$ such that \begin{equation} \frac{1}{C}k_t\leq h_t\leq Ck_t \end{equation} for all $t\gg 0$. \begin{prop}[Criterion for asymptotic solutions] \label{prop_asym_criterion} Suppose $g$ satisfies \eqref{flow_abstract_gauge} up to an error term $s$, i.e. 
\begin{equation} \frac{1}{2}\left(\dot{g}g^{-1}+(\dot{g}g^{-1})^*\right)=-i\left(\Lambda F-\lambda\right)+s \end{equation} where $s$ is a smooth $t$-dependent self-adjoint element of $A^0$, and furthermore \begin{equation} -f\leq s\leq f \end{equation} for some smooth $L^1$ function $f:[0,\infty)\to[0,\infty)$. Then $h:=g^*g$ is an asymptotic solution of \eqref{flow_abstract_metric}. \end{prop} King's theorem~\cite{king94} relates existence of a fixed point of the flow to slope stability. Let $A$ be a lozenge algebra, not necessarily finite--dimensional. By assumption, $\mathrm{Ker}(d:A^0\to A^1)$ is finite--dimensional, thus a direct sum of matrix algebras $\mathrm{End}(\mathcal E_i)$ for some finite--dimensional Hermitian vector spaces $\mathcal E_i$ (proof as for Lemma~\ref{lem_lozenge_semisimple}). The partially ordered set of collections of subspaces $V_i\subset\mathcal E_i$ for each $i$ is a modular lattice which can be equivalently described in terms of orthogonal projectors as \begin{equation} \mathcal M(A):=\left\{p\in A^0\mid p^2=p, p^*=p, dp=0\right\}. \end{equation} The partial order by inclusion translates to $p\leq q\Leftrightarrow qp=p\Leftrightarrow pq=p$. The central elements $\theta,\omega\in A^2$ together give a polarization \begin{equation} Z([p,q]):=\tau\left((\omega-\theta)(q-p)\right) \end{equation} of the lattice. Note that the real part of $Z$ depends on $\omega$ and the imaginary part on $\theta$. Given $\alpha''\in A^{0,1}$ we may also consider the sublattice \begin{equation} \mathcal M(A,\alpha''):=\left\{p\in\mathcal M(A)\mid (1-p)\alpha'' p=0\right\} \end{equation} of projectors compatible with $\alpha''$. In contrast to $\mathcal M(A)$, this lattice need not be complemented. \begin{df} A polarized lattice $(L,Z)$ is \textbf{polystable} of phase $\phi\in \RR$ if $\phi(L)=\phi$, $\phi([0,x])\leq\phi(L)$ for any $x\neq 0$, and if $\phi([0,x])=\phi(L)$ then $x$ has a complement in $L$.
Equivalently, $(L,Z)$ is semistable and $L^\phi$ is complemented in the notation of \eqref{sublattice_0}. \end{df} \begin{thm}[King] \label{thm_king} Let $A$ be a finite--dimensional lozenge algebra, $\alpha=\alpha'+\alpha''\in A^{1,0}\oplus A^{0,1}$ with $\alpha^*=-\alpha$, then there exists a $g\in\mathcal G$ such that \begin{equation} \Lambda\left(\theta+d(g\cdot\alpha)+(g\cdot\alpha)^2\right)=\lambda \end{equation} i.e. a fixed point of the flow \eqref{flow_abstract_gauge}, if and only if $\mathcal M(A,\alpha'')$ is polystable of phase $\phi=\arctan(i\lambda)$. \end{thm} \subsection{Constructing asymptotic solutions} In this subsection we construct asymptotic solutions of \eqref{flow_abstract_gauge} by relating the theory of weight filtrations in modular lattices, as outlined in Section~\ref{sec_weightfilt}, to the properties of lozenge algebras. If $\mathcal M(A,\alpha'')$ is semistable of phase $0$ for some $\alpha''\in A^{0,1}$, meaning that $\tau(\theta)=0$ and $\tau(i\theta p)\leq 0$ for any $p\in \mathcal M(A,\alpha'')$, then $Z$ restricts to an $\RR$-valued polarization on the sublattice $\mathcal M(A,\alpha'')^0$ of semistables of phase $0$ (see \eqref{sublattice_0}). \begin{df} Let $A$ be a lozenge algebra with $\mathcal M(A,\alpha'')$ semistable. Theorem~\ref{balanced_chain_thm} provides a weight filtration of $\mathcal M(A,\alpha'')^0$ of the form $0=p_0<p_1<\ldots<p_n=1$ labeled by real numbers $\lambda_1<\ldots<\lambda_n$. Define projectors $p_\lambda\in\mathcal M(A)$ by \begin{equation} p_\lambda:=\begin{cases} p_{k}-p_{k-1} & \text{ if }\lambda=\lambda_k \\ 0 & \text{ else } \end{cases} \end{equation} for $\lambda\in\RR$. The \textbf{weight grading} $r\in A^0$ is given by \begin{equation} r:=\sum_{\lambda\in\RR}\lambda p_\lambda \end{equation} which is harmonic ($dr=0$) and self-adjoint ($r^*=r$) by construction. \end{df} The degree $\lambda$ part of $\alpha''$ is \begin{equation} \alpha_\lambda'':=\sum_{\mu\in\RR}p_{\mu+\lambda}\alpha''p_\mu.
\end{equation} \begin{lem} $\alpha_\lambda''=0$ for $\lambda>0$, i.e. $\alpha''$ is upper--triangular with respect to the $\RR$--grading. \end{lem} \begin{proof} We need to check that $p_{\mu+\lambda}\alpha''p_\mu=0$ for $\lambda>0$. Assume $p_{\mu+\lambda}\neq 0$ and $p_\mu\neq 0$, then $p_\mu=p_k-p_{k-1}$ and $p_{\mu+\lambda}=p_m-p_{m-1}$ for some $k<m$. We have $p_k-p_{k-1}\leq p_k$ and $p_m-p_{m-1}\leq 1-p_k$ thus \begin{equation} p_{\mu+\lambda}\alpha''p_\mu=(p_m-p_{m-1})(1-p_k)\alpha''p_k(p_k-p_{k-1})=0 \end{equation} where we use $p_k\in \mathcal M(A,\alpha'')$. \end{proof} \begin{lem} The polarized lattice $(\mathcal M(A,\alpha''_0),Z)$ is polystable. \end{lem} \begin{proof} We need to show that $\mathcal M(A,\alpha''_0)^0$ is complemented. By definition of the weight filtration, each interval $[p_{k-1},p_k]$ in $\mathcal M(A,\alpha'')^0$ is complemented, so it suffices to show that each $p_k$ has a complement in $\mathcal M(A,\alpha''_0)^0$. Since $p_k\in \mathcal M(A,\alpha'')$ and by definition of $\alpha''_0$ we have $(1-p_k)\alpha_0''p_k=0$ but $[p_k,\alpha_0'']=0$ so $p_k\alpha_0''(1-p_k)=0$ and $1-p_k\in \mathcal M(A,\alpha''_0)^0$ is a complement of $p_k$. \end{proof} Thus, if $A$ is finite-dimensional then Theorem~\ref{thm_king} tells us that after applying a gauge transformation we can assume $\alpha_0:=\alpha_0''-(\alpha_0'')^*$ is harmonic in the sense that \begin{equation}\label{alpha0_harm} \theta+d\alpha_0+(\alpha_0)^2=0 \end{equation} (since $\mathcal M(A,\alpha'')$ is semistable of phase $0$ we have $\tau(\theta)=0$ and hence $\lambda=0$). This means in particular that if we change the differential to $b\mapsto db+[\alpha_0,b]$ and the curvature $\theta$ to $0$ then we still have a lozenge algebra. Thus we may assume that $\alpha_0=0$, since simultaneously twisting the differential by $\alpha_0$ and removing this term from $\alpha$ does not change the flow \eqref{flow_abstract_gauge}.
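The grading and the upper--triangularity lemma can be illustrated in a small matrix toy (purely illustrative; the block sizes, weights, and random entries are hypothetical choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two weight blocks on C^4 with weights -1 and 1, i.e. the projectors
# p_lambda of a weight filtration 0 = p_0 < p_1 < p_2 = 1.
weights = {-1.0: np.diag([1., 1., 0., 0.]),   # p_{lambda_1}
            1.0: np.diag([0., 0., 1., 1.])}   # p_{lambda_2}

# alpha'' compatible with the filtration: (1 - p_1) alpha'' p_1 = 0,
# i.e. the lower-left block vanishes.
alpha = rng.standard_normal((4, 4))
alpha[2:, :2] = 0.0

def degree_part(alpha, lam):
    """alpha''_lam = sum_mu p_{mu + lam} alpha'' p_mu (empty sum is zero)."""
    out = np.zeros_like(alpha)
    for mu, p_mu in weights.items():
        p_shift = weights.get(mu + lam)
        if p_shift is not None:
            out += p_shift @ alpha @ p_mu
    return out

# Lemma: alpha''_lam = 0 for lam > 0, and the degree parts recover alpha''.
assert np.allclose(degree_part(alpha, 2.0), 0)
assert np.allclose(degree_part(alpha, 0.0) + degree_part(alpha, -2.0), alpha)
```

Here the only possibly nonzero degrees are $0$ (the diagonal blocks) and $-2$ (the upper-right block), matching the lemma.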
The property of the weight filtration that $[p_{k-1},p_l]$ is complemented for $\lambda_l-\lambda_k<1$, $k\leq l$ implies that after applying a gauge transformation by a harmonic invertible element in $A^0$ taking a splitting to an orthogonal one we can ensure that \begin{equation}\label{alpha01_vanish} \alpha''_\lambda=0\qquad \text{ for }\lambda\in(-1,0). \end{equation} Moreover, we may assume that \begin{equation}\label{alpha1_harm} \partial\alpha_\lambda''=0 \text{ for }\lambda\leq -1 \end{equation} i.e. we have harmonic representatives of the extension classes. We can use $r$ and $\alpha''$ to construct a new lozenge algebra, $A_\diamond$, which consists of harmonic chains of a certain degree with respect to $r$, thought of as an $\RR$-grading. Precisely, set \begin{gather} A_\diamond^0=\{a\in A^0\mid \Delta a=0,[r,a]=0\} \\ A_\diamond^{1,0}=\{a\in A^{1,0}\mid \Delta a=0,[r,a]=a\} \\ A_\diamond^{0,1}=\{a\in A^{0,1}\mid \Delta a=0,[r,a]=-a\} \\ A_\diamond^2=\{a\in A^2\mid \Delta a=0,[r,a]=0\} \\ d_\diamond=0,\qquad \theta_\diamond=-i\omega r,\qquad \tau_\diamond=\tau,\qquad \omega_\diamond=\omega. \end{gather} The product $A_\diamond^{1,0}\otimes A_\diamond^{0,1}\to A_\diamond^2$ is by definition the product from $A$ composed with the projection to the harmonic part, which will then be in $A_\diamond^2$ automatically. It follows from \eqref{alpha0_harm} that $d^2=0$ in this algebra, and $\theta_\diamond$ is central by construction. Also, $\alpha_{-1}''\in A_\diamond^{0,1}$ by \eqref{alpha1_harm}, and $\mathcal M(A_\diamond,\alpha_{-1}'')$ is semistable of phase $0$ by definition of the weight filtration/grading. \subsubsection*{Changing time scale} The standing assumptions are that $A$ is a lozenge algebra and $\alpha''\in A^{0,1}$ such that $\alpha_0=0$, \eqref{alpha01_vanish}, and \eqref{alpha1_harm} hold.
Suppose $x(t)\in A_\diamond^0$ solves \begin{equation} \frac{1}{2}\left(\dot{x}x^{-1}+\left(\dot{x}x^{-1}\right)^*\right)=-i\Lambda P\left(\left(x\cdot \alpha_{-1}\right)^2\right)-r \end{equation} which is just the flow \eqref{flow_abstract_gauge} in $A_\diamond$ for $\alpha_{-1}:=\alpha_{-1}''-(\alpha_{-1}'')^*$. We want to construct a solution which solves the above equation but without the $r$ term. Set \begin{equation} y(t):=(2t)^{r/2}x\left(\frac{1}{2}\log(2t)\right) \end{equation} then \begin{equation} \dot{y}=\left(r(2t)^{r/2-1}x\left(\frac{1}{2}\log(2t)\right)+(2t)^{r/2}\dot{x}\left(\frac{1}{2}\log(2t)\right)(2t)^{-1}\right) \end{equation} and \begin{equation} \dot{y}y^{-1}=(2t)^{-1}\left(r+\dot{x}\left(\frac{1}{2}\log(2t)\right)\left(x\left(\frac{1}{2}\log(2t)\right)\right)^{-1}\right) \end{equation} where we use $[x,r]=0$. The left hand side is thus \begin{equation} \frac{1}{2}\left(\dot{y}y^{-1}+\left(\dot{y}y^{-1}\right)^*\right)=(2t)^{-1}\left(r+\frac{1}{2}\left(\dot{x}x^{-1}+\left(\dot{x}x^{-1}\right)^*\right)\right). \end{equation} On the other hand \begin{align} y\cdot\alpha_{-1} &= y^{*-1}\alpha_{-1}'y^*+y\alpha_{-1}''y^{-1} \\ &= (2t)^{-r/2}\left(x^{*-1}\alpha_{-1}'x^*\right)(2t)^{r/2}+(2t)^{r/2}\left(x\alpha_{-1}''x^{-1}\right)(2t)^{-r/2} \\ &= (2t)^{-1/2}\left(x\cdot\alpha_{-1}\right) \end{align} since $\alpha_{-1}'$ has $r$-degree $1$ and $\alpha_{-1}''$ has $r$-degree $-1$ and thus \begin{equation} \Lambda P\left(\left(y\cdot \alpha_{-1}\right)^2\right)=(2t)^{-1}\Lambda P\left(\left(x\cdot \alpha_{-1}\right)^2\right). \end{equation} Combining this we see that indeed \begin{equation} \frac{1}{2}\left(\dot{y}y^{-1}+\left(\dot{y}y^{-1}\right)^*\right)=-i\Lambda P\left(\left(y\cdot \alpha_{-1}\right)^2\right). \end{equation} Below we will modify $y$ so that it satisfies this equation, up to terms in $L^1$, but without the orthogonal projection $P$. 
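The substitution $y(t)=(2t)^{r/2}x(\tfrac{1}{2}\log(2t))$ can be sanity-checked numerically in the diagonal (hence commuting) toy case; the matrices below are arbitrary illustrative choices, not from the text:

```python
import numpy as np

# Check dy/dt y^{-1} = (2t)^{-1} (r + dx/ds x^{-1}), using [x, r] = 0.
r = np.diag([1.0, -1.0, 0.5])

def x(s):
    # any family commuting with r; here diagonal, so [x, r] = 0
    return np.diag(np.exp(np.array([0.3, -0.2, 0.1]) * s))

def y(t):
    # y(t) = (2t)^{r/2} x(log(2t)/2)
    return np.diag((2 * t) ** (np.diag(r) / 2)) @ x(0.5 * np.log(2 * t))

t, eps = 3.0, 1e-6
# left hand side via a central finite difference
dy = (y(t + eps) - y(t - eps)) / (2 * eps)
lhs = dy @ np.linalg.inv(y(t))

# right hand side from the formula in the text
s = 0.5 * np.log(2 * t)
dx = (x(s + eps) - x(s - eps)) / (2 * eps)
rhs = (r + dx @ np.linalg.inv(x(s))) / (2 * t)

assert np.allclose(lhs, rhs, atol=1e-5)
```

In the diagonal case the conjugations $(2t)^{\mp r/2}(\cdot)(2t)^{\pm r/2}$ are trivial, which isolates the claimed identity for $\dot{y}y^{-1}$.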
\subsubsection*{Green's operator correction} Let $x,y$ be as before and consider \begin{gather} w:=-i\Lambda\left(\left(y\cdot\alpha_{-1}\right)^2\right) \\ z:=y\left(1+G\left(y^{-1}wy\right)\right) \end{gather} where $G$ is the Green's operator as in \eqref{greens}. The factor $\left(1+G(y^{-1}wy)\right)$ has no effect on the asymptotics, but is needed to get a solution up to terms in $L^1$. Indeed, we will show that $z$ satisfies \begin{equation}\label{z_eqn} \frac{1}{2}\left(\dot{z}z^{-1}+\left(\dot{z}z^{-1}\right)^*\right)=-i\Lambda \left(d\left(z\cdot \alpha_{-1}\right)+\left(z\cdot \alpha_{-1}\right)^2\right)+\text{ terms in }L^1 \end{equation} and is thus an asymptotic solution. We write $O(t^\beta\mathcal L)$ for terms which are $O(t^{\beta})$ up to logarithmic corrections, e.g. $O(t^{\beta}\log t)$, $O(t^\beta\log t\log\log t)$, and so on. As $\alpha_{-1}''$ has $r$-degree $-1$ and $w$ has $r$-degree $0$ we have \begin{equation} y\cdot\alpha_{-1}=O(t^{-1/2}\mathcal L),\qquad w=O(t^{-1}\mathcal L),\qquad G(y^{-1}wy)=O(t^{-1}\mathcal L). \end{equation} Consequently, \begin{equation} \left(1+G(y^{-1}wy)\right)^{-1}=\left(1-G(y^{-1}wy)\right)+O(t^{-2}\mathcal L) \end{equation} and \begin{equation} \dot{z}=\dot{y}\left(1+G(y^{-1}wy)\right)+yO(t^{-2}\mathcal L) \end{equation} where the terms in $O(t^{-2}\mathcal L)$ are of $r$-degree $0$, hence \begin{equation} \dot{z}z^{-1}=\dot{y}y^{-1}+O(t^{-2}\mathcal L). \end{equation} Next, we look at the right hand side.
We have \begin{align} \overline{\partial}zz^{-1} &= y\overline{\partial}G(y^{-1}wy)\left(1-G(y^{-1}wy)\right)y^{-1}+O(t^{-2}\mathcal L) \\ &=\overline{\partial}G(w)+O(t^{-2}\mathcal L) \end{align} and similarly \begin{equation} z^{*-1}\partial z^*=\partial G(w)+O(t^{-2}\mathcal L) \end{equation} thus using the K\"ahler identities \eqref{kaehler_ident1} and the defining property of $G$ \eqref{greens} we get \begin{align} -i\Lambda d\left(z^{*-1}\partial z^*-\overline{\partial}zz^{-1}\right) &=-2i\Lambda\overline{\partial}\partial G(w)+O(t^{-2}\mathcal L) \\ &=- \Delta Gw+O(t^{-2}\mathcal L) \\ &=(P-1)w+O(t^{-2}\mathcal L) \end{align} Recall that $\alpha_0=0$ and that \eqref{alpha01_vanish} holds, thus \begin{equation} \alpha''=\alpha_{-1}''+\nu'' \end{equation} where $\nu''$ collects components of $r$-degree $<-1-\varepsilon$ for some $\varepsilon>0$. Hence, \begin{align} z\alpha''z^{-1} &= y\left(1+G\left(y^{-1}wy\right)\right)\alpha''\left(1-G\left(y^{-1}wy\right)\right)y^{-1}+O(t^{-2}\mathcal L) \\ &=y\alpha''y^{-1}+O(t^{-3/2}\mathcal L) \\ &=y\alpha_{-1}''y^{-1}+O(t^{-(1+\varepsilon)/2}\mathcal L) \end{align} thus, using $\partial\alpha''=\overline{\partial}\alpha'=0$, \begin{equation} d\left(z\alpha''z^{-1}\right)=O(t^{-3/2}\mathcal L),\qquad d\left(z^{*-1}\alpha'z^{*}\right)=O(t^{-3/2}\mathcal L) \end{equation} and \begin{equation} \left(z^{*-1}\alpha'z^{*}+z\alpha''z^{-1}\right)^2=\left(y^{*-1}\alpha_{-1}'y^{*}+y\alpha_{-1}''y^{-1}\right)^2+O(t^{-(1+\varepsilon)/2}\mathcal L). \end{equation} Combining this we get \begin{align} -i\Lambda \left(d(z\cdot \alpha)+(z\cdot \alpha)^2\right) &=(P-1)w-i\Lambda\left((y\cdot\alpha_{-1})^2\right)+O(t^{-(1+\varepsilon)/2}\mathcal L) \\ &= -i\Lambda P\left((y\cdot\alpha_{-1})^2\right)+O(t^{-(1+\varepsilon)/2}\mathcal L) \end{align} which shows \eqref{z_eqn}. \subsubsection*{Iterative procedure} Suppose $A$ is a finite--dimensional lozenge algebra and $\alpha''\in A^{0,1}$ such that $\mathcal M(A,\alpha'')$ is semistable of phase $0$.
Let \begin{equation} 0=p_0<p_1<\ldots<p_n=1 \end{equation} be the iterated weight filtration of $\mathcal M(A,\alpha'')$ labeled by elements $\beta_1<\ldots<\beta_n$ in $\RR\log t\oplus\RR\log\log t\oplus\ldots$. The procedure described in this subsection gives a finite sequence of lozenge algebras $A,A_\diamond,\ldots$ and gauge transformations $g(t)\in A^0$ for $t\gg 0$ which give a solution of the flow \eqref{flow_abstract_gauge} up to terms in $L^1$, thus an asymptotic solution by Proposition~\ref{prop_asym_criterion}. In this process a fixed gauge transformation is applied to $\alpha''$ to ensure the harmonicity properties. Moreover, \begin{equation} \log g(t)=\frac{1}{2}\sum_{k=1}^n\beta_k(p_k-p_{k-1})+O(1) \end{equation} i.e. $\log g(t)$ is, up to bounded terms, diagonal with entries which are linear combinations of iterated logarithms. \section{Flow on Hermitian vector bundles} \label{sec_hym} In this section we explore the consequences of our theory in the setting of holomorphic bundles over a compact K\"ahler manifold $X$. While initially $X$ can be of arbitrary dimension, we later restrict to the case of Riemann surfaces. \subsection{Slope-semistable coherent sheaves} We let $X$ be a compact K\"ahler manifold and $\omega$ its positive $(1,1)$-form. The \textit{degree} of a coherent sheaf $E$ on $X$ is by definition \begin{equation} \deg(E):=\int_Xc_1(E)\wedge \omega^{n-1} \end{equation} and evidently depends only on the K\"ahler class $[\omega]\in H^2(X;\RR)$. Consider \begin{equation} Z(E):=\mathrm{rk}(E)+\deg(E)i\in\CC \end{equation} then $Z(E)=0$ if the support of $E$ has codimension at least $2$. As in \cite{Meinhardt2014}, let $\mathrm{Coh}^2(X)\subset \mathrm{Coh}(X)$ denote the full subcategory of sheaves whose support has codimension $\geq 2$, and \begin{equation} \mathrm{Coh}_{(1)}(X)=\mathrm{Coh}(X)/\mathrm{Coh}^2(X) \end{equation} the quotient abelian category (localization).
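Slope comparisons via the phase of the central charge $Z(E)=\mathrm{rk}(E)+\deg(E)i$ amount to elementary arithmetic; the following toy (illustrative numbers only, not tied to any particular sheaf) uses the convention that the phase lies in $(-\frac{\pi}{2},\frac{\pi}{2}]$ for nonnegative rank:

```python
import math

# phase = Arg(Z(E)) for Z(E) = rk + i*deg; atan2 gives (-pi/2, pi/2)
# for rk > 0 and pi/2 for torsion-type objects with rk = 0, deg > 0.
def phase(rk, deg):
    return math.atan2(deg, rk)

# Hypothetical numbers: E with (rk, deg) = (4, 2); a subobject F with (2, 1)
# has the same phase (same slope), while F' with (2, 3) would destabilize E.
assert math.isclose(phase(2, 1), phase(4, 2))
assert phase(2, 3) > phase(4, 2)
```

Comparing phases is equivalent to comparing slopes $\deg/\mathrm{rk}$ when all ranks are positive, since $\arctan$ is monotone.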
The category $\mathrm{Coh}_{(1)}(X)$ behaves much like $\mathrm{Coh}(Y)$ for a curve $Y$ in that it has cohomological dimension $\leq 1$. We refer to Meinhardt--Partsch~\cite{Meinhardt2014} for details. Fix $\phi\in (-\frac{\pi}{2},\frac{\pi}{2}]$ and let $\mathrm{Coh}_{(1)}^\phi(X)\subset \mathrm{Coh}_{(1)}(X)$ be the full subcategory of semistable objects $E$ of phase $\mathrm{Arg}(Z(E))=\phi$. This is an abelian artinian category, hence Theorem~\ref{balanced_chain_thm} provides any $E\in\mathrm{Coh}_{(1)}^\phi(X)$ with a canonical $\RR^\infty$-filtration with polystable subquotients. Furthermore, for any $E\in\mathrm{Coh}_{(1)}(X)$ we get a canonical refinement of its Harder--Narasimhan filtration. \begin{remark} If $\dim_{\CC}X>1$ then slope stability is degenerate in the sense that it does not see anything in codimension at least $2$. Instead, suppose we have a stability condition in the sense of Bridgeland~\cite{bridgeland07} on $D^b(X)$, then for each phase $\phi\in\RR$ there is an artinian abelian category of semistable objects of phase $\phi$. Thus we get canonical refinements of the Harder--Narasimhan filtrations for this stability condition without needing to localize by a subcategory. It remains an open problem however to show existence of stability conditions on $D^b(X)$ for general $X$, and even more significantly to prove an analogue of the DUY theorem (Theorem~\ref{duy_thm}) for Bridgeland stability conditions. \end{remark} \subsubsection*{Hermitian--Einstein metrics} Textbook references for this material are Kobayashi~\cite{kobayashi87} and Siu~\cite{siu_book}. Suppose $E$ is a holomorphic vector bundle over $X$. For any choice of Hermitian metric, $h$, on $E$ there exists a unique unitary connection, $D$, whose $(0,1)$-component is the natural operator $\overline{\partial}_E$ determined by the holomorphic structure on $E$. With respect to a local holomorphic frame of $E$ we have \begin{equation} D=d+h^{-1}\partial h.
\end{equation} The curvature of $D$ is given by \begin{equation} F=\overline{\partial}(h^{-1}\partial h)=h^{-1}(\overline{\partial}\partial h-\overline{\partial}hh^{-1}\partial h). \end{equation} Let $\Lambda$ be the operator on forms which is adjoint to $L(\alpha)=\omega\wedge\alpha$. The metric $h$ is \textit{Hermitian--Einstein} if \begin{equation} \Lambda F=\lambda \end{equation} for some constant $\lambda$. Since $\mathrm{tr}\left(\frac{i}{2\pi}F\right)$ represents the first Chern class we necessarily have \begin{equation}\label{top_lambda} \lambda=-\frac{2n\pi\deg(E)}{\mathrm{rk}(E)\int_X\omega^n}i \end{equation} where $n=\dim_{\CC}X$. The following fundamental theorem is due to Narasimhan--Seshadri for $X$ a Riemann surface \cite{narasimhan_seshadri}, Donaldson for $X$ an algebraic surface \cite{donaldson85} and Uhlenbeck--Yau for general K\"ahler $X$ \cite{uhlenbeck_yau}. \begin{thm}\label{duy_thm} A holomorphic vector bundle admits a Hermitian--Einstein metric if and only if it is slope-polystable. \end{thm} Donaldson's approach to proving existence of Hermitian--Einstein metrics is to start with an arbitrary metric and follow the nonlinear heat-type flow \begin{equation}\label{donaldson_flow} h^{-1}\partial_th=-2i(\Lambda F-\lambda) \end{equation} whose solution is unique for given initial condition and exists for all positive time. \subsection{Monotonicity and related properties} The flow \eqref{donaldson_flow} is homogeneous in the sense that if $h$ is a solution for $t\geq 0$ and $f:[0,+\infty)\to\RR$ a smooth function, then $g:=e^{f(t)}h$ gives the same compatible connections and thus satisfies \begin{equation} g^{-1}\partial_tg=-2i(\Lambda F-\lambda)+\frac{df}{dt}. \end{equation} The Hermitian metrics on a bundle $E$ are partially ordered by the pointwise comparison \begin{equation} g\leq h :\Leftrightarrow g(v,v)\leq h(v,v)\qquad\text{for all }v\in E. \end{equation} The flow preserves this partial order, as in the case of quiver representations.
\begin{prop} \label{prop_mono_hym} Let $g_t,h_t$ be solutions of \eqref{donaldson_flow} for $t\geq 0$ with $g_0\leq h_0$, then $g_t\leq h_t$ for all $t\geq 0$. \end{prop} \begin{proof} It suffices to show that for arbitrary $\varepsilon>0$ we have $k_t:=e^{-\varepsilon t}g_t\leq h_t$ for $t\in[0,\infty)$. Furthermore, since the set of $t\in[0,\infty)$ where $k_t\leq h_t$ is closed and contains $0$, the claim follows from the existence of a $\delta>0$ such that $k_t\leq h_t$ for $t\in[0,\delta)$. Let $S(E)$ denote the unit sphere bundle inside $E$ with respect to the metric $h_0$. We claim that each $v\in S(E)$ has a neighborhood $U\subset S(E)$ and $\delta>0$ such that $k_t(w,w)\leq h_t(w,w)$ for $w\in U$ and $t\in[0,\delta)$. If $k_0(v,v)<h_0(v,v)$ this is clear, so suppose $k_0(v,v)=h_0(v,v)=1$. Since $g,h$ are solutions of the flow we have \begin{gather} k^{-1}\partial_tk=-2i\left(\Lambda k^{-1}(\overline{\partial}\partial k-\overline{\partial}kk^{-1}\partial k)-\lambda\right)-\varepsilon \\ h^{-1}\partial_th=-2i\left(\Lambda h^{-1}(\overline{\partial}\partial h-\overline{\partial}hh^{-1}\partial h)-\lambda\right). \end{gather} For suitable choice of local holomorphic frame of $E$ we may assume that $\partial h_0=0$ at the basepoint, $x\in X$, of $v$ (see \cite{kobayashi87}, Proposition 1.4.20). Since $2i\Lambda\overline{\partial}\partial=\Delta$ is the geometer's Laplacian on functions we get \begin{align}\label{mono_main_calc} \left.\frac{d}{dt}\right\vert_{t=0}&\left(h_t(v,v)-k_t(v,v)\right)= \\ &=-\Delta\left(h_0(v,v)-k_0(v,v)\right)-2i\left(\Lambda(\overline{\partial}k_0k_0^{-1}\partial k_0)\right)(v,v)+\varepsilon \geq \varepsilon \nonumber \end{align} where we have used that $-i\Lambda\overline{\varphi}\wedge\varphi\geq 0$ for a $(1,0)$-form $\varphi$. This shows existence of the desired neighborhood $U$ and $\delta>0$. The proposition then follows from compactness of $S(E)$.
\end{proof} \begin{coro} Let $g_t,h_t$ be solutions of \eqref{donaldson_flow} for $t\geq 0$. Then there is a constant $C\geq 1$ such that \begin{equation} \frac{1}{C}g_t\leq h_t\leq Cg_t \end{equation} for $t\geq 0$. \end{coro} \begin{proof} Since $X$ is compact we can find $C\geq 1$ such that $C^{-1}g_0\leq h_0\leq Cg_0$. Apply Proposition~\ref{prop_mono_hym}. \end{proof} We call $h$ an \textbf{asymptotic solution} of \eqref{donaldson_flow} if for some (hence any) exact solution $k$ there is a constant $C\geq 1$ such that \begin{equation} \frac{1}{C}k_t\leq h_t\leq Ck_t \end{equation} for all $t\gg 0$. An easily verifiable sufficient criterion for recognizing asymptotic solutions will be given below. One may rewrite the flow as taking place in the gauge group instead of the space of metrics. In this case there is a fixed reference metric $H$ on $E$ so that if $g$ is a smooth section of $GL(E)$ then $h(v,w):=H(gv,gw)$ defines another metric on $E$, and conversely $g$ is determined by $h$ up to unitary transformations. In terms of $g$ instead of $h$, equation~\eqref{donaldson_flow} becomes \begin{equation}\label{gauge_flow} \frac{1}{2}\left(\partial_tgg^{-1}+\left(\partial_tgg^{-1}\right)^*\right)=-i\left(\Lambda F-\lambda\right) \end{equation} where $F$ is the curvature of the metric connection \begin{equation} g\circ\overline{\partial}_E\circ g^{-1}+g^{*-1}\circ\partial_E\circ g^{*}. \end{equation} \begin{prop}\label{prop_asym_criterion_hym} Suppose $g$ satisfies \eqref{gauge_flow} up to an error term $s$, i.e. \begin{equation} \frac{1}{2}\left(\partial_tgg^{-1}+\left(\partial_tgg^{-1}\right)^*\right)=-i\left(\Lambda F-\lambda\right)+s \end{equation} where $s$ is a smooth $t$-dependent self-adjoint section of $\mathrm{End}(E)$, and furthermore \begin{equation} -f\leq s\leq f \end{equation} for some smooth $L^1$ function $f:[0,\infty)\to[0,\infty)$. Then the corresponding $t$-dependent metric, $h:=H(g\_,g\_)$, is an asymptotic solution of \eqref{donaldson_flow}. 
\end{prop} \begin{proof} Let $k$ be an exact solution of \eqref{donaldson_flow}, then \begin{equation} k^{\pm}_t:=\exp\left(\pm\int_0^tf\right)k_t \end{equation} satisfy \eqref{donaldson_flow} up to an error term $\pm f$, respectively. Furthermore, if $C:=\exp\left(\int_0^\infty f\right)$ then $k^+\leq Ck$ and $C^{-1}k\leq k^-$. Fix the initial condition for $k$ by $k_0=h_0=H(g_0\_,g_0\_)$. We will show that \begin{equation} k^-_t\leq h_t\leq k^+_t \end{equation} for $t\geq 0$, which implies the proposition. The proof is a slight modification of the one for monotonicity, Proposition~\ref{prop_mono_hym}. The family of metrics $h$ satisfies \eqref{donaldson_flow} up to an error term $g^{-1}sg$. In the main calculation \eqref{mono_main_calc} a new term \begin{equation} f(0)k_0^+(v,v)-H(g_0v,s(0)g_0v)=H(g_0v,(f-s)(0)g_0v)\geq 0 \end{equation} appears. Since it is non-negative, the rest of the argument still works. \end{proof} \subsection{Asymptotic solutions} We restrict to the case $\dim_{\CC}(X)=1$. Let $E$ be a holomorphic bundle over $X$ with Harder--Narasimhan filtration \begin{equation} 0=E_0\subset E_1\subset \ldots\subset E_m=E \end{equation} which means that each $S_k:=E_k/E_{k-1}$ is semistable and the slopes $\mu(S_k)=\deg(S_k)/\mathrm{rk}(S_k)$ satisfy \begin{equation} \mu(S_1)>\mu(S_2)>\ldots>\mu(S_m). \end{equation} \subsubsection*{Case without refinement} To illustrate the strategy in a simple special case where the theory of Section~\ref{sec_lozenge} is not needed, let us assume first that each $S_k$ is in fact polystable, i.e. a direct sum of stable bundles. Then by the Narasimhan--Seshadri theorem ($n=1$ case of Theorem~\ref{duy_thm}) the associated graded bundle \begin{equation} S:=\bigoplus_{k=1}^m S_k \end{equation} admits a Hermitian metric so that the associated connection $D$ has constant central curvature $F_S$, or equivalently is a critical point of the Yang--Mills functional $\|F\|^2$. We assume such a metric is chosen on $S$. 
The holomorphic bundle $E$ is isomorphic to $S$ with modified complex structure $\overline{\partial}_S+\alpha$ for some $(0,1)$-form $\alpha$ with values in $\mathrm{End}(S)$ which is strictly block upper--triangular with respect to the direct sum decomposition $S:=\bigoplus S_k$. An asymptotic solution of \eqref{gauge_flow} is simply \begin{equation}\label{HN_only_solution} g_t=e^{C(\mu-\mu_S)t} \end{equation} where $C:=2\pi/\int_X\omega$, $\mu:=\mu(E)$, and $\mu_S$ acts by multiplication by $\mu_k$ on $S_k$. We need to check that this satisfies \eqref{gauge_flow} up to terms in $L^1$. An element $g$ of the gauge group (smooth automorphism of $S$) acts on $\alpha$ and $\alpha^*$ by \begin{equation}\label{gauge_action} g\cdot\alpha:=g\alpha g^{-1}-\overline{\partial}_Sgg^{-1},\qquad g\cdot\alpha^*:=g^{*-1}\alpha^*g^*+g^{*-1}\partial_Sg^{*} \end{equation} and thus the connection $\overline{\partial}_S+g\cdot\alpha+\partial_S-g\cdot\alpha^*$ has curvature \begin{equation} F=F_S+\overline{\partial}_S(g\cdot\alpha)-\partial_S(g\cdot\alpha^*)-g\cdot\alpha\wedge g\cdot\alpha^*-g\cdot\alpha^*\wedge g\cdot\alpha. \end{equation} If $g$ is as in \eqref{HN_only_solution} then $\partial_S g=\overline{\partial}_S g=0$ and we claim that $g\alpha g^{-1}$ is exponentially decaying. Indeed, the $(j,k)$ block of $g\alpha g^{-1}$ is \begin{equation} e^{C(\mu-\mu_j)t}\alpha_{jk}e^{-C(\mu-\mu_k)t}=e^{C(\mu_k-\mu_j)t}\alpha_{jk} \end{equation} but $\alpha_{jk}=0$ for $j\geq k$ and $\mu_k-\mu_j<0$ for $j<k$ by assumption. Thus \begin{equation} F=F_S+(\textit{exponentially decaying terms}) \end{equation} and by assumption on the metric on $S$ we have $-i\Lambda F_S=-C\mu_S$. Since $\partial_tgg^{-1}=C(\mu-\mu_S)$ and $i\lambda=C\mu$ (see \eqref{top_lambda}) we find that $g$ solves \eqref{gauge_flow} up to exponentially decaying terms. By Proposition~\ref{prop_asym_criterion_hym} we see that $g$ is an asymptotic solution. This proves Theorem~\ref{thm_hym_asymp} in this special case. 
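The block computation above can be checked numerically. The following sketch (an illustration, not part of the proof) takes a strictly upper-triangular matrix in place of $\alpha$, one stable summand per slope, and verifies that conjugation by the diagonal matrix $g_t=e^{C(\mu-\mu_S)t}$ rescales the $(j,k)$ entry by $e^{C(\mu_k-\mu_j)t}$, which decays since $\mu_j>\mu_k$ for $j<k$:

```python
import numpy as np

# C, t, slopes mu_1 > mu_2 > mu_3 are illustrative values; one 1x1 block
# per summand S_k, so matrix entries play the role of the (j,k) blocks.
C, t = 1.0, 5.0
mu_S = np.array([2.0, 1.0, 0.5])       # strictly decreasing slopes
mu = mu_S.mean()                        # stand-in for the slope of E

alpha = np.triu(np.ones((3, 3)), k=1)   # strictly upper-triangular "alpha"
g = np.diag(np.exp(C * (mu - mu_S) * t))
conj = g @ alpha @ np.linalg.inv(g)     # g alpha g^{-1}

# entrywise: exp(C(mu - mu_j)t) * alpha_jk * exp(-C(mu - mu_k)t)
expected = np.exp(C * (mu_S[None, :] - mu_S[:, None]) * t) * alpha
assert np.allclose(conj, expected)
# every nonzero entry of alpha is damped, i.e. exponentially decaying in t
assert (conj[alpha > 0] < 1).all()
```

Note that the slope $\mu$ of $E$ drops out of the conjugated entries, exactly as in the display above.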
\subsubsection*{Case with refinement} We start by labeling the Harder--Narasimhan filtration by the exponential growth rates \begin{equation} \beta_k:=2\pi\left(\int_X\omega\right)^{-1}(\mu(E)-\mu_k) \end{equation} instead of the slopes $\mu_k=\mu(E_k/E_{k-1})$. For each semistable component $S_k=E_k/E_{k-1}$ there is an artinian lattice of subbundles of $S_k$ of the same slope $\mu_k$ which has an $\RR$-valued polarization given by rank. This is up to a constant factor (which does not affect the weight filtration) the same as the polarization coming from the restriction of $Z(E)=\mathrm{rk}(E)+\deg(E)i$, since slope is fixed. Thus we get a canonical iterated weight filtration \begin{equation} 0=S_{k,0}\subset S_{k,1}\subset\ldots\subset S_{k,r_k}=S_k \end{equation} of $S_k$ indexed by elements $\beta_{k,1}<\ldots<\beta_{k,r_k}$ in $\RR\log t\oplus\RR\log\log t\oplus\ldots$ and whose subquotients are direct sums of stable bundles of slope $\mu_k$. These filtrations of $S_1,\ldots,S_m$ together provide a refinement of the Harder--Narasimhan filtration indexed by elements \begin{equation} \beta_1t+\beta_{1,1}<\beta_1t+\beta_{1,2}<\ldots<\beta_1t+\beta_{1,r_1}<\beta_2t+\beta_{2,1}<\ldots<\beta_mt+\beta_{m,r_m} \end{equation} in $\RR t\oplus\RR\log t\oplus\RR\log\log t\oplus\ldots$. For each semistable bundle, $S_k$, there is an associated graded holomorphic bundle, $T_k$, of the iterated weight filtration which is a direct sum of stable bundles of the same slope and thus admits a metric so that the associated connection has constant central curvature \begin{equation} F=-\frac{2\pi i\omega}{\int_X\omega}. \end{equation} Fixing such a metric, the algebra of endomorphism valued forms \begin{equation} \mathcal A^{\bullet}(X,\mathrm{End}(T_k)) \end{equation} is a lozenge algebra with curvature $\theta=F$. 
The bundles $S_k$ and $T_k$ have the same underlying complex vector bundle, but the holomorphic structures differ by \begin{equation} \alpha_k'':=\overline{\partial}_{T_k}-\overline{\partial}_{S_k}\in\mathcal A^{0,1}(X,\mathrm{End}(T_k)) \end{equation} and $\alpha_k''$ is strictly upper--triangular with respect to the grading on $T_k$ coming from the iterated weight filtration on $S_k$. Let $g_k(t)\in \mathcal A^0(X,GL(T_k))$ be a solution of the flow \begin{equation} \frac{1}{2}\left(\dot{g}_kg_k^{-1}+\left(\dot{g}_kg_k^{-1}\right)^*\right)=-i\Lambda\left(d\left(g_k\cdot\alpha_k\right)+\left(g_k\cdot\alpha_k\right)^2\right) \end{equation} where $\alpha_k=\alpha_k''-(\alpha_k'')^*$. (Note that the terms $\Lambda\theta$ and $\lambda$ cancel, since $\Lambda\theta=\lambda$.) The procedure described in Section~\ref{sec_lozenge} gives an asymptotic solution which shows that for the trajectory of metrics $h_k:=(g_k)^*g_k$ one has asymptotic growth \begin{equation} \left\|\log\left(h_k(t)\mid S_{k,j}\right)\right\|=\beta_{k,j}+O(1). \end{equation} Consider $T:=\bigoplus_k T_k$ which is the associated graded of the total filtration on $E$. In particular, $T$ and $E$ have the same underlying complex vector bundle and the holomorphic structures differ by \begin{equation} \alpha'':=\overline{\partial}_E-\overline{\partial}_T\in\mathcal A^{0,1}(X,\mathrm{End}(T)) \end{equation} which is strictly upper triangular with respect to the grading on $T$ coming from the total filtration on $E$. Moreover, the $T_k$ diagonal block of $\alpha$ is just $\alpha_k$. As before one shows that \begin{equation} g(t):=\mathrm{diag}\left(e^{\beta_1t}g_1(t),\ldots,e^{\beta_mt}g_m(t)\right)\qquad \in\mathcal A^0(X,GL(T)) \end{equation} solves \begin{equation} \frac{1}{2}\left(\dot{g}g^{-1}+\left(\dot{g}g^{-1}\right)^*\right)=-i\left(\Lambda\left(\theta+d\left(g\cdot\alpha\right)+\left(g\cdot\alpha\right)^2\right)-\lambda\right) \end{equation} where $\alpha=\alpha''-(\alpha'')^*$, up to terms in $L^1$. 
If we set $h:=g^*g$ then \begin{equation} \left\|\log\left(h(t)\mid S_{k,j}\right)\right\|=2\beta_kt+\beta_{k,j}+O(1) \end{equation} where we consider $S_{k,j}$ also as a holomorphic subbundle of $E$ by taking the preimage under $E_k\to S_k$. This completes the proof of Theorem~\ref{thm_hym_asymp}. \section{Modified curve shortening flow on a cylinder} \label{sec_curve_shortening} On a Riemann surface with a non-vanishing holomorphic 1-form $\Omega$ and symplectic form $\omega$ there is a natural curve shortening flow \begin{equation}\label{arg_flow} \dot{c}=-d\mathrm{Arg}\left(\Omega|_c\right)\llcorner\omega^{-1} \end{equation} which decreases length measured with respect to $|\Omega|^2$. By definition, the flow deforms the curve by (local) Hamiltonian isotopies, provided we restrict to those curves which admit a global choice of $\mathrm{Arg}\left(\Omega|_c\right)$. Motivated by the theory of stability in partially wrapped Fukaya categories of Riemann surfaces~\cite{hkk} we consider the following special case. The flat surface is the cylinder \begin{equation} X=\CC/L\ZZ, \qquad \Omega=dz \end{equation} with circumference $L>0$. The cylinder is punctured at points $x_0<x_1<\ldots<x_{n-1}\in [0,L)$. The symplectic form is \begin{equation} \omega=\frac{dx\wedge dy}{\rho} \end{equation} where $\rho$ should be chosen so that near $x_i$ it is of the form \begin{equation} \rho(x,y)=(x-x_i)^2+y^2 \end{equation} for simplicity, and $\rho$ vanishes at no other points. This means that symplectically there is a half-infinite cylinder around each puncture $x_i$. Assume the curve is a graph \begin{equation} x\mapsto (x,f(x)), \qquad f(x_i)\neq 0 \end{equation} where we can consider $f$ as an $L$-periodic function. Also assume that not all $f(x_i)$ have the same sign, i.e. the curve passes both over and under some punctures (see Figure~\ref{plane_curve}). 
Approximating $\mathrm{Arg}$ by the slope, the flow \eqref{arg_flow} becomes \begin{equation}\label{flow_pde} \partial_tf=\rho(x,f)\partial_{xx}f \end{equation} which is the PDE we want to study. \begin{figure} \center \begin{tikzpicture}[scale=2] \draw (0,0) to (5,0); \draw[dashed] (0,-.5) to (0,.5); \draw[dashed] (5,-.5) to (5,.5); \foreach \x in {0, 1, 2.2, 2.8, 4,5} \draw[fill] (\x,0) circle [radius=0.02]; \draw [thick] (0,-.25) to [out=0,in=180] (1,.3) to [out=0,in=180] (2.5,-.35) to [out=0,in=180] (4,.2) to [out=0,in=180] (5,-.25); \path[<->] (0,-.4) edge node[below]{$m_1$} (1,-.4); \path[<->] (1,-.4) edge node[below]{$m_2$} (2.2,-.4); \path[<->] (2.2,-.4) edge node[below]{$m_3$} (2.8,-.4); \path[<->] (2.8,-.4) edge node[below]{$m_4$} (4,-.4); \path[<->] (4,-.4) edge node[below]{$m_5$} (5,-.4); \end{tikzpicture} \caption{Curve on punctured flat cylinder.} \label{plane_curve} \end{figure} We will give a heuristic argument to show that asymptotically, on a center manifold, this PDE reduces to a system of ODEs in variables $y_i=|f(x_i)|/\pi$, $i\in\ZZ/n$, of the form \begin{equation}\label{system_y} \frac{\dot{y}_i}{y_i}=\frac{\epsilon_{i-1}\epsilon_i}{m_i}y_{i-1}-\left(\frac{1}{m_i}+\frac{1}{m_{i+1}}\right)y_i+\frac{\epsilon_{i}\epsilon_{i+1}}{m_{i+1}}y_{i+1} \end{equation} where $\epsilon_i=\mathrm{sign}(f(x_i))$ and $m_i>0$ is the length of the segment from $x_{i-1}$ to $x_i$. This is a special case of the system constructed from a directed acyclic graph in~\cite{hkkp_semistability}. The calculations below provide good evidence for the following conjecture. \begin{conj} The PDE \eqref{flow_pde} has an $n$-dimensional center manifold on which the flow is approximated by the system \eqref{system_y} in the sense that error terms of solutions are bounded in coordinates $\log(y_i)$. \end{conj} The starting point is the following ansatz for $f$. 
Assume that near $x_i$, $f$ is approximately of the form \begin{equation}\label{ansatz_pt} f(x)\sim a_{i,0}+a_{i,1}(x-x_i)+a_{i,2}\varphi\left(\frac{x-x_i}{a_{i,0}}\right) \end{equation} and on the segment between $x_i$ and $x_{i+1}$ we have \begin{equation}\label{ansatz_seg} f(x)\sim b_{i,0}(x-x_i)+b_{i,1}(x_{i+1}-x)+b_{i,2}\chi(x)+b_{i,3}\psi(x). \end{equation} The function $\varphi$ is the unique solution to \begin{equation} (x^2+1)\varphi''=1,\qquad\varphi(0)=\varphi'(0)=0 \end{equation} which is \begin{equation} \varphi(x)=x\arctan x-\frac{1}{2}\log(1+x^2)\sim \frac{\pi}{2}|x|-\log|x| \end{equation} where the approximation is good for $|x|\gg 0$. The functions $\chi,\psi$ are chosen so that \begin{equation} \rho(x,0)\chi''=x-x_i,\qquad \rho(x,0)\psi''=x_{i+1}-x. \end{equation} Compatibility of \eqref{ansatz_pt} and \eqref{ansatz_seg} gives the following set of equations. \begin{center} \begin{tabular}{c|c|c} term & left of $x_i$ & right of $x_i$ \\ \hline $1$ & $m_ib_{i-1,0}=a_{i,0}$ & $m_{i+1}b_{i,1}=a_{i,0}$ \\ $x-x_i$ & $b_{i-1,0}-b_{i-1,1}=a_{i,1}-\frac{\pi a_{i,2}}{2|a_{i,0}|}$ & $b_{i,0}-b_{i,1}=a_{i,1}+\frac{\pi a_{i,2}}{2|a_{i,0}|}$ \\ $\log|x-x_i|$ & $m_ib_{i-1,2}=a_{i,2}$ & $m_{i+1}b_{i,3}=a_{i,2} $ \end{tabular} \end{center} Furthermore, plugging the ansatz into the equation \eqref{flow_pde} implies \begin{equation} \dot{a}_{i,0}=a_{i,2},\qquad \dot{b}_{i,0}=b_{i,2},\qquad \dot{b}_{i,1}=b_{i,3}. \end{equation} Combining the above gives \begin{align} \pi\frac{\dot{a}_{i,0}}{|a_{i,0}|}&=b_{i-1,1}-b_{i-1,0}-b_{i,1}+b_{i,0} \\ &=\frac{a_{i-1,0}}{m_i}-\frac{a_{i,0}}{m_i}-\frac{a_{i,0}}{m_{i+1}}+\frac{a_{i+1,0}}{m_{i+1}} \end{align} which is \eqref{system_y} with $y_i=\epsilon_ia_{i,0}/\pi$. The signs $\epsilon_i$ determine an orientation of the graph which is a single cycle with $n$ edges. Namely, vertices correspond to segments $[x_{i-1},x_i]$ and arrows correspond to punctures $x_i$ with orientation given by $\epsilon_i$, i.e. 
depending on whether the curve passes over or under the puncture. By assumption not all arrows are oriented the same way. More conceptually, the abelian category of representations of this directed graph appears as a full subcategory of the partially wrapped Fukaya category of the punctured cylinder, containing the object corresponding to the curve $(x,f(x))$. The directed graph gives rise to a modular (in fact distributive) lattice $\mathcal M$ whose elements are subsets $S$ of the set of vertices so that no arrows lead out of $S$. According to the theory developed in the previous paper~\cite{hkkp_semistability}, the asymptotics of solutions to \eqref{system_y} are determined by the iterated weight filtration on the lattice $\mathcal M$. Since the lattice comes from a directed graph, the simpler definition of a \textit{weight grading} may also be used. The first case with wall--crossing is when $n=5$ with three arrows pointing one way and two the other, which corresponds to Figure~\ref{plane_curve}. \begin{center} \begin{tikzcd} \underset{m_1}{\bullet} \arrow{r}\arrow[bend left]{rrrr} & \underset{m_2}{\bullet} & \underset{m_3}{\bullet} \arrow{l} & \underset{m_4}{\bullet} \arrow{l} \arrow{r} & \underset{m_5}{\bullet} \end{tikzcd} \end{center} There are three chambers in the space $\RR_{>0}^5$ of parameters $m_i$. The two disjoint walls (where $\log\log t$ appears in the asymptotics) are given by equations $D_1=0$, $D_2=0$ where \begin{align} D_1&=m_1m_4+m_3m_5+2m_4m_5-m_1m_2\\ D_2&=m_2m_5+m_1m_3+2m_1m_2-m_4m_5. \end{align} The weight grading on the graph looks as follows in each of the chambers. Vertical position of the vertices indicates weight. 
\begin{center} \begin{tabular}{c|c|c} $D_1\leq 0$ & $D_1,D_2\geq 0$ & $D_2\leq 0$ \\ \hline \begin{tikzpicture} \node (1) at (0,1) {$\bullet$}; \node (2) at (1.5,-1) {$\bullet$}; \node (3) at (1.5,0) {$\bullet$}; \node (4) at (1.5,1) {$\bullet$}; \node (5) at (0,0) {$\bullet$}; \draw[dashed] (1) edge (2); \draw (2) edge (3); \draw (3) edge (4); \draw (4) edge (5); \draw (5) edge (1); \end{tikzpicture} & \begin{tikzpicture} \node (1) at (0,.5) {$\bullet$}; \node (2) at (1.5,-1) {$\bullet$}; \node (3) at (1.5,0) {$\bullet$}; \node (4) at (1.5,1) {$\bullet$}; \node (5) at (0,-.5) {$\bullet$}; \draw[dashed] (1) edge (2); \draw (2) edge (3); \draw (3) edge (4); \draw[dashed] (4) edge (5); \draw (5) edge (1); \end{tikzpicture} & \begin{tikzpicture} \node (1) at (0,0) {$\bullet$}; \node (2) at (1.5,-1) {$\bullet$}; \node (3) at (1.5,0) {$\bullet$}; \node (4) at (1.5,1) {$\bullet$}; \node (5) at (0,-1) {$\bullet$}; \draw (1) edge (2); \draw (2) edge (3); \draw (3) edge (4); \draw[dashed] (4) edge (5); \draw (5) edge (1); \end{tikzpicture} \end{tabular} \end{center} One can make a variant of this example where the curve is not a closed loop but an embedded path with fixed endpoints at punctures. The quiver in this case is of type $A_n$ instead of extended $A_n$. \bibliographystyle{plain}
\section{The Learning Algorithm} \label{algorithm} Having discussed the modelling of the control sequences and the RL problem, we will now introduce the actual learning algorithm we employ. As we have seen above, we cannot perform direct optimization of $R(c)$ as we cannot access $\nabla R(c)$. However, it has long been known that it is possible to approximate $\nabla_{\Theta} \mathbf{E}_c[R(c)]$ since \begin{align} \nabla_{\Theta} \mathbf{E}_c[R(c)] = \mathbf{E}_c[\nabla_{\Theta} \ln p_{\Theta}(c)\, R(c)] \end{align} where $\mathbf{E}_c$ is the expectation over the sequence space and $p_{\Theta}(c)$ is the stochastic policy of the agent parameterized by the weight vector $\Theta$, which in this work corresponds to an RNN\@. This insight is known as the likelihood ratio or REINFORCE~\cite{williams1992simple} trick and constitutes the basis of the policy gradient approach to reinforcement learning. From the physics point of view, the trick allows us to take the gradient of the average outcome of a given experiment with respect to the parameters of our stochastic controller and perform gradient-based optimization while being agnostic about the mechanisms behind the experiment, i.e.\ model-free. In a sense we thus have a way of taking a gradient through an experiment without the necessity to mathematically model every variable of influence and their interplay. From a different perspective, this approach simply corresponds to maximizing the likelihood of sequences that are weighted by their results, such that the agent has a higher incentive to maximize the likelihood of good sequences. The approach can be refined by replacing the weighting by the pure reward $R(c)$ with an approximation of the advantage $A(s,c) = Q(s,c) - V(s)$. This has been shown to improve convergence significantly; especially for continuous control problems, policy gradient methods outperform Q-learning algorithms~\cite{schulman2017proximal}. 
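The likelihood-ratio estimator can be illustrated with a toy categorical policy (a minimal sketch with an assumed three-sequence action space and known rewards, not the paper's LSTM controller): the score-function estimate of $\nabla_\Theta \mathbf{E}_c[R(c)]$ is the sample average of $\nabla_\Theta \ln p_\Theta(c)\, R(c)$, and for a softmax policy it can be compared against the exact gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy softmax policy over 3 one-step "sequences"; rewards assumed known.
theta = np.array([0.2, -0.1, 0.3])
R = np.array([0.0, 0.5, 1.0])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

p = softmax(theta)
samples = rng.choice(3, size=200_000, p=p)

# grad_theta log softmax(theta)[c] = e_c - p, so the REINFORCE estimate
# is the average of (e_c - p) * R(c) over sampled sequences c.
grads = np.eye(3)[samples] - p
estimate = (grads * R[samples, None]).mean(axis=0)

# Exact gradient of E[R] = sum_c p(c) R(c): p_j * (R_j - E[R]).
exact = p * (R - p @ R)
assert np.allclose(estimate, exact, atol=1e-2)
```

The agreement up to Monte Carlo noise is exactly what makes the approach model-free: only sampled sequences and their observed rewards enter the estimate.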
Despite such improvements, policy gradient approaches still suffer from slow convergence or catastrophically large updates, which has led to the development of improvements such as trust region policy optimization~\cite{schulman2015trust} (TRPO). These methods however make use of second-order information such as inverses of the Hessian or Fisher information matrix and hence are very difficult to apply in large parameter spaces which are common in the deep learning regime. The underlying idea of such improvements is to limit the magnitude of updates to $\Theta$ by imposing constraints on the difference between $p_{\Theta}$ and $p_{\Theta_{new}}$ in order to prevent catastrophic jumps out of optima and achieve better convergence behavior. In an effort to strike a balance between ease of application and leveraging the insights behind TRPO, recently a novel policy gradient scheme called proximal policy optimization~\cite{schulman2017proximal} (PPO) was introduced. One main novelty hereby lies in the introduced loss, which is for a general RL scenario given by \begin{align} L^{CLIP}(\Theta) = &\mathbf{E}_t[\min(r_t(\Theta)A_t, \\ &\text{clip}(r_t(\Theta),1-\epsilon, 1+\epsilon)A_t)] \end{align} where $\mathbf{E}_t$ and $A_t$ are the expectation over time steps and the advantage at time $t$ respectively, which both need to be approximated. The term $r_t$ is defined as the ratio of likelihoods \begin{align} r_t(\Theta) = \frac{p_{\Theta}(c_t|s_t)}{p_{\Theta_{old}}(c_t|s_t)} \end{align} of actions $c_t$ in states $s_t$ in our notation and we define $\text{clip}(a,b,c)=\text{min}(\text{max}(a,b),c)$. The distribution $p_{\Theta}(c_t|s_t)$ is a stochastic policy depending on parameters $\Theta$. Note that this generic formulation assumes multiple actions $c_t$ per episode and thus does not yet apply to the learning scenario discussed here. 
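The pointwise behavior of the clipped term can be sketched numerically (a minimal illustration of $L^{CLIP}$ per sample; the expectation over time steps and the advantage estimation are omitted):

```python
import numpy as np

def clip(a, b, c):
    """clip(a, b, c) = min(max(a, b), c), as defined in the text."""
    return np.minimum(np.maximum(a, b), c)

def l_clip(ratio, advantage, eps=0.2):
    """Per-sample clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    return np.minimum(ratio * advantage,
                      clip(ratio, 1 - eps, 1 + eps) * advantage)

# For positive advantage the objective stops growing once r > 1 + eps,
# for negative advantage it is capped once r < 1 - eps.
assert l_clip(np.array([1.5]), np.array([1.0]))[0] == 1.2
assert l_clip(np.array([0.5]), np.array([-1.0]))[0] == -0.8
```

Both assertions reflect the trust-region effect: the objective no longer rewards moving the likelihood ratio past the clipping boundary.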
The objective function poses a lower bound on the improvement induced by an update and hence establishes a trust region around $\Theta_{old}$. The hyperparameter $\epsilon$ controls the maximal improvement and thus the size of the trust region. Now, the basic algorithm is defined as follows: \begin{enumerate} \item Obtain new set of trajectories, i.e.\ sequences, $C$, by sequentially sampling from $p_{\Theta}(c_t|s_t)$. \item Optimize $L^{CLIP}$ over $C$ for $K$ iterations. \item Set $\Theta_{old} = \Theta$. \item Repeat until convergence. \end{enumerate} Note that there exists a straightforward generalization to the case of multiple agents but as we cannot reasonably assume in our application to have access to multiple identical experiments, we only consider the case of one agent here. The algorithm was shown to achieve state-of-the-art performance for several discrete and continuous control tasks, which makes it ideally suited for the problems tackled in this work. However, we will now introduce a few improvements tailored to our specific reinforcement learning problem as defined in the previous section which we will for the sake of brevity from now on refer to as memory proximal policy optimization (MPPO). Since in our problem we only consider episodes consisting of one action $c$, the objective becomes\begin{align} L_1^{CLIP}(\Theta) = &\mathbf{E}_c[\min(r(\Theta)A,\\ &\text{clip}(r(\Theta),1-\epsilon, 1+\epsilon)A)] \end{align} with \begin{align} r(\Theta, c) = \frac{p_{\Theta}(c)}{p_{\Theta_{old}}(c)} \end{align} and $p_{\Theta}(c)$ being parameterized by an LSTM, as discussed above. $A$ again denotes the advantage function. We have omitted the dependence on $c$ in $L_1$ for the sake of clarity. Since we know that in our problem setting it holds that $Q(c,s)=R(c)$, the advantage function becomes \begin{align} A(c) = R(c) - V(c). 
\end{align} It is worth noting that this implies that in our scenario there is no need to approximate the Q-function as we can access it directly. In fact, approximating the Q-function and hence $R(c)$ would be equivalent to solving the optimization problem as we could use the approximator to optimize over its input space to find good sequences. The quality of the approximation of $A(c)$ consequently only depends on the approximation of $V(c)$. While there exist many sophisticated ways of approximating the value function~\cite{schulman2017proximal, mnih2016asynchronous} in our case the optimal approximation is given by \begin{align} \hat{V}(c) = R(c^*) \end{align} where $c^*$ is the best sequence we have encountered so far. Since we do not know the best sequence and its corresponding reward (at best we know an upper bound), the reward of the best sequence found so far is the closest approximation we can make. The optimal approximation of the advantage $A(c)$ hence is given by \begin{align} \hat{A}(c) = R(c) - R(c^*). \end{align} Since we need to store $c^*$ to compute the advantage approximation and are generally interested in keeping the best solution, it is a natural idea to equip the agent with a memory $M$ of the best sequences found so far. We can then formulate a memory-enhanced version of the PPO algorithm: \begin{enumerate} \item Obtain new set of trajectories, i.e.\ sequences, $C$, by sampling from $p_{\Theta}(c)$. \item Update the memory of best sequences $M$. \item Optimize $L_1^{CLIP}$ over $C \cup M$ for $K$ iterations. \item Set $\Theta_{old} = \Theta$. \item Repeat until convergence. \end{enumerate} The memory sequences are treated as newly sampled sequences such that their weighting always is performed with respect to the current values of $\Theta_{old}$ and $\Theta$. This ensures compatibility with the policy gradient framework while the access to the best actions discovered so far leads to a better convergence behavior as we will see later. 
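The memory-enhanced outer loop can be sketched abstractly. In the following, `sample_sequences`, `reward` and `optimize` are hypothetical stand-ins for the LSTM policy, the (black-box) experiment and the $K$ optimization epochs on $L_1^{CLIP}$; only the bookkeeping of $M$ and the advantage estimate $\hat{A}(c)=R(c)-R(c^*)$ follow the text.

```python
def mppo(sample_sequences, reward, optimize, iters=100, mem_size=8):
    """Sketch of the MPPO outer loop with placeholder callables."""
    memory = []                                    # best sequences found so far
    for _ in range(iters):
        batch = sample_sequences()                 # 1. sample C from p_theta
        memory = sorted(memory + batch, key=reward,
                        reverse=True)[:mem_size]   # 2. update memory M
        r_star = reward(memory[0])                 # V-hat = R(c*)
        advantages = [(c, reward(c) - r_star) for c in batch + memory]
        optimize(advantages)                       # 3./4. K epochs on L1_CLIP,
    return memory[0]                               #        then theta_old <- theta

# toy usage: "sequences" are numbers and the reward is the number itself
batches = iter([[0.1, 0.7], [0.3, 0.9], [0.2, 0.5]])
best = mppo(lambda: next(batches), lambda c: c, lambda adv: None,
            iters=3, mem_size=2)
assert best == 0.9
```

The ratio `mem_size / len(batch)` corresponds to the new hyperparameter $|M|/|C|$ discussed below.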
Note that, under the previously introduced assumption, the best sequences share common structural properties. Maximizing the expected reward over all sequences $\mathbf{E}_c[R(c)]$ is thus equivalent to maximizing the expected reward over the sequences in the memory $\mathbf{E}_{c \in M}[R(c)]$ which ensures relevance and stability of the updates computed over $M$. This memory scheme furthermore is different from experience replay in Q-learning~\cite{mnih2015human} as only the best sequences are kept and reintroduced to the agent. The relation between $|C|$ and $|M|$ thereby is a new hyperparameter of the algorithm affecting the exploration-exploitation dynamics of the learning process. Another factor that has a significant impact on the exploration behavior is the value of the scaling or variance parameter of the probability distributions employed in continuous control tasks, such as for instance the standard deviation $\sigma$ of the univariate normal distribution or the covariance matrix $\Sigma$ in the multivariate case. It is clear that a large variance induces more exploration while a small variance corresponds to a more exploitation-oriented behavior. Over the course of training an agent to find a good policy it is hence reasonable to start with a larger variance and reduce it during the optimization until it reaches a defined minimal value. However, while the agent usually learns to predict the mean of the given distribution, the variance parameter is currently often treated as fixed or follows a predefined decay schedule which does not account for the randomness in the training process. Utilizing the sequence memory, we propose an improvement by introducing a dynamical adaptation scheme for the variance parameters depending on the improvement of the memory $M$. 
More concretely, we propose to maintain a window $W_t$ of the relative improvements of the average rewards in memory \begin{align} W_t = \left[ \frac{\overline{R(M_{t-l+1})} -\overline{R(M_{t-l})}}{\overline{R(M_{t-l})}},\cdots,\frac{\overline{R(M_t)} -\overline{R(M_{t-1})}}{\overline{R(M_{t-1})}} \right] \end{align} where $\overline{R(M_t)}$ denotes the average reward over the memory in iteration $t$ of the optimization and $l$ is the window length. At every $l$-th step in the optimization, we then compute a change parameter \begin{align} \alpha_t = 1 + \frac{\overline{W_{t-l}} - \overline{W_t}}{\overline{W_{t-l}}} \end{align} with $\overline{W_t}$ being the window average and multiply the variance parameters by it (possibly after clipping). Note that we assume here monotonic improvement of $M$ and $R \in [0,1]$. This scheme thus poses a dynamic adaptation of the variance parameters based on second-order information about the improvement of the average reward of $M$. It follows the intuition that if the improvement slows down, a decrease of the variance gives the agent more control over the sampled actions and allows for a more exploitation-oriented behavior. On the other hand, when the improvement accelerates, it appears reasonable to prevent too greedy a behavior by increasing the uncertainty in the predicted actions. The same scheme can furthermore also be applied to parameters such as $\epsilon$, which plays a similar role to the variance. In conclusion, extending the PPO training with a memory of the best perceived actions prevents good solutions of the control problem from being lost, gives the agent access to the best available advantage estimate, improves convergence and makes it possible to dynamically scale the variance parameters of the respective distributions from which actions are sampled. While we introduce this variant of the PPO algorithm for our specific application, we believe that it would generalize to other applications of reinforcement learning. 
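The adaptation step can be sketched as follows (function name, clipping bounds and the synthetic reward trace are illustrative assumptions; only the window and $\alpha_t$ formulas come from the text):

```python
import numpy as np

def adapt_sigma(sigma, mem_rewards, l, lo=0.05, hi=2.0):
    """Sketch of the proposed variance adaptation.  mem_rewards[t] is the
    average memory reward in iteration t; alpha compares the mean relative
    improvement over the last window of length l with the previous one."""
    r = np.asarray(mem_rewards, dtype=float)
    imp = np.diff(r) / r[:-1]                    # entries of the windows W
    w_prev, w_cur = imp[-2 * l:-l].mean(), imp[-l:].mean()
    alpha = 1 + (w_prev - w_cur) / w_prev        # change parameter alpha_t
    return float(np.clip(sigma * alpha, lo, hi)) # multiply, then clip

# monotonically improving rewards in [0, 1] with geometrically decaying gains
rewards = [1 - 0.5 ** (k + 1) for k in range(9)]
new_sigma = adapt_sigma(1.0, rewards, l=4)
# here the recent window improved less than the previous one, so alpha > 1
assert new_sigma > 1.0
```

Note that with the $\alpha_t$ formula as stated, a slowing improvement ($\overline{W_t} < \overline{W_{t-l}}$) yields $\alpha_t > 1$; the sketch implements the formula literally.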
\section{Conclusion and Future Work} \label{conclusion} In this work we have tried to introduce quantum physics and especially problems in (black-box or model-free) quantum information and quantum control to a broader audience in the machine learning community and showed how they can be successfully tackled with state-of-the-art reinforcement learning methods. To this end, we have given a brief introduction to quantum control and discussed different aspects of the application of reinforcement learning to it. We have argued that LSTMs are a good choice to model the sequences of control parameters arising in quantum control and shown how black-box quantum control gives rise to a particular reinforcement learning problem for whose optimization policy gradient methods are a natural choice. As a recent and successful variant of policy gradient algorithms, we have adapted the PPO scheme for our application and introduced the MPPO algorithm. We then went on to show how our general method for treating black-box quantum control can be easily combined with physical prior knowledge for two example scenarios and presented numerical results for a range of learning tasks arising in this context. These results showed that our method is able to achieve state-of-the-art performance in different tasks while being able to address problems of discrete and continuous control alike and provided evidence for the hypothesis that machine learning is a good choice for the automated optimization of parameters in experiments. This work can also be understood to some extent as a contribution to the debate about how much prior knowledge is necessary for machine learning algorithms to perform well in real-world tasks. During the course of this work, we have found that incorporating physical domain knowledge, such as known good rotation axes and angles, is a necessary precondition for the addressed continuous-domain problems to be solvable. 
Without this information a reinforcement learning agent would be required to at least implicitly learn about certain laws of physics to not be lost in the infinite action space of which only a negligibly small part results in good solutions. This clearly is out of scope for current models and algorithms without symbolic reasoning capacity and might remain so for some time especially when the data collected by the agent is very small compared to the search space. Finally, interesting directions of future work would be to apply the method to a real experiment and evaluate its performance there as well as to develop a set of benchmark problems in quantum control to compare the different already existing algorithms on neutral grounds. It would also be interesting to investigate which other problems of relevance yield reinforcement learning problems similarly structured to the formulation presented in this work. \subsection{Ground state transitions} \label{ground_states} Another scenario that was recently addressed in an anlysis of the characteristics of the optimization problem behind controlling systems out of equilibrium~\cite{bukov2017machine} is the transition between ground states of different Hamiltonians. The considered class of Hamiltonians was thereby defined to be the class of Ising Hamiltonians given by \begin{align*} H(J,g,h) &= J \sum_{i=1}^{L-1} I^{\otimes i-1} \otimes \sigma_x \otimes \sigma_x \otimes I^{\otimes L-(i+1)} \\ &+ g \sum_{i=1}^{L} I^{\otimes i-1} \otimes \sigma_z \otimes I^{\otimes L - i} \\ &+ h \sum_{i=1}^{L} I^{\otimes i-1} \otimes \sigma_x \otimes I^{\otimes L - i} \end{align*} where the $\sigma_{\{x,y,z\}}$ again denote the Pauli matrices and $L$ specifies the number of particles. In this setting we furthermore set $J=g=-1$, leaving $h$ as the only free parameter specifing the strength of the magnetic field represented by $\sigma_x$. 
From a mathematical perspective, the ground state $\ket{E_{min}(h)}$ of a given Hamiltonian $H(h)$ is then defined as the eigenvector of $H(h)$ corresponding to its lowest eigenvalue. In the considered scenario we now choose the initial and target states to be $\ket{\psi_i}=\ket{E_{min}(h_i)}$ and $\ket{\psi^*}=\ket{E_{min}(h^*)}$ respectively, where $h_i \neq h^*$ are particular choices of $h$. The controlled time evolution operator is then simply defined to be the one generated by $H(h)$ as given by \[ U(h_t) = e^{-i \Delta t H(h_t) / \hbar} \] where we assume $h_t$ to be time dependent. The closeness between the state $\ket{\psi(T)}$ resulting from the controlled time evolution and the target state $\ket{\psi^*}$ is measured by their squared overlap \[ S_2(\psi^*, \psi(T)) = |\braket{\psi^*, \psi(T)}|^2, \] similar to what was shown in Section~\ref{q_control}. We thus obtain the optimization problem formulation \[ \max_{\{h_t\}} S_2(\psi^*, \psi(\{h_t\})) \] representing this quantum control scenario.
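Setting $\hbar = 1$ for simplicity, this evolution and overlap can be sketched in numpy for a single spin ($L=1$, where the coupling term drops out). The function names and the constant driving used in the check are illustrative only:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(h, J=-1.0, g=-1.0):
    """Single-spin (L=1) Ising Hamiltonian: no coupling term, H = g*sz + h*sx."""
    return g * sz + h * sx

def ground_state(M):
    """Eigenvector of the lowest eigenvalue (eigh returns ascending order)."""
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 0]

def step(M, dt):
    """U = exp(-i dt M) via eigendecomposition (hbar set to 1)."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(np.exp(-1j * dt * vals)) @ vecs.conj().T

def evolve(psi, hs, dt):
    """Apply the controlled evolution for a sequence of field strengths h_t."""
    for ht in hs:
        psi = step(H(ht), dt) @ psi
    return psi

def S2(target, psi):
    """Squared overlap |<target|psi>|^2."""
    return abs(np.vdot(target, psi)) ** 2
```

Driving with the constant field $h_t = h_i$ leaves the initial ground state invariant up to a phase, so its squared overlap with itself stays 1, which is a convenient correctness check.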
Next, we will introduce some RL tasks arising in this control scenario. Similarly to the taxonomy introduced above, we will thereby distinguish between a discrete, a continuous and a constrained case. These cases correspond to different domains of possible values for the time-dependent field strengths $h_t$. All of them, however, have in common that we assume a maximal magnitude $h_{max}$ of the field strength, such that $h_t \in [-h_{max}, h_{max}]$ holds. This simply reflects the fact that in real experiments infinite field strengths are impossible to achieve. \begin{description} \item[Discrete case] Knowing that the potentially continuous domain of our control parameter $h_t$ is upper and lower bounded by $\pm h_{max}$, we can apply Pontryagin's principle to limit ourselves to actions $s_t \in \{-h_{max}, h_{max}\}$. We thus obtain a reinforcement learning problem where at each point in time the agent has to make a binary decision.
While this is arguably the easiest conceivable scenario, the sequence space is still of size $|S|=2^T$. \item[Continuous case] Although we know from theory that optimal sequences comprise only extremal values of the control parameter $h_t$, it is still interesting to examine whether the agent is able to discover this rule by itself. In this case we hence allow the agent to freely choose $h_t \in [-h_{max}, h_{max}]$, which again presents us with a sequence space of infinite size. Following our reasoning from the continuous quantum memory case, we cast the problem as learning the deviation $\Delta h \in [0, h_{max}]$ from $\pm h_{max}$. Hence, for each time $t$ the agent must predict the deviation $\Delta h$ and decide to which of the two extremal values the deviation should be applied. This formulation allows the agent to predict any value in $[-h_{max}, h_{max}]$. \item[Constrained case] In the continuous case as defined above, we know that the agent should ideally learn to predict deviations of 0 to achieve sequences with extremal values of $h_t$. We can thus make the problem more challenging by imposing an upper bound $B < T|h_{max}|$ on $\sum_t |h_t|$, representing an upper limit on the total field strength. Imposing such a bound is not an artificial problem, as it could for instance model energy constraints in real experiments. The constraint can easily be realized by defining the reward of a sequence $s$ to be \[ R(s) = \begin{cases} S_2(\psi^*, \psi(s)) & \text{if} \; \sum_t |h_t| \leq B \\ 0 & \text{else}. \end{cases} \] This requires the agent to learn how to distribute a global budget over a given sequence, where it can allocate at most an absolute field strength of $|h_{max}|$ to each action $s_t$. As it is not clear which values are optimal in principle for a given bound $B$, instead of a deviation we here let the agent directly predict the field strength $h_t$.
\end{description} \section{Introduction} As a result of collaborative efforts by academia and industry, machine learning (ML) has in recent years led to advancements in several fields of application, ranging from natural language and image processing over chemistry to medicine. In addition, reinforcement learning (RL) has recently made great progress in solving challenging problems like Go or Chess~\cite{silver2017mastering, silver2017mastering1} with only small amounts of prior knowledge, which was widely believed to be out of reach for the near future. Consequently, RL is nowadays thought to hold promise for applications such as robotics or molecular drug design. This success naturally raises the question of what other areas of application might benefit from the application of machine learning.
Quantum mechanics, and especially quantum computing, is of special interest to the machine learning community, as it can not only profit from applications of state-of-the-art ML methods but is also likely to have an impact on the way ML is done in the future~\cite{biamonte2017quantum}. This bidirectional influence sets it apart from most other applications and is a strong incentive to investigate possible uses of machine learning in the field despite the comparably steep learning curve. One challenging and important task in the context of quantum physics is the control of quantum systems over time to implement the transition between an initial and a defined target physical state by finding good settings for a set of control parameters~\cite{nielsen2002quantum}. This problem lies at the heart of quantum computation, as performing any kind of operation on quantum bits (qubits) amounts to implementing a controlled time evolution with high accuracy in the face of noise effects induced by the environment. Apart from the relevance to quantum computation, the analysis and understanding of the properties of quantum control problems is also an interesting research problem in its own right. However, for a given physical system as implemented in a real experiment, it is in general not possible to express all influence factors and dependencies of particles in mathematical form to perform an analytical analysis or gradient-based optimization of the control variables. Thus, physicists have for some time been proposing automated solutions for these problems~\cite{wigley2016fast, melnikov2018active, palitta2017learning, bukov2017machine, august2017using} that are able to find good control parameter settings while being as agnostic as possible about the details of the problem in question.
Unfortunately, though, these approaches are in general based on tailored solutions that do not necessarily generalize to other problems, as they, e.g.\ only consider discrete variables when the underlying problem is actually continuous, and are not always very sample efficient. In this work we improve over the status quo by introducing a control method based on recurrent neural networks (RNNs) and policy gradient reinforcement learning that is generic enough to tackle every kind of quantum control problem while simultaneously allowing for the incorporation of physical domain knowledge. More precisely, we present an improved version of the recently introduced proximal policy optimization (PPO) algorithm~\cite{schulman2017proximal} and use it to train Long Short-Term Memory (LSTM)~\cite{hochreiter1997long} networks to approximate the probability distribution of good sequences of control parameters. We furthermore show how physical domain knowledge can be incorporated to obtain state-of-the-art results for two recently addressed control problems~\cite{bukov2017machine, august2017using}. While our method is based on an analysis of the reinforcement learning problem underlying quantum control, it can also be applied to other RL problems with the same structure. Our contribution hence is threefold: we firstly introduce the general method, secondly demonstrate how to successfully apply it to quantum control problems and thirdly, by doing so, try to stimulate a more intense exchange of ideas between quantum physics and the broader machine learning community to facilitate mutual benefit. The rest of this work is structured as follows: in Section~\ref{q_control}, we provide a very brief introduction to quantum control, followed by a discussion and analysis of the reinforcement learning problem posed by quantum control in Section~\ref{rl_for_qc}.
Building on the analysis, we present the method in Section~\ref{algorithm} and subsequently introduce two concrete quantum control problems in Sections~\ref{quantum_memory} and \ref{ground_states} respectively. We then present numerical results obtained by our method for these problems and compare them to those of existing solutions in Section~\ref{numerics}. Finally, we conclude with a discussion of the work in Section~\ref{conclusion}. \section{Results} \label{numerics} \begin{table} \caption{The best values of $D(U,I)$ found by our method for the discrete, semi-continuous and continuous quantum memory learning tasks, together with baseline results. The reference values were taken from~\cite{august2017using} and computed with the corresponding algorithm for $T=0.512$ and $\Delta t=0.002$. Lower values are better.} \begin{center} \begin{tabular}{c *{4}{c} } & \multicolumn {2}{c}{$\Delta t =0.002$} & \multicolumn {2}{c}{$\Delta t =0.004$}\\ \hline $T=$ & $0.064$ & $0.512$ & $0.256$ & $0.512$\\ \hline Ref. & $7\cdot 10^{-5}$ & $2\cdot 10^{-4}$ & $4\cdot 10^{-4}$ & $8\cdot 10^{-4}$ \\ \hline Disc. & $7\cdot 10^{-5}$ & $2\cdot 10^{-4}$ & $4\cdot 10^{-4}$ & $8\cdot 10^{-4}$ \\ \hline Semi-Cont. & $6\cdot 10^{-5}$ & $2\cdot 10^{-4}$ & $4\cdot 10^{-4}$ & $8\cdot 10^{-4}$ \\ \hline Cont. & $6\cdot 10^{-5}$& $2\cdot 10^{-4}$ & $4\cdot 10^{-4}$ & $7\cdot 10^{-4}$ \\ \hline \end{tabular} \end{center} \label{tab_qm_0} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{qm_disc_seqs} \caption{The best 10 sequences found for the discrete learning problem with varying parameters of $T$ and $\Delta t$.
It is clearly visible how the best sequences for each setting share common structural properties and also exhibit recurring patterns, making them amenable to machine learning models.} \label{fig:qm_disc_seqs} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{mem_eval} \caption{A comparison of the convergence behavior of the best results sampled per iteration for different sizes of the memory, no memory, and a memory with the $L^{CLIP}$ loss applied to the individual $c_t$, for $T=0.064$ and $\Delta t=0.002$. The convergence becomes more stable with larger memory, and updates based on the entire sequences lead to convergence to better sequences.} \label{fig:mem_eval} \end{figure} In this section we present numerical results for the two application scenarios introduced above to illustrate the validity of our method and the usefulness of the MPPO algorithm. As we did not have real physical experiments implementing these scenarios at our disposal, the results presented in the following are based on numerical simulations. \subsection{Quantum Memory} For the quantum memory scenario, we investigate the performance of our algorithm for different lengths of the discrete time step, total evolution times and across the three formulations of the problem described above. More concretely, we explore the method's behavior for a discrete time evolution with $\Delta t \in\{0.002, 0.004\}$, $T \in \{0.064, 0.256, 0.512, 1.024\}$ and a physical system consisting of one memory qubit coupled to a bath of four qubits with up to three-body interactions to allow for a comparison with the baseline results~\cite{august2017using}. We refer the interested reader to this article for a more precise description of the physical setup. While we ultimately would like to optimize $D(U(c),I)$ as defined above, we used $1-D(U(c),I)$ as a reward signal to obtain an $R(c) \in [0,1]$.
We furthermore shifted the reward such that a uniformly random policy obtains zero reward on average. As the three learning tasks introduced for this scenario differ in their action domains, we need to use a different probabilistic model for each setting. For the discrete case, we simply model each element $c_t$ of a sequence $c$ by a categorical distribution such that we have \[ p(c) = \prod_t Cat(c_t \in \{I,X,Y,Z\}|\{p_{I,t}, p_{X,t}, p_{Y,t}, p_{Z,t}\}) \] for a complete sequence $c$. In the semi-continuous case we employ a mixture-of-Gaussians distribution, which yields \[ p(c) = \prod_t \sum_{i \in \{I,X,Y,Z\}} p_{i,t} \mathcal{N}(c_t=\Delta \alpha| \mu_{i,t}, \sigma_t). \] This can easily be generalized to the continuous case via a multivariate mixture-of-Gaussians distribution with diagonal covariance matrix, such that we obtain \[ p(c) = \prod_t \sum_{i \in \{I,X,Y,Z\}} p_{i,t} \mathcal{N}(c_t=\{\Delta \alpha,\Delta \theta,\Delta \phi\}| \boldsymbol{\mu}_{i,t}, \sigma_t I). \] Note that we have omitted here the dependence on the weights $\Theta$ for the sake of brevity. As discussed in Section~\ref{rl_for_qc}, we use an LSTM to parameterize these probability densities. More concretely, we use a two-layer LSTM whose output is fed into a softmax layer to predict the $p_{i,t}$. From this output state and the relevant parts of the output from the previous time step we also predict the $\mu_i$ for $\Delta \alpha$ in the semi-continuous case, and analogously for $\Delta \theta$ and $\Delta \phi$ in the continuous case. For every deviation output we train an individual output unit for each discrete rotation. For the semi-continuous and continuous tasks, we scale the standard deviation $\sigma_t$ and PPO parameter $\epsilon$ over the course of the optimization using our introduced adaptation scheme with a window size of 10 and optimize the loss function with the Adam optimizer~\cite{kingma2014adam}.
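To make the sequence densities concrete, the two building blocks above can be sketched as follows. This is a simplification of ours: the parameter arrays are given up front, whereas in the actual model the LSTM produces them step by step:

```python
import numpy as np

def categorical_logp(seq, probs):
    """log p(c) = sum_t log Cat(c_t | p_t) for a discrete control sequence.

    seq:   length-T sequence of action indices (e.g. 0..3 for I, X, Y, Z)
    probs: (T, K) array of per-step categorical probabilities
    """
    return float(np.sum(np.log(probs[np.arange(len(seq)), seq])))

def mog_logp(x, weights, means, sigma):
    """Log density of a one-dimensional mixture of Gaussians, as used per
    time step for the (semi-)continuous deviation outputs."""
    comp = weights * np.exp(-0.5 * ((x - means) / sigma) ** 2) \
           / (sigma * np.sqrt(2 * np.pi))
    return float(np.log(np.sum(comp)))
```

A uniform categorical policy over the four rotations assigns each length-$T$ sequence the log probability $T \log \tfrac14$, which matches the shifted-reward baseline intuition that a random policy carries no information about good sequences.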
The scores $D(U(c),I)$ of the best sequences found in our numerical experiments are listed in Table~\ref{tab_qm_0}. They clearly show that our method is able to achieve the same or slightly better results than the baseline algorithm from~\cite{august2017using} for all considered settings and learning tasks. For the semi-continuous case, we observe that for the setting involving the shortest sequences, slightly better sequences than in the discrete case can be found. For longer sequences the performance is on par with the discrete sequences. The same in principle holds for the continuous case, with the exception of the results for $T=0.512$ and $\Delta t=0.004$ being slightly better than for the other two cases. Overall we can conclude that our method finds sequences several orders of magnitude better than those a random policy generates, whose scores generally lie in the interval $[0.1, 0.5]$, showing that in all cases LSTMs trained by the MPPO algorithm perform quite well. We can also see that the discrete sequences pose a strong baseline that is hard to beat even with a fully continuous approach; in fact, we observed the predicted deviations to converge to very small values. The results furthermore support the conjecture that good sequences share common structure and local patterns that can be learned, which is also illustrated in Figure~\ref{fig:qm_disc_seqs}. Here, the best 10 sequences found during the training process in the discrete case for three different settings are shown, illustrating the high degree of structure that the best sequences exhibit. The structural similarities become more apparent with growing sequence length. Interestingly, in all cases the best sequences only make use of two of the four Pauli rotations and, less surprisingly, never use the identity `rotation'.
In Figure~\ref{fig:mem_eval} we show the effect of different sizes of the memory $M$ on the convergence of the best sequences in the discrete case for otherwise constant optimization parameters. As can be seen, when not using a memory or only storing the best sequence, the optimization diverges. For larger sizes of the memory, the algorithm converges to better and better sequences, arriving at the best sequence found for this setting with a memory of 1024 sequences. We also compared the performance of our algorithm to updates computed not over complete sequences but over the single control parameters $c_t$, as done in the PPO algorithm, for $|M|=1024$. While the latter also performs well, only the former converges to the best sequence. \begin{table} \caption{The best values of $S_2$ obtained by our method for the discrete, continuous and constrained ground state transition learning problems with reference values taken from~\cite{bukov2017machine}. Higher values are better.} \begin{center} \begin{tabular}{c *{4}{c} } & $T=0.5$ & $T=1$ & $T=3$ \\ \hline Ref. ($L=1$) & $0.331$ & $0.576$ & $1$ \\ \hline Disc. ($L=1$) & $0.331$ & $0.576$ & $1$ \\ \hline Cont. ($L=1$) & $0.331$ & $0.576$ & $1$ \\ \hline Disc. ($L=5$) & $0.57$ & $0.767$ & $1$ \\ \hline Cont. ($L=5$) & $0.57$ & $0.768$ & $1$ \\ \hline Const. ($B=20$) & $0.313$ & $-$ & $-$ \\ \hline Const. ($B=30$) & $0.322$ & $-$ & $-$ \\ \hline Const. ($B=40$) & $-$ & $0.572$ & $-$ \\ \hline Const. ($B=50$) & $-$ & $0.577$ & $-$ \\ \hline Const. ($B=60$) & $-$ & $0.577$ & $-$ \\ \hline Const. ($B=120$) & $-$ & $-$ & $1$ \\ \hline Const. ($B=140$) & $-$ & $-$ & $1$ \\ \hline Const. ($B=160$) & $-$ & $-$ & $1$ \\ \hline \end{tabular} \end{center} \label{tab_gs_0} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{gs_const_seqs} \caption{The 10 best sequences found for different values of $T$ and a maximal field strength $B$ amounting to half of the maximum possible total field strength.
While the best sequences for $T=0.5$ and $T=1.0$ are very similar and use the maximal possible absolute field strength, the best sequences for $T=3.0$ use much smaller pulses.} \label{fig:gs_const_seqs} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{sigma_conv} \caption{The convergence of the best and average reward per iteration, together with the dynamically adapted $\sigma$, for the constrained scenario with $T=1.0$ and $B=60$.} \label{fig:sigma_conv} \end{figure} \subsection{Ground state transition} In the ground state transition setting, we evaluate our method for times $T \in \{0.5, 1, 3\}$ with $\Delta t=0.05$, an initial $h_i=-2$ and target $h^*=2$, as well as $h_{max}=4$, to achieve comparability with the baseline results~\cite{bukov2017machine}. For the discrete and continuous case, we consider systems of size $L=1$ and $L=5$, and $B \in \{20,30,40,50,60,100,120,140\}$ with $L=1$ for the constrained case. Since the overlap $S_2$ as defined above already lies in the interval $[0,1]$, we used it directly as reward function, again shifting it such that a uniformly random policy achieves zero reward on average. The probabilistic modelling of the sequences is similar to the quantum memory case in that we use a categorical distribution for the discrete case and a mixture of Gaussians for both the continuous and constrained tasks. Thereby, we model the probability density of the deviations $\Delta h_t$ in the continuous case and of the predicted absolute value of $h_t$ in the constrained case. The distributions are parameterized in the same way as above, namely by a two-layer LSTM from whose output state both the discrete probabilities and the means are predicted. The optimization is conducted as in the quantum memory scenario. The results of our numerical experiments are listed in Table~\ref{tab_gs_0}.
As shown, our method was able to replicate the baseline results from~\cite{bukov2017machine} both for the discrete and the continuous formulation of the problem for a system size of $L=1$, and it also performs well for larger systems of $L=5$, with both versions yielding generally the same results. We indeed found the continuous version to converge to predicting zero deviation, as expected. For the constrained case, we can see that our method converges to sequences whose performance is surprisingly close to the baseline results even when allowed to use only half of the maximal total field strength. For $T=3.0$ the imposed constraints in fact seem to have no negative effect, as apparently sequences with a very small total field strength already suffice to achieve perfect overlap. This is also illustrated by Figure~\ref{fig:gs_const_seqs}, which shows the best 10 sequences found during the training process for $T \in \{0.5,1.0,3.0\}$ and $B$ set to half the maximal total field strength. While for the two smaller total times the sequences are very similar and always either make use of the maximal field strength or apply no pulse at all, for $T=3.0$ only the general scheme of applying positive pulses first, then doing nothing, and finally applying negative pulses persists. The individual pulses that are applied are very weak, and an entire sequence typically only amounts to a total absolute strength of $\sim 6$. This phenomenon is likely caused by the fact that the optimization problem becomes significantly easier for longer times~\cite{bukov2017machine}. In Figure~\ref{fig:sigma_conv} we display the convergence of the best and average results sampled per iteration, together with the dynamic schedule for $\sigma$ during the optimization. It can be seen that $\sigma$ is dynamically increased when the convergence slows down, decreased when it speeds up, and finally converges to a stable value as the optimization converges as well.
In other scenarios we also observed our adaptation scheme to perform similarly to a decayed annealing schedule. \section{Applying the Method} In this section, we introduce two quantum control scenarios that were recently explored via machine learning~\cite{bukov2017machine, august2017using}. We show how one can apply our method to tackle some interesting learning tasks arising in these control settings by leveraging physical domain knowledge. \subsection{Quantum Memory} \label{quantum_memory} One particular instance of a quantum control problem is the problem of storing the state of a qubit, i.e.\ a two-level system used in quantum computation. This is, next to quantum error correction, a very relevant problem in quantum computation. Here we assume that our qubit is embedded in some environment, called the bath, such that the complete system lives in the Hilbert space \begin{align*} \mathcal{H} = \mathcal{H}_S \otimes \mathcal{H}_B \end{align*} with the subscripts $S$ and $B$ denoting the space of the system and bath respectively. If we let this system evolve freely, decoherence effects will over time destroy the state of the qubit. Hence the question is how we can intervene to prevent the loss of the state in the presence of the environment (or, in computer science terms, the noise), where we assume to have control over the qubit only. From a quantum computing perspective, we would like to implement a gate that performs the identity function over a finite time interval. Qubit states are commonly represented as points on the Bloch sphere~\cite{nielsen2002quantum}, and the effect of the environment on the qubit can in this picture be perceived as some rotation that drives the qubit away from its original position. To counter this, we must hence determine a good rotation at each time step such that we negate the effect of the environment. So, our goal is to dynamically decouple the qubit from its bath by performing these rotations.
The rotation of a qubit is defined as \begin{align*} R_n(\alpha) = e^{-i\frac{\alpha}{2}n\cdot\boldsymbol{\sigma}} \end{align*} with $n$ being a unit vector specifying the rotation axis, $\alpha$ denoting the rotation angle and $\boldsymbol{\sigma}$ the `vector' of the stacked Pauli matrices $\sigma_{\{x,y,z\}}$~\cite{sakurai1995modern}. Thus our controlled time evolution operator per time step $t$ becomes \begin{align*} U(n_t, \alpha_t) = e^{-i \Delta t ( H_0 + \frac{\alpha_t }{2\Delta t}n_t\cdot\boldsymbol{\sigma} \otimes I_B)}, \end{align*} expressing that we apply the rotation only to the qubit, not to the bath. The noise Hamiltonian $H_0$ here reflects the effect of the bath on the qubit, and $I_B$ simply denotes the identity of the dimensionality of $\mathcal{H}_B$, such that the Kronecker product yields a matrix of the same size as $H_0$. One possible metric to quantify how well we were able to preserve the qubit's state is \begin{align*} D(U,I) = \sqrt{1 - \frac{1}{d_S d_B} \Vert \trace_S(U)\Vert_{\trace}} \end{align*} with $U$ denoting the total evolution operator, $I$ the identity and $\trace_S$ the partial trace over the system~\cite{quiroz2013optimized}. $\Vert U \Vert_{\trace} = \trace \sqrt{U^{\dagger}U}$ is the trace or nuclear norm. This distance measure is minimized by the ideal case $U = I_S \otimes U_B$ with an arbitrary unitary $U_B$ acting on the bath. Thus, the problem we would like to solve is a special instance of quantum control and can be formulated as \begin{align*} \min_{\{(n_t, \alpha_t)\}} D(U(\{(n_t, \alpha_t)\}), I). \end{align*} Having introduced the quantum memory scenario, we now turn to a description of possible reinforcement learning tasks in this context. We present three different formulations of the setting, which we will in the following refer to as the discrete, semi-continuous and continuous case. These formulations differ in the parametrization of the rotation $R_n(\alpha)$ that is to be performed at each time step.
\begin{description} \item[Discrete case] It is known from analytical derivations that the Pauli matrices $\sigma_{\{0, x,y,z\}}$ give rise to optimal sequences under certain ideal conditions~\cite{viola1999dynamical, souza2011robust}, where at each time step exactly one of the rotations $R_{\{0,x,y,z\}}=e^{-i \frac{\pi}{2} \sigma_{\{0,x,y,z\}}}$ is performed. $\sigma_0$ hereby denotes the identity. Hence, in the simplest formulation we can define the problem as choosing one of the four Pauli matrices at each time step. This formulation leads to a sequence space $S$ of size $|S|=4^T$, exponential in the sequence length $T$. This is also the formulation used in recent work on quantum memory~\cite{august2017using}. \item[Semi-continuous case] While the class of sequences introduced above is provably ideal under certain conditions, one might be interested in allowing the agent more freedom to facilitate its adaptation to more adverse conditions. This can in a first step be achieved by allowing the agent full control over the rotation angle while keeping the discrete formulation for the rotation axis. That means that at each time step, the agent has to choose a rotation axis from $\sigma_{\{0,x,y,z\}}$ as before, but now must also predict the rotation angle $\alpha \in [0,2\pi]$. As $\alpha$ can take infinitely many values, this formulation of the problem yields a sequence space $S$ of infinite size, making it much harder from a reinforcement learning perspective. To lighten this burden, we can make use of the fact that a rotation by $\pi$ is in principle ideal. Thus, we will interpret the output of the agent as the deviation $\Delta \alpha \in [-\pi, \pi]$ from $\pi$. This should facilitate learning progress even in the early training phase. \item[Continuous case] Finally, we can of course also allow the agent full control over both the rotation angle and axis.
This formulation of the problem requires the agent to predict a unit vector $n \in \mathbb{R}^3$ and a corresponding rotation angle $\alpha$ for each time step. It is clear that without any prior knowledge it will be very difficult for the agent to identify the `right corner' of this infinite sequence space. We hence propose to again leverage the knowledge about Pauli rotations being a good standard choice by having the agent predict a Pauli rotation together with the deviation in $n$ and $\alpha$. While for $\alpha$ we have already seen how this can be easily achieved, $n$ requires slightly more insight. As is customary in quantum physics, every state of a two-dimensional particle $\ket{\psi}$ can be represented by choosing two angles $\theta \in [0,\pi]$ and $\phi \in [0, 2\pi]$, yielding the three-dimensional real unit Bloch vector \[b = \begin{pmatrix} \sin \theta \cos \phi \\ \sin \theta \sin \phi \\ \cos \theta \end{pmatrix}. \] We can hence use this formulation to parameterize $n$ by $\theta$ and $\phi$. It is easy to see that the Pauli rotations correspond to the unit vectors that equal a one-hot encoding of the Pauli matrices such that we obtain the following identities \begin{align*} \theta_x &= \theta_y = \phi_y = \frac{\pi}{2} \,\, \text{and} \\ \phi_x &= \phi_z = \theta_z = 0 \end{align*} with periodicity in $\pi$. We can now leverage this knowledge by translating the Pauli rotation axis chosen by the agent into its Bloch expression and requring it to predict the deviations $\Delta \theta$ and $\Delta \phi$. In this way the agent has access to the full axis space. As with the rotation angle, this formulation has the effect that the agent starts learning from a reasonable baseline. \end{description} \section{Quantum Memory as RL problem} Having introduced the quantum memory scenario, we now turn to a description of possible reinforcement learning tasks in this context. 
We present three different formulations of the setting which we will in the following refer to as the discrete, semi-continuous and continuous case. These formulations differ in the parametrization of the rotation $R_n(\alpha)$ that is to be performed at each time step. \begin{description} \item[Discrete case] It is known from analytical derivations that the Pauli matrices $\sigma_{\{0, x,y,z\}}$ give rise to optimal sequences under certain ideal conditions~\cite{viola1999dynamical, souza2011robust}, where at each time step exactly one of the rotations $R_{\{0,x,y,z\}}=e^{-i \frac{\pi}{2} \sigma_{\{0,x,y,z\}}}$ is performed. $\sigma_0$ hereby denotes the identity. Hence, in the simplest formulation we can define the problem as choosing one of the four Pauli matrices at each time step. This formulation then leads to a sequence space $S$ of size $|S|=4^T$, which is exponential in the sequence length $T$. This is the formulation which was also used in recent work on quantum memory~\cite{august2017using}. \item[Semi-continuous case] While the class of sequences introduced above is provably ideal under certain conditions, one might be interested in allowing the agent more freedom to facilitate its adaptation to more adverse conditions. This can in a first step be achieved by allowing the agent full control over the rotation angle while keeping the discrete formulation for the rotation axis. That means that at each time step, the agent will have to choose a rotation axis from $\sigma_{\{0,x,y,z\}}$ as before, but now must also predict the rotation angle $\alpha \in [0,2\pi]$. As $\alpha$ can take infinitely many values, this formulation of the problem now yields a sequence space $S$ of infinite size, making it much harder from a reinforcement learning perspective. To lighten this burden we can make use of the fact that we know that in principle a rotation by $\pi$ is ideal.
Thus, we will interpret the output of the agent as the deviation $\Delta \alpha \in [-\pi, \pi]$ from $\pi$. This should facilitate learning progress even in the early training phase. \item[Continuous case] Finally, we can of course also allow the agent full control over both the rotation angle and axis. This formulation of the problem requires the agent to predict a unit vector $n \in \mathbb{R}^3$ and a corresponding rotation angle $\alpha$ for each time step. It is clear that without any prior knowledge it will be very difficult for the agent to identify the `right corner' of this infinite sequence space. We hence propose to again leverage the knowledge about Pauli rotations being a good standard choice by having the agent predict a Pauli rotation together with the deviation in $n$ and $\alpha$. While for $\alpha$ we have already seen how this can be easily achieved, $n$ requires slightly more insight. As is customary in quantum physics, every pure state $\ket{\psi}$ of a two-level system can be represented by choosing two angles $\theta \in [0,\pi]$ and $\phi \in [0, 2\pi]$, yielding the three-dimensional real unit Bloch vector \[b = \begin{pmatrix} \sin \theta \cos \phi \\ \sin \theta \sin \phi \\ \cos \theta \end{pmatrix}. \] We can hence use this formulation to parameterize $n$ by $\theta$ and $\phi$. It is easy to see that the Pauli rotations correspond to the unit vectors that equal a one-hot encoding of the Pauli matrices such that we obtain the following identities \begin{align*} \theta_x &= \theta_y = \phi_y = \frac{\pi}{2} \,\, \text{and} \\ \phi_x &= \phi_z = \theta_z = 0 \end{align*} with periodicity in $\pi$. We can now leverage this knowledge by translating the Pauli rotation axis chosen by the agent into its Bloch expression and requiring it to predict the deviations $\Delta \theta$ and $\Delta \phi$. In this way the agent has access to the full axis space.
As with the rotation angle, this formulation has the effect that the agent starts learning from a reasonable baseline. \end{description} \section{Quantum Control} \label{q_control} The time evolution of a physical system in quantum mechanics is described by the Schr\"odinger equation \begin{align} i\hbar\frac{\partial}{\partial t} \ket{\psi(t)} = H \ket{\psi(t)} \end{align} where $H$ is the Hamiltonian, a complex Hermitian matrix describing the energy of the physical system, and $\hbar$ is the reduced Planck constant~\cite{cohen1977quantum}. Hereby, $\ket{\psi}$ is the Dirac notation for a physical state which, for finite-dimensional systems as treated here, corresponds to a complex column vector of the same dimensionality as the Hamiltonian's. The conjugate transpose of a vector $\ket{\psi}$ is then denoted as $\bra{\psi}$ such that $\braket{\psi,\psi}$ denotes the inner and $\ket{\psi}\bra{\psi}$ the outer product. The Schr\"odinger equation yields the unitary quantum time evolution \begin{align} \ket{\psi(t)} = e^{-itH/\hbar} \ket{\psi(0)}. \end{align} In a discretized time setting with time steps $\Delta t$ the evolution for a total time $T$ can thus be written as \begin{align} \ket{\psi(T)} = \left(e^{-i\Delta t H /\hbar}\right)^{L} \ket{\psi(0)} \end{align} where we define $L = T/{\Delta t}$. In quantum control we now assume to be able to control the time evolution by application of so-called control Hamiltonians $H_1,\cdots,H_C$, which yields the controlled time evolution \begin{align} \ket{\psi(T)} = &{e^{-i\Delta t \sum_{i=1}^C c_{iL} H_i /\hbar}}\cdots\\ &{e^{-i\Delta t \sum_{i=1}^C c_{i1} H_i /\hbar}} \ket{\psi(0)} \end{align} where the $c_{it}$ are time-dependent scaling constants for the control Hamiltonians. This formulation however assumes that we have full control over the system, which due to various kinds of noise or environmental effects will not be the case.
Hence we introduce a noise or drift Hamiltonian $H_0$, which we here assume to be time independent and of constant strength, and obtain the final formulation \begin{align} \ket{\psi(T)} = &{e^{-i\Delta t (H_0 + \sum_{i=1}^C c_{iL} H_i)}}\cdots\\ &{e^{-i\Delta t( H_0 + \sum_{i=1}^C c_{i1} H_i)}} \ket{\psi(0)} \end{align} where we set $\hbar=1$ for convenience. Now that we have a well-defined notion of our control problem, we need to state the actual goal that we aim to achieve. Generally, starting from an initial state $\ket{\psi(0)}$ or the corresponding density operator $\rho(0) = \ket{\psi(0)}\bra{\psi(0)}$, we would like to obtain an evolution to a target state $\ket{\psi^*}$ or $\rho^* = \ket{\psi^*}\bra{\psi^*}$. Hence we need to define some similarity measure between the state we actually obtain after evolving for time $T$ and our ideal result. The easiest way of doing this is simply to compute the overlap between these states by \begin{align} S(\psi^*, \psi(T)) = \braket{\psi^*, \psi(T)} \end{align} or \begin{align} S(\rho^*,\rho(T))= \trace {\rho^*}^{\dagger}\rho(T) \end{align} respectively for Hermitian operators, and correspondingly using only the real part $\text{Re}(S(\rho^*,\rho(T)))$ for non-Hermitian ones~\cite{khaneja2005optimal}. Equipped with this metric, we can formally define the problem we would like to solve as \begin{align} \max_{\{c_{it}\}} S(\rho^*, \rho(T, \{c_{it}\})). \end{align} This formulation is broad enough to capture every problem from synthesizing certain quantum gates, to evolving from one eigenstate of a Hamiltonian to another, to storing the initial state in a quantum memory setting. \section{Applying the Method} In this section, we will now introduce two quantum control scenarios that were recently explored via machine learning~\cite{bukov2017machine, august2017using}. We show how one can apply our method to tackle some interesting learning tasks arising in these control settings by leveraging physical domain knowledge.
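To make the controlled evolution concrete, the piecewise-constant propagation and the overlap measure $S$ can be sketched as follows. This is a minimal single-qubit illustration, not code from the paper: the drift $H_0 = \sigma_z$, the single control $H_1 = \sigma_x$, and the step count are illustrative choices.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def step_unitary(H, dt):
    """exp(-i dt H) for Hermitian H via eigendecomposition (hbar = 1)."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * dt * w)) @ V.conj().T

def evolve(psi0, H0, controls, Hc, dt):
    """Piecewise-constant controlled evolution with drift H0 and one control Hc."""
    psi = psi0
    for c in controls:                      # c plays the role of c_{1t}
        psi = step_unitary(H0 + c * Hc, dt) @ psi
    return psi

def overlap(rho_target, rho):
    """S(rho*, rho(T)) = Tr(rho*^dagger rho)."""
    return np.real(np.trace(rho_target.conj().T @ rho))

psi0 = np.array([1.0, 0.0], dtype=complex)      # start in |0>
rho_star = np.outer(psi0, psi0.conj())          # goal: preserve |0>

# With all controls off and a pure-dephasing drift H0 = sigma_z, the state
# |0> only acquires a global phase, so the overlap with the target stays 1.
psi_T = evolve(psi0, sz, controls=[0.0] * 10, Hc=sx, dt=0.1)
rho_T = np.outer(psi_T, psi_T.conj())
print(round(overlap(rho_star, rho_T), 6))   # -> 1.0
```

Maximizing $S$ over the $c_{it}$ in this sketch is exactly the optimization problem stated above; a nonzero $H_0$ with off-diagonal terms would drive the overlap below one unless the controls counteract it.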
\subsection{Quantum Memory} \label{quantum_memory} One particular instance of a quantum control problem is the problem of storing the state of a qubit, i.e.\ a two-level system used in quantum computation. This is, next to quantum error correction, a very relevant problem in quantum computation. Here we assume that our qubit is embedded in some environment, called the bath, such that the complete system lives in the Hilbert space \begin{align*} \mathcal{H} = \mathcal{H}_S \otimes \mathcal{H}_B \end{align*} with the subscripts $S$ and $B$ denoting the space of the system and bath respectively. If we let this system evolve freely, decoherence effects will over time destroy the state of the qubit. Hence the question is how we can intervene to prevent the loss of the state in the presence of the environment or, from a computer science perspective, the noise, where we assume to have control over the qubit only. From a quantum computing perspective, we would like to implement a gate that performs the identity function over a finite time interval. Qubit states are commonly represented as points on the Bloch sphere~\cite{nielsen2002quantum} and the effect of the environment on the qubit can in this picture be perceived as some rotation that drives the qubit away from its original position. To counter this problem we must hence determine a good rotation at each time step such that we negate the effect of the environment. So, our goal is to dynamically decouple the qubit from its bath by performing these rotations. The rotation of a qubit is defined as \begin{align*} R_n(\alpha) = e^{-i\frac{\alpha}{2}n\mathbf{\sigma}} \end{align*} with $n$ being a unit vector specifying the rotation axis, $\alpha$ denoting the rotation angle and $\mathbf{\sigma}$ the `vector' of the stacked Pauli matrices $\sigma_{\{x,y,z\}}$~\cite{sakurai1995modern}.
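As a small illustration (not part of the paper's own code), the rotation $R_n(\alpha)$ and the $(\theta,\phi)$ Bloch-angle parameterization of the axis can be written out explicitly; for a unit vector $n$ the identity $e^{-i\frac{\alpha}{2}n\mathbf{\sigma}} = \cos(\alpha/2)\,I - i\sin(\alpha/2)\,n\mathbf{\sigma}$ makes the matrix exponential trivial.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rotation(n, alpha):
    """R_n(alpha) = exp(-i alpha/2 n.sigma) = cos(a/2) I - i sin(a/2) n.sigma."""
    n_sigma = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(alpha / 2) * I2 - 1j * np.sin(alpha / 2) * n_sigma

def bloch_axis(theta, phi):
    """Unit rotation axis parameterized by the two Bloch angles."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# (theta, phi) = (pi/2, 0) gives n along x, so R_n(pi) = -i sigma_x,
# i.e. a pi-rotation about the x axis up to a global phase.
R = rotation(bloch_axis(np.pi / 2, 0.0), np.pi)
print(np.allclose(R, -1j * sx))   # -> True
```

The same check works for the $y$ and $z$ axes with the angle pairs listed for the continuous case.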
Thus our controlled time evolution operator per time step $t$ becomes \begin{align*} U(n_t, \alpha_t) = e^{-i \Delta t ( H_0 + \frac{\alpha_t }{2\Delta t}n_t\mathbf{\sigma} \otimes I_B)}, \end{align*} expressing that we only apply the rotation to the qubit, but not the bath. The noise Hamiltonian $H_0$ here reflects the effect of the bath on the qubit and $I_B$ simply denotes the identity of the dimensionality of $\mathcal{H}_B$ such that the Kronecker product yields a matrix of equal size to $H_0$. One possible metric to quantify how well we were able to preserve the qubit's state is \begin{align*} D(U,I) = \sqrt{1 - \frac{1}{d_S d_B} \Vert \trace_S(U)\Vert_{\trace}} \end{align*} with $U$ denoting the total evolution operator, $I$ the identity and $\trace_S$ the partial trace over the system~\cite{quiroz2013optimized}. $\Vert U \Vert_{\trace} = \trace \sqrt{U^{\dagger}U}$ is the trace or nuclear norm. This distance measure is minimized by the ideal case $U = I_S \otimes U_B$ with an arbitrary unitary $U_B$ acting on the bath. Thus, the problem we would like to solve is a special instance of quantum control and can be formulated as \begin{align*} \min_{\{(n_t, \alpha_t)\}} D(U(\{(n_t, \alpha_t)\}), I). \end{align*} \section{Reinforcement Learning: Why and What?} \label{rl_for_qc} As we have seen above, solving quantum control problems amounts to determining an optimal, or at least good, sequence of in principle continuous variables that describe the influence we exert on the system at each discrete time step. If a rigorous mathematical description of the evolution dynamics is available, there exist methods like GRAPE~\cite{khaneja2005optimal} or CRAB~\cite{doria2011optimal, caneva2011chopped} to obtain good solutions.
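The distance measure $D(U,I)$ from the quantum memory setting above can be computed directly; the sketch below (an illustration with an arbitrary bath dimension and a random bath unitary, not taken from the paper) confirms that the ideal case $U = I_S \otimes U_B$ gives $D = 0$.

```python
import numpy as np

def partial_trace_system(U, d_S, d_B):
    """Tr_S(U) for U on H_S (x) H_B, with system-major index ordering."""
    U4 = U.reshape(d_S, d_B, d_S, d_B)
    return np.einsum('ibic->bc', U4)     # sum over the repeated system index

def trace_norm(A):
    """Nuclear norm ||A||_tr = sum of singular values."""
    return np.sum(np.linalg.svd(A, compute_uv=False))

def distance(U, d_S, d_B):
    """D(U, I) = sqrt(1 - ||Tr_S(U)||_tr / (d_S d_B))."""
    val = 1.0 - trace_norm(partial_trace_system(U, d_S, d_B)) / (d_S * d_B)
    return np.sqrt(max(val, 0.0))        # clip tiny negatives from round-off

# Ideal case U = I_S (x) U_B: the system is untouched and D = 0, since
# Tr_S(U) = d_S U_B has trace norm d_S * d_B.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U_B, _ = np.linalg.qr(A)                 # a random 4x4 bath unitary
U = np.kron(np.eye(2), U_B)              # d_S = 2, d_B = 4
print(round(distance(U, 2, 4), 6))       # -> 0.0
```

Any entangling evolution between system and bath shrinks $\Vert \trace_S(U)\Vert_{\trace}$ below $d_S d_B$ and hence pushes $D$ above zero.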
However, the gap between theory and experiment does not close in quantum mechanics either, and hence it is reasonable to assume that the actual dynamics of a real experiment will slightly differ from the mathematical model due to various noise effects induced by the environment. As can for instance also be observed in robotics, these slight differences between theory/simulation and real-world implementation might still have a significant impact on the optimization problem to be solved. Additionally, it is clear that in general it is neither an interesting nor feasible task to derive a proper mathematical model for the effect of every influence factor in a real experiment~\cite{bukov2017machine}. This shows that it is worthwhile to investigate ways of optimizing such a control problem from a black-box perspective, in the sense that we are agnostic about the actual time evolution dynamics of the system and can only observe the final results obtained by a chosen set of parameters. In fact, in the absence of a mathematical model, obtaining information after the end of an experiment is the only possible option, as in quantum mechanics a measurement during the experiment would in general cause the wave function to collapse and hence destroy the experiment without any way of determining what the final outcome would have been. Hence the task we would like to solve is to find a controller, or at least a good sequence of control parameters, based on the outcomes of trial runs of a given experiment, which in quantum control terminology corresponds to a closed-loop setting. While one viable route to solving this problem would be to use classical evolutionary or hill-climbing algorithms or more advanced black-box methods such as Bayesian optimization, another interesting option is to fit a generative probabilistic model from which we can efficiently sample good sequences. This approach has two advantages.
Firstly, we can iteratively update the model by fitting it to additional data we might acquire after the initial fitting phase. Doing so allows it to improve over previous results or to adapt to changing conditions, e.g.\ a change of the noise Hamiltonian after some time. This is in contrast to pure optimization methods, which would have to start from scratch for every problem. Secondly, by examining the distribution over the sequence space the model has learned and inspecting the best sampled control sequences, it might be possible to gain a better understanding of the underlying dynamics of a system. It is clear that the sequences of control parameters in a quantum control problem should not be treated as i.i.d.\ as a given choice of parameters $c_t$ at time $t$ potentially depends on all previous choices $c_1,\cdots,c_{t-1}$ and thus we have a conditional distribution $p(c_t|c_1, \cdots, c_{t-1})$. This kind of distribution can successfully be learned by modern RNN variants, such as LSTM or Gated Recurrent Unit (GRU) networks. This can for instance be seen in natural language processing (NLP) problems, which feature similar structure and where RNNs have led to breakthrough results in recent years. Note that, with this modelling decision, we still capture the full multivariate distribution $p(c_1,\cdots,c_T)$ as by the chain rule of probability it holds that \begin{align} p(c_1,\cdots,c_T) = \prod_{t=1}^T p(c_t|c_1,\cdots,c_{t-1}). \end{align} Having decided on the class of models to employ, we are left with the question of how to fit them. This is non-trivial as we obviously cannot hope to obtain gradients of real-world experiments and also cannot assume to have any a priori data available. Hence, we must `query' the experiment to obtain tuples of sequences and results. Thereby we would naturally like to be as sample efficient as possible and hence have to find an intelligent way to draw samples from the experiment and learn from them.
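The chain-rule factorization can be checked on a toy recurrent model: any network that outputs a normalized distribution over $c_t$ given the prefix automatically defines a valid joint distribution over whole sequences. The sketch below is a minimal numpy stand-in for an LSTM/GRU, with random weights and illustrative sizes.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n_actions, hidden, T = 4, 8, 3       # e.g. four Pauli choices per time step

# Toy recurrent model: hidden state -> softmax over the next control choice.
W_h = rng.normal(scale=0.5, size=(hidden, hidden))
W_x = rng.normal(scale=0.5, size=(hidden, n_actions))
W_o = rng.normal(scale=0.5, size=(n_actions, hidden))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sequence_prob(seq):
    """p(c_1,...,c_T) accumulated step by step via the chain rule."""
    h = np.zeros(hidden)
    p, x = 1.0, np.zeros(n_actions)
    for c in seq:
        h = np.tanh(W_h @ h + W_x @ x)   # condition on the prefix so far
        p *= softmax(W_o @ h)[c]         # factor p(c_t | c_1,...,c_{t-1})
        x = np.eye(n_actions)[c]         # feed the chosen action back in
    return p

# Summing the per-step factors over all 4^3 sequences gives exactly 1,
# so the factorized model is a proper joint distribution.
total = sum(sequence_prob(s) for s in product(range(n_actions), repeat=T))
print(round(total, 10))   # -> 1.0
```

The same bookkeeping yields $\log p(c)$ as a sum of per-step log-probabilities, which is the quantity a policy-gradient method differentiates.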
In a recent attempt to address this problem, an evolutionary-style algorithm for training LSTMs was introduced~\cite{august2017using} that iteratively generates better data and fits the models to that data, then uses sampling from these models instead of the usual mutation operations to generate new sequences. While the algorithm was able to find better sequences than known in theory for the considered control problem of quantum memory, it was only demonstrated for a discretized version of the problem and there is room for improvement with respect to the efficient use of sampled sequences. A more direct solution to this black-box optimization problem would however be to approximate the gradient of the error function with respect to the parameters of our model from the sampled data. Being able to obtain an approximate gradient would allow us to optimize our model in a gradient descent fashion and thus to leverage existing optimization methods mainly used in supervised learning. Indeed, this is a typical RL scenario which is commonly referred to as \emph{policy gradient} learning. In the following, we will thus show how to solve the optimization task at hand by perceiving the problem of black-box quantum control as an RL problem and tackling it with a state-of-the-art policy gradient algorithm. To this end, we start by analyzing the particular reinforcement learning problem posed by black-box quantum control. As we only receive a result or measurement, from now on also referred to as reward, after having chosen a complete sequence of control parameters, we can perceive the sequence $c=(c_1,\cdots, c_T)$ as a single action of the RL agent for which it receives a reward $R(c)$. This approach most clearly reflects the envisioned closed-loop control scenario explained above.
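The policy-gradient idea rests on the score-function identity $\nabla_\theta \mathbb{E}[R(c)] = \mathbb{E}[R(c)\,\nabla_\theta \log p_\theta(c)]$, which needs only sampled sequences and their rewards, never gradients of the experiment itself. A minimal sketch verifying the identity by exact enumeration on a toy two-action, two-step policy (all sizes and the reward are illustrative):

```python
import numpy as np
from itertools import product

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.array([0.3, -0.2])        # logits of a 2-action per-step policy
T = 2
R = {c: float(sum(c)) for c in product(range(2), repeat=T)}   # toy reward

def seq_prob(c, th):
    p = softmax(th)
    out = 1.0
    for a in c:                       # actions drawn independently per step
        out *= p[a]
    return out

def expected_reward(th):
    return sum(R[c] * seq_prob(c, th) for c in R)

def grad_log_prob(c, th):
    """grad_theta log p(c): per step, d log softmax = one_hot(a) - p."""
    p = softmax(th)
    g = np.zeros_like(th)
    for a in c:
        g += np.eye(2)[a] - p
    return g

# Score-function form of the gradient, evaluated exactly by enumerating
# the four possible sequences instead of sampling.
pg = sum(R[c] * seq_prob(c, theta) * grad_log_prob(c, theta) for c in R)

# Compare against a central finite difference of E[R].
eps = 1e-6
fd = np.array([(expected_reward(theta + eps * np.eye(2)[k])
                - expected_reward(theta - eps * np.eye(2)[k])) / (2 * eps)
               for k in range(2)])
print(np.allclose(pg, fd, atol=1e-6))   # -> True
```

In practice the expectation is replaced by an average over sampled sequences, which is exactly what makes the approach usable in the closed-loop, black-box setting described above.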
Modelling the sequences and their respective results in this way then implies that our Markov decision process (MDP) takes the form of a bi-partite graph consisting of a single initial state $s_0$ on the left and multiple final states $s_c$ on the right that are reached deterministically after exactly one action $c$. The set of states $S$ of this MDP is thus given by $S = \{s_0\} \cup \{ s_c \}$ while the set of actions $A$ corresponds to $A = \{c\}$ and the transition probabilities are defined as $P_c(s_0, s_c)=1$. The reward $R(c)$ of an action $c$ is determined by the associated value of the error function as defined in Section~\ref{q_control}. We assume here that two different sequences always lead to different final states of the system, which is the most challenging conceivable case as equivalence classes in the sequence space would effectively reduce the size of the search space. This particular structure then implies that the value function simplifies to \begin{align} V(s_0) = \max_c R(c) = R(c^{opt}) \end{align} where $c^{opt}$ is the optimal sequence, and the Q-function \begin{align} Q(s_0,c) = R(c) \end{align} is in fact independent of the state and equal to the reward function $R(c)$ as each sequence $c$ is associated with exactly one final state. Additionally, the number of actions $| \{c\} |$ and hence of final states $s_c$ is \emph{at least} exponential in the number of possible values of control parameters per time step $t$ and generally infinite. This learning setting can be perceived as a \emph{multi-armed bandit}~\cite{robbins1985some} problem but constitutes a special case as, firstly, we assume to be only able to perform one action, i.e.\ generate one sequence, before receiving the total reward and, secondly, the actions are not atomic but rather exhibit a structure we exploit for learning.
While it is true that one could derive a different formulation of the problem by considering the $c_t$ to be individual actions and using the discounted rewards of complete sequences, this approach puts more emphasis on optimal local behavior of the agent when our goal clearly is to optimize the global performance, i.e.\ to generate the best possible sequences of control parameters. However, for this RL problem to be solvable, the compositional structure of the actions $c$ is in fact of critical importance, as we will discuss now. In principle, the RL problem amounts to learning to choose the best out of up to infinitely many possible actions, which in general is clearly unsolvable for any algorithm. So, why can we hope to achieve something with an algorithm learning from trials in the introduced problem setting? The main reason for this is in fact that we know that the actions the agent takes are not atomic but concatenations of multiple sub-actions which have a physical meaning. Nature as we perceive it seems to be governed by simple underlying rules (or complex rules that are at least approximated very well by simple ones), which allows us to capture them with mathematical expressions. This in turn implies that there is much structure to be found in Nature and hence it is reasonable to assume that likewise the desirable actions in our learning problem share certain patterns which can be discovered. More precisely, we conjecture that solving the particular problems we are tackling in this work requires less abstract conceptual inference, which would still be out of reach for today's machine learning models, and more recognition of patterns in large sets of trials, i.e.\ control sequences, and hence in fact lends itself to treatment via machine learning and especially contemporary RNN models.
Some empirical evidence for the validity of this conjecture has recently been provided for the problem of quantum memory~\cite{august2017using} and for a problem related to quantum control, the design of quantum experiments~\cite{melnikov2018active}.
\section{\label{sec:introduction}Introduction} Topological semimetals (TSMs) are materials with non-trivial crossing of valence and conduction bands at points, lines, or loops within the Brillouin zone. They are topologically protected in the sense that the crossing cannot be lifted by a symmetry-preserving perturbation. The symmetry which protects these crossings can be global, such as time-reversal, or crystalline, and the crossings are classified through a bulk topological invariant of the Bloch states in a neighbourhood of the crossing. Proposals for TSMs so far include Weyl points\cite{murakami2007phase, murakami2008universal, wan2011topological, burkov2011weyl, xu2011chern, fang2012multi, yan2017topologicalwsm} in systems with broken inversion or time-reversal symmetry, Dirac points\cite{young2012dirac, wang2012dirac, armitage2018weyl}, as well as nodal line semimetals\cite{burkov2011topological, carter2012semimetal,fang2016topologicalnlsm}. In addition, the non-trivial bulk topology in TSMs may manifest itself through associated surface states, including Fermi arcs between bulk Weyl or Dirac points. Among these TSMs, topological crystalline semimetals (TCSMs) with strong spin-orbit coupling possess a nodal Fermi surface (FS) which is protected by non-symmorphic lattice symmetries. In particular, it has been proposed theoretically that SrIrO$_3$ exhibits a nodal ring FS\cite{carter2012semimetal} protected by two perpendicular glide symmetries\cite{fang2015tnlsm, chen2016tcsm}. This nodal line FS is interesting, as it may act as a parent state for other nodal FS structures when symmetry-breaking perturbations are added. Associated with the bulk topology, protected by glide symmetries, are double helicoid surface states\cite{fang2016topological, chen2016tcsm} on the $(001)$ top surface. Furthermore, on side surfaces perpendicular to $(001)$, flat two-dimensional surface states associated with mirror and chiral symmetries are predicted\cite{chen2015topological, kim2015surface} to exist.
Apart from the difficulty of synthesizing SrIrO$_3$\cite{longo1971structure}, these flat 2D surface states on side surfaces are difficult to observe directly due to the fact that the bulk is semimetallic. Resistivity measurements\cite{matsuno2015engineering} performed on thin films, synthesized using pulsed laser deposition, as well as ARPES\cite{nie2015interplay} measurements confirm the semimetallic nature and the nodal FS. However, a clear signature of the flat side surface states remains elusive. In this work, we propose that phonon modes can be used to infer the existence of side surface states. Symmetry properties of the surface state wave function lead to a unique electron-phonon coupling which will damp only certain optical phonon modes at the zone center. This paper is organized as follows. In Section~\ref{sec:surface-states} we study the side surface states in detail through an analytic solution of the wave function (\ref{subsec:open-boundary-wavefunction}), and direct numerical diagonalization (\ref{subsec:slab-calculations}). In Section~\ref{sec:epi-polarization} we show how the symmetry of the electronic wave function constrains the electron-phonon interaction, and calculate the first order phonon self-energy (\ref{subsec:density-response}, Appendix~\ref{app:bubble-calculation}). Finally, experimental techniques used to measure this effect are discussed in Section~\ref{sec:discussion}. \section{\label{sec:surface-states}Surface States in a TCSM} \begin{figure} \includegraphics[width=0.75\linewidth]{fig1.pdf} \caption{(colour online) Surface states in AIrO$_3$ for a $\hat{\bm{b}}$ crystal termination. The two branches, shown in green, disperse linearly away from $k_c = \pi$ but are flat in the $k_a$ direction, forming a line of 1D Dirac cones.
Projection of the bulk nodal ring at $(k_c,k_a) = (\pi,0)$ is shown as a red ellipse.} \label{fig:SBZ} \end{figure} Through a combination of strong spin-orbit coupling and crystal field splitting, $j_{\mathrm{eff}} = \tfrac{1}{2}$ states provide a good basis for a low-energy description\cite{carter2012semimetal,kim2015surface} of orthorhombic perovskite iridates AIrO$_3$ (A an alkaline earth metal, space group \textit{Pbnm}). The unit cell of AIrO$_3$ contains four Ir atoms ($B,R,Y,G$) on which the $j_{\mathrm{eff}}$ states live, and are distinguished by distortion of the surrounding oxygen octahedra. The full tight-binding Hamiltonian, derived in Ref. [\onlinecite{carter2012semimetal}], is written in the basis \begin{equation}\label{eq:tb-basis} \psi = (c_{B\uparrow},c_{R\uparrow},c_{Y\uparrow},c_{G\uparrow},c_{B\downarrow},c_{R\downarrow},c_{Y\downarrow},c_{G\downarrow})^T, \end{equation} where $\uparrow,\downarrow$ refer to $j_{\mathrm{eff}}^z = \pm \tfrac{1}{2}$. It takes the form \begin{align}\label{eq:full-tb} \begin{split} \H_{\k} &= \hspace{2.5mm} \Re{\epsilon^p_{\k}}\tau_x + \Im{\epsilon^p_{\k}}\sigma_z\tau_y + \epsilon^z_{\k}\nu_x \\ &\hspace{3mm}+\Re{\epsilon^d_{\k}}\nu_x\tau_x + \Im{\epsilon^d_{\k}}\nu_y\tau_y\\ &\hspace{3mm}+[\Re{\epsilon^{po}_{\k}}\sigma_y + \Im{\epsilon^{po}_{\k}}\sigma_x]\nu_z\tau_y \\ &\hspace{3mm}+[\Re{\epsilon^{zo}_{\k}}\sigma_y + \Im{\epsilon^{zo}_{\k}}\sigma_x]\nu_y\tau_z \\ &\hspace{3mm}+[\Re{\epsilon^{do}_{\k}}\sigma_y + \Im{\epsilon^{do}_{\k}}\sigma_x]\nu_x\tau_y \\ &\hspace{3mm}+[\Re{\epsilon^{d1}_{\k}}\sigma_y + \Im{\epsilon^{d1}_{\k}}\sigma_x]\nu_y\tau_x, \end{split} \end{align} where three sets of Pauli matrices correspond to pseudospin ($\bm{\sigma}$), layer ($\bm{\nu}$), and in-plane sublattice ($\bm{\tau}$). Tight-binding parameters and the form of the functions $\epsilon_{\k}$ are listed in Appendix~\ref{app:tb-hamiltonian}. 
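As a quick consistency check (not from the paper), the operator structure of the tight-binding Hamiltonian can be assembled from Kronecker products $\sigma \otimes \nu \otimes \tau$. With arbitrary placeholder values for the structure factors $\epsilon_{\k}$ (the true momentum dependence lives in the appendix), every term is a real coefficient times a product of Hermitian Pauli factors on different tensor slots, so the $8\times 8$ Bloch Hamiltonian is Hermitian by construction:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def k3(sigma, nu, tau):
    """Kronecker product acting on pseudospin (x) layer (x) sublattice."""
    return np.kron(sigma, np.kron(nu, tau))

rng = np.random.default_rng(2)
# Placeholder complex structure factors epsilon_k (illustrative values only).
e_p, e_d, e_po, e_zo, e_do, e_d1 = rng.normal(size=6) + 1j * rng.normal(size=6)
e_z = rng.normal()

H = (e_p.real * k3(s0, s0, sx) + e_p.imag * k3(sz, s0, sy)
     + e_z * k3(s0, sx, s0)
     + e_d.real * k3(s0, sx, sx) + e_d.imag * k3(s0, sy, sy)
     + k3(e_po.real * sy + e_po.imag * sx, sz, sy)
     + k3(e_zo.real * sy + e_zo.imag * sx, sy, sz)
     + k3(e_do.real * sy + e_do.imag * sx, sx, sy)
     + k3(e_d1.real * sy + e_d1.imag * sx, sy, sx))

print(np.allclose(H, H.conj().T))   # -> True: the 8x8 Hamiltonian is Hermitian
```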
Next-nearest neighbour hopping ($\lambda_{\k}$) has been left out for simplicity, but its effect is discussed in Section \ref{sec:epi-polarization}. This model exhibits an elliptical one-dimensional (1D) FS called the nodal ring, consistent with \textit{ab initio} calculations\cite{carter2012semimetal} and protected by non-symmorphic symmetries\cite{fang2015tnlsm, chen2016tcsm}. This topological crystalline semimetal also exhibits\cite{chen2015topological} a pair of surface zero modes on the mirror-symmetric line $k_c = \pi$ for all side surfaces except those perpendicular to a weak index $\bm{M} = \hat{\bm{a}} + \hat{\bm{b}} \parallel \hat{\bm{y}}$. These modes are protected by a combination of mirror symmetry \begin{equation}\label{eq:mirror-symmetry} \Pi_m = i\sigma_z\nu_x \hspace{5mm}(k_a,k_b,k_c) \mapsto (k_a,k_b,-k_c), \end{equation} and an emergent chiral symmetry \begin{equation}\label{eq:chiral} \mathcal{C} = \sigma_z\nu_y\tau_z, \end{equation} which anti-commutes with the Hamiltonian Eq.~\ref{eq:full-tb} on the $k_c = \pi$ plane. Zero modes are indicated in Fig.~\ref{fig:SBZ} for a $\hat{\bm{b}}$ crystal termination with a thick line. Away from $k_c = \pi$ the surface modes disperse linearly, forming a line of 1D Dirac cones which are flat in the $k_a$ direction. Time-reversal symmetry is of the usual form \begin{equation} \mathcal{T} = i\sigma_y\mathcal{K} \qquad \k \mapsto -\k, \end{equation} where $\mathcal{K}$ denotes complex conjugation. While the dispersion of the surface states is well understood, little is known about their wave functions which must be known to describe their response to density perturbations. Therefore, in this section we describe their wave functions through two complementary approaches. First, we obtain the wave function analytically at $k_c = \pi$ by solving an open boundary problem with the bulk Hamiltonian. Second, we extend this solution away from $k_c = \pi$ by numerically diagonalizing the Hamiltonian in a slab geometry. 
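The operator algebra quoted above can be verified numerically (a small sketch, not from the paper; operators are represented as Kronecker products $\sigma \otimes \nu \otimes \tau$): the mirror $\Pi_m = i\sigma_z\nu_x$ squares to $-1$, as expected for spin-$\tfrac12$, the chiral operator squares to $+1$, and $\mathcal{T} = i\sigma_y\mathcal{K}$ squares to $-1$, giving the Kramers degeneracy of the $j_{\mathrm{eff}} = \tfrac{1}{2}$ states.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def k3(sigma, nu, tau):
    """Operator acting as sigma, nu, tau on pseudospin, layer, sublattice."""
    return np.kron(sigma, np.kron(nu, tau))

Pi_m = 1j * k3(sz, sx, s0)   # mirror  Pi_m = i sigma_z nu_x
C = k3(sz, sy, sz)           # chiral  C = sigma_z nu_y tau_z
I8 = np.eye(8)

print(np.allclose(Pi_m @ Pi_m, -I8))   # -> True: mirror squares to -1
print(np.allclose(C @ C, I8))          # -> True: chiral squares to +1

# Time reversal T = i sigma_y K is antiunitary with K complex conjugation,
# so T^2 = (i sigma_y)(i sigma_y)^* = -1 on the full 8-dimensional space.
T_unitary = 1j * k3(sy, s0, s0)
print(np.allclose(T_unitary @ T_unitary.conj(), -I8))   # -> True
```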
Finally, we show how one can form total and `relative' density combinations which will couple to certain phonon modes discussed in Section~\ref{sec:epi-polarization}. \subsection{\label{subsec:open-boundary-wavefunction} Open Boundary Wave Function at \texorpdfstring{$\bm{k_c = \pi}$}{$k_c = \pi$}} \begin{figure} \includegraphics[width=0.8\linewidth]{fig2.png} \caption{(colour online) Slab geometry of AIrO$_3$ with a $\hat{\bm{b}}$ crystal termination which is periodic in the $\hat{\bm{a}}$ and $\hat{\bm{c}}$ directions. Since $x_b < 0$ is the bulk, the red and green iridiums are closest to the edge at $x_b = 0$. } \label{fig:slab-geometry} \end{figure} For definiteness, we focus on the open boundary problem for a $\hat{\bm{b}}$ surface in the half space $x_b < 0$, with $x_b > 0$ the vacuum as shown in Fig.~\ref{fig:slab-geometry}. Periodic boundary conditions are imposed in the $\hat{\bm{a}},\hat{\bm{c}}$ directions, and with this crystal termination the red and green iridiums lie on the surface. Starting with the Hamiltonian Eq.~\ref{eq:full-tb}, we fix $k_c = \pi$ so that both mirror and chiral symmetries are present. On this mirror-symmetric plane, the mirror operator Eq.~\ref{eq:mirror-symmetry} simplifies to \begin{equation}\label{eq:mir-simp} \mathcal{M} = \sigma_z\nu_x, \end{equation} where the factor of $i$ has been dropped. To proceed, we introduce the following unitary rotation to simultaneously diagonalize the chiral and mirror operators. 
\begin{equation}\label{eq:unitary} U(k_c) = \exp\left(+i\tfrac{k_c}{4}\nu_z\right)\exp\left(+i\tfrac{\pi}{4}\tau_z\right) \end{equation} After rotating the basis with $U$, the chiral operator becomes $\mathcal{C} = \sigma_z\nu_x\tau_z = \mathcal{M}\tau_z$, and the rotated Hamiltonian on the $k_c = \pi$ plane is \begin{align} \begin{split} \H(k_a,k_b,\pi) &= \Re{\epsilon^p_{\k}}\tau_y - \Im{\epsilon^p_{\k}}\sigma_z\tau_x \\ &\hspace{5mm}-[\Re{\epsilon^{po}_{\k}}\sigma_y + \Im{\epsilon^{po}_{\k}}\sigma_x]\nu_z\tau_x \\ &\hspace{5mm}-[\Re{\epsilon^{do}_{\k}}\sigma_y + \Im{\epsilon^{do}_{\k}}\sigma_x]\nu_y\tau_x. \end{split} \end{align} For simplicity we have neglected $\epsilon_{\k}^d$, but it will be included in the final result. Translational symmetry is broken in the $\hat{\bm{b}}$ direction so $k_b$ is no longer a good quantum number. In the spirit of \citet{jackiw1976solitons} we expand the above Hamiltonian around $k_b = \pi$ (U-R-S-X plane containing the nodal ring) by introducing $p_b = k_b - \pi$, which we replace by the real-space derivative operator $-i\partial_b$. Performing this expansion to linear order in $p_b$ we obtain \begin{align} \begin{split} \H(k_a,p_b) &= \bigl[-2t_p\cos(\tfrac{1}{2}k_a)\tau_y - t_p'\cos(\tfrac{1}{2}k_a)\sigma_z\tau_x \\ &\hspace{5mm} + \tfrac{1}{2}(t_{1p}^o + t_{2p}^o)\cos(\tfrac{1}{2}k_a)(\sigma_y - \sigma_x)\nu_z\tau_x \\ &\hspace{5mm} + \tfrac{1}{2}t_d^o\sin(\tfrac{1}{2}k_a)(\sigma_y + \sigma_x)\nu_y\tau_x\bigr]p_b \\ &\hspace{2mm}- \bigl[(t_{1p}^o - t_{2p}^o)\sin(\tfrac{1}{2}k_a)(\sigma_y + \sigma_x)\nu_z\tau_x \\ &\hspace{5mm} + t_d^o\cos(\tfrac{1}{2}k_a)(\sigma_y - \sigma_x)\nu_y\tau_x\bigr]. \end{split} \end{align} Next, we block-diagonalize in mirror even and odd subspaces to reduce the above $8\times 8$ model into $4\times 4$ blocks.
This is done by introducing bonding and anti-bonding combinations which are eigenstates of $\nu_x$ \begin{equation}\label{eq:ba-states} \ket{b} = \frac{1}{\sqrt{2}}(\ket{B} + \ket{T}), \qquad \ket{a} = \frac{1}{\sqrt{2}}(\ket{B} - \ket{T}), \end{equation} where $\ket{B}$ is localized to the bottom layer (composed of $B,R$ iridiums) and $\ket{T}$ to the top layer (composed of $Y,G$ iridiums). It is then clear that $\{\ket{b\uparrow},\ket{a\downarrow}\}$ and $\{\ket{a\uparrow},\ket{b\downarrow}\}$ form bases for the four-dimensional $\mathcal{M} = \pm 1$ subspaces, respectively, and the chiral operator reduces to $\mathcal{C} = \tau_z$. We introduce a new set of Pauli matrices $\bm{\eta}$ which act on the appropriate mirror subspace through \begin{align} \begin{split} \mathcal{M} = +: &\qquad \ket{b\uparrow}\ \stackrel{\eta_x}{\longleftrightarrow}\ \ket{a\downarrow}, \\ \mathcal{M} = -: &\qquad \ket{a\uparrow}\ \stackrel{\eta_x}{\longleftrightarrow}\ \ket{b\downarrow}. \end{split} \end{align} Within $\mathcal{M} = \pm$ the Hamiltonian becomes \begin{align} \begin{split} \H^{\pm}(k_a,p_b) &= \bigl[-2t_1(k_a)\mathbbm{1}\tau_y - t_2(k_a)\eta_z\tau_x \\ &\hspace{5mm} + t_{3\pm}(k_a)(\eta_y - \eta_x)\tau_x\bigr]p_b \\ &\hspace{5mm} - t_{4\pm}(k_a)(\eta_y + \eta_x)\tau_x, \end{split} \end{align} where \begin{align}\label{eq:t-functions} \begin{split} t_1(k_a) &= t_p\cos(\tfrac{1}{2}k_a), \\ t_2(k_a) &= t_p'\cos(\tfrac{1}{2}k_a), \\ t_{3\pm}(k_a) &= \tfrac{1}{2}(t_{1p}^o + t_{2p}^o)\cos(\tfrac{1}{2}k_a) \mp \tfrac{1}{2}t_d^o\sin(\tfrac{1}{2}k_a), \\ t_{4\pm}(k_a) &= (t_{1p}^o - t_{2p}^o)\sin(\tfrac{1}{2}k_a) \pm t_d^o\cos(\tfrac{1}{2}k_a).
\end{split} \end{align} We then solve the Schr\"odinger equation to obtain the wave function for the zero modes in each mirror subspace \begin{equation} \H^{\pm}(k_a,p_b \mapsto -i\partial_b)\Psi_{\pm}(k_a,x_b) = 0\cdot\Psi_{\pm}(k_a,x_b), \end{equation} with the wave function ansatz \begin{equation}\label{eq:wf-ansatz} \Psi_{\pm}(k_a,x_b) \propto e^{\lambda^{\pm}x_b}(\eta_x + \eta_y)\tau_x\chi_{\pm}, \end{equation} where the factor $(\eta_x + \eta_y)\tau_x$ is chosen to simplify the following equations. The eigenequation for the four-component vector $\chi_{\pm}$ is \begin{equation}\label{eq:pre-eigenequation} \{[2t_1(\eta_x + \eta_y)\tau_z + t_2(\eta_x - \eta_y) - 2t_{3\pm}\eta_z]\lambda^{\pm} - 2t_{4\pm}\}\chi_{\pm } = 0. \end{equation} Since we have a chiral symmetry $\mathcal{C} = \tau_z$ at $k_c = \pi$, this can be further reduced into a $2\times 2$ problem by block-diagonalizing in the appropriate chiral subspace. The outermost iridiums on this surface are $R,G$, so we diagonalize in $\tau_z = -1$. If we had instead considered the half space $x_b > 0$, the outermost iridiums would be $B,Y$ and we would diagonalize in $\tau_z = +1$. As the wave function ansatz contains $\tau_x$, which flips the eigenvalue of $\tau_z$, we instead set $\tau_z = +1$ which yields \begin{equation}\label{eq:eigenequation} \{[(2t_1 + t_2)\eta_x + (2t_1 - t_2)\eta_y - 2t_{3\pm}\eta_z]\lambda^{\pm} - 2t_{4\pm}\mathbbm{1}\}\chi_{\pm}^{BY} = 0. \end{equation} This describes a two-component vector $\chi_{\pm}^{BY}$ which must be an eigenstate of \begin{equation}\label{eq:22-hamiltonian} h^{\pm} = \bm{d_{\pm}}\cdot\bm{\eta}, \qquad \bm{d_{\pm}} = (2t_1 + t_2, 2t_1 - t_2, -2t_{3\pm}), \end{equation} with eigenvalue $(-1)^j\norm{\bm{d_{\pm}}}$, where $j = 0,1$. 
The eigenstates of $h^{\pm}$ can be parameterized with the angles defined through \begin{equation}\label{eq:parameterization} \frac{\bm{d_{\pm}}}{\norm{\bm{d_{\pm}}}} = (\sin\theta_{\pm}\cos\varphi_{\pm},\sin\theta_{\pm}\sin\varphi_{\pm},\cos\theta_{\pm}). \end{equation} Solutions $\chi_{\pm,j}^{BY}$ corresponding to eigenvalue $(-1)^j\norm{\bm{d_{\pm}}}$ in the mirror even subspace are \begin{align} \begin{split}\label{eq:mp-chi} \chi_{+,0}^{BY} &= \cos(\tfrac{1}{2}\theta_+)\ket{b\uparrow} + e^{i\varphi_+}\sin(\tfrac{1}{2}\theta_+)\ket{a\downarrow},\\ \chi_{+,1}^{BY} &= \sin(\tfrac{1}{2}\theta_+)\ket{b\uparrow} - e^{i\varphi_+} \cos(\tfrac{1}{2}\theta_+)\ket{a\downarrow}, \end{split} \end{align} while those in the mirror odd subspace are \begin{align} \begin{split}\label{eq:mm-chi} \chi_{-,0}^{BY} &= \cos(\tfrac{1}{2}\theta_-)\ket{a\uparrow} + e^{i\varphi_-}\sin(\tfrac{1}{2}\theta_-)\ket{b\downarrow} \\ \chi_{-,1}^{BY} &= \sin(\tfrac{1}{2}\theta_-)\ket{a\uparrow} - e^{i\varphi_-} \cos(\tfrac{1}{2}\theta_-)\ket{b\downarrow}. \end{split} \end{align} \begin{figure} \includegraphics[width=\linewidth]{fig3.pdf} \caption{(colour online) Thick vertical lines indicate the position of the nodal ring, and the shaded area corresponds to the $k_a$ for which our model remains valid. (a) Value of $t_{4\pm}$ as a function of $k_a$ whose sign determines the solution. (b) Penetration depth $\ell^{\pm}$ in units of the $\hat{\bm{b}}$ lattice spacing for the physically relevant solution in each region outside the nodal ring. (c) $\sin(\tfrac{1}{2}\theta_{\pm})$ and (d) $\varphi_{\pm}$ parameters. The number of digits quoted reflects the agreement with angles averaged over $k_c$ obtained from the wave functions calculated in a finite slab geometry. 
All quantities are calculated with the tight-binding parameters for SrIrO$_3$ given in Table~\ref{table:tb-parameters}.} \label{fig:wf-parameters} \end{figure} Substituting these solutions into the eigenequation Eq.~\ref{eq:eigenequation} we find the exponential decay parameter $\lambda^{\pm}_j$ \begin{equation}\label{eq:decay-constant} \lambda^{\pm}_j(k_a) = \frac{2(-1)^jt_{4\pm}(k_a)}{\norm{\bm{d_{\pm}}(k_a)}}. \end{equation} For each $k_a$ the physically relevant solution will be $\chi_{\pm,j}^{BY}$ with $\lambda_j^{\pm} > 0$ because the wave function should decay into the bulk as $x_b \rightarrow -\infty$. The value of $j$ is determined by the sign of $t_{4\pm}$ which is plotted in Fig.~\ref{fig:wf-parameters}(a). For $k_a < 0$, $t_{4\pm} > 0$ and the solutions are $\chi_{\pm,0}^{BY}$, while for $k_a > 0$, $t_{4\pm} < 0$ so the solutions are $\chi_{\pm,1}^{BY}$. The exponential decay parameters $\lambda_1^{\pm}(+k_a)$ are equal to $\lambda_0^{\mp}(-k_a)$, so we will simply write $\lambda_j^{\pm} = \lambda^{\pm}$. The decay parameter sets the penetration depth $\ell^{\pm} = 1/\lambda^{\pm}$ of the wave functions into the bulk, which is plotted in Fig.~\ref{fig:wf-parameters}(b). To obtain the final form of the wave function we must perform the rotation $(\eta_x + \eta_y)\tau_x$ appearing in the wave function ansatz Eq.~\ref{eq:wf-ansatz}. The first $\tau_x$ operation simply changes the composition from sublattice $B,Y$ to $R,G$. For $k_a < 0$, $\chi_{+,0}^{RG}$ is rotated to \begin{align} \begin{split} &\sim (1 - i)\sin(\tfrac{1}{2}\theta_+)\ket{b\uparrow} + (1 + i)e^{-i\varphi_+}\cos(\tfrac{1}{2}\theta_+)\ket{a\downarrow} \\ &\sim \sin(\tfrac{1}{2}\theta_+)\ket{b\uparrow} + e^{+i(\tfrac{\pi}{2} - \varphi_+)}\cos(\tfrac{1}{2}\theta_+)\ket{a\downarrow}, \end{split} \end{align} with a similar result for $\chi_{-,0}^{RG}$. 
For $k_a > 0$, $\chi_{+,1}^{RG}$ is rotated to \begin{align} \begin{split} &\sim (1 - i)\cos(\tfrac{1}{2}\theta_+)\ket{b\uparrow} - (1 + i)e^{-i\varphi_+}\sin(\tfrac{1}{2}\theta_+)\ket{a\downarrow} \\ &\sim \cos(\tfrac{1}{2}\theta_+)\ket{b\uparrow} + e^{-i(\tfrac{\pi}{2} + \varphi_+)}\sin(\tfrac{1}{2}\theta_+)\ket{a\downarrow}, \end{split} \end{align} with a similar result for $\chi_{-,1}^{RG}$. Close to $k_a = 0$, $t_{4\pm}$ changes sign when \begin{equation}\label{eq:nr-boundary} |k_a^*| \approx 2\tan(\tfrac{1}{2}|k_a^*|) = \frac{2t_d^o}{|t_{1p}^o - t_{2p}^o|}, \end{equation} which corresponds to the semi-major axis of the bulk nodal ring ellipse\cite{rhim2015landau}. Across the nodal ring one of $\lambda^{\pm}$ vanishes, signaling a sharp change in the wave function due to closing of the bulk gap. A similar sharp change must happen across the zone boundary $k_a = \pi$ which separates the $k_a > 0$ and $k_a < 0$ solutions. We therefore focus on $k_a$ outside the nodal ring and away from the zone boundary; the shaded region in Fig.~\ref{fig:wf-parameters}. Thus, for $k_a > 0$ the wave functions describing the zero modes are \begin{align} \begin{split}\label{eq:wf-ka-p} \Psi_+ &= e^{\lambda^+x_b}[\cos(\tfrac{1}{2}\theta_+)\ket{b\uparrow} + e^{-i(\tfrac{\pi}{2} + \varphi_+)}\sin(\tfrac{1}{2}\theta_+)\ket{a\downarrow}] \\ \Psi_- &= e^{\lambda^-x_b}[\cos(\tfrac{1}{2}\theta_-)\ket{a\uparrow} + e^{-i(\tfrac{\pi}{2} + \varphi_-)}\sin(\tfrac{1}{2}\theta_-)\ket{b\downarrow}], \end{split} \end{align} where $\lambda^{\pm}$, $\theta_{\pm}$, and $\varphi_{\pm}$ are all functions of $k_a$ and are plotted in Fig.~\ref{fig:wf-parameters}(b-d). Solutions for $k_a < 0$ are related to those above by time-reversal symmetry, for $\Psi_{\pm}(k_a < 0)$ is the time-reversal partner of $\Psi_{\mp}(k_a > 0)$. The states $\ket{B},\ket{T}$ appearing in $\ket{b},\ket{a}$ (Eq.~\ref{eq:ba-states}) are understood to be iridium $R,G$ states, respectively.
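The structure of these solutions is straightforward to verify numerically. The sketch below (with placeholder hopping values, not the SrIrO$_3$ parameters of Table~\ref{table:tb-parameters}) builds $\bm{d}_+$ from Eq.~\ref{eq:t-functions}, checks that the spinor of Eq.~\ref{eq:mp-chi} is an eigenstate of $h^+ = \bm{d}_+\cdot\bm{\eta}$, and evaluates the decay constant of Eq.~\ref{eq:decay-constant}:

```python
import numpy as np

# Pauli matrices acting on the eta pseudospin of Eq. (eq:22-hamiltonian)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def t_funcs(ka, tp, tpp, t1po, t2po, tdo, m):
    """t_1..t_4 of Eq. (eq:t-functions); m = +1 (-1) for the mirror even (odd) block."""
    c, s = np.cos(ka / 2), np.sin(ka / 2)
    return (tp * c,
            tpp * c,
            0.5 * (t1po + t2po) * c - m * 0.5 * tdo * s,
            (t1po - t2po) * s + m * tdo * c)

# placeholder hoppings, chosen only so that t_4 > 0 for this k_a < 0
t1, t2, t3, t4 = t_funcs(-0.5 * np.pi, tp=0.6, tpp=0.3,
                         t1po=0.05, t2po=0.25, tdo=0.03, m=+1)

d = np.array([2 * t1 + t2, 2 * t1 - t2, -2 * t3])   # d_+ of Eq. (eq:22-hamiltonian)
h = d[0] * sx + d[1] * sy + d[2] * sz
dnorm = np.linalg.norm(d)

# Bloch-sphere angles of Eq. (eq:parameterization)
theta = np.arccos(d[2] / dnorm)
phi = np.arctan2(d[1], d[0])

# chi_{+,0} of Eq. (eq:mp-chi): eigenstate of h with eigenvalue +|d|
chi0 = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
assert np.allclose(h @ chi0, dnorm * chi0)

# decay constant of Eq. (eq:decay-constant); lam > 0 is normalizable for x_b < 0
lam = 2 * t4 / dnorm
assert lam > 0
```

Sweeping $k_a$ with the actual Table~\ref{table:tb-parameters} values should reproduce the curves of Fig.~\ref{fig:wf-parameters}(b-d).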
In the region of interest away from the nodal ring and zone boundary, to good approximation the angles $\theta_{\pm},\varphi_{\pm}$ can be taken as constants $\theta,\varphi$ with values given in Fig.~\ref{fig:wf-parameters}(c-d). This is because away from $k_a = \pi$, the relevant functions in Eq.~\ref{eq:t-functions} are all dominated by $\cos(\tfrac{1}{2}k_a)$. Owing to the smallness of $t_d^o$ and $t_d'$, which arise from the slight rotation and tilting of the oxygen octahedra, this common cosine factor cancels in the ratios defining the angles through Eq.~\ref{eq:parameterization}, leaving them nearly constant. We find that $\theta_{\pm}(k_a),\varphi_{\pm}(k_a)$ remain within 10\% of their value at $k_a = 0$ up to $|k_a| \approx 0.88\pi$ and $|k_a| \approx 0.93\pi$, respectively. This result also holds for an $\hat{\bm{a}}$ termination in the half space $x_a > 0$ if we simply switch the labels $a\leftrightarrow b$. The solution for $x_b > 0$ ($x_a < 0$) will involve diagonalization in the $\tau_z = +1$ subspace, since the outermost iridiums are $B,Y$, and the solutions for $k_a < 0$ and $k_a > 0$ will be switched with $\ket{B},\ket{T}$ as $B,Y$ iridium states. As a final note, the small $\epsilon_{\k}^d$ term we have ignored contributes \begin{equation} \Im{\epsilon_{\k}^d}\nu_x\tau_x = -t_d'\sin(\tfrac{1}{2}k_a)p_b\nu_x\tau_x, \end{equation} which becomes \begin{equation} \mp t_d'\sin(\tfrac{1}{2}k_a)p_b\eta_z\tau_x \end{equation} in the mirror even and odd subspaces. This serves to modify the function $t_2$ \begin{equation} t_2(k_a) \mapsto t_{2\pm}(k_a) = t_p'\cos(\tfrac{1}{2}k_a) \pm t_d'\sin(\tfrac{1}{2}k_a), \end{equation} which slightly changes the wave function parameters $\lambda^{\pm}, \theta_{\pm},\varphi_{\pm}$, but cannot change the form of the solutions.
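The near-constancy of the angles can be checked directly: when $t_d^o$ is small, every component of $\bm{d}_{\pm}$ is dominated by the common factor $\cos(\tfrac{1}{2}k_a)$, which cancels in the direction $\bm{d}_{\pm}/\norm{\bm{d}_{\pm}}$. A minimal check with placeholder hoppings (again, not the Table~\ref{table:tb-parameters} values):

```python
import numpy as np

def d_vec(ka, tp=0.6, tpp=0.3, t1po=0.05, t2po=0.25, tdo=0.03, m=+1):
    """d_+ of Eq. (eq:22-hamiltonian) built from Eq. (eq:t-functions)."""
    c, s = np.cos(ka / 2), np.sin(ka / 2)
    t1, t2 = tp * c, tpp * c
    t3 = 0.5 * (t1po + t2po) * c - m * 0.5 * tdo * s
    return np.array([2 * t1 + t2, 2 * t1 - t2, -2 * t3])

# sweep the window away from the nodal ring and zone boundary
kas = np.linspace(-0.8 * np.pi, -0.1 * np.pi, 200)
thetas = np.array([np.arccos(d[2] / np.linalg.norm(d))
                   for d in map(d_vec, kas)])

# with t_d^o much smaller than the other hoppings, theta barely varies
assert np.ptp(thetas) / thetas.mean() < 0.10
```

The same statement holds for $\varphi_{\pm}$, whose defining ratio $d_y/d_x$ is $k_a$-independent at $t_d^o = 0$.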
\subsection{\label{subsec:slab-calculations} Extending the Solution with Slab Calculations} \begin{figure} \includegraphics[width=0.7\linewidth]{fig4.pdf} \caption{Dispersion of surface zero modes away from $k_c = \pi$ at $k_a = 0.5\pi$, obtained from exact diagonalization of the tight-binding model in a $\hat{\bm{b}}$ slab geometry with $N = 250$ layers. Blue and red colours represent mirror eigenvalues $\mathcal{M} = \pm 1$, respectively.} \label{fig:slab-dispersion} \end{figure} To extend the analytic solution for $\Psi_{\pm}$ away from $k_c = \pi$, we numerically diagonalize the tight-binding Hamiltonian Eq.~\ref{eq:full-tb} in a $\hat{\bm{b}}$ slab geometry to obtain the wave functions localized to the appropriate edge. At $k_c = \pi$, we find $\braket{\tau_z} = -1$ at the edge with outer $R,G$ iridiums, which is the appropriate chiral subspace for that surface. For $k_c \ne \pi$ the Hamiltonian no longer has a chiral symmetry, and we find $\braket{\tau_z} \ne -1$ due to mixing with the other chiral subspace, $\tau_z = +1$. However, this deviation does not affect the main conclusion as discussed below. We find that the expectation of the mirror operator $\braket{\mathcal{M}} = \braket{\sigma_z\nu_x}$ in the two branches remains $\pm 1$ away from $k_c = \pi$. For $k_a < 0$ the left-moving branch has $\braket{\mathcal{M}} = -1$ and the right-moving branch $\braket{\mathcal{M}} = +1$, with the opposite holding for $k_a > 0$ as shown in Fig.~\ref{fig:slab-dispersion} for $k_a = 0.5\pi$. Examining the wave function components, we find them to be the appropriate combinations of bonding and anti-bonding states. Finally, the weights of $\ket{b\sigma}$ and $\ket{a\sigma}$ for $k_c \ne \pi$ are found to be in excellent agreement with the analytic expressions Eq.~\ref{eq:wf-ka-p} at $k_c = \pi$, which are plotted in Fig.~\ref{fig:wf-parameters}.
In particular, for $|k_c - \pi| \le 0.3\pi$ the angles $\theta_{\pm},\varphi_{\pm}$ were found to remain within 5\%, 15\% of their value at $k_c = \pi$, respectively, in the $k_a$ region of validity discussed below. Therefore, our numerical results imply that the wave functions in Eq.~\ref{eq:wf-ka-p} describe mirror even and odd branches away from $k_c = \pi$. The dispersion of the branches with mirror eigenvalue $m = \pm$ are \begin{equation}\label{eq:ss-dispersion} \varepsilon_{\k m} = m\cdot v_F\cdot \mathrm{sgn}(-k_a)\cdot(k_c - \pi), \end{equation} where $v_F$ is the velocity of the surface states. From the slab calculation we estimate this velocity to be $v_F \approx (c/\mathrm{\AA})(2.0 \times 10^4\ \mathrm{m/s}),$ where $c$ is the $\hat{\bm{c}}$ lattice spacing. For SrIrO$_3$ grown on a SrTiO$_3$ substrate, $c \approx 7.97\ \mathrm{\AA}$\cite{kim2015surface} which yields $v_F \approx 1.6\times 10^5\ \mathrm{m/s}$. To determine the values of momentum for which we have localized surface states, one must consider the proximity of bulk states. The exponential decay parameter given by Eq.~\ref{eq:decay-constant} determines the penetration depth $\ell^{\pm} = 1/\lambda^{\pm}$ of the wave function into the bulk, which should be compared to the thickness $L$ of the finite slab. In our numerical study we consider $L = 250$ in units of the $\hat{\bm{b}}$ lattice spacing, which satisfies $L \gg \ell^{\pm}(k_a)$ for $k_a$ away from the nodal ring. As $|k_a|$ approaches $|k_a^*|$, the penetration depth becomes large due to mixing with extended bulk states. For each $k_a$ the minimum of the bulk bands determines the high-energy (and momentum) cutoff $\Lambda(k_a)$ below which the low energy theory is valid. The cutoff vanishes at the nodal ring $|k_a^*|$ and becomes small near the zone boundary $|k_a| = \pi$. 
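The slab procedure itself (diagonalize an open-boundary tight-binding matrix, pick out the near-zero modes, and compare their penetration depth to the slab thickness) can be illustrated on a minimal toy model. The sketch below uses an SSH chain as a stand-in for the full 8-band Hamiltonian, an assumption made purely for illustration; the edge-state weight decays geometrically into the bulk, just as $e^{\lambda^{\pm}x_b}$ does here:

```python
import numpy as np

t_intra, t_inter = 0.4, 1.0   # topological phase of the SSH toy: |t_intra| < |t_inter|
N = 250                        # layers, matching the slab thickness used in the text

# open-boundary Hamiltonian on 2N sites (A, B alternating)
H = np.zeros((2 * N, 2 * N))
for n in range(N):
    H[2 * n, 2 * n + 1] = H[2 * n + 1, 2 * n] = t_intra
    if n < N - 1:
        H[2 * n + 1, 2 * n + 2] = H[2 * n + 2, 2 * n + 1] = t_inter

E, V = np.linalg.eigh(H)
edge = np.argsort(np.abs(E))[:2]           # the two mid-gap edge modes
assert np.all(np.abs(E[edge]) < 1e-8)

# combined edge-mode weight on the A sublattice near the left edge
w = (np.abs(V[:, edge[0]]) + np.abs(V[:, edge[1]]))[::2]
ratio = w[5] / w[4]                        # geometric decay per cell
assert np.isclose(ratio, t_intra / t_inter, rtol=1e-3)

ell = 1 / np.log(t_inter / t_intra)        # penetration depth in cells
assert ell < N / 10                        # well localized: L >> ell
```

The check $L \gg \ell$ at the end is the same criterion used above to decide where the slab spectrum contains genuinely localized surface states.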
In order to have a constant cutoff $\Lambda_0$, the $k_a$ region is defined as the set of momenta for which the cutoff is at least $\Lambda_0$: $\{k_a\ |\ \Lambda(k_a) \ge \Lambda_0\}$. This constrains $k_c$ to be in $[-\Lambda_0/v_F,+\Lambda_0/v_F]$, which determines the range of momentum used in the electron-phonon interaction, shown as the $x$-axis in Fig.~\ref{fig:ph-excitations}(a-b). With tight-binding parameters for SrIrO$_3$, and $\Lambda_0 \approx 100\ \mathrm{meV}$ close to its maximum value, the $|k_a|$ region is approximately $[0.4\pi,0.75\pi]$. However, in the real material this region is expected to be larger due to the small size of the nodal ring. Note that the main conclusion, i.e.\ the qualitative difference in the damping of mirror odd and even phonon modes, is independent of this cutoff. \subsection{\label{subsec:densities} Total and Relative Density} The wave functions obtained for $k_a > 0$ and $k_a < 0$ in each mirror subspace Eq.~\ref{eq:mp-chi},~\ref{eq:mm-chi} are linearly independent solutions of the same $2\times 2$ Hamiltonian Eq.~\ref{eq:22-hamiltonian}. This means the wave function expressions in Eq.~\ref{eq:wf-ka-p} can be inverted to write bonding and anti-bonding states in terms of the surface wave functions. It is easily shown that \begin{align} \begin{split} b_{\uparrow}^{\dag}b_{\uparrow} + a_{\downarrow}^{\dag}a_{\downarrow} &= e^{-2\lambda^+x_b}\Psi_+^{\dag}(+k_a)\Psi_+(+k_a) + (-k_a) \\ a_{\uparrow}^{\dag}a_{\uparrow} + b_{\downarrow}^{\dag}b_{\downarrow} &= e^{-2\lambda^-x_b}\Psi_-^{\dag}(+k_a)\Psi_-(+k_a) + (-k_a), \end{split} \end{align} where $b_{\sigma},a_{\sigma}$ are electron operators for the states $\ket{b\sigma},\ket{a\sigma}$, and $\Psi_{\pm}$ for the surface states. In the above $k_a$ is assumed to be positive, and $(-k_a)$ represents the contribution from the $-k_a$ region.
Therefore the total density \begin{equation} \rho_+ = \sum_{\sigma}(b_{\sigma}^{\dag}b_{\sigma} + a_{\sigma}^{\dag}a_{\sigma}) \end{equation} can be written in terms of $\Psi_{\pm}^{\dag}\Psi_{\pm}$ connecting states with the same mirror eigenvalue. This can be simplified close to $k_c = \pi$ \begin{equation} \rho_+ = \sum_{\sigma}(c_{R\sigma}^{\dag}c_{R\sigma}^{\phantom{\dag}} + c_{G\sigma}^{\dag}c_{G\sigma}^{\phantom{\dag}}),\nonumber \end{equation} where the wave function is dominantly supported on $R,G$ iridiums, and $\braket{\tau_z} \approx -1$. Clearly, the total density is even under mirror reflection. Similarly, with \begin{align} \begin{split} b_{\uparrow}^{\dag}a_{\uparrow} + a_{\downarrow}^{\dag}b_{\downarrow} &= e^{-(\lambda^+ + \lambda^-)x_b}\Psi_+^{\dag}(+k_a)\Psi_-(+k_a) + (-k_a) \\ a_{\uparrow}^{\dag}b_{\uparrow} + b_{\downarrow}^{\dag}a_{\downarrow} &= e^{-(\lambda^+ + \lambda^-)x_b}\Psi_-^{\dag}(+k_a)\Psi_+(+k_a) + (-k_a) \end{split} \end{align} the relative density between different layers \begin{equation} \rho_- = \sum_{\sigma}(b_{\sigma}^{\dag}a_{\sigma} + a_{\sigma}^{\dag}b_{\sigma}) \end{equation} can be written in terms of $\Psi_{\pm}^{\dag}\Psi_{\mp}$ connecting states with different mirror eigenvalues, and is therefore odd under mirror reflection. It can also be simplified as \begin{equation} \rho_- = \sum_{\sigma}(c_{R\sigma}^{\dag}c_{R\sigma}^{\phantom{\dag}} - c_{G\sigma}^{\dag}c_{G\sigma}^{\phantom{\dag}}), \nonumber \end{equation} close to $k_c = \pi$. The momenta $k',k$ appearing in $\Psi^{\dag}(k')\Psi(k)$ need not be the same because the angles $\theta,\varphi$ in the wave function are approximately constant. So long as $k',k$ lie in the same $k_a$ region with small difference in $k_c$, the above relations hold. As shown in the next section, mirror even phonons can only couple to the total density while mirror odd phonons can only couple to the relative density.
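The inversion used above rests on a simple fact: within one mirror sector, the $+k_a$ and $-k_a$ surface spinors form an orthonormal basis of the two-dimensional $\{\ket{b\uparrow},\ket{a\downarrow}\}$ space, so their projectors resolve the identity. A short check with placeholder values for the (nearly constant) angles:

```python
import numpy as np

theta, phi = 0.9, 0.4    # placeholder angles, not the Table values

# mirror-even spinor at +k_a (Eq. eq:wf-ka-p) and its -k_a partner
psi_p = np.array([np.cos(theta / 2),
                  np.exp(-1j * (np.pi / 2 + phi)) * np.sin(theta / 2)])
psi_m = np.array([np.sin(theta / 2),
                  np.exp(1j * (np.pi / 2 - phi)) * np.cos(theta / 2)])

# orthonormal pair
assert np.isclose(np.vdot(psi_p, psi_m), 0)

# completeness: projectors sum to the identity, which is precisely the
# statement b†b + a†a = Ψ†Ψ(+k_a) + Ψ†Ψ(-k_a)
P = np.outer(psi_p, psi_p.conj()) + np.outer(psi_m, psi_m.conj())
assert np.allclose(P, np.eye(2))
```

The analogous computation in the mirror-odd sector gives the second of the two density relations.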
\section{\label{sec:epi-polarization} Electron-Phonon Coupling and Density Response} In this section we focus on optical phonons in \textit{Pbnm}-AIrO$_3$ with momentum $\bm{q}$ close to the zone center. Before considering the effect of terminating the crystal, we review the symmetry of bulk optical phonons. The zone center phonons of the isostructural \textit{Pbnm}-SrHfO$_3$ have been classified according to irreducible representations of the point group $D_{2h}$\cite{fatteley1972infrared, park1976raman, vali2009lattice} \begin{align} \begin{split} &\Gamma_O: \resizebox{220pt}{!}{$7A_g\oplus 5B_{1g}\oplus 7B_{2g}\oplus 5B_{3g}\oplus 8A_u\oplus 9B_{1u}\oplus 7B_{2u}\oplus 9B_{3u}$} \\ &\Gamma_A: \resizebox{67.75pt}{!}{$B_{1u}\oplus B_{2u}\oplus B_{3u}$}, \end{split} \end{align} where $\Gamma_O,\Gamma_A$ refer to optical and acoustic modes, respectively. Of the optical modes, 25 are infrared (IR) active ($B_{1u},B_{2u},B_{3u}$) and 24 are Raman active ($A_g, B_{1g},B_{2g},B_{3g}$). The IR active modes $B_{1u},B_{3u},B_{2u}$ are polarized along the $\hat{\bm{a}}, \hat{\bm{b}}, \hat{\bm{c}}$ directions, respectively. \begin{table} \caption{\label{table:c1h-chartable} Character table for the point group $C_{1h}\ (m)$.\cite{dresselhaus}} \begin{tabular}{c | c | c c} \toprule \multicolumn{2}{c|}{$C_{1h}\ (m)$}& $E$ & $\sigma_c$ \\ \hline $a,b$ & $A'$ & $1$ & $+1$ \\ $c$ & $A''$ & $1$ & $-1$ \\ \toprule \end{tabular} \end{table} For a crystal terminated in the $\hat{\bm{b}}$ direction with $N$ layers, we perform a straightforward classification of the zone center phonons. For simplicity we focus on modes involving displacements between iridium and oxygen atoms. In this slab configuration only the mirror symmetry remains, which leads to the point group $C_{1h}\ (m)$ with character table shown in Table~\ref{table:c1h-chartable}. Following the notation of \citet{dresselhaus}, the vector representation is given by \begin{equation} \Gamma_{\mathrm{vec}} = 2A'\oplus A''. 
\end{equation} Under mirror reflection all the iridiums are switched ($B\leftrightarrow Y$, $R\leftrightarrow G$), so the character of the mirror operation in the equivalence representation is zero. The character of the identity operation is simply $4N$, so the equivalence representation can be easily decomposed as \begin{equation} \Gamma_{\mathrm{equiv}} = 2NA'\oplus 2NA''. \end{equation} Therefore, at the zone center the representation of these phonon modes is \begin{equation} \Gamma_{\mathrm{phon}} = \Gamma_{\mathrm{vec}}\otimes \Gamma_{\mathrm{equiv}} = 6NA'\oplus 6NA''. \end{equation} Each mode has a simple interpretation because $\chi_{A'}(\sigma_c) = +1$ and $\chi_{A''}(\sigma_c) = -1$; they have definite parity under mirror reflection. This analysis holds for finite $q_a$ as the mirror symmetry is preserved. In terms of atomic displacements, the meaning of mirror even and odd phonon modes is that the displacement $\bm{\xi}^{\alpha}$ of ion $\alpha$ is mapped to $\pm \bm{\xi}^{\alpha'}$ of ion $\alpha'$ under reflection. This is illustrated in Fig.~\ref{fig:phonons} for modes involving relative Ir and O displacements. \begin{figure} \includegraphics[width=0.71\linewidth]{fig5.png} \caption{(colour online) Schematic long-wavelength mirror (a) even and (b) odd optical phonon modes on a $\hat{\bm{b}}$ surface. 
Arrows represent the relative displacement between iridium and oxygen atoms.} \label{fig:phonons} \end{figure} Restricting ourselves to optical phonons of definite mirror symmetry, we will broadly label the modes by $\lambda = \pm$ with field operator $A_{\bm{q}\lambda} = a_{\bm{q}\lambda} + a_{-\bm{q}\lambda}^{\dag}$, displacements $\bm{\xi}_{\bm{q}\lambda}^{\alpha}$, and unperturbed Matsubara Green's function\cite{mahan} \begin{equation}\label{eq:unperturbed-propagator} \mathcal{D}_{\lambda}^0(\bm{q},iq_n) = \frac{2\omega_{\bm{q}\lambda}^0}{(iq_n)^2 - (\omega_{\bm{q}\lambda}^0)^2}, \qquad q_n = \frac{2\pi n}{\beta}, \end{equation} where $\omega_{\bm{q}\lambda}^0$ is the dispersion, and $q_n$ the Matsubara frequency. The aim of this section is to show how certain optical phonons with $\bm{q}$ along $q_a$ close to the zone center are damped through their interaction with electronic surface states. First, we examine how mirror even and odd modes couple to the surface electrons through microscopic and symmetry considerations. We then discuss a certain type of surface localized phonon, and how it couples differently to electrons than bulk phonons. Finally, we calculate the imaginary part of the first-order phonon self-energy $\Pi_{\lambda}^0$. \subsection{\label{subsec:ep-vertices} Symmetry-Allowed Electron-Phonon Vertices} For electrons tightly-bound to Ir sites, their interaction with bulk phonon modes involving Ir displacements takes the general form\cite{mahan} \begin{equation}\label{eq:frohlich-coupling} \mathcal{H}_{ep} = \sum_{\bm{q}\lambda\alpha}\underbrace{\left(-i\sqrt{\frac{\hbar}{2M_{\alpha}\omega_{\bm{q}\lambda}^0}}(\hat{\bm{\xi}}^{\alpha}_{q\lambda}\cdot\bm{q})V_{\alpha}(\bm{q})\right)}_{g_{\bm{q}\lambda\alpha}}A_{\bm{q}\lambda}\rho_{\bm{q}\alpha}, \end{equation} where $\alpha\in\{B,R,Y,G\}$, $V_{\alpha}$ is the atomic potential, and $\rho_{\alpha}$ is the density of electrons on Ir site $\alpha$. 
Longitudinal optical phonons have displacements proportional to $\hat{\bm{q}}$ and nearly flat dispersion near the zone center, so the electron-phonon coupling $g_{\bm{q}\lambda\alpha}$ scales like \begin{equation} g_{\bm{q}\lambda\alpha} \propto (\hat{\bm{q}}\cdot\bm{q})\frac{1}{q^2} = \frac{1}{q}, \end{equation} which is the well-known \citet{frohlich1954} polar coupling in $d = 3$ dimensions with long-range Coulomb potential $V_{\alpha}(\bm{q}) \propto q^{-2}$. As shown in Section~\ref{sec:surface-states}, the wave function describing surface electrons is dominantly supported on the $R,G$ iridiums near $k_c = \pi$, so for simplicity we neglect $B,Y$ sites. Along $q_a$ near the zone center, the phonons have definite mirror symmetry, and the displacements of the iridiums satisfy $\bm{\xi}_{\pm}^{R} = \pm \bm{\xi}^{G}_{\pm}$. This means the coupling constants satisfy $g_{\pm R} = \pm g_{\pm G}$. Taking their common value $g_{\pm}$, the surface electron-phonon interaction takes the form \begin{align}\label{eq:micro-ep-coupling} \begin{split} \mathcal{H}_{ep} &= \sum_{\lambda = \pm }g_{\lambda}A_{\lambda}\sum_{\sigma}(c_{R\sigma}^{\dag}c_{R\sigma}^{\phantom{\dag}} + \lambda c_{G\sigma}^{\dag}c_{G\sigma}^{\phantom{\dag}} ) \\ &= \sum_{\lambda = \pm }g_{\lambda}A_{\lambda}\rho_{\lambda}, \end{split} \end{align} in which the total and relative densities of Section~\ref{subsec:densities} appear naturally. In a $\hat{\bm{b}}$ slab geometry, the electron-phonon coupling Eq.~\ref{eq:frohlich-coupling} must be modified as $q_b$ is no longer a good quantum number. However, properties of the displacements $\bm{\xi}^{\alpha}$ under mirror reflection lead to the same qualitative result. A modified electron-phonon coupling is discussed in the next section. The form of the coupling Eq.~\ref{eq:micro-ep-coupling} can also be understood through symmetry considerations. 
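The mirror parities of the total and relative densities can be made explicit in the $\{R,G\}$ on-site basis (spin suppressed), where reflection simply swaps the two outermost iridiums:

```python
import numpy as np

nR, nG = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # site densities on R, G
rho_even, rho_odd = nR + nG, nR - nG                # total / relative density
M = np.array([[0.0, 1.0], [1.0, 0.0]])              # mirror: R <-> G

assert np.allclose(M @ rho_even @ M, rho_even)      # rho_+ is mirror even
assert np.allclose(M @ rho_odd @ M, -rho_odd)       # rho_- is mirror odd
```
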
Under mirror reflection the phonon operators transform as $\mathcal{M} A_{\pm}\mathcal{M}^{\dag} = \pm A_{\pm}$, and the densities as $\mathcal{M}\rho_{\pm}\mathcal{M}^{\dag} = \pm \rho_{\pm}$. Therefore, the only mirror-invariant electron-phonon vertices we can write are \begin{equation}\label{eq:sym-ep-coupling} \H_{ep} \propto g_+A_+\rho_+ + g_-A_-\rho_-. \end{equation} Fig.~\ref{fig:phonon-RPA} shows Dyson's equation for the phonon propagators, where these vertices appear in the first-order self-energy. As shown in Section~\ref{subsec:densities}, the even modes $A_+$ coupling to the total density $\rho_+$ can only excite electron-hole pairs lying in the same mirror branch, while odd modes $A_-$ coupling to the relative density $\rho_-$ can excite pairs lying in different mirror branches. The form of this interaction is based on mirror symmetry alone, and is independent of the sublattice composition $\braket{\tau_z}$. Note that the form factor is independent of momentum, unlike in carbon nanotubes\cite{ishikawa2006} where the low-energy Dirac physics leads to a spinor that varies dramatically with momentum. This is because the parameters $\theta_{\pm},\varphi_{\pm}$ describing the surface state wave functions remain nearly constant with momentum due to small rotation and tilting of oxygen octahedra. Consequently, the form factors remain nearly constant, with modulus close to unity. \subsection{\label{subsec:fk-modes} Fuchs-Kliewer Modes} When the crystal is terminated, there will generally be a large number of vibrational modes with frequencies lying between the bulk values and wavelengths quantized in the normal direction. Fuchs and Kliewer\cite{fuchs1965optical,kliewer1966optical1, kliewer1966optical2} studied long wavelength modes in a polar material, and found that optical modes localized to the surface can exist in addition to extended bulk-like modes.
Out of phase motion between oppositely charged ions sets up a macroscopic polarization field, with associated electric and electric displacement fields, which are described by Maxwell's equations. By imposing boundary conditions for the field inside and outside the material, an exponentially localized electric field exists provided that \begin{equation} \frac{1}{\epsilon(\omega)}\left(q^2 - \epsilon(\omega)\frac{\omega^2}{c^2}\right)^{1/2} = -\left(q^2 - \frac{\omega^2}{c^2}\right)^{1/2}, \end{equation} where $q$ is the wavevector of the field parallel to the surface, $\omega$ is the frequency of the field, and $\epsilon(\omega)$ is the dielectric function of the material. A surface optical (SO) mode exists when $\epsilon(\omega) < 0$. Using a simple independent oscillator model of the dielectric function \begin{equation} \epsilon(\omega) = \epsilon_{\infty} + \frac{\epsilon_0 - \epsilon_{\infty}}{1 - (\omega/\omega_{TO})^2}, \end{equation} where $\omega_{TO}$ is the transverse optical (TO) phonon frequency, and $\epsilon_0,\epsilon_{\infty}$ are the low- and high-frequency dielectric constants, an SO mode exists when\cite{kliewer1966optical1,wang1972electron,devreese2013elementary} \begin{equation} \omega_{TO} < \omega < \omega_{LO} \quad \mathrm{and} \quad q > \omega_{TO}/c. \end{equation} Therefore, in a polar crystal with LO-TO splitting we expect an SO mode to exist between the IR active TO and LO mode frequencies at long wavelengths. As with the bulk modes, the macroscopic polarization produced by the SO mode couples to the electric field of the electron through \begin{equation} \H_{ep} = \iint\d \bm{r}\d \bm{R}\ \Psi^{\dag}(\bm{r})\frac{e(\bm{r} - \bm{R})\cdot\bm{P}(\bm{R})}{\lVert \bm{r} - \bm{R}\rVert^3}\Psi(\bm{r}), \end{equation} where $\Psi(\bm{r})$ is the electron field operator, and $\bm{P}(\bm{R})$ is the polarization field. 
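For the oscillator model above, the Lyddane--Sachs--Teller relation gives $\omega_{LO} = \omega_{TO}\sqrt{\epsilon_0/\epsilon_\infty}$, and $\epsilon(\omega) < 0$ precisely in the window $\omega_{TO} < \omega < \omega_{LO}$; in the electrostatic limit ($q \gg \omega/c$) the SO frequency at a vacuum interface satisfies $\epsilon(\omega_{SO}) = -1$. A quick numerical confirmation with illustrative constants (not material parameters for SrIrO$_3$):

```python
import numpy as np

eps0, eps_inf, wTO = 25.0, 5.0, 1.0       # illustrative constants

def eps(w):
    """Independent-oscillator dielectric function of the text."""
    return eps_inf + (eps0 - eps_inf) / (1 - (w / wTO) ** 2)

wLO = wTO * np.sqrt(eps0 / eps_inf)       # Lyddane-Sachs-Teller relation

# eps(w) < 0 exactly in the window (wTO, wLO)
w = np.linspace(0.01, 3.0, 200001)
neg = w[eps(w) < 0]
assert np.isclose(neg.min(), wTO, atol=1e-3)
assert np.isclose(neg.max(), wLO, atol=1e-3)

# electrostatic SO frequency at a vacuum interface: eps(wSO) = -1
wSO = wTO * np.sqrt((eps0 + 1) / (eps_inf + 1))
assert np.isclose(eps(wSO), -1.0)
assert wTO < wSO < wLO
```
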
Due to the exponential attenuation of the mode amplitudes into the bulk, SO modes will couple differently to electrons than their bulk LO counterparts. Choosing the electron field to be the TCSM surface states from Section~\ref{sec:surface-states}, we can expand \begin{equation} \Psi(\bm{r}) = \frac{1}{\sqrt{\mathcal{A}}}\sum_{\k}e^{i\k\cdot\bm{r}_s}e^{\lambda_{\k} x_b}\Psi_{\k}, \end{equation} where $\bm{r}_s = (x_a,x_c)$ is the position on the surface of the crystal, $\k = (k_a,k_c)$ is a 2D wavevector, $\lambda_{\k}$ describes attenuation into the bulk, and $\Psi_{\k}$ is an electron field operator. Using the polarization field of the SO mode, it has been shown\cite{lucas1970electron, lucas1970quantum, wang1972electron} that the electron-phonon interaction for $q \gg \omega_{TO}/c$ takes the form \begin{equation} \H_{ep} \propto \sum_{\k,\bm{q}}\frac{1}{\sqrt{q}}\left(\int_{-\infty}^0\d x_b\ e^{qx_b}e^{(\lambda_{\k+\bm{q}} + \lambda_{\k})x_b} \right)A_{\bm{q}}\Psi^{\dag}_{\k + \bm{q}}\Psi_{\k}. \end{equation} So in contrast with the \citet{frohlich1954} Hamiltonian, where the electron-bulk LO phonon vertex scales like $1/q$, the electron-SO phonon vertex scales like $1/\sqrt{q}$. The coupling of electrons to extended bulk-like LO phonons in a slab geometry was found\cite{lucas1970electron} to scale like the bulk Fr\"ohlich coupling. \begin{figure} \includegraphics[width=\linewidth]{fig6.pdf} \caption{(colour online) Dyson equation for (a) even and (b) odd phonon propagators $\mathcal{D}_{\pm}(q)$ through their interaction with surface states for small $q_a$. The vertex $\Box$ represents even modes coupling through the total density, while $\triangle$ represents coupling of odd modes through the relative density. 
As the irreducible self-energy we take the first-order particle-hole bubble $\Pi^0_{\pm}(q)$ in which we sum over $m$, indexing the mirror branch of the electrons.} \label{fig:phonon-RPA} \end{figure} \subsection{\label{subsec:density-response} Density Response and Phonon Damping} With the preceding form of the electron-phonon interaction Eq.~\ref{eq:sym-ep-coupling}, we calculate the imaginary part of the first-order polarization bubble $\Pi^0_{\pm}$ in the Matsubara formalism for $q_c = 0$ at zero temperature. The self-energy serves to modify the phonon propagator from Eq.~\ref{eq:unperturbed-propagator} to \begin{equation} \mathcal{D}_{\lambda}(\bm{q},\omega) = \frac{2\omega_{\bm{q}\lambda}^0}{\omega^2 - (\omega_{\bm{q}\lambda}^0)^2 - 2\omega_{\bm{q}\lambda}^0|g_{\bm{q}\lambda}|^2\Pi^0_{\lambda}(\bm{q},\omega)}, \end{equation} and in particular, the imaginary part of $\Pi^0_{\lambda}$ broadens the phonon line width, shortening the mode lifetime. An explicit calculation (see Appendix~\ref{app:bubble-calculation}) yields \begin{equation} -\tfrac{1}{\pi}\mathrm{Im}\Pi^0_+(q_a,q_c = 0,\omega) = 0 \end{equation} for the even modes, and \begin{equation} -\tfrac{1}{\pi}\mathrm{Im}\Pi^0_-(q_a,q_c = 0,\omega) \propto \frac{1}{v_F}\left[\Theta(2\mu + \omega) - \Theta(2\mu - \omega)\right] \end{equation} for the odd modes, where $\omega\in [-2\Lambda,+2\Lambda]$ and $q_a$ is small. The chemical potential $\mu > 0$ of the surface states is measured from the nodal point shown in Fig.~\ref{fig:ph-excitations}. The imaginary part is easily understood by considering particle-hole excitations which provide decay channels for the phonons. Even modes excite pairs within the same mirror branch, leading to a linear $\mathrm{Im}\Pi^0_+(\bm{q},\omega)$ as depicted in Fig.~\ref{fig:ph-excitations}(a), which vanishes at $q_c = 0$.
Odd modes excite pairs in different mirror branches, so even at $q_c = 0$ pairs can be excited with energy $\omega$ ranging from $2\mu$ to $2\Lambda$ as depicted in Fig.~\ref{fig:ph-excitations}(b). These results hold for small $q_a$, so long as particle-hole pairs with different $k_a$ lie in the same region along the line of 1D Dirac cones. In real materials the chiral symmetry is slightly broken due to next-nearest neighbour in-layer hopping, which adds a small $k_a$ dispersion. Despite this, particle-hole pairs may still be excited at small $q_a$ with the window $[2\mu,2\Lambda]$ being slightly reduced. We therefore predict damping of mirror odd bulk LO or SO phonons near the zone center due to the presence of surface states, while mirror even phonons are unaffected. \begin{figure}[H] \includegraphics[width=\linewidth]{fig7.pdf} \caption{(colour online) Particle-hole excitations contributing to the (a) even mode self-energy for small $q_a$ with $p_c = k_c - \pi$, and (b) the same for odd modes as shown for $k_a > 0$. (c) Imaginary part of phonon self-energy as a function of $\omega$, with $q_c = 0$ and small $q_a$, for even and odd modes.} \label{fig:ph-excitations} \end{figure} \section{\label{sec:discussion} Discussion} Landau damping due to particle-hole excitations of a typical bulk FS, including a nodal ring FS\cite{rhim2016anisotropic}, vanishes as $q\rightarrow 0$ at finite frequency. However, significant damping of particular phonon modes in the same limit occurs in a TCSM when it exhibits a set of flat 1D Dirac surface states as shown in Fig.~\ref{fig:SBZ}. The nearly flat bands in one direction are responsible for this effect. This unique feature of the surface electron-phonon interaction in a TCSM, distinguished from bulk electronic contributions, is associated with the symmetry properties of surface states. With $\bm{q}$ along $q_a$, phonons have definite parity under mirror reflection.
Even modes can excite electron-hole pairs within the same mirror branch, while odd modes can excite pairs between different branches. As a result, only the odd optical phonons will be damped at the zone center. Signatures of this effect may be accessible through a combination of optical and scattering experiments. For SrHfO$_3$, isostructural to SrIrO$_3$, optical mode frequencies have been calculated\cite{vali2009lattice} using density functional perturbation theory (DFPT), which are comparable with experimental Raman\cite{park1976raman,lee2010optical} and IR reflectivity\cite{lee2010optical} studies. In the case of SrIrO$_3$, high pressures are necessary to achieve the orthorhombic perovskite structure\cite{longo1971structure}, and as a consequence there are only a small number of bulk experiments available\cite{nie2015interplay, matsuno2015engineering, zhao2008high,blanchard2014anomalous,fujioka2017correlated}. Despite this, line widths in bulk Raman spectroscopy, or the imaginary part of the dielectric function from IR reflectivity, should reflect electronic damping. Since SrHfO$_3$ is electronically insulating, it may serve as a reference material for intrinsic line widths. Moreover, Fuchs-Kliewer SO modes have been observed\cite{baden1981observation} in the cubic SrTiO$_3$ through high-resolution, low-energy electron diffraction (LEED). Based on the general theory of these modes, we also expect them to exist near the zone center in SrIrO$_3$ between bulk LO and TO frequencies, provided that the dielectric function is negative. Bulk IR reflectivity data would serve to determine $\epsilon(\omega)$ as well as the LO and TO frequencies.
Even without a microscopic basis for the SO modes (which may be provided by DFPT, or a semi-empirical approach such as the embedded atom method\cite{daw1984embedded,daw1993embedded,foiles1986embedded,karimi1992embedded} or the multipole expansion\cite{jayanthi1987nature,kaden1992electronic}), a general symmetry analysis tells us that SO modes along $q_a$ will have definite parity under mirror reflection. The existence of these modes could be confirmed with LEED or inelastic helium atom scattering (HAS)\cite{zhu2011interaction,zhu2012epcoupling}. In SrIrO$_3$ HAS would be more appropriate to avoid electronic contributions to scattering. Comparing line widths in time-of-flight HAS measurements of SrIrO$_3$ along the $\hat{\bm{a}}$ direction (for a $\hat{\bm{b}}$ crystal termination) with the reference material SrHfO$_3$ would provide information about how the SO modes are damped. Quantitative analysis of SO modes in SrIrO$_3$ is beyond the scope of the current work, and may be an interesting subject for future study. In summary, we have investigated the nature of the wave function describing surface states in a TCSM, and found unique properties under mirror symmetry. This restricts the form of the electron-phonon interaction for phonons of definite mirror symmetry when $\bm{q}$ is along $q_a$. The surface states couple to bulk LO, or SO modes with different scaling of the vertex. We computed the first-order self-energy of mirror even and odd phonons, and found that damping near the zone center at finite frequency is zero for mirror even modes but finite for mirror odd modes. Damping from surface electrons is distinct from typical Landau damping due to bulk electrons, and we propose a combination of optical and HAS experiments to observe this. Such an experiment would provide the first evidence of surface states in a topological crystalline semimetal. \begin{acknowledgments} We thank Yige Chen for useful discussions at the beginning of this project. 
This work was supported by the Natural Sciences and Engineering Research Council of Canada and the Center for Quantum Materials at the University of Toronto. \end{acknowledgments}
\section{Introduction} Revealing the morphological and chemical structure of the Milky Way requires knowing the locations of objects on a Galaxy-wide scale. In the Solar neighborhood, the distances to stars can be accurately derived by measuring their parallax. Far from the Solar neighborhood, distances to stars may be determined using spectrophotometric techniques \citep[e.g.,][]{moises2011} and red clump stars \citep[e.g.,][]{bovy2014}. Distances to gas clouds can be obtained from both Very Long Baseline Interferometry (VLBI) parallax measurements of molecular maser emission from high mass star forming regions (HMSFRs) \citep[e.g.,][]{reid2014} and kinematic distance determinations \citep[e.g.,][]{anderson2012}. Kinematic distances are derived by measuring the local standard of rest (LSR) velocity, \ensuremath{V_{\rm LSR}}\xspace, of an object and assuming a model of Galactic rotation. If the object is on a circular orbit following this Galactic rotation model (GRM), then the LSR velocity of the object uniquely identifies the object's Galactocentric radius, \(R\). Beyond the Solar orbit, this technique also uniquely determines the object's Galactocentric azimuth, \(\theta\), and distance from the Sun, \(d\). Within the Solar orbit, kinematic distances suffer from the kinematic distance ambiguity (KDA). Here, a single LSR velocity may correspond to two distances: a ``near'' and ``far'' kinematic distance. We must use additional information to identify the kinematic distance ambiguity resolution (KDAR). The kinematic method is commonly used to determine the distances to HMSFRs in the study of Galactic structure. Recently, for example, \citet{balser2015} used H\,{\sc ii}\ region kinematic distances to probe the metallicity distribution across the Galactic disk.
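The near/far ambiguity can be made concrete with a small sketch. For a flat rotation curve $\Theta(R)=\Theta_0$ (a deliberately simplified stand-in for the Galactic rotation models discussed below, with illustrative values of $R_0$ and $\Theta_0$), the measured $V_{\rm LSR}$ fixes the Galactocentric radius, and an inner-Galaxy sightline crosses that radius at two heliocentric distances:

```python
import numpy as np

def kinematic_distances(glong_deg, vlsr, r0=8.34, theta0=240.0):
    """Near, far, and tangent-point distances (kpc) for an inner-Galaxy
    sightline, assuming a flat rotation curve Theta(R) = theta0.
    r0 (kpc) and theta0 (km/s) are illustrative values."""
    l = np.radians(glong_deg)
    # V_LSR = (Theta(R)/R - theta0/r0) * r0 * sin(l); for a flat curve
    # this inverts directly to the Galactocentric radius R:
    R = r0 / (1.0 + vlsr / (theta0 * np.sin(l)))
    # The sightline meets the circle of radius R at two distances:
    root = np.sqrt(R**2 - (r0 * np.sin(l))**2)
    d_tan = r0 * np.cos(l)  # tangent-point distance
    return d_tan - root, d_tan + root, d_tan

d_near, d_far, d_tan = kinematic_distances(40.0, 30.0)
```

Both roots reproduce the same LSR velocity; deciding which applies is the KDAR problem described above.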
The Green Bank Telescope H\,{\sc ii}\ Region Discovery Survey (GBT HRDS) and its successors discovered more than \({\sim}1000\) new Galactic H\,{\sc ii}\ regions by measuring their centimeter wavelength radio recombination line (RRL) emission \citep{bania2010,bania2012,anderson2015}. H\,{\sc ii}\ regions are the zones of ionized gas surrounding recently-formed high-mass (OB-type) stars. They are the archetypical tracer of Galactic spiral structure. \citet{anderson2012} derived the kinematic distances to 149 H\,{\sc ii}\ regions in the original GBT HRDS, and today the \textit{WISE} Catalog of Galactic H\,{\sc ii}\ Regions \citep{anderson2014} lists \({\sim}1500\) H\,{\sc ii}\ region kinematic distances. Errors in kinematic distances are caused by both inaccurate GRMs and incorrect KDARs. The rotation of the Milky Way is affected by non-circular streaming motions induced by the Galactic bar and spiral arms \citep[e.g.,][]{burton1971,gomez2006,moises2011}. These deviations from circular motion will affect the accuracy of GRMs. A variety of techniques have been used to resolve the KDA for Galactic H\,{\sc ii}\ regions, for example, H\,{\sc i}\ emission/absorption experiments \citep{kuchar1994,kolpak2003,anderson2009a,anderson2012,urquhart2012,brown2014}, H\,{\sc i}\ self-absorption experiments \citep{roman-duval2009,urquhart2012}, and H\(_2\)CO absorption experiments \citep{araya2002,watson2003,sewilo2004}. If these KDAR techniques are inaccurate, the derived kinematic distances will be as well. Trigonometric parallax measurements of molecular masers, made with VLBI, are an independent and accurate way to measure the distances to HMSFRs.
Over the past decade, the Bar and Spiral Structure Legacy Survey (BeSSeL)\footnote{\url{http://bessel.vlbi-astrometry.org/}}, the Japanese VLBI Exploration of Radio Astrometry (VERA)\footnote{\url{http://veraserver.mtk.nao.ac.jp/}}, and the European VLBI Network (EVN)\footnote{\url{http://www.evlbi.org/}} projects have accumulated a sample of more than 100 VLBI parallaxes and proper motions for masers associated with HMSFRs \citep{reid2014}. These trigonometrically-derived distances do not suffer from the same problems as kinematic distances. With a typical parallax uncertainty of \({\sim}20\,\mu\text{as}\), these parallax distances are accurate to about 10\% at distances of 5\ensuremath{\,{\rm kpc}}\xspace \citep{reid2014rev}. Although parallaxes are the ``gold standard'' distances for HMSFRs, they are difficult and time-consuming to measure. To constrain the parallax and proper motion of four HMSFRs, including W51 Main/South, \citet{sato2010} used the National Radio Astronomy Observatory (NRAO) Jansky Very Large Array to locate background extragalactic position reference objects, together with the NRAO Very Long Baseline Array (VLBA) for accurate astrometry. The VLBA observations totaled \({\sim}28\) hours spread over \({\sim}12\) months. Such observations are impractical to make for all \({\sim}4000\) H\,{\sc ii}\ regions in the \textit{WISE} Catalog. Furthermore, the majority of the H\,{\sc ii}\ regions in the \textit{WISE} Catalog will not have detectable maser emission. \startlongtable \input{reid2014_sample_cut.tex} With such a large sample of HMSFR maser parallaxes, we can now compare the parallax and kinematic distances and judge the accuracy of the kinematic distance technique. \citet{reid2009b} performed a similar study comparing the kinematic and parallax distances of 18 HMSFRs. They found that the kinematic distance method gives distances much larger (up to a factor of 2) than the parallax distances for a majority of their sample.
After correcting the LSR velocities using updated Solar motion parameters, however, the mean difference between the kinematic and parallax distances became close to zero and only half of their sample had kinematic distances larger than their parallax distances. Here we expand upon the \citet{reid2009b} analysis using a larger sample of HMSFRs. \section{Sample Selection} Our sample of HMSFRs comes from the maser parallax catalog in \citet{reid2014} that contains parallaxes and proper motions for 103 HMSFRs and HMSFR proxies in the Milky Way. These data stem from measurements made using the NRAO VLBA, the VERA project, and the EVN. The \citet{reid2014} catalog contains the parallax, maser LSR velocity, and their associated uncertainties for each HMSFR. This provides the necessary information to derive both the parallax distance and kinematic distance to each object. Kinematic distances are unreliable in the direction of the Galactic Center (GC; \(\ensuremath{\ell}\xspace = 0^\circ\)) and the Galactic Anti-center (GAC; \(\ensuremath{\ell}\xspace = 180^\circ\)) due to velocity crowding: LSR velocities due to circular motion tend towards zero in these directions. As in previous studies using kinematic distances \citep[e.g.,][]{balser2015}, we exclude all objects within \(15^\circ\) of the GC and \(20^\circ\) of the GAC. Our final sample contains 72 HMSFRs and 3 red supergiants (HMSFR proxies). The positions, parallaxes, and LSR velocities (\(V_{\rm LSR}\)) from the \citet{reid2014} catalog are reproduced in Table~\ref{tab:sample}. According to \citet{reid2014}, the listed LSR velocities are those of methanol masers when available, otherwise they are the \co emission line velocities from associated giant molecular clouds (GMCs). The LSR velocity uncertainties include both measurement uncertainties as well as an added uncertainty relating the maser spot motion to the bulk HMSFR motion. 
This added component ranges from \(5\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) to \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) \citep[see][]{reid2014}. \section{Parallax Distances} The parallax distance is defined as \begin{equation} D_P = \frac{1}{\pi} \label{eq:parallax} \end{equation} where the parallax distance, \(D_P\), has units of kpc when the parallax, \(\pi\), has units of milli-arcseconds (mas). If the parallax uncertainty, \(\sigma_\pi\), is small compared to the parallax, i.e. \(\sigma_\pi/\pi \ll 1\), then the parallax distance uncertainty, \(\sigma_P\), is determined by propagating the parallax uncertainty through Equation~\ref{eq:parallax}, \begin{equation} \sigma_P = \frac{\sigma_\pi}{\pi^2}. \end{equation} If the fractional parallax uncertainty is large, however, the shape of the parallax distance probability distribution function (PDF) is skewed. Thus the peak (\(D_P\)) and the shape of the wings change and the parallax distance uncertainty is non-symmetric around the peak \citep[see][]{kovalevsky1998}. Figure~\ref{fig:parallax_pdf_example} shows an example of the parallax distance PDF skew for different parallax uncertainties. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{parallax_pdf_example.pdf} \caption{Normalized parallax probability distribution function (PDF; top) and parallax distance PDF (bottom). The parallax in this example is \(\pi = 0.5\,\text{mas}\) and the parallax uncertainty is \(\sigma_\pi = 0.01\,\text{mas}\) (dotted), \(0.05\,\text{mas}\) (dashed) and \(0.1\,\text{mas}\) (solid). The parallax distance PDF is determined by Monte Carlo re-sampling the Gaussian parallax PDF. 
For large relative parallax uncertainties, the parallax distance PDF is skewed.} \label{fig:parallax_pdf_example} \end{figure} \begin{figure*}[ht] \centering \includegraphics[width=0.49\linewidth]{{G049.48-00.36}.pdf} \includegraphics[width=0.49\linewidth]{{G209.00-19.38}.pdf} \\ \includegraphics[width=0.49\linewidth]{{G095.29-00.93}.pdf} \includegraphics[width=0.49\linewidth]{{G025.70+00.04}.pdf} \caption{Normalized parallax distance probability distribution functions (PDFs) for four HMSFRs. The solid curve is the kernel density estimator (KDE) of the distribution; the solid vertical line is the peak of the KDE and our assigned parallax distance. The dashed vertical line is the parallax distance given by Equation~\ref{eq:parallax}. The vertical dotted lines span the symmetric uncertainty range in the parallax distance derived by propagating the parallax uncertainty through Equation~\ref{eq:parallax}. The filled region is the uncertainty range derived using the KDE (see text). Panel (a) (G049.48\(-\)00.36; W 51 IRS2) has the largest fractional parallax uncertainty and thus has the most skewed PDF. Panel (b) (G209.00-19.38; Orion Nebula) has the smallest fractional parallax uncertainty and has the PDF closest to a Gaussian distribution. Panel (c) (G095.29-00.93) has a typical fractional parallax uncertainty. Panel (d) (G025.70+00.04) has a large fractional parallax uncertainty. It has the largest deviation from the Monte Carlo-defined parallax distance and the parallax distance derived using Equation~\ref{eq:parallax}.} \label{fig:parallax_pdf} \end{figure*} We derive a Monte Carlo parallax distance for each HMSFR by re-sampling the measured parallaxes within their uncertainties, assuming a Gaussian parallax PDF. We sample the parallax \(10^5\) times and use Equation~\ref{eq:parallax} to derive the parallax distance distribution. To approximate the parallax distance PDF, we fit a kernel density estimator (KDE) to the distribution. 
We use the linear combination KDE technique from \citet{jones1993}, which is accurate even in the presence of physical boundaries such as the requirement that distances be greater than 0. The parallax distance PDFs for four sources are shown in Figure~\ref{fig:parallax_pdf}. The peak of the PDF (i.e. the most likely value) is the parallax distance. In every case, this distance is smaller than the distance given by Equation~\ref{eq:parallax}. We derive the uncertainty in the parallax distance by determining the lower and upper bounds of the PDF such that 1) the value of the PDF at both bounds is equal and 2) the integral of the normalized PDF between the bounds is equal to 0.683 (i.e., 68.3\% of the total area under the PDF). This uncertainty is therefore the 68.3\% confidence interval. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{para_pdf_para.pdf} \includegraphics[width=\linewidth]{frac_para_pdf_para.pdf} \caption{Difference (top) and fractional difference (bottom) between parallax distances derived using Equation~\ref{eq:parallax}, \(D_P({\rm eq.\,1})\), and using the Monte Carlo method, \(D_P({\rm MC})\). The solid curve is the KDE fit to the difference distribution and the solid vertical line is the median of the distribution.} \label{fig:para_pdf_para} \end{figure} The difference between the parallax distances derived using Equation~\ref{eq:parallax} and our Monte Carlo-derived parallax distances is small; the median difference is 0.03 kpc and the largest difference is 1.32 kpc for G025.70+00.04 (Figure~\ref{fig:parallax_pdf}, panel (d)). Figure~\ref{fig:para_pdf_para} shows the distribution of parallax distance differences between these two methods for our HMSFR sample (Table~\ref{tab:sample}). The majority of objects in our sample have less than \(0.1\ensuremath{\,{\rm kpc}}\xspace\) difference between the Equation~\ref{eq:parallax} and Monte Carlo parallax distances. 
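The re-sampling procedure amounts to only a few lines. In the sketch below, a plain histogram peak stands in for the boundary-corrected \citet{jones1993} KDE, and the example parallax values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_parallax_distance(pi_mas, sigma_pi, n=200_000, bins=400):
    """Monte Carlo parallax distance: resample a Gaussian parallax PDF and
    invert each draw (D = 1/pi; kpc when pi is in mas).  The PDF peak is
    located with a simple histogram here, standing in for the
    boundary-corrected kernel density estimator used in the text."""
    draws = rng.normal(pi_mas, sigma_pi, n)
    dist = 1.0 / draws[draws > 0]   # keep physical (positive) parallaxes
    dist = dist[dist < 10.0]        # trim the far tail for stable binning
    counts, edges = np.histogram(dist, bins=bins)
    i = np.argmax(counts)
    return 0.5 * (edges[i] + edges[i + 1])

# The peak shifts below 1/pi as the fractional uncertainty grows:
print(mc_parallax_distance(0.5, 0.01))  # close to 1/0.5 = 2 kpc
print(mc_parallax_distance(0.5, 0.10))  # noticeably below 2 kpc
```

This reproduces the behavior seen in Figure~\ref{fig:parallax_pdf}: the larger the fractional parallax uncertainty, the further the PDF peak falls below the naive \(1/\pi\) distance.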
\section{Kinematic Distances} A fundamental assumption of the kinematic distance method is that the chosen GRM, which gives the Galactic orbital speed, \(\Theta\), at all Galactocentric radii, \(R\), accurately models the Galaxy. Several different techniques have been employed to derive \(\Theta(R)\), for example the tangent point method \citep[e.g.,][]{mcclure-griffiths2007} or using the full phase-space kinematics of masers associated with HMSFRs \citep[e.g.,][]{reid2014}. The former method is only reliable in the inner-Galaxy (within the Solar orbit) whereas the latter method works across the entire Galactic disk. \citet{reid2016b} demonstrated that both methods predict similar rotation curves in the inner-Galaxy. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{kd_rotcurve_example.pdf} \\ \includegraphics[width=0.8\linewidth]{kd_faceon_example.pdf} \\ \includegraphics[width=0.8\linewidth]{kd_distance_example.pdf} \caption{Schematic of the kinematic distance technique. Panel (a) is the \citet{reid2014} rotation curve. Panel (b) is a face-on view of the Galaxy with the Galactic Center located at the center and the Sun located 8.34 kpc in the direction \(\theta_{\rm Az} = 0^\circ\). The concentric circles are 4, 8, and 12 kpc in \(R\) and \(\theta_{\rm Az}\) is given in degrees. The solid line is a line of sight through the Galaxy with \(\ensuremath{\ell}\xspace = 40^\circ\). Panel (c) is the LSR velocity profile along this line-of-sight. An object with \(V_{\rm LSR} = 30\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) in this direction (solid horizontal line) is an inner-Galaxy object and has two possible kinematic distances (black circles). An object with \(V_{\rm LSR} = -30\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) (dashed horizontal line) is an outer-Galaxy object and has only one possible kinematic distance (black square). Open circles show the location of the Sun. 
} \label{fig:kinematic_distances} \end{figure} The GRM rotation curve is used to transform the Galactic longitude, Galactic latitude, distance space (\(\ensuremath{\ell}\xspace,b,d\)) to Galactic longitude, Galactic latitude, LSR velocity space (\(\ensuremath{\ell}\xspace,b,\ensuremath{V_{\rm LSR}}\xspace\)). A schematic of the kinematic distance technique is shown in Figure~\ref{fig:kinematic_distances}. Many studies have shown that HMSFRs in the Milky Way do not have perfectly circular orbits; there are significant non-circular motions due to streaming in the vicinity of the Galactic bars and spiral arms \citep[e.g.,][]{burton1971,gomez2006,reid2009b,reid2014}. These streaming motions compromise the accuracy of kinematic distances in a complicated, uncertain way and are typically not accounted for in the derivation of kinematic distances. A face-on view of the \citet[][hereafter A12]{anderson2012} kinematic distance uncertainty model is shown in Figure~\ref{fig:lda_uncertainty}. The A12 model includes uncertainties that stem from: (1) the variation in kinematic distances when using different GRMs; (2) the adopted values of the Solar Galactocentric Radius, \(R_0\), and Solar circular orbit speed, \(\Theta_0\); and, (3) including a global \(7\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) streaming motion uncertainty. This streaming motion uncertainty is an estimate of the true global streaming motion uncertainty which may be between 5 and \(10\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) \citep{burton1966}. They did not, however, consider uncertainties with the GRMs or in the Solar motion parameters that define the LSR. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{lda_dist.pdf} \\ \includegraphics[width=\linewidth]{lda_dist_frac.pdf} \caption{Face-on Galactic view of the A12 kinematic distance uncertainty model. The top panel is the absolute distance uncertainty and the bottom panel is the fractional distance uncertainty. 
The Galactic Center is located at the origin and the Sun is located 8.34 kpc in the direction \(\theta_{\rm Az} = 0^\circ\). The concentric circles are 4, 8, and 12 kpc in \(R\) and \(\theta_{\rm Az}\) is given in degrees. The color represents the distance uncertainty. The regions \(-15^\circ < \ensuremath{\ell}\xspace < 15^\circ\) and \(160^\circ < \ensuremath{\ell}\xspace < 200^\circ\) are masked (white) since kinematic distances are very inaccurate towards the Galactic Center and Galactic Anti-center. The black regions represent distance uncertainties greater than \(\sigma_d = 2\ensuremath{\,{\rm kpc}}\xspace\) (top) or \(\sigma_d/d = 0.5\) (bottom). The gray points are the HMSFRs in our sample.} \label{fig:lda_uncertainty} \end{figure} Here we discuss three methods for calculating kinematic distances: the traditional method using the \citet{brand1993} GRM (Method A), the traditional method using updated Solar motion parameters and the \citet{reid2014} GRM (Method B), and a new Monte Carlo technique using the \citet{reid2014} GRM (Method C). \subsection{Method A: Traditional Method, \citet{brand1993} GRM} The traditional method for calculating kinematic distances uses a GRM and the measured position and LSR velocity, (\(\ensuremath{\ell}\xspace,b,\ensuremath{V_{\rm LSR}}\xspace\)), of an object to determine the distance(s) that correspond to the measured LSR velocity. This is typically accomplished by finding the minimum difference between the GRM LSR velocity and the measured LSR velocity (see Figure~\ref{fig:kinematic_distances}). We derive the Method A kinematic distances for our sample of HMSFRs using the \citet{brand1993} GRM and the uncertainty model from A12. This rotation curve and uncertainty model provide the kinematic distances and distance uncertainties listed in the \textit{WISE} Catalog. We resolve the KDA by finding the kinematic distance closest to the parallax distance. 
If the region has an LSR velocity within \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) of the tangent point velocity, we assign it to the tangent point. A12 used a similar tangent point strategy, but with a velocity cutoff of \(10\ensuremath{\,{\rm km\,s^{-1}}}\xspace\). Our \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) cutoff is more conservative and is more consistent with the GRM uncertainties discussed in the following sections. \subsection{Method B: Updated Solar Motion Parameters, \citet{reid2014} GRM} In 1985, the LSR was \textit{defined} by the International Astronomical Union Commission 33 as \(220\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) in the direction \((\ell,b) = (90^\circ,0^\circ)\) with a Solar non-circular motion of \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) in the direction \(\alpha=18^{\rm h}\), \(\delta = +30^\circ\) (1900) \citep{kerr1986}. Precessing to the modern epoch (J2000), the Solar non-circular motion is defined in Galactic Cartesian coordinates as \(U_\odot^{\rm Std} = 10\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) in the direction of the GC, \(V_\odot^{\rm Std} = 15\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) in the direction of the Solar orbit, and \(W_\odot^{\rm Std} = 7\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) in the direction of the North Galactic Pole. Since this definition was adopted, many authors have published more accurate derivations of the Solar non-circular motion parameters. For example, \citet{reid2014} derived updated Solar motion parameters by fitting a \citet{persic1996} universal rotation curve to the full phase-space kinematics of a sample of maser parallaxes and proper motions towards HMSFRs. The \citet{persic1996} universal rotation curve is a physically-motivated GRM, rather than an empirical model, that includes the gravitational potential of both the disk and halo. 
The \citet{persic1996} universal rotation curve is given by \begin{equation} \Theta(R) = a_1 \left[\frac{1.97\beta x^{1.22}}{\left(x^2 + 0.78^2\right)^{1.43}} + \left(1-\beta\right)x^2\frac{1+a_3^2}{x^2 + a_3^2}\right]^{1/2}\label{eq:persic_rotcurve} \end{equation} where \(x = R/(a_2R_0)\) and \(\beta = 0.72 + 0.44\log_{10}[(a_3/1.5)^5]\). Here, \(a_1\), \(a_2\), and \(a_3\) are the parameters fit by \citet{reid2014}. These parameters, as well as the updated Solar motion parameters fit by \citet{reid2014}, are listed in Table~\ref{tab:reid2014_rotcurve}. \input{reid2014_rotcurve.tex} To correct the LSR velocities in our sample for the updated Solar non-circular motion parameters, we first convert the measured LSR velocity to a heliocentric velocity via \begin{equation} \begin{aligned} V_{\rm helio} & = V_{\rm LSR} - \left(U_\odot^{\rm Std}\cos\ensuremath{\ell}\xspace + V_\odot^{\rm Std}\sin\ensuremath{\ell}\xspace\right)\cos b \\ & - W_\odot^{\rm Std}\sin b.\label{eq:helio} \end{aligned} \end{equation} Next, we use the \citet{reid2014} Solar motion parameters to derive the revised LSR velocity, \(V_{\rm LSR}^{\rm Rev}\): \begin{equation} \begin{aligned} V_{\rm LSR}^{\rm Rev} & = V_{\rm helio} + \left(U_\odot^{\rm Rev}\cos\ensuremath{\ell}\xspace + V_\odot^{\rm Rev}\sin\ensuremath{\ell}\xspace\right)\cos b \\ & + W_\odot^{\rm Rev}\sin b. \label{eq:vlsr} \end{aligned} \end{equation} The uncertainty in this LSR velocity (\(\sigma_V^{\rm Rev}\)) includes contributions from the uncertainty in the measured LSR velocity (\(\sigma_V\)) and the uncertainties in the \citet{reid2014} Solar motion parameters, \(\sigma_{U\odot}^{\rm Rev},\sigma_{V\odot}^{\rm Rev},\sigma_{W\odot}^{\rm Rev}\).
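In code, the two conversions above amount to the following minimal sketch; the ``revised'' solar-motion numbers in the example are illustrative placeholders of roughly the right magnitude, not the fitted parameters listed in the table above:

```python
import numpy as np

# IAU standard solar motion, precessed to J2000 (km/s):
U_STD, V_STD, W_STD = 10.0, 15.0, 7.0

def revise_vlsr(vlsr, glong_deg, glat_deg, u_rev, v_rev, w_rev):
    """Strip the standard solar motion from a catalogued V_LSR (giving the
    heliocentric velocity), then re-apply an updated solar motion
    (u_rev, v_rev, w_rev), following the two conversion equations above."""
    l, b = np.radians(glong_deg), np.radians(glat_deg)
    v_helio = (vlsr - (U_STD * np.cos(l) + V_STD * np.sin(l)) * np.cos(b)
               - W_STD * np.sin(b))
    return (v_helio + (u_rev * np.cos(l) + v_rev * np.sin(l)) * np.cos(b)
            + w_rev * np.sin(b))

# Example with placeholder 'revised' values (hypothetical numbers,
# not the fitted Reid et al. 2014 parameters):
v_rev = revise_vlsr(96.7, 32.0, 0.05, 11.0, 15.0, 9.0)
```

Using the standard values for both sets returns the input velocity unchanged, which is a convenient sanity check on the sign conventions.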
The combined uncertainty in the revised LSR velocity is \begin{equation} \begin{aligned} {\sigma_V^{\rm Rev}}^2 & = \sigma_V^2 + \left(\sigma_{U\odot}^{\rm Rev}\cos\ensuremath{\ell}\xspace\cos b\right)^2 + \left(\sigma_{V\odot}^{\rm Rev}\sin\ensuremath{\ell}\xspace\cos b\right)^2 \\ & + \left(\sigma_{W\odot}^{\rm Rev}\sin b\right)^2 \label{eq:velocity_uncertainty} \end{aligned} \end{equation} For simplicity, we ignore the cross-terms between the Solar motion parameter uncertainties. Including these cross-terms would have little effect since \citet{reid2014} finds that the magnitude of the Pearson product-moment correlation coefficients between these parameters is small, ranging between \(0.011\) and \(0.017\). To compute the Method B kinematic distances to our sample of HMSFRs, we use the \citet{reid2014} fits to the \citet{persic1996} universal rotation curve and these revised LSR velocities. As before, we assign the near or far KDAR by determining which kinematic distance is closest to the parallax distance. If the HMSFR has an LSR velocity within \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) of the tangent point velocity, we assign it to the tangent point distance. The Method B kinematic distance uncertainties are again determined by the A12 kinematic distance uncertainty model. \begin{figure}[ht] \includegraphics[width=\linewidth]{reid2014_rotcurve_fig.pdf} \caption{The \citet{reid2014} universal rotation curve. The solid line is the nominal rotation curve using the parameters listed in Table~\ref{tab:reid2014_rotcurve}. The colors represent the probability distribution function (PDF) derived by Monte Carlo re-sampling the rotation curve parameters within their uncertainties.} \label{fig:reid2014_rotcurve} \end{figure} \subsection{Method C: Monte Carlo Method, \citet{reid2014} GRM} Here we develop a method to derive kinematic distances and their uncertainties in a more statistically robust way. 
With this method we re-sample all measured and derived parameters within their uncertainties and determine the probability distribution function (PDF) of kinematic distances. We first correct the measured LSR velocities as described above. We then re-sample the revised LSR velocities from a normal distribution centered on the nominal revised LSR velocity, \(\ensuremath{V_{\rm LSR}}\xspace^{\rm Rev}\), with a width \(\sigma_V^{\rm Rev}\). The width of this distribution is the total revised LSR velocity uncertainty, which includes both the measured uncertainty and the uncertainties in the Solar motion parameters. We also re-sample the \citet{reid2014} universal Galactic rotation curve parameters, including \(R_0\), from a normal distribution centered on the nominal values and a width equal to the uncertainty (see Equation~\ref{eq:persic_rotcurve} and Table~\ref{tab:reid2014_rotcurve}). The variation of this re-sampled rotation curve is shown in Figure~\ref{fig:reid2014_rotcurve}. Unlike A12, we do not add any additional streaming motion uncertainty into these calculations because the derivation of the \citet{reid2014} rotation curve inherently includes uncertainties due to streaming motions. \begin{figure*}[ht] \centering \includegraphics[width=0.8\linewidth]{{32.04glong_96.71velo}.pdf} \caption{Normalized probability distribution functions (PDFs) of the Method C kinematic distances derived for G032.04+00.05, \((\ensuremath{\ell}\xspace,V_{\rm LSR}) = (32.0^\circ, 96.7\,\ensuremath{\,{\rm km\,s^{-1}}}\xspace)\). Shown from top to bottom are the kinematic distance PDFs for: Galactocentric radius, \(R\), Galactocentric radius of tangent point, \(R_{\rm tan}\), near kinematic distance, \(d_{\rm near}\), far kinematic distance, \(d_{\rm far}\), and tangent point kinematic distance, \(d_{\rm tan}\). 
The PDFs are determined by Monte Carlo resampling the \citet{reid2014} rotation curve within the uncertainties in the rotation curve parameters and then deriving the kinematic distances. The solid curve is the kernel density estimation (KDE) derived using the linear combination technique from \citet{jones1993}. The dashed vertical line is the distance derived using the ``traditional'' kinematic techniques whereas the solid vertical line is the peak of the KDE. The gray region is the \(68.3\%\) confidence interval.} \label{fig:pdf_example_0lag_0stream} \end{figure*} By re-sampling the above parameters \(10^5\) times for each HMSFR, we derive the kinematic distance PDF for each object (see example for G032.04+00.05 in Figure~\ref{fig:pdf_example_0lag_0stream}). Each panel in Figure~\ref{fig:pdf_example_0lag_0stream} represents one distance we derive: the Galactocentric radius, \(R\), the Galactocentric radius of the tangent point, \(R_{\rm tan}\), the near kinematic distance, \(d_{\rm near}\), the far kinematic distance, \(d_{\rm far}\), and the tangent point kinematic distance, \(d_{\rm tan}\). The shapes of the PDFs are determined by the uncertainties in the LSR velocities, the Galactocentric radius of the Solar orbit, and the parameters of the rotation curve model. We fit each PDF with a kernel density estimator (KDE) derived using the linear combination technique from \citet{jones1993}. The peak of this KDE is the most probable kinematic distance, and the width and shape of the KDE describe the range of possible kinematic distances. Similar to how we defined parallax distances, we define the Method C kinematic distance as the peak of the KDE. The uncertainty in this distance is the 68.3\% confidence interval. We resolve the KDAR in the same way as in the previous two methods. 
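The Method C pipeline described above can be sketched end to end. In this sketch a flat rotation curve stands in for the \citet{persic1996} universal curve, scipy's Gaussian KDE stands in for the \citet{jones1993} linear combination KDE, the central \(68.3\%\) percentile interval stands in for the confidence interval, and all nominal values and uncertainties are illustrative, not the fitted parameters.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n = 10_000

# Illustrative inputs: revised LSR velocity (km/s) and longitude
vlsr, sigma_vlsr = 96.7, 5.0
glong = np.radians(32.0)

# Re-sample the velocity and the rotation curve parameters.  A flat
# curve (circular speed theta0, Solar radius R0) stands in for the
# universal rotation curve; all values here are illustrative.
v = rng.normal(vlsr, sigma_vlsr, n)
R0 = rng.normal(8.34, 0.16, n)       # kpc
theta0 = rng.normal(240.0, 8.0, n)   # km/s

# Galactocentric radius implied by each sample:
# V_lsr = R0 sin(l) (theta0/R - theta0/R0)  =>  solve for R
R = theta0 / (v / (R0 * np.sin(glong)) + theta0 / R0)

# Near, far, and tangent point kinematic distances (kpc)
d_tan = R0 * np.cos(glong)
half_chord = np.sqrt(np.clip(R**2 - (R0 * np.sin(glong))**2, 0.0, None))
d_near, d_far = d_tan - half_chord, d_tan + half_chord

# The KDE peak is the reported distance; the central 68.3% interval
# stands in for the paper's confidence interval
kde = gaussian_kde(d_near)
grid = np.linspace(d_near.min(), d_near.max(), 500)
d_best = grid[np.argmax(kde(grid))]
lo, hi = np.percentile(d_near, [15.85, 84.15])
```

Repeating this for each re-sampled parameter set yields the PDFs of \(R\), \(d_{\rm near}\), \(d_{\rm far}\), and \(d_{\rm tan}\) shown in Figure~\ref{fig:pdf_example_0lag_0stream}.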
If the object has a velocity in excess of the magnitude of the tangent point velocity, then the uncertainty in the tangent point distance is the formal Monte Carlo uncertainty (i.e., the 68.3\% confidence interval). If the object's velocity is smaller than the magnitude of the tangent point velocity but still within \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\), then the tangent point distance uncertainty is the total range from the near distance to the far distance. In this velocity range, traditional KDAR techniques are inaccurate and the object could be anywhere between the near and far kinematic distance (A12). \begin{figure*}[ht] \centering \includegraphics[width=0.45\linewidth]{pdf_neg_dist.pdf} \includegraphics[width=0.45\linewidth]{pdf_pos_dist.pdf} \\ \includegraphics[width=0.45\linewidth]{pdf_neg_dist_frac.pdf} \includegraphics[width=0.45\linewidth]{pdf_pos_dist_frac.pdf} \\ \caption{Face-on Galactic view of the Monte Carlo kinematic distance uncertainties. These uncertainties are not symmetric; the left panels are the uncertainty in the negative direction (toward the Sun) and the right panels are the uncertainty in the positive direction (away from the Sun). The top panels are the absolute distance uncertainties and the bottom panels are the fractional distance uncertainties. The Galactic Center is located at the origin and the Sun is located 8.34 kpc in the direction \(\theta_{\rm Az} = 0^\circ\). The concentric circles are 4, 8, and 12 kpc in \(R\) and \(\theta_{\rm Az}\) is given in degrees. The color represents the distance uncertainty. The regions \(-15^\circ < \ensuremath{\ell}\xspace < 15^\circ\) and \(160^\circ < \ensuremath{\ell}\xspace < 200^\circ\) are masked (white) since kinematic distances are very inaccurate towards the Galactic Center and Galactic Anti-center. The black regions represent distance uncertainties greater than \(\sigma_d = 2\ensuremath{\,{\rm kpc}}\xspace\) (top) or \(\sigma_d/d = 0.5\) (bottom). 
The gray points are the HMSFRs in our sample.} \label{fig:pdf_uncertainty} \end{figure*} A face-on Galactic view of the Monte Carlo kinematic distance uncertainties is shown in Figure~\ref{fig:pdf_uncertainty}. To construct these maps we use the Monte Carlo technique to compute the kinematic distance and distance uncertainties in bins of \(2^\circ\) in Galactic longitude and \(2\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) in velocity, with \(10^{4}\) Monte Carlo samples in each bin. Since the kinematic distance uncertainties derived in this method are not symmetric, we show the uncertainties in both the positive direction (away from the Sun) and the negative direction (toward the Sun). \startlongtable \input{final_distances.tex} \section{Kinematic Distance Uncertainty} We assess the accuracy of kinematic distances by comparing the parallax and kinematic distances for each of the three kinematic distance methods. Table~\ref{tab:final_distances} lists the derived distances for our sample: the Monte Carlo parallax distance, \(D_P\), the kinematic distances using each of the three methods, \(D_A\), \(D_B\), and \(D_C\), and their associated KDARs: KDAR\(_A\), KDAR\(_B\), and KDAR\(_C\). Here we investigate the differences between the parallax distances and kinematic distances and compare those differences to the kinematic and parallax distance uncertainties. 
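A minimal sketch of this comparison, scaling the median-corrected distance differences by their uncertainties and testing the result against a standard normal distribution (the function and variable names are ours):

```python
import numpy as np
from scipy.stats import kstest

def difference_normality(d_kin, d_par, sigma_diff):
    """Scale the median-corrected kinematic-minus-parallax distance
    differences by their combined uncertainties and test the result
    against a standard normal distribution with a K-S test."""
    diff = np.asarray(d_kin) - np.asarray(d_par)
    ratio = (diff - np.median(diff)) / np.asarray(sigma_diff)
    return kstest(ratio, "norm")
```

If the quoted uncertainties are an accurate description of the random scatter, the returned p-value should be large; a small p-value indicates the scaled differences are inconsistent with a normal distribution.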
For each kinematic distance method we generate six figures: (1) a histogram of the difference between the kinematic distance and the parallax distance; (2) a histogram of the fractional distance difference; (3) a scatter plot of the distance difference as a function of the parallax distance; (4) a scatter plot of the distance difference minus the median difference as a function of the parallax distance; (5) a cumulative distribution function (CDF) of the ratio of the distance difference to the uncertainty in the distance difference; and (6) a CDF of the ratio of the distance difference minus the median difference to the difference uncertainty. The distance difference histograms reveal any systematic differences between the kinematic and parallax distances. The scatter plots uncover correlations between the distance difference and the parallax distance. Finally, the CDFs characterize the accuracy of the kinematic and parallax distance uncertainties; if the kinematic and parallax distance uncertainties are random and an accurate representation of the data, the CDF should follow a normal distribution. \begin{figure*}[ht] \centering \includegraphics[width=0.42\linewidth]{orig_tan20_para_diff_hist.pdf} \includegraphics[width=0.42\linewidth]{orig_tan20_para_fracdiff_hist.pdf} \\ \includegraphics[width=0.42\linewidth]{orig_tan20_para_diff_scat.pdf} \includegraphics[width=0.42\linewidth]{orig_tan20_para_diff_scat_med.pdf} \\ \includegraphics[width=0.42\linewidth]{orig_tan20_para_diff_cdf.pdf} \includegraphics[width=0.42\linewidth]{orig_tan20_para_diff_cdf_med.pdf} \caption{Difference between parallax distances and Method A kinematic distances. Panel (a): histogram of distance difference. The solid curve is the KDE fit to the difference distribution and the solid vertical line is the median of the distribution. Panel (b): histogram of the fractional distance difference. Panel (c): scatter plot of distance difference as a function of parallax distance. 
Panel (d): scatter plot of the distance difference minus the median distance difference as a function of parallax distance. Panel (e): cumulative distribution function (CDF) of the ratio of the distance differences to the distance difference uncertainties. Panel (f): CDF of the ratio of the distance differences minus the median difference to the distance difference uncertainties. The dashed curve in Panels (e) and (f) is the expected CDF for a normal distribution centered on zero. The CDF does not reach 0 on the left or 1 on the right because there is at least one source beyond the limits of the abscissas.} \label{fig:orig_diff} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=0.42\linewidth]{reid_tan20_para_diff_hist.pdf} \includegraphics[width=0.42\linewidth]{reid_tan20_para_fracdiff_hist.pdf} \\ \includegraphics[width=0.42\linewidth]{reid_tan20_para_diff_scat.pdf} \includegraphics[width=0.42\linewidth]{reid_tan20_para_diff_scat_med.pdf} \\ \includegraphics[width=0.42\linewidth]{reid_tan20_para_diff_cdf.pdf} \includegraphics[width=0.42\linewidth]{reid_tan20_para_diff_cdf_med.pdf} \caption{Same as Figure~\ref{fig:orig_diff} but using Method B kinematic distances.} \label{fig:reid_diff} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=0.42\linewidth]{pdf_tan20_para_diff_hist.pdf} \includegraphics[width=0.42\linewidth]{pdf_tan20_para_fracdiff_hist.pdf} \\ \includegraphics[width=0.42\linewidth]{pdf_tan20_para_diff_scat.pdf} \includegraphics[width=0.42\linewidth]{pdf_tan20_para_diff_scat_med.pdf} \\ \includegraphics[width=0.42\linewidth]{pdf_tan20_para_diff_cdf.pdf} \includegraphics[width=0.42\linewidth]{pdf_tan20_para_diff_cdf_med.pdf} \caption{Same as Figure~\ref{fig:orig_diff} but using Method C kinematic distances.} \label{fig:pdf_diff} \end{figure*} We first compare the parallax distances to the Method A kinematic distances, \(D_A\). The distance differences are shown in Figure~\ref{fig:orig_diff}.
We compute the mean, median, and standard deviation of the distance difference (i.e., \(D_A-D_P\)), the absolute distance difference (i.e., \(|D_A-D_P|\)), the fractional distance difference (i.e., \((D_A-D_P)/D_P\)), and the absolute fractional distance difference (i.e., \(|D_A-D_P|/D_P\)). These values are listed in Table~\ref{tab:final_stats}. The fractional distance difference distribution in Panel (b) has a long tail towards larger kinematic distances. After subtracting the median offset, the kinematic distance uncertainties from the A12 model fit the differences between the kinematic and parallax distances well. Panel (f) of Figure~\ref{fig:orig_diff} shows that the ratio of the distance difference (minus the median difference) to the difference uncertainty follows a normal distribution, indicating that the kinematic and parallax distance uncertainties accurately represent the random errors in the distances. The K--S statistic for this distribution is \(0.121\), which corresponds to a p-value of \(0.203\). Panel (d), however, shows that some of the error bars are large even when the difference between the kinematic and parallax distance is small. This implies that the kinematic distance uncertainty model over-predicts the kinematic distance uncertainties in some cases. The median Method A fractional kinematic distance uncertainty (i.e., \(\sigma_A/D_A\)) is \(28.0\%\). Next, we compare the parallax distances to the Method B kinematic distances, \(D_B\). The differences between these two distances are shown in Figure~\ref{fig:reid_diff}, and the mean, median, and standard deviation statistics are listed in Table~\ref{tab:final_stats}. The mean and median distance differences are significantly smaller than those found using Method A. The fractional distance difference distribution is both centered closer to zero and narrower than the Method A distribution. The tail towards larger fractional differences is not nearly as long using this method.
Once again, the A12 kinematic distance uncertainty model seems to accurately represent the typical differences between the parallax and kinematic distances. The K--S statistic for the median-corrected CDF is \(0.081\) with a p-value of \(0.716\), consistent with the uncertainties being drawn from a normal distribution. The kinematic distance errors are, however, large for some sources whose kinematic and parallax distances are in good agreement. The median Method B kinematic distance uncertainty is the same as that of Method A, \(28.0\%\). Finally, we compare the parallax distances to the Method C kinematic distances, \(D_C\). The distance differences using this method are shown in Figure~\ref{fig:pdf_diff} and the mean, median, and standard deviation statistics are in Table~\ref{tab:final_stats}. These statistics and distributions are nearly identical to those found using Method B. The kinematic distance uncertainties derived using the Monte Carlo method (Method C) are just as accurate as those given by the A12 kinematic distance uncertainty model (Methods A and B). Panel (f) of Figure~\ref{fig:pdf_diff} shows that the kinematic distance uncertainties follow a normal distribution, with a K--S statistic of \(0.083\) (p-value of \(0.681\)). This distribution and K--S statistic are nearly the same as those of Method B, yet the distance uncertainties are not assigned based on a model but rather derived from the data and GRM. The median Method C kinematic distance uncertainty is slightly smaller than that of Method B at \(25.8\%\). More than half (56\%) of the Method C kinematic distance uncertainties are smaller than the Method B uncertainties. Despite these smaller error bars, Panel (f) of Figure~\ref{fig:pdf_diff} shows that these kinematic distance uncertainties fit the data just as well as the A12 model used in Methods A and B. Table~\ref{tab:final_stats} summarizes the aforementioned results for the three kinematic distance methods.
The median absolute distance difference is \({\sim}40\%\) smaller using Methods B and C, with a \({\sim}12\%\) smaller standard deviation. The median-corrected K--S statistic is about \(30\%\) smaller using Methods B and C, and nearly identical between Methods B and C. This suggests that the Monte Carlo-derived kinematic distance uncertainties (Method C) are just as accurate as the A12 kinematic distance uncertainty model (Method B). The median kinematic distance uncertainty, \(\sigma_D/D\), is the smallest using Method C. \input{final_stats.tex} \section{Kinematic Distance Ambiguity (KDA)\label{sec:kdar}} Thus far we have resolved the KDA by assigning the kinematic distance closest to the parallax distance, or by assigning objects within \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) of the tangent point velocity to the tangent point distance. The \textit{WISE} Catalog of Galactic H\,{\sc ii}\ Regions \citep{anderson2014} contains the KDAR for 34 of our sources determined using a variety of KDAR techniques. Here we compare our parallax-based KDARs to the \textit{WISE} Catalog KDARs. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{vlsr_maser_diff_hist.pdf} \\ \includegraphics[width=\linewidth]{mole_maser_diff_hist.pdf} \caption{Difference between RRL and maser LSR velocity (top) and molecular line and maser LSR velocity (bottom). The solid curve is the KDE fit to each distribution, and the vertical line is the median.} \label{fig:velocity} \end{figure} We first compare the LSR velocities of non-maser transitions in the \textit{WISE} Catalog to the maser velocities from \citet{reid2014}. The \textit{WISE} Catalog contains RRL velocities and/or non-maser molecular spectral line velocities for 34 HMSFRs: 6 regions with only RRL velocities, 11 regions with only molecular line velocities, and 17 regions with both.
Since the RRL emission comes from the ionized gas of the HMSFR and the non-maser molecular line emission comes from molecular clouds associated with the HMSFR, the LSR velocities of these transitions need not be the same as those of the maser emission, which originates within the molecular envelopes of the high mass stars. Figure~\ref{fig:velocity} shows the difference between the RRL and maser velocities and the difference between the molecular line velocities and maser velocities. The median difference is \(-0.70\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) with a standard deviation of \(3.83\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) for RRL velocities and \(0.55\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) with a standard deviation of \(2.73\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) for molecular line velocities. These distributions are consistent with the expected differences of up to \({\sim}10\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) between maser spot emission region motions and bulk gas motions \citep[e.g.,][]{reid2009b,reid2014}. These LSR velocity differences translate into differences in the kinematic distances. The differences between the Method C kinematic distances derived using the RRL, molecular, and maser velocities are shown in Figure~\ref{fig:distance_comp}. The median difference is \(0\ensuremath{\,{\rm kpc}}\xspace\) for both the RRL and maser distance difference and the molecular and maser distance difference, with standard deviations of \(0.25\ensuremath{\,{\rm kpc}}\xspace\) and \(0.21\ensuremath{\,{\rm kpc}}\xspace\), respectively. The maximum fractional difference is \(20\%\) in both cases, but the small standard deviations imply that the choice of LSR velocity tracer typically has only a minor impact on the derived kinematic distance.
\begin{figure*}[ht] \centering \includegraphics[width=0.45\linewidth]{vlsr_maser_kd_diff_hist.pdf} \includegraphics[width=0.45\linewidth]{vlsr_maser_kd_fracdiff_hist.pdf} \\ \includegraphics[width=0.45\linewidth]{mole_maser_kd_diff_hist.pdf} \includegraphics[width=0.45\linewidth]{mole_maser_kd_fracdiff_hist.pdf} \caption{Difference between the Method C kinematic distances derived using the RRL and maser LSR velocities (top) and molecular line and maser LSR velocities (bottom). The absolute difference is shown in the left panels and the fractional difference is shown in the right panels. The solid curve is the KDE fit to each distribution, and the vertical line is the median.} \label{fig:distance_comp} \end{figure*} If we limit our sample to inner-Galaxy \textit{WISE} Catalog objects more than \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) from the tangent point velocity using the \citet{reid2014} GRM, there are 9 HMSFRs. Of these, the KDAs are resolved using: H\,{\sc i}\ emission/absorption and self-absorption experiments based on RRL velocities \citep[2 objects;][]{anderson2009a,anderson2012}, H\,{\sc i}\ self-absorption experiments based on molecular line velocities \citep[2 objects;][]{urquhart2012,roman-duval2009}, and H\(_2\)CO absorption experiments \citep[4 objects;][]{araya2002,watson2003,sewilo2004}. One object is a visible H\,{\sc ii}\ region and thus likely located at the near distance. Based on the KDAR determined using the Method C kinematic distance method and selecting the distance closest to the parallax distance, the \textit{WISE} Catalog has incorrect KDARs for 3 of our sample objects: one source (G034.39+00.22) using an H\,{\sc i}\ self-absorption experiment based on RRL velocities \citep{anderson2009a} and two sources (G023.70-00.19, G035.02+00.34) using H\(_2\)CO absorption experiments \citep{watson2003,sewilo2004}. 
The \citet{anderson2009a} KDAR resolution for G034.39+00.22 was determined only with H\,{\sc i}\ self-absorption techniques, and, as the authors show in that paper, H\,{\sc i}\ self-absorption techniques are much less reliable than H\,{\sc i}\ emission/absorption techniques. Moreover, this object had a low-confidence H\,{\sc i}\ self-absorption detection (quality factor B in that paper). The H\(_2\)CO absorption spectra for the other two sources are marginal detections. The absorption feature for G023.70$-$00.19 is on the wing of the RRL \citep{sewilo2004}, and the absorption feature for G035.02+00.34 is weak and \({\sim}5\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) beyond the tangent point velocity \citep{watson2003}. This sample size is too small to draw any definitive conclusions about the accuracy of the KDAR techniques. Authors using the \textit{WISE} Catalog KDARs should investigate the original KDAR work to assess the quality of the distance resolution. \section{Discussion} Based on the results of this analysis, we recommend the following prescription for deriving kinematic distances: (1) correct the measured LSR velocity using the \citet{reid2014} Solar motion parameters and Equations~\ref{eq:helio} and \ref{eq:vlsr}; (2) use the corrected LSR velocity and the Monte Carlo method (Method C) to derive the kinematic distances and uncertainties; and (3) use only the highest quality KDARs from the \textit{WISE} Catalog (if available) to resolve the kinematic distance ambiguity. The \textit{Python} code we used to calculate the Monte Carlo kinematic distances is publicly available and may be utilized through an online tool\footnote{\url{http://doi.org/10.5281/zenodo.1166001}} \citep{kdutils2017}. Changing the method used to derive kinematic distances may have important implications.
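Step (1) of this prescription amounts to projecting the Solar motion onto the line of sight. The sketch below shows only the heliocentric-to-LSR step (the full prescription also first removes the IAU standard Solar motion from a published LSR velocity), and the default Solar motion values are placeholders, not the \citet{reid2014} fitted parameters.

```python
import numpy as np

def heliocentric_to_lsr(v_helio, glong_deg, glat_deg,
                        usun=10.7, vsun=15.6, wsun=8.9):
    """Project the Solar motion (usun toward the Galactic Center, vsun
    along Galactic rotation, wsun toward the north Galactic pole) onto
    the line of sight.  Angles in degrees, velocities in km/s.  The
    default Solar motion values are illustrative placeholders."""
    l, b = np.radians(glong_deg), np.radians(glat_deg)
    return (v_helio
            + (usun * np.cos(l) + vsun * np.sin(l)) * np.cos(b)
            + wsun * np.sin(b))
```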
When applying kinematic distances to Galactic morphological or metallicity structure analyses, it is important to consider the kinematic distance uncertainties and inaccuracies in the KDAR techniques. For example, \citet{koo2017} recently re-analyzed the Leiden/Argentine/Bonn H\,{\sc i}\ 21 cm line all-sky survey \citep{hartmann1997,arnal2000,bajaja2005,kalberla2005} to characterize the spiral structure in the outer Galaxy. They derived kinematic distances to their H\,{\sc i}\ features to produce a face-on map of the H\,{\sc i}\ distribution beyond the Solar orbit. Even though there is no KDA in this part of the Galaxy, their kinematic distances will be affected by the uncertainties discussed here. Their results, for example the derived pitch angles of the spiral features, may change significantly if they use the Monte Carlo method to derive the kinematic distances of their H\,{\sc i}\ features. Monte Carlo kinematic distances will also affect the interpretation of Galactic metallicity structure. For example, \citet{balser2015} recently discovered azimuthal variations in the radial metallicity gradient of the Milky Way inferred from the electron temperatures of Galactic H\,{\sc ii}\ regions. They used the \citet{reid2014} rotation curve to derive their kinematic distances and the A12 kinematic distance uncertainty model to assign distance uncertainties. After resampling their H\,{\sc ii}\ region distances within the A12 uncertainties, they determined that the azimuthal metallicity gradient variations were statistically significant. The \citet{balser2015} result may be affected by the results of this analysis. Not only will the kinematic distances for their sample of H\,{\sc ii}\ regions change slightly, but the uncertainties will change as well. These changes will affect the statistical significance of their result. This new kinematic distance method will affect all distance estimation techniques that rely, at least in part, on kinematic distances.
For example, \citet{reid2016a} used a Bayesian distance estimation method to derive the distance to HMSFRs. The priors in their method included the parallax distance (if available), the kinematic distance with equal weight given to both the near and far kinematic distance, the Galactic latitude, and a spiral arm model of the Galaxy. Instead of using a Gaussian kinematic distance PDF, future Bayesian analyses should use the full Monte Carlo kinematic distance PDF. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{da_dc_vlsr.pdf} \includegraphics[width=\linewidth]{db_dc_vlsr.pdf} \caption{Difference between Method A and Method C kinematic distances (top) and Method B and Method C kinematic distances (bottom) as a function of LSR velocity in the direction \(\ell=30^\circ\). The black points correspond to near kinematic distances and the red points correspond to far kinematic distances. The error bars are the combined uncertainties of both the A12 kinematic distance uncertainty model (Methods A and B) and the Method C Monte Carlo uncertainty. We exclude velocities within \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) of the tangent point velocity for clarity.} \label{fig:30long_dist} \end{figure} The difference between Method A and Method C kinematic distances is fairly large, whereas the difference between Method B and Method C is small. Figure~\ref{fig:30long_dist} shows the difference between the Method A and C distances as well as the Method B and C distances for LSR velocities along \(\ell=30^{\circ}\). We choose this line-of-sight because it crosses both the inner and outer Galaxy through most of the Galactic disk. The difference between Methods A and C is \(<0.5\ensuremath{\,{\rm kpc}}\xspace\) within the Solar orbit, and approaches \(3\ensuremath{\,{\rm kpc}}\xspace\) at a distance of \(20\ensuremath{\,{\rm kpc}}\xspace\). This discrepancy is caused by the differences between the GRMs used by each method.
The difference between Methods B and C, however, is small (\(\lsim0.5\ensuremath{\,{\rm kpc}}\xspace\)) across the Galaxy since both methods use the same GRM. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{dist_err_ratio.pdf} \caption{Ratio of the Method C Monte Carlo kinematic distance uncertainty to the A12 kinematic distance uncertainty model (Methods A and B) as a function of distance in the direction \(\ell=30^\circ\). The solid horizontal line indicates a ratio of one where \(\sigma_C = \sigma_{A,B}\). The spikes near \(5\ensuremath{\,{\rm kpc}}\xspace\) and \(9\ensuremath{\,{\rm kpc}}\xspace\) are at the boundaries of the ``tangent point region,'' defined where the LSR velocity is within \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) of the tangent point velocity.} \label{fig:kd_dist_err_ratio} \end{figure} The largest distinction between the different kinematic distance methods is the magnitude of the uncertainties. In Figure~\ref{fig:kd_dist_err_ratio} we show the ratio of the Method C kinematic distance uncertainty to those of Methods A and B (the A12 model) along \(\ell=30^\circ\). Except near the tangent point, the Method C kinematic distance uncertainties are smaller than the A12 model uncertainties. At a distance of \(15\ensuremath{\,{\rm kpc}}\xspace\), the Method C uncertainty is half of the A12 model uncertainty. The spikes near \(5\ensuremath{\,{\rm kpc}}\xspace\) and \(9\ensuremath{\,{\rm kpc}}\xspace\) are located at the boundaries of the ``tangent point region'' (within \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) of the tangent point velocity). Here, the Method C kinematic distance uncertainties are much larger than the A12 model uncertainties. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{parallax_dist.pdf} \includegraphics[width=\linewidth]{parallax_dist_frac.pdf} \caption{Face-on Galactic view of the parallax distance uncertainties assuming a typical parallax uncertainty of \(0.02\,\text{mas}\).
The top panel is the absolute distance uncertainty and the bottom panel is the fractional distance uncertainty. The Galactic Center is located at the origin and the Sun is located 8.34 kpc in the direction \(\theta_{\rm Az}=0^\circ\). The concentric circles are 4, 8, and 12 kpc in \(R\) and \(\theta_{\rm Az}\) is given in degrees. The color represents the distance uncertainty. The black regions represent distance uncertainties greater than \(\sigma_d = 2\ensuremath{\,{\rm kpc}}\xspace\) (top) or \(\sigma_d/d = 0.5\) (bottom). The gray points are the HMSFRs in our sample.} \label{fig:parallax_dist_unc} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{para_err_ratio.pdf} \caption{Ratio of the Method C Monte Carlo kinematic distance uncertainty to the typical parallax distance uncertainty as a function of distance in the direction \(\ell=30^\circ\). The solid horizontal line indicates a ratio of one where \(\sigma_C = \sigma_P\). The spikes near \(5\ensuremath{\,{\rm kpc}}\xspace\) and \(9\ensuremath{\,{\rm kpc}}\xspace\) are at the boundaries of the ``tangent point region,'' defined where the LSR velocity is within \(20\ensuremath{\,{\rm km\,s^{-1}}}\xspace\) of the tangent point velocity.} \label{fig:para_dist_err_ratio} \end{figure} Although kinematic distances are not as accurate as parallax distances in the Solar neighborhood, their accuracy is much better in distant regions of the Milky Way. To demonstrate this point, we generate a face-on view of the typical parallax distance uncertainty in the Galaxy (Figure~\ref{fig:parallax_dist_unc}). We assume a characteristic parallax uncertainty of \(0.02\,\text{mas}\) \citep{reid2014rev}, which corresponds to a typical parallax distance uncertainty of \(\sigma_d/\text{kpc} = 0.02(d/\text{kpc})^2\). This figure uses the same color scale as the Method C Monte Carlo kinematic distance uncertainty map in Figure~\ref{fig:pdf_uncertainty}.
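The quoted scaling follows directly from error propagation through \(\pi = 1/d\); a one-line sketch (the function name is ours):

```python
def parallax_distance_uncertainty(d_kpc, sigma_pi_mas=0.02):
    """With parallax pi(mas) = 1/d(kpc), error propagation gives
    sigma_d = sigma_pi * d**2, i.e., sigma_d/kpc = 0.02 (d/kpc)**2
    for a characteristic 0.02 mas parallax uncertainty."""
    return sigma_pi_mas * d_kpc**2
```

The quadratic growth with distance is why kinematic distances eventually win: at \(d=10\ensuremath{\,{\rm kpc}}\xspace\) the typical parallax distance uncertainty is already \(2\ensuremath{\,{\rm kpc}}\xspace\).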
By comparing these figures we see that large regions of Galactic quadrants I and IV (\(-90^\circ < \ell < 90^\circ\)) have Method C kinematic distance uncertainties much smaller than the typical parallax distance uncertainties. In Figure~\ref{fig:para_dist_err_ratio} we show the ratio of the Method C kinematic distance uncertainty to the typical parallax distance uncertainty along \(\ell=30^\circ\). Beyond the tangent point at a distance of \({\sim}8\ensuremath{\,{\rm kpc}}\xspace\), the Method C kinematic distance uncertainties are smaller than the typical parallax distance uncertainty. This ratio reaches a minimum at about \(14\ensuremath{\,{\rm kpc}}\xspace\) where the Method C kinematic distance uncertainty is less than \(10\%\) of the typical parallax distance uncertainty. The spikes near \(5\ensuremath{\,{\rm kpc}}\xspace\) and \(9\ensuremath{\,{\rm kpc}}\xspace\) are, again, located at the boundaries of the ``tangent point region,'' where the Method C kinematic distance uncertainties are much larger. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{parallax_Rgal.pdf} \\ \includegraphics[width=\linewidth]{parallax_az.pdf} \caption{Face-on Galactic view of the typical parallax distance uncertainties converted to Galactocentric coordinates, \(R\) (top) and \(\theta_{\rm Az}\) (bottom). The Galactic Center is located at the origin and the Sun is located 8.34 kpc in the direction \(\theta_{\rm Az}=0^\circ\). The concentric circles are 4, 8, and 12 kpc in \(R\) and \(\theta_{\rm Az}\) is given in degrees. The color represents the distance uncertainty. The black regions have uncertainties larger than the maximum value shown in the color scale. 
The gray points are the HMSFRs in our sample.} \label{fig:parallax_unc} \end{figure} \begin{figure*}[ht] \plottwo{{pdf_neg_Rgal}.pdf}{{pdf_neg_az}.pdf} \plottwo{{pdf_pos_Rgal}.pdf}{{pdf_pos_az}.pdf} \caption{Face-on Galactic view of the Monte Carlo kinematic distance uncertainties converted to Galactocentric coordinates, \(R\) (left) and \(\theta_{\rm Az}\) (right). The top figures are the distance uncertainties in the negative direction while the bottom figures are the distance uncertainties in the positive direction. The Galactic Center is located at the origin and the Sun is located 8.34 kpc in the direction \(\theta_{\rm Az}=0^\circ\). The concentric circles are 4, 8, and 12 kpc in \(R\) and \(\theta_{\rm Az}\) is given in degrees. The color represents the distance uncertainty. The black regions have uncertainties larger than the maximum value shown in the color scale. The regions \(-15^\circ < \ensuremath{\ell}\xspace < 15^\circ\) and \(160^\circ < \ensuremath{\ell}\xspace < 200^\circ\) are masked in white since kinematic distances are very inaccurate towards the Galactic Center and Galactic Anti-center. The gray points are the HMSFRs in our sample.} \label{fig:pdf_unc} \end{figure*} The accuracy of the Method C kinematic distances is especially apparent when we consider that Galactic structure analyses are more interested in the Galactocentric positions of structure tracers (\(R,\theta_{\rm Az},z\)) than the heliocentric positions (\(\ell,b,d\)). We derive the relationship between the distance uncertainty and uncertainties in \(R\) and \(\theta_{\rm Az}\) in Appendix~\ref{sec:unc_derivation}. Figure~\ref{fig:parallax_unc} shows the face-on uncertainties in Galactocentric position given these uncertainties in parallax distance. The same analysis using the Monte Carlo kinematic distance uncertainties is shown in Figure~\ref{fig:pdf_unc}. 
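The heliocentric-to-Galactocentric conversion underlying these maps can be sketched as follows. This is our own sketch with an assumed azimuth convention (\(\theta_{\rm Az}=0\) toward the Sun, increasing toward \(\ell=90^\circ\)); the paper's exact convention is defined in Appendix~\ref{sec:unc_derivation} and may differ in sign.

```python
import numpy as np

def galactocentric(glong_deg, glat_deg, d_kpc, R0=8.34):
    """Convert heliocentric (l, b, d) to Galactocentric (R, theta_Az).
    Assumed convention: theta_Az = 0 toward the Sun, increasing toward
    l = 90 deg as viewed face-on."""
    l, b = np.radians(glong_deg), np.radians(glat_deg)
    x = d_kpc * np.cos(b) * np.cos(l)   # toward l = 0
    y = d_kpc * np.cos(b) * np.sin(l)   # toward l = 90
    R = np.hypot(R0 - x, y)
    theta_az = np.degrees(np.arctan2(y, R0 - x))
    return R, theta_az
```

Because \(R\) and \(\theta_{\rm Az}\) depend on \(d\) through this nonlinear mapping, a given distance uncertainty maps into direction-dependent uncertainties in the Galactocentric coordinates, which is what Figures~\ref{fig:parallax_unc} and \ref{fig:pdf_unc} display.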
In large regions of Galactic quadrants I and IV (\(-90^\circ < \ell < 90^\circ\)), the Monte Carlo kinematic distances have smaller uncertainties in both \(R\) and \(\theta_{\rm Az}\) than the parallax distances. When the object is far from the Solar neighborhood, kinematic distances therefore constrain not only the distance but also the \textit{Galactocentric} position of objects more accurately than parallax distances. Streaming motions will have a \textit{systematic} effect on the accuracy of kinematic distances rather than a \textit{random} effect as we have assumed in this analysis. With a much larger catalog of parallax observations of HMSFRs, we could compare kinematic and parallax distances and uncover any systematic differences. We may then be able to create a non-axisymmetric GRM that includes these non-circular motions. Such a task requires parallax observations uniformly across the entire Galactic disk. \section{Conclusions} We investigate the accuracy of kinematic distances by comparing the kinematic and parallax distances of 75 Galactic HMSFRs. We derive the kinematic distances using three different methods: the traditional method using the \citet{brand1993} rotation curve and the IAU-defined Solar motion parameters (Method A), the traditional method using the \citet{reid2014} rotation curve and their revised Solar motion parameters (Method B), and a new Monte Carlo method using the \citet{reid2014} rotation curve and their revised Solar motion parameters (Method C). The best agreement between the kinematic and parallax distances occurs when we use Method C. In this case, the median absolute difference between the kinematic distances and parallax distances is \(0.71\ensuremath{\,{\rm kpc}}\xspace\) with a standard deviation of \(0.83\ensuremath{\,{\rm kpc}}\xspace\). The Method C kinematic distance uncertainties are smaller than those of Methods A and B for most of the Galaxy except near the tangent point.
Along the line of sight with \(\ell=30^\circ\), for example, the Method C kinematic distance uncertainty is \(50\%\) of the Method A and B uncertainties at a distance of \(15\ensuremath{\,{\rm kpc}}\xspace\). We test the accuracy of KDAR techniques using the KDARs derived in the literature for 9 of our inner-Galaxy, non-tangent point HMSFRs. When we use the \textit{WISE} catalog KDARs to compare the parallax distances to our Monte Carlo kinematic distances, the KDAR is incorrect in 3 cases, but each of these KDARs is a low-quality determination. We recommend a new prescription for deriving and applying kinematic distances and their uncertainties: (1) correct the measured LSR velocity using the \citet{reid2014} Solar motion parameters and Equations~\ref{eq:helio} and \ref{eq:vlsr}; (2) use the corrected LSR velocity and the Monte Carlo method (Method C) to derive the kinematic distances and uncertainties; and (3) use only the highest quality KDARs from the \textit{WISE} Catalog to resolve the kinematic distance ambiguity. Based on the typical parallax distance uncertainties, we show that, in a large region of Galactic quadrants I and IV (\(-90^\circ < \ell < 90^\circ\)), both the distances and the Galactocentric positions of HMSFRs are more accurately constrained by the Method C kinematic distances than by parallax distances. In the direction \(\ell=30^\circ\), for example, the Method C kinematic distance uncertainties are smaller than the parallax distance uncertainties everywhere beyond the tangent point, reaching a minimum of \(10\%\) of the parallax distance uncertainty at a distance of \(14\ensuremath{\,{\rm kpc}}\xspace\). The code to derive the Method C Monte Carlo kinematic distances and kinematic distance uncertainties is publicly available and may be utilized through an online tool. In a future paper, we will investigate the effects of using the Monte Carlo kinematic distances on the interpretation of Galactic morphological and metallicity structure.
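To make step (2) of the prescription concrete, the following is a toy Monte Carlo kinematic distance assuming a \emph{flat} rotation curve (a simplification of ours for illustration only; the actual method uses the \citet{reid2014} rotation curve, Solar motion corrections, and the resulting distance probability distribution):

```python
import numpy as np

R0, THETA0 = 8.34, 240.0  # kpc and km/s, representative Galactic parameters

def kinematic_distances(ell_deg, vlsr, sigma_v=7.0, n=10_000, seed=0):
    """Monte Carlo kinematic distances (kpc) for a flat rotation curve.
    Returns samples of the near and far distances for an inner-Galaxy
    line of sight; sigma_v models the LSR velocity uncertainty (km/s)."""
    rng = np.random.default_rng(seed)
    ell = np.radians(ell_deg)
    v = rng.normal(vlsr, sigma_v, n)
    # Invert v = THETA0 * sin(l) * (R0/R - 1) for Galactocentric radius R.
    r = R0 / (v / (THETA0 * np.sin(ell)) + 1.0)
    # Law of cosines: d**2 - 2*R0*cos(l)*d + (R0**2 - R**2) = 0.
    disc = r**2 - (R0 * np.sin(ell))**2
    root = np.sqrt(disc[disc >= 0.0])
    near = R0 * np.cos(ell) - root
    far = R0 * np.cos(ell) + root
    return near, far

near, far = kinematic_distances(30.0, 60.0)
print(np.median(near), np.median(far))
```

The two roots of the quadratic are the kinematic distance ambiguity inside the Solar circle that the KDARs in step (3) resolve.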
\acknowledgments TVW is supported by the NSF through the Grote Reber Fellowship Program administered by Associated Universities, Inc./National Radio Astronomy Observatory, the D.N. Batten Foundation Fellowship from the Jefferson Scholars Foundation, the Mars Foundation Fellowship from the Achievement Rewards for College Scientists Foundation, and the Virginia Space Grant Consortium. LDA is supported by NSF grant AST1516021. TMB is supported by NSF grant AST1714688. We thank the anonymous referee for useful comments and suggestions that improved the quality of this paper. \nraoblurb \software{Astropy \citep{astropy2013}, KDUtils \citep{kdutils2017}, Matplotlib \citep{matplotlib2007}, NumPy \& SciPy \citep{numpyscipy2011}, Pandas \citep{pandas2010}, PyQt-Fit (\url{http://pyqt-fit.readthedocs.io/}), Python (\url{https://www.python.org/})}
\section{Introduction} Formal Concept Analysis~\cite{DBLP:books/daglib/0095956} is a mathematical framework that allows the extraction of interesting patterns, called concepts, from a binary relation. Those patterns form a hierarchy called a concept lattice. \medskip The size of the lattice, potentially exponential in the size of the relation, is one of the main drawbacks in its use as a data representation. In some cases, it is possible to avoid using the whole lattice, as it contains redundant information~\cite{DBLP:journals/tapos/GodinMMMAC98}. The AOC-poset (or Galois Sub-Hierarchy) is a sub-order of the lattice that preserves only some of its key elements~\cite{Godin1993}. Its size is potentially much smaller than the size of the associated concept lattice~\cite{DBLP:conf/icfca/CarbonnelHG15} and it can be used in place of the concept lattice to perform certain tasks~\cite{DBLP:journals/ijgs/DolquesBHG16, DBLP:conf/ismis/BazinCK17}. \medskip The generalisation of FCA to the $n$-dimensional case, Polyadic Concept Analysis~\cite{DBLP:journals/order/Voutsadakis02}, focuses on multidimensional data, i.e. $n$-ary relations. \medskip In this paper, we generalise the notion of AOC-posets to $n$-lattices. In Section~\ref{sec:def}, we provide the definitions and notations that we use throughout the paper. Section~\ref{sec:introducers} is dedicated to the definition of introducer concepts in $n$-lattices and to some of their properties. In Section~\ref{sec:algo}, we present an algorithm to compute the introducer sub-order, and study its complexity. Finally, we conclude and discuss future work.
\subsection{Formal Concept Analysis} From now on, we will omit the brackets in the notation for sets when no confusion is induced by this simplification. \medskip A (formal) context is a triple $(\mathcal S_1,\mathcal S_2,\mathcal R)$ in which $\mathcal S_1$ and $\mathcal S_2$ are sets and $\mathcal R\subseteq \mathcal S_1\times \mathcal S_2$ is a binary relation between them. The elements of $\mathcal S_1$ are called the (formal) objects and those of $\mathcal S_2$ the (formal) attributes. A pair $(x_1,x_2)\in\mathcal R$ means that ``the object $x_1$ has the attribute $x_2$''. A context can be represented as a cross table, as shown in Fig.~\ref{fig:toycontext}. For instance, object $1$ has attributes $a$ and $b$, and attribute $b$ is shared by objects $1$ and $2$. \medskip \begin{figure}[ht] \centering \begin{tabular}{c | c c c} & a & b & c \\ \hline 1 &$\times$&$\times$& \\ 2 & & $\times$ & $\times$ \\ 3 & $\times$ & & $\times$ \\ \end{tabular} \caption{\label{fig:toycontext}An example of a context $\mathcal C =(\mathcal S_1, \mathcal S_2,\mathcal R)$ with $\mathcal S_1=\{1,2,3\}$ and $\mathcal S_2 =\{a, b, c\}$.} \end{figure} \medskip Two \emph{derivation operators} $(\cdot)^\prime:2^{\mathcal S_1}\mapsto 2^{\mathcal S_2}$ and $(\cdot)^\prime:2^{\mathcal S_2}\mapsto 2^{\mathcal S_1}$ are defined. For $X_1\subseteq \mathcal S_1$ and $X_2\subseteq \mathcal S_2$, $X_1^\prime=\{a\ |\ \forall x\in X_1, (x,a)\in\mathcal R\}$ and $X_2^\prime =\{o\ |\ \forall y\in X_2, (o,y)\in \mathcal R\}$. \medskip A formal concept is a pair $(X_1,X_2)$ where $X_1\subseteq \mathcal S_1$, $X_2\subseteq \mathcal S_2$, $X_1' = X_2$ and $X_2' = X_1$. This corresponds to a maximal set of objects that share a maximal set of attributes and can be viewed as a maximal rectangle full of crosses in the formal context, up to permutations on the elements of the rows and columns. $X_1$ is called the \emph{extent} of the concept, while $X_2$ is called the \emph{intent}. 
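To make the derivation operators concrete, the following Python sketch (ours, purely illustrative; not part of any FCA tool) implements both operators for the context of Fig.~\ref{fig:toycontext} and enumerates all of its formal concepts by closing every subset of objects:

```python
from itertools import chain, combinations

# Toy context of Fig. 1: objects {1,2,3}, attributes {a,b,c}.
OBJECTS = {1, 2, 3}
ATTRIBUTES = {"a", "b", "c"}
RELATION = {(1, "a"), (1, "b"), (2, "b"), (2, "c"), (3, "a"), (3, "c")}

def prime_objects(xs):
    """Derivation operator: attributes shared by every object in xs."""
    return {a for a in ATTRIBUTES if all((x, a) in RELATION for x in xs)}

def prime_attributes(ys):
    """Derivation operator: objects having every attribute in ys."""
    return {o for o in OBJECTS if all((o, y) in RELATION for y in ys)}

def concepts():
    """Enumerate all formal concepts as (extent, intent) pairs."""
    found = set()
    for xs in chain.from_iterable(
            combinations(sorted(OBJECTS), r) for r in range(len(OBJECTS) + 1)):
        intent = prime_objects(set(xs))
        extent = prime_attributes(intent)  # closure of xs
        found.add((frozenset(extent), frozenset(intent)))
    return found

print(len(concepts()))  # 8 concepts, matching the lattice of Fig. 2
```

Closing every object subset is exponential in $|\mathcal S_1|$; practical algorithms such as Ganter's NextClosure avoid materialising all subsets.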
\medskip The set of all concepts of a context ordered by the inclusion relation on either one of their components forms a complete lattice. Additionally, every complete lattice is isomorphic to the concept lattice of some context~\cite{DBLP:books/daglib/0095956}. The concept lattice associated with the formal context from Fig.~\ref{fig:toycontext} is shown in Fig.~\ref{fig:toylattice}. \begin{figure}[ht] \centering \begin{tikzpicture}[-,>=stealth',shorten >=1pt,shorten <=4pt] \node (bot) at (0,0) {$(\emptyset, abc)$}; \node (c1) at (-1,1.3) {$(1,ab)$}; \node (c2) at (0,1.3) {$(2,bc)$}; \node (c3) at (1,1.3) {$(3,ac)$}; \node (b1) at (-1,2.7) {$(12,b)$}; \node (b2) at (0,2.7) {$(13,a)$}; \node (b3) at (1,2.7) {$(23,c)$}; \node (top) at (0,4) {$(123,\emptyset)$}; \path[] (bot) edge [] (c1) (bot) edge [] (c2) (bot) edge [] (c3) (top) edge [] (b1) (top) edge [] (b2) (top) edge [] (b3) (c1) edge [] (b1) (c1) edge [] (b2) (c2) edge [] (b1) (c2) edge [] (b3) (c3) edge [] (b2) (c3) edge [] (b3); \end{tikzpicture} \caption{Concept lattice associated with $\mathcal C$.\label{fig:toylattice}} \end{figure} \medskip In an application, the size of the concept lattice might be a drawback. For this reason, Godin et al.~\cite{Godin1993} introduced a sub-hierarchy of the lattice, the Galois sub-hierarchy. This sub-hierarchy was introduced and is most often used in the field of software engineering, but is also used in other fields, such as Relational Concept Analysis (RCA)~\cite{DBLP:journals/amai/HuchardHRV07} and data mining~\cite{DBLP:journals/ijgs/DolquesBHG16}. Additionally, the Galois sub-hierarchy is integrated in some FCA tools, such as Latviz~\cite{DBLP:conf/cla/AlamLN16}, {\sc Galicia}~\cite{valtchev2003galicia}, RCAExplore~\footnote[1]{\url{http://dolques.free.fr/rcaexplore/}} or AOC-poset Builder~\footnote[2]{\url{http://www.lirmm.fr/AOC-poset-Builder/}}. 
\medskip \begin{definition}[Introducer concept] An \emph{Object-Concept} is a concept $(o^{\prime\prime}, o^\prime)$ with $o\in \mathcal S_1$. We say that this concept \emph{introduces} $o$. An \emph{Attribute-Concept} is a concept $(a^\prime, a^{\prime\prime})$ with $a\in \mathcal S_2$. We say that this concept \emph{introduces} $a$. \end{definition} \medskip A concept can introduce both attributes and objects, or it can introduce neither. We call the sub-order restricted to the introducer concepts an \emph{Attribute-Object-Concept partially ordered set} (AOC-poset), or Galois Sub-Hierarchy (GSH). While a concept lattice can have up to $2^{\min(|\mathcal S_1|,|\mathcal S_2|)}$ concepts, the associated GSH has at most $|\mathcal S_1|+|\mathcal S_2|$ elements. Several algorithms exist to compute the GSH~\cite{Dicky1995, DBLP:journals/ita/HuchardDL00, DBLP:conf/icfca/ArevaloBHPS07, Berry2014}. \subsection{Polyadic Concept Analysis} \begin{definition} An \emph{$n$-context} is an $(n+1)$-tuple $\mathcal{C} = (\mathcal S_1,\dots,\mathcal S_n,\mathcal R)$ in which $\mathcal S_i$, $i\in \{1,\dots,n\}$, is a set called a \emph{dimension} and $\mathcal R$ is an $n$-ary relation between the dimensions. \end{definition} \medskip An $n$-context can be represented by a $|\mathcal S_1|\times \dots \times |\mathcal S_n|$ cross table as illustrated in Fig.~\ref{fig:context}.
\begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.5] \pgfmathsetmacro{\cubex}{5} \pgfmathsetmacro{\cubey}{5} \pgfmathsetmacro{\cubez}{5} \draw[thick] (0,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle; \draw[thick] (0,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle; \draw[thick] (0,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle; \draw[dotted, thick] (-\cubex,-\cubey,0) -- ++(0,0,-\cubez) -- ++ (\cubex,0,0); \draw[dotted, thick] (-\cubex,-\cubey,-\cubez) -- ++(0,\cubey,0); \foreach \x in {1,...,\cubex}{ \draw (-\cubex+\x, -\cubey, 0) -- ++ (0,\cubey, 0); \draw (-\cubex-0.01+\x, 0, 0) -- ++ (0,0,-\cubez); } \foreach \y in {1,...,\cubey}{ \draw (-\cubex, -\cubey+\y, 0) -- ++ (\cubex,0, 0); \draw (0, -\cubey-0.01+\y, 0) -- ++ (0, 0, -\cubez); } \foreach \z in {1,...,\cubez} \draw (-\cubex, 0, -\z)-- ++ (\cubex,0, 0) -- ++ (0,-\cubey,0); \node (S1) at (-\cubex-1, -2.5,0) {$S_1$}; \coordinate (boutS11) at (-\cubex-0.5, 0, 0); \coordinate (boutS12) at (-\cubex-0.5, -\cubey, 0); \draw[<->] (boutS11) -- (boutS12); \node (S2) at (-\cubex, 1,-2) {$S_2$}; \coordinate (boutS21) at (-\cubex, 0.5, 0); \coordinate (boutS22) at (-\cubex, 0.5, -\cubez); \draw[<->] (boutS21) -- (boutS22); \node (S3) at (-2.5, -\cubey-1,0) {$S_3$}; \coordinate (boutS31) at (-\cubex, -\cubey-0.5, 0); \coordinate (boutS32) at (0, -\cubey-0.5, 0); \draw[<->] (boutS31) -- (boutS32); \end{tikzpicture} \caption{Visual representation of a 3-context without its crosses\label{fig:context}.} \end{figure} \medskip \begin{definition} \medskip An \emph{$n$-concept} of $\mathcal{C} = (S_1,\dots,S_n,R)$ is an $n$-tuple $(X_1,\dots,X_n)$ such that~$\prod_{i\in \{1,\dots,n\}} X_i\subseteq R$ and there are no $i\in \{1,\dots,n\}$ and $k\in S_i\setminus X_i$ such that~$\{k\}\times \prod_{j\in \{1,\dots,n\}\setminus \{i\}} X_j\subseteq R$. 
\medskip \end{definition} An $n$-concept can be viewed as a maximal $n$-dimensional box full of crosses up to permutations on the elements of the dimensions. We denote by $\mathcal T(\mathcal C)$ the set of $n$-concepts of an $n$-context $\mathcal C$. \medskip \begin{figure}[ht] \begin{center} \begin{tabular}{c|lll||lll} & a & b & c & a & b & c \\ \hline 1 &$\times$&$\times$&\hphantom{$\times$}& $\times$ & & \\ 2 & & & & $\times$ & & \\ 3 & $\times$ & & & $\times$ & & $\times$\\ \hline \multicolumn{1}{c|}{} & \multicolumn{3}{c||}{$\alpha$} & \multicolumn{3}{c}{$\beta$}\\ \end{tabular} \caption{An example of a $2\times 3\times 3$ $3$-context\label{fig:exampleContext}.} \end{center} \end{figure} \medskip In the Fig.~\ref{fig:exampleContext} example, seven 3-concepts are present: $(\alpha, 1, ab)$, $(\alpha\beta, 13, a)$, $(\beta, 3, ac)$, $(\beta, 123, a)$, $(\alpha\beta, 123, \emptyset)$, $(\alpha\beta,\emptyset,abc)$ and $(\emptyset, 123,abc)$. \medskip \begin{definition}[From~\cite{DBLP:journals/order/Voutsadakis07}] $\mathcal S=(S,\lesssim_1,\dots,\lesssim_n)$ is an $n$-ordered set if for $A, B\in S$: \begin{enumerate} \item $A\sim_i B,\forall i\in\{1,\dots,n\} \Rightarrow A=B$ (Uniqueness Condition) \item $A\lesssim_i B, \forall i\in(\{1,\dots,n\}\setminus j)\Rightarrow B\lesssim_j A$ (Antiordinal Dependency) \end{enumerate} \end{definition} \medskip For the Antiordinal Dependency condition to be satisfied, it is sufficient to have $i,j\in \{1,\dots,n\}, i\not=j$ such that $A\lesssim_i B$ and $B\lesssim_j A$. \medskip The set of all the $n$-concepts of an $n$-context together with the $n$ quasi-orders $\lesssim_i$ induced by the inclusion relations on the subsets of each dimension forms an $n$-ordered set. Additionally, the existence of some particular joins makes it a complete $n$-lattice. Every $n$-lattice can be associated with some $n$-context~\cite{DBLP:journals/order/Voutsadakis02}.
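The seven 3-concepts above can be checked by brute force. The following Python sketch (ours, for illustration only; it enumerates all $2^{|\mathcal S_1|}\cdot 2^{|\mathcal S_2|}\cdot 2^{|\mathcal S_3|}$ candidate boxes, which is only feasible for tiny contexts) tests the full-box and maximality conditions of the definition on the Fig.~\ref{fig:exampleContext} context:

```python
from itertools import combinations, product

D1, D2, D3 = ("alpha", "beta"), (1, 2, 3), ("a", "b", "c")
# Crosses of the 3-context of Fig. 4.
R = {("alpha", 1, "a"), ("alpha", 1, "b"), ("alpha", 3, "a"),
     ("beta", 1, "a"), ("beta", 2, "a"), ("beta", 3, "a"), ("beta", 3, "c")}

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_box(x1, x2, x3):
    """The full-box condition: X1 x X2 x X3 is contained in R."""
    return all(t in R for t in product(x1, x2, x3))

def triconcepts():
    found = []
    for x1, x2, x3 in product(subsets(D1), subsets(D2), subsets(D3)):
        if not is_box(x1, x2, x3):
            continue
        # Maximality: no single element of any dimension can be added.
        if any(is_box(x1 | {k}, x2, x3) for k in set(D1) - x1): continue
        if any(is_box(x1, x2 | {k}, x3) for k in set(D2) - x2): continue
        if any(is_box(x1, x2, x3 | {k}) for k in set(D3) - x3): continue
        found.append((x1, x2, x3))
    return found

print(len(triconcepts()))  # 7, matching the concepts listed above
```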
\medskip \begin{definition}\label{def:couche} Let $x\in \mathcal S_i$ be an element of a dimension $i$. We denote by $\mathcal C_x$ the $(n-1)$-context $\mathcal C_x = (S_1,\dots,S_{i-1},S_{i+1},\dots,S_n, \mathcal R_x)$ where \[\mathcal R_x=\{(s_1,\dots,s_{i-1}, s_{i+1},\dots,s_n)\ |\ (s_1,\dots,s_{i-1},x,s_{i+1},\dots,s_n)\in \mathcal R\}\] \end{definition} With the previous definition, $C_x$ is the $(n-1)$-context corresponding to element $x$, represented by the shaded area in Fig.~\ref{fig:couche}. \begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.53] \pgfmathsetmacro{\cubex}{5} \pgfmathsetmacro{\cubey}{5} \pgfmathsetmacro{\cubez}{5} \draw[thick] (0,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle; \draw[thick] (0,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle; \draw[thick] (0,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle; \draw[dotted, thick] (-\cubex,-\cubey,0) -- ++(0,0,-\cubez) -- ++ (\cubex,0,0); \draw[dotted, thick] (-\cubex,-\cubey,-\cubez) -- ++(0,\cubey,0); \foreach \x in {1,...,\cubex}{ \draw (-\cubex+\x, -\cubey, 0) -- ++ (0,\cubey, 0); \draw (-\cubex-0.01+\x, 0, 0) -- ++ (0,0,-\cubez); } \foreach \y in {1,...,\cubey}{ \draw (-\cubex, -\cubey+\y, 0) -- ++ (\cubex,0, 0); \draw (0, -\cubey-0.01+\y, 0) -- ++ (0, 0, -\cubez); } \foreach \z in {1,...,\cubez} \draw (-\cubex, 0, -\z)-- ++ (\cubex,0, 0) -- ++ (0,-\cubey,0); \node (x) at (-0.25,0.25,-\cubez) {$x$}; \fill[color=gray!20, pattern=north east lines, very thin] (0,0,-\cubez) -- ++ (-1,0,0) -- ++ (0,0,\cubez) -- ++ (0,-\cubey,0) -- ++ (1,0,0) -- ++ (0,0,-\cubez) --cycle; \node (S1) at (-\cubex-1, -2.5,0) {$S_1$}; \coordinate (boutS11) at (-\cubex-0.5, 0, 0); \coordinate (boutS12) at (-\cubex-0.5, -\cubey, 0); \draw[<->] (boutS11) -- (boutS12); \node (S2) at (-\cubex, 1,-2) {$S_2$}; \coordinate (boutS21) at (-\cubex, 0.5, 0); \coordinate (boutS22) at (-\cubex, 0.5, -\cubez); \draw[<->] (boutS21) -- (boutS22); \node (S3) at 
(-2.5, -\cubey-1,0) {$S_3$}; \coordinate (boutS31) at (-\cubex, -\cubey-0.5, 0); \coordinate (boutS32) at (0, -\cubey-0.5, 0); \draw[<->] (boutS31) -- (boutS32); \end{tikzpicture} \caption{If $x$ is an element of $\mathcal S_3$ in this 3-context, then $C_x$ is the 2-context resulting from fixing $x$\label{fig:couche}.} \end{figure} \section{Introducer concepts in $n$-Lattices} \label{sec:introducers} In this section, we define introducer concepts in $n$-lattices. In the following definitions, we call the dimension $i$ under consideration the \emph{height}, while all the other dimensions together are called the \emph{width}. \medskip \begin{definition} Let $x\in\mathcal S_i$ be an element of a dimension $i$. The concepts that contain $x$ in the height and are maximal in width are the introducer concepts of $x$. The set of introducer concepts of $x$ is denoted by $I_x$. \end{definition} \medskip In the Fig.~\ref{fig:exampleContext} example, we have $I_\alpha=\{(\alpha\beta,13,a), (\alpha,1,ab)\}$. \medskip We denote by $\mathcal I(\mathcal S_i) = \bigcup_{x\in\mathcal S_i} I_x$ the set of concepts that introduce an element of dimension $i$ and by $\mathcal I(\mathcal C) = \bigcup_{i\in\{1,\dots,n\}} \mathcal I(\mathcal S_i)$ the set of all introducer concepts of a context $\mathcal C$. \medskip As in the 2-dimensional case, irreducible elements are introducer concepts. However, $\mathcal I(\mathcal C)$ is not always strictly the set of irreducible elements, as some applications expect the context not to be reduced. \medskip \begin{proposition}\label{prop:nposet} $(\mathcal I(\mathcal C),\lesssim_1,\dots,\lesssim_n)$ is an $n$-ordered set. \end{proposition} \begin{proof} Let $A$ and $B$ be in $\mathcal I(\mathcal C)$. We recall that $A_i\subseteq B_i\Leftrightarrow A\lesssim_iB$ and that $A_i=B_i\Leftrightarrow A\sim_i B$. Without loss of generality, let $A\in\mathcal I(\mathcal S_i)$ and $B\in\mathcal I(\mathcal S_j)$.
If $\forall k\in\{1,\dots,n\}$, $A\sim_k B$, then $\forall k\in\{1,\dots,n\}$, $A_k = B_k$, so $A=B$ (Uniqueness Condition). If $A$ and $B$ are distinct, $\exists k\in\{1,\dots,n\}$ such that $A\lesssim_k B$ or $B\lesssim_k A$. Without loss of generality, suppose $A\lesssim_k B$. Suppose that there is no $\ell\in\{1,\dots,n\}$ such that $B\lesssim_\ell A$. This implies that all the components of $A$ are included in the corresponding components of $B$, which contradicts the maximality condition implied by $A$ being a concept. Thus $\exists \ell\in\{1,\dots,n\}\setminus \{k\}$ such that $B\lesssim_\ell A$ (Antiordinal Dependency). \qed \end{proof} \medskip As in the 2-dimensional case, where concept lattices and GSH are respectively complete lattices and partially ordered sets, in the $n$-dimensional case we have complete $n$-lattices and $n$-ordered sets. \medskip \begin{proposition}\label{prop:fromConcepts} Let $x\in \mathcal S_i$. If $(X_1,\dots,X_{i-1},X_{i+1},\dots,X_n)$ is an $(n-1)$-concept of $\mathcal{C}_x$, then there exists $X_i\subseteq \mathcal S_i$ such that $(X_1,\dots,\{x\}\cup X_i,\dots,X_n)$ is an introducer of $x$. Conversely, if $(X_1,\dots,\{x\}\cup X_i,\dots,X_n)$ is an introducer of $x$, then $(X_1,\dots,X_{i-1},X_{i+1},\dots,X_n)$ is an $(n-1)$-concept of $\mathcal{C}_x$. \end{proposition} \begin{proof} We suppose, without loss of generality, that $x\in \mathcal S_1$. The $(n-1)$-concepts of $\mathcal C_x$ are of the form $(X_2,\dots,X_n)$. If $(x,X_2,\dots,X_n)$ is an $n$-concept of $\mathcal C$, then it is minimal in height and maximal in width and is thus an introducer of $x$. \medskip If $(x,X_2,\dots,X_n)$ is not an $n$-concept of $\mathcal C$, as $(X_2,\dots,X_n)$ is an $(n-1)$-concept of $\mathcal C_x$, then $(x,X_2,\dots,X_n)$ can be augmented only on the first dimension. As such, there exists an $n$-concept $(\{x\}\cup X_1,X_2,\dots,X_n)$ that is maximal in width and is thus an introducer of $x$.
\medskip Suppose that there is an $X=(X_1,\dots,X_n)\in I_x, x\in X_1$, that is not obtained from an $(n-1)$-concept of $\mathcal C_x$ by extending $X_1$. It means that $(X_2,\dots,X_n)$ is not maximal in $\mathcal C_x$ (else it would be an $(n-1)$-concept). Then, there exists an $n$-concept $Y=(Y_1,Y_2,\dots,Y_n)$ with $Y_1\subseteq X_1$ and $X_i\subseteq Y_i$, for $i\in\{2,\dots,n\}$. This contradicts the fact that $X$ is an introducer of $x$.\qed \end{proof} Proposition~\ref{prop:fromConcepts} states that every $(n-1)$-concept of $\mathcal C_x$ maps to an introducer of $x$ in $\mathcal{C}$, and that every introducer of $x$ is the image of an $(n-1)$-concept of $\mathcal C_x$. \section{Algorithm} \label{sec:algo} In this section, we present an algorithm to compute the introducer concepts in an $n$-context. \medskip Algorithm~\ref{algo:introDim} computes the introducers for each element of a dimension $i$. For a given element $x\in \mathcal S_i$, we compute $\mathcal T(\mathcal C_x)$. Then, for each $(n-1)$-concept $X\in \mathcal T(\mathcal C_x)$, we build the set $X_i$ needed to extend $X$ into an $n$-concept. An element $y$ is added to $X_i$ when $y\times\prod_{j\not=i} X_j\subseteq \mathcal R$, that is, if there exists an $(n-1)$-dimensional box full of crosses (but not necessarily maximal) in $\mathcal R$ at level $y$. The final set $X_i$ always contains at least $x$.
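The extension step just described can be sketched in Python for the 3-dimensional case (our illustrative code, using a naive object-closure enumerator for $\mathcal T(\mathcal C_x)$ and the context of Fig.~\ref{fig:exampleContext}; it extends every $(n-1)$-concept of $\mathcal C_x$, including the trivial ones with empty components):

```python
from itertools import combinations, product

D1, D2, D3 = ("alpha", "beta"), (1, 2, 3), ("a", "b", "c")
R = {("alpha", 1, "a"), ("alpha", 1, "b"), ("alpha", 3, "a"),
     ("beta", 1, "a"), ("beta", 2, "a"), ("beta", 3, "a"), ("beta", 3, "c")}

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def concepts_2d(objs, attrs, rel):
    """Formal concepts of a 2-context, by closing every object subset."""
    found = set()
    for xs in subsets(objs):
        intent = frozenset(a for a in attrs if all((o, a) in rel for o in xs))
        extent = frozenset(o for o in objs if all((o, a) in rel for a in intent))
        found.add((extent, intent))
    return found

def introducer_dim1():
    """Candidates produced by the extension step for dimension 1."""
    intro = {}
    for x in D1:
        # C_x: the 2-context obtained by fixing x (Definition 3).
        rel_x = {(s2, s3) for (s1, s2, s3) in R if s1 == x}
        intro[x] = set()
        for (x2, x3) in concepts_2d(D2, D3, rel_x):
            # Extend back to a 3-concept: collect every layer y whose
            # slice contains the box X2 x X3.
            x1 = frozenset(y for y in D1
                           if all((y, s2, s3) in R for s2, s3 in product(x2, x3)))
            intro[x].add((x1, x2, x3))
    return intro

intro = introducer_dim1()  # intro["alpha"] contains (alpha,1,ab) and (alphabeta,13,a)
```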
\medskip \begin{algorithm}[ht] \DontPrintSemicolon \KwIn{$\mathcal C$ an $n$-context, $i\in\{1,\dots,n\}$ a dimension} \KwOut{$\mathcal I(\mathcal S_i)$ the set of introducer concepts of elements of dimension $i$} $I\gets \emptyset$\; \ForEach{$x\in\mathcal S_i$}{ $C\gets\emptyset$\; \ForEach{$X=(X_1,\dots,X_{i-1},X_{i+1},\dots,X_n)\in\mathcal T(\mathcal C_x)$}{ $X_i\gets \emptyset$\;\label{candidate} \ForEach{$y\in\mathcal S_i$}{\label{loop:extend} \If{$\prod_{j\not=i} X_j\times y\subseteq \mathcal R$}{ $X_i\gets X_i\cup y$\; } } $C\gets C \cup (X_1,\dots,X_i,\dots,X_n)$\;\label{line:cup1} } $I\gets I\cup C$\;\label{line:cup2} } \Return{$I$}\; \caption{{\sc IntroducerDim}$(\mathcal C, i)$} \label{algo:introDim} \end{algorithm} \medskip Algorithm~\ref{algo:intro} calls Algorithm~\ref{algo:introDim} on each dimension. This ensures that each element of each dimension is scanned for its introducer concepts. Algorithm~\ref{algo:intro} computes the introducer set $\mathcal I(\mathcal C)$ for $n$-context $\mathcal C$. \medskip \begin{algorithm}[ht] \DontPrintSemicolon \KwIn{$\mathcal C$ an $n$-context} \KwOut{$\mathcal I(\mathcal C)$ the set of all introducer concepts for $\mathcal C$} $R \gets \emptyset$\; \ForEach{dimension $i$} { $R\gets R\cup \text{\sc IntroducerDim}(\mathcal C,i)$\;\label{line:cup3} } \Return{$R$}\; \caption{{\sc Introducers}$(\mathcal C)$} \label{algo:intro} \end{algorithm} \medskip Algorithm~\ref{algo:introDim} requires the computation of the $(n-1)$-concepts from an $(n-1)$-context. Several algorithms exist to complete this task~\cite{DBLP:journals/tkdd/CerfBRB09, makhalova2017incremental, bazin2017incremental}. \medskip \begin{proposition} Algorithm~\ref{algo:introDim} terminates and returns all the introducer concepts of elements of the dimension $\mathcal S_i$. \end{proposition} \begin{proof} The $\mathcal S_i$ are finite. The set of $(n-1)$-concepts of an $(n-1)$-context resulting from fixing an element is also finite.
Algorithm~\ref{algo:introDim} passes over each element $x\in\mathcal S_i$ and over each concept of $\mathcal T(\mathcal C_x)$ exactly once. The maximality test on dimension $i$ looks at the elements of $\mathcal S_i$, which is finite. Thus, the algorithm terminates. Proposition~\ref{prop:fromConcepts} ensures that every introducer of $x$ can be computed from the concepts of $\mathcal C_x$. Thus, every introducer of an element of the dimension $\mathcal S_i$ is returned.\qed \end{proof} \medskip At the time of writing, the only known bound for the number of $n$-concepts of an $n$-context $(\mathcal S_1,\dots,\mathcal S_n,\mathcal R)$ is $\prod_{i\in\{1,\dots,n\}\setminus k} 2^{|\mathcal S_i|}$ with $k=\argmax_{k\in\{1,\dots,n\}}|\mathcal S_k|$. Let $\mathbb K_n$ be the maximal number of $n$-concepts in an $n$-context. Computing $\mathcal C_x$ from $\mathcal C$ is in $O(|\mathcal R|)$. Building the set $X_i$ that extends an $(n-1)$-concept of $\mathcal C_x$ into an introducer of $x$ can be done in $O(|\mathcal S_i|\times\prod_{j\not=i}|X_j|)$. We denote by $T$ the complexity of computing $\mathcal T(\mathcal C_x)$ from $\mathcal C_x$. Thus the complexity of Algorithm~\ref{algo:introDim} for context $\mathcal C =(\mathcal S_1,\dots,\mathcal S_n,\mathcal R)$ and dimension $i$ is $O\left(|\mathcal S_i|\times \left(T+\mathbb K_{n-1} \times \prod_{j\in\{1,\dots,n\}}|\mathcal S_j|\right)\right)$ and the complexity of Algorithm~\ref{algo:intro} is $O\left(\sum_{i\in\{1,\dots,n\}}\left(|\mathcal S_i|\times \left(T+\mathbb K_{n-1} \times \prod_{j\in\{1,\dots,n\}}|\mathcal S_j|\right)\right)\right)$. \section{Conclusion} In this paper, we introduced the $n$-dimensional equivalent of Galois Sub-Hierarchies or AOC-posets. We showed that the set of introducer concepts, together with the $n$ quasi-orders induced by the inclusion on each dimension, forms an $n$-ordered set. We provided an algorithm to compute the set of introducer concepts from an $n$-context.
\medskip Although our approach was not initially motivated by a specific application, it would be interesting to use the notion of introducer concepts in $n$ dimensions to address problems in software engineering or data mining. \medskip It would also be worthwhile to experiment on datasets (real and generated) to evaluate the gains (in terms of the number of concepts) of restricting to introducer concepts. \section*{Acknowledgements} This research was partially supported by the European Union's ``\emph{Fonds Europ\'een de D\'eveloppement R\'egional (FEDER)}'' program. \bibliographystyle{unsrt}
\section{Introduction} Recently, Tsai and Wang \cite{TsaiWang17} considered $n$-dimensional minimal submanifolds $\Sigma\subset M$ where $(M,g)$ is an $(n+m)$-dimensional ambient Riemannian manifold. They consider the partial Ricci operator on the normal bundle $N\Sigma$: $$\mathcal{R}(V) = \text{tr}_{\Sigma}(R(\cdot,V)\cdot)^\perp,$$ where $R$ is the Riemann curvature tensor of $(M,g)$. They call $\Sigma$ \emph{strongly stable} if $\mathcal{R} - \mathcal{A}$ is a (pointwise) positive operator on $N\Sigma$, where $\mathcal{A}$ is a quadratic expression in the second fundamental form of $\Sigma$ in $(M,g)$. In coordinates this condition is equivalent to asking that there exists a constant $c_0>0$ such that, for any $p \in \Sigma$: $$ -\sum_{\alpha,\beta,i}R_{i\alpha i\beta}v^\alpha v^\beta - \sum_{\alpha,\beta,i,j}h_{\alpha ij}h_{\beta ij} v^\alpha v^\beta \geq c_0 \sum_{\alpha} (v^\alpha)^2\, $$ for any $V = \sum_{\alpha} v^\alpha\bar{e}_\alpha \in N_p\Sigma$, where $(e_i)_{i=1,\ldots,n}$ and $(\bar{e}_\alpha)_{\alpha=1,\ldots,m}$ are orthonormal bases of $T_p\Sigma$ and $N_p\Sigma$ respectively and $(h_{\alpha ij})$ are the coefficients of the second fundamental form. Note that strong stability implies the integrand in the second variation formula for the volume functional is pointwise positive along $\Sigma$, and so $\Sigma$ is strictly stable in the usual sense. Tsai and Wang show that there are many examples of strongly stable minimal submanifolds, see \cite[Proposition A]{TsaiWang17}. Moreover, they show that strong stability implies local uniqueness of $\Sigma$ as a minimal submanifold as follows. \begin{theorem}[Theorem A, \cite{TsaiWang17}] \label{thm:tsaiwang1} Let $\Sigma^n\subset (M,g)$ be a compact, oriented minimal submanifold which is strongly stable. There exists a tubular neighbourhood $U$ of $\Sigma$ such that $\Sigma$ is the only compact minimal submanifold in $U$ of dimension at least $n$. \end{theorem} A further consequence is a dynamical stability result.
\begin{theorem}[Theorem B, \cite{TsaiWang17}]\label{thm:tsaiwang2} Let $\Sigma^n\subset (M,g)$ be a compact, oriented minimal submanifold which is strongly stable. If $\Gamma$ is an $n$-dimensional submanifold that is close to $\Sigma$ in $C^1$, then the mean curvature flow $\Gamma_t$ with $\Gamma_0=\Gamma$ exists for all time, and $\Gamma_t$ converges to $\Sigma$ smoothly as $t\rightarrow \infty$. \end{theorem} We first note that the local uniqueness result extends to a considerably weaker setting. \begin{theorem}\label{thm:mainthm.2} Let $\Sigma^n\subset (M,g)$ be a compact minimal submanifold which is strongly stable. There exists a tubular neighbourhood $U$ of $\Sigma$ such that, up to higher multiples of $\Sigma$, there is no other stationary integral varifold with support in $U$ of dimension greater than or equal to $n$. \end{theorem} We also show that dynamical stability extends to much weaker initial conditions. \begin{theorem}\label{thm:mainthm} Let $\Sigma^n\subset (M,g)$ be a compact, oriented minimal submanifold which is strongly stable. Then there exists a tubular neighbourhood $U$ of $\Sigma$ such that the following holds. Let $\Gamma$ be an integral $n$-current in $U$ which is in the same homology class (as currents) as $\Sigma$ in $U$ such that $\mathbf{M} [\Gamma] < 2 |\Sigma|$. Furthermore, let $\{\mu_t\}_{t\geq 0}$ be an enhanced Brakke flow starting at $\Gamma$. Then $\mu_t$ is non-vanishing for every $t\geq 0$ and converges smoothly to $\Sigma$ as $t \rightarrow \infty$. \end{theorem} Here $|\Sigma|$ denotes the volume of $\Sigma$ and $\mathbf{M} [\,\cdot\,]$ the mass of a current. For the definition of an enhanced Brakke flow see Theorem \ref{thm:enhanced-flow}. We shall deduce Theorem \ref{thm:mainthm.2} from Theorem \ref{thm:mainthm}: both are proved in Section \ref{sec:extension}. \begin{remark} One can drop the assumption that $\Sigma$ is orientable by working with flat chains mod 2 instead of integral currents. Then the same results hold true.
\end{remark} We shall apply our results to show that we can, in some important cases of interest, flow through the singularities of Lagrangian mean curvature flow which are proved to occur in the groundbreaking work of Neves \cite{NevesFTS}. We also obtain global long-time existence and smooth convergence of an enhanced Brakke flow starting from weak initial conditions in key examples of complete Ricci-flat manifolds with special holonomy. See Section \ref{sec:applications} for these applications. \paragraph{{\bf Acknowledgements}} This research was supported by an HIMR Focused Research Grant and Leverhulme Trust Research Project Grant RPG-2016-174. \section{Extension to enhanced Brakke flows}\label{sec:extension} Recall that a family of Radon measures $(\mu_t)_{t\geq 0}$ on $M$ is called an integral $n$-Brakke flow provided, given any $\varphi \in C_c^2(M;\mathbb R^+)$, the following inequality holds for every $t>0$: \begin{equation}\label{brakkeflow} \bar{D}_t\mu_t(\varphi)\leq \int -\varphi |\mathbf{H}|^2 + \langle\nabla \varphi, \mathbf{H} \rangle \, d\mu_t, \end{equation} where $\bar{D}_t$ denotes the upper derivative at time $t$, and $\mathbf{H}$ is the weak mean curvature vector. We take the right-hand side to be $-\infty$ if $\mu_t$ is not the mass measure of an integral $n$-varifold whose weak mean curvature is in $L^2$. Note that in the case that $\mu_t$ corresponds to a smooth motion by mean curvature flow, $\bar{D}_t$ is just the usual derivative and we have equality in \eqref{brakkeflow}. For more details we refer the reader to \cite{Ilmanen}. We recall Ilmanen's existence result for enhanced Brakke flows, which is proven using an elliptic regularisation scheme. \begin{theorem}[\cite{Ilmanen}, \S 8.1]\label{thm:enhanced-flow} Let $T_0$ be a local integral $n$-current in $(M^{n+m},g)$ with $\partial T_0 = 0$, finite mass ${\bf M}[T_0]<\infty$ and compact support.
There exists a local integral $(n+1)$-current $T$ in $M\times [0,\infty)$ and a family $\{\mu_t\}_{t\geq 0}$ of Radon measures on $M$ such that \begin{itemize} \item[$(i)$] (a) $\partial T = T_0$\\[1ex] (b) ${\bf M}[T_B]$, where $T_B = T\ensuremath{\,\textsf{\small \upshape L}\,} (M\times B),\ B\subset [0,\infty)$, is absolutely continuous with respect to $\mathcal{L}^1(B)$.\\[-2ex] \item[$(ii)$] (a) $\mu_0=\mu_{T_0}, {\bf M}[\mu_t]\leq {\bf M}[\mu_0]$ for $t>0$.\\[1ex] (b) $\{\mu_t\}_{t\geq 0}$ is an integral $n$-Brakke flow.\\[-2ex] \item[$(iii)$] $\mu_t\geq \mu_{\pi_\#(T_t)}$ for each $t\geq 0$, where $T_t$ is the slice $\partial(T\ensuremath{\,\textsf{\small \upshape L}\,} (M\times [t,\infty)))$ and $\pi: M\times \mathbb R \rightarrow M$ is the projection on the first factor. \end{itemize} \end{theorem} Ilmanen calls $(\{\mu_t\}_{t\geq 0}, T)$ with the above properties an enhanced Brakke motion. We will instead call this an \emph{enhanced Brakke flow}. Tsai--Wang's local uniqueness and long-time convergence results (Theorems \ref{thm:tsaiwang1}--\ref{thm:tsaiwang2}) hinge on the following estimate for the squared distance function $\psi$ to $\Sigma$, which we reformulate slightly for our purposes. Note that, although orientability of $\Sigma$ is assumed there, it is not needed for the proof. \begin{proposition}[Proposition 4.1, \cite{TsaiWang17}] \label{thm:distest} Let $\Sigma^n \subset (M,g)$ be a compact minimal submanifold which is strongly stable. There exist positive constants $\varepsilon_1$ and $c_1$, which depend on the geometry of $M$ and $\Sigma$, such that on the tubular neighbourhood $U_{\varepsilon_1}$ of $\Sigma$ we have: $$ \text{\emph{tr}}_n \nabla^2\psi \geq c_1 \psi\ ,$$ where $\nabla^2\psi$ is the Hessian of $\psi$, and $\text{\emph{tr}}_n$ is the sum of the smallest $n$ eigenvalues.
\end{proposition} We now show how Proposition \ref{thm:distest} together with White's barrier theorem, Theorem \ref{thm:barrier}, yields the proof of Theorem \ref{thm:mainthm}. Let $\Sigma^n \subset (M,g)$ be a compact, oriented minimal submanifold which is strongly stable and consider the tubular neighbourhood $U=U_{\varepsilon_1}$ given by Proposition \ref{thm:distest}. Let $\{\mu_t\}_{t\geq 0}$ be an integral $n$-Brakke flow in $(M,g)$ such that ${\rm spt}\, \mu_0 \subset U$. Recall the constant $c_1>0$ given by Proposition \ref{thm:distest} and consider for any $\varepsilon > 0$ the function \begin{equation}\label{eq.0} u(p,t) = e^{c_1 t} \psi - \varepsilon t \ . \end{equation} Then, since $\frac{\partial u}{\partial t} = c_1 e^{c_1 t}\psi - \varepsilon$ and $\text{tr}_n \nabla^2 u = e^{c_1 t}\,\text{tr}_n \nabla^2 \psi \geq c_1 e^{c_1 t}\psi$ by Proposition \ref{thm:distest}, we see that $$\frac{\partial u}{\partial t} - \text{tr}_n \nabla^2 u \leq - \varepsilon < 0\, ,$$ and thus by Theorem \ref{thm:barrier} that $$ u(x,t) \leq \varepsilon_1^2$$ on ${\rm spt}\, \mu_t$. Letting $\varepsilon \rightarrow 0$, this implies that $$ \psi \leq e^{-c_1t} \varepsilon_1^2$$ on ${\rm spt}\, \mu_t$ and thus \begin{equation}\label{eq.1} {\rm spt}\, \mu_t \subset U_{e^{-c_1 t/2} \varepsilon_1}\, . \end{equation} \vspace{1ex} \begin{proof}[Proof of Theorem \ref{thm:mainthm}] Continuing to use the notation above, by making $\varepsilon_1$ smaller if necessary we can assume that $[\Sigma]_U \neq 0$, where $[\Sigma]_U$ denotes the homology class of $\Sigma$ in $U$ with respect to integral currents. This implies that the infimum of $\mathbf{M}[S]$, where $S$ represents the same homology class as $\Sigma$ in $U$, is positive, i.e. \begin{equation}\label{eq.3} \delta:=\inf_{S \in [\Sigma]_U} \mathbf{M}[S] >0\ . \end{equation} Consider now an enhanced Brakke flow $(\{\mu_t\}_{t\geq 0}, T)$ starting at $\Gamma$ such that ${\rm spt}\, \mu_0 \subset U$. By \eqref{eq.1} and by Theorem \ref{thm:enhanced-flow} $(iii)$ we have that $${\rm spt}\, \mu_{\pi_\#(T_t)} \subset U_{e^{-c_1 t/2} \varepsilon_1 }$$ and thus ${\rm spt}\, T \subset U \times [0,\infty)$.
Since $\partial (T_{[0,t]}) = \Gamma - T_t$ we obtain $$ \partial \pi_\#(T_{[0,t]}) = \Gamma - \pi_\#(T_t) $$ and thus $\pi_\#(T_t) \in [\Sigma]_U$ for all $t\geq 0$. By \eqref{eq.3} and Theorem \ref{thm:enhanced-flow} $(iii)$ we obtain \begin{equation}\label{eq.4} \mu_t(M)\geq \mathbf{M}[\pi_\#(T_t)] \geq \delta >0 \end{equation} for all $t\geq 0$, and thus the flow is non-vanishing. Observe that the definition of Brakke flow implies that for any $0\leq t_1 \leq t_2$ one has the estimate \begin{equation}\label{eq.5} \int_{t_1}^{t_2} \int |\mathbf{H}|^2\, d\mu_t\, dt \leq \mu_{t_1}(M) - \mu_{t_2}(M)\, , \end{equation} where $\mathbf{H}$ is the mean curvature vector. Combining this with \eqref{eq.4} implies that for any sequence $t_i \rightarrow \infty$ there is a subsequence $t'_i \rightarrow \infty$ such that the flows $\{\mu_{t+t'_i}\}_{-t_i\leq t <\infty}$ converge to a non-vanishing Brakke flow $\{\bar{\mu}_t\}_{t\in \mathbb R}$. By \eqref{eq.1} we have ${\rm spt}\, \bar{\mu}_t \subset \Sigma$ for all $t \in \mathbb R$, and by \eqref{eq.5} we have that $\bar{\mu}_t$ is the mass measure of a stationary varifold for almost all $t\in \mathbb R$. Thus by the constancy theorem, see for example \cite{Simon}, we have for any such $t$ that $$ \bar{\mu}_t = \theta \,\mathcal{H}^n\ensuremath{\,\textsf{\small \upshape L}\,} \Sigma $$ for some constant multiplicity $\theta \in\mathbb N$. By assumption we have $\mathbf{M} [\Gamma] < 2 |\Sigma|$ and thus the monotonicity of total measure for Brakke flows implies that the multiplicity $\theta$ must be one. Thus $\{\bar{\mu}_t\}_{t \in \mathbb R}$ is the static Brakke flow corresponding to $\Sigma$. Brakke's regularity theorem, see \cite{Brakke} or \cite{Tonegawa14a, Tonegawa14b}, now implies that the convergence is smooth. This implies that as $t\rightarrow \infty$ the Brakke flow $\{\mu_t\}_{t\geq 0}$ converges smoothly to $\Sigma$.
\end{proof} \vspace{1ex} \begin{proof}[Proof of Theorem \ref{thm:mainthm.2}] One can use Proposition \ref{thm:distest} and the first variation formula for stationary varifolds to deduce Theorem \ref{thm:mainthm.2}. For convenience we instead use Theorem \ref{thm:mainthm}. We choose $U = U_{\varepsilon_1}$ as above. Assume $\Gamma^{n+k}$ is a stationary integral varifold with ${\rm spt}\, \Gamma \subset U$. Note first that the barrier \eqref{eq.0} works for all Brakke flows of dimension $n+k \geq n$. We can thus treat $\Gamma^{n+k}$ as a stationary Brakke flow. The proof of Theorem \ref{thm:mainthm} yields that ${\rm spt}\, \Gamma \subset \Sigma$. Thus $k = 0$ and, moreover, $\Gamma$ is the varifold associated to $\Sigma$ up to a constant multiplicity. \end{proof} \section{Applications}\label{sec:applications} \subsection{Singularities of Lagrangian mean curvature flow} Consider a compact special Lagrangian $L$ in a Calabi--Yau manifold. Suppose that $L$ is strongly stable. For example, we could assume $L$ has positive Ricci curvature, such as the zero section in $T^*S^n$ with the Stenzel metric \cite{Stenzel}, since $L$ is then strongly stable by \cite[Proposition A]{TsaiWang17}: this is a consequence of the Gauss equation and the special Lagrangian condition, which in particular imposes symmetries on the second fundamental form of $L$. It is worth noting that, by the work of Hein--Sun \cite{HeinSun}, special Lagrangian $n$-spheres with positive Ricci curvature are now known to exist in certain compact Calabi--Yau $n$-folds. Using the work of Neves in \cite{NevesFTS} we may construct a Lagrangian $L'$ Hamiltonian isotopic to $L$ which is arbitrarily $C^0$ close to $L$, but such that the Lagrangian mean curvature flow $L'_t$ starting at $L'$ develops a finite-time singularity. Thus, $L'$ cannot satisfy the conditions of Tsai--Wang's result, Theorem \ref{thm:tsaiwang2}.
However, we can choose $L'$ so that $\mathbf{M}[L']<2|L|$, and $L'$ is homologous to $L$ since it is Hamiltonian isotopic to $L$. Moreover, we can ensure that $L'$ lies in the tubular neighbourhood $U$ provided by Theorem \ref{thm:mainthm}, as $L'$ is $C^0$ close to $L$. Hence, applying Theorem \ref{thm:mainthm} gives that the enhanced Brakke flow starting at $L'$ exists for all time and converges smoothly to $L$. For all times before the first singular time of $L'_t$, the enhanced Brakke flow will agree with $L'_t$. Hence the enhanced Brakke flow enables us to flow through the singularity of $L'_t$ and still converge smoothly to the special Lagrangian $L$. It would be useful to study this situation further, to see if it sheds light on the problem of long-time existence and convergence of Lagrangian mean curvature flow. \subsection{Non-compact manifolds with special holonomy} There are several well-known examples of manifolds $M$ with complete Ricci-flat metrics with special holonomy and maximal volume growth, which have the structure of a vector bundle over a compact base: \begin{itemize} \item $T^*S^n$ $(n\geq 2)$ and $T^*\mathbb{CP}^n$ (Calabi--Yau, i.e.~holonomy $\mathrm{SU}(n)$, metrics \cite{Calabi,Stenzel}); \item $\Lambda^2_-T^*S^4$, $\Lambda^2_-T^*\mathbb{CP}^2$ and the spinor bundle of $S^3$ (holonomy $\mathrm{G}_2$ metrics \cite{BryantSalamon}); \item the negative spinor bundle of $S^4$ (holonomy $\mathrm{Spin}(7)$ metric \cite{BryantSalamon}). \end{itemize} In each case the zero section $\Sigma^n$ of the bundle is volume-minimizing (since it is calibrated) and strongly stable by \cite[Proposition A]{TsaiWang17}. Moreover, the squared distance function to $\Sigma$ is strictly convex away from $0$ \cite{TsaiWang16}, so we can take $U=M$ in our Theorems \ref{thm:mainthm.2}--\ref{thm:mainthm} in all of these cases.
We deduce that we get global uniqueness of $\Sigma$ amongst stationary integral varifolds in $M$ with support of dimension at least $n$, up to multiplicity, and long-time smooth convergence to $\Sigma$ of an enhanced Brakke flow starting at any $\Gamma\in[\Sigma]$ with mass strictly less than twice the volume of $\Sigma$. Notice in particular in the Calabi--Yau cases that we do not have to start with a Lagrangian and yet we still get convergence of an enhanced Brakke flow to the special Lagrangian base. As the results of Neves indicate \cite{NevesFTS}, one expects singularities to develop along the flow, even starting with a smooth Lagrangian initial condition, and so the enhanced Brakke flow gives a flow through singularities in these cases. We would similarly expect the mean curvature flow in the $\mathrm{G}_2$ and $\mathrm{Spin}(7)$ cases to develop singularities in general, yet we can still obtain a flow through singularities to the volume-minimising base. \begin{appendix} \section{Avoidance principle in higher codimension} We recall White's barrier theorem for mean curvature flow, see \cite[Theorem 14.1]{White_mcfnotes}. We include the proof for completeness. \begin{theorem}[White]\label{thm:barrier} Suppose $\mathcal{M}$ is the space-time support of an $n$-dimensional integral Brakke flow $\{\mu_t\}_{t\in I}$ in $\Omega \subset M$. Let $u:\Omega \times \mathbb R \rightarrow \mathbb R$ be a smooth function such that at $(x_0,t_0)$, $$\frac{\partial u}{\partial t} < \text{\emph{tr}}_n \nabla^2u\, ,$$ where $\nabla^2u$ is the spatial ambient Hessian, and $\text{\emph{tr}}_n$ is the sum of the smallest $n$ eigenvalues. Then $$ u\big|_{\mathcal{M} \cap \{t\leq t_0\}}$$ cannot have a local maximum at $(x_0,t_0)$. \end{theorem} \begin{proof} Suppose otherwise, for a contradiction. We may assume $\mathcal{M} = \mathcal{M} \cap \{t\leq t_0\}$ and that $u|_\mathcal{M}$ has a strict local maximum at $(x_0,t_0)$.
(Otherwise we could replace $u$ by $u- d(x,x_0)^4 - |t_0-t|^2$.) Let $P(r) = B_r(x_0) \times (t_0-r^2,t_0]$. Choose $r>0$ small enough so that $t_0-r^2$ is past the initial time of the flow, $r$ is smaller than the injectivity radius at $x_0$, $u|_{\mathcal{M}\cap \overline{P(r)}} $ has a maximum at $(x_0,t_0)$ and nowhere else, and $\tfrac{\partial u}{\partial t} < \text{tr}_n \nabla^2u$ on $\overline{P(r)}$. By adding a constant we can furthermore assume that $u|_{\mathcal{M} \cap (\overline{P(r)}\setminus P(r))}<0< u(x_0,t_0)$. We let $u^+:= \max\{u,0\}$ and insert $(u^+)^4$ into the definition of Brakke flow. Thus \begin{equation*} \begin{split} 0&\leq \int_{B_r} (u^+)^4 \, d\mu_{t_0} = \int_{B_r} (u^+)^4 \, d\mu_{t_0} - \int_{B_r} (u^+)^4 \, d\mu_{t_0-r^2}\\ &\leq \int_{t_0-r^2}^{t_0} \int \bigg(\frac{\partial}{\partial t} (u^+)^4 + \langle \mathbf{H}, \nabla(u^+)^4\rangle - |\mathbf{H}|^2 (u^+)^4 \bigg) \, d\mu_t dt\\ &\leq \int_{t_0-r^2}^{t_0} \int \bigg(\frac{\partial}{\partial t} (u^+)^4 - \text{div}_{\mathcal{M}}\big(\nabla(u^+)^4\big) \bigg) \, d\mu_t dt\\ &= \int_{t_0-r^2}^{t_0} \int 4 \bigg((u^+)^3\frac{\partial}{\partial t} u^+-3 (u^+)^2 |\nabla^\mathcal{M} u^+|^2 - (u^+)^3\text{div}_{\mathcal{M}}\big(\nabla(u^+)\big) \bigg) \, d\mu_t dt\\ &\leq \int_{t_0-r^2}^{t_0} \int 4 (u^+)^3\bigg(\frac{\partial}{\partial t} u^+ - \text{tr}_{n} \nabla^2u^+ \bigg) \, d\mu_t dt < 0\, , \end{split} \end{equation*} which is a contradiction. \end{proof} \end{appendix}
\section{Introduction} Let $F_{q}$ be the Galois field of $q$ elements and let $\mathrm{PG}(v,q)$ be the $v$-dimensional projective space over $F_{q}$. For an introduction to geometrical objects in such spaces, see~\cite{Hirs, HirsSt}. For an integer $\varrho $ with $0\leq \varrho \leq v$ we say that a set of points $S\subseteq PG(v,q)$ is $\varrho ${\em -saturating} if for any point $x\in PG(v,q)$ there exist $\varrho +1$ points in $S$ generating a subspace of $PG(v,q)$ in which $x$ lies, and $\varrho $ is the smallest value with this property, cf. \cite{DMP, DavO2, Ughi}. Note that the term ``saturated'' for the points of $S$ was introduced in \cite{Ughi} and subsequently used in several papers. But in \cite{PamSt} the points of $PG(v,q)\setminus S$ are said to be saturated, and this seems more natural. Therefore in \cite{DMP, DavO2} and here the points of $S$ are called ``saturating''. In \cite{G2007}, see also the references therein, saturating sets are called ``dense sets''. Note also that in \cite{BPW} saturating sets are called ``$R$-spanning sets''. Finally, in some works the points of $PG(v,q)\setminus S$ are said to be ``covered''. This term seems acceptable too. A $\varrho $-saturating set of $k$ points is called {\em minimal} if it does not contain a $\varrho $-saturating set of $k-1$ points \cite{DMP, Ughi}. In this paper we consider minimal 1-saturating sets in binary projective spaces $PG(v,2)$. A set $S\subset PG(v,2)$ is 1-saturating if any point of $PG(v,2)\setminus S$ lies on a bisecant of $S$. Arcs in $PG(2,2)$ and caps in $PG(v,2),$ $v\geq 3,$ are sets of points, no three of which are collinear. Complete arcs and caps are minimal 1-saturating sets \cite{DMP, Ughi} which we call ``CA sets'' for complete arcs and ``CC sets'' for complete caps.
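The bisecant condition is easy to verify by machine. Identifying each point of $PG(v,2)$ with the nonzero integer whose binary representation gives its coordinate vector (the same convention used for the tables later in the paper), the third point on the line through distinct points $p,q$ is the bitwise XOR $p \oplus q$. The following sketch (ours, not code from the paper; the function names are our own) checks the 1-saturating and minimality properties:

```python
from itertools import combinations

def is_1_saturating(S, v):
    """Check that every point of PG(v,2) outside S lies on a bisecant of S.

    Points of PG(v,2) are the nonzero integers 1 .. 2**(v+1) - 1;
    the third point on the line through distinct p, q is p ^ q.
    """
    S = set(S)
    covered = S | {p ^ q for p, q in combinations(S, 2)}
    return all(x in covered for x in range(1, 2 ** (v + 1)))

def is_minimal(S, v):
    """A 1-saturating k-set is minimal if no (k-1)-subset is 1-saturating."""
    S = set(S)
    return is_1_saturating(S, v) and not any(
        is_1_saturating(S - {s}, v) for s in S)

# The 5-point complete cap {1,2,4,8,15} in PG(3,2) is a minimal
# 1-saturating set:
print(is_minimal({1, 2, 4, 8, 15}, 3))  # True
```

The same check confirms, for instance, that the 4-point arc $\{1,2,4,7\}$ saturates $PG(2,2)$, while the triangle $\{1,2,4\}$ does not (the point $7$ lies on no bisecant).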
For sizes, constructions, and estimates of binary CA and CC sets, see, for example, \cite{ BW1, BW2, DavPrep02, DFP, DGMP2010, DMPbin, DMPJG, DT, FPJG, GDT, GL2010, Hirs, HirsSt, KhaLis, Lison, Weh1}, and the references therein. On the other hand, a minimal 1-saturating set may contain three points of the same line. Then it is neither an arc nor a cap. We call such a minimal 1-saturating set an ``NA set'' in $PG(2,2)$ and an ``NC set'' in $PG(v,2),$ $v\geq 3$. NC sets have a wider spectrum of possible sizes than CC sets. Some constructions, sizes, and estimates for binary NC sets are given in \cite{BDGMP2016, Handbook, BPW, Coh, DMP, FG2003, GDT, G2007, G2013, GL2010, GrSl, KaikRos, Ughi}, and the references therein, either directly or they can be obtained from those for $q = 2$. Of particular interest is \cite{DMPbin}, where several constructions of minimal 1-saturating sets in binary projective spaces $PG(v,2)$ are presented. In \cite{GL2010}, the authors observe that a minimal 1-saturating set can be obtained from a complete cap $S$ by fixing some $s \in S$ and replacing every point $s' \in S \setminus \{s\}$ by the third point on the line through $s$ and $s'$. From here on, we will denote this construction by GL. For NC sets we can use results of the theory of linear covering codes, e.g., of \cite{Handbook, BPW, Coh, DMPbin, DFMP-IEEE-LO, GDT, GrSl, KaikRos}, due to the following considerations. A $q$-ary linear code with codimension $r$ has {\em covering radius} $R$ if every $r$-positional $q$-ary column is equal to a linear combination of $R$ columns of a parity check matrix of this code and $R$ is the smallest value with this property. For an introduction to coverings of vector spaces over finite fields and to the concept of code covering radius, see \cite{Handbook, Coh}.
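With points written as integers, construction GL is a one-line operation, since the third point on the line through $s$ and $s'$ in $PG(v,2)$ is $s \oplus s'$. The sketch below (ours, not from \cite{GL2010}) applies GL to the complete cap of odd-weight points in $PG(3,2)$, i.e. the complement of a hyperplane; as noted later in the paper, this yields a hyperplane together with a point outside it:

```python
def gl_construction(S, s):
    """Construction GL: fix s in the complete cap S and replace every other
    point s' by the third point s ^ s' on the line through s and s'."""
    assert s in S
    return {s} | {s ^ t for t in S if t != s}

# The complement of a hyperplane in PG(3,2): the 8 odd-weight points.
cap = {1, 2, 4, 7, 8, 11, 13, 14}
# GL with s = 1 returns the even-weight hyperplane plus the point 1:
print(sorted(gl_construction(cap, 1)))  # [1, 3, 5, 6, 9, 10, 12, 15]
```

Applying GL instead to the 5-point complete cap $\{1,2,4,8,15\}$ in $PG(3,2)$ with $s=15$ returns $\{7,11,13,14,15\}$, another complete cap of size five, illustrating the projective equivalence noted later.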
The points of a $\varrho $-saturating $n$-set in $PG(r-1,q)$ can be considered as columns of a parity check matrix of a $q$-ary linear code of length $n,$ codimension $r,$ and covering radius $\varrho +1.$ This correspondence has been noted and used in many works, see, for example, \cite{BPW, DMP, DavO2}, and the references therein. For given codimension and covering radius, the theory of linear covering codes \cite{Handbook, Coh} is interested in codes of the smallest length, since these have small covering density. From a geometric perspective, saturating sets of the smallest size are also interesting as extremal objects. In terms of linear covering codes, the concept of minimal saturating sets corresponds to the concept of a locally optimal linear covering code; see \cite{DFMP-IEEE-LO}. A locally optimal code is nonshortening in the sense that one cannot remove any column from a parity-check matrix without increasing the code covering radius. At present, minimal saturating sets seem to have been studied insufficiently. In general, their smallest sizes and the spectrum of possible sizes are unknown. Relatively few constructions of minimal saturating sets are described in the literature. Note that in $\mathrm{PG}(v,2),$ a complete cap of maximal size is the complement of a hyperplane, see \cite{DT}, and its stabilizer group is $ ASL(v,2) $, while a minimal 1-saturating set of maximal size that is not a cap is a hyperplane together with a point outside it, see \cite[Corollary 1]{DMP}, and its stabilizer group is $ PSL(v,2) $. In both cases the size of the set is $2^v$. \\ The Structure Theorem of Davydov and Tombak gives a characterization of ``large'' binary caps: \begin{theorem}[\cite{DT}] Any ``large'' (cardinality $\geq 2^{v-1}+2$) complete cap in $\mathrm{PG}(v,2)$ is obtained by a repeated application of the doubling construction to a ``critical'' complete cap (cardinality $2^{k-1}+1$) in $\mathrm{PG}(k,2)$ for some $k < v$.
\end{theorem} In \cite{GL2010} it is stated that in $\mathrm{PG}(v,2)$ every 1-saturating set of size at least $\frac{11}{36} 2^{v+1} +3$ either is a complete cap or can be obtained from a complete cap $S$ by construction GL, that the 1-saturating sets of the second largest size are the complete cap of size $5 \times 2^{v-3}$ and the corresponding NC set defined as above, and that the third largest size is smaller than $\frac{11}{36} 2^{v+1} +3$. Note that by applying construction GL to the complement of a hyperplane, one obtains a hyperplane and a point outside it, while by applying it to the complete cap of size five in $\mathrm{PG}(3,2)$, one obtains a projectively equivalent complete cap; the same happens by applying construction GL to the complete cap of size 17 in $\mathrm{PG}(5,2)$ whose stabilizer group has order 40320. In this paper we present the classification of all the minimal $1$-saturating sets in $\mathrm{PG}(v,2)$ for $2 \leq v \leq 5$, and the classification of the smallest and of the second smallest minimal $1$-saturating sets in $\mathrm{PG}(6,2)$, giving for each set the list of its points, the description of its stabilizer group, and a reference to a theoretical construction when it is known. This classification has been obtained by computer. A summary of these results appeared for the first time in \cite[Section 5]{DMPbin}, where the structure of a minimal 1-saturating 19-set in $\mathrm{PG}(6,2)$ is also described in detail. \section{Classification of minimal 1-saturating sets in $\mathrm{PG}(v,2)$, $2 \leq v \leq 5$ and of small minimal 1-saturating sets in $\mathrm{PG}(6,2)$ }\label{classification_sec} We obtained the classification of the minimal 1-saturating sets in $\mathrm{PG}(v,2), 2 \leq v \leq 5$ and of the small minimal 1-saturating sets in $\mathrm{PG}(6,2)$ using an exhaustive computer search based on a backtracking algorithm \cite{DMP}.
The algorithm exploits equivalence properties among sets of points of $\mathrm{PG}(v,2)$ to reduce the search space. However, several projectively equivalent copies of the same minimal 1-saturating set can be obtained. Therefore the examples have been classified using Magma; see \cite{magma}. Using Magma, the stabilizer group of each example has been computed and, when not too large, identified. Then the names of the groups have been determined using GAP; see \cite{GAP}. The structure of the stabilizer group of the complete caps obtained by \cite[Construction D]{DMPbin} is described in \cite{DMP-Doubl2017OC}. In Table 1 we give the summary of the complete classification of minimal 1-saturating $k$-sets in $\mathrm{PG}(v,2),$ $v\leq 5,$ for all $k$, and in $\mathrm{PG}(6,2)$ for $k\leq 20$. For ``type'' CA, CC, NA, and NC, see the Introduction. The column $n$ gives the number of objects of the indicated type. ``Stab. group'' gives either the order of the stabilizer group if $n=1$ or the interval of the orders if $n>1.$ Table 1 appeared for the first time in \cite[Section 5]{DMPbin}. \begin{definition} $\;$\\ Let $t_{2}(v,q)$ be the smallest size of a complete arc in $\mathrm{PG}(2,q)$ for $v=2$ and the smallest size of a complete cap in $\mathrm{PG}(v,q)$ for $v \geq 3$.\\ Let $\ell(v,q,1)$ be the smallest size of a minimal 1-saturating set in $\mathrm{PG}(v,q)$.\\ Let $m(v,q,1)$ be the greatest size of a minimal 1-saturating set in $\mathrm{PG}(v,q)$.\\ Let $m^{\prime}(v,q,1)$ be the second greatest size of a minimal 1-saturating set in $\mathrm{PG}(v,q)$.\\ Let $m^{\prime \prime}(v,q,1)$ be the third greatest size of a minimal 1-saturating set in $\mathrm{PG}(v,q)$. \end{definition} By Table 1, we have \begin{eqnarray} t_{2}(2,2) &=&\ell(2,2,1)=m^{\prime \prime }(2,2,1)=m^{\prime }(2,2,1)=m(2,2,1)=4. \nonumber \\ t_{2}(3,2) &=&\ell(3,2,1)=m^{\prime \prime }(3,2,1)=5,\text{ }m^{\prime }(3,2,1)=6,\text{ }m(3,2,1)=8.
\nonumber \\ t_{2}(4,2) &=&\ell(4,2,1)=9,\text{ }m^{\prime \prime }(4,2,1)=10,\text{ }% m^{\prime }(4,2,1)=11,\text{ }m(4,2,1)=16. \nonumber \\ t_{2}(5,2) &=&\ell(5,2,1)=13,\text{ }m^{\prime \prime }(5,2,1)=18,\text{ }% m^{\prime }(5,2,1)=20,\text{ }m(5,2,1)=32. \nonumber \\ \ell(6,2,1) &=&19.\quad t_{2}(6,2)=21. \label{computres} \end{eqnarray} \begin{footnotesize} \begin{longtable} {|c|c|c|c|c||c|c|c|c|c|} \caption{ Complete classification of minimal 1-saturating $k$-sets in $PG(v,2)$, $v\leq 5$, for all $k$, and in $PG(6,2)$ for $k\leq 20$} \endfirsthead \multicolumn{8}{r}{\textit{(The table continues in the next page)}} \endfoot \multicolumn{5}{l}{Table 1 continue} \endhead \endlastfoot \hline \rule[0.1 mm]{0mm}{5 mm} v & k & \text{Type} & n & \text{Stab. group} & v & k & \text{Type} & n & \text{Stab. group} \\ \hline 2 & 4 & \rule[1.5 mm]{0mm}{5 mm} $ \begin{array}{c} \text{CA} \\ \text{NA} \end{array} $ & $ \begin{array}{c} 1 \\ 1 \end{array} $ & $ \begin{array}{c} 24 \\ 6 \end{array} $ & 5 & 14 & \text{NC} & 19 & 8\ldots 56448 \\ \hline \rule[.2 mm]{0mm}{5 mm} 3 & 5 & \text{CC} & 1 & 120 & 5 & 15 & \text{NC} & 14 & 4\ldots 72 \\ \hline \rule[.2 mm]{0mm}{5 mm} 3 & 6 & \text{NC} & 1 & 72 & 5 & 16 & \text{NC} & 15 & 2\ldots 12 \\ \hline \rule[1.5 mm]{0mm}{5 mm} 3 & 8 & $ \begin{array}{c} \text{CC} \\ \text{NC} \end{array} $ & $ \begin{array}{c} 1 \\ 1 \end{array} $ & $ \begin{array}{c} 1344 \\ 168 \end{array} $ & 5 & 17 & $ \begin{array}{c} \text{CC} \\ \text{NC} \end{array} $ & $ \begin{array}{c} 5 \\ 48 \end{array} $ & $ \begin{array}{c} 384\ldots 40320 \\ 2\ldots 8064 \end{array} $ \\ \hline 4 & 9 & \rule[1.5 mm]{0mm}{5 mm} $ \begin{array}{c} \text{CC} \\ \text{NC} \end{array} $ & $ \begin{array}{c} 1 \\ 1 \end{array} $ & $ \begin{array}{c} 336 \\ 144 \end{array} $ & 5 & 18 & $ \begin{array}{c} \text{CC} \\ \text{NC} \end{array} $ & $ \begin{array}{c} 1 \\ 108 \end{array} $ & $ \begin{array}{c} 10752 \\ 2\ldots 120960 \end{array} $ \\ \hline 4 & 10 & 
\rule[1.5 mm]{0mm}{5 mm} $ \begin{array}{c} \text{CC} \\ \text{NC} \end{array} $ & $ \begin{array}{c} 1 \\ 6 \end{array} $ & $ \begin{array}{c} 1920 \\ 8\ldots 1008 \end{array} $ & 5 & 20 & $ \begin{array}{c} \text{CC} \\ \text{NC} \end{array} $ & $ \begin{array}{c} 1 \\ 1 \end{array} $ & $ \begin{array}{c} 184320 \\ 9216 \end{array} $ \\ \hline \rule[1.5 mm]{0mm}{5 mm} 4 & 11 & NC & 1 & 10 & 5 & 32 & $ \begin{array}{c} \text{CC} \\ \text{NC} \end{array} $ & $ \begin{array}{c} 1 \\ 1 \end{array} $ & $ \begin{array}{c} \end{array} $ \\ \hline 4 & 16 & \rule[1.5 mm]{0mm}{5 mm} $ \begin{array}{c} \text{CC} \\ \text{NC} \end{array} $ & $ \begin{array}{c} 1 \\ 1 \end{array} $ & $ \begin{array}{c} 322560 \\ 20160 \end{array} $ & 6 & 19 & \text{NC} & 5 & 32\ldots 5760 \\ \hline 5 & 13 & \rule[1.5 mm]{0mm}{5 mm} $ \begin{array}{c} \text{CC} \\ \text{NC} \end{array} $ & $\begin{array}{c} 1 \\ 7 \end{array} $ & $ \begin{array}{c} 1152 \\ 32\ldots 4032 \end{array} $ & 6 & 20 & \text{NC} & 36 & 4\ldots 2880 \\ \hline \end{longtable} \end{footnotesize} \smallskip The relation $t_{2}(6,2)=21$ is based on the fact that in $PG(6,2)$ there is a complete 21-cap \cite[Th. 3]{GDT} but no complete $k$-caps with $k\leq 20,$ see Table 1 and \cite{KhaLis}. Note also that in \cite[p. 222]{GDT} the conjecture was made that this relation holds. The values of $\ell(v,2,1),$ $v\leq 6,$ and $t_{2}(v,2),$ $v\leq 5,$ are given also in \cite[Table 2]{Handbook} and \cite[Tables 3.1,4.2]{FPJG}, respectively. The classification of the complete caps in $PG(v,2), v \leq 6$ can be found in \cite{KhaLis}; in \cite{BMP} the classification of all caps, complete and incomplete, in $PG(5,2)$ is given, together with the list of the points and the description of the stabilizer group. In \cite[Remark 5, p. 271]{DT} five distinct complete 17-caps in $PG(5,2)$ are constructed and the conjecture is made that no other nonequivalent 17-caps in $PG(5,2)$ exist.
This conjecture is proved by an exhaustive computer search in \cite{DMPbin} (see Table 1, $k=17,$ type~CC) and in~\cite {KhaLis}. This fact allows us to obtain all nonequivalent complete $17\cdot 2^{v-5}$-caps in $PG(v,2),$ $v\geq 6,$ by applying Construction DC $(v-5)$ times to a complete 17-cap in $PG(5,2)$ \cite{DT}. Note that the five complete 17-caps in $PG(5,2)$ can be obtained using the construction described in \cite[Theorem 2.4]{KhaLis}; two of them can be obtained also using the construction $L_{21}$ of \cite{DMPbin}. \newpage The following tables give the classification of all the minimal 1-saturating sets in $\mathrm{PG}(v,2)$, $2 \leq v \leq 5$ and of the smallest and the second smallest minimal 1-saturating sets in $\mathrm{PG}(6,2)$. We denote a point $P$ of $\mathrm{PG}(v,2)$ by the decimal integer of which $P$ is the binary representation. When the order $i$ of a stabilizer group is too large to be identified by Magma, we denote the group by $G_i$. When possible, we indicate the construction giving the example: KL denotes \cite[Theorem 2.4]{KhaLis}; see \cite{DMPbin} for the other symbols.
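With the decimal point notation just described, the covering-code correspondence recalled in the Introduction can also be made concrete: the points of a 1-saturating $n$-set in $PG(r-1,2)$ are the columns of a parity check matrix of a binary code of length $n$, codimension $r$, and covering radius 2. The sketch below is an illustration of ours (not the code used for the classification; the function names are our own):

```python
from itertools import combinations

def parity_check_matrix(points, r):
    """Each point of PG(r-1,2), written as a decimal integer as in the
    tables, becomes a length-r binary column of the parity check matrix."""
    cols = sorted(points)
    return [[(p >> (r - 1 - i)) & 1 for p in cols] for i in range(r)]

def covering_radius_le_2(points, r):
    """R <= 2 iff every nonzero syndrome in F_2^r is a single column or the
    XOR of two columns, i.e. iff the points form a 1-saturating set."""
    reachable = set(points) | {p ^ q for p, q in combinations(points, 2)}
    return reachable >= set(range(1, 2 ** r))

# The 5-set {1,2,4,8,15} from the PG(3,2) table gives a binary code of
# length 5 and codimension r = 4 with covering radius 2:
H = parity_check_matrix({1, 2, 4, 8, 15}, 4)
print(covering_radius_le_2({1, 2, 4, 8, 15}, 4))  # True
```

Dropping any column (e.g. the point 15) destroys the covering property, matching the minimality of the set, i.e. the local optimality of the code.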
\begin{footnotesize} \begin{longtable} {|c|c|c|c|c|} \caption{Classification of the minimal 1-saturating sets in $\mathrm{PG}(2,2)$} \endfirsthead \multicolumn{5}{r}{\textit{(The table continues in the next page)}} \endfoot \multicolumn{5}{l}{ Table 2 continue} \endhead \endlastfoot \hline \rule[-1 mm]{0mm}{5 mm} Size & Type & \begin{minipage}{2.3 cm} \smallskip \centering List of \\ points \\ \medskip \end{minipage} & \begin{minipage}{2.3 cm} \smallskip \centering Stabilizer\\ group \\ \medskip \end{minipage} & \begin{minipage}{0.9 cm} \smallskip \centering Cons-\\ truc-\\ tion \\ \medskip \end{minipage} \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 4 $ & CA & $ \{1, 2, 4, 7\} $ & $ ASL(2,2) $ & H \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 4 $ & NA & $ \{1, 2, 4, 6\} $ & $ PSL(2,2) $ & A \\ \hline \end{longtable} \end{footnotesize} \begin{footnotesize} \begin{longtable} {|c|c|c|c|c|} \caption{Classification of the minimal 1-saturating sets in $\mathrm{PG}(3,2)$} \endfirsthead \multicolumn{5}{r}{\textit{(The table continues in the next page)}} \endfoot \multicolumn{5}{l}{Table 3 continue} \endhead \endlastfoot \hline \rule[-1 mm]{0mm}{5 mm} Size & Type & \begin{minipage}{2.3 cm} \smallskip \centering List of \\ points \\ \medskip \end{minipage} & \begin{minipage}{2.3 cm} \smallskip \centering Stabilizer\\ group \\ \medskip \end{minipage} & \begin{minipage}{0.9 cm} \smallskip \centering Cons-\\ truc-\\ tion \\ \medskip \end{minipage} \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 5 $ & CC & $ \{1, 2, 4, 8, 15\} $ & $ S_5 $ & B\\ \hline \rule[-1 mm]{0mm}{5 mm} $ 6 $ & NC & $ \{1, 2, 3, 4, 8, 12\} $ & $ (S_3 \times S_3) \rtimes C_2 $ & A \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 8 $ & CC & $ \{1, 2, 4, 7, 8, 11, 13, 14\} $ & $ ASL(3,2) $ & H \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 8 $ & NC & $ \{1, 2, 4, 5, 8, 9, 12, 13\} $ & $ PSL(3,2) $ & A \\ \hline \end{longtable} \end{footnotesize} \smallskip \newpage \begin{footnotesize} \begin{longtable} {|c|c|c|c|c|} \caption{Classification of the minimal 
1-saturating sets in $\mathrm{PG}(4,2)$} \endfirsthead \multicolumn{5}{r}{\textit{(The table continues in the next page)}} \endfoot \multicolumn{5}{l}{ \textbf{Table 5} continue} \endhead \endlastfoot \hline \rule[-1 mm]{0mm}{5 mm} Size & Type & \begin{minipage}{2.3 cm} \smallskip \centering List of \\ points \\ \medskip \end{minipage} & \begin{minipage}{2.3 cm} \smallskip \centering Stabilizer\\ group \\ \medskip \end{minipage} & \begin{minipage}{0.9 cm} \smallskip \centering Cons-\\ truc-\\ tion \\ \medskip \end{minipage} \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 9 $ & CC & $ \{1, 2, 4, 8, 14, 16, 22, 27, 28\} $ & $ C_2 \times PSL(3,2) $ & $L_{21}$ \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 9 $ & NC & $ \{1, 2, 4, 6, 8, 16, 20, 22, 27\} $ & $ S_3 \times S_4 $ & \begin{minipage}{0.4 cm} B,\\ GL \end{minipage} \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 10 $ & CC & $ \{1, 2, 4, 8, 15, 16, 21, 22, 27, 28\} $ & $ G_{1920} $ & D \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 10 $ & NC & $ \{1, 2, 4, 5, 8, 10, 16, 22, 27, 28\} $ & $ D_8 $ & \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 10 $ & NC & $ \{1, 2, 4, 8, 10, 16, 20, 22, 23, 27\} $ & $ D_{12} $ & \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 10 $ & NC & $ \{1, 2, 4, 8, 10, 14, 16, 17, 22, 28\} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 10 $ & NC & $ \{1, 2, 4, 5, 8, 11, 16, 22, 27, 28\} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 10 $ & NC & $ \{1, 2, 4, 8, 16, 20, 21, 22, 27, 28\} $ & \begin{minipage}{2.3 cm} $ (((C_2 \times D_8) \rtimes$ \\ $ C_2) \rtimes C_3) \rtimes C_2 $ \end{minipage} & \begin{minipage}{0.4 cm} $E_B$,\\ GL \end{minipage} \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 10 $ & NC & $ \{1, 2, 4, 6, 8, 9, 16, 18, 20, 22\} $ & $ S_3 \times PSL(3,2) $ & A \\ \hline \rule[-1 mm]{0mm}{5 mm} $ 11 $ & NC & $ \{1, 2, 4, 7, 8, 10, 11, 16, 22, 23, 24\} $ & $ D_{10} $ & P \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & CC & $\begin{minipage}{4.3 cm} \{1, 2, 4, 7, 8, 11, 13, 14, 16, \\19, 21, 22, 25, 26, 28, 31\} \end{minipage}$ & $ 
ASL(4,2) $ & H \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{4.3 cm} \{1, 2, 4, 6, 8, 10, 12, 14, 16,\\ 18, 20, 22, 24, 26, 28, 30\} \end{minipage} $ & $ PSL(4,2) $ & A \\ \hline \end{longtable} \end{footnotesize} \begin{footnotesize} \begin{longtable} {|c|c|c|c|c|} \caption{Classification of the minimal 1-saturating sets in $\mathrm{PG}(5,2)$} \endfirsthead \multicolumn{5}{r}{\textit{(The table continues in the next page)}} \endfoot \multicolumn{5}{l}{ Table 5 continue} \endhead \endlastfoot \hline Size & Type & \begin{minipage}{2.3 cm} \smallskip \centering List of \\ points \\ \medskip \end{minipage} & \begin{minipage}{2 cm} \smallskip \centering Stabilizer\\ group \\ \medskip \end{minipage} & \begin{minipage}{0.9 cm} \smallskip \centering Cons-\\ truc-\\ tion \\ \medskip \end{minipage} \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 13 $ & CC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 16, 25, 32, 37, 38, 43, 51, 58\} \end{minipage} $ & $ G_{1152} $ & $L_{21}$ \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 13 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 16, 20, 25, 32, 43, 52, 63\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 13 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 16, 20, 24, 25, 32, 37, 43, 46\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 13 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 16, 20, 25, 29, 32, 37, 43, 46\} \end{minipage} $ & $ C_2 \times C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 13 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 16, 17, 24, 25, 32, 37, 38, 43\} \end{minipage} $ & $ C_2 \times C_2 \times S_4 $ & GL \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 13 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 16, 21, 23, 32, 33, 58, 59\} \end{minipage} $ & $ C_2 \times C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 13 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 
6, 7, 8, 16, 27, 32, 43, 48, 59\} \end{minipage} $ & $ G_{1152} $ & B \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 13 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 27, 32, 43, 48, 59\} \end{minipage} $ & $ G_{4032} $ & $D_A$ \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 11, 16, 17, 25, 32, 43, 52, 63\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 13, 16, 17, 25, 32, 43, 52, 58\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 16, 17, 19, 25, 32, 43, 46, 52\} \end{minipage} $ & $ D_{12} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 16, 20, 25, 32, 36, 43, 48, 52\} \end{minipage} $ & $ S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 15, 16, 25, 27, 32, 43, 52, 63\} \end{minipage} $ & $ S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 16, 20, 25, 32, 36, 43, 48, 52\} \end{minipage} $ & $ S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 16, 20, 25, 32, 43, 52, 54, 57\} \end{minipage} $ & $ S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 27, 32, 33, 43, 52, 63\} \end{minipage} $ & $ S_4 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 15, 16, 25, 27, 32, 43, 52, 63\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 16, 17, 24, 25, 32, 37, 39, 43\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ 
\end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 16, 25, 29, 32, 37, 39, 43, 51, 59\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 16, 25, 29, 32, 34, 35, 37, 43, 51\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 16, 17, 24, 25, 32, 34, 37, 43, 47\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-6 mm]{0mm}{14 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 16, 17, 24, 25, 32, 37, 43, 47\} \end{minipage} $ & \begin{minipage}{2 cm} $ ((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_3) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 16, 25, 32, 43, 52, 54, 60, 62\} \end{minipage} $ & $ PSL(3,2) $ & \\ \hline \rule[-6 mm]{0mm}{14 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 16, 18, 25, 29, 32, 43, 52, 63\} \end{minipage} $ & \begin{minipage}{2.2 cm} $ (((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_3) \rtimes C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 16, 17, 24, 25, 32, 35, 37, 39\} \end{minipage} $ & $ S_4 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 16, 18, 21, 22, 32, 33, 58, 59\} \end{minipage} $ & $ G_{1152} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 14 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 5, 6, 7, 8, 16, 24, 32, 40, 48, 56\} \end{minipage} $ & $ G_{56448} $ & A \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 
4, 7, 8, 11, 16, 17, 21, 25, 29, 32, 34, 37, 43\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 15, 16, 17, 20, 25, 32, 43, 52, 63\} \end{minipage} $ & $ D_8 $ & $L_{12}$ \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 19, 25, 26, 31, 32, 39, 43, 52\} \end{minipage} $ & $ D_8 $ & $L_{12}$ \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 20, 25, 32, 43, 47, 52, 53, 63\} \end{minipage} $ & $ D_8 $ & $L_{12}$ \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 14, 16, 18, 20, 25, 32, 43, 48, 52\} \end{minipage} $ & $C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 27, 32, 43, 46, 52, 58, 62\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 32, 39, 43, 47, 52, 53\} \end{minipage} $ & $ D_ {12} $ & P \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 17, 22, 24, 25, 32, 43, 46, 52\} \end{minipage} $ & $ D_{12} $ & P \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 16, 25, 30, 32, 33, 35, 43, 48, 52\} \end{minipage} $ & $ D_{12} $ & P \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 15, 16, 25, 32, 39, 43, 46, 52, 63\} \end{minipage} $ & $C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 16, 17, 22, 23, 24, 25, 32, 36, 41, 43\} \end{minipage} $ & $ C_2 \times C_2 \times S_3 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 16, 20, 25, 32, 41, 43, 48, 50, 52\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 
\times C_2 \times C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 32, 39, 40, 43, 46, 52, 61\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 15 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 16, 22, 23, 24, 25, 30, 32, 36, 41, 43\} \end{minipage} $ & $ C_2 \times S_3 \times S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 15, 16, 25, 32, 43, 46, 50, 52, 62\} \end{minipage} $& $ 1 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 17, 25, 32, 43, 46, 47, 51, 52, 63\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 20, 25, 32, 37, 43, 46, 48, 50, 52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 20, 25, 32, 36, 43, 44, 46, 52, 62\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 27, 32, 43, 46, 48, 50, 52, 62\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 15, 16, 20, 25, 32, 40, 43, 46, 50, 52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 9, 15, 16, 25, 32, 40, 43, 46, 52, 59\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 20, 25, 32, 37, 43, 44, 46, 50, 52\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 20, 25, 32, 40, 42, 43, 46, 52, 63\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 
mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 26, 32, 43, 46, 50, 52, 56, 63\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 15, 16, 22, 24, 25, 26, 32, 43, 46, 52\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 20, 25, 27, 32, 42, 43, 46, 52, 58\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 9, 15, 16, 25, 27, 31, 32, 33, 43, 52\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 27, 31, 32, 33, 36, 43, 48, 52\} \end{minipage} $ & $ D_{12} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 15, 16, 17, 25, 32, 36, 40, 43, 47, 52\} \end{minipage} $& $ D_{12} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 16 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 27, 32, 41, 43, 48, 49, 52, 53\} \end{minipage} $ & $ C_2 \times D_8 $ & $L_{13}$ \\ \hline \rule[-6 mm]{0mm}{14 mm} $ 17 $ & CC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 19, 21, 25, 28, 32, 43, 49, 52, 61\} \end{minipage} $ & \begin{minipage}{2.2 cm} $ C_2 \times ((((C_2 \times D_8) \rtimes C_2) \rtimes C_3) \rtimes C_2) $ \end{minipage} & KL \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 17 $ & CC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 19, 21, 22, 25, 28, 32, 43, 49, 52\} \end{minipage} $ & \begin{minipage}{1.8 cm} $ ((A_4 \times A_4) \rtimes C_2) \rtimes C_2 $ \end{minipage} & \begin{minipage}{0.5 cm} KL,\\ $L_{21}$ \end{minipage} \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & CC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 16, 19, 21, 25, 28, 32, 38, 43, 49, 52, 61\} \end{minipage} $ & $ S_6 $ & KL \\ \hline \rule[-2 mm]{0mm}{8 
mm} $ 17 $ & CC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 16, 19, 25, 28, 32, 38, 43, 49, 52, 61, 62\} \end{minipage} $ & $ G_{11520} $ & KL \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 17 $ & CC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 19, 21, 22, 25, 26, 28, 31, 32, 43\} \end{minipage} $ & $ G_{40320} $ & \begin{minipage}{0.5 cm} KL,\\ $L_{21}$ \end{minipage} \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 11, 15, 16, 25, 27, 32, 43, 46, 52, 59, 63\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 15, 16, 25, 32, 43, 46, 48, 50, 52, 56, 59\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 15, 16, 25, 27, 32, 36, 40, 43, 46, 49, 52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 15, 16, 25, 27, 32, 43, 46, 48, 50, 52, 59\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 27, 30, 32, 43, 46, 48, 50, 52, 59\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 11, 15, 16, 25, 27, 32, 43, 46, 48, 52, 59\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 15, 16, 24, 25, 27, 32, 36, 43, 46, 52, 59\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 15, 16, 20, 24, 25, 32, 34, 43, 46, 50, 52\} \end{minipage} $ & $ C_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 24, 25, 27, 30, 32, 36, 43, 46, 49, 52\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 15, 16, 
25, 27, 32, 43, 46, 49, 52, 59, 63\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 15, 16, 22, 23, 24, 25, 31, 32, 43, 52, 54\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 15, 16, 25, 32, 43, 46, 49, 50, 52, 56, 59\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 15, 16, 25, 27, 30, 32, 43, 46, 49, 52, 63\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 15, 16, 25, 27, 32, 43, 46, 48, 52, 59, 63\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 15, 16, 24, 25, 32, 35, 36, 38, 43, 52, 54\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 15, 16, 25, 32, 33, 36, 43, 46, 52, 62\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 12, 15, 16, 20, 25, 32, 43, 46, 50, 52, 62\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 20, 25, 32, 41, 43, 49, 50, 52, 54, 57\} \end{minipage} $ & $ D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 15, 16, 19, 23, 25, 32, 34, 43, 46, 50, 52\} \end{minipage} $ & $ D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 6, 7, 8, 15, 16, 18, 25, 26, 32, 42, 43, 46, 52, 59\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 27, 29, 32, 43, 45, 46, 48, 50, 52\} \end{minipage} $ & $ C_2 
\times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 15, 16, 20, 24, 25, 32, 43, 49, 52, 54, 57\} \end{minipage} $ & $ D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 15, 16, 25, 30, 32, 43, 46, 48, 50, 52, 56\} \end{minipage} $ & $ D_{10} $ & $E_{9}$ \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 30, 32, 43, 46, 49, 50, 52, 56, 59\} \end{minipage} $ & $ D_{10} $ & $E_{9}$ \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 30, 32, 43, 46, 48, 50, 52, 56, 59\} \end{minipage} $ & $ D_{10} $ & $E_{9}$ \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 17, 20, 25, 32, 36, 43, 44, 47, 52, 53\} \end{minipage} $ & $ D_{10} $ & $E_{9}$ \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 15, 16, 17, 25, 32, 43, 48, 49, 52, 54, 57\} \end{minipage} $ & $ D_{10} $ & $E_{9}$ \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 12, 15, 16, 25, 32, 35, 40, 43, 46, 52, 60\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 15, 16, 25, 26, 29, 32, 33, 43, 46, 48, 52\} \end{minipage} $ & $ D_{20} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 15, 16, 25, 29, 32, 33, 43, 46, 48, 51, 52\} \end{minipage} $ & $ D_{20} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 9, 10, 11, 15, 16, 25, 27, 32, 33, 43, 52\} \end{minipage} $ & $ S_4 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 12, 15, 16, 19, 25, 32, 43, 46, 50, 52, 57\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times C_2) 
\rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 12, 15, 16, 24, 25, 32, 35, 39, 40, 43, 52\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 6, 7, 8, 11, 15, 16, 19, 25, 32, 34, 35, 43, 46, 52\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 21, 25, 32, 37, 43, 46, 49, 50, 52, 56\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 12, 15, 16, 19, 25, 32, 35, 40, 43, 46, 52\} \end{minipage} $ & \begin{minipage}{2.5 cm} $ (((C_4 \times C_2) \rtimes C_2) \rtimes C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 12, 16, 24, 25, 30, 32, 35, 39, 43, 52, 57\} \end{minipage} $ & \begin{minipage}{1.9 cm} $ (C_2 \times C_2 \times A_4) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 17, 19, 20, 21, 22, 23, 25, 32, 43\} \end{minipage} $ & $ C_2 \times C_2 \times S_4 $ & GL \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 13, 16, 17, 19, 20, 21, 22, 23, 25, 32, 43\} \end{minipage} $ & $ C_2 \times C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 17, 19, 20, 21, 23, 25, 29, 32, 43\} \end{minipage} $ & $ C_2 \times C_2 \times S_4 $ & GL \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 13, 16, 17, 19, 20, 21, 23, 25, 29, 32, 43\} \end{minipage} $ & $ C_2 \times C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 
8, 12, 15, 16, 21, 23, 25, 32, 43, 48, 52, 54, 57\} \end{minipage} $ & $ S_5 $ & P \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 21, 25, 32, 43, 46, 51, 52, 58, 60, 63\} \end{minipage} $ & \begin{minipage}{2.5 cm} $ (((C_2 \times D_8) \rtimes C_2) \rtimes C_3) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-6 mm]{0mm}{14 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 19, 20, 21, 22, 23, 25, 26, 32, 43\} \end{minipage} $ & \begin{minipage}{2.5 cm} $ C_2 \times ((((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_3) \rtimes C_2) \rtimes C_2) $ \end{minipage} & \begin{minipage}{0.5 cm} BL,\\ GL \end{minipage} \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 17, 20, 21, 24, 25, 28, 29, 32, 43\} \end{minipage} $ & $ G_{1152} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 6, 7, 8, 14, 16, 20, 21, 23, 24, 25, 26, 29, 32, 43\} \end{minipage} $ & $ C_2 \times S_6 $ & GL \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 6, 7, 8, 16, 17, 21, 23, 24, 25, 29, 31, 32, 43\} \end{minipage} $ & $ C_2 \times S_6 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 17 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 32, 58\} \end{minipage} $ & $ G_{8064} $ & B \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & CC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 19, 21, 22, 25, 26, 28, 32, 43, 52, 63\} \end{minipage} $ & $ G_{10752} $ & D\\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 11, 15, 16, 25, 32, 33, 36, 42, 43, 46,52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 9, 10, 11, 15, 16, 25, 32, 33, 42, 43, 46, 52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 
7, 8, 9, 10, 11, 15, 16, 25, 32, 40, 42, 43, 46, 52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 11, 12, 15, 16, 25, 32, 36, 40, 42, 43, 46, 52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 11, 15, 16, 25, 32, 36, 40, 42, 43, 46,52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 11, 15, 16, 25, 32, 33, 42, 43, 45, 46,52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 11, 12, 15, 16, 25, 32, 36, 42, 43, 46,52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 11, 12, 15, 16, 25, 32, 33, 42, 43, 45, 46, 52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 10, 11, 12, 15, 16, 25, 32, 40, 42, 43, 46,52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 10, 11, 12, 15, 16, 25, 32, 33, 42, 43, 46,52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 11, 15, 16, 25, 32, 40, 42, 43, 45, 46,52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 11, 12, 15, 16, 25, 32, 33, 36, 42, 43, 46, 52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 11, 12, 15, 16, 25, 32, 40, 42, 43, 45, 46, 52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 11, 15, 16, 25, 32, 33, 36, 40, 42, 43, 46, 52\} \end{minipage} $ & $ C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 
18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 37, 38, 40, 43, 46, 50, 52\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 15, 16, 25, 32, 33, 35, 43, 45, 46, 47,52\} \end{minipage} $ & $ C_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 11, 15, 16, 25, 32, 33, 40, 42, 43, 45, 46, 52\} \end{minipage} $ & $ C_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 11, 12, 15, 16, 25, 32, 42, 43, 45, 46,52\} \end{minipage} $ & $ C_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 12, 15, 16, 25, 32, 37, 43, 46, 49, 50, 52, 63\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 37, 40, 43, 46, 50, 52, 63\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 11, 12, 15, 16, 25, 32, 39, 40, 42, 43, 46,52\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 12, 15, 16, 25, 32, 39, 40, 42, 43, 45, 46, 52\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 12, 15, 16, 25, 32, 37, 40, 43, 46, 50, 52, 63\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 38, 40, 43, 45, 46, 50, 52\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 15, 16, 25, 32, 40, 42, 43, 45, 46, 47,52\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 15, 16, 25, 32, 36, 40, 42, 43, 46, 47,52\} 
\end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 9, 10, 15, 16, 25, 32, 40, 42, 43, 46, 47, 52\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 11, 14, 15, 16, 25, 32, 33, 43, 45, 46,52\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 11, 12, 14, 15, 16, 25, 32, 40, 43, 45, 46, 52\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 12, 15, 16, 25, 32, 36, 39, 40, 42, 43, 46, 52\} \end{minipage} $ & $ S_3 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 15, 16, 24, 25, 32, 36, 43, 48, 49, 52, 54,57\} \end{minipage} $ & $ D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 37, 43, 46, 49, 50, 52, 63\} \end{minipage} $ & $ D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 18, 25, 26, 32, 36, 43, 46, 50, 52, 58, 59\} \end{minipage} $ & $ D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 15, 16, 25, 32, 33, 35, 36, 43, 46, 47,52\} \end{minipage} $ & $ D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 11, 15, 16, 25, 32, 33, 35, 43, 45, 46,52\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 12, 15, 16, 25, 32, 33, 39, 42, 43, 45, 46, 52\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 18, 20, 25, 32, 36, 38, 43, 48, 50, 52, 54\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 
mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 11, 15, 16, 25, 32, 39, 40, 42, 43, 45, 46,52\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 5, 7, 8, 11, 15, 16, 25, 32, 33, 35, 36, 43, 46, 52\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 37, 38, 43, 46, 49, 50, 52\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 5, 7, 8, 9, 11, 15, 16, 25, 32, 33, 35, 43, 46, 52\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 37, 43, 46, 50, 52, 57, 63\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 9, 15, 16, 25, 32, 39, 40, 42, 43, 46, 47, 52\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 18, 25, 27, 32, 36, 43, 47, 48, 50, 52, 54\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 12, 15, 16, 25, 32, 35, 40, 43, 45, 46, 47, 52\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 5, 7, 8, 11, 15, 16, 25, 32, 33, 35, 43, 45, 46, 52\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 43, 46, 50, 52, 57, 60, 63\} \end{minipage} $ & $ D_{12} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 12, 15, 16, 
25, 32, 37, 38, 40, 43, 46, 50, 52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 40, 43, 45, 46, 50, 52, 63\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 9, 11, 12, 15, 16, 25, 32, 33, 39, 42, 43, 46,52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 19, 25, 32, 37, 43, 46, 49, 50, 52, 63\} \end{minipage} $ & $ (C_4 \times C_2) \rtimes C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 9, 11, 12, 15, 16, 25, 32, 35, 40, 43, 46, 52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 12, 15, 16, 25, 32, 35, 36, 40, 43, 46, 47,52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 15, 16, 25, 32, 33, 35, 38, 43, 45, 46,52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 17, 18, 20, 25, 32, 38, 43, 48, 50, 52, 54\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 12, 15, 16, 25, 32, 40, 43, 46, 50, 52, 60, 63\} \end{minipage} $ & $ (C_4 \times C_2) \rtimes C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 12, 15, 16, 25, 32, 35, 40, 43, 45, 46, 47,52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 15, 16, 25, 32, 39, 40, 42, 43, 45, 46, 47,52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} 
\{1, 2, 4, 7, 8, 13, 15, 16, 17, 19, 22, 25, 27, 28, 32, 43, 47, 52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 15, 16, 21, 25, 32, 37, 43, 46, 49, 50, 52, 63\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 17, 18, 20, 25, 32, 36, 38, 43, 48, 52, 54\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 17, 25, 27, 32, 34, 40, 43, 45, 47, 51, 52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 38, 40, 43, 46, 50, 52, 60\} \end{minipage} $ & $ (C_4 \times C_2) \rtimes C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 37, 40, 43, 46, 50, 52, 55\} \end{minipage} $ & $ (C_4 \times C_2) \rtimes C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 11, 12, 15, 16, 25, 32, 35, 40, 43, 45, 46,52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 11, 12, 15, 16, 25, 32, 35, 36, 40, 43, 46,52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 12, 15, 16, 25, 32, 35, 36, 40, 43, 46, 47, 52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 18, 20, 25, 32, 34, 38, 43, 48, 50, 52, 54\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 12, 15, 16, 25, 32, 38, 40, 43, 45, 46, 50, 52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 
18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 9, 11, 15, 16, 25, 32, 39, 40, 42, 43, 46, 52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 10, 15, 16, 24, 25, 32, 35, 36, 38, 43, 46,52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 11, 12, 15, 16, 25, 32, 35, 40, 43, 45, 46, 52\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 12, 15, 16, 25, 32, 38, 40, 43, 46, 50, 52, 60\} \end{minipage} $ & \begin{minipage}{1.9 cm} $ ((C_4 \times C_2) \rtimes C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 7, 8, 12, 15, 16, 25, 32, 35, 36, 39, 40, 43, 47,52\} \end{minipage} $ & $ (C_2 \times D_8) \rtimes C_2 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 38, 43, 46, 49, 50, 52, 60\} \end{minipage} $ & \begin{minipage}{2 cm} $ C_2 \times C_2 \times C_2 \times C_2 \times C_2 $ \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 19, 25, 29, 32, 36, 41, 43, 46, 47, 51, 52\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 19, 25, 32, 38, 43, 46, 49, 50, 52, 60\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 20, 25, 29, 32, 34, 41, 43, 48, 50, 52, 54\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 20, 25, 32, 35, 37, 41, 42, 43, 44, 52\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ 
\begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 18, 25, 27, 32, 34, 41, 43, 48, 50, 52, 54\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 16, 19, 20, 25, 32, 35, 37, 41, 42, 43, 50, 52\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 12, 15, 16, 25, 32, 35, 39, 40, 43, 45, 46, 52\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 19, 25, 32, 37, 40, 43, 46, 50, 52, 63\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 27, 29, 32, 43, 45, 46, 49, 50, 52, 63\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 27, 29, 32, 38, 40, 43, 45, 46, 50, 52\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 16, 19, 20, 21, 25, 32, 35, 37, 41, 43, 50, 52\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 25, 27, 32, 38, 43, 46, 50, 52, 57, 60\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 15, 16, 21, 25, 32, 37, 38, 43, 46, 49, 50, 52\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 11, 15, 16, 25, 32, 33, 35, 39, 43, 45, 46,52\} \end{minipage} $ & $ C_2 \times S_4 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 19, 21, 25, 32, 37, 43, 46, 49, 50, 52, 63\} \end{minipage} $ & \begin{minipage}{2.5 cm} $ (((C_4 \times C_2) \rtimes 
C_2) \rtimes C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 18, 25, 27, 32, 34, 41, 43, 48, 49, 50, 52\} \end{minipage} $ & \begin{minipage}{2.5 cm} $ (((C_4 \times C_2) \rtimes C_2) \rtimes C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 12, 15, 16, 19, 25, 32, 37, 43, 46, 50, 52, 57, 63\} \end{minipage} $ & \begin{minipage}{2.5 cm} $ C_2 \times ((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2) $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 16, 20, 25, 32, 35, 37, 41, 42, 43, 44, 50, 52\} \end{minipage} $ & \begin{minipage}{2.5 cm} $ C_2 \times ((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2) $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 25, 27, 29, 32, 40, 43, 45, 46, 50, 52, 63\} \end{minipage} $ & \begin{minipage}{2.5 cm} $ C_2 \times ((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2) $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 12, 15, 16, 25, 32, 35, 38, 40, 43, 45, 46, 52\} \end{minipage} $ & \begin{minipage}{2.5 cm} $ ((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_3) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 10, 15, 16, 21, 25, 32, 37, 43, 46, 49, 50, 52, 55\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times D_8) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 6, 7, 8, 9, 11, 12, 16, 24, 25, 32, 43, 44, 48, 50, 52\} \end{minipage} $ & $ S_3 \times S_4 $ & \\ \hline \rule[-6 mm]{0mm}{14 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 19, 20, 21, 22, 25, 26, 32, 35, 43, 52\} 
\end{minipage} $ & \begin{minipage}{2.2 cm} $ (((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_3) \rtimes C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-6 mm]{0mm}{14 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 5, 6, 7, 8, 16, 25, 32, 33, 36, 37, 38, 39, 43, 50\} \end{minipage} $ & \begin{minipage}{2.2 cm} $ (((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_3) \rtimes C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-6 mm]{0mm}{14 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 11, 13, 16, 17, 20, 21, 24, 25, 28, 29, 32,43\} \end{minipage} $ & \begin{minipage}{2.2 cm} $ C_2 \times ((((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_3) \rtimes C_2) \rtimes C_2) $ \end{minipage} & \\ \hline \rule[-6 mm]{0mm}{14 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 15, 16, 21, 25, 27, 32, 37, 43, 46, 49, 52, 58, 63\} \end{minipage} $ & \begin{minipage}{2.5 cm} $ C_2 \times ((((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_3) \rtimes C_2) \rtimes C_2) $ \end{minipage} & \\ \hline \rule[-6 mm]{0mm}{14 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 20, 22, 25, 32, 35, 37, 42, 43, 44, 52\} \end{minipage} $ & \begin{minipage}{2.2 cm} $ C_2 \times ((((C_2 \times D_8) \rtimes C_2) \rtimes C_3) \rtimes C_2) $ \end{minipage} & \\ \hline \rule[-8 mm]{0mm}{18 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 13, 16, 17, 20, 21, 24, 25, 28, 29, 32, 37,43\} \end{minipage} $ & \begin{minipage}{2.2 cm} $ (C_2 \times C_2 \times (((C_2 \times C_2 \times C_2 \times C_2) \rtimes C_3) \rtimes C_2)) \rtimes C_2 $ \end{minipage} & $E_{B}$ \\ \hline \rule[-6 mm]{0mm}{14 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 14, 16, 20, 21, 25, 26, 32, 35, 41, 43, 44, 50, 52\} \end{minipage} $ & \begin{minipage}{2.2 cm} $ (C_2 \times ((((C_2 \times D_8) \rtimes C_2) \rtimes C_3) \rtimes C_2)) \rtimes C_2 $ \end{minipage} & $E_{B}$ \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} 
\{1, 2, 4, 5, 7, 8, 11, 13, 16, 17, 19, 20, 21, 22, 25, 28, 32,43\} \end{minipage} $ & $ G_{1152} $ & AL \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 14, 16, 19, 20, 21, 22, 25, 26, 28, 32, 43, 52\} \end{minipage} $ & $ G_{2688} $ & GL \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 20, 21, 22, 25, 26, 28, 32, 43\} \end{minipage} $ & $ G_{2688} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 18 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 32, 48\} \end{minipage} $ & $ G_{120960} $ & A \\ \hline \rule[-4 mm]{0mm}{12 mm} $ 20 $ & CC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 13, 16, 19, 21, 22, 25, 28, 32, 37, 43, 46, 49, 52, 58, 63\} \end{minipage} $ & $ G_{184320} $ & D \\ \hline \rule[-4 mm]{0mm}{12 mm} $ 20 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 4, 5, 7, 8, 13, 16, 17, 19, 20, 21, 22, 25, 28, 32, 37,43, 49, 52\} \end{minipage} $ & $ G_{9216} $ & \begin{minipage}{0.5 cm} $E_{B}$,\\ GL \end{minipage} \\ \hline \rule[-5 mm]{0mm}{15 mm} $ 32 $ & CC & $ \begin{minipage}{5 cm} \{1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25, 26, 28, 31, 32, 35, 37, 38, 41, 42, 44, 47, 49, 50, 52, 55, 56, 59, 61, 62\} \end{minipage} $ & $ ASL(5,2) $ & H \\ \hline \rule[-5 mm]{0mm}{15 mm} $ 32 $ & NC & $ \begin{minipage}{5 cm} \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32\} \end{minipage} $ & $ PSL(5,2) $ & A \\ \hline \end{longtable} \end{footnotesize} $\;$\\ $\;$\\ \begin{footnotesize} \begin{longtable} {|c|c|c|c|c|} \caption{Classification of small minimal 1-saturating sets in $\mathrm{PG}(6,2)$} \endfirsthead \multicolumn{5}{r}{\textit{(The table continues on the next page)}} \endfoot \multicolumn{5}{l}{ Table 6 continued} \endhead \endlastfoot \hline \rule[-2 mm]{0mm}{8 mm} Size & Type & \begin{minipage}{1.9 cm} \smallskip \centering List of \\ points \\ 
\medskip \end{minipage} & \begin{minipage}{2.1 cm} \smallskip \centering Stabilizer\\ group \\ \medskip \end{minipage} & \begin{minipage}{0.9 cm} \smallskip \centering Cons-\\ truc-\\ tion \\ \medskip \end{minipage} \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 19 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 31, 32, 43, 51, 55, \\ 64, 67, 85, 89, 101, 110, 121, 126\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 19 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 26, 29, 32, 39, \\ 43, 51, 64, 70, 76, 85, 110, 120, 121\} \end{minipage} $ & $ S_5 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 19 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 6, 8, 14, 15, 16, 24, 32, \\ 43, 47, 48, 50, 51, 64, 85, 108, 121\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ (C_2 \times C_2 \times A_5) \rtimes C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 19 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 13, 15, 16, 22, 30, 32, \\ 42, 43, 48, 51, 55, 64, 85, 108, 121\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ (A_4 \times A_5) \rtimes C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 19 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 30, 32, 43, 51, \\ 54, 64, 66, 85, 89, 101, 108, 120, 127\} \end{minipage} $ & $ G_{5760} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 29, 31, 32, 37, 43, \\ 51, 64, 72, 85, 99, 110, 118, 121, 126\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 29, 31, 32, 37, 43, \\ 51, 64, 77, 85, 87, 102, 110, 124, 126\} \end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 3, 4, 8, 15, 16, 27, 29, 32, 39, \\ 43, 51, 64, 71, 76, 85, 110, 120, 121\} 
\end{minipage} $ & $ C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 3, 4, 8, 15, 16, 26, 29, 32, 36, \\ 43, 51, 64, 85, 93, 98, 104, 110, 120\} \end{minipage} $ & $ D_8 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 26, 29, 32, 37, 43, \\ 51,64, 66, 77, 85, 90, 110, 115, 121\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 29, 32, 37, 43, 51,59, \\ 64, 66, 85, 88, 108, 110, 118, 126\} \end{minipage} $ & $ C_2 \times C_2 \times C_2 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 19, 32, 43, 51, 59, \\ 64, 67, 85, 89, 93, 102, 110, 117, 126\} \end{minipage} $ & $ C_2 \times D_8 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 31, 32, 43, 51, 52, \\ 59, 60, 64, 65, 67, 85, 89, 102, 110\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ C_2 \times C_2 \times C_2 \times C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 26, 29, 32, 37, 43, \\ 51, 64, 71, 72, 85, 87, 90, 110, 121\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ C_2 \times C_2 \times C_2 \times C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 29, 31, 32, 37, 43, \\ 51, 64, 85, 88, 90, 102, 108, 110, 118\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ C_2 \times C_2 \times C_2 \times C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 28, 32, 43, 51, 64, \\ 65, 67, 85, 89, 101, 110, 117, 121, 126\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ C_2 \times C_2 \times C_2 \times C_2 $ \end{center} 
\end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 32, 36, 43, 51, 55, \\ 56, 64, 66, 67, 85, 89, 93, 110, 126\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ C_2 \times C_2 \times C_2 \times C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 11, 15, 16, 32, 36, 43, 51, \\ 55, 56, 64, 67, 85, 89, 94, 110, 126\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ C_2 \times C_2 \times C_2 \times C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 20, 28, 32, 39, 43, \\ 51, 55, 63, 64, 67, 85, 89, 110, 121\} \end{minipage} $ & \begin{minipage}{2 cm} $ C_2 \times C_2 \times C_2 \times C_2 \times C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 3, 4, 8, 15, 16, 31, 32, 43, 51, \\ 55, 64, 67, 85, 89, 102, 110, 121, 126\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 26, 29, 32, 36, 43, \\ 51, 64, 71, 77, 85, 86, 95, 110, 120\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 20, 28, 32, 43, 51, \\ 55, 64, 67, 85, 89, 102, 110, 121, 126\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ (C_2 \times C_2 \times C_2 \times C_2) \rtimes C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 3, 4, 8, 15, 16, 24, 28, 32, 39, \\ 43, 51, 55, 63, 64, 67, 85, 110, 121\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ C_2 \times C_2 \times C_2 \times D_8 
$ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 26, 29, 32, 36, 43, \\ 51, 64, 85, 86, 95, 99, 105, 110, 120\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ (C_2 \times C_2 \times A_4) \rtimes C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 26, 29, 32, 37, 43, \\ 51, 64, 85, 87, 95, 99, 105, 110, 121\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times D_8) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 25, 32, 43, 45, 51, \\ 58, 64, 80, 85, 89, 92, 102, 103, 110\} \end{minipage} $ & \begin{minipage}{2 cm} $ (C_2 \times C_2 \times C_2 \times D_8) \rtimes C_2 $ \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 26, 29, 32, 37, 43, \\ 51, 64, 71, 77, 85, 87, 95, 110, 121\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ (C_2 \times C_2 \times C_2 \times D_8) \rtimes C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 26, 29, 32, 37, 43, \\ 51, 64, 71, 77, 85, 95, 110, 115, 121\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ (C_2 \times C_2 \times C_2 \times D_8) \rtimes C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 24, 31, 32, 39, 43, \\ 47, 51, 55, 59, 63, 64, 67, 85, 86\} \end{minipage} $ & $ S_3 \times S_4 $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 27, 31, 32, 39, 43, \\ 47, 51, 55, 56, 63, 64, 67, 85, 86\} \end{minipage} $ & $ S_3 \times S_4 $ & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 27, 31, 32, 39, 43, \\ 47, 
51, 55, 59, 63, 64, 67, 85, 86\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ C_2 \times C_2 \times C_2 \times S_4 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 28, 32, 39, 43, 51, \\ 52, 56, 63, 64, 66, 67, 85, 89, 110\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ C_2 \times C_2 \times C_2 \times S_4 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 28, 32, 39, 43, 51,\\ 52, 56, 64, 66, 67, 85, 89, 110, 125\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ C_2 \times C_2 \times C_2 \times S_4 $ \end{center} \end{minipage} & \\ \hline \rule[-4 mm]{0mm}{10 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 11, 15, 16, 22, 24, 32, 43, \\ 44, 48, 51, 55, 63, 64, 85, 108, 121\} \end{minipage} $ & \begin{minipage}{2 cm} \begin{center} $ C_2 \times C_2 \times C_2 \times S_4 $ \end{center} \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 27, 28, 32, 39, 43, \\ 47, 51, 55, 59, 63, 64, 67, 85, 86\} \end{minipage} $ & $ C_2 \times D_8 \times S_4 $ & C \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 27, 28, 32, 39, 43, \\ 47, 51, 55, 56, 63, 64, 67, 85, 86\} \end{minipage} $ & $ C_2 \times D_8 \times S_4 $ & C \\ \hline \rule[-6 mm]{0mm}{14 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 6, 8, 15, 16, 24, 32, 43, 51,\\ 54, 64, 65, 85, 90, 96, 108, 125, 127\} \end{minipage} $ & \begin{minipage}{2.3 cm} \begin{center} $ ((((C_4 \times C_4) \rtimes C_2) \rtimes C_2) \rtimes C_3) \rtimes C_2 $ \end{center} \end{minipage} & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 24, 31, 32, 36, 43, \\ 47, 51, 55, 59, 63, 64, 67, 85, 86\} \end{minipage} $ & $ G_{1152} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & 
NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 27, 31, 32, 36, 43, \\ 47, 51, 52, 56, 63, 64, 67, 85, 86\} \end{minipage} $ & $ G_{1152} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 27, 31, 32, 39, 43,\\ 47, 51, 55, 59, 60, 64, 67, 85, 86\} \end{minipage} $ & $ G_{1152} $ & \\ \hline \rule[-2 mm]{0mm}{8 mm} $ 20 $ & NC & $ \begin{minipage}{5.8 cm} \{1, 2, 4, 8, 15, 16, 28, 32, 43, 51, 52, \\ 64, 66, 67, 85, 89, 101, 110, 122, 125\} \end{minipage} $ & $ G_{2880} $ & \\ \hline \end{longtable} \end{footnotesize} \section*{Acknowledgements} The research of S. Marcugini and F. Pambianco was supported in part by the Ministry of Education, University and Research of Italy (MIUR) (project ``Geometrie di Galois e strutture di incidenza'') and by the Italian National Group for Algebraic and Geometric Structures and their Applications (GNSAGA - INDAM). The research of A.A. Davydov was carried out at the IITP RAS and supported by the Russian Science Foundation (project 14-50-00150).
\section{Introduction} \label{sec:Intro} The $\mathrm{^{22}Ne(p,\gamma)^{23}Na}$ reaction, which belongs to the NeNa cycle, is active in high-temperature hydrogen burning \cite{Iliadis15-Book}. One astrophysical site of particular interest for this reaction is the Hot Bottom Burning (HBB) process in asymptotic giant branch (AGB) stars of high initial mass (M $>$ 4 M$_\odot$), which are among the proposed candidates to explain Na anomalies in ancient stellar globular clusters \cite{Gratton04-ARAA}. The rate of this reaction is controlled by a large number of resonances. Despite recent experimental work on resonances in the 400-1200 keV range \cite{Longland10-PRC,Depalo15-PRC}, the rate remained highly uncertain, mainly owing to the contribution of low-lying resonances. As a result, the recommended median rates from two widely used thermonuclear reaction rate compilations, NACRE \cite{NACRE99-NPA} and Iliadis \cite{Iliadis10-NPA841_31}, differed by two to three orders of magnitude, especially at the temperatures relevant for the HBB process. In order to provide a more accurate estimate, the reaction was recently studied at the Laboratory for Underground Nuclear Astrophysics (LUNA) 400\,kV accelerator, using a windowless gas target and two large high-purity germanium detectors. In this campaign, three predicted resonances located at 156.2, 189.5, and 259.7 keV in the laboratory system were observed for the first time, and their energies and strengths determined \cite{Cavanna15-PRL,Depalo16-PRC}. The strengths of two of the three newly found resonances turned out to be much larger than previous indirect upper limits \cite{Iliadis10-NPA841_31}, confirming the need for direct nuclear-reaction measurements. The full implications of the resultant, revised thermonuclear reaction rate have yet to be explored. Initial work on thermally pulsing AGB stars experiencing the HBB process suggests that a larger amount of $^{23}$Na is ejected \cite{Slemer17-MNRAS}. 
Very recently, the observation of two of the three new resonances has been independently confirmed \cite{Kelly17-PRC}, albeit with slightly larger strengths and somewhat different decay branching ratios. The present work puts the necessary parts in place to push the study of the $\mathrm{^{22}Ne(p,\gamma)^{23}Na}$ reaction to ultra-low energies. To this end, a new setup was devised to achieve a hundredfold higher efficiency than the previous one at LUNA \cite{Cavanna14-EPJA,Cavanna15-PRL,Depalo16-PRC}. The purpose of this new setup is to search for two proposed resonances at very low energy, $E_p$ = 71 and 105 keV, not yet observed \cite{Cavanna15-PRL,Depalo16-PRC}, and for the direct-capture contribution. Section \ref{sec:Setup} introduces the new high-efficiency setup, including its complete characterisation. The background observed in the $\gamma$-ray detector, both with and without incident ion beam, is analysed in Section \ref{sec:Background}. As a demonstration of the capabilities of the new setup, the decay branching ratios of the $E_p$ = 189.5\,keV resonance in $\mathrm{^{22}Ne(p,\gamma)^{23}Na}$ are determined in Section \ref{sec:189.5keVres}. Finally, a summary and outlook are given in Section \ref{sec:Summary}. \section{Experimental setup} \label{sec:Setup} The LUNA 400 kV electrostatic accelerator is located deep underground in the INFN Gran Sasso National Laboratory (LNGS), Italy. It provides $^1$H$^+$ or $^4$He$^+$ ion beams with high currents, up to 250-500 $\mathrm{\mu A}$ on target, with very small momentum spread and excellent long-term stability \cite{Formicola03-NIMA}. Experimental results obtained at LUNA have been reviewed elsewhere \cite{Costantini09-RPP,Broggini10-ARNPS,Broggini18-PPNP}. \begin{figure*}[tbh] \centering \includegraphics[angle=0,width=\textwidth]{GasTarget_2016_vactorgraphics.eps} \caption{Schematic diagram (not to scale) of the differential pumping system. 
In recirculating mode, the valves \texttt{V1} and \texttt{V2} are closed. The beam comes from the accelerator on the left, passes through the apertures \texttt{AP$_{\mbox{3}}$}, \texttt{AP$_{\mbox{2}}$} and \texttt{AP$_{\mbox{1}}$}, enters the target chamber and is stopped in the calorimeter. More than 99.5\% of the gas, which enters the chamber close to the calorimeter, is pumped through the RUVAC 2000. Approximately 0.5\% of the gas still flows towards the second pumping stage, and a negligible part flows into the third pumping stage.}\label{fig:GasTarget} \end{figure*} \subsection{General considerations} The experimental setup is optimised for the irradiation of target materials that exist in gaseous form at normal temperature and pressure. The design considerations have been guided by three nuclear reactions in particular: $^{22}$Ne(p,$\gamma$)$^{23}$Na, $^{22}$Ne($\alpha$,$\gamma$)$^{26}$Mg, and $^{2}$H(p,$\gamma$)$^{3}$He. These studies entail the use of isotopically enriched $^{22}$Ne (9\% abundance in natural neon) and $^2$H (0.012\% abundance in natural hydrogen). Due to the hindering effect of the repulsive Coulomb barrier at the astrophysically relevant beam energies accessible at LUNA, the nuclear reactions under study exhibit ultra-low cross sections, which drop exponentially with decreasing beam energy. At the same time, these energies lie near the maximum in the stopping power curve \cite{Ziegler10-NIMB}, the so-called Bragg peak. This combination of low, and rapidly varying, cross section and high stopping power requires a careful optimisation of the experimental conditions. As a result, chemical compounds (e.g. \cite{Caciolli12-EPJA}) or implanted targets (e.g. \cite{Depalo15-PRC}) are disfavoured, because they contain a different nuclear species in addition to the nucleus under study, further enhancing the stopping power but not the experimental yield. 
Gas cells (e.g. \cite{Bordeanu13-NPA}) are problematic as well, because their entrance windows may cause unwanted beam energy straggling or even stop a low-energy ion beam altogether. \subsection{Windowless gas target} The solution adopted here is a windowless, extended gas target of the static type (Figure \ref{fig:GasTarget}). The gas enters the target chamber from the right (\texttt{VT} in Figure \ref{fig:GasTarget}), with the incoming gas flux precisely regulated by the \texttt{MKS248A} valve controlled by a pressure measurement device, keeping the target pressure constant to within 0.5\%. The absence of an entrance window leads to some inevitable gas loss through the metal tube functioning as target entrance collimator (\texttt{AP$_{\mbox{1}}$} in Figure \ref{fig:GasTarget}). This effect is mitigated by limiting the diameter of \texttt{AP$_{\mbox{1}}$} to 7\,mm, and by making it relatively long, 40\,mm. The target gas is then removed from the setup by large Roots-type vacuum pumps (\texttt{RUVAC 2000} and \texttt{RUVAC 500} in Figure \ref{fig:GasTarget}), with pumping speeds of 2050 and 505 m$^3$/h, respectively, over a relatively wide pressure range. The \texttt{RUVAC 2000}, its vacuum recipient, and a connecting tube matching it to \texttt{AP$_{\mbox{1}}$} form the first pumping stage. For typical target pressures of 2.0 (0.3) mbar in the $^{22}$Ne ($^2$H$_2$) case, the pressure in the first pumping stage is found to be almost two orders of magnitude lower than the pressure inside the target. The combination of a long, narrow collimator and a powerful pump is repeated twice, for the second pumping stage (collimator \texttt{AP$_{\mbox{2}}$} and turbomolecular pumps \texttt{TP$_{\mbox{2L}}$, TP$_{\mbox{2M}}$, TP$_{\mbox{2R}}$}, two with 1000 l/s and one with 1500 l/s nominal pumping speed) and the third pumping stage (collimator \texttt{AP$_{\mbox{3}}$} and turbomolecular pump \texttt{TP$_{\mbox{3}}$} with 350 l/s nominal speed). 
After the third pumping stage, the connecting conditions to the accelerator (pressure in the 10$^{-6}$ mbar range and negligible gas flow) are met. When employing $^{22}$Ne gas, it is necessary to re-use the gas exhausted from the three pumping stages in order to limit consumption. To this end, the exhausted gas is collected, compressed, and guided to a chemical getter (Monotorr PS4-MT3-R-2 with a PS4-C3-R-2 heated getter) to remove impurities, typically oxygen, nitrogen, hydrogen, water, oxocarbons, and hydrocarbons. The cleaned gas is then transported to a buffer (volume 1 liter, typical pressure 400-700 mbar) and re-used as input gas. In addition to $^{22}$Ne gas, other gases are needed for calibration and background studies. In order to study the ion-beam-induced background, natural argon gas is used. For the determination of the detection efficiency at high $\gamma$-ray energies, it is possible to insert nitrogen gas to exploit the $E_p$ = 278\,keV $^{14}$N(p,$\gamma$)$^{15}$O resonance. The core of the setup, the gas target chamber, is a stainless steel tube designed to fit inside a $\gamma$-ray calorimeter formed by a 4$\pi$ bismuth germanate (BGO) detector (see Section \ref{subsec:BGO}). A new chamber has been designed for the present experiment and has been characterised as described in the following sections. In addition to the chamber, a beam calorimeter, different from the one used in the previous experiment \cite{Cavanna14-EPJA,Cavanna15-PRL,Depalo16-PRC}, was installed to monitor the beam current. Its characterisation is described in Section \ref{subsec:Calorimeter}. In order to monitor the system performance, pressure sensors are connected to the target chamber by a long copper tube, to each of the three pumping stages, and to the buffer and purifier. The status of pumps and valves is controlled by a LabVIEW system and logged together with the pressure values. 
Typical pressures observed during the experiment, with 2 mbar of neon in the target chamber, were: from 8$\cdot 10^{-2}$ to 4$\cdot10^{-3}$~mbar in the first stage, 1.5$\cdot10^{-6}$~mbar in the second stage, and 1$\cdot10^{-7}$~mbar in the third stage. Similar ratios were also observed for other values of the pressure in the scattering chamber. The first stage is the closest to the scattering chamber; therefore, particular attention was devoted to monitoring the pressure in this stage, as discussed in Section \ref{sec:pressure}. The most critical aperture is of course \texttt{AP$_{\mbox{1}}$}, where a compromise between the required pressure drop and the transverse beam size has to be found. Selecting an aperture of 7 mm, we obtained more than one order of magnitude reduction in pressure (see Section \ref{sec:pressure} for details), with typically less than 5\% of the total beam current deposited on the collimator. The other two apertures are less restrictive, and the current deposited on them is below 1\% of the total beam current. At the high beam intensities of LUNA, direct water cooling is necessary for all three collimators (\texttt{AP$_{\mbox{1}}$} to \texttt{AP$_{\mbox{3}}$}), so these are kept at a temperature of 13\,$^\circ$C (286\,K, measured with a PT100 thermistor connected to \texttt{AP$_{\mbox{1}}$}). The outer walls of the target chamber and the pumping stages are at room temperature, 22\,$^\circ$C. \subsection{Pressure, temperature, and density profile} \label{sec:pressure} \begin{figure*}[tb] \includegraphics[width=\columnwidth]{PressureProfile.pdf}% \includegraphics[width=\columnwidth]{TemperatureProfile} \caption{ \label{fig:TemperatureProfile} \label{fig:PressureProfile} Left panel: Measured pressure profile $p(z)$. The center of the chamber is placed at $z = 0$ cm, while the calorimeter surface corresponds to $z = 5$ cm. --- Right panel: Temperature profile. 
To improve readability, the temperature profile for $\mathrm{^2H}$ gas is displayed with respect to the right axis, which is shifted by 5 K. The position $z$ is measured from the center of the target chamber. In both panels, the lines connecting the points are not fits; they are drawn only to guide the eye.} \end{figure*} A precise understanding of the gas target density profile (i.e. the gas density $n(z)$ as a function of position $z$ along the beam axis) is needed for two reasons. First, for the resonance yield measurements included in the $^{22}$Ne(p,$\gamma$)$^{23}$Na and $^{22}$Ne($\alpha$,$\gamma$)$^{26}$Mg studies, the gas density determines the beam energy loss and thus the position inside the target chamber where the maximum yield of the resonance is reached. This position, in turn, is needed because of the position dependence of the $\gamma$-ray detection efficiency. Second, for the analysis of the non-resonant cross section included in the $^{22}$Ne(p,$\gamma$)$^{23}$Na and $^{2}$H(p,$\gamma$)$^{3}$He studies, the density must be known in order to properly normalise the yield. For the determination of the density profile $n(z)$, the temperature $T(z)$ and pressure $p(z)$ were measured at a number of positions $z$ inside the target chamber and in the connecting tube between collimator \texttt{AP$_{\mbox{1}}$} and the main recipient of the first pumping stage. These measurements were performed with precise copies of the target chamber and connecting tube, equipped with KF10 and KF16 vacuum ports, respectively, to enable the use of pressure and temperature sensors. 
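The first of the two motivations above can be made concrete with a minimal numerical sketch: for a narrow resonance, the beam reaches the resonance energy at the depth where the cumulative energy loss equals $E_p - E_{\mathrm{res}}$. The sketch below is purely illustrative and is not the LUNA analysis; the stopping cross section and the assumption of a uniform density are placeholders, not measured values.

```python
# Hypothetical illustration: depth at which a beam of energy e_beam (keV)
# is slowed down to the resonance energy e_res, assuming a UNIFORM atom
# density and a CONSTANT stopping cross section.
# eps_kev_per_1e18: energy loss in keV per 10^18 atoms/cm^2 -- an assumed
# round number for the sketch, NOT a value from the text.

def resonance_depth_cm(e_beam, e_res, n_atoms_cm3, eps_kev_per_1e18=5.0):
    """Depth (cm) at which the beam energy has dropped to e_res."""
    if e_beam < e_res:
        return None  # the resonance cannot be reached
    areal = (e_beam - e_res) / eps_kev_per_1e18 * 1e18  # atoms/cm^2 traversed
    return areal / n_atoms_cm3

# e.g. a 194 keV beam scanning the 189.5 keV resonance through a gas of
# ~5e16 atoms/cm^3 (illustrative numbers):
z_res = resonance_depth_cm(194.0, 189.5, 5.0e16)
```

Raising the beam energy moves the resonance plane deeper into the target, which is the principle of the resonance scan mentioned later in this section.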
For the pressure profile, four calibrated capacitance-type pressure gauges (two MKS Baratron 626 and two Pfeiffer CMR 363, typical precision 0.3\%) were used to measure the pressure at ten different positions (shown from right to left in Figure \ref{fig:PressureProfile}, left panel): four inside the target chamber, three in the \texttt{AP$_{\mbox{1}}$} collimator, and three in the connecting tube. The collimator measurements were enabled by thin tubes (internal diameter 0.5\,mm) fixed at the sides of a specially prepared copy of \texttt{AP$_{\mbox{1}}$}. The gauges were moved between measurement ports to connect the data points and to check the consistency of the pressure calibrations of the various gauges. The pressure measurements were then repeated for eight different target pressures in the 0.5-4.0 mbar range for $^{22}$Ne and for seven in the 0.1-1.0 mbar range for $^2$H. The overall behaviour is the same for all nominal target pressures studied; selected profiles are shown in Figure \ref{fig:PressureProfile}, left panel. It is found that the pressure inside the target chamber is constant to $\pm$0.5\%. Inside the 40\,mm long, 7\,mm diameter collimator \texttt{AP$_{\mbox{1}}$}, there is a monotonic decrease of the target pressure, as expected for a high-impedance tube. Inside the connecting tube, the trend continues but with a lower slope, consistent with the fact that the 100\,mm wide connecting tube is significantly larger than \texttt{AP$_{\mbox{1}}$} and thus has a significantly lower gas-flow impedance. The final uncertainty for the pressure inside the target chamber is $\pm$0.9\%, taking into account calibration, reproducibility, and profile. The gas temperature was only measured inside the target chamber (Figure \ref{fig:PressureProfile}, right panel), using four PT100 thermistors. 
In the small water-cooled collimator \texttt{AP$_{\mbox{1}}$}, the gas is in close contact with the cooled surfaces, and a temperature of 286\,K has therefore been assumed in the computation. Considering the gas amount inside the collimator (18\% of the total), even an error of 10\,K on this temperature would cause only a 0.6\% error in the gas density. Inside the connecting tube, it is assumed that the temperature of the gas is the same as the outside temperature of the tube (295\,K). Inside the target chamber, the temperature drops monotonically between the beam stop (heated to 343\,K, see Section \ref{subsec:Calorimeter} below) and \texttt{AP$_{\mbox{1}}$} (cooled to 286\,K). A measurement very close to the beam stop was not performed due to the large solid angle covered by the beam stop and, hence, significant radiative heating of the PT100 sensors. The temperature profile inside the chamber is determined with 0.5\% relative uncertainty (1.5 K). For the temperatures in the collimator and connecting tube, 1\% uncertainty (3 K) is conservatively assumed. For any given incident ion, the effective target thickness observed by the beam in the connecting tube and collimator is always less than 30\% of the total gas thickness, so that this effect contributes an additional uncertainty of 0.3\% for the total gas thickness. Using the pressure and temperature data $p(z)$ and $T(z)$, the gas density $n(z)$ was then calculated using the ideal gas law% \begin{equation}\label{eqn:IdealGasLaw} n(z) = \frac{\nu N}{V} = \frac{\nu \cdot p(z)}{k_B \cdot T(z)} \end{equation} where $N$ is the number of gas molecules in the volume $V$, $\nu$ the number of atoms per molecule of gas ($\nu$ = 1 for neon, $\nu$ = 2 for deuterium), and $k_B$ Boltzmann's constant. 
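The density evaluation of Equation (\ref{eqn:IdealGasLaw}) is a pointwise conversion of the measured pressure and temperature profiles. The following minimal Python sketch illustrates the arithmetic; the pressure and temperature values are illustrative placeholders, not the measured LUNA profiles.

```python
# Sketch of the ideal-gas-law density calculation n(z) = nu * p(z) / (kB * T(z)).
# The p and T values below are illustrative placeholders, not measured data.
K_B = 1.380649e-23  # Boltzmann constant in J/K

def atom_density(p_mbar, t_kelvin, nu):
    """Atom number density in atoms/m^3 from pressure (mbar) and temperature (K)."""
    p_pascal = p_mbar * 100.0  # 1 mbar = 100 Pa
    return nu * p_pascal / (K_B * t_kelvin)

# Neon (nu = 1 atom per molecule) at a nominal 2.0 mbar and 300 K:
n_ne = atom_density(2.0, 300.0, nu=1)
# Deuterium (nu = 2 atoms per D2 molecule) at 0.3 mbar and 300 K:
n_d = atom_density(0.3, 300.0, nu=2)
```

Note how the factor $\nu$ converts the molecule density given by the ideal gas law into the atom density relevant for the reaction yield.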
For the parts where no data are available, i.e., close to the edges of each of the three segments (connecting tube, collimator, and target chamber), the density is extrapolated linearly based on the measured pressure and density profiles inside the segment. For the interface between collimator and connecting tube, the two extrapolations do not match perfectly. The value from the connecting tube extrapolation is adopted, but the discrepancy is included in full in the error budget, entailing a 0.7\% uncertainty in the integrated gas thickness. Figure \ref{fig:DensityProfile} shows the resultant density profile for the adopted working pressures of 2.0\,mbar for the $^{22}$Ne(p,$\gamma$)$^{23}$Na campaign and of 0.3\,mbar for the $^{2}$H(p,$\gamma$)$^{3}$He one. \begin{figure}[tbh] \includegraphics[width=\columnwidth]{DensityProfile.eps} \caption{\label{fig:DensityProfile} Calculated density profile from Equation (\ref{eqn:IdealGasLaw}). The density close to the calorimeter surface and the points on the dashed vertical lines (filled squares and triangles) have been extrapolated as described in the text. The profile for $\mathrm{^2}$H gas uses the right axis, which has a five times smaller range than the left axis. The position $z$ is measured from the center of the target chamber.} \end{figure} Taking into account the uncertainties from the pressure and temperature measurements and the extrapolation, a total uncertainty of 1.3\% is found for the integrated gas thickness. The same error is also adopted for each individual density measurement. Finally, an intense ion beam may lead to some thinning of the gas target, by the so-called beam heating effect \cite{Goerres80-NIM,Marta06-NIMA}. The beam-heating correction in neon gas was studied previously using the resonance scan technique \cite{Cavanna14-EPJA}, but in a much larger chamber. 
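The integrated gas thickness quoted above is the line integral of the density profile $n(z)$ along the beam axis. As a minimal sketch of this step, the following Python snippet integrates a tabulated profile with the trapezoidal rule; the profile points are illustrative, not the measured LUNA values.

```python
# Sketch of the integrated gas thickness: the line integral of n(z) along the
# beam axis, computed with the trapezoidal rule over tabulated profile points.
# The profile below is an illustrative flat profile, not measured data.
def integrated_thickness(z_m, n_m3):
    """Integrate n(z) dz (result in atoms/m^2) over tabulated profile points."""
    total = 0.0
    for (z1, n1), (z2, n2) in zip(zip(z_m, n_m3), zip(z_m[1:], n_m3[1:])):
        total += 0.5 * (n1 + n2) * (z2 - z1)
    return total

# A flat density of 4.8e22 atoms/m^3 over 0.3 m gives 1.44e22 atoms/m^2:
thickness = integrated_thickness([0.0, 0.15, 0.3], [4.8e22, 4.8e22, 4.8e22])
```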
Using those measurements, for a beam intensity of 250\,$\mu$A at a beam energy of $E_p$ = 100\,keV, a correction of 8\% is found for 2\,mbar $^{22}$Ne gas and of 0.9\% for 0.3\,mbar $^2$H gas. However, the present chamber is narrower, so the conductive cooling of the heated gas volume is more efficient. Taking this effect into account \cite{Osborne84-NPA}, a correction of 6\% (0.6\%) is found for the 2\,mbar $^{22}$Ne gas (0.3\,mbar $^2$H gas) case. For the correction, a relative uncertainty of 20\% is adopted, which is included for each run based on the actual beam intensity and added in quadrature to the above-mentioned 1.3\%. \subsection{Beam calorimeter} \label{subsec:Calorimeter} When the target chamber is filled with gas, a beam intensity measurement with a Faraday cup becomes impractical due to secondary electrons; therefore, a different approach is followed here, using a power compensation calorimeter \cite{Vlieks83-NIM,Casella02-NIMA}. The calorimeter is made from copper and consists of three parts: the hot side (70 $^\circ$C, acting also as the beam stop), the heating resistors, and the cold side (at different possible temperatures as reported in Figure \ref{fig:CaloCalib}). The hot and cold sides are always kept at constant temperatures by regulating the current through the heating resistors for the hot side, and the cooling power in a feedback-controlled chiller for the cold side. Thus, there is always a constant temperature gradient between the hot and the cold sides. The beam stop can be heated either by the resistors or by the ion beam; thus, the more power is provided by the beam, the less is provided by the resistors. If $W_{0}$ is the power delivered by the resistors while the beam is off and $W_{\rm run}$ is the power delivered when the beam is on, the calorimetric beam power is given by \begin{equation} W_{\rm cal} = W_0 - W_{\rm run}. 
\end{equation} The calorimetric power values $W_0$ and $W_{\rm run}$ are calculated by measuring the voltage and current for the heating resistors. Two dividers were designed and used to decouple the power circuit from the readout, similarly to what was reported in a previous work \cite{Casella02-NIMA}: The voltage divider is made of a passive resistive series with high resistance (3 $\times$ 33 k$\mathrm{\Omega}$ resistors), so that any possible influence of the voltmeter on the power circuit is negligible. The current divider is a LEM LAH 25-NP current transducer, completely decoupled from the power circuit. The outputs from the two dividers are then measured by a NI-cRIO-9207 module and logged by the LabVIEW control software. The latter, together with the NI-cRIO controller and modules, actively controls the calorimeter operations and logs the data every second \cite{Ferraro17-PhD}. The statistical uncertainty on the calorimetric reading of the power was found to be 0.4 W, based on the ripple and stability of the calorimeter readings. \begin{figure}[tb] \includegraphics[width=\columnwidth]{CalorimeterCalibration_EPJA.eps} \caption{\label{fig:CaloCalib} Top panel: electrical calibration of the calorimeter to determine the parameters of Eq.~(\ref{eq:Wel_Wcalo}). The statistical error bars are smaller than the size of the data points. -- Bottom panel: residuals. See text for details.} \end{figure} Before the beam intensity can be obtained, the $W_{\rm cal}$ value must be electrically calibrated by relating it to the electrical power $W_{\rm el}$ (measured using the chamber and calorimeter as a Faraday cup) using the equation \begin{equation} W_{\rm el} = p_0 + p_1 W_{\rm cal} \label{eq:Wel_Wcalo}. 
\end{equation} The two parameters $p_0$ and $p_1$ reflect the facts that parasitic currents may lead to a slight overestimate of the calorimetric heat power, and that the heat flow from the hot side is very similar, but not completely equal, for the cases of localised heating by the beam and of more spread out heating by the resistors. In order to experimentally determine $p_0$ and $p_1$, a dedicated setup, without gas in the target chamber, was used. A copper ring was mounted inside the target chamber, electrically insulated both from the chamber and the collimator and biased to -300 V, in order to suppress secondary electrons generated on the calorimeter surface. The electrical current impinging on the calorimeter was then integrated over 180 s with a calibrated current integrator and averaged. The beam, passing through the residual gas in the target chamber ($<$10$^{-3}$\,mbar), ionises the gas, and some positive charges are collected by the above-mentioned ring. This small positive current ($\sim$1-3\% of the current in the Faraday cup) measured on the ring is therefore added to the Faraday cup current. The final electrical current is then compared to the average $W_{\rm cal}$ value over the same time period (Figure \ref{fig:CaloCalib}). Four calibration data sets were taken: one with a chiller setting of $-5$\,$^\circ$C and three more at $-20$\,$^\circ$C. The results were found to be consistent (Figure \ref{fig:CaloCalib}) and averaged. 
The ion beam current is finally given by \begin{eqnarray}\label{eq:BeamCurrent} I & = & \frac{p_0 + p_1 W_{\rm cal}}{\left(E_p - \Delta E_{p}^{\rm cal}\right)} \times e, \\ {\rm with} \nonumber \\ p_0 & = & (-0.67 \pm 0.13) \; {\rm W}, \nonumber \\ p_1 & = & (0.936 \pm 0.002), \; \nonumber \end{eqnarray} where $E_p$ is the beam energy, $\Delta E_{p}^{\rm cal}$ is the energy loss of the beam when passing through the target gas (i.e., the full target thickness including connecting tube, collimator, and target chamber), and $e$ is the elementary charge. Taking into account the calorimeter uncertainties, the error on the electrical reading, and the calibration, the final uncertainty on the electrical beam power $W_{\rm el}$, and hence on the beam intensity, is 1.5\% or 0.5\,W, whichever is larger. \begin{figure*}[tb] \centering \includegraphics[width=0.25\textwidth]{BGO_front} \includegraphics[width=0.66\textwidth]{BGO_Setup_1} \caption{\label{fig:BGO_front} Left panel: front cross-section of the BGO detector. --- Right panel: lateral cross-section view of the BGO. The target chamber and the calorimeter are also shown, together with the copper ring used only during the calorimeter calibrations.} \end{figure*} \subsection{BGO and DAQ} \label{subsec:BGO} For the detection of the emitted $\gamma$ rays, an optically segmented bismuth germanate (BGO) borehole detector was used, the same as in previous work \cite{Caciolli11-AA}. The detector is composed of six scintillating crystals, each 28 cm long and 7 cm thick at the thinnest point, arranged in a hexagonal configuration surrounding the interaction volume (the resolution of each segment is about 11\% at 1.33 MeV \cite{Boeltzig17-JOP}). They are housed inside a stainless steel casing fitted with a borehole of 6\,cm diameter (Figure \ref{fig:BGO_front}). Each crystal is covered with a reflecting foil, except for the opening for the photomultiplier tube (PMT, Hamamatsu R1847-07). 
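Numerically, Equation (\ref{eq:BeamCurrent}) reduces to dividing the calibrated electrical beam power by the energy per particle: with the energy expressed in eV, the elementary charges cancel and the current in ampere is simply $W_{\rm el}$ divided by the effective beam energy in eV. The following Python sketch uses the fitted $p_0$, $p_1$ quoted above; the $W_{\rm cal}$ and energy-loss inputs are illustrative placeholders.

```python
# Sketch of the beam-current determination of Eq. (BeamCurrent), using the
# calibration parameters p0, p1 from the text. W_cal and the energy loss
# below are illustrative placeholders, not measured run data.
P0 = -0.67   # W
P1 = 0.936   # dimensionless

def beam_current_uA(w_cal_watt, e_p_eV, delta_e_eV):
    """Beam current in microampere for a singly charged beam.

    With the energy per particle expressed in eV, the elementary charges
    cancel, so numerically I [A] = W_el [W] / E_eff [eV].
    """
    w_el = P0 + P1 * w_cal_watt   # calibrated electrical beam power (W)
    e_eff = e_p_eV - delta_e_eV   # effective energy per proton (eV)
    return w_el / e_eff * 1e6     # A -> uA

# 100 keV protons, ~1 keV energy loss in the gas, 27 W calorimetric power:
i_uA = beam_current_uA(27.0, 100e3, 1e3)
```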
A CAEN V6533P power supply provides an individual high voltage to each PMT; the voltages were adjusted to match the gains of the individual PMTs. For each of the six PMTs, the anode signal is passed to an Ortec 113 preamplifier. A pulse generator (approximately 50\,Hz rate, 100\,ns pulse length) is connected to each of the test inputs of the six preamplifiers, and to a seventh preamplifier used to monitor the performance of the pulser. The preamplifier output of each segment is then connected to a CAEN V1724 (8 channel, 14 bit, 100 MS/s) digitiser, and from there to the PC via a USB interface (see Figure \ref{fig:Electronics}). Each of the digitiser channels triggers independently, and the charge is integrated in the preamplifier. A trapezoidal filter is then applied to determine the height of the preamplifier signal and this information is stored, together with its time stamp, for offline analysis \cite{Boeltzig17-JOP}. \begin{figure}[tbh] \includegraphics[width=\columnwidth]{electronics_new} \caption{\label{fig:Electronics} Electronics scheme. See text for details.} \end{figure} The gain of the individual channels is determined run by run from the three most prominent laboratory background peaks: the 1.461 MeV $\gamma$ ray from $^{40}$K decay, the 2.204 MeV $\gamma$ ray from $\mathrm{^{214}Bi}$, and the 2.615 MeV $\gamma$ ray from $^{208}$Tl. The data are then sorted into events using a conservatively chosen coincidence time window of 3.5\,$\mu$s and stored as ROOT trees \cite{ROOT}. In the offline analysis, first the dead time is determined for each individual channel by comparing the number of events in the pulser peak of that channel with the number of events in the seventh, pulser-only channel, which is assumed to be dead time free. Second, the pulser events are removed from the BGO channels by discarding all events in which a signal is recorded in the pulser-only channel (channel 7). 
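The per-channel dead-time determination described above amounts to comparing pulser counts in each channel with the dead-time-free reference channel, and scaling the raw counts by the resulting live fraction. A minimal sketch (the count values are made-up illustrative numbers):

```python
# Sketch of the pulser-based dead-time correction: the pulser feeds all
# channels, and the pulser-only channel (channel 7) is taken as dead-time free.
# Counts below are made-up illustrative numbers, not measured data.
def dead_time_fraction(pulser_counts_channel, pulser_counts_reference):
    """Fraction of time the channel was dead, from missed pulser events."""
    return 1.0 - pulser_counts_channel / pulser_counts_reference

def live_time_corrected(raw_counts, pulser_counts_channel, pulser_counts_reference):
    """Correct a channel's raw counts for its dead time."""
    live_fraction = pulser_counts_channel / pulser_counts_reference
    return raw_counts / live_fraction

dt = dead_time_fraction(9_900, 10_000)              # 1% dead time
corrected = live_time_corrected(5_000, 9_900, 10_000)
```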
Two general types of spectra are then created: First, a so-called add-back spectrum using the energies from all segments summed together, as if the BGO were one single detector. Second, a so-called singles sum spectrum formed by simply summing the individual histograms \cite{Boeltzig17-JOP}. The linearity of the gain calibrations was verified using the high-energy $\gamma$ rays from the $\mathrm{^{14}N(p,\gamma)^{15}O}$ reaction. \subsection{$\gamma$-ray detection efficiency} \label{subsec:Efficiency} The $\gamma$-ray detection efficiency is measured using $\mathrm{^7Be}$, $\mathrm{^{137}Cs}$, $\mathrm{^{60}Co}$, and $\mathrm{^{88}Y}$ point-like radioactive sources, calibrated to better than 1\%, at 9 positions along the beam axis inside the interaction chamber. To this end, a special source holder made of light materials was designed in order to limit self-absorption. In addition, a GEANT4 Monte Carlo simulation was developed to determine the detection efficiency at positions and energies that are inaccessible with the sources. The simulation was found to match the experimental efficiency for the radioactive sources within 4\% without any rescaling (Figure \ref{fig:137Cs_Simulation/Experiment}). The simulation also reproduces the additional passive layers due to the cooling system of the collimator, as shown by the reduction in efficiency from $z$ = -5 cm to $z$ = -10 cm in Figure \ref{fig:137Cs_Simulation/Experiment}. In this region it was not possible to measure the efficiency experimentally, but, as clearly shown in Figure \ref{fig:DensityProfile}, the density drops by one order of magnitude in this region, making its contribution to the experimental yield negligible. As a consistency check, the setup was also described using the well-tested LUNA Geant3 code \cite{Casella02-NIMA}, with consistent results. 
For the ratio of high-energy to low-energy response, the simulations were validated using the peaks originating from the well known $\mathrm{^{14}N(p,\gamma)^{15}O}$ reaction, and good agreement was found (Figure \ref{fig:14N_Simulation/Experiment}). For the present setup, the uncertainty associated with the validation of the simulations has been assumed to be 4\%. \begin{figure}[tbh] \includegraphics[width=\columnwidth]{EfficiencyProfile} \caption{\label{fig:137Cs_Simulation/Experiment} Efficiency profile along the detector axis for a calibrated $^{137}$Cs source, compared with the Geant4 simulation. The position $z$ is measured from the center of the target chamber.} \end{figure} \begin{figure}[tbh] \includegraphics[width=\columnwidth]{14N_54mm_BDON_addback_EPJA.eps} \caption{\label{fig:14N_Simulation/Experiment} $\mathrm{^{14}N(p,\gamma)^{15}O}$ add-back spectrum taken at the resonance energy $E_p$ = 278 keV. Comparison between the experimental spectrum and the spectrum simulated using the GEANT4 code.} \end{figure} \section{Background} \label{sec:Background} Below 3 MeV $\gamma$-ray energy, natural background dominates the observed add-back spectrum (Figure \ref{fig:NaturalBackground}). This background is actually used as a tool to determine the energy calibration for each individual run (see Section \ref{subsec:BGO} above). \begin{figure}[tbh] \includegraphics[width=\columnwidth]{NaturalBackground_2} \caption{\label{fig:NaturalBackground} Experimental natural background spectrum. } \end{figure} In laboratories above the Earth's surface, the cosmic-ray induced background above 3 MeV usually plays a critical role at low counting rates. However, LUNA benefits from a 1400 m thick rock overburden ($\approx$ 3800 meter water equivalent), which reduces the muon flux by a factor of $10^6$ and the neutron flux by a factor of $10^3$ \cite{Costantini09-RPP,Broggini10-ARNPS}. 
Still, a natural background contribution in the region from 5.5 to 10.5 MeV remains due to (n,$\gamma$) reactions with the detector and experimental setup, as discussed in detail in \cite{Boeltzig17-JOP,Bemmerer05-EPJA}. In add-back mode, the BGO detector, thanks to its high efficiency and large solid angle coverage, is able to effectively detect the $\gamma$ rays originating from the decay of the excited state of the final nucleus, resulting in a peak at $Q+E$ (where $Q$ is the reaction $Q$-value and $E$ is the center of mass energy at which the reaction takes place) in the add-back spectrum. For the reactions under consideration here, $Q$ is always larger than 3\,MeV, namely $Q$ = 8.794 MeV for $^{22}$Ne(p,$\gamma$)$^{23}$Na, 5.493 MeV for $^{2}$H(p,$\gamma$)$^{3}$He, and 10.615 MeV for $^{22}$Ne($\alpha$,$\gamma$)$^{26}$Mg, so that in all cases $Q+E>$ 3\,MeV, and full advantage is taken of the cosmic-ray suppression at LUNA, as shown by the ultra-low rate observed in these energy regions without beam (Figure \ref{fig:NaturalBackground}). Because of this unique situation, another source of background needs careful attention, namely background produced by the ion beam. Indeed, nuclear reactions involving light contaminants may produce high energy $\gamma$ rays, leading to a background which depends on the contaminants in the setup, their position, and the beam energy \cite{Bemmerer05-EPJA}. In the case of a proton beam, the most relevant reactions for ion-beam induced background found in the present setup are $\mathrm{^{11}B(p,\gamma)^{12}C}$, $\mathrm{^{14}N(p,\gamma)^{15}O}$, $\mathrm{^{15}N(p,\gamma)^{16}O}$, $\mathrm{^{19}F(p,\alpha\gamma)^{16}O}$ and $\mathrm{^{18}O(p,\gamma)^{19}F}$. The $\mathrm{^{19}F(p,\alpha\gamma)^{16}O}$ reaction dominates the spectrum for beam energies above its $E_p$ = 340\,keV resonance, with a very large peak at 6.13\,MeV due to the decay of the second excited state of $^{16}$O. 
The $\mathrm{^{18}O(p,\gamma)^{19}F}$ reaction is problematic near its $E_p$ = 151\,keV resonance, because its $Q$-value of 7.994 MeV is only 1\,MeV lower than that of the $^{22}$Ne(p,$\gamma$)$^{23}$Na reaction, and it may in some cases require a narrowing of the $\gamma$-ray region of interest in the data analysis. Above $E_p$ = 278 and 300 keV the \linebreak $\mathrm{^{14}N(p,\gamma)^{15}O}$ (\cite{Lemut06-PLB,Bemmerer06-NPA}, $Q$ = 7.297\,MeV) and $\mathrm{^{15}N(p,\gamma)^{16}O}$ (\cite{Bemmerer09-JPG,Caciolli11-AA}, $Q$ = 12.127\,MeV) reactions contribute to the background, leading to sum peaks at 7.6 MeV and 12.4 MeV, respectively. Finally and most importantly, the $\mathrm{^{11}B(p,\gamma)^{12}C}$ reaction was found to contribute significantly to the counting rate, not only at its $E_p$ = 163\,keV resonance but also above and even below, favoured by the relatively low atomic number of boron. The signatures of this reaction are $\gamma$ rays at 16.1, 11.7, and 4.4 MeV, and a significant additional background from Compton scattering in the region of interest for the studied reactions. In order to subtract the ion-beam induced background from the $^{22}$Ne(p,$\gamma$)$^{23}$Na and $^{22}$Ne($\alpha$,$\gamma$)$^{26}$Mg energy spectra, monitor runs with an inert noble gas, natural argon, were performed. For these runs, the argon gas pressure was set so that the energy loss with argon was the same as with neon gas, in order to also mimic the lateral size of the beam and thus hit similar parts of the target chamber. Similarly, helium gas was used to monitor the beam induced background for the $^2$H(p,$\gamma$)$^3$He reaction. \section{Decay branching ratios of the $E_p$ = 189.5 keV resonance in $^{22}$Ne(p,$\gamma$)$^{23}$Na} \label{sec:189.5keVres} With this setup, a new study of the recently discovered \cite{Cavanna15-PRL,Depalo16-PRC} and confirmed \cite{Kelly17-PRC} $\mathrm{^{22}Ne(p,\gamma)^{23}Na}$ resonances was carried out. 
Here, new data on the branching ratios of the $E_p$ = 189.5 keV resonance are shown, for which discrepancies have been reported between the recent direct measurements \cite{Cavanna15-PRL,Depalo16-PRC,Kelly17-PRC}. Further results on the resonance strength $\omega \gamma$ and on the other resonances will be reported in a forthcoming publication. After determining the beam energy of maximum yield by a resonance scan, a high-statistics run was performed at the maximum of the yield curve. The data taken here differ from the previous LUNA data with two HPGe detectors reported in \cite{Cavanna15-PRL,Depalo16-PRC} in three respects: First, the $\gamma$-ray energy resolution is inferior in the new data. Second, the counting rate is a hundred times higher. Third, angle-averaged data are available due to the quasi-4$\pi$ angular coverage of the BGO borehole detector. In addition to the on-resonance run with a proton beam incident on $^{22}$Ne gas, a background monitor run was performed with proton beam on argon gas. The argon spectrum was used to subtract the beam induced background (given in this instance exclusively by the $\mathrm{^{11}B(p,\gamma)^{12}C}$ reaction), by using the counting rate in the $E_\gamma$ = 10.5-17.0\,MeV region to match the spectra \cite{Takacs17-PhD,Ferraro17-PhD} (see Figure \ref{fig:figure_new}). The background amounted to 7\% of the raw counts in the region of interest (ROI). \begin{figure}[tb] \includegraphics[width=\columnwidth]{Figure10_v2.pdf} \caption{\label{fig:figure_new} The experimental spectra acquired using neon (argon) gas, in red (black). The two spectra are normalised to match the region $E_\gamma$ = 10.5-17.0\,MeV as discussed in the text. The main sources of beam induced background are also labelled in the figure.} \end{figure} The $^{22}$Ne(p,$\gamma$)$^{23}$Na ROI in the add-back spectrum was $E_\gamma$ = 8.0-9.7 MeV. 
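The normalisation-and-subtraction procedure just described can be sketched in a few lines: the argon monitor spectrum is scaled so that it matches the neon spectrum in the $E_\gamma$ = 10.5-17.0\,MeV window (dominated by the $\mathrm{^{11}B(p,\gamma)^{12}C}$ background) and then subtracted bin by bin. The spectra below are toy arrays, not measured data.

```python
# Sketch of the beam-induced background subtraction: scale the argon monitor
# spectrum to the neon spectrum in a normalisation window, then subtract.
# The spectra below are toy arrays, not measured LUNA data.
def subtract_monitor(neon, argon, window):
    """Return (background-subtracted spectrum, scale factor)."""
    lo, hi = window
    scale = sum(neon[lo:hi]) / sum(argon[lo:hi])
    net = [n - scale * a for n, a in zip(neon, argon)]
    return net, scale

neon  = [50.0, 40.0, 120.0, 10.0, 8.0]   # last two bins: normalisation window
argon = [10.0, 12.0,  15.0,  5.0, 4.0]
net, scale = subtract_monitor(neon, argon, window=(3, 5))
```

By construction the net spectrum vanishes inside the normalisation window, while the counts attributable to the reaction of interest survive in the ROI bins.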
As a next step, by gating on the $^{22}$Ne(p,$\gamma$)$^{23}$Na ROI in the add-back spectrum and again subtracting the $^{11}$B background, a $^{22}$Ne(p,$\gamma$)$^{23}$Na-only single sum spectrum was generated (Figure \ref{fig:Gated189}). The signatures from the complex decay pattern of the resonance are clearly apparent. The appropriateness of the background subtraction is confirmed by the fact that the remaining counts in the ROI can be explained by the $^{22}$Ne(p,$\gamma$)$^{23}$Na resonance. \begin{figure*}[tb] \includegraphics[width=\textwidth]{Figure_189BR_2.eps} \caption{\label{fig:Gated189} Single sum spectrum on top of the 189.5\,keV resonance in $^{22}$Ne(p,$\gamma$)$^{23}$Na, gated on the add-back energy in the sum peak, $E_\gamma^{\rm sum}$ $\in$ [8.0;9.7] MeV. The data are compared with simulated branchings from LUNA-HPGe \cite{Depalo16-PRC}, TUNL \cite{Kelly17-PRC}, and the best fit results from the present data, LUNA-BGO.} \end{figure*} \begin{table}[tb] \begin{tabular}{l c c c} \hline \noalign{\vskip 1 mm} \multicolumn{1}{c}{$\gamma$ transition} & \multicolumn{3}{c}{Branching [\%]} \\ & LUNA & TUNL & LUNA \\ & HPGe \cite{Depalo16-PRC} & \cite{Kelly17-PRC} & BGO [this work] \\ \noalign{\vskip 1 mm} \hline \hline \noalign{\vskip 1 mm} 8975 $\rightarrow 0$ & & $5.3\pm1.4$ & $\leq 1$ \\ 8975 $\rightarrow 440$ & $42.8 \pm 0.9$ & $37.7\pm1.5$ & $35 \pm 6$ \\ 8975 $\rightarrow 2076$ & $47.9 \pm 0.9$ & $39.8\pm1.3$ & $53 \pm 6$ \\ 8975 $\rightarrow 2982$ & $3.7 \pm 0.5$ & $5.0\pm0.8$ & $3.3 \pm 0.7$ \\ 8975 $\rightarrow 3678$ & & $2.2\pm0.8$ & $2.4 \pm 0.5$ \\ 8975 $\rightarrow 3914$ & $1.1 \pm 0.3$ & $3.1\pm0.6$ & $1.6 \pm 0.5$ \\ 8975 $\rightarrow 4775$ & $1.8 \pm 0.2$ & $\leq 3.0$ & $1.9 \pm 0.4$ \\ 8975 $\rightarrow 6618$ & $2.7 \pm 0.2$ & $4.7\pm0.9$ & $2.5 \pm 0.8$ \\ \hline \hline \end{tabular} \caption{\label{tab:Branchings189} Decay branching ratios for the 189.5\,keV resonance in $^{22}$Ne(p,$\gamma$)$^{23}$Na (corresponding to the 
$E_x$ = 8975\,keV excited state) from LUNA-HPGe \cite{Depalo16-PRC}, TUNL \cite{Kelly17-PRC}, and from the present work, here labeled LUNA-BGO. The work by Jenkins {\it et al.} \cite{Jenkins13-PRC} showed only the decay to the $E_x$ = 2982\,keV level.} \end{table} This single sum spectrum was used to test several hypotheses regarding the branching ratios, from the LUNA-HPGe experiment \cite{Depalo16-PRC} and from the TUNL experiment \cite{Kelly17-PRC}, both obtained at a 55$^\circ$ angle. Both sets of branching ratios give a good match at $E_\gamma >$ 6\,MeV and $E_\gamma <$ 2\,MeV. However, it seems that neither of the two branching sets provides a good match in the central part of the spectrum, $E_\gamma$ = 2-6\,MeV. In addition, the high branching to the ground state reported in \cite{Kelly17-PRC} seems inconsistent with the experimental spectrum. The experimental yield at $E_\gamma \sim$ 9\,MeV can be completely explained by summing effects from cascade transitions, and only an upper limit is found for the ground state branch. Despite the fact that the LUNA BGO summing detector is not particularly sensitive to the branchings of the $\gamma$-ray cascades, an attempt to determine the branching ratios has been made by fitting the single sum spectrum with the simulated spectra. In Table \ref{tab:Branchings189}, the results of this fitting procedure are shown. The uncertainties on the branching probabilities are based on the errors from the fitting procedure using MINUIT and from cross-checks using simulated templates from both Geant4 and Geant3. It is clear that while any angular effects can be safely excluded, the limited energy resolution restricts the precision of the branching values, which remains in general worse than that reported in \cite{Depalo16-PRC} and \cite{Kelly17-PRC}. The new LUNA result is in good agreement with the previous one reported in \cite{Depalo16-PRC}. 
However, thanks to the high efficiency of the new setup, a contribution from the transition to the 3678\,keV level of $^{23}$Na is required. This transition was observed in the HPGe phase with a statistically insignificant number of counts, reported as an upper limit ($<$ 0.7\%) in two Ph.D. theses \cite{Depalo15-PhD,Cavanna15-PhD}, and then disregarded in the final publication. This could be an effect of the different coverage of the angular distribution. When comparing with TUNL, a stronger branching for the main transition, 8975\,keV$\rightarrow$2076\,keV (and onward mainly through the 440\,keV state to the ground state), is found, mainly caused by the observed yield near the $E_\gamma$ = 5899\,keV primary $\gamma$ ray. In contrast, three minor transitions (ground state, 8975\,keV$\rightarrow$2982\,keV, 8975\,keV$\rightarrow$3914\,keV) are found to be weaker. \section{Summary and outlook} \label{sec:Summary} In summary, a new high-efficiency setup was developed to investigate the low energy yield in the $\mathrm{^{22}Ne(p,\gamma)^{23}Na}$, $\mathrm{^{22}Ne(\alpha,\gamma)^{26}Mg}$, and $\mathrm{^{2}H(p,\gamma)^{3}He}$ reactions. The setup has been characterised and tested. As a first application of the new setup, the decay branching ratios of the $E_p$ = 189.5 keV resonance in \linebreak $^{22}$Ne(p,$\gamma$)$^{23}$Na were determined, independently of any possible angular effects. The new results confirm the previous LUNA ones \cite{Depalo16-PRC} and show a stronger branching for 8975\,keV$\rightarrow$2076\,keV with respect to TUNL \cite{Kelly17-PRC}, while not confirming the evidence for a transition to the ground state in the $E_p$ = 189.5 keV resonance. This result also shows the capability of the new setup for determining branching ratios even for complex cascade transitions, using a nearly 4$\pi$ detection setup with moderate energy resolution. 
This could be important when investigating resonances with very low resonance strength, not detectable with detection systems of lower efficiency. \subsection*{Acknowledgments} Financial support by INFN, by the Helmholtz Association Nuclear Astrophysics Virtual Institute (NAVI, HGF VH-VI-417), by NKFIH (K120666), by the Hungarian Academy of Sciences (LP2014-17), and by the Deutsche Forschungsgemeinschaft (DFG, BE 4100/4-1) is gratefully acknowledged.
\section*{Introduction} We assume throughout this article that algebras are finite dimensional algebras over a field $K$. A projective left module $Af$ with an idempotent $f$ is said to be \emph{minimal faithful projective-injective} in case $Af$ is a faithful projective-injective left module and every faithful projective-injective left module has $Af$ as a direct summand. The notion of a minimal faithful projective-injective right module is defined similarly. $A$ is said to have the \emph{double centraliser property} with respect to the minimal faithful projective-injective module $Af$ in case $A \cong End_{fAf}(Af)$ and $fAf \cong End_A(Af)$ (where the second isomorphism is automatic and always holds for any idempotent $f$). This double centraliser property occurs in many places in mathematics; we mention Schur-Weyl duality, where $A$ is the Schur algebra $S(n,r)$ for $n \geq r$ and $fAf$ is the group algebra of the corresponding symmetric group, and also the double centraliser property of blocks of category $\mathcal{O}$ arising from Lie theory, where $A$ is some block of the Bernstein-Gelfand-Gelfand category $\mathcal{O}$ and $fAf$ is a symmetric local algebra; we refer to \cite{KSX} for proofs and more on this. Let $A$ be an algebra with minimal injective coresolution $(I_i)$ of the regular module $A$: $$0 \rightarrow A \rightarrow I_0 \rightarrow I_1 \rightarrow \cdots .$$ The \emph{dominant dimension} of $A$ is defined as zero in case $I_0$ is not projective and equal to $\sup \{ n \geq 0 | I_i$ is projective for $i=0,1,...,n \}+1$ in case $I_0$ is projective. It can be shown that $A$ has dominant dimension at least one if and only if there is a minimal faithful projective-injective right module $eA$ if and only if there is a minimal faithful projective-injective left module $Af$, see for example chapter 4 of \cite{Ta}. Note that for algebras with dominant dimension at least one, one has $eAe \cong fAf$ as algebras. 
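As a small worked illustration of this definition (our example, not taken from the literature): suppose the minimal injective coresolution of $A$ begins with two projective terms and a non-projective third term,

```latex
% Toy example for the definition of dominant dimension: I_0, I_1 projective,
% I_2 not projective.
\[
  0 \rightarrow A \rightarrow \underbrace{I_0}_{\text{projective}}
    \rightarrow \underbrace{I_1}_{\text{projective}}
    \rightarrow \underbrace{I_2}_{\text{not projective}}
    \rightarrow \cdots
\]
% Then domdim(A) = sup{ n >= 0 | I_i projective for i = 0,...,n } + 1
%               = 1 + 1 = 2.
```

so that the dominant dimension of $A$ equals $1+1=2$, which is exactly the threshold at which the double centraliser property discussed below holds.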
Furthermore, $A$ has the double centraliser property with respect to a minimal faithful projective-injective left module $Af$ if and only if it has dominant dimension at least two, see for example \cite{Ta} chapter 10. The class of algebras having dominant dimension at least two is very large and includes for example the higher Auslander algebras introduced by Iyama in \cite{Iya} and the Morita algebras recently introduced by Kerner and Yamagata in \cite{KerYam}. Because of this equivalent characterisation via the dominant dimension of algebras having the double centraliser property with respect to a minimal faithful projective-injective module, we will often speak for short about algebras with dominant dimension at least two instead of the longer term of algebras having the double centraliser property with respect to a minimal faithful projective-injective module. Recall that an algebra $A$ is called \emph{monomial} in case $A \cong KQ/I$ for some finite quiver $Q$ with an admissible ideal $I$ that is monomial, which means that it is generated by non-zero paths. Note that an algebra $A$ of the form $KQ/I_1$ with an admissible non-monomial ideal $I_1$ can be isomorphic to an algebra $KQ/I_2$, where $I_2$ is an admissible monomial ideal, so that $A$ is a monomial algebra even though $I_1$ is not monomial, see for example exercise 4 in chapter I. of \cite{SkoYam}. In this article we give a classification of monomial algebras having the double centraliser property with respect to a minimal faithful projective-injective module. Since we will deal here mainly with monomial algebras, we will assume in the following that every algebra is connected and given by quiver and admissible relations if not stated otherwise. Recall that a \emph{Nakayama algebra} (some authors call these algebras serial algebras) is an algebra where every indecomposable projective and every indecomposable injective module is uniserial. 
It can be shown that for Nakayama algebras in fact every indecomposable module is uniserial and that the quiver of a Nakayama algebra can have only the two shapes shown in the following. The quiver of a Nakayama algebra with a cyclic quiver: $$Q=\xymatrix{ & \circ^0 \ar[r] & \circ^1 \ar[dr] & \\ \circ^{n-1} \ar[ur] & & & \circ^2 \ar[d] \\ \circ^{n-2} \ar[u] & & & \circ^3 \ar[dl] \\ & \circ^5 \ar @{} [ul] |{\ddots} & \circ^4 \ar[l] & }$$ \newline \newline The quiver of a Nakayama algebra with a linear quiver: $$Q=\xymatrix{ \circ^0 \ar[r] & \circ^1 \ar[r] & \circ^2 \ar @{} [r] |{\cdots} & \circ^{n-2} \ar[r] & \circ^{n-1}}$$ In particular, every Nakayama algebra is a monomial algebra. We refer for example to \cite{AnFul} and \cite{SkoYam} for proofs and more on Nakayama algebras. For a module $N$, $add(N)$ denotes the full subcategory of $mod-A$ consisting of finite direct sums of indecomposable modules that are direct summands of $N$. Recall that a module $N$ is a generator in case it contains every indecomposable projective module as a direct summand and $N$ is a cogenerator in case it contains every indecomposable injective module as a direct summand. We call a module $N$ a \emph{generator-cogenerator} in case it is a generator and a cogenerator. A module $N$ is called basic in case it does not have a direct summand of the form $M^2$ where $M$ is an indecomposable non-zero module. $D:=Hom_K(-,K)$ denotes the natural duality of a finite dimensional algebra. Our main theorem of this article can be stated as follows: \begin{theorem*} Let $A$ be a finite dimensional algebra. The following are equivalent: \begin{enumerate} \item $A$ is a monomial algebra with dominant dimension at least two. \item $A \cong End_B(M)$, where $B$ is a Nakayama algebra and $M$ a basic generator-cogenerator in \newline $add(B \oplus D(B) \oplus D(B)/soc(D(B)))$. \item $A$ is a Nakayama algebra with dominant dimension at least two.
\end{enumerate} \end{theorem*} We apply this theorem to also give a classification of monomial Morita algebras. The author thanks Aaron Chan for useful discussions and for allowing him to use his example of a monomial algebra with dominant dimension equal to one that is not a Nakayama algebra in \ref{finalexample}. \section{Proof of the main theorem} We assume that all algebras are given by quiver and relations and are connected finite dimensional algebras over a field $K$. We assume that all modules are finite dimensional right modules if not stated otherwise. We remark that we still often use left modules when talking about minimal faithful projective-injective left modules, since the double centraliser property has a nicer form when using left instead of right modules. We assume that the reader is familiar with the basics of the representation theory of finite dimensional algebras and refer for example to the books \cite{ASS}, \cite{SkoYam} and \cite{DW}. We refer also to \cite{Yam} for a survey article that treats dominant dimension and to \cite{Ta} for a textbook that treats dominant dimension and double centraliser properties. Before we can prove the main theorem of this article, we recall some results from the literature. \begin{theorem} \label{domdim2chara} The following are equivalent for a finite dimensional algebra $A$: \begin{enumerate} \item $A$ has dominant dimension at least two. \item $A \cong End_B(M)$ for an algebra $B$ with a generator-cogenerator $M$. \item $A$ has the double centraliser property with respect to a minimal faithful projective-injective left module $Af$. \end{enumerate} \end{theorem} \begin{proof} See for example \cite{Ta} chapter 10 or \cite{Rin}. \end{proof} The algebra $B$ as in the previous theorem is called the \emph{base algebra} of an algebra $A$ of dominant dimension at least one and is uniquely determined as the algebra $fAf$ when $Af$ is the minimal faithful projective-injective left $A$-module.
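To illustrate the theorem with a concrete example (our own, not taken from the cited sources): let $B=K[x]/(x^2)$, a selfinjective Nakayama algebra, and let $M=B \oplus K$ be the basic generator-cogenerator obtained by adding the unique simple $B$-module $K$. Then $A=End_B(M)$ is the Auslander algebra of $B$ and, by the theorem, has dominant dimension at least two; equivalently, it has the double centraliser property with respect to its minimal faithful projective-injective left module, with base algebra isomorphic to $B$.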
\begin{theorem} \label{yamagatatheorem} Let $A$ be a Nakayama algebra and $M$ a basic generator-cogenerator. Then $End_A(M)$ is a Nakayama algebra if and only if $M \in add(A \oplus D(A) \oplus D(A)/soc(D(A)))$. \end{theorem} \begin{proof} This is the main result of \cite{Yam2} specialised to quiver algebras and generator-cogenerators. \end{proof} \begin{proposition} \label{nakayamapropo} Let $A$ be a monomial algebra with minimal faithful projective-injective left module $Af$. Then $fAf$ is a Nakayama algebra. \end{proposition} \begin{proof} See \cite{Mar}, proposition 2.19. \end{proof} \begin{lemma} \label{APTlemma} Let $A$ be an algebra with a basic generator-cogenerator $M$ and let $B=End_A(M)$. Let $M_i$ be the indecomposable direct summands of $M$. The indecomposable projective $B$-modules are exactly the modules $Hom_A(M,M_i)$, and the indecomposable projective-injective $B$-modules are exactly the modules $Hom_A(M,M_i)$ for the injective indecomposable $A$-modules $M_i$. \end{lemma} \begin{proof} This is a special case of lemma 3.1 in \cite{APT}. \end{proof} As a generalisation of quasi-Frobenius algebras, \emph{QF-2 algebras} were defined as algebras such that the socle of every indecomposable projective module is simple. We refer to \cite{Yam} for more on those algebras. \begin{proposition} \label{nakayamaqf2} Let $A$ be a Nakayama algebra with a basic generator-cogenerator $M$. Then $B=End_A(M)$ is a QF-2 algebra. \end{proposition} \begin{proof} Recall from \ref{APTlemma} that every indecomposable projective $B$-module is isomorphic to $Hom_A(M,M_i)$, where the $M_i$ denote the indecomposable direct summands of $M$. Let $I$ be an indecomposable injective direct summand of $M$. Then by \ref{APTlemma} the $B$-modules $Hom_A(M,I)$ are projective-injective indecomposable and thus have simple socle. Now let $N$ be an indecomposable direct summand of $M$ that is not injective. Since $A$ is a Nakayama algebra, any indecomposable $A$-module is uniserial and thus has simple socle.
Now since $N$ has simple socle, its injective envelope $I(N)$ is indecomposable and there is the following short exact sequence: $$0 \rightarrow N \rightarrow I(N) \rightarrow K \rightarrow 0,$$ where $K$ is the cokernel of the inclusion $N \rightarrow I(N)$. Applying the functor $Hom_A(M,-)$ to this short exact sequence and using that it is left exact, we obtain an inclusion of $B$-modules: $$0 \rightarrow Hom_A(M,N) \rightarrow Hom_A(M,I(N)).$$ This shows that the indecomposable projective $B$-module $Hom_A(M,N)$ is a submodule of the indecomposable projective-injective $B$-module $Hom_A(M,I(N))$. But since $Hom_A(M,I(N))$ has simple socle, every submodule of it has simple socle as well, and thus $Hom_A(M,N)$ has simple socle, which finishes the proof. \end{proof} \begin{lemma} \label{monomialqf2impliesnaka} Let $A$ be a monomial QF-2 algebra. Then $A$ is a Nakayama algebra. \end{lemma} \begin{proof} Assume $A$ is a monomial QF-2 algebra but not a Nakayama algebra. We can assume that $A \cong KQ/I$ with an admissible monomial ideal $I$. We will show that this gives a contradiction. Since $A$ is not a Nakayama algebra, there is a point in the quiver of $A$ at which at least two arrows start or at least two arrows end. We look at both cases. \newline \underline{Case 1:} Assume there is a point $i$ in the quiver of $A$ where at least two arrows start. Since we assume $A$ to be QF-2, the indecomposable projective module $e_i A$ has simple socle, which is equivalent to the condition that there is a unique longest path starting at $i$, since $A$ is assumed to be monomial. But since $A$ is monomial and at least two arrows $\alpha_1$ and $\alpha_2$ start at $i$, there are at least two longest paths $p_1= \alpha_1 \cdots$ and $p_2 = \alpha_2 \cdots$ starting with $\alpha_1$ and $\alpha_2$ respectively.
Since $A$ is monomial and the admissible ideal $I$ contains no commutativity relations, the paths $p_1$ and $p_2$ cannot be identified and $e_i A$ cannot have simple socle. This is a contradiction. \newline \underline{Case 2:} Assume there is a point $i$ in the quiver of $A$ where at least two arrows end. In this case look at the algebra $B:=A^{op}$, the opposite algebra of $A$. Then $B$ is a monomial QF-2 algebra that is not a Nakayama algebra with a point $i$ where at least two arrows start, so we are in case 1 and obtain a contradiction. \end{proof} With all the work done, we can now give an easy proof of our main theorem. \begin{theorem} \label{mainresult} Let $A$ be a finite dimensional algebra. The following are equivalent: \begin{enumerate} \item $A$ is a monomial algebra with dominant dimension at least two. \item $A$ is a Nakayama algebra with dominant dimension at least two. \item $A \cong End_B(M)$, where $B$ is a Nakayama algebra and $M$ a basic generator-cogenerator in $add(B \oplus D(B) \oplus D(B)/soc(D(B)))$. \end{enumerate} \end{theorem} \begin{proof} We first show (1) $\implies$ (2): Assume $A$ is a monomial algebra with dominant dimension at least two. Thus there is a minimal faithful projective-injective left $A$-module $Af$ such that $A \cong End_{fAf}(Af)$ by \ref{domdim2chara}. By \ref{nakayamapropo} the algebra $fAf$ is a Nakayama algebra, and by \ref{nakayamaqf2} $A$ is a QF-2 algebra since $Af$ is a generator-cogenerator. By \ref{monomialqf2impliesnaka}, $A$ is then a Nakayama algebra. \newline That (2) implies (3) follows directly from \ref{domdim2chara} combined with \ref{yamagatatheorem}. Assume (3) holds; then by \ref{yamagatatheorem} $A$ is a Nakayama algebra with dominant dimension at least two and thus also a monomial algebra with dominant dimension at least two, since every Nakayama algebra is a monomial algebra, and thus (1) follows.
\end{proof} Following \cite{KerYam}, a \emph{Morita algebra} is by definition an algebra $A$ with dominant dimension at least two such that $fAf$ is a selfinjective algebra, where $Af$ denotes the minimal faithful projective-injective left module. As a corollary of our main result, we can give a classification of monomial Morita algebras. Note that selfinjective Nakayama algebras are exactly those Nakayama algebras where the indecomposable projective modules all have the same vector space dimension, see for example \cite{SkoYam} theorem 6.15 in chapter IV. \begin{corollary} Let $A$ be a monomial Morita algebra. Then $A$ is isomorphic to $End_B(M)$, where $B$ is a selfinjective Nakayama algebra and $M=B \oplus N$, where $N$ is a direct sum of distinct indecomposable modules of the form $P/soc(P)$ with $P$ an indecomposable projective $B$-module. \end{corollary} \begin{proof} This is a direct consequence of \ref{mainresult} when noting that $D(B) \cong B$ and thus \newline $add(B \oplus D(B) \oplus D(B)/soc(D(B)))=add(B \oplus B/soc(B))$, because $B$ has to be selfinjective. \end{proof} We note that Nakayama algebras with dominant dimension at least two are an interesting class of algebras; they are characterised in \cite{Ful}, lemma 4.3, and in chapter 5 of \cite{NRTZ} those algebras are characterised using tilting theory. While all monomial algebras with dominant dimension at least two are Nakayama algebras, there are monomial algebras with dominant dimension equal to one that are not Nakayama algebras, as the following example due to Aaron Chan shows: \begin{example} \label{finalexample} Let $Q$ be the quiver: $$Q=\xymatrix{ \circ^1 \ar[r]^{\alpha_1} & \circ^2 \ar[dr]^{\alpha_4} \ar[dl]^{\alpha_3} & \circ^3 \ar[l]^{\alpha_2} \\ \circ^4 & & \circ^5}$$ and let $I$ be the admissible ideal $I=\langle \alpha_1 \alpha_3 , \alpha_2 \alpha_4 \rangle$. Let $A=KQ/I$. Then $A$ is a monomial algebra with dominant dimension equal to one, but $A$ is not a Nakayama algebra.
\end{example}
\section{Introduction} Compared to other buildings, supermarkets consume proportionately more energy~\cite{TIMMA2016435,Mylona}. This is mainly due to the refrigeration needed to slow down the deterioration of food by keeping it at a predetermined temperature~\cite{Mylona}. Electricity costs associated with refrigeration account for a large part of the operating costs because these machines are continually utilizing energy, day and night. As a result, costs associated with refrigeration equipment can represent more than 50\% of the total energy costs~\cite{Opti,Mavro,FoodRetail,TIMMA2016435}. Retailers operate in an industry that is characterized as competitive and low-margin~\cite{Opti}. If they are able to become more energy efficient, this can make them more competitive. This underlines the importance of operating the system at its optimum performance level so the associated energy costs can be reduced. Energy baselining makes it possible to analyze the energy consumption by comparing it to a reference behavior~\cite{Stulka}. Furthermore, it can be used to measure the effectiveness of energy efficiency policies by monitoring energy usage over time. Changes in energy policies, such as retrofitting the equipment, can require high investments. This makes it important for a retailer to know whether the investments are truly effective in reducing energy consumption. To estimate energy savings with reasonable accuracy, the energy baselines need to be accurate. It can be challenging to estimate the quality of these energy baselines. One way is to run the old policies in parallel with the new ones, which is often impossible. Determining the quality of these baselines can therefore yield significant benefits for supermarkets. The objective of this work is to develop energy baselines using off-the-shelf data science technologies. Different technologies will be tested and applied on the data obtained from several supermarkets to test their performance.
Five supermarkets in Portugal will be analyzed as a case study with a methodology based on energy baselining. \section{Background} The characteristics of the food-retail industry, such as fierce competition and low margins, make retailers continually search for ways to operate more efficiently~\cite{Opti}. Since energy costs are the second highest costs for a retailer~\cite{HPAC}, a decent energy management process is vital for improving efficiency~\cite{SCHULZE20163692}. Energy Management (EM) has been the subject of numerous studies throughout the years, and, because the field of EM is wide, it can be described in many different ways~\cite{SCHULZE20163692}. A purpose of EM is to search for improved strategies to consume energy in a more efficient way. From a business point of view, greater energy efficiency is of importance because it provides a number of direct and indirect economic benefits~\cite{WORRELL20031081}. Several reasons can keep companies from investing in energy efficiency measures~\cite{Gillingham}. For example, when inadequate information is available about the results of these investments, this can keep companies from investing in them~\cite{Gillingham}. Energy management can focus on addressing these factors to enable businesses to invest. In order to evaluate the effect of an energy efficiency measure, the observed energy consumption of the store/system must be compared to a \emph{reference behavior}~\cite{Stulka}. One way to create this reference behavior is energy baselining, where the reference behavior is defined as the previous, historically best, or ideal theoretical performance of the given store~\cite{Mavro}. Energy baselines are usually created from the analysis of historical data~\cite{Stulka} and can be developed using traditional data mining techniques.
Time-series prediction is a method of forecasting future values based on historical data~\cite{CHOU2016751}. In time series forecasting, forecasts are made on the basis of data comprising one or more time series~\cite{chatfield2000time}. Time series data are defined as data captured over a period of time~\cite{hamilton1994time} (Eq.~\ref{eq:TimeseriesFormula}): \begin{equation} \label{eq:TimeseriesFormula} X_{1},X_{2},\ldots X_{t-1},X_{t}\ldots \end{equation} where $X_{t}$ is the value measured at time $t$. Creating energy forecasts is an important aspect of the energy management of buildings~\cite{WANG2017796}. Finally, making forecasts can also help in model evaluation when testing different time series algorithms~\cite{chatfield2000time}. We want to be able to use domain-specific knowledge to engineer new features; therefore, we decided to follow a regression approach. Regression is not a time series specific algorithm for forecasting; however, it can be applied to make time series forecasts. In multiple regression models, we forecast the dependent variable using a linear combination of the independent variables. Based on this relationship, the algorithm will be able to predict a value for the dependent variable. We selected off-the-shelf machine learning algorithms like Multiple Linear Regression (MLR), Random Forests (RF) and Artificial Neural Networks (ANN) to perform the regression. One way to test the accuracy of the algorithms is to compare the predicted values with the actual observed values. Nowadays, Machine Learning models and methods are applied in various areas and are used to make important decisions which can have far-reaching consequences~\cite{BERGMEIR2012192}. Therefore, it is important to evaluate their performance. Currently, Cross-Validation (CV) is the widely accepted and most used evaluation technique in data analysis and machine learning~\cite{JIANG2017219,BERGMEIR2012192}.
However, Cross-Validation does not work well in evaluating the predictive performance of time series~\cite{JIANG2017219}. One way to validate the prediction performance of a time series model is to make use of a Sliding Window design~\cite{HOOT2008116} (Figure~\ref{fig:TS}). In this method, the algorithm is trained and tested in different periods of time. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{Images/SlidingWindowValidation.jpg} \caption{Example of a Sliding Window Validation} \label{fig:TS} \end{figure} To evaluate the prediction performance of the algorithms, we used the Mean Absolute Error (MAE) as the error metric because the MAE is the most natural measure of the average prediction error~\cite{MAE,Wilmott}. The following formula shows how the Mean Absolute Error is calculated: \begin{equation} \label{eq:MAE} MAE = \frac{1}{N}\sum\limits_{{i = 1}}^N {\left| {\hat{Y_i} - {{Y_i}}} \right|} \end{equation} Here \(\hat{Y_i}\) is the predicted value and \({Y_i}\) is the observed value. Numerous studies have focused on energy prediction because forecasting the energy consumption is an important component of any energy management system~\cite{Nasr}. In New Zealand~\cite{SAFA2017107}, researchers used MLR to calculate the optimal energy usage level for office buildings, based on monthly outside temperatures and numbers of full-time employees. With this knowledge, they could build an energy monitoring and auditing system for the optimization and reduction of energy consumption. In the UK~\cite{SPSS}, researchers used an MLR to forecast the expected effect of climate change on the energy consumption of a specific supermarket. They estimated that, by 2040, gas consumption will increase by 28\%, compared to electricity usage, which will increase by only 5.5\%. In the UK, most supermarkets negotiate energy prices and, when they exceed their predicted demand, they have to pay a penalty.
Therefore, their ability to accurately predict energy consumption will facilitate their negotiations on electricity tariffs with suppliers. One supermarket in the UK used ANNs to analyze the store's total electricity consumption as well as its individual systems, such as refrigeration and lighting~\cite{Mavro}. For each of these systems, they developed a model to provide an energy baseline. This baseline is used for performance monitoring, which is vital to ensure systems perform adequately and to guarantee that operating costs and energy use are kept to a minimum. Finally, ANNs have been used for energy prediction with the final goal of estimating a supermarket's future CO$_2$ emissions~\cite{Chari}. A recent paper~\cite{WANG2017796} provides a detailed literature review on the state-of-the-art developments of Artificial Intelligence (AI) based models for building energy use prediction. It provides insight into ensemble learning, which combines multiple AI-based models to improve prediction accuracy. The paper concludes that ensemble methods have the best prediction accuracy but that a high level of technical knowledge and computational resources is required to develop them. Consequently, this has hindered their application in real practice. An advantage of high prediction accuracy is that it can allow early detection of equipment faults that could disrupt store operations~\cite{Mavro}. These studies show that predicting energy consumption is possible with data mining techniques and that they can predict energy usage within acceptable errors. Compared to other engineering methods, ensemble methods require less detailed information on the physical building parameters~\cite{WANG2017796}. This saves money and time in conducting predictions compared to simulation tools. Hence, they could replace them in the future. Because studies use different types and volumes of input data, there is no unified input data format.
Therefore, knowledge of the methods and a variety of data is needed to create meaningful and accurate predictions. \section{Defining baselines with Machine Learning Algorithms} Every forecast $\hat{Y_i}$ of an observed value ${Y_i}$ will have a forecast error $E$, which describes the deviation between them. These deviations can result from poor prediction performance or from energy savings/losses. It is very hard to forecast a numeric value exactly; the deviations can be larger or smaller. Thus, to provide good estimates of the effect of changes in energy management policies, it is important to have a learning model that can create energy baselines that are as accurate as possible. The objective of this study is to assess the reliability of the learning model in different aspects. First, we want to determine which model is best at creating a reliable baseline with the least amount of training days. This can be beneficial in two specific situations: when a retailer opens a new store, or implements new energy policies. When a new store is opened, no data has been collected about the energy performance of \emph{this} specific store. To create a baseline as soon as possible, it is essential to know how many days it takes to collect sufficient data. Therefore, we study the minimum number of days needed to create a reliable baseline. This information is also useful for updating the baseline when the configuration of the store changes, e.g., due to upgrades of the refrigeration equipment. When we know this setup, we want to discover the lifespan of this prediction, i.e., how long the energy baseline remains reliable after being learned. It is important to determine how reliable the baseline is and whether it needs updating, because we expect that the prediction error will grow over time. As a result, the prediction error will behave differently for short and long term predictions.
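The way the error could be tracked along the prediction horizon can be sketched in a few lines (a minimal illustration of our own with fabricated numbers; the function names are ours, not from the paper):

```python
# Sketch: track how the forecast error develops along the prediction horizon
# by averaging the absolute error over blocks of 10 subsequent days.
# All data below is fabricated for illustration.

def mae(pred, obs):
    """Mean Absolute Error between two equal-length sequences."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(pred)

def horizon_mae(pred, obs, block=10):
    """One MAE per block of `block` subsequent predictions."""
    return [mae(pred[i:i + block], obs[i:i + block])
            for i in range(0, len(pred) - block + 1, block)]

# Toy forecast whose error grows over the horizon, as expected for a
# baseline that slowly drifts away from the observed consumption.
obs = [100.0] * 30
pred = [100.0 + 0.1 * t for t in range(30)]
print(horizon_mae(pred, obs))  # three MAEs, one per 10-day block
```

Plotting such per-block errors against the horizon is what allows the point of unreliability, and hence the update frequency, to be read off.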
With this information, the life-cycle of a model can be determined, which defines how often the model needs to be updated. When a new energy saving policy is implemented, the Retailer wants to estimate how much energy is saved. Therefore, a model has to be developed which is able to make long term predictions based on the old configuration of the store. With this baseline, the Retailer can see what the estimated energy consumption would be if they did not change the layout. By comparing this baseline with the observed energy consumption or the new baseline, the difference can be estimated. We will examine the behavior of the model for long term predictions because the Retailer needs to know for how long he can estimate, with reasonable accuracy, the energy gains from a certain energy policy. \subsection{Approach} We obtained time series data from five supermarkets across Portugal, consisting of measurements of the \emph{Refrigeration Energy Consumption}, the \emph{Outside temperature} and the \emph{Timestamp}. The original time series data was provided in sometimes irregular 15-minute intervals. After regularizing these intervals, the data is converted into hourly values and eventually transformed to a daily format. The energy consumption is measured in kilowatt hours (kWh) from the Retailer's energy monitoring system. The weather data consists of the outside temperature derived from a sensor placed on the roof of the store and is measured in degrees Celsius ($^{\circ}$C). In order to apply a similar approach to the data of each store, we decided to work separately with datasets that have a similar structure. We will use domain knowledge to create features for the datasets. The process of designing new features based on domain knowledge is called feature engineering~\cite{LI2017232}. Before creating these datasets, we first identified the dependent and the independent variables.
In this study, an energy baseline will be created that reflects the estimated refrigeration energy consumption. Consequently, this will be the dependent variable, and the independent variables are the ones influencing this consumption. Only the factors that are measured by all stores can be used here as independent variables. \subsection{Estimating Reliability} For a retailer it is important to estimate, with reasonable accuracy, the energy savings resulting from energy policies. If we train an algorithm with data from before an energy policy change, we can create an energy baseline that shows what the energy consumption would be if this policy had not been changed. By comparing this energy baseline with the observed consumption after the policy change, we can estimate the energy savings. The first objective of this study is to define the minimal set of training examples needed to build a reliable energy baseline. To do this, we train the machine learning algorithms with different numbers of training days. In each iteration, we increase the number of training examples and evaluate the models' prediction accuracy. When all iterations have been completed, we are ready to plot the error metrics in the learning curves. Because this approach is replicated for the three algorithms, this also reveals which one performs best. After we have selected the learning model which is able to create the baseline with the least amount of data, we define the update frequency of this setup. We expect the prediction error to grow over time, and therefore the energy baseline will become unreliable at some point, when the prediction error becomes too high. To find the point at which we recommend updating, we use the previously defined setup to make predictions for the remaining dataset. As soon as the predictions are made, we compute an MAE for each block of 10 subsequent predictions. Once all the errors are computed, we can plot them to see how the prediction error develops over time.
This enables us to analyze how the prediction accuracy develops along the prediction horizon, and to define the update frequency. Finally, the third part of this research is to analyze the long term prediction performance. This was done by training each model with various sizes of training data and letting it predict the remaining dataset. After the predictions were made, we calculated an MAE for every block of 10 subsequent predictions. Plotting the error metrics allowed us to study their performance over time. \section{Experimental Setup} In order to study the three objectives described before, we designed an approach based on learning curves in combination with sliding windows. Our experimental setup is a variation of the time series approach used by~\cite{Busetti,vanRijn2015}. The method we propose is visualized in Figure~\ref{fig:SDLC}. We decided to use this particular method because we want to train machine learning models with different sizes of historical training data. The learning curves enable us to visualize and evaluate their performance. \subsection{Data} The studied datasets are mainly based on the energy consumption and weather data for the whole year of 2016 and the first half of 2017 (Table~\ref{tab:OriginalData}). The data for each store is available from the moment the store opened or started to collect the data. Hence, for each store, the maximum amount of data is available.
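The preprocessing described above can be sketched as follows (our own illustration with fabricated readings; whether the hourly step sums or averages the raw readings is not specified in the text, so summing kWh values here is an assumption on our part):

```python
# Sketch: irregular ~15-minute energy readings are first bucketed into
# regular hourly values and then aggregated into daily kWh totals.
from datetime import datetime
from collections import defaultdict

readings = [  # (timestamp, kWh in the interval) -- fabricated values
    (datetime(2016, 1, 1, 0, 7), 1.2),
    (datetime(2016, 1, 1, 0, 31), 1.1),
    (datetime(2016, 1, 1, 1, 2), 1.3),
    (datetime(2016, 1, 2, 0, 16), 1.0),
]

def to_hourly(rows):
    """Sum readings into regular hourly buckets (assumed aggregation)."""
    buckets = defaultdict(float)
    for ts, kwh in rows:
        buckets[ts.replace(minute=0, second=0, microsecond=0)] += kwh
    return dict(buckets)

def to_daily(hourly):
    """Aggregate hourly values into daily totals."""
    days = defaultdict(float)
    for ts, kwh in hourly.items():
        days[ts.date()] += kwh
    return dict(days)

daily = to_daily(to_hourly(readings))
for day, kwh in sorted(daily.items()):
    print(day, round(kwh, 2))
```

The daily totals produced this way are the rows that the engineered features and the regression models operate on.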
\begin{table}[hpt]
\caption{Overview Datasets}
\label{tab:OriginalData}
\centering \small
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{llll}
\hline \hline
Store & First day & Last day & Observations \\ \hline
Aveiro & 04/12/2015 & 26/04/2017 & 510 days \\
Fatima & 07/01/2016 & 26/04/2017 & 476 days \\
% Macedo de Cavaleiros & 13/11/2015 & 26/04/2017 & 531 days \\
% Mangualde & 16/05/2016 & 16/05/2017 & 366 days \\
% Regua & 16/05/2016 & 16/05/2017 & 366 days \\
\hline \hline
\end{tabular}
\normalsize
\end{table}
Based on the two available variables, \emph{Timestamp} and \emph{Outside temperature}, we created new features with additional information that the algorithm can use. Designing appropriate features is one of the most important steps in creating good predictions because they can highly influence the results achieved with the learning model~\cite{SILVA2014395}. To determine which features to create, knowledge about the behavior of the store is important~\cite{Mavro}. The domain knowledge required for this process was acquired through conversations with experts, reviewing similar studies~\cite{Mavro,SPSS,SAFA2017107,Chari,KARATASOU2006949,Jacob,OROSA201289} and using descriptive data mining techniques, e.g., Subgroup Discovery (SD). SD is a method to identify unusual relations between dependent and independent variables in the data~\cite{WIDM1144,Herrera2011}. In this study, SD will be used to improve our understanding of the behavior of the energy consumption. Table~\ref{tab:Features} gives an overview of the created features.
\begin{table}[hpt]
\caption{Overview Features}
\label{tab:Features}
\centering \tiny
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{llll}
\hline \hline
Name & Type & Description & Derived from \\ \hline
Weekday & Categorical (1-7) & Day of the week & Timestamp \\
Week of the Month & Categorical (1-4) & Week of the Month & Timestamp \\
Workday & Binary (0-1) & Workday or Weekend & Timestamp \\
Max Temperature & Numerical & Max Temperature of the Day & Temperature \\
Mean Temperature & Numerical & Mean Temperature of the Day & Temperature \\
Min Temperature & Numerical & Min Temperature of the Day & Temperature \\
Temperature Amplitude & Numerical & Absolute Difference Min and Max & Temperature \\
Max Temperature Y.. & Numerical & Max Temperature of Yesterday & Temperature \\
Mean Temperature Y.. & Numerical & Mean Temperature of Yesterday & Temperature \\
Min Temperature Y.. & Numerical & Min Temperature of Yesterday & Temperature \\
Temperature Amplitude Y.. & Numerical & Absolute Difference Min and Max of Yesterday & Temperature \\
\hline \hline
\end{tabular}
\normalsize
\end{table}
\subsection{Algorithms} As mentioned in the background, we use the off-the-shelf machine learning algorithms Multiple Linear Regression (MLR), Random Forests (RF) and Artificial Neural Networks (ANN) to perform the regression. Linear regression is a simple and widely used statistical technique for predictive modeling~\cite{SPSS}. It has been used before to predict the future energy consumption of a supermarket in the UK~\cite{SPSS}. The RF is considered to be one of the most accurate general-purpose learning techniques available and is popular because of its good off-the-shelf performance~\cite{Fernandez,Biau}. Finally, Artificial Neural Networks have successfully been used in recent studies to predict energy consumption~\cite{Mavro,Chari,WANG2017796,KARATASOU2006949,Nasr,FOUCQUIER2013272}.
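As a minimal sketch of the regression idea (our own, with fabricated and exactly linear numbers; the paper's MLR uses all engineered features, while this toy uses a single predictor):

```python
# Ordinary least squares with one predictor: estimating daily refrigeration
# kWh from the mean outside temperature. All numbers are fabricated.

def fit_ols(xs, ys):
    """Return (intercept, slope) minimising the squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

temps = [12.0, 15.0, 18.0, 21.0]    # mean daily temperature (fabricated)
kwh = [210.0, 225.0, 240.0, 255.0]  # daily consumption (fabricated)
a, b = fit_ols(temps, kwh)
predict = lambda t: a + b * t       # the fitted baseline model
print(predict(20.0))  # → 250.0
```

In the actual study the same fit-then-predict step is carried out by the MLR, RF and ANN implementations over all features of Table~\ref{tab:Features}.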
\subsection{Performance Estimation} In machine learning, learning curves reflect the predictive performance as a function of the number of training examples~\cite{LC}. Figure~\ref{fig:LC} illustrates how the learning ability of a model develops as the number of training examples increases. The curve indicates how much better the model becomes at predicting when more training examples are used. The general idea is to find out how good the model can become at predicting, and how many training examples are needed to reach that level~\cite{LC}. Since we are searching for the minimum number of training days to create a baseline, we can use learning curves to identify this number. \begin{figure}[ht!] \centering \includegraphics[scale=0.6]{Images/LearningCurve.jpg} \caption{A graphical representation of a learning curve} \label{fig:LC} \end{figure} To test the learning ability of a model, one can create several training sets and evaluate their performance on a test set~\cite{Langley1988}. These training sets can differ in, e.g., volume. It is preferable that the data for these sets are randomly selected from the available data~\cite{Langley1988}. The model is trained multiple times, and after every training its performance is tested. The results of these tests can be plotted as a learning curve, which shows the evolution of the performance of the model. These curves can be clarifying, especially when the performance of multiple models is compared. Besides model selection, the performance of a single model can also be compared across different numbers of training examples~\cite{LC}. Such a learning curve tells how the model behaves when it is constructed with varying volumes of training data. \begin{figure}[ht!]
\centering \includegraphics[scale=0.45]{Images/SlidingWindowandLC.jpg} \caption{Example of our Sliding Window Process} \label{fig:SDLC} \end{figure} \section{Results} \subsection{Reliability of baselines} \label{sec:RS2} In Figure~\ref{fig:TotalModels}, we see how the error evolves as we train the model with more data points, i.e., days. This plot displays the learning curves obtained for each of the trained models, MLR, ANN, and RF. The number of training examples ranged from 10 up to 180 days, in steps of 10, and each model was tested on a period of 50 days. Each line represents the mean of 18 iterations: for each of the stores Aveiro, Fatima, and Macedo Cavaleiros, we performed six iterations following the method visualized in Figure~\ref{fig:SDLC}. In Figure~\ref{fig:TotalModels}, we observe that the MLR already yields a reliable baseline at 30 training days, with an MAE of 0.25. We also observe that, for the MLR, the MAE increases as the training set grows. The other two learning models behave differently: the performance of the RF stabilizes once the training set grows beyond 70 training examples (up to 180), while the ANN exhibits a continuous reduction in the MAE as more training examples, up to 180, are added to the training set. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{Images/Total_Models.PNG} \caption{Learning curves, based on an average for all stores and methods} \label{fig:TotalModels} \end{figure} The learning curves in Figure~\ref{fig:TotalModels} reveal that each of the learning models is affected differently by the change in training set size. The MLR outperforms the other two methods at producing a reliable baseline from the fewest training days. Furthermore, the performance of the MLR worsens when we increase the number of training examples, which can be explained by the nonstationary nature of the datasets.
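The sliding-window learning-curve procedure described above can be written compactly as below. This is a minimal sketch, not the authors' implementation: the window stride, the 50-day test period, and the use of a scikit-learn estimator factory are assumptions for illustration.

```python
# Sketch of the sliding-window learning curve: train on the first n days,
# test on the next test_days days, slide the window, average the MAE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

def learning_curve(X, y, sizes, test_days=50, model_factory=LinearRegression):
    """Return {training size: mean MAE over all window positions}."""
    scores = {}
    for n in sizes:
        maes = []
        # Slide the window as far as the data allows.
        for start in range(0, len(X) - n - test_days + 1, test_days):
            model = model_factory()
            model.fit(X[start:start + n], y[start:start + n])
            test = slice(start + n, start + n + test_days)
            maes.append(mean_absolute_error(y[test], model.predict(X[test])))
        scores[n] = float(np.mean(maes))
    return scores
```

Plotting `scores` against the training sizes (e.g., 10, 20, ..., 180 days) reproduces the kind of curve shown in Figure 4.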
This non-stationarity is a problem for the MLR, since it has difficulty with nonlinear relationships. Because the MLR works well with a small number of training examples, we assume that the dataset contains periods of local stationarity. One study~\cite{LOCALStationary} shows that nonstationary time series can appear stationary when examined close up. In such a local period, the statistical properties change slowly over time. As a consequence, the data that lies close to the forecast period is more likely to be predictive for it. For the ANN and RF, stationarity is less of a concern, since they are able to handle more complex, nonlinear relations. We see evidence for this in our results: the associated learning curves show a promising development over time. We believe that with more diverse data, the ANN could be able to predict a baseline with fewer training days than the MLR. Unfortunately, we were not able to investigate this further. As shown in Figure~\ref{fig:TotalModels}, we are able to create a reliable model with the MLR trained on 30 days. Therefore, we trained the MLR for each of the stores during the same period of the year, March 2016, and we estimated the energy consumption for the period of one year, from April 2016 until February 2017. Figure~\ref{fig:LC30} shows the evolution of the MAE throughout this period. We observe that during the first 30 days of predictions, the MAE remains quite low, under 0.5. Between days 50 and 180, however, the MAE is higher for all the stores. This period covers the months June, July, August, and September. Table~\ref{tab:AVGTEMP} shows that throughout these months, temperature levels reach higher values than in March, the period that was used for training the model, which explains why the MAE is higher. To avoid this problem, we could train a different model for each of the two energy profiles.
Because our dataset is limited, we were not able to test this in practice. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{Images/LifeCycle_30Days_New.PNG} \caption{MAE over time using MLR} \label{fig:LC30} \end{figure} We observe in Figure~\ref{fig:LC30} that in Aveiro the influence of seasonality is less evident than for the supermarkets in Fatima and Macedo Cavaleiros. Since all stores are trained and tested with the same model and in the same period of time, the most plausible explanation lies in the temperature-related variables. The average temperatures of the three stores follow a similar pattern, higher in the summer and lower in the winter. However, if we focus on the amplitudes of the average temperatures per month (Table~\ref{tab:AVGTEMP}), we observe that Aveiro registered the smallest amplitude, with a difference of $9\,^{\circ}\mathrm{C}$. The other stores, Fatima and Macedo Cavaleiros, registered amplitudes of $13\,^{\circ}\mathrm{C}$ and $18\,^{\circ}\mathrm{C}$, respectively. This seems to explain why the model trained for the store of Aveiro is less affected by seasonality. In Figure~\ref{fig:LC30}, we notice that after 220 days the accuracy of the model increases again. When we look at Table~\ref{tab:AVGTEMP}, we see that the temperature values from November onward are comparable to the ones in March. Nevertheless, the error is still higher than in the period of the first 30 days. We applied this method in different periods of time and observed similar behavior. In conclusion, we base our decision on the average prediction: Figure~\ref{fig:LC30} shows that the average prediction remains stable for up to 30 days, and therefore we recommend updating the model at least every 30 days.
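The drift analysis above (train once, then watch the error grow with the forecast horizon) amounts to averaging the absolute error per block of predicted days. A minimal sketch, with the 30-day block size as an assumption:

```python
# Sketch: mean absolute error per consecutive block of predicted days,
# used to decide when a trained baseline should be refreshed.
import numpy as np

def mae_over_time(y_true, y_pred, block=30):
    """Return one MAE value per full block of `block` predicted days."""
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    n_blocks = len(err) // block
    return [float(err[i * block:(i + 1) * block].mean()) for i in range(n_blocks)]
```

If the first block's MAE is low but later blocks degrade, the model should be retrained roughly every `block` days, which is the rule we arrive at above.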
\begin{table}[hpt] \caption{Average Temperature per Month and Store} \label{tab:AVGTEMP} \centering \small \setlength\tabcolsep{3.5pt} \begin{tabular}{lllllllllllll} \hline \hline Store & Jan & Feb & Mar & Apr & May & June & July & Aug & Sep & Oct & Nov & Dec \\ \hline Aveiro & 12 & 13 & 14 & 16 & 17 & \textbf{20} & \textbf{21} & \textbf{21} & 19 & 18 & 14 & 14 \\ Fatima & 9 & 11 & 11 & 14 & 15 & 19 & \textbf{22} & \textbf{22} & \textbf{20} & 17 & 12 & 10 \\ M. Cav. & 8 & 10 & 12 & 15 & 16 & \textbf{22} & \textbf{26} & \textbf{25} & \textbf{22} & 16 & 10 & 8 \\ \hline \hline \end{tabular} \end{table} \subsection{Estimated energy savings} Each store has a different number of observations, collected in different periods of time. We train the MLR, RF, and ANN with the first 180 and 360 days of data and test on the remaining days. We do this for the stores located in Aveiro, Fatima, and Macedo Cavaleiros; consequently, each store is trained in a different period, not within the same period. In Figure~\ref{fig:LC30}, we noticed that 30 training days were not enough to make accurate long-term predictions. Therefore, we decided to include more training days in our training set. Each of the plots in Figures~\ref{fig:AV180},~\ref{fig:FA180},~\ref{fig:MC180},~\ref{fig:AV360},~\ref{fig:FA360}, and~\ref{fig:MC360} shows how the prediction error evolves over time, per store, model, and number of training days. Each point shows the average error for 10 subsequent predictions. Figures~\ref{fig:AV180},~\ref{fig:FA180}, and~\ref{fig:MC180} show the evolution of the prediction error when the models are trained on the first 180 days of data. We observe that each store shows behavior similar to that in Figure~\ref{fig:LC30}. This is most evident when we compare the error of the MLR (red line) with the error in Figure~\ref{fig:LC30}.
Overall, the MAE is lower for the stores of Fatima and Macedo Cavaleiros when we use 180 days instead of 30. These results also show that the effect of the different consumption modes is still visible, but less pronounced. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{Images/AV_180.jpg} \caption{MAE over time using 180 training days, Aveiro} \label{fig:AV180} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{Images/FA_180.jpg} \caption{MAE over time using 180 training days, Fatima} \label{fig:FA180} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{Images/MC_180.jpg} \caption{MAE over time using 180 training days, Macedo Cavaleiros} \label{fig:MC180} \end{figure} We expect long-term predictions to become more accurate when we use 360 training days, because the model is then trained with data from all periods of the year, so a bigger variation of temperature values is included in the training set. Therefore, we trained the models, for all stores, on the first 360 days and studied the predictions on the remaining days. Figures~\ref{fig:AV360},~\ref{fig:FA360}, and~\ref{fig:MC360} show how the MAE evolves for this period. We observe that, for the corresponding period of time, the MAE is somewhat lower than for the models trained on 180 days. In contrast to Figure~\ref{fig:TotalModels}, the MLR now has the worst performance, while the RF and ANN perform similarly. The results of this experimental part support the general idea that training the models with more data improves the predictions.
\begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Images/AV_360.jpg} \caption{MAE over time using 360 training days, Aveiro} \label{fig:AV360} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Images/FA_360.jpg} \caption{MAE over time using 360 training days, Fatima} \label{fig:FA360} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Images/MC_360.jpg} \caption{MAE over time using 360 training days, Macedo Cavaleiros} \label{fig:MC360} \end{figure} When the algorithms are trained with 180 training days, the effect of the different energy consumption modes is still visible. When we use 360 training days, the predictions become more accurate. Therefore, we advise training the algorithms on 360 days to create long-term predictions. \section{Estimate Energy Savings} \label{sec:Energysavings} The Retailer wants to estimate, with reasonable accuracy, the energy savings resulting from its energy policies. Changes in energy policies, such as retrofitting equipment, require high investments. This makes it important for the Retailer to know whether the investments are truly effective in reducing energy consumption. If we use a baseline trained with data from before some measure is implemented, we can estimate the energy savings by comparing its estimates with the observed consumption. We selected two stores that have undergone a retrofitting of the equipment; for these stores exactly one year of data is available. Mangualde and Regua had, respectively, 170 and 200 training days available before the Retrofit. Because we have less than a year of data available, we decided to use the MLR trained on 30 days, which shows the best performance in Figure~\ref{fig:TotalModels}. Figures~\ref{fig:Mangualde} and~\ref{fig:Regua} show the observed consumption (orange lines) versus the baseline estimates (blue lines) for these two stores.
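The savings estimate described above (counterfactual baseline prediction minus observed consumption after the measure) can be sketched as follows; the function name and the toy numbers in the test are illustrative assumptions, not the stores' data.

```python
# Sketch: estimate energy savings as the gap between the pre-measure
# baseline's predictions and the actually observed consumption.
import numpy as np

def estimated_savings(baseline_pred, observed, retrofit_day):
    """Total and per-day savings after the retrofit (positive = energy saved)."""
    diff = np.asarray(baseline_pred)[retrofit_day:] - np.asarray(observed)[retrofit_day:]
    return float(diff.sum()), float(diff.mean())
```

A consistently positive per-day value after the retrofit, as in Figures 12 and 13, supports the conclusion that the measure was effective.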
We trained the MLR for both stores on 30 training days, between 50 and 20 days before the Retrofit, and we predicted for 50 days. This makes it easier to visualize how the baseline compares with the energy consumption before and after the Retrofit. The deviations between the baseline and the energy consumption can result from poor prediction performance or from energy savings/losses. Since we chose a setup that gives us a reliable baseline, we believe that the deviations are caused by energy savings. In both Figures~\ref{fig:Mangualde} and~\ref{fig:Regua}, we observe that, before the Retrofit, the baseline and the real energy consumption intertwine at several points. This behavior, which was also seen before, shows that the predictions are close to the real consumption. After the Retrofit, however, the observed consumption is always lower than the prediction, which offers strong evidence that the implemented measure was effective. Hence, if we assume that the baseline is accurate enough, we can estimate the energy savings as the difference between the predicted and observed energy consumption. \begin{figure}[hpt!] \centering \includegraphics[width=\textwidth]{Images/Mangualde_Retrofit.png} \caption{Example of the predicted and observed Energy Consumption, Mangualde} \label{fig:Mangualde} \end{figure} \begin{figure}[hpt!] \centering \includegraphics[width=\textwidth]{Images/Regua_Retrofit.png} \caption{Example of the predicted and observed Energy Consumption, Regua} \label{fig:Regua} \end{figure} \section{Conclusions} Energy efficiency measures can require high investments. This makes it important for the Retailer to know whether the investments are truly effective in reducing energy consumption. Energy baselines can be used to study the effectiveness of energy efficiency measures, and the results can simplify decisions about reserving funding for similar investments in other stores.
In this study, we investigated whether off-the-shelf data science technologies can be used to create energy baselines that support improved energy management. We also performed exploratory analysis to better understand the data. Our first goal was to determine the minimum number of training days needed to create a reliable baseline, and which model performs best. For that, we studied the prediction accuracy of three machine learning models, ANN, RF, and MLR, on several datasets. For the experiments, we proposed a sliding window approach in which we systematically expanded the size of the training set with historical data. Our experiments show that the MLR has a clear advantage over the other two methods for creating a baseline from a minimum number of days: it needs 30 training days to estimate a reliable baseline. The second goal was to determine how often the algorithm needs to be updated when an MLR is trained on 30 training days. We trained our algorithm multiple times, on all stores and in different time periods. Our analysis shows that the MAE stays low for a period of 30 days, after which it increases dramatically. Moreover, we observed that the energy consumption follows a different profile when average temperatures are above $20\,^{\circ}\mathrm{C}$. These findings are in line with the insights derived from Subgroup Discovery. Our analysis also shows that the amplitude of the average temperature affects the prediction performance. Hence, we advise updating the model at least every 30 days. Our third goal was to determine whether we can estimate energy savings after implementing an energy efficiency measure. To answer this question, we trained our models with 180 and 360 training days and predicted for the remaining days. Our findings show that the predictions become most accurate when trained with 360 training days, because a bigger variation of temperature values is then included in the training set.
This supports the general idea that training the models with more data improves the predictions. With a baseline trained on 360 training days, the Retailer is able to estimate, with reasonable accuracy, the energy savings resulting from its energy policies. Moreover, the Retailer can compare the energy savings to the investment made for the measure, which has obvious advantages. In summary, the results of this study show that we have been able to create reliable energy baselines using off-the-shelf data science technologies. Moreover, we found a way to create them based on short-term historical data. \section*{Acknowledgments} This work is financed by the ERDF – European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT – Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project 3GEnergy (AE2016-0286). \section*{References} \bibliographystyle{abbrv}
\section{Introduction} \label{Loss} Standard classification tasks focus on building a classifier which predicts well on future examples; the overall goal is to minimize the number of mis-classifications. However, when the cost of mis-classification is very high, a generic classifier may still incur very high risk. In such cases it makes more sense not to classify high-risk examples. This choice given to the classifier is called the reject option, and classifiers which can also reject examples are called reject option classifiers. Rejection also has a cost, but it is much smaller than the cost of mis-classification. For example, making a poor decision based on diagnostic reports can cost a huge amount of money in further treatments, or it can cost a life \cite{Rocha2011}. If the reports are ambiguous, or rare symptoms are seen which are unexplainable without further investigation, then the physician might choose not to risk misdiagnosing the patient. In this case, he might instead choose to perform further medical tests or to refer the case to an appropriate specialist. Reject option classifiers may also be useful in financial services \cite{Rosowsky2013}. Consider a banker looking at the loan application of a customer. He may choose not to decide on the basis of the information available, and ask for a credit bureau score or further recommendations from the stakeholders. Reject option classifiers have been used in a wide range of applications, from healthcare \cite{btn349,Rocha2011} to text categorization \cite{1234113} to crowdsourcing \cite{Qunwei2017}. A reject option classifier can be viewed as a combination of a classifier and a rejection function. The rejection region impacts the proportion of examples that are likely to be rejected, as well as the proportion of predicted examples that are likely to be correctly classified.
An optimal reject option classifier is one which minimizes the rejection rate as well as the mis-classification rate on the predicted examples. Let ${\cal X} \subseteq \mathbb{R}^d$ be the feature space and ${\cal Y}$ be the label space. For binary classification, we use ${\cal Y} = \{+1,-1\}$. Examples $(\mathbf{x},y)$ are generated from an unknown joint distribution ${\cal D}$ on the product space ${\cal X} \times {\cal Y}$. A typical {\em reject option classifier} is defined using a decision surface ($f(\mathbf{x})=0$) and a bandwidth parameter $\rho$ (which determines the rejection region) as follows: \begin{equation} \label{eq:rej-op-classifier} h_{\rho}(f(\mathbf{x})) = 1\cdot\mathbb{I}_{\{f(\mathbf{x}) > \rho\}} + 0\cdot\mathbb{I}_{\{|f(\mathbf{x})| \leq \rho\}} - 1\cdot\mathbb{I}_{\{f(\mathbf{x}) < -\rho\}} \end{equation} A reject option classifier can thus be viewed as two parallel surfaces, with the area between them as the rejection region. The goal is to determine both $f$ and $\rho$ simultaneously. The performance of a reject option classifier is measured using the $L_{d}$ loss function defined as: \begin{equation} L_{d}(yf(\mathbf{x}), \rho) = 1\cdot\mathbb{I}_{\{yf(\mathbf{x}) < -\rho\}} + d\cdot\mathbb{I}_{\{|f(\mathbf{x})| \leq \rho\}} \end{equation} where $d$ is the cost of rejection. If $d=0$, then the classifier will always reject. If $d \geq 0.5$, it will never reject, since the expected cost of random labeling is $0.5$. Thus, $d$ is chosen in the range $(0,0.5)$. $h_\rho(f(\mathbf{x}))$ (described in eq.~(\ref{eq:rej-op-classifier})) has been shown to be infinite sample consistent with respect to the generalized Bayes classifier \cite{Yuan:2010}. A reject option classifier is learnt by minimizing the risk, which is the expectation of $L_{d}$ with respect to the joint distribution ${\cal D}$.
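The classifier $h_\rho$ and the loss $L_d$ defined above translate directly into code. A minimal sketch (note that $|yf(\mathbf{x})| = |f(\mathbf{x})|$ for $y \in \{+1,-1\}$, which the loss below uses):

```python
# Sketch of the reject option classifier h_rho and the L_d loss:
# predict +1 above rho, -1 below -rho, reject (0) in between;
# rejection costs d, misclassification costs 1, a correct confident
# prediction costs 0.
def h_rho(f_x, rho):
    if f_x > rho:
        return 1
    if f_x < -rho:
        return -1
    return 0  # reject

def L_d(y_f_x, rho, d):
    if abs(y_f_x) <= rho:  # rejected (|yf(x)| = |f(x)| since y is +-1)
        return d
    if y_f_x < -rho:       # confidently wrong
        return 1.0
    return 0.0             # confidently right
```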
The risk under $L_{d}$ is minimized by the {\em generalized Bayes discriminant} $f_d^*(\mathbf{x})$ \cite{Chow:2006}, which is \begin{equation}\label{gbd} f_d^*(\mathbf{x}) = 1\cdot\mathbb{I}_{\{\eta(\mathbf{x}) > 1-d\}} + 0\cdot\mathbb{I}_{\{d\leq \eta(\mathbf{x}) \leq 1-d\}} - 1\cdot\mathbb{I}_{\{\eta(\mathbf{x}) < d\}} \end{equation} where $\eta(\mathbf{x})=P(y=1|\mathbf{x})$. In general, however, we do not know ${\cal D}$; instead, we have access to a finite set of examples drawn from ${\cal D}$, called the training set, and we find the reject option classifier by minimizing the empirical risk. Minimizing the empirical risk under $L_{d}$ is computationally hard. To overcome this problem, convex surrogates of $L_{d}$ have been proposed. A generalized-hinge-based convex loss has been proposed for the reject option classifier \cite{Bartlett:2008}, together with an algorithm for minimizing the $l_2$-regularized risk under this loss. Wegkamp et al.~\cite{wegkamp2011} propose a sparse reject option approach by minimizing the $l_1$-regularized risk under the generalized hinge loss. In both these approaches \cite{Bartlett:2008,wegkamp2011}, first a classifier is learnt by risk minimization under the generalized hinge loss, and then a rejection threshold is learnt. Ideally, the classifier and the rejection threshold should be found simultaneously, so this two-step approach might not give the optimal parameters. Also, only very limited experimental results are provided to show the effectiveness of the proposed approaches \shortcite{wegkamp2011}. A cost-sensitive convex surrogate for $L_{d}$, called the double hinge loss, has been proposed in \cite{Grandvalet2008}. The double hinge loss remains an upper bound on $L_{d}$ provided $\rho \in \bigg( \frac{1-H(d)}{1-d},\frac{H(d)-d}{d}\bigg)$, which is a very strict condition. The approaches proposed so far learn a threshold for rejection along with the classifier. However, in general, the rejection region may not be symmetrically located near the classification boundary.
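The optimality of the generalized Bayes discriminant is easy to verify pointwise: given $\eta(\mathbf{x})$, the conditional $L_d$-risk of predicting $+1$ is $1-\eta$, of predicting $-1$ is $\eta$, and of rejecting is $d$, and $f_d^*$ simply picks the cheapest of the three actions. A sketch of this reasoning:

```python
# Sketch: the generalized Bayes discriminant f_d* and the conditional
# L_d-risk of each action, showing f_d* always picks the cheapest action.
def bayes_discriminant(eta, d):
    """eta = P(y=1|x); returns +1, -1, or 0 (reject)."""
    if eta > 1 - d:
        return 1
    if eta < d:
        return -1
    return 0  # reject

def conditional_risk(action, eta, d):
    """Expected L_d loss of taking `action` given eta = P(y=1|x)."""
    return {1: 1 - eta, -1: eta, 0: d}[action]
```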
A generic convex approach has been proposed which simultaneously learns the classifier as well as the rejection function \cite{Cortes2016}. The main challenge with the convex surrogates is that, in contrast to the $L_{d}$ loss, they are not constant even in the reject region. Sousa and Cardoso \cite{Sousa:2013} model reject option classification as an ordinal regression problem. It is not clear whether treating rejection as a separate class leads to a good approximation, simply because the training data does not contain rejection as a class label. Moreover, the classification consistency of this approach is not known in the reject option context. A non-convex formulation for learning a reject option classifier using the logistic function is proposed in Fumera and Roli \shortcite{Fumera2002}. However, theoretical guarantees for the approach are not known, and only a very limited set of experiments is provided in its support. A bounded non-convex surrogate called the {\em double ramp loss} $L_{dr}$ is proposed in Manwani \textit{et al.} \shortcite{Manwani15}, together with a regularized risk minimization algorithm using $l_2$ regularization. The approach was shown to have interesting geometric properties and robustness to label noise. However, the statistical properties of $L_{dr}$ (Fisher consistency, generalization error, etc.) have not been studied so far. Also, the $l_2$-regularization-based approach does not learn sparse classifiers. \subsection{Our Contributions} In this paper, we propose a sparse reject option classifier learning algorithm using the double ramp loss. By sparseness, we mean that the number of support vectors needed to express the classifier is small. Our contributions in this work are as follows. \begin{itemize} \item We propose a difference of convex (DC) programming \cite{ThiHoaiAn:1997} based algorithm to learn a sparse reject option classifier. The final algorithm turns out to be solving successive linear programs.
\item We also establish statistical properties of the double ramp loss function. We show that the double ramp loss is Fisher consistent, which means that the generalized Bayes classifier minimizes the population risk under $L_{dr}$. We also show that the excess risk under the loss $L_{dr}$ upper bounds the excess risk under the loss $L_d$. \item We derive generalization error bounds for the proposed approach. \item We show experimentally that the proposed approach performs comparably to other state-of-the-art approaches for reject option classification, while learning sparser classifiers than all of them. We also show experimentally that the proposed approach is robust against label noise. \end{itemize} The rest of the paper is organized as follows. We discuss the proposed method and algorithm in section~\ref{sec:approach}. In section~\ref{sec:analysis}, we provide the theoretical results for $L_{dr}$. The experiments are given in section~\ref{sec:exp}. We conclude the paper with some remarks in section~\ref{sec:conclusions}. \section{Proposed Approach}\label{sec:approach} We propose a new algorithm for learning a reject option classifier which minimizes the $l_1$-regularized risk under the double ramp loss function $L_{dr}$ \cite{Manwani15}. $L_{dr}$ is a non-convex surrogate of $L_{d}$, defined as follows. \begin{equation} \begin{aligned} L_{dr}&(t,\rho) = \frac{d}{\mu}\Big{[}\big{[}\mu-t +\rho\big{]}_+ - \big{[}-\mu^2-t+\rho\big{]}_+\Big{]} \\ &+\frac{(1-d)}{\mu}\Big{[}\big{[}\mu -t-\rho\big{]}_+ - \big{[}-\mu^2-t-\rho\big{]}_+\Big{]} \end{aligned} \end{equation} where $\mu$ is the slope of the loss in the linear region, $[a]_+=\max(0,a)$ and $t=yf(\mathbf{x})$. Note that $L_{dr}$ depends on the specific choice of $\mu$. Also, for a valid reject region, we want $\rho \geq \frac{1}{2}\mu(1+\mu)$. Figure~\ref{DRLoss} shows the plot of $L_{dr}$ for different values of $\mu$.
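A direct implementation of $L_{dr}$ makes it easy to check numerically that it upper bounds $L_d$ for a valid $\rho$. This sketch is ours, not the authors' code; the parameter values in the check are illustrative.

```python
# Sketch of the double ramp loss L_dr(t, rho) with t = y f(x); it is a
# difference of four hinge ("plus") terms and upper-bounds L_d.
def relu(a):
    return max(0.0, a)

def L_dr(t, rho, d, mu):
    return (d / mu) * (relu(mu - t + rho) - relu(-mu**2 - t + rho)) \
         + ((1 - d) / mu) * (relu(mu - t - rho) - relu(-mu**2 - t - rho))
```

For $t \le -(\rho+\mu)$ the loss plateaus at $1+\mu$, for $t \ge \rho+\mu$ it is $0$, and at $t=0$ (rejection) it equals $d(1+\mu)$ once $\rho \ge \mu$, which is why it dominates $L_d$.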
\begin{figure} \begin{center} \includegraphics[scale=0.45]{figure_with_modified_text.eps} \caption{$L_{d}$ vs. double ramp loss $L_{dr}$ ($d=0.2$, $\rho = 2$).} \label{DRLoss} \end{center} \end{figure} \subsection{Sparse Double Ramp SVM (SDR-SVM)} Let $S=\{(\mathbf{x}_1,y_1),\ldots,(\mathbf{x}_N,y_N)\}$ be the training set, where $(\mathbf{x}_i,y_i) \in \mathcal{X}\times \{+1,-1\},\;i=1\ldots N$. Let the reject option classifier be of the form $f(\mathbf{x})=h(\mathbf{x}) + b$. Let $\mathcal{K}:\mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}_+$ be a Mercer kernel (continuous, symmetric and positive semi-definite), used to produce nonlinear classifiers. Let ${\cal H}_{\mathcal{K}}$ be the reproducing kernel Hilbert space (RKHS) induced by the Mercer kernel $\mathcal{K}$ with the norm $\|.\|_{\mathcal{K}}$ \cite{aronszajn1950theory}. To learn a sparse reject option classifier, we use an $l_1$ regularization term. Thus, we find the classifier by solving the following optimization problem. $$\min_{h\in {\cal H}_{\mathcal{K}}^+,b,\rho}\;\;\; \lambda \|h\|_1 + \sum_{i=1}^N L_{dr}(y_if(\mathbf{x}_i),\rho)$$ However, the optimal $h$ lies in a finite dimensional subspace ${\cal H}_{\mathcal{K},S}^+$ of ${\cal H}_{\mathcal{K}}$ \cite{Scholkopf:2001}, where ${\cal H}_{\mathcal{K},S}^+ = \left\{\sum_{i=1}^Ny_i\alpha_i\mathcal{K}(\mathbf{x}_i,.)\;|\;[\alpha_1,\ldots,\alpha_N] \in \mathbb{R}^N_+\right\}$. Given $h \in {\cal H}_{\mathcal{K},S}^+$ with $h(\mathbf{x}) = \sum_{i=1}^N y_i\alpha_i\mathcal{K}(\mathbf{x}_i,\mathbf{x})$, the $l_1$ regularization term is defined as $\Omega(h) = \sum_{i=1}^N \alpha_i$ \cite{817991,bradley2000massive,Wu:2005}. Thus, the sparse double ramp SVM can be learnt by minimizing the following $l_1$-regularized risk: \begin{align} \label{eq-sdr-svm} J(\Theta)=\lambda \sum_{i=1}^N \alpha_i + \frac{1}{N}\sum_{i=1}^N L_{dr}(y_i f(\mathbf{x}_i),\rho) \end{align} where $f(\mathbf{x}_i) = \sum_{j=1}^N y_j\alpha_j\mathcal{K}(\mathbf{x}_i,\mathbf{x}_j) + b$.
$\Theta=(\mbox{\boldmath $\alpha$},b,\rho)$. We see that $J$ is a non-convex function. However, $J$ can be decomposed as a difference of two convex functions $Q_1$ and $Q_2$, as $J(\Theta)= Q_1(\Theta)-Q_2(\Theta)$, where \begin{align*} Q_1(\Theta) & = \lambda\sum_{i=1}^N \alpha_i + \frac{1}{N\mu}\sum_{i=1}^N \bigg[ d\big[\mu-y_if(\mathbf{x}_i)+\rho\big]_+ \\ & + (1-d)\big[\mu-y_if(\mathbf{x}_i)-\rho\big]_+ \bigg]\\ Q_2(\Theta) & = \frac{1}{N\mu}\sum_{i=1}^N \bigg[ d\big[-\mu^2-y_if(\mathbf{x}_i)+\rho\big]_+ \\ & +(1-d)\big[-\mu^2-y_if(\mathbf{x}_i)-\rho\big]_+\bigg] \end{align*} To minimize such a function, which can be expressed as a difference of two convex functions, we can use difference of convex (DC) programming. In this case, DC programming is guaranteed to find a local optimum of the objective function \cite{ThiHoaiAn:1997}. The simplified DC algorithm uses the convexity of $Q_2(\Theta)$ to find an upper bound on $J(\Theta)$ as $J(\Theta) \leq B(\Theta, \Theta^{(l)})$, where $$B(\Theta, \Theta^{(l)}):= Q_1(\Theta) - Q_2(\Theta^{(l)}) - (\Theta - \Theta^{(l)})^T \nabla Q_2(\Theta^{(l)})$$ Here $\Theta^{(l)}$ is the parameter vector after the $l^{th}$ iteration and $\nabla Q_2(\Theta^{(l)})$ is a sub-gradient of $Q_2$ at $\Theta^{(l)}$. $\Theta^{(l+1)}$ is found by minimizing $B(\Theta,\Theta^{(l)})$. Thus, $$J(\Theta^{(l+1)}) \leq B(\Theta^{(l+1)}, \Theta^{(l)}) \leq B(\Theta^{(l)}, \Theta^{(l)})= J(\Theta^{(l)})$$ so the DC program reduces the value of $J(\Theta)$ in every iteration. We will now derive a DC algorithm for minimizing $J(\Theta)$. Given $\Theta^{(l)}$, we find $\Theta^{(l+1)} \in {\arg\min}_{\Theta} \;B(\Theta, \Theta^{(l)}) = {\arg\min}_{\Theta} \;Q_1(\Theta) - \Theta^T \nabla Q_2(\Theta^{(l)}) $.
We use $\nabla Q_2(\Theta^{(l)})$ as: \begin{align*} \nabla Q_2(\Theta^{(l)})&= -\sum_{i=1}^N \begin{pmatrix} \frac{d\beta_i'^{(l)}+(1-d)\beta_i''^{(l)}}{\mu N}y_1y_i\mathcal{K}(\mathbf{x}_1,\mathbf{x}_i)\\ \vdots\\ \frac{d\beta_i'^{(l)}+(1-d)\beta_i''^{(l)}}{\mu N}y_Ny_i\mathcal{K}(\mathbf{x}_N,\mathbf{x}_i)\\ \frac{d\beta_i'^{(l)}+(1-d)\beta_i''^{(l)}}{\mu N}y_i\\ -\frac{d\beta_i'^{(l)}-(1-d)\beta_i''^{(l)}}{\mu N} \end{pmatrix} \end{align*} where \begin{align*} \beta_i'^{(l)}&=\mathbb{I}_{\{y_if^{(l)}(\mathbf{x}_i)\leq \rho^{(l)} - \mu^2\}};\;i=1\ldots N\\ \beta_i''^{(l)}&=\mathbb{I}_{\{y_if^{(l)}(\mathbf{x}_i)\leq -\rho^{(l)} - \mu^2\}};\;i=1\ldots N \end{align*} Note that $f^{(l)}(\mathbf{x}) = \sum_{i=1}^N\alpha_i^{(l)}y_i\mathcal{K}(\mathbf{x}_i,\mathbf{x}) + b^{(l)}$. The new parameters $\Theta^{(l+1)}$ are found by minimizing $B(\Theta,\Theta^{(l)})$ subject to $\rho \geq \frac{1}{2}\mu(1+\mu)$, which yields the following linear program. \begin{align*} &\min_{\mbox{\boldmath $\alpha$},b,\rho,\mbox{\boldmath $\xi$}',\mbox{\boldmath $\xi$}''}\lambda \sum_{i=1}^N\alpha_i +\frac{1}{N\mu}\sum_{i=1}^N \big( d\xi_i' + (1-d) \xi_i''\big) \\ &\;\;\;\;\;\;\;+\frac{d}{N\mu}\sum_{i=1}^N\beta_i'^{(l)}\big[ y_i\big(\sum_{j=1}^N\alpha_j y_j\mathcal{K}(\mathbf{x}_j,\mathbf{x}_i) + b\big)-\rho\big]\\ &\;\;\;\;\;\;\;+\frac{1-d}{N\mu}\sum_{i=1}^N\beta_i''^{(l)}\big[ y_i\big(\sum_{j=1}^N\alpha_j y_j\mathcal{K}(\mathbf{x}_j,\mathbf{x}_i) + b\big)+\rho\big]\\ &s.t.\begin{cases} y_i\big(\sum_{j=1}^N\alpha_j y_j\mathcal{K}(\mathbf{x}_j,\mathbf{x}_i) + b\big)\geq \rho+\mu-\xi_i'\; \forall i\\ y_i\big(\sum_{j=1}^N\alpha_j y_j\mathcal{K}(\mathbf{x}_j,\mathbf{x}_i) + b\big)\geq -\rho+\mu-\xi_i'' \; \forall i\\ \alpha_i,\xi_i',\xi_i'' \geq 0\;\; \forall i\;\;\;\;\rho \geq \frac{1}{2}\mu(1+\mu) \end{cases} \end{align*} Thus, $B(\Theta,\Theta^{(l)})$ can be minimized by solving a linear program, and the algorithm solves a sequence of linear programs to learn a sparse reject option classifier.
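Per example, the DC split used above simply distributes the four hinge terms of $L_{dr}$: the two positive hinges go to $Q_1$ and the two subtracted ones to $Q_2$ (the linear regularizer joins $Q_1$). A sketch of the pointwise pieces, with a numerical check that their difference recovers $L_{dr}$:

```python
# Sketch: per-example convex pieces of the DC decomposition of L_dr.
# q1 - q2 equals L_dr(t, rho) for t = y f(x); both q1 and q2 are convex
# in t (and rho), being nonnegative combinations of hinges of affine maps.
def relu(a):
    return max(0.0, a)

def q1(t, rho, d, mu):
    return (d / mu) * relu(mu - t + rho) + ((1 - d) / mu) * relu(mu - t - rho)

def q2(t, rho, d, mu):
    return (d / mu) * relu(-mu**2 - t + rho) + ((1 - d) / mu) * relu(-mu**2 - t - rho)
```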
The complete approach is described in Algorithm~\ref{algo1}. The convergence guarantee of this algorithm follows from the convergence of the DC algorithm given in \cite{ThiHoaiAn:1997}. The final learnt classifier is represented by $f(\mathbf{x})=h(\mathbf{x})+b$ together with the threshold $\rho$. \begin{algorithm}[t] \caption{Sparse Double Ramp SVM (SDR-SVM)} \label{algo1} \begin{algorithmic} \STATE {\bf Input: }$S=\{(\mathbf{x}_1,y_1),\ldots,(\mathbf{x}_N,y_N)\},\;\epsilon>0,\;d\in (0,0.5),\;\mu\in (0,1],\;\lambda>0$\; \STATE {\bf Output: }$\mbox{\boldmath $\alpha$}^*,b^*,\rho^*$\; \STATE {\bf Initialize: }$l=0$, $\mbox{\boldmath $\alpha$}^{(0)},b^{(0)},\rho^{(0)}$\; \WHILE{($J(\Theta^{(l)})-J(\Theta^{(l+1)})>\epsilon$)} \FOR{$i = 1$ to $N$} \STATE $\beta_i'^{(l)}=\mathbb{I}_{\{y_if^{(l)}(\mathbf{x}_i)\leq \rho^{(l)} - \mu^2\}}$\; \STATE $\beta_i''^{(l)}=\mathbb{I}_{\{y_if^{(l)}(\mathbf{x}_i)\leq -\rho^{(l)} - \mu^2\}}$ \ENDFOR \STATE $\mbox{\boldmath $\alpha$}^{(l+1)},b^{(l+1)},\rho^{(l+1)} = {\arg\min}_{\Theta}\;B(\Theta,\Theta^{(l)})$ \ENDWHILE \end{algorithmic} \end{algorithm} \section{Analysis} \label{sec:analysis} In this paper, we propose an algorithm based on $L_{dr}$. We first need to ensure that the population risk under $L_{dr}$ is minimized by the generalized Bayes classifier $f_d^*$ (defined in eq.(\ref{gbd})). This property is called Fisher consistency or classification calibration. \begin{theorem}{\bf Fisher Consistency of $L_{dr}$} The generalized Bayes discriminant function $f_{d}^{*}(\mathbf{x})$ (described in eq.~(\ref{gbd})) minimizes the risk \begin{equation*} \mathcal{R}_{dr}(f,\rho)=\mathbb{E}\big[ L_{dr}(yf(\mathbf{x}),\rho)\big] \end{equation*} over all measurable functions $ f $. \end{theorem} The proof of this theorem is provided in Appendix~A. Fisher consistency is the minimal requirement on a loss function for approximating the optimal classifier.
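The content of the theorem can be previewed pointwise: rejecting costs $d$, while predicting a label costs $\min(\eta, 1-\eta)$, so rejection is optimal exactly on the band $d \leq \eta \leq 1-d$. Below is a minimal pure-Python check, assuming the standard Chow-rule form of $f_d^*$ (whose exact statement in eq.(\ref{gbd}) appears earlier in the paper).

```python
# Conditional risk of each action under the 0-d-1 loss L_d, and the
# generalized Bayes rule that thresholds eta = P(y = 1 | x) at d and 1 - d.

def conditional_risk(action, eta, d):
    if action == 1:
        return 1.0 - eta       # probability of a wrong "+1" prediction
    if action == -1:
        return eta             # probability of a wrong "-1" prediction
    return d                   # flat cost of rejection

def bayes_action(eta, d):
    if eta < d:
        return -1
    if eta > 1.0 - d:
        return 1
    return 'reject'
```

Sweeping $\eta$ over a grid confirms that no action beats the rule's choice (ties occur at $\eta = d$ and $\eta = 1-d$).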
\subsection{Excess Risk Bound} We now derive a bound on the excess risk $(\mathcal{R}_d(f,\rho) - \mathcal{R}_d(f_d^*,\rho_d^*))$ in terms of the excess risk under $L_{dr}$, where $\mathcal{R}_{d}(f, \rho) = \mathbb{E}[L_{d}(yf(\mathbf{x}), \rho)]$. We know that $L_{d}(f(\mathbf{x}), \rho) \leq L_{dr}(f(\mathbf{x}), \rho),\;\forall \mathbf{x} \in \mathcal{X},\;\forall f$. Taking expectations on both sides preserves this relation, i.e., $\mathcal{R}_{d}(f, \rho) \leq \mathcal{R}_{dr}(f,\rho)$. An analogous relation holds for the excess risk. To show this, let $\eta(\mathbf{x}) = P(y = 1 | \mathbf{x})$ and $z = f(\mathbf{x})$, and define the following quantities. \begin{equation*} \begin{aligned} \xi(\eta) &:= \eta \mathbb{I}_{\{\eta<d\}}+d\mathbb{I}_{\{d\leq {\eta}\leq 1-d\}}+(1-\eta)\mathbb{I}_{\{\eta>1-d\}} \\ H(\eta) &:= \inf_{z, \rho}\;\;{\eta}L_{dr}(z,\rho) + (1-\eta)L_{dr}(-z,\rho)\\ &= \eta( 1+\mu )\mathbb{I}_{\{\eta<d\}} + d( 1+\mu )\mathbb{I}_{\{d\leq {\eta}\leq 1-d\}}\\ &+ (1-\eta)(1+\mu)\mathbb{I}_{\{\eta>1-d\}} \end{aligned} \end{equation*} We know that $\mathcal{R}_{d}^{*}=\mathbb{E} [\xi(\eta)]$ and $\mathcal{R}_{dr}^{*}=\mathbb{E}[H(\eta)]$. Furthermore, we define \begin{equation*} \begin{aligned} {\xi}_{-1}(\eta) &:= \eta - {\xi}(\eta) \\ {\xi}_{r}(\eta) &:= d - {\xi}(\eta) \\ {\xi}_{1}(\eta) &:= (1-\eta) - {\xi}(\eta) \\ H_{-1}(\eta) &:= \inf_{z<-\rho}\;\;{\eta}L_{dr}(z,\rho) + (1-\eta)L_{dr}(-z,\rho) \\ H_{r}(\eta) &:= \inf_{|z| \leq \rho}\;\;{\eta}L_{dr}(z,\rho) + (1-\eta)L_{dr}(-z,\rho) \\ H_{1}(\eta) &:= \inf_{z>\rho}\;\;{\eta}L_{dr}(z,\rho) + (1-\eta)L_{dr}(-z,\rho) \end{aligned} \end{equation*} We observe the following relationship.
\begin{proposition} \begin{equation*} \begin{aligned} {\xi}_{-1}(\eta) &\leq {H_{-1}}(\eta) - H(\eta)\\ {\xi}_{r}(\eta) &\leq {H_{r}}(\eta) - H(\eta) \\ {\xi}_{1}(\eta) &\leq {H_{1}}(\eta) - H(\eta) \\ \end{aligned} \end{equation*} \end{proposition} The proof of Proposition~2 is given in Appendix~B. Now, using the above proposition, we prove that the excess risk under $L_{d}$ is bounded by the excess risk under $L_{dr}$. \begin{theorem} \label{thm-excess-risk} For any measurable function $f:\mathcal{X}\rightarrow\mathbb{R}$, \begin{equation*} \mathcal{R}_{d}(f, \rho)-\mathcal{R}_{d}(f_d^*, \rho_d^*) \leq \mathcal{R}_{dr}(f,\rho)-\mathcal{R}_{dr}(f_d^*,\rho_d^*) \end{equation*} \end{theorem} \begin{proof} We know that $$\mathcal{R}_{d}(f, \rho)=\mathbb{E}[{\eta}{\mathbb{I}_{\{f<-\rho\}}} + {d}{\mathbb{I}_{\{-\rho\leq f\leq \rho\}}} + (1-\eta){\mathbb{I}_{\{f>\rho\}}}]$$ and $\mathcal{R}_{dr}(f,\rho)=\mathbb{E}[r_{\eta}(f)]$ where $r_{\eta}(f(\mathbf{x}))= \mathbb{E}_{y|\mathbf{x}}[L_{dr}(yf(\mathbf{x}),\rho)] = \eta L_{dr}(f(\mathbf{x}), \rho) + (1 - \eta)L_{dr}(-f(\mathbf{x}), \rho)$.
Therefore, \begin{equation*} \begin{aligned} &\mathcal{R}_{d}(f, \rho)-\mathcal{R}_{d}(f_d^*, \rho_d^*)\\ &= \mathbb{E}\big[{\eta}{\mathbb{I}_{\{f<-\rho\}}} + {d}{\mathbb{I}_{\{|f|\leq \rho\}}} + (1-\eta){\mathbb{I}_{\{f>\rho\}}}\big] -\mathbb{E}\big[\xi(\eta)\big] \\ &= \mathbb{E}\big[{\xi_{-1}(\eta)}{\mathbb{I}_{\{f<-\rho\}}} + {\xi_{r}(\eta)}{\mathbb{I}_{\{-\rho\leq f\leq \rho\}}} + {\xi_{1}(\eta)}{\mathbb{I}_{\{f>\rho\}}}\big] \end{aligned} \end{equation*} Using Proposition~2, we get \begin{equation*} \begin{aligned} &\mathcal{R}_{d}(f, \rho)-\mathcal{R}_{d}(f_d^*, \rho_d^*) \;\leq \; \mathbb{E}\big[(H_{-1}(\eta) - H(\eta)){\mathbb{I}_{\{f<-\rho\}}} \\ & + (H_{r}(\eta) - H(\eta)){\mathbb{I}_{\{-\rho\leq f\leq \rho\}}} + (H_{1}(\eta) - H(\eta)){\mathbb{I}_{\{f>\rho\}}}\big] \\ & \leq \;\mathbb{E}\big[H_{-1}(\eta){\mathbb{I}_{\{f<-\rho\}}} + H_{r}(\eta){\mathbb{I}_{\{-\rho\leq f\leq \rho\}}}\\ &\;\;\;\;+ H_{1}(\eta){\mathbb{I}_{\{f>\rho\}}} - H(\eta)\big] \\ & \leq \;\mathbb{E}[r_{\eta}(f) - H(\eta)] \leq \; \mathcal{R}_{dr}(f,\rho) - \mathcal{R}_{dr}(f_d^*,\rho_d^*) \end{aligned} \end{equation*} \end{proof} Hence, the excess risk under $L_d$ is upper bounded by the excess risk under $L_{dr}$. From Theorem~\ref{thm-excess-risk}, we need to bound $\mathcal{R}_{dr}(f, \rho) - \mathcal{R}_{dr}(f_d^*, \rho_d^*)$ in order to bound $\mathcal{R}_{d}(f, \rho) - \mathcal{R}_{d}(f_d^*, \rho_d^*)$. We thus need an error decomposition for $\mathcal{R}_{dr}(f, \rho) - \mathcal{R}_{dr}(f_d^*, \rho_d^*)$. \subsection{Error Decomposition of $\mathcal{R}_{dr}(f, \rho) - \mathcal{R}_{dr}(f_d^*, \rho_d^*)$} The decomposition for RKHS based regularization schemes is well established \cite{Cucker:2007:LTA:1214096}. To understand the details, consider the $l_2$ regularized empirical risk minimization with $L_{dr}$.
For $S=\{(\mathbf{x}_1,y_1),\ldots,(\mathbf{x}_N,y_N)\}$ and $\lambda_2 >0$, let $f_{\lambda_{2}, S}^*=h_{\lambda_{2}, S}^*+ b_{\lambda_{2}, S}^*$ where \begin{align} \label{l2-emp-risk-min} (h_{\lambda_{2}, S}^*, b_{\lambda_{2}, S}^*,\rho_{\lambda_{2}, S}^*) &= \arg\min_{h\in {\cal H}_{\mathcal{K}}, b, \rho} \; \frac{\lambda_{2}}{2} \| h \|^2_{\mathcal{K}} + \hat{\cal R}_{dr}(f,\rho) \end{align} Note that $\hat{\mathcal{R}}_{dr}$ denotes the empirical risk under the double ramp loss. In this case, we observe the following decomposition. \begin{align} \label{error-decomp-l2} \nonumber &\mathcal{R}_{dr}(f_{\lambda_{2}, S}^*,\rho_{\lambda_{2}, S}^*)-\mathcal{R}_{dr}(f_{d}^*, \rho_d^*)\leq \mathcal{A}(\lambda_2) + \mathcal{R}_{dr}(f_{\lambda_{2}, S}^*,\rho_{\lambda_{2}, S}^*) \\ &-\hat{\mathcal{R}}_{dr}(f_{\lambda_{2}, S}^*,\rho_{\lambda_{2}, S}^*)+\hat{\mathcal{R}}_{dr}(f_{\lambda_{2}}^*,\rho_{\lambda_{2}}^*) -\mathcal{R}_{dr}(f_{\lambda_{2}}^*,\rho_{\lambda_{2}}^*) \end{align} where $\hat{\mathcal{R}}_{dr}(f,\rho)$ is the empirical risk of $(f,\rho)$ under the double ramp loss. $f_{\lambda_2}^*=h_{\lambda_2}^*+b_{\lambda_2}^*$ and $\rho^*_{\lambda_2}$ are defined as follows. \begin{align} \label{eq-l2-risk-minimize} (h_{\lambda_{2}}^*, b_{\lambda_{2}}^*,\rho_{\lambda_{2}}^*)& = \arg\min_{h\in {\cal H}_{\mathcal{K}}, b, \rho} \; \frac{\lambda_{2}}{2} \| h \|^2_{\mathcal{K}} + {\cal R}_{dr}(f,\rho) \end{align} $\mathcal{A}(\lambda_2)$ measures the approximation power of the RKHS ${\cal H}_{\mathcal{K}}$ and is defined as follows. \begin{equation} \label{eq-approx-powr} \mathcal{A} (\lambda_2) = \inf_{h\in {\cal H}_{\mathcal{K}}, b, \rho} \; \frac{\lambda_2}{2} \| h \|^2_{\mathcal{K}} + \mathcal{R}_{dr} (h+b, \rho) - \mathcal{R}_{dr} (f_{d}^*, \rho_d^*) \;\;\; \forall \lambda_2 > 0 \end{equation} The error decomposition in eq.(\ref{error-decomp-l2}) is easy to derive once we know that both $h^*_{\lambda_2}$ and $h^*_{\lambda_2,S}$ lie in the same function space.
However, this does not hold for the SDR-SVM proposed in this paper: the error analysis becomes difficult due to the data-dependent nature of ${\cal H}_{\mathcal{K}}^+$. We use the techniques discussed in \cite{Wu:2005,JMLR:v15:huang14a}. We establish the error decomposition of SDR-SVM from the error decomposition (\ref{error-decomp-l2}) with the help of $f^*_{\lambda_2,S}$. We first characterize some properties of $f^*_{\lambda_2,S}, \rho^*_{\lambda_2,S}$. Note that from here onwards, we assume $\mu=1$ (the slope parameter in the loss function $L_{dr}$). \begin{proposition} \label{prop-reg-bound} For any $\lambda_{2} > 0$, let ${f}_{ \lambda_{2}, S}^* = (h_{\lambda_{2}, S}^*, b_{\lambda_{2}, S}^*, \rho^*_{\lambda_2,S})$ be given by eq.(\ref{l2-emp-risk-min}). Then, \begin{equation*} \begin{aligned} \Omega({h}_{\lambda_{2}, S}^*) \leq \frac{1}{\lambda_{2}} \hat{\mathcal{R}}_{dr}({f}_{ \lambda_{2}, S}^*, \rho^*_{\lambda_2,S}) + \| h_{\lambda_{2}, S}^* \|^2_{\mathcal{K}} \end{aligned} \end{equation*} \end{proposition} The proof of this proposition is provided in Appendix~C. \subsection{Error Decomposition for SDR-SVM} We now derive the error decomposition for SDR-SVM. We define the sample error as \begin{align*} \label{eq-sample-error} \mathcal{S}(N, \lambda_1, \lambda_2)& = \big( \mathcal{R}_{dr} (f_{ \lambda_1, S }^*, \rho_{\lambda_1, S}^*) - \hat{\mathcal{R}}_{dr}(f_{\lambda_1, S}^*, \rho_{\lambda_1, S}^*) \big)\\ &+ (1 + \psi) \big( \hat{\mathcal{R}}_{dr}(f_{\lambda_2}^*, \rho_{\lambda_2}^*) - \mathcal{R}_{dr}( f_{\lambda_2}^*, \rho_{\lambda_2}^* ) \big) \end{align*} where $(f_{ \lambda_1, S }^*,\rho_{ \lambda_1, S }^*)$ is a global minimizer of the optimization problem in eq.(\ref{eq-sdr-svm}), $(f^*_{\lambda_2}, \rho^*_{\lambda_2})$ is a global minimizer of problem (\ref{eq-l2-risk-minimize}), and $\psi=\frac{\lambda_1}{\lambda_2}$. The following theorem gives the error decomposition for SDR-SVM.
\begin{theorem} \label{thm:err-decomp-l1} For $0 < \lambda_1 \leq \lambda_2 \leq 1$, let $\psi = \frac{\lambda_1}{\lambda_2}$. Then, \begin{align*} \mathcal{R}_{dr} (f_{ \lambda_1, S }^*, \rho_{\lambda_1, S}^*) - \mathcal{R}_{dr} (f_{d}^*, \rho_{d}^*) + \lambda_1 \Omega (h_{\lambda_1, S}^*)\\ \leq \psi \mathcal{R}_{dr} (f_{d}^*, \rho_d^*)+ \mathcal{S}(N, \lambda_1, \lambda_2) + 2 \mathcal{A}(\lambda_2) \end{align*} where $\mathcal{A}(\lambda_2)$ is the approximation error defined by eq.(\ref{eq-approx-powr}). \end{theorem} The proof of the above theorem is provided in Appendix~D. Using Theorem~\ref{thm:err-decomp-l1}, the generalization error of SDR-SVM is estimated by bounding ${\cal S}(N,\lambda_1,\lambda_2)$ and ${\cal A}(\lambda_2)$. \subsection{Generalization Error of SDR-SVM} We expect the sample error ${\cal S}(N,\lambda_1,\lambda_2)$ to tend to zero at a certain rate as $N$ tends to infinity. This follows from the convergence of the sample mean to its expected value. We also make the following assumption on ${\cal A}(\lambda_2)$. \begin{assumption} \label{assump1} For some $0 < \beta \leq 1$ and $c_{\beta} > 0$, the approximation error satisfies \begin{equation} \label{eq-assumption} \mathcal{A} (\lambda_2) \leq c_{\beta} \lambda_2^{\beta} \;\;\;\forall \lambda_2 > 0 \end{equation} \end{assumption} This is a standard assumption in the learning theory literature \cite{Cucker:2007:LTA:1214096}. \begin{theorem} Suppose that Assumption \ref{assump1} holds for some $0 < \beta \leq 1$. Take $\lambda_1 = N^{-\frac{\beta}{4 \beta + 2}}$ and let $(f_{\lambda_1, S}^*, \rho_{\lambda_1, S}^*)$ be the optimal solution of SDR-SVM. Then for any $0 < \delta \leq 1$, there holds \begin{equation} \mathcal{R}_{d}(f_{\lambda_1, S}^*, \rho_{\lambda_1, S}^*) - \mathcal{R}_{d}(f_{d}^*, \rho_d^*) \leq \tilde{c} \left( \log \frac{4}{\delta} \right)^{1/2} N^{-\frac{\beta}{4\beta + 2}} \end{equation} with probability at least $1 - \delta$.
Here $\tilde{c}$ is a constant independent of $\delta$ and $N$. \end{theorem} The proof of this theorem is provided in Appendix~E. It uses the concentration bound results discussed in \cite{Bartlett:2003:RGC:944919.944944}. \section{Experiments} \label{sec:exp} In this section, we show the effectiveness of the proposed approach on several datasets. We report experimental results on five datasets (``Ionosphere", ``Parkinsons", ``Heart", ``ILPD" and ``Pima Indian Diabetes") available in the UCI machine learning repository \cite{Lichman:2013}. \begin{figure*}[t!] \begin{center} \begin{tabular}{ccc} \includegraphics[scale=0.3]{ilpd_risk.eps}& \includegraphics[scale=0.3]{ilpd_acc.eps} & \includegraphics[scale=0.3]{ilpd_rr.eps} \\ \includegraphics[scale=0.3]{iono_risk.eps}& \includegraphics[scale=0.3]{iono_acc.eps} & \includegraphics[scale=0.3]{iono_rr.eps}\\ \includegraphics[scale=0.3]{parkinsons_risk.eps}& \includegraphics[scale=0.3]{parkinsons_acc.eps} & \includegraphics[scale=0.3]{parkinsons_rr.eps} \\ \includegraphics[scale=0.3]{heart_risk.eps}& \includegraphics[scale=0.3]{heart_acc.eps} & \includegraphics[scale=0.3]{heart_rr.eps} \\ \includegraphics[scale=0.3]{pima_risk.eps}& \includegraphics[scale=0.3]{pima_acc.eps} & \includegraphics[scale=0.3]{pima_rr.eps} \end{tabular} \caption{Comparison Plots for Different Datasets. Column 1 shows the risk $R_d$, column 2 shows accuracy on un-rejected examples, column 3 shows the rejection rate.} \label{Fig:Comparisons} \end{center} \end{figure*} \subsection{Experimental Setup} To solve the linear program in each iteration of the proposed approach, we use the CVXOPT package in Python \cite{cvxopt}. In our experiments, we use a Gaussian kernel $\mathcal{K}(\mathbf{x}_{i}, \mathbf{x}_{j}) = \exp( -\gamma {\| \mathbf{x}_{i} - \mathbf{x}_{j} \|}^2)$ for nonlinear problems. In all the experiments, we set $\mu = 1$. The regularization parameter $\lambda$ and the kernel parameter $\gamma$ are chosen using 10-fold cross validation.
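For reference, the kernel matrix used throughout the experiments can be computed in vectorized form as below (NumPy; the value of $\gamma$ in the usage is illustrative, since in practice it is selected by cross validation).

```python
import numpy as np

# Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2),
# computed without explicit Python loops.

def gaussian_kernel(X, Z, gamma):
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Z**2, axis=1)[None, :]
          - 2.0 * X @ Z.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))   # clip tiny negative values
```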
We compare the performance of the proposed approach (SDR-SVM) with three other approaches. The first is a standard SVM based reject option classifier: we first learn a decision boundary using SVM and then set the width of the rejection region by cross-validation such that the empirical risk under $L_{d,\rho}$ is minimized. We use this approach as a proxy for the approach proposed in Bartlett and Wegkamp \shortcite{Bartlett:2008}. The parameters of the SVM ($C$ and $\gamma$) are chosen using 10-fold cross-validation. The second approach is the SVM with embedded reject option (ER-SVM) \cite{Fumera2002}; we used the code for this approach available online \cite{Fumera-Code}. We also compare our approach with the double hinge SVM (DH-SVM) based reject option classifier \cite{Grandvalet2008}. \subsection{Simulation Results} We report experimental results for different values of $d\in [0.05, 0.5]$ with a step size of 0.05. For every value of $d$, we find the cross-validation risk (under $L_{d,\rho}$), the \% rejection rate (RR) and the \% accuracy on the un-rejected examples (Acc). We also report the average number of support vectors (those with ${\alpha}_{i}\geq 10^{-6}$). The results provided here are based on 10 repetitions of 10-fold cross-validation (CV). Figure~\ref{Fig:Comparisons} shows the comparison plots for the different datasets. We observe the following. \begin{enumerate} \item {\bf Average Cross Validation Risk $\mathcal{R}_d$: }We see that SDR-SVM performs better than ER-SVM by large margins in terms of the average cross validation risk ($\hat{\mathcal{R}}_d$) for all datasets and for all values of $d$. For the Parkinsons and Heart datasets, SDR-SVM has smaller $\hat{\mathcal{R}}_d$ risk (for all values of $d$) compared to DH-SVM. For the ILPD, Ionosphere and PIMA datasets, the $\hat{\mathcal{R}}_d$ risk of SDR-SVM is comparable to DH-SVM.
SDR-SVM performs better than the Normal-SVM based approach on the Parkinsons, Heart, ILPD and PIMA datasets. For the Ionosphere dataset, SDR-SVM performs comparably to the Normal-SVM based approach. \item {\bf Rejection Rate: } We observe that for the Ionosphere, Heart and Parkinsons datasets, the rejection rate of SDR-SVM is much smaller compared to the other approaches, except for smaller values of $d$ (0.05 and 0.1). For the PIMA and ILPD datasets, the rejection rates of SDR-SVM are comparable to DH-SVM. The rejection rates for these two datasets are comparatively high for all values of $d$; a possible reason is the high overlap between the two class regions. \item {\bf Performance on Unrejected Examples: }We see that SDR-SVM also gives good classification accuracy on unrejected examples. It always gives better accuracy compared to ER-SVM. Compared to the normal SVM based approach, SDR-SVM always does better on the ILPD and Parkinsons datasets; for the rest of the datasets, it gives comparable accuracy on unrejected examples. SDR-SVM performs comparably to DH-SVM. \end{enumerate} Thus, overall SDR-SVM learns reject option classifiers which attain smaller $\hat{\mathcal{R}}_{d}$ risk. It achieves this by simultaneously minimizing the rejection rate and the mis-classification rate on unrejected examples. \subsection{Sparseness Results} We now show that SDR-SVM learns sparse reject option classifiers. As discussed, by sparseness we mean that the resulting classifier can be represented as a linear combination of a very small fraction of the training points. Sparseness results for SDR-SVM are shown in Figure~\ref{fig:sparseness}.
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[scale=0.28]{ilpd_svs.eps} & \includegraphics[scale=0.28]{iono_svs.eps} \\ \includegraphics[scale=0.28]{pima_svs.eps}& \end{tabular} \caption{Sparseness Comparison of SDR-SVM with DH-SVM and Normal-SVM} \label{fig:sparseness} \end{figure} We see that for the ILPD, Ionosphere and PIMA datasets, SDR-SVM outputs classifiers which are much sparser compared to the DH-SVM and Normal-SVM based approaches. ER-SVM does not have an obvious representation of the classifier as a linear combination of training examples. \subsection{Experiments with Noisy Data} $L_{dr,\rho}$ is a generalization of the ramp loss for reject option classification. For the standard binary classification problem, the ramp loss has been shown to be robust against label noise \cite{Ghosh:2015}. Motivated by this fact, we performed experiments to test the robustness of $L_{dr,\rho}$ against uniform label noise (with noise rates of $10\%, 20\%, 30\%$). The results are shown in Figure~\ref{fig:noise}. \begin{figure}[t] \begin{tabular}{cc} \includegraphics[scale=0.28]{iono_risk.eps}& \includegraphics[scale=0.28]{iono10_risk.eps}\\ \includegraphics[scale=0.27]{iono20_risk.eps}& \includegraphics[scale=0.27]{iono30_risk.eps} \end{tabular} \caption{Comparison Results in presence of uniform Label Noise} \label{fig:noise} \end{figure} We observe the following. \begin{enumerate} \item With a 10\% noise rate, the increase in the risk for SDR-SVM is not significant. As we increase the noise rate, the classifier becomes less confident and, for smaller values of $d$, places more examples in the rejection region, which increases the width of the rejection region. Thus, for smaller values of $d$, the risk of the proposed approach is dominated by the rejection cost. As $d$ increases, the cost of rejection also increases and the model is forced to classify examples to one of the labels.
With increasing noise rate, SDR-SVM remains robust for higher values of $d$. \item Compared to ER-SVM, SDR-SVM does significantly better for all values of $d$ and for all noise rates. \item For large values of $d$, SDR-SVM performs better than DH-SVM and normal SVM in the presence of label noise. \end{enumerate} \section{Conclusions}\label{sec:conclusions} In this paper, we proposed a sparse approach for learning a reject option classifier using the double ramp loss. We propose a DC programming based approach for minimizing the regularized risk, which solves successive linear programs to learn the classifier. Our approach can also learn nonlinear classifiers by using an appropriate kernel function. Further, we have shown the Fisher consistency of the double ramp loss $L_{dr,\rho}$. We upper bounded the excess risk of $L_d$ in terms of the excess risk of $L_{dr}$, and then derived generalization bounds for SDR-SVM. We showed experimentally that the proposed approach does better compared to other approaches for reject option classification and learns sparse classifiers. We also provided experimental evidence of the robustness of SDR-SVM against label noise.
\section{Introduction} Although studies of the transport of life-bearing rocks between planets have a long history \citep[e.g.][]{melosh1988}, claims of the discovery of traces of ancient life in the meteorite ALH84001 in the mid-1990s \citep{mcketal1996} accelerated investigations into panspermia within the Solar system. The last two decades have since featured detailed work \citep[e.g.][]{miletal2000} outlining delivery dynamics \citep{melton1993,glaetal1996,glaetal1997,alvetal2002,reyetal2012,woretal2013}, impact physics and chemistry \citep{meyetal2011,rubin2015,baretal2016}, and biological survival requirements \citep{horetal1994,masetal2001,nicholson2009,moeetal2010} with respect to Earth, Mars and other solar system bodies. Consequently, a detailed foundation for panspermia-related processes has been established. Despite these advances, the applicability of these processes to extrasolar planetary systems is still in question, partly because we lack for those systems the detailed knowledge we have of our own planetary system. Nevertheless, efforts to characterize panspermia between different extrasolar systems, or between the solar system and extrasolar systems, have contributed to our understanding \citep{adaspe2005,valetal2009,wesson2010,beletal2012,linloe2015,galwan2017}. However, panspermia amongst extrasolar planets within the same system has received relatively little, though increasing, attention \citep{helarm2014,steli2016,krietal2017,linloe2017a}. A potential reason for this relative dearth of studies is the lack of observational evidence of multiple planets in the habitable zone of the same star. This situation has now changed with the groundbreaking discovery of multiple potentially habitable planets in the TRAPPIST-1 system \citep{giletal2017}.
TRAPPIST-1 is an M dwarf with a mass of $0.08M_{\odot}$ that harbours seven observed transiting planets, all with masses similar to the Earth's, three of which (planets e, f and g) are securely in the star's habitable zone (although all seven may be, with effective temperatures ranging from about 150K to 400K). Because all seven planets are seen transiting from the Earth, their orbits are nearly coplanar. The system is compact (all planets could fit well within Mercury's orbit), and the planets are likely to be resonantly interacting in a long chain \citep{lugetal2017}. We do not yet know if an equivalent of the Late Heavy Bombardment event (believed to have occurred in the Solar system, even if less intense than originally thought; \citealt*{botnor2017}) has occurred or will occur in that system, nor what types of potential impactors lurk beyond the most distant planet (h) and outside of our field of view. Despite the uncertainties, investigation of panspermia from within this system has already been undertaken \citep{krietal2017,linloe2017a} and might be prompted further by additional observations, which are currently underway. Here, we study and derive several aspects of lithopanspermia in more general closely-packed multi-planet systems, with a focus on analytics and dynamical delivery, but also addressing micro-organism survival at each stage. Numerous Solar system studies have taught us that $N$-body simulations are both computationally expensive and dependent on a large number of parameters \citep{donetal1999} which are unknown in exoplanetary systems. Computational times for most known extrasolar planetary systems would be even worse because of their compact nature. Therefore, we adopt a purely analytical approach, one which could be applied to extrasolar systems with multiple habitable planets.
The characterisation of such systems is expected to increase steadily over the next decade, culminating with the PLATO mission \citep{rauetal2014}, which will measure habitable planets out to about 1 au. Throughout the paper, our subscript convention for physical quantities will be: $i$ for the impactor, no subscript for the source planet, a single prime for the fragmented debris ejected from the source planet, and a double prime for the target planet. In Section 2, we establish our setup and describe how a life-bearing rock could be transferred between one planet and the orbit of another planet; Appendix A provides most of the intermediate equations required for this section, and Appendix B contains an extension with a fictitious template compact system which can be used for quick estimates. Section 3 then details the likelihood of that rock actually impacting a target planet. Section 4 constrains the characteristics of the ejecta that would both satisfy the dynamics and have the capability to harbour life. In Section 5, we consider the biological prospects of life surviving all aspects of lithopanspermia. We conclude in Section 6. \section{Orbit transfers} \subsection{Setup} Consider a pair of planets such that one, the ``source'', contains a life-bearing organism, whereas the other, the ``target'', initially does not. An impactor crashes into the source, producing a spray of life-bearing ejecta. By assuming that the ejecta is ``kicked'' impulsively, we estimate its direction and speed such that it would reach the orbit of the target. By impulsively, we refer to the timescale of the kick being much smaller than the orbital period of the source. In multi-planet systems within the detectability threshold of transit photometry surveys, planet orbital periods are on the order of days, whereas impact kick timescales would typically be on the order of minutes. 
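The separation of timescales underlying the impulsive approximation can be illustrated with round numbers (an Earth-like source at $a \approx 0.02$ au around a $0.08M_{\odot}$ star; these are illustrative values, not fitted TRAPPIST-1 parameters):

```python
import math

# Compare the orbital period of a compact-system planet with the time the
# ejecta spends being "kicked", taken roughly as one planetary radius
# traversed at about the surface escape speed.

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_sun, M_earth = 1.989e30, 5.972e24
R_earth, au = 6.371e6, 1.496e11

M_star = 0.08 * M_sun              # TRAPPIST-1-like stellar mass
a = 0.02 * au                      # source semimajor axis (illustrative)

# Kepler's third law for the source's orbital period
T_orbit = 2.0 * math.pi * math.sqrt(a**3 / (G * (M_star + M_earth)))

# escape speed from the source and a rough kick duration
v_esc = math.sqrt(2.0 * G * M_earth / R_earth)
t_kick = R_earth / v_esc

print(T_orbit / 86400.0, t_kick / 60.0)   # period in days, kick time in minutes
```

With these numbers the period is a few days while the kick lasts roughly ten minutes, a ratio of several hundred, justifying the impulsive treatment.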
The underlying formalism we use was established in \cite{jacwya2012} and expanded upon in \cite{jacetal2014}, and is similar to that in Appendix A of \cite{feuwie2008}. We briefly repeat here the geometrical setup in \cite{jacetal2014}: Denote the kick speed as $\Delta v$, and the circular speed of the source as $v_{\rm k} \equiv \sqrt{G\left(M_{\rm star} + M\right)/a}$, where $a$ and $M$ are the source's semimajor axis and mass. The launch speed $v_{\rm lau}$ and escape speed $v_{\rm esc}$ are related to $\Delta v$ through $v_{\rm lau}^2 = \left(\Delta v\right)^2 + v_{\rm esc}^2$, where $v_{\rm esc}^2 = 2GM/R$. For perspective, the circular speeds of the TRAPPIST-1 planets are, from planet b to h moving outward, $\left\lbrace 79.9, 68.2, 57.5, 50.2, 43.7, 39.6, 33.9\right\rbrace$ km/s. The ratio $\Delta v / v_{\rm k}$ features frequently in the equations, and is bounded from above in our study by the value of $\sqrt{2} + 1$, which is the maximum possible value for which the debris can remain in the planetary system. Further, the minimum value of $\Delta v / v_{\rm k}$ for which the ejecta can escape the system is $\sqrt{2} - 1$. At impact, the source is assumed to lie at the pericentre of its orbit such that its argument of pericentre, longitude of ascending node, and inclination $I$ are all zero\footnote{The restrictiveness of the assumption of impact at orbital pericentre will be removed later when we assume circular orbits.}. The source is assumed to move counterclockwise, and the kick direction is defined by two variables: $\theta$ and $\phi$. The angle between the source's angular momentum vector and the kick direction is $\theta$, and the angle between the star-source pericentre line and the projection of the kick direction onto the source's orbital plane is $\phi$. \begin{figure*} \includegraphics[width=9cm]{IncPlot1.eps} \includegraphics[width=9cm]{IncPlot2.eps} \caption{ How the kick direction affects the inclination of the ejecta orbit.
By assuming coplanarity amongst all TRAPPIST-1 planets, we plot, for three different values of the ratio of kick velocity to circular Keplerian velocity ($\Delta v/v_{\rm k}$), the dependence on the kick direction in the source's orbital plane $\phi$. The gray region corresponds to where the resulting ejecta orbital inclination is small enough that the ejecta's vertical excursion never exceeds the radius of any TRAPPIST-1 target at any point in the orbit, and the red region corresponds to the opposite extreme, where vertical coincidence occurs only near the orbital nodes. } \label{incplots} \end{figure*} \subsection{Ejecta orbit characteristics} The orbit of the ejecta, whose elements are denoted by primes, is related to the unprimed quantities (which refer to the source planet) through equations (\ref{aaprime})-(\ref{apo}) \citep{jacetal2014}. Because known compact multi-planet systems are dynamically ``cold'' -- exhibiting low eccentricities -- henceforth we make the approximation that all planets are on circular orbits. Doing so greatly simplifies the analysis. For example, the upper limits for the planetary eccentricities in the TRAPPIST-1 system are all under 0.085. Further, because for any compact multi-planet system with habitable planets, we cannot yet know if panspermia has occurred, or will occur, at a particular time, we assume, without loss of generality, that the true anomaly $f=0$. We hence constrain only when the ejecta intersects the orbit of the target, and use equations (\ref{peri}) and (\ref{apo}) for this purpose. \subsection{Coplanarity restrictions} First however, consider that in order for ejecta to hit the target, the ejecta must coincide with the target in all three spatial dimensions. This intersection is most easily achieved if the orbital planes of the source, ejecta and target lie close to one another as we can then be guaranteed of coincidence in one of the three dimensions.
Hence, in this section, we quantify how coplanar the orbits of the ejecta, source and target must be to achieve spatial coincidence in the vertical direction (the direction of the angular momentum vector of the source) throughout the debris orbit. This condition is mathematically equivalent to $q' \sin{I'} \lesssim R$ or $Q' \sin{I'} \lesssim R$, where $R$ denotes the planet radius, $q'$ denotes the orbital pericentre, $Q'$ denotes the orbital apocentre, and the reference plane is the one which connects a coplanar source and target. Note that unless the longitudes of ascending node can be measured, the mutual inclinations of all source-target pairs in a particular system will remain unknown\footnote{In the TRAPPIST-1 system, the longitudes of ascending node are so far unknown, and the maximum difference in measured inclinations is $0.21^{\circ}$, when neglecting errors.}. The approximations in these relations result from effects not considered here such as gravitational focusing and atmospheric drag. If the kick direction is perpendicular to the source's angular momentum direction ($\theta = \pi/2$), then the ejecta orbit will be coplanar with the reference orbit ($I' = 0$). If, however, the kick direction deviates from this value, then the result is less obvious and is dependent on $\phi$ (and $e$ when not assumed to be zero). Figure \ref{incplots} illustrates the resulting dependence of $I'$ on $\phi$ for two different values of $\theta$, one where $\theta$ deviates from coplanarity by $10^{-3}$ degrees (left panel) and the other where the deviation is 0.5 degrees (right panel). In order to provide some context, we superimpose some results from the TRAPPIST-1 system on this figure. The gray region corresponds to where in the TRAPPIST-1 system $I' \le \arcsin{(\min{(R/a)})} = 0.029^{\circ}$ and the red region to where $I' \ge \arcsin{(\max{(R/a)})} = 0.24^{\circ}$.
For this red region, the ejecta and target would be vertically coincident only near the nodes of their orbits, whereas for the gray region, they would be vertically coincident throughout their orbits (increasing the chances of impact). The purple curve, corresponding to $\Delta v/v_{\rm k} = \sqrt{2} + 1$, is a limiting case for which the ejecta may remain bound. As $\Delta v/v_{\rm k}$ is lowered towards zero, the resulting curves fall below the green curve. For a kick direction offset from coplanarity by more than about $0.86^{\circ}$, we find that all three curves lie entirely within the red region. \subsection{Other spatial restrictions} Having linked the angles of impact with the inclinations, we now turn to the other two spatial dimensions. The subsequent analysis greatly benefits from three simplifications, which are sufficient for this study. The first two are our continued assumption of circular and coplanar orbits. The third is that we consider the source and target planets in pairs, ignoring the influence of any other planets, including those whose orbits lie in-between the pairs. This last assumption degrades in accuracy as the distance between the planets increases, because the debris will take longer to traverse this distance, and hence be diverted to a greater extent by extra bodies. However, these effects are sensitively dependent on the number, masses and locations of other planets in the system. \subsubsection{Intersecting orbits} In order for a collision to occur, the debris must be ejected onto an orbit which intersects the orbit of another planet. We can place bounds on this geometry by considering the pericentre and apocentre. Hence we begin by expressing these quantities as a function of the impacted planet's semimajor axis, the velocity ratio and $\phi$ (equations \ref{origsmallq}-\ref{chieq}). We wish to find the range of velocity kicks for which these pericentres or apocentres are achieved.
Hence, inverting the equations yields, for inward motion towards the orbital pericentre, \begin{equation} \left( \frac{\Delta v}{v_{\rm k}} \right)_{\rm in} = \left\{ \begin{array}{ll} \frac{\left(a-q'\right)\left[\left(a+q'\right)\sin{\phi}+\sqrt{\xi}\right]}{q'^2-a^2\sin^2{\phi}}, & \quad \sin^2{\phi} < \left(\frac{q'}{a}\right)^2 \\ \frac{\left(a-q'\right)\left[\left(a+q'\right)\sin{\phi}-\sqrt{\xi}\right]}{q'^2-a^2\sin^2{\phi}}, & \quad \sin^2{\phi} > \left(\frac{q'}{a}\right)^2 \end{array} \right. \label{FirstPiece} \end{equation} \noindent{}where \begin{equation} \xi = q' \left[q' + \left(2a + q'\right) \sin^2{\phi} \right] . \end{equation} \noindent{}Recall that $a$ and $q'$ refer respectively to the source planet's semimajor axis and the ejecta's orbital pericentre, and $\phi$ is the angle between the projection of the kick direction onto the source's orbital plane and the star-source pericentre line. For panspermia outward from the star, no piecewise function is necessary, as in the following equation the denominator is always positive and the term in square brackets is always negative. \begin{equation} \left( \frac{\Delta v}{v_{\rm k}} \right)_{\rm out} = \frac{\left(a-Q'\right)\left[\left(a+Q'\right)\sin{\phi}-\sqrt{\zeta}\right]}{Q'^2-a^2\sin^2{\phi}} \label{SecondPiece} \end{equation} \noindent{}where \begin{equation} \zeta = Q' \left[Q' + \left(2a + Q'\right) \sin^2{\phi} \right] . \label{ThirdPiece} \end{equation} \noindent{}Equations (\ref{FirstPiece})-(\ref{ThirdPiece}) provide the necessary and sufficient conditions for kick speeds to propel ejecta into the orbit of another planet. \subsubsection{Application to TRAPPIST-1} In order to demonstrate how these equations can be applied to a real system, Fig. \ref{perisample} illustrates the application of equation (\ref{FirstPiece}) to the TRAPPIST-1 system. 
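Before turning to the figure, the piecewise relations above can be transcribed directly into code; the short Python sketch below (function names and trial values are ours) evaluates equations (\ref{FirstPiece})-(\ref{ThirdPiece}) in units of the source's circular Keplerian speed:

```python
import math

def dv_ratio_in(a, q_prime, phi):
    """Equation (FirstPiece): kick speed (in units of the source's circular
    Keplerian speed v_k) needed to lower the ejecta pericentre to
    q_prime < a; piecewise in sin^2(phi)."""
    s = math.sin(phi)
    xi = q_prime * (q_prime + (2.0 * a + q_prime) * s * s)
    denom = q_prime**2 - a**2 * s * s
    if s * s < (q_prime / a)**2:
        return (a - q_prime) * ((a + q_prime) * s + math.sqrt(xi)) / denom
    return (a - q_prime) * ((a + q_prime) * s - math.sqrt(xi)) / denom

def dv_ratio_out(a, Q_prime, phi):
    """Equation (SecondPiece): kick speed (in units of v_k) needed to raise
    the ejecta apocentre to Q_prime > a. No piecewise branch is required:
    the denominator is always positive and the bracket always negative."""
    s = math.sin(phi)
    zeta = Q_prime * (Q_prime + (2.0 * a + Q_prime) * s * s)
    return (a - Q_prime) * ((a + Q_prime) * s - math.sqrt(zeta)) \
        / (Q_prime**2 - a**2 * s * s)
```

For instance, with $a = 1$, $Q' = 2$ and $\phi = \pi/2$, the outward branch returns $\sqrt{4/3} - 1 \approx 0.155$.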
The figure displays inward panspermia from planet h, as well as the minimum kick speeds necessary to thrust the ejecta just into the orbits of planets g, f, e, d, c and b. Note that this value is highly dependent on $\phi$, and exceeds the system escape speed for many values of $\phi$. Further, the dependence on $\phi$ is non-monotonic. \begin{figure} \includegraphics[width=9cm]{PeriV.eps} \caption{ Inward panspermia in the TRAPPIST-1 system from planet h. Shown is the minimum speed kick required to place ejecta from planet h into the orbit of another planet, as a function of kick direction ($\phi$). For many values of $\phi$, the minimum kick speed exceeds the system escape speed (top axis of plot). } \label{perisample} \end{figure} \begin{figure} \vspace{1em} \includegraphics[width=9cm]{ApoV.eps} \caption{ Outward panspermia in the TRAPPIST-1 system to planet h. Shown is the minimum speed kick required to place ejecta from a planet into the orbit of planet h, as a function of kick direction ($\phi$). In all cases, the minimum kick speed is less than the system escape speed (top axis of plot). } \label{aposample} \end{figure} When we instead consider outward panspermia to planet h, the functional form changes. Application of equation (\ref{SecondPiece}) yields Fig. \ref{aposample}. Here, for all values of $\phi$, the minimum kick speed required to propel the ejecta to another planet's orbit is smaller than the system escape speed. The smallest kick would be from planet h's nearest neighbour (planet g), whereas the greatest kick is required from the planet furthest away (planet b). Neither Fig. \ref{perisample} nor Fig. \ref{aposample} takes into account the escape speed from the planets themselves. The escape speeds of the TRAPPIST-1 planets are, from planet b to h moving outward, $\left\lbrace 9.90, 12.79, 8.15, 9.19, 9.02, 12.2, 9.35 \right\rbrace$ km/s.
The relative velocity of the ejecta after escape from the planet is likely to be comparable to the escape velocity of the planet. \subsubsection{When keeping $\phi$ fixed} If instead one has reason to assume a particular fixed value of $\phi$, then equations (\ref{FirstPiece})-(\ref{ThirdPiece}) may be considered as functions of $\left(q'/a\right)$ or $\left(a/Q'\right)$. As an example, the result for outward panspermia is shown in Fig. \ref{nondim} for six values of $\phi$. This plot may be applied to any planetary system, including the solar system. For example, outward panspermia from Earth to Mars \citep{woretal2013} corresponds roughly to the right axis of the plot. The escape speeds of those planets, however, differ by over an order of magnitude from those of the TRAPPIST-1 planets. \begin{figure} \vspace{1em} \includegraphics[width=9cm]{nodimapo.eps} \caption{ Outward panspermia as a function of $\left(a/Q'\right)$ when $\phi$ is kept fixed. The $x$-axis includes the entire relevant range in the TRAPPIST-1 system; the upper end also roughly corresponds to outward panspermia from Earth's orbit to Mars' (assuming that they are on circular, coplanar orbits). } \label{nondim} \end{figure} \subsubsection{Curve extrema} Returning to Figs. \ref{perisample}-\ref{aposample}, in order to find the minimum value of these curves as a function of $\phi$, we consider all possible extrema of equation (\ref{FirstPiece}), which are found in equation (\ref{curveex}). For outward panspermia, \begin{equation} \max{\left(\frac{\Delta v}{v_{\rm k}}\right)_{\rm out}^{\rm abs}} = \sqrt{\frac{2Q'}{a + Q'} } + 1 \label{useful1} \end{equation} \noindent{}and $(\phi)_{\rm ex} = \pi/2$, which gives the absolute minimum \begin{equation} \min{\left(\frac{\Delta v}{v_{\rm k}}\right)_{\rm out}^{\rm abs}} = \sqrt{\frac{2Q'}{a + Q'} } - 1. 
\label{useful2} \end{equation} \noindent{}These equations explain why the ratio corresponding to the system escape speed, $\Delta v / v_{\rm k} = \sqrt{2} + 1$, is never reached: because the edge of the system lies beyond planet h, the ejecta will reach the orbit of planet h before reaching the edge of the system. Alternatively, for inward panspermia, there are multiple real solutions: $(\phi)_{\rm ex} = \left\lbrace -\pi/2, \ \ -\arccos{\left[\pm \sqrt{a^2 - q'^2}/a\right]} \right\rbrace$. Combined with the escape boundary, we have \begin{equation} \max{\left(\frac{\Delta v}{v_{\rm k}}\right)_{\rm in}^{\rm abs}} = \sqrt{2} + 1 , \end{equation} \begin{equation} \min{\left(\frac{\Delta v}{v_{\rm k}}\right)_{\rm in,1}} = \sqrt{\frac{2q'}{a + q'} } + 1 , \label{loc} \end{equation} \begin{equation} \min{\left(\frac{\Delta v}{v_{\rm k}}\right)_{\rm in,2}} = \frac{a \left(a - q'\right) }{2q' \left(a + q'\right)} . \label{useful5} \end{equation} The absolute minimum could arise from either equation (\ref{loc}) or equation (\ref{useful5}). Equation (\ref{loc}) gives the absolute minimum when $q'/a < (1/2)(\sqrt{2} - 1) \approx 0.21$; otherwise equation (\ref{useful5}) does. Equations (\ref{useful1})-(\ref{useful5}) may be useful constraints for any compact multi-planet system. \subsubsection{Probability distributions} Given these constraints, we can now construct probability distribution functions for debris reaching the target planet's orbit. We obtain a probability $P$ of a given ratio ($\Delta v / v_{\rm k}$) being sufficiently high for the ejecta to reach the orbit of another planet as an explicit function of ($\Delta v / v_{\rm k}$), $a$ and either $q'$ or $Q'$, by assuming some distribution for $\phi$. Because the cratering record on Solar system bodies indicates that ejecta are effectively isotropically distributed, we assume that a uniform distribution of $\phi$ holds generally for other planetary systems.
Consequently, the formulae we obtain may be applied widely if extrasolar systems experience similar cratering processes. Our final results are presented in equations (\ref{out1})-(\ref{In4}). \begin{figure} \vspace{1em} \includegraphics[width=9cm]{ProbIn.eps} \caption{ Application of the general algorithm to yield transfer probabilities (equations \ref{In1}-\ref{In4}) to the TRAPPIST-1 system. Shown is inward panspermia from planet h. The only variables needed to generate this plot were the semimajor axes of the planets. } \label{probin} \end{figure} \begin{figure} \vspace{1em} \includegraphics[width=9cm]{ProbOut.eps} \caption{ Similar to Fig. \ref{probin}, but for outward transfer of debris to the orbit of planet h (equations \ref{out1}-\ref{out3}). For high-enough kick velocities, ejecta will always have the capability of colliding with an outer planet. } \label{probout} \end{figure} Now we provide an example using these probability functions. We utilize the TRAPPIST-1 system both in order to be consistent with the previous applications and because, fortuitously, planets b and h are sufficiently well-separated to allow us to sample the special case where $q'/a < (1/2)(\sqrt{2} - 1)$. Recall that the only variables required to construct these functions are the semimajor axes of the planets and the velocity ratio. Figures \ref{probin} and \ref{probout} are the result. Note first that for inward panspermia, $P$ never reaches unity, unlike for outward panspermia: the geometry responsible for these relations is folded into the curves. Second, for both directions, transferring to neighbouring planets is easier than to those further away. Third, the kinks in Fig. \ref{probin} ultimately result from the piecewise nature of the velocity ratio in equation (\ref{FirstPiece}). The kink in the bottommost curve occurs at $P=0$ because the separation of planets h and b is wide enough to cross the critical threshold mentioned above.
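As a closing consistency check on this section, the extrema quoted in equations (\ref{useful1}) and (\ref{useful2}) can be recovered by brute force, scanning equation (\ref{SecondPiece}) over a fine grid in $\phi$; a minimal Python sketch with illustrative values $a = 1$, $Q' = 2$ (names ours):

```python
import math

def dv_ratio_out(a, Q_prime, phi):
    # Equation (SecondPiece): kick needed (in units of v_k) to raise the
    # ejecta apocentre to Q_prime > a for in-plane kick direction phi.
    s = math.sin(phi)
    zeta = Q_prime * (Q_prime + (2.0 * a + Q_prime) * s * s)
    return (a - Q_prime) * ((a + Q_prime) * s - math.sqrt(zeta)) \
        / (Q_prime**2 - a**2 * s * s)

a, Qp = 1.0, 2.0
# Grid chosen so that phi = +/- pi/2 (the analytic extrema) are sampled.
phis = [-math.pi + k * math.pi / 10000.0 for k in range(20001)]
vals = [dv_ratio_out(a, Qp, phi) for phi in phis]

scanned_min, scanned_max = min(vals), max(vals)
predicted_min = math.sqrt(2.0 * Qp / (a + Qp)) - 1.0  # equation (useful2)
predicted_max = math.sqrt(2.0 * Qp / (a + Qp)) + 1.0  # equation (useful1)
```

The scanned and predicted extrema agree to high precision, with the minimum attained at $\phi = \pi/2$ and the maximum at $\phi = -\pi/2$.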
\section{Impacting the target in one pass} In the last section, we placed bounds on orbital properties of ejecta that could hit the target planet. In order to better quantify the probability of actually impacting the target, we now consider the location of the debris in space, rather than just its orbit. Assuming that the ejecta orbit intersects or almost intersects with the orbit of the target, and that these orbits remain fixed, then expressions exist for the probability of collision in one pass. These expressions, pioneered by \cite{opik1951} and \cite{wetherill1967}, have led to substantial and wide-ranging applications. A recent updated and simplified series of derivations was provided by \cite{jeomal2017}. They found in their equations 29 and 37 the probability of collision per revolution of the ejecta, $\mathcal{P}$, as a function of (i) the orbital periods of the target and ejecta, (ii) the gravitational acceleration due to the parent star (assumed to be constant in the vicinity of the collision), (iii) the collision radius, (iv) the velocities of both objects at collision, and (v) the angle, $\lambda$, between the common direction of the velocity vectors and the vector from the star to the collision point. We now consider these dependencies in more detail. Because we assume that the target is on a circular orbit, the gravitational acceleration at the collision point is $GM_{\rm star}/a''^2$ (recall that the target's orbital parameters and mass are denoted with double primes). The collision radius is the sum of the radii of the ejecta and target multiplied by the gravitational focusing factor $(1+ v_{\rm esc}''^2/|v'-v''|^2)$. These velocities can be expressed in orbital elements as $v'' = n''a''$ and $v' = n'a'\sqrt{(1+e'\cos{f'})(1-e'^2)}$, where $n'$ and $n''$ denote the mean motions of the ejecta and target. In this last expression, we can further relate $f'$ to $\phi$.
For the circular source orbit case, and assuming coplanarity amongst the source, debris and target, equations (\ref{sinf}) and (\ref{cosf}) reduce to \begin{equation} \tan{f'} = \frac{\left|1 + \left(\frac{\Delta v}{v_{\rm k}}\right) \sin{\phi} \right| \cot{\phi}} {2 + \left(\frac{\Delta v}{v_{\rm k}}\right) \sin{\phi} } \end{equation} \noindent{}which leads to Fig. \ref{fPlot}. \begin{figure} \vspace{2em} \includegraphics[width=9cm]{fPlot.eps} \caption{ Relating $f'$ to $\phi$ for different kick speeds. } \label{fPlot} \end{figure} Combining the various components above leads to the following expression for the collision probability per time in our formalism. Assuming that the target and ejecta are much less massive than the parent star allows us to concisely write \[ \mathcal{P} = \frac{c}{4\pi^2} \sqrt{ G\left(\frac{M_{\rm star}^2}{M''}\right) \left(\frac{R'+R''}{a'^3a''}\right) } \] \begin{equation} \ \ \ \ \ \times \sqrt{\frac{\left|v' - v''\right|}{v'+v''} \left(1 + \frac{v_{\rm esc}''^2}{\left|v'-v''\right|^2} \right) \csc{\lambda} } \label{collprob} \end{equation} \noindent{}where $c$ is a constant equal to either $2\sqrt{2}$ for exactly intersecting orbits, or 1.7 for non-intersecting orbits. Equation (\ref{collprob}) accounts for both inward and outward panspermia because of the first absolute value. As emphasized by \cite{jeomal2017}, equation (\ref{collprob}) should not be used to predict a specific impact event in the past or future, but rather be used in a statistical way with streams of debris. The time taken for the debris to travel from the source to the target planet -- a measure which influences the survival rates of microbes -- may be approximated by the inverse of equation (\ref{collprob}). This transit time is hence strongly influenced by $\left|v' - v''\right|$, the difference in speed between the ejecta and the target planet. When these values are comparable, the inverse of equation (\ref{collprob}) becomes singular.
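Equation (\ref{collprob}) is straightforward to evaluate numerically in SI units; the Python sketch below (variable names ours, input values purely illustrative) returns $\mathcal{P}$ in s$^{-1}$:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def collision_rate(c, M_star, M_target, R_ej, R_target,
                   a_ej, a_target, v_ej, v_target, v_esc_target, lam):
    """Equation (collprob): collision probability per unit time.
    c = 2*sqrt(2) for exactly intersecting orbits, or 1.7 otherwise;
    lam is the angle between the common velocity direction and the
    star-to-collision-point vector."""
    dv = abs(v_ej - v_target)
    geometric = G * (M_star**2 / M_target) * (R_ej + R_target) \
        / (a_ej**3 * a_target)
    kinematic = (dv / (v_ej + v_target)) \
        * (1.0 + v_esc_target**2 / dv**2) / math.sin(lam)
    return (c / (4.0 * math.pi**2)) * math.sqrt(geometric) * math.sqrt(kinematic)
```

A quick way to sanity-check an implementation is that doubling the stellar mass doubles $\mathcal{P}$ at fixed remaining inputs, since $\mathcal{P} \propto \sqrt{M_{\rm star}^2} = M_{\rm star}$.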
Further, the angle $\lambda$ is crucial, and can reduce the transit time almost arbitrarily. These factors primarily explain the several order-of-magnitude spread in transit times in fig. 3 of \cite{krietal2017} for the TRAPPIST-1 system. Nevertheless, when comparing transit times in TRAPPIST-1 to those in other systems, the functional dependencies in equation (\ref{collprob}) may be useful. For example, the transit time scales inversely with the mass of the star and as the square root of the distance from the target planet. \section{Ejecta characteristics} Alternatively, if we assume that all of the ejecta represents a single boulder or pebble of mass $m'$, then we can speculate on the characteristics of this piece of ejecta, and whether it could be life-bearing. In this section, we neglect more complex possibilities, such as indirect supplementary ejection from small source planets. Source planets the size of Ceres, for example, have low-enough escape velocities and surface pressures to be susceptible to glaciopanspermia \citep{houtkooper2011,linloe2017b}. We also neglect the portion of the atmosphere -- even for large source planets -- that will invariably be ejected along with the debris \citep{berera2017}. The debris can act as an agent to transfer the source planet's atmospheric constituents to the target. If the transfer of material is extensive enough, as during perhaps a period of heavy bombardment \citep{deNetal2012,botnor2017}, then the atmosphere of the target planet may become more or less amenable to hosting life on its surface. \subsection{Escaping the atmosphere} First, consider that the radius and density of this piece of ejecta are related to $m'$ through $m' = (4\pi/3)\rho'R'^3$. This radius must be larger than the critical radius below which atmospheric drag would slow the fragment to beneath the escape velocity of the source.
If the source planet contains an atmosphere with surface pressure $p$ and gravitational acceleration $g$, then the radius $R'$ of a piece of ejecta which could escape the atmosphere is \citep{artiva2004} \begin{equation} R' \ge \frac{3p}{8g\rho} \left[ \frac{\Delta v + v_{\rm esc}} {\Delta v - v_{\rm esc}} \right], \ \ \ \ \ {\Delta v > v_{\rm esc}} \label{ejecrit} \end{equation} \noindent{}where $g = GM/R^2$, such that ejecta of any size could escape from an atmosphere-less source as long as $\Delta v$ is sufficiently high. Consequently, the minimum single-body ejecta mass which eventually hits the target and could initially escape from the source is \begin{equation} m' \ge \frac{9\pi}{128} \left( \frac{\rho' p^3}{\rho^3 g^3} \right) \left[ \frac{\Delta v + v_{\rm esc}} {\Delta v - v_{\rm esc}} \right]^3, \ \ \ \ \ {\Delta v > v_{\rm esc}}. \label{minmass} \end{equation} \begin{figure} \vspace{1em} \includegraphics[width=9cm]{MinEjMass.eps} \caption{ The minimum mass of an impact fragment that could be ejected from an Earth-analogue. The surface pressure ($p$), surface gravity ($g$) and density of the Earth ($\rho$) were assumed. Each curve represents a different impact fragment density ($\rho'$). The $x$-axis represents the post-impact speed $\Delta v$, which is bounded from below by the escape speed. The plot demonstrates that near the escape speed, the minimum mass needed to escape can vary by orders of magnitude. Otherwise, most post-impact fragments at least as massive as Big Ben (the bell) or the Hubble Space Telescope can escape an exo-Earth. } \label{MinEj} \end{figure} We plot equation (\ref{minmass}) in Fig. \ref{MinEj} assuming that the source planet is an Earth-analogue. The plot illustrates that atmospheric drag has the strongest effect when the post-impact speed is close to the escape speed, and flattens out as $\Delta v$ increases (note that for large $\Delta v$, equation \ref{minmass} becomes independent of $\Delta v$).
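To attach numbers to this relation, equation (\ref{minmass}) can be evaluated directly; a Python sketch for an Earth-analogue source (SI units; the basalt-like fragment density is an illustrative choice):

```python
import math

def min_ejecta_mass(rho_frag, p, g, rho, dv, v_esc):
    """Equation (minmass): minimum mass of a single fragment that can
    punch through the atmosphere and escape; valid only for dv > v_esc."""
    if dv <= v_esc:
        raise ValueError("fragment never escapes: require dv > v_esc")
    drag = (dv + v_esc) / (dv - v_esc)
    return (9.0 * math.pi / 128.0) * rho_frag * (p / (rho * g))**3 * drag**3

# Earth-analogue source with a basalt-like fragment (illustrative values):
m_min = min_ejecta_mass(rho_frag=3000.0, p=1.013e5, g=9.81,
                        rho=5510.0, dv=2.0 * 11.2e3, v_esc=11.2e3)
```

For $\Delta v \gg v_{\rm esc}$, the bracketed drag factor tends to unity and the minimum mass approaches $(9\pi/128)\,\rho' p^3/(\rho g)^3$, of order a few tonnes for these inputs, consistent with the Big Ben / Hubble comparison in the figure caption.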
\subsection{Largest fragment} Recall that $m'$ represents a fragment of the impact. Now we consider how to compute the largest possible size of an impact fragment, with radius $R'_{\rm max}$. This deduction is based on detailed physics (porosity, spalling, Grady-Kipp processes, simple versus complex crater formation) that is beyond the scope of this study. Further, certain dependencies which might work for icy Solar system satellites \citep{bieetal2012,sinetal2013,alvetal2017} might not apply uniformly to all planets in extrasolar systems. We therefore simply use \citep{miletal2000} \begin{equation} \frac{R'_{\rm max}}{R_{\rm i}} = \left( \frac{3 + W}{2} \right) \left[ \frac{T}{\rho \left(\Delta v^{2/3}\right) \left(v_{\rm i}^{4/3}\right)} \right] \label{avgR} \end{equation} \noindent{}where $R_{\rm i}$ and $v_{\rm i}$ are the radius and impact speed of the impactor, $T$ is the tension at fracture, and $W$ is the Weibull modulus \citep{affetal2006}. For context, as mentioned by \cite{miletal2000}, $T = 0.1 \times 10^9$ Pa and $W = 9.5$ for basalt, a type of igneous rock. We may relate the pre-impact and post-impact speeds in equation (\ref{avgR}) by assuming that the energy of the impact is deposited at a depth which is comparable to $R_{\rm i}$. Then \citep{melosh1984,melosh1988} \begin{equation} \frac{\Delta v}{v_{\rm i}} \approx \left(\frac{R_{\rm i}}{d} \right)^{2.87} \end{equation} \noindent{}where $d > R_{\rm i}$ is the distance from the ejection point to the centre of energy deposition\footnote{See \cite{houetal1983} and \cite{dobetal2010} for alternative formulations.}. Consequently, the ejecta speed cannot be larger than the impactor speed. Otherwise, the speed ratio is largely determined by the detailed physics of the impact. As we do not pursue such detail here, we simply leave $\Delta v/v_{\rm i}$ as a free parameter ranging from 0 to 1.
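With these ingredients in hand, equation (\ref{avgR}) can be evaluated numerically; a Python sketch using the basalt values quoted in this section ($W = 9.5$, $T = 10^{8}$ Pa) together with a basalt-like density $\rho = 3000$ kg m$^{-3}$ (the impactor speed and speed ratio in the example are arbitrary):

```python
def max_fragment_ratio(W, T, rho, dv, v_i):
    """Equation (avgR): R'_max / R_i, with all inputs in SI units
    (dv and v_i in m/s, T in Pa, rho in kg/m^3)."""
    return ((3.0 + W) / 2.0) * T / (rho * dv**(2.0 / 3.0) * v_i**(4.0 / 3.0))

# Basalt target, 20 km/s impactor, ejection at 30% of the impact speed:
ratio = max_fragment_ratio(W=9.5, T=1.0e8, rho=3000.0,
                           dv=0.3 * 20.0e3, v_i=20.0e3)
```

For these inputs the largest fragment is roughly $10^{-3}$ of the impactor radius, comfortably below the few-per-cent upper bound discussed next.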
Then, we can express equation (\ref{avgR}) for basalt (adopting $\rho = 3$ g/cm$^3$) as \begin{equation} \frac{R'_{\rm max}}{R_{\rm i}} = \left( \frac{v_{\rm i}}{0.456 \ \frac{\rm km}{\rm s}} \right)^{-2} \left( \frac{\Delta v}{v_{\rm i}} \right)^{-2/3}. \label{Rratio} \end{equation} \noindent{}We plot equation (\ref{Rratio}) in Fig. \ref{Radmax0}. The plot illustrates that, in general, fragment radii are no more than about 5\% of the radius of the impactor. \begin{figure} \vspace{1em} \includegraphics[width=9cm]{MaxRad.eps} \caption{ The maximum impact fragment size in terms of the initial radius of the impactor, for different incoming impactor speeds. The tension at fracture is assumed to be $0.1 \times 10^9$ Pa, and the density and Weibull modulus are consistent with basalt. } \label{Radmax0} \end{figure} \subsection{Liberating material} Now we may consider the minimum impactor size that could liberate material. If $R' = R'_{\rm max}$ is taken to be the minimum size of ejecta that can escape the atmosphere (through equation \ref{ejecrit}), then it follows that the corresponding $R_{\rm i} = R_{\rm i}^{\rm min}$ (through equation \ref{Rratio}) must be the minimum size impactor that can liberate material. Combining these equations, and assuming basalt for the impactor and an Earth-sized, Earth-mass planet, yields Fig. \ref{minlib}. The figure demonstrates that generally an impactor must have a radius of at least tens of km in order to be the catalyst for panspermia. \begin{figure} \vspace{1em} \includegraphics[width=9cm]{MinLib.eps} \caption{ The minimum impactor size that could liberate material from the planet's gravitational well. The impactor and planet have densities of 3 g/cm$^3$, which is consistent with basalt. The figure illustrates that panspermia could typically occur only if the impactor is at least on the order of tens of km in size.
} \label{minlib} \end{figure} \subsection{Destroying the source} Finally, if the impactor is too energetic, then it could sterilize, eviscerate or break apart the source planet. Any of these processes would inhibit prospects for panspermia. Here, we do not analyze the consequences in detail, but merely provide bounds for the impactor in the most extreme case of breakup. We can place an upper bound on the maximum size of the impactor by considering the maximum energy $E_{\rm max}$ imparted to a source for which the source will remain intact. The conditions for catastrophic disruption have an extensive associated literature, and can be characterized through a variety of metrics. For example, see \cite{benasp1999} and the hundreds of more recent papers which cite it. We adopt the explicit formalism in Section 5 of \cite{movetal2016}. They show that the source planet would break apart if the following condition is met: \begin{equation} \left(\frac{1}{2}\right)\left(\frac{\epsilon M + M_{\rm i}}{M + M_{\rm i}}\right) \left(\frac{M M_{\rm i}}{M + M_{\rm i}} \right) v_{\rm i}^2 > E_{\rm max} \label{colcon} \end{equation} \noindent{where} \begin{equation} \epsilon \equiv \left\{ \begin{array}{ll} \frac{3 R_{\rm i} l^2 - l^3}{4 R_{\rm i}^3}, & \quad l < 2 R_{\rm i} \\ 1, & \quad l \ge 2 R_{\rm i} \end{array} \right. , \end{equation} \begin{equation} l \equiv \left(R + R_{\rm i} \right) \left(1 - \sin{\theta} \right) , \end{equation} \begin{equation} E_{\rm max} = \left(5.5 \pm 2.9\right) \left[\frac{3GM^2}{5R} + \frac{3GM_{\rm i}^2}{5R_{\rm i}} + \frac{GMM_{\rm i}}{R + R_{\rm i}} \right] \label{Emax} \end{equation} \noindent{}with $M_{\rm i}$ being the mass of the impactor, and $\theta$ the impact angle, such that a head-on collision corresponds to $\theta=0$. The numerical range given in equation (\ref{Emax}) applies for $0 \le \theta \lesssim 45^{\circ}$.
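Reduced to a head-on impact between equal-density bodies (the reduction is carried out explicitly below), this condition depends only on $v_{\rm i}/v_{\rm esc}$ and $R_{\rm i}/R$. The Python sketch here inverts the reduced form, equation (\ref{vandr}), for the critical impactor size using the central coefficient (the bisection bounds and names are ours):

```python
def disruption_bracket(x):
    """Square-bracketed factor of equation (vandr), with x = R_i / R."""
    return 8.0 + 3.0 * x**(-3) - 5.0 * x + 8.0 * x**2 + 3.0 * x**5

def critical_impactor_ratio(v_ratio, coeff=1.10):
    """Smallest R_i / R that disrupts the source for a given v_i / v_esc.
    Uses bisection on (0, 0.5], where the bracket decreases monotonically."""
    target = v_ratio**2 / coeff
    lo, hi = 1.0e-3, 0.5
    if target <= disruption_bracket(hi):
        raise ValueError("impact too slow to disrupt for R_i/R <= 0.5")
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if disruption_bracket(mid) > target:
            lo = mid   # impactor still too small to disrupt
        else:
            hi = mid
    return hi
```

For example, at $v_{\rm i}/v_{\rm esc} = 10$ the critical impactor radius is roughly a third of the source planet's radius, in line with the conclusion below that small impactors cannot destroy a planet.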
We can quantify equations (\ref{colcon})-(\ref{Emax}) in a simplistic but comprehensive manner by reducing the number of degrees of freedom in the equation. Assume that the collision is head-on and the impactor and source planet are made of the same substance (or more technically have equal densities). Then we can reduce the condition to a function of two ratios as \[ \left( \frac{v_{\rm i}}{v_{\rm esc}} \right)^2 > \left(1.10 \pm 0.58 \right) \] \begin{equation} \ \ \ \times \left[ 8 + 3 \left(\frac{R_{\rm i}}{R} \right)^{-3} - 5 \left(\frac{R_{\rm i}}{R} \right) + 8 \left(\frac{R_{\rm i}}{R} \right)^2 + 3 \left(\frac{R_{\rm i}}{R} \right)^5 \right] . \label{vandr} \end{equation} \noindent{}Figure \ref{Radmax} illustrates the phase space of equation (\ref{vandr}). The plot demonstrates that for common definitions of asteroid and planet, an asteroid could never destroy a planet. However, the result of a Mars-sized object colliding with an Earth-sized object is less clear. \begin{figure} \vspace{1em} \includegraphics[width=9cm]{CataDisr.eps} \caption{ The speed and radius of an impactor that would catastrophically destroy the source planet. The collision is assumed to be head-on, and the impactor and source planet are assumed to be made of the same material. The two curves represent the bounds of the model prediction from the numerical coefficient of equation (\ref{vandr}). If the impactor radius is less than about 10\% of the source planet's, then the required impact speed for a destructive collision would be unrealistically high. } \label{Radmax} \end{figure} \subsection{Temperature at impact} We now have some idea of the size of potential ejectors. Returning our attention to panspermia, we must also consider temperature and how it will affect the ability of life to survive such an impact. Fig.~4 of \cite{weihea2016} illustrates averaged ejecta temperatures as a function of both crater diameter and surface heat fluxes on Mars. 
They find a range of 215 - 600 K for crater diameters between 0 and 150 km, in roughly linear relationships for surface heat fluxes of 20 - 100 mW/m$^2$. Therefore, the temperatures generated are a strong function of crater diameter. Despite this wide range in temperature, simulations have shown that a significant proportion of the ejecta will not exceed 100 $^\circ$C, due to the existence of a `spall' zone \citep{melosh1984}. This zone comprises much of the surface layer undergoing the impact, and refers to a region in which the shock wave from the impactor is effectively cancelled via superposition with its reflected counterpart. These relatively low-temperature fragments would offer more favourable conditions to any lifeforms residing upon them. The following section will discuss specific forms of life that have shown great promise when tested against the panspermia hypothesis, from initial ejection (Section 5.1) through interplanetary travel (Section 5.2) to eventual atmospheric entry upon reaching the target planet (Section 5.3). \section{Survival of micro-organisms} Over the past few decades, it has become possible to simulate the three stages of panspermia: (1) initial ejection from the impacted planet, (2) the subsequent journey through interplanetary space, and finally (3) impact with the target planet. Each step provides a new set of challenges to the survival of life. This section aims to supply a brief account of how certain micro-organisms have fared when placed in environments that are reminiscent of the panspermia process, and places these constraints into an extrasolar planetary context. A more comprehensive review of the near-Earth and Solar system contexts can be found in \cite{horetal2010}. \subsection{Planetary ejection} A precondition for ejecta to be produced is the existence of impactors. Not every exoplanet host star hosts compact asteroid belts \citep{marliv2013}.
However, evidence from white dwarf planetary systems indicates that between one quarter and one half of Milky Way white dwarfs host asteroid belts or Kuiper belt analogs \citep{koeetal2014} that are dynamically excited by a number of mechanisms involving planets \citep{veras2016}; this is the same fraction of main sequence stars thought to host planets throughout the Galaxy \citep{casetal2012}\footnote{No exoplanets have so far been discovered in other galaxies, and therefore the prevalence of impactors there is unconstrained, although habitability prospects may still be assessed \citep{staetal2017}.}. Further, the transfer of either biomolecules (pseudo-panspermia; \citealt*{linloe2017a}) or dead organisms (necropanspermia; \citealt*{wesson2010}) imposes less stringent requirements than the transfer of living organisms. In compact extrasolar systems, we might expect billions of rocks to be transferred between their planets, as it has been estimated that in the much more expansive solar system, hundreds of millions of rocks could have already been ejected from the spall zones of Martian impacts and made their way to Earth \citep{miletal2000}. Such a healthy estimate has provided motivation for a number of investigators to conduct simulations which attempt to investigate the survivability of an impact large enough to eject material from Mars. One study, by \cite{stoetal2007}, applied pressures of 5 - 50 GPa to micron-thin layers of different micro-organisms. This pressure range, applied via high explosives, is thought to be typical of Martian impact ejection \citep{artiva2004}. Spores of \textit{Bacillus subtilis} exhibited survival fractions ($N/N_0$, where $N$ is the number of surviving cells and $N_0$ is the original number of viable cells) of 10$^{-4}$ under a pressure of 42 GPa. Bacterial spores are resilient casings that contain identical genetic information to their corresponding micro-organism.
The cores of bacterial spores have been found to exhibit notably low enzyme activity, which is thought to be due to their low water content. This property is believed to contribute to the resilience of the spores, alongside the fact that the bacterial DNA is mixed with acid-soluble proteins, which aid in enzymatic reactivation (Setlow 1995). Besides the application of high explosives, the planetary ejection stage has also been simulated by firing projectiles at layers of spores, with survival rates similar to those outlined above \citep{buretal2004,fajetal2009}. \subsection{Journey through interplanetary space} Following successful ejection from the impacted planet, the life must then endure the trip to the target planet. Although not as violent as impact-driven ejection, this stage of interplanetary panspermia has been shown to be equally deadly for a number of micro-organisms, owing to the deleterious conditions present in the space environment. The space between planets is vast and empty; vacuum pressures can drop as low as 10$^{-14}$ Pa, inducing severe desiccation. Exposure to both stellar and Galactic cosmic radiation can also be highly damaging, with stellar UV posing the biggest threat. Temperature extremes during interplanetary transit have the potential to rival those of the ejection phase, depending on the orientation of the fragment's orbit around its host star. Together, these form a lethal concoction for many micro-organisms. Some, however, have exhibited impressive resilience against the harsh conditions of space. The seven planets of TRAPPIST-1 orbit an ultra-cool M dwarf star. Due to the star's relatively low temperature, its habitable zone lies very close in (at hundredths of an au). As such, it is likely to play host to a radiation environment that is much more damaging than the one surrounding the Earth. Investigations into the X-ray/EUV irradiation of the planets were undertaken by \cite{wheetal2017}.
They deduced that TRAPPIST-1 has an X-ray luminosity similar to that of the quiet Sun; because the planets orbit so much closer to their star, however, the X-ray/EUV flux they receive is far higher than that at Earth, and it has been hypothesised that such a high flux at these wavelengths could have stripped the planets of their atmospheres \citep{donetal2017,roekan2017}, raising severe doubts regarding their habitability. The planetary atmospheres are also expected to be altered frequently by persistent flaring events \citep{videtal2017}, which can penetrate magnetospheres \citep{garetal2017} and also affect the surface-based biospheres \citep{caretal2018} in conjunction with geochemistry \citep{baretal2017}. It would appear, therefore, that the TRAPPIST-1 planets are unlikely to support atmospheres that would enable the harbouring of life. For more Solar system-like exoplanetary systems, we can consider the numerous exposure missions that have taken place in low Earth orbit (LEO), namely at altitudes less than 2000 km. It is important to note that the results of LEO experiments provide a mere estimate of the survivability of panspermia; such orbits are relatively close in proximity to the Earth, and therefore fail to accurately mirror the conditions expected during an interplanetary transit. For instance, the minimum vacuum pressure in LEO is approximately 10$^{-6}$ -- 10$^{-7}$ Pa, several orders of magnitude higher than would be experienced for the majority of a planet-planet trip. Furthermore, the magnetic field of the Earth would shield the lifeform from much of the harmful cosmic radiation that would be plentiful in other regions of the Solar System. Nevertheless, LEO missions have contributed greatly to our understanding of the survival limits of copious micro-organisms, placing constraints on the plausibility of panspermia as a concept. \subsubsection{Long Duration Exposure Facility} Still holding the record for the longest exposure to LEO, NASA's Long Duration Exposure Facility (LDEF) subjected spores of \textit{B.
subtilis} to a combination of the space vacuum, solar UV and multiple components of Galactic cosmic radiation \citep{horetal1994}. In accordance with several other astrobiological experiments, solar UV was found to cause the most damage due to its tendency to target and break DNA strands within the spores. Prospects for survival were greatly improved with adequate shielding from the UV in place; around 70 \% of \textit{B. subtilis} spores were able to survive 6 years of exposure to the LEO vacuum when mixed with the sugar glucose. \subsubsection{EURECA} Similar conclusions were drawn from the results of the EURECA mission, which reported a complete loss of viability of \textit{Deinococcus radiodurans} cells following 9 months of exposure to LEO \citep{dosetal1995}. Up to 12 DNA double strand breaks were observed per chromosome in samples exposed to the solar UV, although shielded cells also showed complete inactivation. These findings were surprising, as \textit{D. radiodurans} is known to be incredibly resistant to desiccation and radiation; the bacterium can withstand radiation doses of 5000 Gray (around 1000 times a typically lethal dose for humans) with no loss of viability \citep{mosetal1971}. A more recent study has found that \textit{D. radiodurans} cells can survive many more desiccation-induced DNA double strand breaks than those observed in the EURECA mission, due to the fact that the genome is able to reassemble before each successive cycle of cell division \citep{coxbat2005}. As such, the survival rate observed following exposure to LEO conditions could have been higher than that inferred from the experimental results. Regardless, it is clear that the combination of stellar UV radiation and space vacuum forms a deadly cocktail, survivable only by the most resilient of lifeforms known to inhabit our Earth. Indeed, vacuum-induced dehydration has been found to alter DNA photochemistry in such a way as to enhance the UV sensitivity of \textit{B.
subtilis} spores ten-fold in comparison to irradiation at atmospheric pressure \citep{horneck1998,nicholson2000}. \subsubsection{Biopan} Owing to these early findings, a general consensus has emerged that adequate shielding from the harmful environment of interplanetary space must be in place for micro-organisms, such as bacterial spores, lichens and tardigrades, to stand a chance of surviving panspermia. \paragraph{Shielding} \cite{miletal2000} provide a thorough investigation of shielding for the case of Earth and Mars, and \cite{cockno1999} provide a thorough summary of shielding mechanisms from UV radiation. The effects of shielding were explored as part of the series of experiments that took place using the Biopan facilities aboard various Foton satellites \citep{horetal2001}. A survival fraction of 10$^{-6}$ was obtained when \textit{B. subtilis} spores were exposed to the full LEO environment, whilst much higher fractions of 0.5 - 0.97 were determined for shielded samples. Clay shielding was found to be ineffective when placed in the form of a `shadowing' layer; much more protection was received if the spores were mixed in with the clay, or ground meteorite powder. Importantly, the samples consisted of multilayers of spores; the outer layers would have encountered the full extent of LEO conditions, inactivating quickly and potentially forming a protective `crust', offering added protection to the innermost layers of spores. It is also thought that endolithic micro-organisms, residing in microcracks present within rocks, likely exist in the form of biofilms embedded within a complex matrix of sugar molecules \citep{cosetal1987}. This configuration would provide additional protection against the space vacuum. As such, lifeforms mixed within a layer of rock or clay are likely to receive much greater shielding from both the stellar UV and the space vacuum. 
\paragraph{Lichens} During the Biopan 5 mission, thalli of the lichens \textit{Xanthoria elegans} and \textit{Rhizocarpon geographicum} were exposed to the space vacuum and selected wavebands of the solar UV for 14.6 days \citep{sanetal2007}. A lichen comprises a stable symbiotic association between a fungus and an alga and/or a cyanobacterium. Lichens can be endolithic, growing between the grains inside rock, and are commonly found in mountainous regions. They have been found to survive complete water loss throughout periods of severe desiccation \citep{kraetal2008} and withstand higher than average levels of UV radiation. Following the exposure to LEO, 83 \% of \textit{X. elegans} cells were found to have intact membranes, whilst a similarly high survival rate of 71 \% was determined for \textit{R. geographicum}. Furthermore, full photosynthetic recovery was observed, even for samples exposed to over 99 \% of the solar light. The lichens contain certain UV-screening pigments, such as parietin and phenolic acids \citep{soletal2003}, heightening protection during exposure. Similar UV-screening properties were exhibited by cells of the halophilic cyanobacterium \textit{Synechococcus} following two weeks of exposure to LEO as part of the Biopan 1 series of experiments \citep{manetal1998}. Interestingly, \textit{X. elegans} has also been tested in simulations of the planetary ejection stage of panspermia. The lichen fared similarly to \textit{B. subtilis} spores, with survival rates dropping by only four orders of magnitude upon the application of 50 GPa pressure \citep{horetal2008}. \paragraph{Tardigrades} Biopan 6, on the other hand, provided the first testing ground for tardigrades in space \citep{jonetal2008}. Tardigrades have been identified as one of the most resilient animals on Earth, so are a natural choice for testing in LEO.
They have been found to survive extreme temperatures and pressures for significant periods of time \citep{henetal2009,horetal2009}, and show incredible resistance to radiation, surviving doses of up to 5000 Gray \citep{hasetal2016}. A recent study by \cite{sloetal2017} deduced that tardigrades are likely to survive any mass extinction event with an astrophysical cause, such as a nearby supernova, gamma-ray burst or large asteroid impact. In a similar way to bacterial spores, tardigrades can undergo a process known as cryptobiosis, whereby metabolic processes shut down in a reversible fashion during times of extreme stress. One particular form of cryptobiosis, known as anhydrobiosis, is of particular relevance to our discussion of survival in space. In this process, a tardigrade will contract and lose the vast majority of its water content, enabling cell stabilisers like trehalose to be formed and metabolism to, in the most extreme cases, be temporarily halted altogether \citep{weletal2011}. Samples of the tardigrade species \textit{Milnesium tardigradum} and \textit{Richtersius coronifer} survived exposure to the LEO vacuum very well. Combined exposure to both the space vacuum and solar/Galactic radiation resulted in reduced, yet still finite, survival for both species tested. Tardigrades therefore have joined bacterial spores and lichens in the list of lifeforms that have survived exposure to the full LEO environment. \subsubsection{EXPOSE} The most recent results obtained from exposure missions in LEO are those of the European Space Agency's EXPOSE facilities, mounted aboard various modules of the International Space Station. Conducted upon EXPOSE-E, the LIFE experiment subjected a variety of eukaryotic organisms to long-term exposure (1.5 years) for the first time \citep{onoetal2012}. Most notably, \textit{X. elegans} once again achieved full photosynthetic recovery, provided the samples were shielded from UV irradiation. 
The AMINO experiment, which took place aboard the EXPOSE-R facility, exposed organic molecules to LEO both in their natural state and embedded in meteorite powder \citep{beretal2015}. Chosen for the key roles they play in the formation of macromolecules considered essential for life, the amino acids glycine, alanine and aspartic acid showed minimal deterioration following exposure, with 72 \% of glycine remaining unaffected in unshielded form. Samples of the prokaryote \textit{Halorubrum chaoviator}, a halophilic archaeon, were exposed to LEO as part of the OSMO experiment \citep{mancinelli2015}. If shielded from the solar UV, the archaea exhibited 90 \% survival rates. \subsection{Atmospheric entry} From the many exposure experiments that have taken place in LEO, it is clear that the deleterious conditions in space can have a devastating effect on many micro-organisms. However, it is also clear that a number of lifeforms possess the necessary resilience to survive in such hazardous environments, especially when adequate shielding is in place. We now turn our attention to the final stage of material transfer: atmospheric entry upon reaching the target planet. Because entry speeds range from 12 to 20 km\,s$^{-1}$ for typical asteroids, the overall process can occur in the space of a few tens of seconds \citep{nicholson2009}. Frictional heating over this rapid timescale leads to the formation of a fusion crust on the surface of the meteorite. This crust ensures that the heating fails to penetrate further than the first few millimetres of material, allowing the interior to maintain a relatively constant temperature throughout. Provided the target planet possesses an atmosphere, the eventual impact with the surface will occur at terminal velocity (50 m\,s$^{-1}$ for Earth), a far tamer value than what is involved in the planetary ejection phase.
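The quoted terminal velocity can be reproduced to order of magnitude from the usual drag balance $v_t = \sqrt{2mg/(\rho_{\rm air} C_d A)}$ for a falling sphere. The fragment size, rock density and drag coefficient below are illustrative assumptions of ours, not values from the studies cited here:

```python
import math

def terminal_velocity(radius_m, rock_density=3000.0, air_density=1.2,
                      drag_coeff=1.0, g=9.81):
    """Terminal fall speed (m/s) of a spherical rock fragment in air."""
    mass = (4.0 / 3.0) * math.pi * radius_m**3 * rock_density
    area = math.pi * radius_m**2
    return math.sqrt(2.0 * mass * g / (air_density * drag_coeff * area))

# A ~5 cm stony fragment falls at a few tens of m/s, consistent with
# the ~50 m/s figure quoted for Earth; v_t grows as sqrt(radius).
print(f"v_t = {terminal_velocity(0.05):.0f} m/s")
```

Larger fragments land proportionally faster, but even decimetre-scale rocks remain far below the km\,s$^{-1}$ speeds of the ejection phase.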
Thus far, the best attempts to assess the ability of micro-organisms to survive meteoric entry have been those of the STONE missions, conducted upon the recovery module heat shields of the same Foton satellites used to host the Biopan 5 and 6 facilities \citep{paretal2008,fouetal2010}. The entry speed was measured to be 7.7 km\,s$^{-1}$, falling short of the expected speeds for asteroids provided above. Nevertheless, none of the micro-organisms tested showed any signs of viability following retrieval, most notably \textit{B. subtilis}. For one of the samples, the fusion crust was found to be around 5 cm deep, possibly due to cracks in the surface of the shield. It would seem, therefore, that further experimentation is required to draw any firm conclusion regarding the survivability of the entry stage of panspermia. \section{Conclusions} The strong prospects for future discoveries of habitable multi-planet systems prompted us to analyze several aspects of panspermia and derive new results. Here, we have applied an impulse formalism from \cite{jacetal2014} to generate orbital constraints on life-bearing ejecta travelling between planets in the coplanar circular case (equations \ref{FirstPiece} and \ref{SecondPiece}). Resulting analytic probability distributions depend only on the semimajor axes of the source and target planets (equations \ref{out1}-\ref{In4}) and can be readily applied to compact multi-planet systems. We have also repackaged and consolidated physical relations that are associated with ejecta to fit within one framework (minimum radius and mass to escape atmosphere: equations \ref{ejecrit}-\ref{minmass}; largest impact fragment: equations \ref{avgR} and \ref{Rratio}; minimum impactor size to liberate material: Section 4.3; speed and impactor radius to destroy source: equation \ref{vandr}). We finally included biological constraints from impact, interplanetary travel and atmospheric entry (Section 5).
We hope that our results will provide useful tools for analyzing future discoveries of compact multi-planet habitable systems. \section{Acknowledgments} We thank both referees for particularly helpful and specific comments on the manuscript, resulting in an improved document. DV gratefully acknowledges the support of the STFC via an Ernest Rutherford Fellowship (grant ST/P003850/1), and has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement n. 320964 (WDTracer). DJA is supported by STFC through consolidated grant ST/P000495/1, and JAB is supported through STFC grant ST/R505195/1. APJ acknowledges support from NASA grant NNX16AI31G.
\section{Introduction} The discovery of the Higgs boson~\cite{higgs,discovery} at the CERN Large Hadron Collider (LHC) is of monumental significance. The completion of the Standard Model (SM) provides us with a consistent theory valid up to high scales. As a perturbative gauge theory, it allows for precision predictions for essentially all LHC observables. In parallel, experimental advances have turned ATLAS and CMS into the first hadron collider precision experiments in history. In combination, these developments open new avenues to tackle fundamental physics questions at the LHC and future high-energy facilities. On the theory side, we are still lacking an understanding of whether and how the Higgs mass, the only dimensionful parameter in the theory, is stabilized against a large new physics scale. The Higgs potential responsible for the electroweak symmetry breaking (EWSB) in the SM is determined by the triple and quartic Higgs self-coupling $\lambda_\text{SM}\approx 1/8$. It is a true self-interaction in the sense that it is not associated with any conserved charge after EWSB. Given our ignorance of new physics beyond the SM, the shape of the Higgs potential is deeply linked to the fundamental question of electroweak symmetry breaking in the early universe, allowing for a slow second-order phase transition in the SM or a strong first-order phase transition with a modified Higgs potential. It has been argued that a wide range of modified Higgs potentials, which result in a strong first-order EW phase transition, lead to order-one modifications of $\lambda_\text{SM}$~\cite{ew_phase}. All of this points to the Higgs self-coupling $\lambda$ as a benchmark measurement for the coming LHC runs, as well as any kind of planned colliders \cite{Arkani-Hamed:2015vfh}. \medskip Higgs pair production $pp\rightarrow hh$ offers a direct path to pin down $\lambda$ at a hadron collider~\cite{hh-orig,hh-early}.
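The quoted value $\lambda_\text{SM}\approx 1/8$ follows directly from the measured Higgs mass and the electroweak vacuum expectation value in the common convention $\lambda = m_h^2/(2v^2)$ (the convention is our assumption, as it is not spelled out in the text). A quick check:

```python
# SM Higgs self-coupling in the convention lambda_SM = m_h^2 / (2 v^2)
m_h = 125.0  # Higgs boson mass [GeV]
v = 246.0    # electroweak vacuum expectation value [GeV]

lambda_sm = m_h**2 / (2.0 * v**2)
print(f"lambda_SM = {lambda_sm:.3f}")  # 0.129, i.e. close to 1/8
```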
Previous studies show that promising final states from the $hh$ decays are $b\bar{b}\gamma\gamma$~\cite{hh-gamma,vernon}, $b\bar{b}\tau\tau$~\cite{hh-tautau-4b,hh-tautau}, $b\bar{b}WW$~\cite{hh-bbww}, $b\bar{b}b\bar{b}$~\cite{hh-4b}, and $4W$~\cite{hh-ww}. Theoretical studies as well as current analyses point to the $b\bar{b}\gamma\gamma$ decay as the most promising signature at the LHC~\cite{current-gamma}. Combinations with indirect measurements of the self-coupling from quantum effects confirm that Higgs pair production provides the most robust self-coupling measurement~\cite{indirect}. For the high-luminosity LHC (HL-LHC), ATLAS and CMS projections indicate a very modest sensitivity to the Higgs self-coupling~\cite{hl-lhc}. In anticipation of probing new physics beyond the SM, it is customary to parametrize the modification of the self-coupling as \begin{align} \kappa_\lambda = \frac{\lambda}{\lambda_\text{SM}} \; . \end{align} In the optimistic scenario that we can neglect systematic uncertainties, those studies indicate that the LHC will probe the coupling at the 95\% confidence level \begin{align} -0.8 < \kappa_\lambda < 7.7\;. \end{align} An issue with those studies is that they are based on the total rate for Higgs pair production, but neglect a wealth of available information. Including a full kinematic analysis could lead to an improved measurement \cite{madmax-hh} \begin{align} -0.2< \kappa_\lambda <2.6 \;, \end{align} still falling short of the precision of other Higgs property measurements at the LHC, and far from satisfactory in probing the Higgs potential.\medskip In this study, we systematically compare the prospects for measuring the Higgs self-coupling at current and higher energy $pp$ colliders.
We focus on the two leading proposals for future hadron colliders: \begin{enumerate} \item the 27~TeV high-energy LHC (HE-LHC) with an integrated luminosity of $15~{\ensuremath\rm ab^{-1}}$, \item a 100~TeV hadron collider with $30~{\ensuremath\rm ab^{-1}}$, under consideration at CERN (FCC-hh)~\cite{europe_100tev} and in China (SppC)~\cite{china_100tev}. \end{enumerate} We include state-of-the-art signal and background estimates for the $b\bar{b} \gamma \gamma$ channel, as well as realistic acceptance cuts and efficiencies. While there exists a series of 100~TeV studies of Higgs pair production at different levels of sophistication~\cite{hh-nimatron}, we include a 100~TeV analysis to be able to compare with the HE-LHC reach on equal footing.\medskip We start with a study of relevant phase space regions using a Neyman-Pearson maximum likelihood approach~\cite{madmax-hh,madmax}. This allows us to estimate the impact of using simple kinematic distributions on the measurement of the Higgs self-coupling at the different colliders. Furthermore, we can evaluate the maximum significance of extracting the Higgs pair signal and the significance of detecting a modified self-coupling under idealized conditions. In the main part of our paper, we perform a state-of-the-art analysis of Higgs pair production including additional jet radiation and a full set of realistic detector efficiencies. Unlike earlier analyses, we include $b$-jets from Higgs decays even when they become sub-leading in transverse momentum to the additional jet radiation. Our analysis focuses on the di-Higgs invariant mass distribution, both for the extraction of the Higgs pair signal and for the measurement of the Higgs self-coupling. Using a log-likelihood approach on this single kinematic distribution, we show that the Higgs self-coupling can be properly measured not only at a future 100~TeV collider, but also at the 27~TeV HE-LHC.
\section{Higgs Pair Signature} \label{sec:frame} The leading $hh$ production mechanism in the Standard Model at hadron colliders is depicted by the Feynman diagrams in Fig.~\ref{fig:feyn1}. Due to the difference of the top quark propagators in the loops, the two diagrams interfere destructively. In Fig.~\ref{fig:xs_hh_Ecm} we show the total rate for $hh$ production as a function of the center of mass energy $\sqrt s$ in TeV, including the next-to-leading order (NLO) corrections~\cite{nlo}. The width of the curve illustrates the theoretical uncertainty of around 10\%~\cite{nnlo}. At the LHC, the signal rate is the limiting factor for Higgs pair studies. At 14~TeV, the cross section including higher-order corrections is around 0.033~pb~\cite{nnlo}, corresponding to at most 100k events with an integrated luminosity of $3~{\ensuremath\rm ab^{-1}}$ at the HL-LHC. Assuming one Higgs decay to tagged bottom quarks, the available rate is reduced to 60k events over the lifetime of the HL-LHC. The crucial question is what kind of second Higgs decay allows us to effectively trigger the events and to reduce the QCD backgrounds to a manageable level. The leading candidate is the signature~\cite{hh-gamma} \begin{align} pp \to hh \to b\bar{b} \; \gamma \gamma \; , \label{eq:bbgg} \end{align} because of the excellent di-photon mass resolution and the guaranteed trigger. The expected number of signal events in the Standard Model at the HL-LHC is 260. Alternatively, the $b\bar{b} \; \tau \tau$ signature leads to 7.2k events times the square of the tau-tagging efficiency, and is hampered by a significantly worse signal-to-background ratio.\medskip \begin{figure}[t!] \includegraphics[width=.22\textwidth]{diagram1}\hspace{0.2cm} \includegraphics[width=.22\textwidth]{diagram2}\\ \caption{Representative Feynman diagrams contributing to the leading Higgs pair production process via gluon fusion.} \label{fig:feyn1} \end{figure} \begin{figure}[b!]
\includegraphics[width=.44\textwidth]{sigma_hh} \caption{Total cross section for $pp\to hh$ production at NLO as a function of the $pp$ collider energy. The width of the curve reflects the 10\% theoretical uncertainty. } \label{fig:xs_hh_Ecm} \end{figure} Because of the rapidly growing gluon luminosity at higher energies, the $hh$ production cross section increases by about a factor of 4~(40) at 27~(100)~TeV. This means that at the HE-LHC with the anticipated integrated luminosity of $15~{\ensuremath\rm ab^{-1}}$ the number of events in the $b\bar{b} \; \gamma \gamma$ channel increases by a factor $4 \times 5 = 20$ to around 5k events. A 100~TeV hadron collider with a projected integrated luminosity of $30~{\ensuremath\rm ab^{-1}}$ features another increase by a factor $10 \times 2=20$, to around 100k expected Higgs pair events in the Standard Model. This estimate shows how the combination of increased energy and increased luminosity slowly turns Higgs pair production into a valid channel for precision measurements. The numbers fundamentally affect our proposed analysis strategy, because the small number of signal and background events suggests a kinematic analysis including as few kinematic distributions as possible. It is possible to improve this situation, for example, using the matrix element technique, as we will discuss below.\medskip We generate the signal with \textsc{MadGraph5}~\cite{mg5}, accounting for a next-to-leading order (NLO) QCD factor $K_\text{NLO}\sim 1.6$~\cite{nlo}. In the final state we demand two $b$-tagged jets and two isolated photons with the minimal acceptance and trigger cuts \begin{alignat}{5} && p_{T,j}>30~{\ensuremath\rm GeV} , \quad |\eta_j |<2.5 \; , \notag \\ && p_{T,\gamma}>30~{\ensuremath\rm GeV}, \quad |\eta_\gamma| <2.5 \; , \notag \\ && \Delta R_{\gamma \gamma, \gamma j, jj} >0.4 \; . 
\label{eq:base_selections} \end{alignat} The background to our $b\bar{b} \; \gamma \gamma$ signal consists of other Higgs production modes ($t\bar{t}h, Zh$) with $h \to \gamma \gamma$, continuum $b\bar{b}\gamma\gamma$ production, and of multi-jet events with light-flavor jets faking either photons or $b$-jets ($jj\gamma\gamma, b\bar{b}\gamma j$)~\cite{hh-gamma}. The different backgrounds are discussed in detail in Sec.~\ref{sec:ana}. The proper simulation of efficiencies and fake rates is a key ingredient for a realistic background estimate in this analysis. For the HE-LHC and the future 100~TeV collider we follow the ATLAS projections~\cite{performance}. The efficiency for a tight photon identification can be well parametrized by \begin{align} \epsilon_{\gamma\to\gamma} = 0.863 - 1.07 \cdot e^{-p_{T,\gamma}/34.8~{\ensuremath\rm GeV}}\;, \end{align} and a jet-to-photon mis-identification rate by \begin{align} \epsilon_{j\to\gamma} = \begin{cases} 5.3\cdot 10^{-4} \exp \left( -6.5 \left( \dfrac{p_{T,j}}{60.4~{\ensuremath\rm GeV}} - 1 \right)^2 \right)\;, \notag \\[4mm] 0.88 \cdot 10^{-4} \left[ \exp \left( -\dfrac{p_{T,j}}{943~{\ensuremath\rm GeV}} \right) +\dfrac{248~{\ensuremath\rm GeV}}{p_{T,j}}\right] \;, \end{cases} \end{align} where the upper form applies to softer jets with ${p_{T,j} <65}$~GeV. This leads to a photon efficiency of about 40\% at $p_{T,\gamma}=30$~GeV, saturating around 85\% for $p_{T,\gamma}>150$~GeV. Note that the Higgs decay products tend to be soft, $p_{T,\gamma}\sim m_h/2$. For $b$-tagging, we adopt a flat efficiency of \begin{align} \epsilon_b =0.7 \; , \end{align} associated with mis-tag rates of 15\% for charm quarks and 0.3\% for light flavors. These flat rates present a conservative estimate from the two dimensional distribution in $(p_{T,j},\eta_j)$ shown in the HL-LHC projections~\cite{madmax-hh}.
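The two parametrizations above translate directly into code. This sketch simply transcribes them and checks the quoted benchmark efficiencies (continuity of the fake rate across the $p_{T,j}=65$~GeV boundary is not enforced here):

```python
import math

def photon_efficiency(pt):
    """Tight photon identification efficiency; pt in GeV."""
    return 0.863 - 1.07 * math.exp(-pt / 34.8)

def jet_fake_rate(pt):
    """Jet -> photon mis-identification rate; pt in GeV."""
    if pt < 65.0:
        return 5.3e-4 * math.exp(-6.5 * (pt / 60.4 - 1.0) ** 2)
    return 0.88e-4 * (math.exp(-pt / 943.0) + 248.0 / pt)

# The quoted benchmarks: ~40% at pT = 30 GeV, saturating near 85%
print(f"{photon_efficiency(30.0):.2f}")   # 0.41
print(f"{photon_efficiency(150.0):.2f}")  # 0.85
```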
Encouragingly, the small light flavor fake rate projections result in a strong suppression for the initially dominant $jj\gamma\gamma$ background. Obviously, the final outcome of the analyses would depend on the detector performance for the efficiencies of photon identification and $b$-tagging, as well as the background jet rejection. To have a comprehensive exploration and comparison, we will also examine the other available detector parameters, one from CMS \cite{Chatrchyan:2012jua} and the other from the CERN Yellow Report \cite{Mangano:2017tke} for the future collider (FCC), as shown in the Appendix. \begin{figure*}[t] \includegraphics[width=0.32\textwidth]{Plot_mhh_27TeV} \includegraphics[width=0.32\textwidth]{Plot_pth_27TeV} \includegraphics[width=0.32\textwidth]{Plot_draa_27TeV} \\ \includegraphics[width=0.32\textwidth]{Plot_mhh_100TeV} \includegraphics[width=0.32\textwidth]{Plot_pth_100TeV} \includegraphics[width=0.32\textwidth]{Plot_draa_100TeV} \caption{Kinematic distributions (dashed lines with left vertical axes) and significance distribution (solid lines with right vertical axes) assuming a Higgs self-coupling with $\kappa_\lambda=0,1,2$. The significance describes the discrimination of an anomalous self-coupling $\kappa_\lambda \neq 1$ from the SM hypothesis $\kappa_\lambda = 1$. The results are for the HE-LHC (upper row) and for the 100~TeV collider (lower row).} \label{fig:madmax_diff} \end{figure*} \section{The Mother of Distributions} \label{sec:features} As depicted in Fig.~\ref{fig:feyn1}, Higgs pair production receives contributions from a triangular loop diagram combined with the Higgs self-coupling and from a box or continuum diagram (plus a crossing diagram), where over most of phase space the box contribution completely dominates the total rate. 
While we can define a number of kinematic observables describing the continuum backgrounds, the measurement of the Higgs self-coupling relies on a simple $2 \to 2$ process with two independent kinematic variables. Three distinct phase space regions provide valuable information on a modified Higgs self-coupling, all rooted in a large destructive interference between the triangle and box contributions. First, there is the threshold~\cite{hh-early,hh-ww} in the partonic center of mass energy \begin{align} m_{hh}^\text{(th)} \approx 2 m_h \; . \end{align} In the absence of hard additional jets, the di-Higgs invariant mass is identical to the partonic collider energy $s \equiv m_{hh}^2$. Note that this threshold is below $2m_t$. Based on the effective Higgs--gluon Lagrangian~\cite{low_energy} we can write the corresponding amplitude for Higgs pair production as \begin{align} \frac{\alpha_s}{12 \pi v} \left( \frac{\kappa_\lambda \lambda_\text{SM}}{s-m_h^2} - \frac{1}{v} \right) \to \frac{\alpha_s}{12 \pi v^2} \left( \kappa_\lambda -1 \right) \stackrel{\text{SM}}{=} 0 \; . \label{eq:higgs_pair} \end{align} While the heavy-top approximation is known to give a poor description of the signal kinematics as a whole, it does describe the threshold dependence correctly~\cite{hh-ww}. This indicates that we can search for a deviation of the Higgs self-coupling by looking for an enhancement of the rate at threshold. Second, an enhanced sensitivity to the self-coupling appears as a top-mass effect. For large positive values of $\lambda$ absorptive imaginary parts lead to a significant dip in the combined rate at the threshold $p_{T,h} \approx 100$~GeV~\cite{hh-tautau} or equivalently~\cite{madmax-hh} \begin{align} m_{hh}^\text{(abs)} \approx 2 m_t \; . \end{align} The sharpest interference dip takes place near $\lambda\approx 2$. For negative values of $\lambda$ the interference becomes constructive.
Finally, the triangular and box amplitudes generally have different scaling in the limit~\cite{hh-early,hh-tautau} \begin{align} m_{hh}^\text{(high)} \gg m_h, m_t \; . \end{align} While the triangle amplitude features an explicit suppression of either $m_h^2/m_{hh}^2$ or $m_t^2/m_{hh}^2$ at high invariant mass, the box diagrams drop more slowly towards the high-energy regime. The impact of all three kinematic features can be quantified statistically and is illustrated in detail in Fig.~5 of Ref.~\cite{madmax-hh}. They clearly indicate that essentially the full information on the Higgs self-coupling can be extracted through a shape analysis of the $m_{hh}$ distribution~\cite{martin}.\medskip The practical relevance of the different kinematic regimes has to be estimated including the variation of the signal cross section, the number of expected events at a given collider, and the size of the backgrounds. There exist two similar statistical approaches to answer this problem, the \textsc{MadMax} approach based on the Neyman-Pearson lemma~\cite{madmax} and the \textsc{MadFisher} approach based on information geometry~\cite{madfisher}. While the latter is especially well-suited to estimate the reach for example of precision measurements at the LHC, we employ the former for a simple hypothesis test. The integrated log-likelihood ratio over the full phase space or specific kinematic regimes allows us to estimate the maximum significance with which any multi-variate analysis will be able to extract a signal from backgrounds or distinguish two assumed values of the Higgs self-coupling~\cite{madmax-hh}. Throughout the maximum likelihood analysis we limit ourselves to irreducible backgrounds and assume that statistical uncertainties dominate over the relevant phase space regions. Events with soft final states typically contribute little to the search for new particles with weak-scale masses.
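The exact threshold cancellation of Eq.~\eqref{eq:higgs_pair} can be checked numerically. In the heavy-top limit the triangle contribution is commonly normalized as $3 m_h^2 \kappa_\lambda/(s-m_h^2)$ against a box contribution of $-1$; this normalization is a standard convention assumed here, not one taken from the text:

```python
def reduced_amplitude(kappa_lambda, s, m_h=125.0):
    """Heavy-top triangle-plus-box combination, with the box
    normalized to -1 and the overall prefactor dropped."""
    triangle = kappa_lambda * 3.0 * m_h**2 / (s - m_h**2)
    return triangle - 1.0

s_threshold = (2.0 * 125.0) ** 2  # at threshold, m_hh = 2 m_h
print(reduced_amplitude(1.0, s_threshold))  # 0.0: SM amplitude vanishes
print(reduced_amplitude(2.0, s_threshold))  # 1.0: kappa_lambda - 1 survives
```

At $s = 4m_h^2$ the combination reduces to $\kappa_\lambda - 1$, so any deviation from the SM coupling shows up as an excess (or deficit) of the rate at threshold.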
The exact choice of acceptance cuts in Eq.~\eqref{eq:base_selections} and the modeling of $b$-tagging or photon identification efficiencies will have a negligible effect on our results. \medskip For our numerical analysis, we account for all backgrounds discussed in Sec.~\ref{sec:frame}, except for the $t\bar{t}h$ channel with its significantly different final state. As part of the detailed background analysis in Sec.~\ref{sec:ana}, we will see that this assumption is justified. The setup is essentially identical to Ref.~\cite{madmax-hh}, but now using the cuts and fake rates given in Sec.~\ref{sec:frame}. In particular, we account for the smearing of the Higgs peak as the leading detector effect. The invariant mass distributions are smeared by a Gaussian with width 1.52 GeV for the $\gamma\gamma$ channel~\cite{CMS:2016zjv} and 12.6 GeV for the $bb$ channel~\cite{Vernieri:2014wfa}. The signal rate is adjusted to account for the loss of signal through a poor description of the tails of the distributions~\cite{madmax-hh}. This allows us to restrict ourselves to the two Higgs mass windows $m_{bb} = 80~...~160$~GeV and $m_{\gamma \gamma} = 120~...~130$~GeV. All other detector effects are left to our actual analysis in Sec.~\ref{sec:ana}. \medskip In Fig.~\ref{fig:madmax_diff} we first show the signal and background distributions for three relevant kinematic variables, $m_{hh}$, $p_{T,h}$, and $\Delta R_{\gamma \gamma}$. The transverse momentum distributions of the two Higgs bosons will be identical, so we can measure them either as $p_{T,\gamma \gamma}$ or as $p_{T,bb}$. For both $m_{hh}$ and $p_{T,h}$, the QCD backgrounds reside at small values, with similar signal-to-background ratios at the HE-LHC and the 100~TeV collider. The geometric separation of the two photons from the continuum background has to be large to generate an invariant mass around the Higgs mass.
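The quoted mass windows lose almost no smeared signal, as a quick Gaussian-acceptance estimate shows. Placing both peaks at $m_h = 125$~GeV is our simplifying assumption for this back-of-the-envelope check:

```python
import math

def window_fraction(peak, sigma, lo, hi):
    """Fraction of a Gaussian-smeared mass peak falling inside [lo, hi]."""
    a = (hi - peak) / (sigma * math.sqrt(2.0))
    b = (peak - lo) / (sigma * math.sqrt(2.0))
    return 0.5 * (math.erf(a) + math.erf(b))

# Di-photon peak smeared by 1.52 GeV, window 120--130 GeV
print(f"{window_fraction(125.0, 1.52, 120.0, 130.0):.3f}")  # 0.999
# b-bbar peak smeared by 12.6 GeV, window 80--160 GeV
print(f"{window_fraction(125.0, 12.6, 80.0, 160.0):.3f}")   # 0.997
```

Both windows retain well over 99\% of a Gaussian peak, so the window cuts cost essentially no signal beyond the tail corrections mentioned above.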
\begin{figure}[t] \includegraphics[width=0.40\textwidth]{XSZ_vs_Lambda} \caption{Higgs pair production cross section (red lines with left vertical axis) and maximum significance (black lines with right vertical axis) for discriminating an anomalous self-coupling $\kappa_\lambda \ne 1$ from the SM, as a function of the modified self-coupling. The results are for the HL-LHC, the HE-LHC, and a future 100~TeV collider, respectively. The HL-LHC results are taken from Ref.~\cite{madmax-hh}.} \label{fig:madmax_tot} \end{figure} Also in Fig.~\ref{fig:madmax_diff}, we show how the significance of extracting an anomalous self-coupling $\kappa_\lambda \ne 1$ depends on these key observables. The alternative hypothesis in this case is the combination of the backgrounds and the signal with $\kappa_\lambda = 1$. In addition to the signal features, the significance is limited by the rapidly dropping backgrounds, covering both of the above-mentioned regions with an enhanced dependence on the triangle diagram. In the absence of background, the significance indeed peaks between the production threshold and the top-mass threshold~\cite{madmax-hh}. The drop towards large values of $m_{hh}$ is a combination of the dominance of the box diagram in the signal and the limited number of expected signal events. The significance with which we can extract modified self-couplings either smaller ($\kappa_\lambda = 0$) or larger ($\kappa_\lambda = 2$) than in the Standard Model shows a similar phase space dependence. The only difference is a slightly harder significance distribution for $\kappa_\lambda = 2$, an effect of the dip at $m_{hh}^\text{(abs)}$.\medskip Obviously, we can combine the maximum significance distributions into a global maximum significance accumulated over the full phase space. In Fig.~\ref{fig:madmax_tot} we show the idealized maximum significance we can hope for at the HL-LHC, the HE-LHC, and a future 100~TeV collider.
The asymmetric behavior for the HL-LHC is a remnant of a degeneracy in the total cross section as a function of the self-coupling, also shown in Fig.~\ref{fig:madmax_tot}. A SM-like rate appears when an enhanced triangle diagram overcomes the larger box contribution and flips the sign of the amplitude. Obviously, this degeneracy will be broken by kinematic information, for example the $m_{hh}$ distribution. For the HE-LHC and the 100~TeV collider, the total rate constraint becomes increasingly irrelevant for the measurement of the self-coupling. The expected statistical error bars are narrow and approximately symmetric around $\kappa_\lambda = 1$. For both future colliders, we can indeed expect a proper measurement of the Higgs self-coupling. \section{Detector-level Analysis} \label{sec:ana} \begin{figure}[t] \includegraphics[width=.22\textwidth]{diagram3}\hspace{0.2cm} \includegraphics[width=.22\textwidth]{diagram4} \caption{Representative Feynman diagrams contributing to Higgs pair production via gluon fusion including an ISR jet at hadron colliders.} \label{fig:feyn2} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=.38\textwidth]{ptj2_decomposition} \hspace*{0.02\textwidth} \includegraphics[width=.38\textwidth]{ptj2_decomposition_100TeV} \vspace*{-3mm} \caption{Composition of the second-hardest jet in the signal sample after the acceptance cuts of Eq.~\eqref{eq:base_selections} for the HE-LHC and the 100 TeV future collider, respectively, with arbitrary units.} \label{fig:compo} \end{figure*} Following the analysis path laid out in Sec.~\ref{sec:features}, we now design a detailed analysis strategy to extract the Higgs self-coupling with a focus on the shape of the $m_{hh}$ distribution. Our signal is \begin{align} pp \to hh + X \to b\bar{b} \; \gamma \gamma + X.
\end{align} In anticipation of increasing QCD radiation at higher energies, we inclusively allow extra jets in the events from initial state radiation, along with two tagged $b$-jets and two isolated hard photons, passing the acceptance cuts of Eq.~\eqref{eq:base_selections}. \begin{table*}[t!] \setlength{\tabcolsep}{1pt} \begin{tabular}{ll|rrr|rrrrr|r|cc} \toprule Collider & Process & \multicolumn{3}{c|}{$\kappa_{\lambda}$} & $t\bar{t}h$ & $Zh$ & $b\bar{b}\gamma\gamma$ & $jj\gamma\gamma$ & $b\bar{b}\gamma j$ & BG tot. & $S/\sqrt{S+B}_{1{\ensuremath\rm ab^{-1}}}$ & $S/B$ \\ & & 0 & 1 & 2 & & & & & & & & \\ \cmidrule{1-13} \multirow{10}{*}{HE-LHC} &$\sigma$ [fb] & 0.69 & 0.36 & 0.18 & 6.43 & 0.77 & 1.24 pb & 36.6 pb & 506 pb & & & \\ \cmidrule{2-13} & Baseline & 2.87K & 1.57K & 838 & 21.8K & 1.44K & 1.19M & 36M & 1.13M & 38.3M & 0.07 & $4\cdot 10^{-5}$\\ &$n_{j} \le 3$, $n_b =2$ & 648 & 356 & 190 & 954 & 389 & 200K & 67.4K & 105K & 374K & 0.15& $1 \cdot 10^{-3}$\ \\ &$\Delta m_{bb} \le 25$~GeV & 470 & 260 & 140 & 195 & 66 & 43.7K & 10.6K & 25.8K & 80.4K & 0.24 & 0.003 \\ \cmidrule{2-13} &$\Delta m_{\gamma\gamma} \le 3$~GeV & 459 & 253 & 136 & 197 & 63 & 1.42K & 505 & 758 & 2.94K & 1.2 & 0.09 \\ (15~ab$^{-1}$) &$\Delta m_{\gamma\gamma} \le 2$~GeV & 459 & 253 & 136 & 197 & 63 & 957 & 342 & 504 & 2.06K & 1.4 & 0.12 \\ &$\Delta m_{\gamma\gamma} \le 1$~GeV & 459 & 253 & 136 & 197 & 63 & 485 & 182 & 245 & 1.17K & 1.7 & 0.22 \\ \cmidrule{2-13} &$\Delta m_{\gamma\gamma} \le 3$~GeV, $m_{hh}>400$ & 320 & 206 & 120 & 56 & 21 & 324 & 97 & 178 & 676 & 1.8 & 0.30 \\ &$\Delta m_{\gamma\gamma} \le 2$~GeV, $m_{hh}>400$ & 320 & 206 & 120 & 56 & 21 & 220 & 67 & 122 & 485 &2.0 & 0.42\\ &$\Delta m_{\gamma\gamma} \le 1$~GeV, $m_{hh}>400$ & 320 & 206 & 120 & 56 & 21 & 115 & 41 & 61 & 293 & 2.4& 0.70 \\ \cmidrule{1-13} \multirow{10}{*}{100~TeV} &$\sigma$ [fb] & 6.95 & 3.72 & 1.97 & 84.8 & 3.76 & 6.21 pb & 126 pb & 3.03 nb & & & \\ \cmidrule{2-13} & Baseline & 51.8K & 29.8K & 16.9K & 535K & 
13.1K & 13.6M & 330M & 18.6M & 363M & 0.29 & $8\cdot 10^{-5}$\cr &$n_{j} \le 3$, $n_b =2$ & 9.22K & 5.28K & 3.02K & 18K & 2.84K & 1.79M & 773K & 1.42M & 4.00M & 0.48 & 0.001 \cr &$\Delta m_{bb} \le 25$~GeV & 6.45K & 3.80K & 2.18K & 3.3K & 669 & 361K & 218K & 373K & 956K & 0.71 & 0.004 \\ \cmidrule{2-13} &$\Delta m_{\gamma\gamma} \le 3$~GeV & 6.30K & 3.70K & 2.13K & 3.12K & 653 & 8.34K & 6.06K & 8.99K & 27.2K & 3.9 & 0.14 \\ (30~ab$^{-1}$) &$\Delta m_{\gamma\gamma} \le 2$~GeV & 6.30K & 3.70K & 2.13K & 3.12K & 653 & 5.66K & 4.13K & 5.99K & 19.5K & 4.4 & 0.19 \\ &$\Delta m_{\gamma\gamma} \le 1$~GeV & 6.30K & 3.70K & 2.13K & 3.12K & 653 & 2.82K & 1.91K & 2.99K & 11.4K & 5.5 & 0.32\\ \cmidrule{2-13} &$\Delta m_{\gamma\gamma} \le 3$~GeV, $m_{hh}>400$ & 4.66K & 3.16K & 1.93K & 1.09K & 203 & 1.56K & 1.10K & 1.90K & 5.86K & 6.1 & 0.54 \\ &$\Delta m_{\gamma\gamma} \le 2$~GeV, $m_{hh}>400$ & 4.66K & 3.16K & 1.93K & 1.09K & 203 & 1.04K & 747 & 1.14K & 4.23K & 6.7 & 0.73 \\ &$\Delta m_{\gamma\gamma} \le 1$~GeV, $m_{hh}>400$ & 4.66K & 3.16K & 1.93K & 1.09K & 203 & 523 & 359 & 617 & 2.79K & 7.5 & 1.13 \\ \bottomrule \end{tabular} \caption{Number of signal and background events for the HE-LHC and the 100~TeV collider. We present results for $\kappa_\lambda=0,1,2$ and the Higgs mass windows $|m_{\gamma\gamma}-m_h|<1,2,3$~GeV. In our analysis $c\bar{c} \gamma\gamma$ events are part of the $jj\gamma\gamma$ background. The significance is given for $1~{\ensuremath\rm ab^{-1}}$ of data.} \label{tab:cutflow} \end{table*} For the detector-level analysis we generate the signal and background samples with \textsc{MadGraph5}+\textsc{Pythia8}~\cite{mg5,pythia8}, including one extra jet using the \textsc{Mlm} scheme~\cite{mlm}. A representative set of Feynman diagrams for the signal is shown in Figs.~\ref{fig:feyn1} and~\ref{fig:feyn2}.
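The table entries can be cross-checked with a two-line computation: the yields are quoted for the full luminosity, while the significance column is rescaled to $1~{\ensuremath\rm ab^{-1}}$. A sketch using the 100~TeV row with $|m_{\gamma\gamma}-m_h|<1$~GeV and $m_{hh}>400$~GeV (function name is ours):

```python
import math

def cutflow_check(S, B, lumi_full, lumi_ref=1.0):
    """Rescale table yields quoted at lumi_full (in ab^-1) to lumi_ref and
    return (S/sqrt(S+B) at lumi_ref, S/B)."""
    s = S * lumi_ref / lumi_full
    b = B * lumi_ref / lumi_full
    return s / math.sqrt(s + b), S / B

# 100 TeV row: kappa_lambda = 1, |m_aa - m_h| < 1 GeV, m_hh > 400 GeV,
# signal 3.16K and total background 2.79K quoted for 30 ab^-1
z, sb = cutflow_check(3160.0, 2790.0, 30.0)
print(round(z, 1), round(sb, 2))  # → 7.5 1.13
```

This reproduces the last row of the 100~TeV block of the table.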
Higher-order corrections are included through a next-to-leading-order $K$-factor of 1.6~\cite{nlo,nnlo,hh_madgraph}, neglecting possible higher-order effects on the $m_{hh}$ distribution. We normalize the $t\bar{t}h$ and $Zh$ backgrounds to their respective NLO and NNLO rates 2.8~pb and 2.2~pb at 27~TeV (37~pb and 11~pb at 100~TeV)~\cite{tth_zh}. We also include the full set of detector effects with \textsc{Delphes3}~\cite{delphes}, following the HL-LHC projections~\cite{performance}.\medskip Jets are defined with the anti-$k_T$ algorithm with ${R=0.4}$ via \textsc{FastJet}~\cite{fastjet}. While the $t\bar{t}h$ background is almost irrelevant at the 14~TeV LHC, it becomes increasingly important at higher energies. Obviously, the more complex, high-multiplicity final state offers many handles to tame it. We employ a simple veto on leptons with \begin{align} p_{T,\ell}>10~{\ensuremath\rm GeV}\ \text{and}\ |\eta_\ell|<2.5 \; , \end{align} combined with a veto of more than three jets passing Eq.~\eqref{eq:base_selections}. \begin{figure*}[t] \centering \includegraphics[width=.4\textwidth]{mhh_lambda_dependence_1GeV_nj3_rebin_log} \hspace*{0.02\textwidth} \includegraphics[width=.4\textwidth]{mhh_lambda_dependence_1GeV_nj3_rebin_100TeV_log} \vspace*{-3mm} \caption{Higgs pair invariant mass for the signal and backgrounds based on realistic simulations for the HE-LHC (left) and the 100 TeV future collider (right). The $m_{\gamma\gamma}$ distribution is described by a Gaussian with width $0.75$~GeV.} \label{fig:reso} \end{figure*} To suppress the initially overwhelming $jj\gamma\gamma$ background, we demand two $b$-tags among the three hardest jets. A crucial observation is that at higher energies, initial state radiation (ISR) often leads to a harder jet than the Higgs decay products, such that either the hardest or second-hardest jet is not a $b$-jet for roughly half of all events.
This is illustrated in Fig.~\ref{fig:compo} as the composition of the second-hardest parton-level jet, requiring that both truth-level $b$-jets pass the selection of Eq.~\eqref{eq:base_selections}. Thus, the requirement that the two leading jets be $b$-tagged should be adjusted accordingly. Based on this observation we account for two $p_T$-ordered jet patterns, $(bb,bbj)$ and $(jbb,bjb)$. This increases our signal efficiency by around 50\%. Expanding this scheme to even more jets is not effective because it eventually also increases the continuum backgrounds and the $t\bar{t}h$ contributions. The reliability of our Monte Carlo simulation underlying this procedure is guaranteed by the fact that the hardest three jets are generated using multi-jet merging. To control the continuum backgrounds, we require two Higgs mass windows, \begin{align} |m_{bb}-m_h|<25~{\ensuremath\rm GeV}, \quad |m_{\gamma\gamma}-m_h|<1~{\ensuremath\rm GeV} . \label{eq:jreslov} \end{align} An obvious way to enhance the Higgs pair signal is to improve the resolution on the reconstructed photons and $b$-jets from the Higgs decays. We adopt the rather conservative resolution for $m_{bb}$ as in Eq.~\eqref{eq:jreslov}. Any experimental improvement would greatly help with the signal identification and background separation. As for the photon resolution, we illustrate this effect by using three representative values where the $m_{\gamma\gamma}$ distribution is smeared by a Gaussian of width $0.75$, $1.5$, $2.25$~GeV, corresponding to Higgs mass windows \begin{align} |m_{\gamma\gamma} - m_h| \le 1,2,3~{\ensuremath\rm GeV}. \label{eq:preslov} \end{align} A resolution of $1.5$~GeV has already been achieved at the LHC~\cite{CMS:2016zjv}.\medskip The results at this stage of the analysis are illustrated in Table~\ref{tab:cutflow} with a full cut flow for the two collider energies and assuming $\kappa_\lambda = 0,1,2$.
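The two-pattern $b$-tag requirement amounts to a simple accept/reject rule on the $p_T$-ordered flavor sequence of the leading jets. A schematic sketch (the function name and event representation are ours, purely illustrative of the selection logic):

```python
def passes_btag_selection(jets):
    """Accept an event if its pT-ordered leading-jet flavor pattern matches
    (bb), (bbj), (jbb) or (bjb), i.e. both b-jets sit among the three
    hardest jets.  `jets` is a pT-ordered list of flavor labels."""
    patterns = [['b', 'b'], ['b', 'b', 'j'], ['j', 'b', 'b'], ['b', 'j', 'b']]
    return jets[:3] in patterns

print(passes_btag_selection(['j', 'b', 'b']))  # hard ISR jet leads -> True
print(passes_btag_selection(['j', 'j', 'b']))  # one b in the top three -> False
```

Accepting the $(jbb,bjb)$ patterns in addition to $(bb,bbj)$ is what recovers the roughly 50\% of signal events in which the ISR jet out-competes a $b$-jet in $p_T$.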
We already find a large background suppression $S/B\sim 0.09~...~0.2$ for the HE-LHC and $0.14~...~0.3$ at a future 100~TeV collider. Requiring ${m_{hh}>400}$~GeV improves it to $S/B\sim 0.3~...~0.7$ or $0.5~...~1.1$, respectively. This is entirely due to the rapidly falling backgrounds as compared to the $hh$ signal, but will be at the expense of the self-coupling determination. The $m_{hh}$ distribution of the signal and the different backgrounds is shown in Fig.~\ref{fig:reso}. \begin{figure*}[t] \centering \includegraphics[width=.4\textwidth]{hh_5sigma_2b3j} \includegraphics[width=.4\textwidth]{hh_5sigma_width} \caption{Luminosity required for a $5\sigma$ discovery of Higgs pair production for the HE-LHC (dashed) and a 100~TeV collider (full). Left: sensitivity in terms of the total rate, demanding two $b$-tags among the two or three leading jets and assuming $|m_{\gamma\gamma}-m_h|<1$~GeV. Right: sensitivity for three mass windows $|m_{\gamma\gamma}-m_h|<1,2,3$~GeV. We assume the SM hypothesis with $\kappa_\lambda=1$ and use a binned log-likelihood analysis of the $m_{hh}$ distribution.} \label{fig:bound1} \end{figure*} The signal-to-background ratio can be strongly improved by a better $m_{\gamma\gamma}$ resolution. As long as most of the $h\to \gamma\gamma$ events are captured by an appropriate $m_{\gamma\gamma}$ window, the contributions from continuum backgrounds can be estimated using side-band measurements. \medskip Going beyond a cut-based analysis, for example on $m_{hh}$, we employ a binned log-likelihood analysis based on the CL$_{s}$ method, using the full $m_{hh}$ distribution to extract $\kappa_{\lambda}$~\cite{read}. The dominant backgrounds feature powerful control regions or ratio measurements like $t\bar{t}h/t\bar{t}Z$~\cite{nimatron_yt}. Therefore, we neglect their systematic uncertainties.
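The core of the binned log-likelihood comparison of self-coupling hypotheses can be sketched with a simplified Asimov stand-in for the full CL$_s$ machinery of Ref.~\cite{read}; the toy $m_{hh}$ yields below are invented for illustration only:

```python
import numpy as np

def separation_significance(s_true, s_alt, b):
    """Median (Asimov) significance for rejecting the hypothesis s_alt + b
    when the data follow s_true + b, from a binned Poisson log-likelihood
    ratio over the m_hh distribution."""
    mu1 = np.asarray(s_true, float) + np.asarray(b, float)  # true expectation
    mu2 = np.asarray(s_alt, float) + np.asarray(b, float)   # alternative
    q = 2.0 * np.sum(mu1 * np.log(mu1 / mu2) - (mu1 - mu2))
    return np.sqrt(q)

# toy m_hh bins: kappa_lambda = 0 mainly enhances the threshold region
b     = np.array([100.0, 60.0, 30.0, 10.0])
s_sm  = np.array([10.0, 15.0, 10.0, 4.0])   # kappa_lambda = 1
s_kl0 = np.array([22.0, 20.0, 11.0, 4.0])   # kappa_lambda = 0
print(round(separation_significance(s_sm, s_kl0, b), 2))  # → 1.25
```

Because the two hypotheses differ mostly near threshold, most of the separation power comes from the first bins, which is why the low-$m_{hh}$ region matters for the self-coupling even though a hard $m_{hh}$ cut maximizes $S/B$.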
As a starting point, we show the $5\sigma$ determination of the Higgs pair signal strength in the left panel of Fig.~\ref{fig:bound1}, requiring two $b$-tagged jets among the two or three leading jets. We decompose the latter case into two sub-samples $(bb,bbj)$ and $(jbb,bjb)$. We see that exploiting the extra-jet emission significantly improves the significance as compared to the standard procedure adopted in the literature. The $5\sigma$ measurement for the HE-LHC is pushed from $2.8~{\ensuremath\rm ab^{-1}}$ to below $2.3~{\ensuremath\rm ab^{-1}}$. In the right panel of Fig.~\ref{fig:bound1} we show the discovery reach for the Higgs pair signal as a function of the luminosity of the HE-LHC and the 100~TeV collider. We assume three di-photon invariant mass resolutions with three Higgs mass windows as in Eq.~\eqref{eq:preslov} for a SM self-coupling $\kappa_{\lambda}=1$. Higgs pair production will be discovered at the HE-LHC with approximately $2.5~...~5~{\ensuremath\rm ab^{-1}}$ and at the 100~TeV collider with $0.2~...~0.3~{\ensuremath\rm ab^{-1}}$ of data, in both cases well below the design luminosity.\medskip As commented in the Introduction, there exist physics scenarios in which the Higgs self-coupling could be modified by an order-one deviation from its SM value. The accurate measurement of the Higgs self-coupling via Higgs pair production at future colliders holds the best promise to uncover the new physics associated with the Higgs sector. In Fig.~\ref{fig:bound2}, we show the accuracy of this measurement.
At the 68\% confidence level the triple Higgs coupling can be measured with the precision \begin{align} \kappa_{\lambda} &\approx 1 \pm 15\% \qquad \text{(HE-LHC, 27~TeV, $15~{\ensuremath\rm ab^{-1}}$)} , \notag \\ \kappa_{\lambda} &\approx 1 \pm 5\% \ \,\qquad \text{(100~TeV, $30~{\ensuremath\rm ab^{-1}}$).} \end{align} At the 95\% confidence level, \begin{align} \kappa_{\lambda} &\approx 1 \pm 30\% \qquad \text{(HE-LHC, 27~TeV, $15~{\ensuremath\rm ab^{-1}}$)} , \notag \\ \kappa_{\lambda} &\approx 1 \pm 10\% \qquad \text{(100~TeV, $30~{\ensuremath\rm ab^{-1}}$).} \end{align} The way to improve these expected limits towards the mathematically defined best reach shown in Fig.~\ref{fig:madmax_tot} is to exploit more kinematic features and thereby also suppress the reducible $t\bar{t}h$ background. \begin{figure}[t] \centering \includegraphics[width=.4\textwidth]{hh_dlamb} \caption{Confidence level for separating an anomalous Higgs self-coupling hypothesis from the Standard Model $\kappa_{\lambda}=1$.} \label{fig:bound2} \end{figure} To gain some insight into how robust our results are, we have also examined other available choices of detector parameters, one from CMS~\cite{Chatrchyan:2012jua} and the other from the CERN Yellow Report (YR)~\cite{Mangano:2017tke} for the Future Circular Collider (FCC). As shown in Fig.~\ref{fig:bound_appendix} in the Appendix, we find that the results are quite consistent with each other, with the YR performance being slightly better. This indicates possible room for further improvement. \section{Summary and Outlook} \label{sec:sum} In this paper, we have explored Higgs pair production as a direct way to measure the Higgs self-coupling, the least-known but arguably the most important fundamental parameter of the Standard Model.\medskip We first presented the production cross section for $pp \to hh$ at future high-energy colliders in Fig.~\ref{fig:xs_hh_Ecm}, Sec.~\ref{sec:frame}.
We discussed the signal rate for the process with leading sensitivity, $pp\to hh\to b\bar b\ \gamma\gamma$, and laid out the event selection criteria in accordance with the experimental acceptance at the LHC. In Sec.~\ref{sec:features}, we discussed the kinematic features of the signal and compared them with the backgrounds, as shown in Fig.~\ref{fig:madmax_diff}. The key variable is the Higgs pair invariant mass distribution, which exhibits distinctive features. We first performed a parton-level analysis that combines the maximum significance distributions into a global maximum significance accumulated over the full phase space, for the HL-LHC, the HE-LHC, and a future 100~TeV collider. For both future colliders we found excellent prospects for kinematics-based determinations of the Higgs self-coupling, as shown in Fig.~\ref{fig:madmax_tot}.\medskip In Sec.~\ref{sec:ana}, we then carried out a search strategy based on the rate combined with kinematic shapes in realistic simulations. The approach is not only more powerful~\cite{hh-ww,madmax-hh} than a purely rate-based measurement but also more stable against systematic and theoretical uncertainties, provided we account for all bin-to-bin correlations. Our method removes all degeneracies which appear in a rate-based measurement and leads to well-defined symmetric error bars on the modified self-coupling. Higher-energy colliders allow for including events with high $m_{hh}$. In these increasingly common configurations at high energies, the additional jets from QCD radiation frequently surpass the typical $b$-jet energy of about $m_h/2$, as seen in Fig.~\ref{fig:compo}. To improve the signal efficiency we included up to three observable jets, fully accounting for QCD jet radiation via the \textsc{Mlm} merging, with possibly softer $b$-jets from Higgs decays. We showed a cut flow in Table~\ref{tab:cutflow} to illustrate the staged improvements and to give a comparison between the two future colliders.
We further enhanced the measured significances by decomposing the samples into two sub-samples, $(bb,bbj)$ and $(jbb,bjb)$. Finally, we determined the integrated luminosity needed to reach a $5\sigma$ significance for observing the SM $hh$ signal, as shown in Fig.~\ref{fig:bound1}. We found that the high-energy upgrade of the LHC to 27~TeV would reach a 5$\sigma$ observation of Higgs pair production with an integrated luminosity of about 2.5 ab$^{-1}$. It would have the potential to reach 15\% (30\%) accuracy at the 68\% (95\%) confidence level in determining the SM Higgs boson self-coupling. A future 100 TeV collider could improve the self-coupling measurement to better than 5\% (10\%) at the 68\% (95\%) confidence level, as shown in Fig.~\ref{fig:bound2}. These results roughly agree with the optimal reach shown in Fig.~\ref{fig:madmax_tot}. Our conclusions are quite robust against some moderate variations of the detector performance, as shown in Fig.~\ref{fig:bound_appendix} in the Appendix. In the hope of searching for effects from physics beyond the SM, our results should provide conclusive information on whether or not the Higgs self-interaction is modified at the level of an order-one deviation.\medskip While our conclusions on the determination of the Higgs self-interaction at future hadron colliders are robust and important, there is still room for improvement. Although the final state $b\bar b\ \gamma\gamma$ is believed to be the most sensitive channel because of the background suppression and signal reconstruction, there exist complementary channels such as $gg\to hh \to b\bar b\ \tau^+\tau^-$, $b\bar b\ W^+W^-$, $b\bar b\ b\bar b$, etc. The kinematics-based measurement and all the features related to QCD radiation at higher energies should be equally applicable to all of them.
\section{Appendix} \label{sec:appendix} As explained in the text, we optimize our set of selection cuts primarily to reduce the continuum background, which would be accompanied by a large systematic uncertainty, and secondarily to reduce the $t\bar{t}h$ background, which is the largest background component with a Higgs mass peak structure. To achieve this optimization, we take the photon identification working point with a reasonably efficient jet-fake rejection~\cite{performance}, and require the additional jet veto ($n_j \le 3$). We believe our selection is close to optimal, but for completeness, we assess the effects of applying different efficiencies taken from the literature and provide the final sensitivities assuming those numbers. For comparison, we consider two different efficiency scenarios, one from the CMS projections~\cite{Chatrchyan:2012jua} and one from the CERN Yellow Report (YR)~\cite{Mangano:2017tke} for the study of Future Circular Colliders (FCC). \begin{figure}[t] \centering \includegraphics[width=.4\textwidth]{hh_dlamb_projections_27tev} \includegraphics[width=.4\textwidth]{hh_dlamb_projections_100tev} \caption{Comparison of the final confidence level for separating an anomalous Higgs self-coupling hypothesis from the Standard Model $\kappa_{\lambda}=1$ for several efficiency choices.
We display the results for 27~TeV (top panel) and 100~TeV (bottom panel).} \label{fig:bound_appendix} \end{figure} We adopt the fitted CMS projections as follows: \begin{eqnarray} &&\epsilon_{\gamma\to\gamma}=0.85,\cr &&\epsilon_{j\to\gamma}=\begin{cases} 0.0113\exp(- \frac{p_T}{26.3~{\ensuremath\rm GeV}}) \ \ [p_T<100~{\ensuremath\rm GeV}]\cr 0.0025 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [p_T \ge 100~{\ensuremath\rm GeV}] \end{cases},\cr &&\epsilon_{b\to b}=0.85\tanh\left(\frac{p_T}{400~{\ensuremath\rm GeV}}\right)\frac{25.0}{1+ p_T/15.9~{\ensuremath\rm GeV}}, \cr &&\epsilon_{c\to b}=0.25\tanh\left(\frac{p_T}{55.6~{\ensuremath\rm GeV}}\right)\frac{1}{1+ p_T/769~{\ensuremath\rm GeV}},\cr &&\epsilon_{j\to b}=0.01. \end{eqnarray} The efficiency set used in the YR is the following: \begin{eqnarray} &&\epsilon_{\gamma\to\gamma}=0.9, \ \ \ \epsilon_{j\to\gamma}=0.01\exp\left(-\frac{p_T}{30~{\ensuremath\rm GeV}}\right),\cr &&\epsilon_{b\to b}=0.75,\ \ \ \epsilon_{c\to b}=0.1,\ \ \ \epsilon_{j\to b}=0.01. \end{eqnarray} Fig.~\ref{fig:bound_appendix} shows the comparison among the final results using the three different sets of efficiencies for 27~TeV (top) and 100~TeV (bottom). The red lines show the final results assuming our adopted efficiencies (from the ATLAS HL-LHC projection study)~\cite{performance}, while the green and the blue lines show those assuming the YR and the CMS ones, respectively. Our sensitivity is not much improved by taking the working points with a larger photon efficiency used in these two alternative references, because of the correspondingly worse light-jet rejection, which enhances the continuum background, especially the $bb\gamma j$ contribution. Note that we designed our analysis for a large $S/B$ by targeting the continuum background $bb\gamma j$, leaving $t\bar{t}h$ as the main background contribution.
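For reference, the fitted CMS-projection parametrizations above translate directly into code (transcribed from the equations; $p_T$ in GeV, function names are ours):

```python
import math

def eff_photon_fake(pt):
    """Jet -> photon mis-identification rate (CMS-projection fit)."""
    return 0.0113 * math.exp(-pt / 26.3) if pt < 100.0 else 0.0025

def eff_btag(pt):
    """b-jet tagging efficiency (CMS-projection fit)."""
    return 0.85 * math.tanh(pt / 400.0) * 25.0 / (1.0 + pt / 15.9)

def eff_ctag_as_b(pt):
    """c-jet mistag rate as a b-jet (CMS-projection fit)."""
    return 0.25 * math.tanh(pt / 55.6) / (1.0 + pt / 769.0)

# photon efficiency and light-jet mistag rate are flat in this scenario
EFF_GAMMA, EFF_J_TO_B = 0.85, 0.01

print(round(eff_btag(50.0), 3))  # → 0.638
```

The $b$-tag efficiency rises with $p_T$ before falling again, which is one reason why allowing softer $b$-jets behind a hard ISR jet changes the selection efficiency at high energies.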
In this way we achieve $S/B\sim 0.7$, compared to the corresponding numbers 0.45 (YR) and 0.4 (CMS), respectively, for the 27~TeV analysis. For the 100~TeV analysis, we achieve $S/B\sim 1.1$ against 0.6 (YR) and 0.5 (CMS). Thus, our estimate is more robust against the systematic uncertainty of the continuum background. Additionally, it gives us a larger sensitivity from the lower $m_{hh}$ region, a regime that is more background-contaminated but displays a larger dependence on the triple Higgs coupling. \bigskip \bigskip \begin{center} \textbf{Acknowledgment} \end{center} We would like to thank Michelangelo Mangano and Michele Selvaggi for discussions. This work was supported in part by the U.S.~Department of Energy under grant No.~DE-FG02-95ER40896 and by the PITT PACC. DG is supported in part by the U.S.~National Science Foundation under the grant PHY-1519175. FK is supported by the U.S.~National Science Foundation under the grant PHY-162063. MT is supported in part by the JSPS Grant-in-Aid for Scientific Research Numbers JP16H03991, JP16H02176, 17H05399, and by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. TP would like to thank Uli Baur (15 years ago) and Michael Spannowsky for helpful discussions concerning kinematic features of Higgs pair production.
\section{\label{intr}Introduction} Precise cosmological observations gathered in recent years have provided us with an increasingly detailed picture of the Universe and its constituents (see, e.g., \cite{Suzuki:2011hu,Anderson:2012sa,Ade:2015xua}). At the present time, the Universe appears to be dominated by two main energy components whose fundamental nature remains mysterious: dark energy (or modified gravity) --- responsible for the current acceleration of the expansion of the Universe --- and dark matter --- required to explain the observed large-scale structure of the Universe. However, several other particles, such as baryons and photons, have a much more familiar nature and play a fundamental role in the Universe's structure and evolution. Some of these particles may be regarded as localized energy concentrations, with fixed rest mass and structure, which are not significantly affected by their self-induced gravitational field. Hence, they are often modeled as topological solitons. Still, the modeling of particles as solitons in the simplest scalar field models is not without problems. In particular, the existence of stable finite energy solutions of the nonlinear Klein-Gordon equation in more than one spatial dimension was ruled out by Hobart and Derrick \cite{1963PPS....82..201H,1964JMP.....5.1252D} using a simple scaling argument. In Ref. \cite{Avelino:2010bu} Derrick's argument was applied to the case of more general scalar field models and the existence of static global defect solutions of arbitrary dimensionality whose energy does not diverge at spatial infinity was explicitly demonstrated in that context. Skyrmions \cite{1962NucPh..31..556S,Battye:1997qq} --- topological solitons of a Lagrangian embodying chiral symmetry --- and Q-balls \cite{Coleman:1985ki,Kusenko:1997si} --- stationary non-topological solitons whose stability is guaranteed by a conserved charge --- are other examples of localized defects in 3+1 dimensions.
In the present paper we start by investigating the necessary conditions for the existence of localized static concentrations of energy (static solitons) in the absence of a significant self-induced gravitational field, providing a considerable extension of the results presented in Ref. \cite{Avelino:2010bu}. The focus will be on the restrictions imposed on the on-shell matter Lagrangian of a solitonic particle or of a collection of moving solitonic particles which can be described as a fluid. This is particularly relevant for modified theories of gravity with nonminimal coupling to matter where the matter Lagrangian appears explicitly in the equations of motion of the gravitational field, such as $f(R,{\mathcal L_m})$ \cite{Harko:2010mv} and $f(R,T)$ \cite{2011PhRvD..84b4020H} theories of gravity, since, in this context, the knowledge of the energy-momentum tensor is, in general, insufficient to compute the relevant physics \cite{Faraoni:2009rk,Minazzoli:2012md}. Throughout the paper, we will assume the metric signature $[-,+,\cdots,+]$ and units in which the speed of light in vacuum $c$ equals unity. The Einstein summation convention will be used when a latin or greek index variable appears twice in a single term, once in an upper (superscript) and once in a lower (subscript) position --- the exception will be the latin index $l$ (or $\hat l$), for which the Einstein summation convention shall not be used. Greek and latin indices take the values $\mu, \nu = 0,\cdots,D$; $a, b, c = 1,\cdots,{\mathcal D}$; $i, j, l= 1,\cdots,D$, ${\hat i}, {\hat j}, {\hat l} = N-D+1,\cdots,N$ (with $D \le N$) --- the exception will be the greek index $\lambda$ which shall denote a positive real parameter. 
\section{Derrick's argument\label{sec2}} Consider a $D+1$-dimensional Minkowski space-time with line element given by \begin{equation} ds^2=g_{\mu \nu} dx^\mu dx^\nu =-dt^2 + \delta_{ij} dx^i dx^j \end{equation} and a $\mathcal D$-dimensional real scalar field multiplet $\{\phi^1, ..., \phi^{\mathcal D}\}$ described by the action $S=\int {\mathcal L}_m \, d^{D+1}x$, where \begin{equation} \label{standardL} {\mathcal L}_m =X - V(\phi^a) \end{equation} is the matter Lagrangian. Here, $X=-\delta_{ab} \phi^a_{,\mu} \phi^{b,\mu}/2$, the comma in $\phi^a_{,\mu}$ denotes a partial derivative with respect to the space-time coordinate $x^\mu$, $\phi^a_{,\mu} = g_{\mu\nu} \phi^{a,\nu}$, $g_{\mu\nu}$ are the components of the metric tensor, $\delta_{ab}$ is the Kronecker delta ($\delta_{ab}=1$ if $a=b$ and $\delta_{ab}=0$ if $a \neq b$), and $V \ge 0$. The energy-momentum tensor for this model is given by \begin{equation} T_{\mu\nu}=\delta_{ab} \phi^a_{,\mu} \phi^b_{,\nu} + g_{\mu\nu}{\mathcal L}_m\,, \end{equation} and the total energy can be computed as $E=\int d^D x \, T_{00}$. Consider a static solution $\phi^a=\phi^a(x^i)$ with finite energy equal to \begin{equation} E= \int d^D x \, \left(\delta_{ab} X^{ab} + V (\phi^a) \right) = K + U\,, \end{equation} where \begin{equation} K = \int d^D x \, \left(\delta_{ab} X^{ab} \right)\,, \qquad U = \int d^D x \, V (\phi^a) \end{equation} are, respectively, the gradient and potential contributions to the total energy, and $X^{ab}=\phi^a_{,i} \phi^{b,i}/2$ (for a static solution $X=-\delta_{ab}X^{ab}$, so that $T_{00}=-{\mathcal L}_m=\delta_{ab}X^{ab}+V$). Under the rescaling $x^i \to {\tilde x}^i=\lambda x^i$, where $\lambda$ is a positive real parameter (that equals unity in the initial configuration), the total energy becomes \begin{equation} E (\lambda)= \int d^D x \, \left(\delta_{ab} X^{ab}_\lambda + V (\phi^a_\lambda) \right) \,, \end{equation} where $\phi^a_\lambda = \phi^a (\lambda x^i)$ and $X^{ab}_\lambda=\phi^a_{\lambda,i} \phi^{b,i}_\lambda/2$.
Changing the integration variable to ${\tilde x}^i=\lambda x^i$, one obtains \begin{eqnarray} E(\lambda) &=& \lambda^{-D} \int d^D {\tilde x} \, \left(\delta_{ab} \lambda^2 X^{ab} + V (\phi^a) \right) \nonumber \\ &=& \lambda^{2-D} K + \lambda^{-D} U \,. \end{eqnarray} A static solution $\phi^a=\phi^a(x^i)$ must satisfy \begin{equation}\label{Dcondition} \left[\frac{dE}{d\lambda}\right]_{\lambda=1}=(2-D) K - D U=0\,. \end{equation} Hence, no equilibrium static solutions with finite $K > 0$ and finite $U > 0$ exist for $D \ge 2$ \cite{1963PPS....82..201H,1964JMP.....5.1252D}. Despite this fact, static global string and monopole solutions do exist in 3+1 dimensions, since these are cases for which the gradient energy $K$ formally diverges. Still, in practice, there will always be a cutoff at some energy scale (for instance, in the cosmological context, the linear divergence in the energy of a global monopole has a cutoff due to the finite --- sub-horizon --- characteristic length of the global monopole network \cite{Lopez-Eiguren:2016jsy,Sousa:2017wvx}). In Ref. \cite{Avelino:2010bu} Derrick's argument has been generalized to the case of scalar field Lagrangians of the form \begin{equation} \label{gen} {\mathcal L}_m = {\mathcal L}_m (\phi^a,X^{bc})\,, \end{equation} with the energy-momentum tensor given by \begin{equation} T_{\mu\nu}={\mathcal L}_{m,X^{ab}} \phi^a_{,\mu} \phi^b_{,\nu} + g_{\mu\nu}{\mathcal L}_m\,. \end{equation} There, it has been shown that any static equilibrium solution $\phi^a=\phi^a(x^i)$ must satisfy \begin{equation}\label{Dcondition2} \left[\frac{dE}{d\lambda}\right]_{\lambda=1} = \int d^D x \, {T^i}_i=0 \end{equation} or, equivalently, that the average pressure (over volume and directions) must vanish.
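Derrick's conclusion can be verified with a two-line numerical check of the rescaled energy $E(\lambda)=\lambda^{2-D}K+\lambda^{-D}U$, a sketch with arbitrary positive $K$ and $U$:

```python
def dE_dlambda(K, U, D):
    """Derivative of E(lam) = lam**(2-D) * K + lam**(-D) * U at lam = 1,
    i.e. the stationarity condition (2 - D) * K - D * U derived in the text."""
    return (2 - D) * K - D * U

# For K = U > 0 the stationarity condition can hold only in D = 1;
# for D >= 2 the derivative is strictly negative for any K, U > 0,
# so the configuration always lowers its energy by shrinking.
for D in (1, 2, 3):
    print(D, dE_dlambda(K=1.0, U=1.0, D=D))
```

In $D=1$ the condition reduces to the virial balance $K=U$; for $D\ge 2$ both terms of the derivative are non-positive, which is the scaling obstruction to static solitons.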
\section{Solitonic particles and fluids: $ \langle {\mathcal L_m}\rangle =\langle T \rangle$ \label{sec3}} Let us describe a static particle as a localized static concentration of energy (static soliton of finite size), and assume that the spacetime is locally Minkowskian on the particle's characteristic lengthscale. Again, we shall implicitly assume that the gravitational field has a negligible impact on the particle structure, so that one may safely neglect the perturbations to the Minkowski metric when computing the total energy of the particle. We shall also start by assuming that the matter fields can be described by a generic real scalar field multiplet $\{\phi^1, ..., \phi^{\mathcal D}\}$, an assumption that shall be relaxed later on. \subsection{Spherical deformation} Consider again the transformation $x^i \to {\tilde x}^i = \lambda x^i$, and assume that the matter scalar fields describing a solitonic particle are carried along with it [equivalently, that the functions $\phi^a ({\tilde x}^i)$ are independent of $\lambda$]. The line element may be rewritten as a function of the spatial coordinates ${\tilde x}^i$ as \begin{equation} ds^2=-dt^2 + \delta_{ij} dx^i dx^j = -dt^2 +{\tilde g}_{ij} d{\tilde x}^i d{\tilde x}^j\,, \label{newds} \end{equation} where ${\tilde g}_{ij}=\lambda^{-2} \delta_{ij}$. Here, we shall also assume that the on-shell matter Lagrangian is invariant with respect to an arbitrary rescaling of the time coordinate, so that \begin{equation} \frac{\delta \mathcal L_m}{\delta g^{00}}=0\,, \label{condt} \end{equation} in the proper frame in which the particle is static. The components of the energy-momentum tensor of the matter fields are defined by \begin{equation} T_{\mu\nu}=-\frac{2}{\sqrt {-g}} \frac{\delta (\mathcal L_m \sqrt {-g})}{\delta g^{\mu \nu}} = -2 \frac{\delta \mathcal L_m}{\delta g^{\mu \nu}} + g_{\mu \nu} \mathcal L_m\,, \label{Tmunu} \end{equation} where $g$ is the determinant of the metric.
Equations (\ref{condt}) and (\ref{Tmunu}) imply that the energy density is given by \begin{equation} \rho=T_{00}=-\mathcal L_m\,, \label{rho} \end{equation} so that the total energy of the transformed static concentration of energy is \begin{equation} E(\lambda)=-\int \mathcal L_m ({{\tilde g}}^{ij},{\tilde x}^k) \sqrt{-{\tilde g}} d^D {\tilde x}\,, \label{energy} \end{equation} where $\sqrt{-{\tilde g}}=\lambda^{-D}$. Note that the transformed matter Lagrangian $\mathcal L_m$ will be a function of both ${\tilde g}_{ij}$ and the matter fields, with the matter fields preserving the dependence on ${\tilde x}^i$ of the initial static configuration. A necessary condition for static equilibrium around the initial configuration is that $E(\lambda)$ has a minimum at $\lambda=1$. Therefore, \begin{equation} \left[\frac{dE}{d\lambda}\right]_{\lambda=1}=0 \end{equation} or, equivalently, \begin{eqnarray} \left[\frac{dE}{d\lambda}\right]_{\lambda=1} & = & - \int \left[\frac{\partial\left(\mathcal L_m \sqrt{-{\tilde g}}\right)}{\partial \lambda}\right]_{\lambda=1} d^D {\tilde x} = \nonumber \\ & = & -\int \left[\left(\frac{\delta\mathcal L_m}{\delta {\tilde g}^{ij}}\frac{\partial {\tilde g}^{ij}}{\partial\lambda}-\frac{D}{\lambda}\mathcal L_m\right)\lambda^{-D}\right]_{\lambda=1}d^D {\tilde x}=\nonumber\\ & = & -\int \left[2\frac{\delta\mathcal L_m}{\delta{\tilde g}^{ij}}{\tilde g}^{ij}-D\mathcal L_m\right]d^D {\tilde x}=0\,, \label{DEDL} \end{eqnarray} where the fact that ${\tilde g}_{ij}=\lambda^{-2} \delta_{ij}$ (implying that $\partial{\tilde g}^{ij}/\partial\lambda=2{\tilde g}^{ij}/\lambda$) has been used in the derivation of Eq. (\ref{DEDL}). Hence, \begin{equation} \int {T^{i}}_{i} d^D x= 0\,, \label{strace} \end{equation} which, combined with Eq. 
(\ref{rho}), implies that \begin{equation} \langle \mathcal L_m \rangle \equiv \frac{\int \mathcal L_m d^D x}{\int d^D x}= \frac{\int T d^D x} {\int d^D x} \equiv \langle T \rangle\,, \label{trace} \end{equation} where $T={T^\mu}_\mu={T^0}_0+{T^i}_i$ is the trace of the energy-momentum tensor. Equation (\ref{trace}) is a scalar equation (${\mathcal L_m}$ and $T$ are both scalars) and, despite being derived in the particle's rest frame, it is also valid in any moving frame. As a matter of fact, since an inertial comoving frame wherein the particle is static exists, the volume averages of ${\mathcal L_m}$ and $T$ are invariant under any Lorentz boost and, thus, Eq. (\ref{trace}) is independent of the velocity of the particle. Therefore, this result is also applicable to fluids that may be well described by a collection of moving solitonic particles, provided that the spacetime is locally Minkowskian on the smallest proper macroscopic lengthscale in which the fluid approximation applies. Note that here we do not consider potential model-dependent inter-soliton interactions. These, however, are not expected to affect our results unless they have a significant long-range impact on the mass and structure of the particles. Furthermore, an additional requirement to ensure the stability of the static configuration is that \begin{equation} \left[\frac{d^2E}{d\lambda^2}\right]_{\lambda=1}>0\,, \end{equation} which results in the following condition \begin{eqnarray} \int \left[4\frac{\delta^2\mathcal L_m}{\delta(g^{ij})^2}(g^{ij})^2+D\left(D+1\right)\mathcal L_m - \right. \nonumber\\ \left. -\left(4D-2\right)\frac{\delta\mathcal L_m}{\delta g^{ij}}g^{ij} \right] d^D x <0\,. \label{lcond} \end{eqnarray} The results obtained in this section also hold if the matter fields providing a significant contribution to the energy of the particle include higher-order tensor fields ${\bf {\mathcal T}}$ of arbitrary order ${\mathcal N}$, provided that Eq. (\ref{condt}) is satisfied.
If, under the transformation $x^i \to {\tilde x}^i = \lambda x^i$, the components ${\mathcal T}_{\mu_1,...,\mu_{\mathcal N}}({\tilde x}^i)$ are assumed to be fixed functions of ${\tilde x}^i$, independently of the value of $\lambda$, then all the results, given by Eqs. (\ref{energy})-(\ref{lcond}), remain valid. \subsection{Nonspherical deformation} Let us now consider the transformation $x^l \to {\tilde x}^l = \lambda_l x^l$, for $l=1,...,D$, where $\lambda_l$ are positive real parameters, such that $\lambda_l=1$ in the initial configuration. The line element, when written as a function of ${\tilde x}^i$, is still given by Eq. (\ref{newds}), but in this case, ${\tilde g}^{ij}= \lambda_i\lambda_j \delta^{ij}$. We shall demonstrate in the present section that considering this more general deformation, allowing for different directional scaling parameters, leads to conditions on the form of the energy-momentum tensor that are even more restrictive than those in Eq. (\ref{strace}). Assuming that Eq. (\ref{condt}) remains valid, the total energy of the transformed static configuration may be written as \begin{equation} E\left(\lambda_1,...,\lambda_D\right)=-\int \mathcal L_m\left({\tilde g}^{ij},{\tilde x}\right)\sqrt{-{\tilde g}} d^D{\tilde x}\,, \end{equation} with \begin{equation} \sqrt{-{\tilde g}} = \prod _{i}\lambda_i^{-1}\,. \end{equation} In this case, static equilibrium can only be preserved if \begin{equation} \left[\frac{\partial E}{\partial \lambda_i}\right]_{\lambda_1=1,...,\lambda_D=1}=0\,,\qquad \mbox{for all }i=1,...,D\,. \end{equation} Considering a specific value of $l$ and applying a similar procedure to that employed in Eq. (\ref{DEDL}), one obtains \begin{equation} -\int\left(2\frac{\delta\mathcal L_m}{\delta g^{ll}} g^{ll} -\mathcal L_m\right)d^D x=0\,, \end{equation} or, equivalently, \begin{equation} \int T_{l l} d^D x =0\,,\qquad \mbox{for all } l=1,...,D\,.
\end{equation} This not only implies that the volume integral of the spatial trace of the energy-momentum tensor must vanish in the rest frame of the solitonic particle [Eq. (\ref{strace})] but also that the volume average of the pressure along all $l=1,...,D$ directions must vanish. Moreover, the stability of the initial static configuration can only be guaranteed if $E(\lambda_1,...,\lambda_D)$ has a minimum at $\lambda_1=...=\lambda_D=1$, implying that \begin{equation} \left[\frac{\partial^2 E}{\partial \lambda_i^2}\right]_{\lambda_1=1,...,\lambda_D=1}>0\,, \end{equation} which results in the following constraints \begin{equation} \int \left[4\frac{\delta^2\mathcal L_m}{\delta (g^{ll})^2}(g^{ll})^2+2\mathcal L_m-2\frac{\delta\mathcal L_m}{\delta g^{ll}} g^{ll}\right]d^D x<0\,, \end{equation} for all $l=1,...,D$. \section{Defects of codimension $D$ in $(N+1)$-dimensional space-times \label{sec4}} Our results may be generalized to also describe $p$-branes of codimension $D$ ($p=N-D$), embedded in a Minkowski space-time with $N > D$ spatial dimensions (see Refs. \cite{Sousa:2011ew,Sousa:2011iu} for a unified framework describing the macroscopic evolution of featureless $p$-branes). Assuming that \begin{equation} \frac{\delta \mathcal L_m}{\delta g^{{\hat i} {\hat j} }}=0\,, \qquad \mbox{for all } {\hat i},{\hat j} =D+1,...,N\,, \label{defect} \end{equation} $x^{\hat i}$ with ${\hat i}=D+1,...,N$ being the additional space-time coordinates, Eq. (\ref{Tmunu}) implies that \begin{equation} T_{{\hat l}{\hat l}} = \mathcal L_m\,, \qquad \mbox{for all } {\hat l}=D+1,...,N\,, \end{equation} independently of the velocity of the observer. In practice, Eq. (\ref{defect}) means that the defect is featureless along the ${\hat l}=D+1,...,N$ directions or, equivalently, that it is not possible to measure the velocity of the defect along these directions. In the defect rest frame, one has that $T_{00}= -T_{{\hat l}{\hat l}}$ and, on average, $T_{ll}=0$.
Hence, a statistically homogeneous and isotropic network of frozen defects will have an (averaged) equation of state given by \begin{equation} p=-\frac{N-D}{N} \rho\,. \end{equation} Here, $\rho$ and $p$ represent the average energy density and pressure associated with the defect network, independently of the specific form of the matter Lagrangian or the defect geometry along the first $D$ spatial directions. If the defects have a nonzero root mean square velocity $v$, the (average) pressure becomes \cite{Avelino:2015kdn} \begin{equation} p=\left(-\frac{N-D}{N} + \frac{N-D+1}{N} v^2\right)\rho \,, \end{equation} so that $p \to \rho/N$ in the $v \to 1$ limit (note that if $N=3$ and $v=1$ then $p=\rho/3$). \section{Conclusions \label{sec5}} In this paper, we have shown that the volume average of the matter Lagrangian ${\mathcal L_m}$ of a solitonic particle, or of a collection of solitonic particles with fixed rest mass and structure, is equal to the volume average of the trace $T$ of the particle's energy-momentum tensor. This result, obtained with minimal assumptions about the particle structure and constitution, is crucial for the accurate computation of the equations of motion of the gravitational and matter fields in the context of modified theories of gravity with nonminimal coupling to matter where the matter Lagrangian appears explicitly in the equations of motion of the gravitational field, such as $f(R,{\mathcal L_m})$ and $f(R,T)$ gravity. It also implies that, whenever the sole contribution to the gravitational field comes from matter sources which may be well modeled by a collection of solitonic particles with fixed rest mass and structure, $f(R,{\mathcal L_m})$ gravity may be considered a subclass of $f(R,T)$ gravity.\\ P.P.A. thanks Rui Azevedo for enlightening discussions. L.S. was supported by Funda{\c c}\~ao para a Ci\^encia e a Tecnologia (FCT, Portugal) through the Grants No. SFRH/BPD/76324/2011 and No. CIAAUP-02/2018-PPD. 
Funding of this work has also been provided by the FCT Grant No. UID/FIS/04434/2013. This paper benefited from the participation of the authors in the European Cooperation in Science and Technology (COST) action CA15117 (CANTATA), supported by COST.
\section{Introduction} Dark matter haloes are the fundamental building blocks of cosmic large-scale structure, and galaxies form by condensing in their cores. Understanding the structure, evolution and formation of dark matter haloes is an essential step towards understanding how galaxies form and, ultimately, towards testing cosmological models. However, this is a difficult problem due to the highly non-linear nature of the haloes' dynamics. Dark matter haloes originate from random perturbations seeded in the early Universe and grow via mass accretion and mergers with smaller structures throughout their assembly history. N-body simulations provide the only practical tool to compute non-linear gravitational effects starting from an initial random field \citep[{e.g.}][]{gadget, gadget2, Sim-stateofart}. Analytic approximations of structure formation yield useful physical interpretations of these detailed numerical studies. Generally, analytic techniques assume dark matter collapse occurs once the smoothed linear density contrast exceeds a threshold value. Combined with excursion set theory, this ansatz provides a tool to analytically predict the final halo mass of an initially overdense region. This can be used to infer useful quantities such as the abundance of dark matter haloes in the Universe, or the halo mass function, based on properties of a Gaussian random field alone \citep{PS, Bond, Bond&Myers}. The halo mass function is the quantity most often used to assess the accuracy of different analytic frameworks against numerical simulations. The original form of the halo mass function proposed by \citet{PS}, although qualitatively correct, is known to underestimate the abundance of the most massive haloes, and overestimate the abundance of the less massive ones. The need for precision mass functions led to modifications of the original halo mass function in the form of parametric functions calibrated with cosmological simulations \citep{Jenkins, Reed, Tinker}.
Pure analytic extensions of the excursion set ansatz have also been constructed which yield better agreement with numerical simulations \citep{Sheth, Maggiore, Paranjape, Fahari, Porciani}. Given these successful predictions, the excursion set description has become an accepted physical interpretation of the process of structure formation itself. We present a machine learning approach to learn cosmological structure formation directly from N-body simulations. The machine learning algorithm is trained to learn the relationship between the initial conditions and final halo population that results from non-linear evolution. Using the resulting initial conditions-to-haloes mapping, we aim to provide new physical insights into the process of dark matter halo formation, and compare with existing interpretations gained from widely investigated analytic frameworks. In contrast to existing analytic theories, our approach does not require prior assumptions about the physical process of halo collapse; the haloes' non-linear dynamics is learnt directly from N-body simulations rather than approximated by an excursion set model in the presence of a collapse threshold. We provide the machine learning algorithm with a set of informative properties about the dark matter particles extracted from the initial conditions. Machine learning algorithms are sufficiently flexible to include a wide range of initial conditions properties which may contain relevant information about halo formation, without changing the training process of the algorithm. We choose these properties to be aspects of the initial density field in the local surroundings of the dark matter particles' initial position. By quantifying their impact on the learning accuracy of the algorithm, we can investigate which aspects of the early universe density field contain relevant information on the formation of dark matter haloes. 
The trained initial conditions-to-haloes mapping can then also be used to predict the mapping for new initial conditions, without the need to run a further simulation. The highly non-linear nature of dark matter evolution makes it a problem well-suited to machine learning. Machine learning is a highly efficient and powerful tool to learn relationships which are too complex for standard statistical techniques \citep{witten2016data}. In the context of structure formation, machine learning techniques have also been shown to be effective, for example, in learning the relationship between dark and baryonic matter from semi-analytic models \citep{ML-cosmology, Agarwal2017, Nadler2017}. We choose \emph{random forests} \citep{breiman1984classification, Leo}, a popular algorithm which has been shown to outperform other classifiers in many problems \citep{Niculescu-MizilCaruana, Caruana, Douglas, Lochner}. Random forests also lend themselves to physical interpretation, as they provide measures that allow the user to infer which of the inputs are predominantly responsible for the learning outcomes of the algorithm. Random forests are ensembles of decision trees, each following a set of simple decision rules to predict the class of a sample \citep{ballbrunner}. The prediction of the random forest is given by the average of the probabilistic predictions of the individual trees, where the variance of the forest predictions is greatly reduced compared to that of a single tree. To apply this approach, we must turn the process of dark matter evolution into a supervised classification problem. We chose to focus on the simplest case of a binary classification task to illustrate the approach and allow for a cleaner understanding of the physics behind the learning process of the algorithm. We distinguish between dark matter particles which end up in haloes of mass above a threshold, and those which belong either to lower mass haloes or to no halo at all.
This defines two classes; the former set of particles belongs to the \emph{IN haloes} class while the latter forms the \textit{OUT haloes} class. The machine learning algorithm is trained to predict whether the dark matter particles in the initial conditions will end up in IN class haloes or in the OUT class at $z=0$. The training is performed on an existing N-body simulation where we already know the associated halo for each particle (if any). The predictive accuracy of the algorithm crucially depends on the choice of features extracted from the initial conditions and used as input to the machine learning algorithm. We first train the random forest with the initial linear density field as features and subsequently add information on the tidal shear field. We are able to quantify the physical relevance of such properties in the halo collapse process, based on their respective impact on the classification performance of the random forest. Our results demonstrate the utility of machine learning in gaining insights into the physics of structure formation, as well as providing a fast and efficient classification tool. The paper is organised as follows. We present an overview of the classification pipeline and describe how we extract features from the linear density field and train the machine learning algorithm in Sec. \ref{sec:method}. In Sec. \ref{sec:denclass} we interpret the classification output and present our results in Sec. \ref{sec:den_results}. We then extend the feature set to include the tidal shear field in Sec. \ref{sec:shear} and discuss the resulting implications. We study the algorithm's performance as a function of halo properties in Sec. \ref{sec:part_properties}. We perform two blind tests of our pipeline on independent simulations in Sec. \ref{sec:blind}, demonstrating the generality of our results, and finally conclude in Sec. \ref{sec:conclusions}. 
\section{Method} \label{sec:method} We trained and tested the random forest with an existing dark-matter-only simulation produced with \texttt{P-GADGET-3} \citep{gadget2, gadget} and a WMAP5 $\Lambda$CDM cosmological model \citep{WMAP}; $\Omega_{\Lambda} = 0.721$, $\Omega_{\mathrm{m}} = 0.279$, $\Omega_{\mathrm{b}} = 0.045$, $\sigma_{8} = 0.817$, $h = 0.701$, $n_s = 0.96$. The comoving softening length of the simulation is $\epsilon = \SI{25.6}{kpc}$. The simulation evolves $ 256^3 $ dark-matter particles, each of mass $M_{\mathrm{particle}} = \SI{8.24e8}{{M}_{\odot}}$, in a box of comoving size $L = \num{50} \ h^{-1} \si{Mpc}$ from $z=99$ to $z=0$.\footnote{We make use of the Python package \texttt{pynbody} \citep{pynbody} to analyse the information contained in the simulation snapshots.} The haloes were identified using the \texttt{SUBFIND} halo finder \citep{gadget}, a friends-of-friends method with a linking length of $0.2$, with the additional requirement that particles in a halo be gravitationally bound. While \texttt{SUBFIND} also identifies substructure within haloes, we consider the entire set of bound particles to make up a halo and do not subdivide them further. The simulation contains $18,801$ haloes at $z=0$, ranging from masses of $\sim 10^{9}~\mathrm{M}_{\odot}$ to $ \sim 10^{14}~\mathrm{M}_{\odot}$. We used the final snapshot ($z=0$) to label each particle with its corresponding class. At $z=0$, we split the dark matter particles into two classes: \emph{IN haloes} and \emph{OUT haloes}.
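This split amounts to a mass cut on each particle's host halo; a minimal sketch (the array names and the toy threshold are illustrative, not taken from the simulation pipeline):

```python
import numpy as np

def make_labels(halo_id, halo_mass, m_threshold):
    """IN (1) / OUT (0) labels per particle.

    halo_id: host-halo index per particle (-1 for particles in no halo).
    halo_mass: halo masses in solar masses, indexed by halo id.
    Particles in haloes with M >= m_threshold are IN; all others
    (lower-mass haloes or no halo at all) are OUT.
    """
    labels = np.zeros(len(halo_id), dtype=int)
    in_halo = halo_id >= 0
    labels[in_halo] = (halo_mass[halo_id[in_halo]] >= m_threshold).astype(int)
    return labels

# Toy example: three haloes, five particles (particle 3 is in no halo).
halo_mass = np.array([5e12, 8e11, 2e13])
halo_id = np.array([0, 1, 2, -1, 0])
print(make_labels(halo_id, halo_mass, m_threshold=1e12))  # [1 0 1 0 1]
```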
We chose the IN class to contain all particles in haloes of mass $M \geq 1.8 \times 10^{12}~\mathrm{M}_{\odot}$ at $z=0$ ($401$ haloes), and the OUT class to contain all remaining particles, including those in haloes of mass $M < 1.8 \times 10^{12}~\mathrm{M}_{\odot}$ and those that do not belong to any halo.\footnote{The mass scale $M=1.8 \times 10^{12}~\mathrm{M}_{\odot}$ corresponds to the mass of a particular halo of the simulation and was chosen as the class boundary for convenience.} This choice was made in order to split the haloes into the two classes at an intermediate scale within the mass range probed by the simulation. Our pipeline allows the selection of any mass threshold which would ultimately allow us to extend the binary classification to a multi-class one. Each particle, with its associated class label, was traced back to the initial conditions ($z=99$) where we extracted features to be used as input for the random forest as described below. The random forest was trained based on these input features and the known output class for a training subset of particles. We tested the algorithm using the remaining dark matter particles, where the random forest's class prediction was compared to their respective true class label. The robustness of the algorithm was tested further on independent N-body simulations (Sec. \ref{sec:blind}). \subsection{Density Field Features} \label{sec:densityfeatures} Most machine learning algorithms, including random forests, require a \emph{feature extraction} process to extract key properties of the dark matter particles. The classification performance crucially depends on whether or not the chosen features provide meaningful information to allow for a clean separation between the IN and OUT classes. We extracted machine learning features from the linear density field. 
This choice was motivated by the work of \citet{PS} (PS) who developed a model to predict the (comoving) number density of dark-matter haloes as a function of mass based on properties of the linear density field. The ansatz is that a Lagrangian patch will collapse to form a halo of mass $M$ at redshift $z$ if its linear density contrast exceeds a critical value $\delta_c(z)$. An improved theoretical footing for PS theory was developed by \citet{Bond} based on the excursion-set formalism, known as extended Press-Schechter (EPS). The crucial assumption is that the final halo mass corresponds to the matter enclosed in the \textit{largest} possible spherical region with density contrast $\delta_L=\delta_c$. This method yields a halo mass function qualitatively consistent with numerical simulations, suggesting that a useful mapping between Lagrangian regions and final collapsed haloes can be obtained from spherical overdensities. This motivates our choice of machine learning features from the initial linear density field as follows. We smoothed the density contrast $ \delta (\textbf{x}) = \left[ \rho (\textbf{x}) - \bar{\rho} \right]/ \bar{\rho} $, where $ \bar{\rho} $ is the mean matter density of the universe, on a smoothing scale $R$, \begin{equation} \delta (\textbf{x}; R) = \int \delta \left( \textbf{x}^\prime \right) W_{\mathrm{TH}} \left( \textbf{x} - \textbf{x}^\prime; R \right) \text{d}^3 x^\prime, \label{smoothed_delta} \end{equation} where $W_{\mathrm{TH}} (\textbf{x}, R)$ is a real space top-hat window function \begin{equation} W_{\mathrm{TH}} (\textbf{x},R) = \begin{cases} \dfrac{3}{4 \pi R^3} &\text{ for } \left| \textbf{x} \right| \leq R, \\ 0 &\text{ for } \left| \textbf{x} \right| >R. \end{cases} \end{equation} The convolution \eqref{smoothed_delta} was carried out in Fourier space, which naturally accounts for the periodicity of simulations. 
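In Fourier space, the real-space top-hat corresponds to multiplying each mode by $\widetilde{W}_{\mathrm{TH}}(kR) = 3\left[\sin(kR) - kR\cos(kR)\right]/(kR)^3$; a minimal sketch on a periodic grid (the grid size, box length and smoothing radius below are illustrative values, not those of the simulation):

```python
import numpy as np

def smooth_tophat(delta, boxsize, R):
    """Smooth a periodic density contrast field with a real-space
    top-hat of radius R, via multiplication in Fourier space."""
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kR = np.sqrt(kx**2 + ky**2 + kz**2) * R
    with np.errstate(divide="ignore", invalid="ignore"):
        W = 3.0 * (np.sin(kR) - kR * np.cos(kR)) / kR**3
    W[kR == 0] = 1.0  # top-hat normalisation: W -> 1 as k -> 0
    return np.fft.ifftn(np.fft.fftn(delta) * W).real

rng = np.random.default_rng(0)
delta = rng.normal(size=(32, 32, 32))
smoothed = smooth_tophat(delta, boxsize=50.0, R=5.0)
# Smoothing preserves the mean and suppresses small-scale power.
print(np.isclose(smoothed.mean(), delta.mean()), smoothed.std() < delta.std())
```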
A window function $W(\textbf{x}, R)$ of characteristic radius $R$ corresponds to a mass scale $M_{\mathrm{smoothing}} = \bar{\rho} V(R)$, where in the case of a top-hat window function $V_{\mathrm{TH}}(R) = (4/3) \pi R^3$. The feature for machine learning then consists of the density contrast smoothed with a top-hat window function of mass scale $M_{\mathrm{smoothing}}$ (or, smoothing scale $R$) centred on the particle's position in the initial conditions. \begin{figure} \includegraphics[width=\columnwidth]{trajectories.pdf} \caption{Examples of density trajectories corresponding to particles belonging to the IN and OUT classes. The linear density field is smoothed with a real-space top-hat filter centred on each particle's initial position. We calculate the smoothed overdensity $\delta$ as the smoothing mass scale $M$ is increased.} \label{fig:trajectories} \end{figure} We repeated the smoothing for $50$ mass scales evenly spaced in $\log M$ within the range allowed by the volume and resolution of the simulation box, i.e., $\num{3e10} \leq M_\mathrm{smoothing} / \mathrm{M}_{\odot} \leq \num{1e15}$, yielding a set of $50$ features per particle. We found that using a larger number of smoothing scales did not yield improvement in the classification performance, meaning that $50$ smoothing scales were sufficient to capture the relevant information carried by the density field. In the context of excursion set theory, the density contrast of a particle as a function of smoothing scale is known as a \textit{density trajectory}. Fig. \ref{fig:trajectories} shows examples of density trajectories of particles belonging to the true IN and OUT classes. The trajectories describe whether particles are found in overdense or underdense regions as a function of increasing mass scale. As one approaches the largest mass scales probed by the simulation box, the trajectories start to converge to $\delta(\textbf{x}; R \to \infty)=0$, where the density coincides with the mean density of the Universe.
The ensemble of trajectories constitutes the full feature set we used to first train then test the random forest. \subsection{Training the random forest} \label{sec:RF} We make use of the random forest implementation in the \textsc{scikit-learn} \citep{sk-learn} Python package. The random forest was trained using a set of $50$,$000$ randomly selected particles from the simulation, each carrying its own set of density features and corresponding IN or OUT class label. The size of the training set was chosen to form a subset of particles representative of the full simulation box. To test for representativeness, we checked the performance of the algorithm for training sets of different sizes and found no improvement for training sets larger than $50$,$000$ particles. Therefore, we concluded that $50$,$000$ randomly selected particles are sufficient to form a training set representative of the full simulation box. The remaining particles in the simulation were used as a test set; the trained random forest predicts the class label of the particles in the test set, which is then compared to the particles' true labels to assess the algorithm's performance. Note also that random forests are robust to correlated features \citep{Leo}, meaning that the high correlation present in our density features does not affect the predictive performance of the algorithm. Like most machine learning algorithms, random forests have hyperparameters which need to be optimised for a given training set. These include the number of trees and the maximum depth of the forest, the maximum number of particles at the end node of a tree and the size of the subset of features to select at a node split. We used a grid search algorithm combined with $k$-fold cross validation \citep{kfold} to optimise the random forest's hyperparameters. 
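With the \textsc{scikit-learn} implementation, such a search can be sketched as follows (the synthetic data and the parameter grid are illustrative stand-ins, not the values used for the results of this paper):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the (features, labels) training set.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Illustrative hyperparameter grid: number of trees, tree depth, leaf size.
param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [5, 10],
    "min_samples_leaf": [1, 5],
}

# Five-fold cross validation, scored by the area under the ROC curve.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, scoring="roc_auc", cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The combination retained is the one maximising the cross-validated score, exactly as described for the five-fold procedure below.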
In $k$-fold cross validation, the training set is divided into $k$ equally sized sets where $k-1$ sets are used for training and one is used as a validation set, on which the algorithm is tested. This procedure is repeated $k$ times so that each set is used as a validation set once. For each validation set we evaluate a score based on a chosen scoring metric (here we use the area under the Receiver Operating Characteristic curve, see Sec. \ref{sec:denclass}) and average scores over all $k$ validation sets to obtain the final score of a training set. Here, we performed a five-fold cross validation for all combinations of hyperparameters and retained the combination which achieved the best score. \section{Interpreting the classification output} \label{sec:denclass} \begin{table} \renewcommand{\arraystretch}{1.4} \centering \caption{Confusion matrix for two classes: Positives and Negatives. We use this to quantify the performance of the machine learning algorithm, where the positives are particles of the IN class and the negatives are particles of the OUT class.} \label{tab:confusion} \begin{tabular}{|cc|cc|} \hline & & \multicolumn{2}{c|}{\textbf{True Class}} \\ & & \textbf{P} & \textbf{N} \\ \hline \multirow{2}{*}{\makecell{\textbf{Predicted} \\ \textbf{Class}}}& \multicolumn{1}{c|}{\textbf{P}} & \multicolumn{1}{c|}{True Positive (TP)} & False Positive (FP) \\ \cline{3-4} & \textbf{N} & \multicolumn{1}{c|}{False Negative (FN)} & True Negative (TN) \\ \hline \end{tabular} \renewcommand{\arraystretch}{1} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{ROC_density_shear_final.pdf} \caption{ROC curves for the density feature set and the combined shear and density feature set. The machine learning algorithm is able to learn the information contained in the density trajectories to match the EPS prediction. 
The ST prediction represents an extension of standard excursion set developed by \citet{ShethTormen}, which adopts a moving collapse barrier motivated by tidal shear effects. The comparison between the two ROC curves shows little improvement in the test set classification once information on the shear field is added. The ST analytic prediction also does not provide an overall improvement compared to the EPS prediction; the false positive rate (or, contamination) decreases at the expense of decreasing the true positive rate (or, completeness). The machine learning algorithm is able to recover the ST analytic prediction when presented with information on the density field alone by altering the probability threshold.} \label{fig:ROC_shear_density} \end{figure} \begin{figure*} \includegraphics[width=0.8\textwidth]{density_importances.pdf} \caption{The importance ranking of the density features, shown as a function of their smoothing mass scales. The most relevant information in the training of the random forest comes from the density contrast smoothed at mass scales $10^{12}$ -- $10^{13}$ M$_{\odot}$ scales, within the mass range of the IN class haloes. The largest halo mass in the simulation is marked by a grey line.} \label{fig:density_importances} \end{figure*} A random forest (like most machine learning algorithms) outputs a probabilistic measure of belonging to a class for every particle. For practical use this must be mapped onto a concrete class for each particle. Many approaches exist for such a mapping; we choose to consider different probability thresholds at which a particle is considered to belong to a class. A high probability threshold will contain a very pure sample of particles but also will be incomplete. As the probability threshold decreases, one allows for a more complete set of particles at the expense of including misclassified ones. 
Once the probability-to-class mapping is established, we quantify the performance of the algorithm making use of a confusion matrix for binary classification problems as shown in Table \ref{tab:confusion}. Throughout this analysis we always take the positives to be particles of the IN class and negatives to be particles of the OUT class. The perfect classifier consists of true positives and true negatives only. A more realistic classifier will include a number of incorrectly classified particles: misclassified positives fall in the false negative category, yielding a loss of \emph{completeness}, and misclassified negatives fall in the false positive category, yielding an increase in \emph{contamination}. We measure the true positive rate (TPR), the ratio between the number of particles correctly classified as positives and the total number of positives in the data set, \begin{equation} \mathrm{TPR} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}, \end{equation} and the false positive rate (FPR), the ratio between the number of particles incorrectly classified as positives and the total number of negatives in the data set, \begin{equation} \mathrm{FPR} = \frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}}. \end{equation} Receiver Operating Characteristic (ROC) curves \citep{green1988signal, AUC, Fawcett} are a tool to graphically represent the balance between completeness and contamination at various probability thresholds. A ROC curve compares the true positive rate to the false positive rate as a function of decreasing probability threshold. As one lowers the probability threshold, one allows for a more complete set of IN particles (increase in true positive rate) at the expense of a larger contamination of misclassified particles (increase in false positive rate). The area under the curve (AUC) of a ROC curve is a useful quantity to compare classifiers. The perfect classifier would have an AUC of $1$, whereas a random assignment of classes would obtain an AUC of $0.5$. 
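The rates defined above follow directly from the confusion-matrix counts of Table \ref{tab:confusion}; a minimal sketch on a toy set of labels (for full ROC curves and AUCs, scikit-learn's \texttt{roc\_curve} and \texttt{roc\_auc\_score} provide the same quantities):

```python
import numpy as np

def tpr_fpr(true_in, pred_in):
    """True and false positive rates for a binary IN/OUT split.

    true_in, pred_in: boolean arrays; True marks the IN (positive) class.
    """
    tp = np.sum(pred_in & true_in)    # correctly labelled IN
    fn = np.sum(~pred_in & true_in)   # IN particles missed (loss of completeness)
    fp = np.sum(pred_in & ~true_in)   # OUT particles mislabelled IN (contamination)
    tn = np.sum(~pred_in & ~true_in)  # correctly labelled OUT
    return tp / (tp + fn), fp / (fp + tn)

# Toy example: 4 IN and 4 OUT particles, one missed, one contaminant
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
pred  = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
tpr, fpr = tpr_fpr(truth, pred)
print(tpr, fpr)  # 0.75 0.25
```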
Typically, algorithms are considered to be performing well if AUC $\geq 0.8$. We use ROC curves and AUCs to evaluate and compare the performance of the random forest for different feature sets (Sec. \ref{sec:den_results} \& \ref{sec:shear}), different halo mass and radial position ranges (Sec. \ref{sec:part_properties}) and different simulations (Sec. \ref{sec:blind}). \section{Density field classification} \label{sec:den_results} Figure \ref{fig:ROC_shear_density} shows the ROC curve for the density feature set resulting from classifying all particles in the simulation that were not used for training the random forest. The random forest achieves an AUC score of $0.876$. In order to assess whether machine learning can learn as much as human-constructed models, we wish to compare its performance to existing theories. In particular, the EPS formalism motivated our choice of density features and has been demonstrated to infer approximately correct number densities of collapsed haloes from a Gaussian random field \citep{Bond}. Although EPS is commonly used to predict the dark matter halo mass function, we make use of it to predict an independent set of class labels for the test set particles and compare their accuracy to that of the machine learning predictions. Following EPS, the fraction of mass in haloes of mass $M$ is equivalent to the fraction of density trajectories with a first upcrossing of the density threshold barrier $\delta_{\mathrm{th}}$ at mass scale $M$. We take the density threshold to be the spherical collapse threshold adopted by \citet{Bond}: $ \delta_{\mathrm{th}}(z) = \left( D(z)/D(0) \right) \delta_{\mathrm{sc}} $, where $\delta_{\mathrm{sc}} \approx 1.686 $. The predicted halo mass of each particle is given by the smoothing mass scale of the particle's first upcrossing. We then assign to each particle an IN or OUT label depending on whether its predicted halo mass falls in the mass range of the IN or OUT class.
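This labelling procedure can be sketched as follows, assuming each trajectory is stored as density contrasts ordered from the largest to the smallest smoothing mass scale (the direction in which excursion set trajectories are traversed); the function and variable names are illustrative, not the actual pipeline:

```python
import numpy as np

def eps_predicted_mass(trajectory, mass_scales, delta_th):
    """Smoothing mass of the first upcrossing of the threshold, or None.

    trajectory  : density contrasts, ordered from largest to smallest
                  smoothing mass scale.
    mass_scales : corresponding smoothing masses (same ordering).
    """
    above = np.asarray(trajectory) >= delta_th
    if not above.any():
        return None                       # never upcrosses: no predicted collapse
    return mass_scales[np.argmax(above)]  # first (largest-mass) crossing

def eps_label(trajectory, mass_scales, delta_th, m_boundary):
    """IN if the predicted halo mass lies above the class boundary mass."""
    m_pred = eps_predicted_mass(trajectory, mass_scales, delta_th)
    return "IN" if (m_pred is not None and m_pred >= m_boundary) else "OUT"
```

For example, a trajectory that first exceeds the threshold at the $10^{13}$ M$_{\odot}$ smoothing scale would be labelled IN for a class boundary of $1.8 \times 10^{12}$ M$_{\odot}$.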
We emphasise that the labels inferred from the EPS framework are independent from the predictions of the random forest. We plot in Fig. \ref{fig:ROC_shear_density} the resulting true positive rate and false positive rate inferred from the EPS predicted labels and find that the EPS prediction lies on the ROC curve of the random forest. In other words, the random forest is able to `learn' EPS and the EPS results correspond to a $\sim 42\%$ probability threshold on the ROC curve. Machine learning adds the flexibility to trade contamination for completeness along the ROC curve as we vary the probability threshold. Instead, EPS results in a single point in true positive rate-false positive rate space since it gives a single prediction for each particle rather than a probability associated with a class. \subsection{Physical Interpretation} The algorithm's performance depends on whether or not the input features contain relevant information to separate particles between classes. For example, the ideal feature would split a set of particles into two pure sets, each containing only particles of one class. By contrast, irrelevant features are not able to distinguish between classes, yielding a poor class separation in the two resulting sets. Therefore, we can determine which features contain the most information in mapping particles into the correct halo mass range, based on their ability to separate classes when training the random forest. There are many metrics designed to measure the relevance of the inputs to a machine learning algorithm; here we use \emph{feature importances} \citep{f-imp}. 
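The impurity-based importances defined below are what scikit-learn's random forest implementation exposes as \texttt{feature\_importances\_}; a minimal sketch on synthetic data, where the array sizes and the single informative feature are illustrative stand-ins for the density features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_particles, n_features = 1000, 10   # synthetic stand-in for the 50 density features
X = rng.normal(size=(n_particles, n_features))
# Make the class depend almost entirely on feature 3, so it should rank highly:
y = (X[:, 3] + 0.1 * rng.normal(size=n_particles) > 0).astype(int)

forest = RandomForestClassifier(n_estimators=50, criterion="entropy",
                                random_state=0).fit(X, y)
importances = forest.feature_importances_  # already normalised to sum to 1
print(np.argmax(importances))
```

The error bars on the importances in Fig. \ref{fig:density_importances} can be estimated by repeating the fit with independently drawn training sets, as described in the text.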
The importance of a feature $X$ is a weighted sum of the impurity decrease\footnote{We use Shannon entropy to measure the impurity at a node, $i_E(t) = - \sum\limits_{j=1}^c p(j,t) \log_2 p(j,t)$, where $p(j,t)$ is the proportion of particles that belong to class $j$ at node $t$ and $c$ is the total number of classes.} at all nodes $t$ where the feature is used, averaged over all trees $T$ in the forest: \begin{equation} \mathrm{Imp(X)} = \frac{1}{N_T}\sum_{T} \sum_{t \in T} p(t) \Delta i (t), \end{equation} where $N_T$ is the number of trees, $p(t)$ is the fraction of particles reaching node $t$ and $\Delta i(t)$ is the impurity decrease, i.e. the difference in entropy between the parent node and the child nodes. We calculate the relative importances in the density feature set to find the most relevant features in distinguishing between the IN and OUT classes. Fig. \ref{fig:density_importances} shows the relative importance of each density feature as a function of its smoothing mass scale. The importances are normalised such that the sum of all importances is $1$ and the errors are computed by training the random forest multiple times, each with a randomly drawn set of training particles. The largest halo mass in the simulation is marked by a grey line. We find that most of the information lies in mass ranges of $10^{12}$ -- $10^{13}~\mathrm{M}_{\odot}$, just above the boundary between the IN and OUT classes. \begin{figure*} \includegraphics[width=0.8\textwidth]{wide_shear_density_importances.pdf} \caption{Relative importance of the density features (\textit{upper panel}), ellipticity features (\textit{middle panel}) and prolateness features (\textit{lower panel}) in the full shear and density feature set. The density features are more relevant than the ellipticity and prolateness features.
This confirms that the shear field adds little information in distinguishing whether particles will collapse in haloes of mass above the class boundary mass scale or not, compared with the density field.} \label{fig:importances_shear_density} \end{figure*} \section{Adding the tidal shear tensor} \label{sec:shear} Peaks in Gaussian random fields are inherently triaxial \citep{Doroshkevich, BBKS}. Therefore, extensions of the standard spherical model were made in order to incorporate the dynamics of ellipsoidal collapse. The impact of the tidal shear on properties of collapsed regions has been extensively studied \citep{Bond&Myers, ShethTormen, Sheth}. \citet{ShethTormen} (ST) have studied how ellipsoidal collapse modifies the mass function of dark matter haloes in the excursion set formalism. Spheres are distorted into an ellipsoid due to tidal shear effects and the collapse time of a halo therefore depends explicitly on the ellipticity and prolateness of the tidal shear field. We extended the original density feature set to incorporate additional information on the local tidal shear field around particles. We studied the impact on the halo classification performance and quantified the shear's relevance in the training process via the feature importances. The advantage of studying tidal shear effects with machine learning is that these can be straightforwardly translated into features and used as input to the same machine learning algorithm. On the other hand, analytic models usually require incorporating approximations to the tidal shear within the excursion set formalism. In general, any potentially relevant physical property can be added in the form of a feature without adding complexity to the algorithm. We will first describe how we constructed features from the tidal shear field, then present the classification results of the full density and shear feature sets. 
\subsection{Tidal shear features} The deformation tensor is given by the Hessian of the gravitational potential \begin{equation} D_{ij} = \dfrac{\partial^2 \Phi}{\partial x_i \partial x_j}, \label{eq:shear} \end{equation} where $\Phi(\textbf{x})$ is the peculiar gravitational potential at position $\textbf{x}$ and is related to the density contrast via Poisson's equation $\nabla^2 \Phi = \delta$. The ordered eigenvalues of $D_{ij}$, $\lambda_1 \geq \lambda_2 \geq \lambda_3$, can be re-parametrised in terms of the ellipticity, $e$, and prolateness, $p$ \citep{Bond&Myers}: \begin{align} e &= \dfrac{\lambda_1 - \lambda_3}{2 \delta}, \\ p &= \dfrac{\lambda_1 - 2 \lambda_2 + \lambda_3}{2 \delta}, \label{eq:ell_prol} \end{align} where $\lambda_1 + \lambda_2 + \lambda_3 = \delta$ and $\delta$ is the smoothed overdensity used as a density feature. In order to minimise redundancy between the features, we removed the density dependence from the ellipticity and prolateness. We computed the eigenvalues of the traceless deformation tensor, known as the tidal shear tensor, $t_i = \lambda_i - \delta/3$, now satisfying $t_1 + t_2 + t_3 = 0$. The ellipticity and prolateness in terms of the traceless eigenvalues $t_i$ take the form \begin{align} e_t &= t_1 - t_3, \\ p_t &= 3 \left( t_1 + t_3 \right). \label{eq:ell_prol_features} \end{align} For each particle we assigned two new features $e_t$ and $p_t$ evaluated at each smoothing mass scale. Therefore, the original $50$--dimensional feature set of density contrasts was augmented to a $150$--dimensional feature set given by the density contrast, ellipticity and prolateness. To test the robustness of random forests to a high-dimensional feature space, we used PCA to reduce the $150$--dimensional feature set to a $10$--dimensional space retaining $98\%$ of the information contained in the original feature set. We found identical predictive performance, meaning that random forests are robust to a $150$--dimensional feature set. 
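The construction of the traceless-eigenvalue features can be sketched as follows, using a toy deformation tensor (in practice $D_{ij}$ is computed from the smoothed potential at each of the $50$ mass scales):

```python
import numpy as np

def shear_features(D):
    """Ellipticity and prolateness features from a deformation tensor D.

    The eigenvalues are ordered descending, the trace delta/3 is removed
    so that t1 + t2 + t3 = 0, and the features e_t = t1 - t3 and
    p_t = 3(t1 + t3) are returned, as in the text.
    """
    lam = np.sort(np.linalg.eigvalsh(D))[::-1]  # lambda_1 >= lambda_2 >= lambda_3
    delta = lam.sum()                           # overdensity = trace of D
    t = lam - delta / 3.0                       # traceless (tidal shear) eigenvalues
    e_t = t[0] - t[2]
    p_t = 3.0 * (t[0] + t[2])
    return e_t, p_t

# Toy symmetric deformation tensor with equally spaced eigenvalues:
D = np.diag([0.9, 0.5, 0.1])
e_t, p_t = shear_features(D)
print(e_t, p_t)  # 0.8 and 0.0 (equal spacing gives zero prolateness)
```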
\subsection{Results} The ROC curve of the density and shear feature set is overplotted in Fig. \ref{fig:ROC_shear_density}. We find that adding information on the tidal shear tensor shows little improvement compared to the case of the density-only feature set. We find an improvement of only $2\%$ in the AUC of the ROC curve. Fig. \ref{fig:importances_shear_density} demonstrates the low impact of the shear features in the classification process. The three panels show the relative importance in the training process of the random forest of the density, ellipticity and prolateness features as a function of smoothing mass scales. The most relevant features are the density contrasts smoothed on mass scales in the range $10^{12}$ -- $10^{13}$ M$_{\odot}$, similar to what was found in the case of the density-only feature set (Fig. \ref{fig:density_importances}). The distributions of the density importances in the two feature sets are consistent despite minor variations in the peak and variance of the distributions. The changes are due to the change in the range of hyperparameters when increasing the dimensionality of the feature set from $50$ to $150$ features. The ellipticity and prolateness have low feature importance scores confirming that the information they contain is irrelevant to the training process of the machine learning algorithm compared with that of the density field. As with the density feature set, we can compare the machine learning predictions to existing analytic predictions based on the same set of properties of the initial conditions. The ST formalism provides a prescription to predict the final halo mass of a particle based on the density field and the shear field, which we can use to compare to the machine learning output. ST accounts for the effect of the shear field in the context of the excursion set formalism by adopting a moving collapse barrier rather than the spherical collapse barrier adopted by \citet{Bond}. 
The ST collapse barrier $b(z)$ varies as a function of the mass variance $\sigma^2(M)$ and is given by \begin{equation} b(z) = \sqrt{a} \delta_\mathrm{sc}(z) \left[ 1 + \left( \beta \dfrac{\sigma^2 (M)}{a \delta_{\mathrm{sc}}^2 (z)} \right)^{\gamma} \right], \label{eq:ST_barrier} \end{equation} where $\delta_{\mathrm{sc}} (0) \approx 1.686$, the parameters $\beta = 0.485$ and $\gamma = 0.615$ incorporate an approximation to ellipsoidal dynamics, and $a = 0.707$ is a normalisation constant. These values are the best-fit parameters found in \citet{Sheth}. The predicted halo mass of each particle follows the excursion-set framework as for the EPS case; the largest mass scale at which the particle's trajectory upcrosses the collapse barrier in Eq. \eqref{eq:ST_barrier} gives the predicted halo mass. The triangle labelled ``ST prediction'' in Fig. \ref{fig:ROC_shear_density} shows the true and false positive rates predicted by ST. In our study, the ST formalism does not yield an absolute improvement over EPS theory; the false positive rate decreases at the expense of a decrease in the true positive rate. Therefore ST predicts a less contaminated but more incomplete set of IN class particles compared to EPS, corresponding to a probability threshold of $73\%$ on the ROC curve. We find that the random forest is able to reproduce the ST result with both the density-only feature set and the shear and density feature set. This shows that there is sufficient information in the density field for the random forest to match the analytic ST prediction. Overall, we find that the shear field does not contain additional physical information that improves the classification output of the random forest. The learning process of the algorithm is predominantly driven by the local overdensity around dark matter particles and unaffected by the surrounding tidal shear.
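The moving barrier of Eq. \eqref{eq:ST_barrier} is straightforward to evaluate; a sketch using the best-fit parameters quoted above, here at $z = 0$ so that $\delta_{\mathrm{sc}} = 1.686$:

```python
import numpy as np

def st_barrier(sigma2, delta_sc=1.686, a=0.707, beta=0.485, gamma=0.615):
    """Sheth-Tormen moving collapse barrier as a function of sigma^2(M).

    The barrier tends to sqrt(a)*delta_sc as sigma^2 -> 0 (large masses)
    and rises towards small masses, where sigma^2 is large, making
    collapse harder there than in the constant spherical-collapse case.
    """
    return np.sqrt(a) * delta_sc * (1.0 + (beta * sigma2 / (a * delta_sc**2)) ** gamma)

# The barrier grows monotonically with the mass variance:
sigma2 = np.array([0.0, 1.0, 4.0])
print(st_barrier(sigma2))
```

Replacing the constant threshold $\delta_{\mathrm{th}}$ with this mass-dependent barrier in the first-upcrossing search yields the ST labels used for the comparison above.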
The analytic ST prediction, interpreted as an improvement to standard EPS due to the inclusion of tidal shear effects, can be reproduced by the random forest when trained on the density field only. In conclusion, these results show that the physical processes leading to dark matter halo formation for our choice of mass scale splitting the two classes are insensitive to tidal shear effects in the initial conditions. \begin{figure*} \includegraphics[width=\textwidth]{Figure_mass_radius_roc.pdf} \caption{\textit{Left panel}: The IN class particles are split into inner ($ r/r_{\mathrm{vir}} \leq 0.3$), mid ($0.3 < r/r_{\mathrm{vir}} \leq 0.6$) and outer ($0.6 < r/r_{\mathrm{vir}} \leq 1$) radial ranges according to their distance from the centre of the halo. The ROC curves for each category show that the classification performance improves for particles closer to the halo's centre of mass. \textit{Right panel}: The IN class particles are split into cluster-sized ($\num{1e14} \leq M_{\mathrm{halo}} /\mathrm{M}_{\odot} \leq \num{4e14}$), group-sized ($\num{1e13} \leq M_{\mathrm{halo}} /\mathrm{M}_{\odot} < \num{1e14}$) and galaxy-sized ($\num{1.2e12} \leq M_{\mathrm{halo}} /\mathrm{M}_{\odot} < \num{1e13}$) haloes and the ROC curves show the random forest's performance in classifying each category. Particles in higher mass haloes are increasingly better classified by the random forest. The ROC curve of the full test set of particles is shown as a dashed line in both panels for comparison. The EPS and ST predictions, labelled by dots and triangles respectively, are also overplotted for each halo mass and radial position category.} \label{fig:ROC_mass_radius_bins} \end{figure*} \section{Classification dependence on halo mass and radial position} \label{sec:part_properties} We now investigate how properties of particles such as the position within a halo and the halo mass affect the accuracy of classification when the algorithm is trained on density features only. 
To do this we split the test particles into categories based on their radial and halo mass properties to study their respective classification performance. First, we subdivided particles of the IN class into three mass ranges: particles in \textit{cluster}-sized haloes ($\num{1e14} \leq M_{\mathrm{halo}} /\mathrm{M}_{\odot} \leq \num{4e14}$), particles in \textit{group}-sized haloes ($\num{1e13} \leq M_{\mathrm{halo}} /\mathrm{M}_{\odot} < \num{1e14}$) and particles in \textit{galaxy}-sized haloes ($\num{1.2e12} \leq M_{\mathrm{halo}} /\mathrm{M}_{\odot} < \num{1e13}$). We combined each of these subsets in turn with all the OUT particles to form three distinct test sets. The ROC curves for the three mass range categories of haloes are shown in the right panel of Fig. \ref{fig:ROC_mass_radius_bins}, where the ROC curve of the full original test set is shown for comparison (dashed line). We find that particles in cluster-sized haloes reach an AUC of $0.913$, whilst particles in group-sized haloes and galaxy-sized haloes are increasingly difficult to classify. We overplotted the ST (triangles) and EPS (dots) predictions for each halo mass category of particles, again showing results consistent with those of the machine learning algorithm. It is likely that the decrease in performance towards lower halo masses is a result of the choice of mass scale used to split haloes into classes, $M = 1.8 \times 10^{12}~\mathrm{M}_{\odot}$. This was a necessary step in order to define the two classes of the binary classification problem. Haloes of mass just above and below the IN/OUT mass boundary belong to different classes although they originate from Lagrangian regions with similar properties, reflecting their similarity in mass. Therefore, the closer haloes of different classes are in mass, the harder it is for the random forest to distinguish whether their particles belong to one class or the other. Fig.
\ref{fig:hm_dependence} further demonstrates that haloes of mass approaching the IN/OUT mass boundary from above and below contain a larger fraction of misclassified particles. In the upper (lower) panel, we show the false positive (negative) rate, i.e. the ratio of misclassified OUT (IN) particles over all particles contained in each halo mass bin, for $4$ different probability thresholds. The true halo mass of each particle is shown on the horizontal axis in terms of its distance from the IN/OUT mass boundary. We find that the false positive and negative rates increase for particles in haloes of mass approaching the IN/OUT mass boundary. We next investigated possible correlations between the particles' position within the haloes and the random forest's classification performance. Here, we subdivided particles of the true IN class into three radial ranges, according to their radial position in the halo relative to the virial radius $r_{\mathrm{vir}}$. We defined particles in the \emph{inner radial} range ($ r/r_{\mathrm{vir}} \leq 0.3$), particles in the \emph{mid radial} range ($0.3 < r/r_{\mathrm{vir}} \leq 0.6$) and particles in the \emph{outer radial} range ($0.6 < r/r_{\mathrm{vir}} \leq 1$). As in the mass range study, each subset of particles was combined with all the OUT class particles from the original set to form three distinct sets. The left panel of Fig. \ref{fig:ROC_mass_radius_bins} shows the ROC curves for the three radial categories, together with that of the original test set again shown for comparison (dashed line). Particles in the innermost regions of haloes are the best classified by the random forest, achieving an AUC of $0.937$, which is greater than that obtained when classifying \emph{all} particles in the simulation. The classification performance of the random forest decreases as we move from the halo's centre-of-mass towards the virial radius.
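The per-category comparison can be sketched as follows, combining each subset of IN particles with all OUT particles and scoring each resulting test set separately; synthetic scores stand in for the random forest output, with the ``inner'' category deliberately given cleaner scores to mimic the trend in the text:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic stand-ins: true labels, predicted IN probabilities, and a
# per-particle category (e.g. inner/outer radial range).
n = 3000
y_true = rng.integers(0, 2, size=n)
category = np.where(rng.random(n) < 0.5, "inner", "outer")
noise = np.where(category == "inner", 0.2, 0.6)  # inner particles: cleaner scores
y_score = np.clip(y_true + noise * rng.normal(size=n), 0, 1)

# Each category's test set = that category's IN particles + all OUT particles
out_mask = y_true == 0
for cat in ("inner", "outer"):
    mask = out_mask | ((y_true == 1) & (category == cat))
    auc = roc_auc_score(y_true[mask], y_score[mask])
    print(cat, round(auc, 3))
```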
\begin{figure} \includegraphics[width=\columnwidth]{misclassified_mass_distance_prob_same.pdf} \caption{Fraction of misclassified particles in haloes of each mass bin range, where the halo mass bins are labelled as a function of their distance from the IN/OUT boundary mass scale. The upper (lower) panel shows the fraction of misclassified OUT (IN) particles, i.e. the false positive (negative) rate in each mass bin. We consider four distinct probability thresholds for assigning a particle's (IN or OUT) class, where higher thresholds imply lower contamination. The misclassification rate increases as the true mass approaches the classification boundary for all choices of the completeness-to-contamination trade-off.} \label{fig:hm_dependence} \end{figure} We first tested whether the decrease in performance when classifying particles of the outer radial range was due to under-representativeness in the training set. Indeed, if the training particles of the outer radial range are not representative of the entire simulation, the classifier's performance on the outer radial range test set would be strongly affected. To test this, we re-trained the machine learning algorithm with a training set containing an equal number of particles from each radial range category. We found ROC curves and AUCs identical to those in the left panel of Fig. \ref{fig:ROC_mass_radius_bins}, therefore excluding the possibility that the higher misclassification rate of outer radial range particles is due to non-representativeness in the training set. Another possible reason is that particles living in outer regions of haloes are more likely to have been affected by late-time halo mergers, tidal stripping or accretion events. Therefore, the final halo mass prediction for such particles is the result of a more complicated dynamical history involving these late-time effects.
Conversely, particles near the halo's centre-of-mass are less sensitive to the halo's assembly history and their final halo mass prediction correlates more strongly with the local overdensity in the initial conditions. This hypothesis could be verified by adding features sensitive to the particles' dynamical history (for instance a particle's initial distance to the nearest density peak) and testing whether this information improves the classification of particles located at the boundary of the halo's virial region. In addition to this, the further particles are from the centre of haloes, the closer they are to the boundary between the IN and OUT classes, where particles are harder to classify for the machine learning algorithm. This also translates into a larger uncertainty in the halo mass prediction for particles at the edge of haloes compared to those in the innermost regions of haloes. As a result, the overall uncertainty in the halo mass predictions of centre-of-mass particles is smaller than for particles in the outskirts of haloes. This result is also consistent with excursion set predictions, where ST demonstrated that centre-of-mass particles provide a better estimate of the final halo mass compared to inferences made from the full ensemble of particles in the simulation. To confirm this, we overplotted the EPS (dots) and ST (triangles) predictions for the three radial test sets in the left panel of Fig. \ref{fig:ROC_mass_radius_bins}, demonstrating that analytic formalisms also perform increasingly well for particles that are close to the halo's centre-of-mass. The machine learning algorithm again shows its ability to match the excursion set predictions at fixed probability thresholds for each radial range category. For completeness, we also explored the misclassification rate of OUT particles that do not belong to any halo. We find that overall these particles have very low misclassification rates compared to particles in haloes. 
For example, if we consider probability thresholds of $70\%$, $60\%$, $50\%$ and $40\%$ to assign particles to the IN class (as in the upper panel of Fig. \ref{fig:hm_dependence}), the fraction of misclassified particles among all those that do not belong to haloes is $2.45\%$, $4.3\%$, $6.58\%$ and $10.11\%$, respectively. Therefore, the OUT particles predicted by the random forest form a highly pure and complete set. In conclusion, we find that the best classified categories of particles are those which are further away from the classification boundary, both in terms of mass and radius: particles in the most massive and least massive haloes in the simulation; particles in the innermost regions of haloes; and those furthest away in voids. We further tested whether the addition of the tidal shear information could improve the classification performance of poorly classified particles, such as those in the outskirts of haloes and in galaxy-sized haloes. We find no significant improvement in the classification performance of such particles, other than the $2\%$ improvement found for the whole ensemble and reflected in each mass and radial category. \section{Blind tests on independent simulations} \label{sec:blind} Up to this point we have trained and tested the machine learning algorithm on a single dark-matter-only simulation. To test whether the machine learning algorithm trained on one simulation also gives robust results for different N-body simulations without re-training, we performed blind tests of our pipeline on two simulations independent of the one used for training. The first independent test simulation (W-Test) is a different realisation of the same WMAP5 $\Lambda$CDM cosmology adopted in the training simulation, in a box of the same size and resolution (see Sec. \ref{sec:method}).
The second independent test simulation (P-Test) is a realisation of a different cosmological model, a \textit{Planck} $\Lambda$CDM cosmology\footnote{The cosmological parameters are $\Omega_{\Lambda} = 0.6914$, $\Omega_{\mathrm{m}} = 0.3086$, $\Omega_{\mathrm{b}} = 0.045$, $\sigma_{8} = 0.831$, $h = 0.6727$, $n_s = 0.96$.} \citep{Planck2015} in a box of comoving size $L = \SI{50}{Mpc}$ containing $N=512^3$ particles. Moreover, in the P-Test simulation we identify haloes at $z=0$ using the Amiga Halo Finder (AHF) \citep{AHF2004, AHF2009}, instead of the \texttt{SUBFIND} halo finder used in both the training simulation and the W-Test simulation. This allows us to simultaneously test the sensitivity of the machine learning algorithm to the choice of halo finder. For each test simulation, we extracted the input features from the initial conditions and used the pre-trained machine learning algorithm to predict the class labels of the simulations' dark matter particles. In Fig. \ref{fig:ROC_blind} we compare the performance of the machine learning algorithm for the independent W-Test and P-Test simulations with that of the test set of particles in the training simulation. The upper panel shows the ROC curves obtained from predictions based on the density features only, whilst the lower panel shows the case of density and shear features. The machine learning algorithm produces consistent ROC curves in all three simulations for both feature sets. The P-Test simulation yields a difference in AUC relative to the training simulation of $0.2\%$ for the density-only feature set and $1.1\%$ for the density and shear feature set. For the W-Test simulation, the AUC difference relative to the training simulation is $1.3\%$ for the density-only feature set and $1.6\%$ for the density and shear feature set. Such differences between the test and training simulations are consistent with uncertainties in the AUC due to statistical noise.
\begin{figure} \includegraphics[width=\columnwidth]{ROCs_blind_tests.pdf} \caption{We perform a blind test of the trained machine learning algorithm on two independent N-body simulations: a different realisation of the WMAP5 cosmology used in the training simulation, and a realisation of a \textit{Planck} cosmological model. The ROC curves are consistent in all three simulations for both the density feature set and the density and shear feature set, with differences in the AUCs of order $\sim 1\%$. The EPS and ST predictions in each simulation match the machine learning performance at different probability thresholds, such that the ST formalism always predicts a less contaminated but more incomplete set of IN particles. These blind tests demonstrate the robustness of the results from a machine learning algorithm trained on one simulation, and applied to different realisations of the same cosmology or realisations of different cosmologies.} \label{fig:ROC_blind} \end{figure} The EPS and ST predicted labels are calculated from the first upcrossings of the particle trajectories in each simulation. In all three simulations, the machine learning algorithm is able to match the analytic predictions at different probability thresholds, such that the ST formalism consistently predicts a less contaminated but more incomplete set of IN class particles. For the W-Test simulation, the EPS and ST predictions match the machine learning predictions at probability thresholds of $41.5\%$ and $74.5\%$ respectively, differing only slightly from the $42.8\%$ and $74.7\%$ probability thresholds of the training simulation. For the P-Test simulation, the match to the EPS and ST predictions is found at the lower probability thresholds of $40\%$ and $56\%$, respectively.
This is because the change in cosmological parameters in the \textit{Planck} simulation results in a slightly lower EPS collapse barrier and a significantly lower ST collapse barrier compared to those in a WMAP5 cosmological setting. Therefore, trajectories in the P-Test simulation upcross the collapse barriers at larger smoothing mass scales, resulting in more complete but also less pure sets of predicted IN particles. The change in completeness and contamination is such that both the ST and EPS predictions still match the machine learning ROC curves of the P-Test simulation, but for lower probability thresholds than the WMAP5 simulations. We conclude that the mapping learnt by the algorithm on one simulation can be generalised to different simulations based on the same or different cosmological parameters, without the need for re-training, and that the results are insensitive to simulation settings. \section{Conclusions} \label{sec:conclusions} We have presented a machine learning approach to investigate the physics of dark matter halo formation. We trained the algorithm on N-body simulations, from which it learns to predict whether regions of an initial density field later collapse into haloes of a given mass range. This generated a mapping between the initial conditions and final haloes that would result from non-linear evolution, without the need to adopt halo collapse approximations. Our approach provided new physical insight into halo collapse, in particular in understanding which aspects of the initial linear density field contain relevant information on the formation of dark matter haloes. We provided the algorithm with a set of properties describing the local environment around dark matter particles. By studying the performance of the algorithm in response to different inputs, insights can be gained into the physics relevant to dark matter halo formation. 
When the algorithm was trained on spherical overdensities from the linear density field, we found that it matched predictions based on EPS theory. When we provided the algorithm with additional information on the tidal shear field (motivated by ellipsoidal collapse approximations), the classification performance of the machine learning algorithm was not enhanced. We showed that, for the mass threshold considered in our classification problem, the Sheth-Tormen ellipsoidal collapse model can be recovered from spherical overdensities alone, with predictions that differ from those of EPS theory only in the completeness-to-contamination trade-off. By performing blind analyses of our pipeline, we confirmed the generality of our results for independent initial conditions realisations and variations in cosmological parameters. We conclude that the linear density field contains sufficient information to predict the formation of dark matter haloes at the accuracy of existing spherical and ellipsoidal collapse analytic frameworks. While the focus of this paper has been on the density field and tidal shear field, any additional property of interest can be extracted from the initial conditions and used as input to the same machine learning algorithm. This allows for straightforward extensions of the present work to investigate the physics of dark matter halo formation further. Future work could also extend the binary classification problem presented in this work into multi-class classification or regression problems. Potential applications of such an extended framework include a new approach to obtaining a halo mass function, which can be directly tested against existing fitting formulae adopted by analytic approaches.
More sophisticated machine learning algorithms such as deep learning offer the ability to learn from the training data which features are the most relevant to cosmological structure formation, and future work will investigate their suitability for structure formation studies. \section*{Acknowledgements} LLS thanks Nina Roth for providing one of the simulations used in this work and for useful discussions. LLS was supported by the Science and Technology Facilities Council. HVP was partially supported by the European Research Council (ERC) under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement number 306478- CosmicDawn. AP was supported by the Royal Society. ML acknowledges support from the SKA, NRF and AIMS. This work was partially enabled by funding from the UCL Cosmoparticle Initiative. \bibliographystyle{mnras}
\section{Introduction} This paper considers estimation of the unknown sparse vector, of its $\ell_2$-norm and of the noise level in the sparse sequence model. The focus is on construction of estimators that are optimally adaptive in a minimax sense with respect to the noise level, to the form of the noise distribution, and to the sparsity. We consider the model defined as follows. Let the signal ${\boldsymbol \t}=(\theta_1,\ldots,\theta_d)$ be observed with noise of unknown magnitude $\sigma>0$: \begin{equation}\label{model} Y_i = \theta_i + \sigma\xi_i, \quad i=1,\ldots,d. \end{equation} The noise random variables $\xi_1,\ldots,\xi_d$ are assumed to be i.i.d. and we denote by $P_\xi$ the unknown distribution of $\xi_1$. We assume throughout that the noise is zero-mean, $\mathbf E(\xi_1)=0$, and that $\mathbf E(\xi_1^2)=1$, since $\sigma$ needs to be identifiable. We denote by $\mathbf P_{{\boldsymbol \t},P_\xi,\sigma}$ the distribution of ${\boldsymbol Y}=(Y_1,\dots,Y_d)$ when the signal is ${\boldsymbol \t}$, the noise level is $\sigma$ and the distribution of the noise variables is $P_\xi$. We also denote by $\mathbf E_{{\boldsymbol \t},P_\xi,\sigma}$ the expectation with respect to $\mathbf P_{{\boldsymbol \t},P_\xi,\sigma}$. We assume that the signal ${\boldsymbol \t}$ is $s$-sparse, \textit{i.e., } \begin{equation*} \|\boldsymbol{\theta}\|_0 = \sum_{i=1}^d \mathds{1}_{\theta_i\neq0} \le s, \end{equation*} where $s\in \{1,\dots, d\}$ is an integer. Set $\Theta_s=\{{\boldsymbol \t}\in\mathbb R^d\,|\,\|{\boldsymbol \t}\|_0\le s\}$. We consider the problems of estimating ${\boldsymbol \t}$ under the $\ell_2$ loss, estimating the variance $\sigma^2$, and estimating the $\ell_2$-norm $$\|\boldsymbol{\theta}\|_2 = \Big(\sum_{i=1}^d \theta_i^2\Big)^{1/2}.$$ The classical Gaussian sequence model corresponds to the case where the noise $\xi_i$ is standard Gaussian ($P_\xi={\mathcal N}(0,1)$) and the noise level $\sigma$ is known. 
Then, the optimal rate of estimation of ${\boldsymbol \t}$ under the $\ell_2$ loss in a minimax sense on the class $\Theta_s$ is $\sqrt{s\log(ed/s)}$ and it is attained by thresholding estimators \cite{DJ}. Also, for the Gaussian sequence model with known $\sigma$, minimax optimal estimator of the norm $\|{\boldsymbol \t}\|_2$ as well as the corresponding minimax rate are available from~\cite{CollierCommingesTsybakov2017} (see Table 1). In this paper, we study estimation of the three objects ${\boldsymbol \t}$, $\|\boldsymbol{\theta}\|_2$, and $\sigma^2$ in the following two settings. \begin{itemize} \item[(a)] {\it The distribution of $\xi_i$ and the noise level $\sigma$ are both unknown.} This is the main setting of our interest. For the unknown distribution of $\xi_i$, we consider two types of assumptions. Either $P_\xi$ belongs to a class $\mathcal{G}_{a,\tau}$, \textit{i.e., } for some $a,\tau>0$, \begin{equation}\label{definition_subgaussian} P_\xi\in\mathcal{G}_{a,\tau} \quad \text{ iff} \quad \mathbf E(\xi_1)=0, \ \mathbf E(\xi_1^2)=1 \ \text{and} \quad \forall t\ge2, \ \mathbf P\big(|\xi_1|>t\big) \le 2 e^{-(t/\tau)^a}, \end{equation} which includes for example sub-Gaussian distributions ($a=2$), or to a class of distributions with polynomially decaying tails $\mathcal{P}_{a,\tau}$, \textit{i.e., } for some $\tau>0$ and $a\ge 2$, \begin{equation}\label{definition_polynomial} P_\xi\in\mathcal{P}_{a,\tau} \quad \text{ iff } \quad \mathbf E(\xi_1)=0, \ \mathbf E(\xi_1^2)=1 \ \text{and} \quad \forall t\ge2, \ \mathbf P\big(|\xi_1|> t) \le \Big(\frac{\tau}{t}\Big)^a. \end{equation} We propose estimators of ${\boldsymbol \t}$, $\|\boldsymbol{\theta}\|_2$, and $\sigma^2$ that are optimal in non-asymptotic minimax sense on these classes of distributions and the sparsity class $\Theta_s$. We establish the corresponding non-asymptotic minimax rates. They are given in the second and third columns of Table~1. We also provide the minimax optimal estimators. 
\item[(b)] {\it Gaussian noise $\xi_i$ and unknown $\sigma$.} The results on the non-asymptotic minimax rates are summarized in the first column of Table 1. Notice an interesting effect -- the rates of estimation of $\sigma^2$ and of the norm $\|{\boldsymbol \t}\|_2$ when the noise is Gaussian are faster than the optimal rates when the noise is sub-Gaussian. This can be seen by comparing the first column of Table 1 with the particular case $a=2$ of the second column corresponding to sub-Gaussian noise. \end{itemize}
\begin{table}[h!]
\begin{tabular}{|c|c|c|c|}\hline
 & Gaussian noise model & Noise in class $\mathcal{G}_{a,\tau}$ & Noise in class $\mathcal{P}_{a,\tau}$\\ \hline
${\boldsymbol \t}$ & $\sqrt{s\log(ed/s)}$ & $\sqrt{s}\log^{\frac1a}(ed/s)$ & $\sqrt{s}(d/s)^{\frac1a}$\\
 & known $\sigma$ \cite{DJ} & & \\
 & unknown $\sigma$ \cite{Verzelen2012} & unknown $\sigma$ & unknown $\sigma$\\ \hline
$\|{\boldsymbol \t}\|_2$ & $\sqrt{s\log(1+\frac{\sqrt{d}}{s})} \wedge d^{1/4}$ & $\sqrt{s}\log^{\frac1a}(ed/s)\wedge d^{1/4}$ & $\sqrt{s}(d/s)^{\frac1a} \wedge d^{1/4}$\\
 & known $\sigma$ \cite{CollierCommingesTsybakov2017} & known $\sigma$ & known $\sigma$\\
 & $\sqrt{s\log(1+\frac{\sqrt{d}}{s})} \vee \sqrt{\frac{s}{1+\log_+(s^2/d)}}$ & $\sqrt{s}\log^{\frac1a}(ed/s)$ & $\sqrt{s}(d/s)^{\frac1a}$\\
 & unknown $\sigma$ & unknown $\sigma$ & unknown $\sigma$\\ \hline
$\sigma^2$ & $\displaystyle{\frac1{\sqrt{d}} \vee \frac{s}{d(1+\log_+(s^2/d))}}$ & $\displaystyle{\frac1{\sqrt{d}}\vee \frac{s}{d}\log^{\frac{2}{a}}\left(\frac{ed}{s}\right)}$ & $\displaystyle{\frac1{\sqrt{d}} \vee \Big(\frac{s}{d}\Big)^{1-\frac2a}}$\\ \hline
\end{tabular}
\vspace{3mm}
\caption{\rm Optimal rates of convergence.}
\label{tab:table1}
\end{table}
Some comments about Table 1 and additional details are in order. \begin{itemize} \item The difference between the minimax rates for estimation of ${\boldsymbol \t}$ and estimation of the $\ell_2$-norm $\|{\boldsymbol \t}\|_2$ turns out to be specific for the pure Gaussian noise model. It disappears for the classes $\mathcal{G}_{a,\tau}$ and $\mathcal{P}_{a,\tau}$. This is somewhat unexpected since $\mathcal{G}_{2,\tau}$ is the class of sub-Gaussian distributions, and it turns out that $\|{\boldsymbol \t}\|_2$ is estimated optimally at different rates for sub-Gaussian and pure Gaussian noise. Another conclusion is that if the noise is not Gaussian and $\sigma$ is unknown, the minimax rate for $\|{\boldsymbol \t}\|_2$ does not have an elbow between the "dense" ($s>\sqrt{d}$) and the "sparse" ($s\le \sqrt{d}$) zones. \item For the problem of estimation of variance $\sigma^2$ with {\it known} distribution of the noise $P_\xi$, we consider a more general setting than (b) mentioned above. We show that when the noise distribution is exactly known (and satisfies a rather general assumption, not necessarily Gaussian -- it can have polynomial tails), then the rate of estimation of $\sigma^2$ can be as fast as $\max\left(\frac1{\sqrt{d}}, \frac{s}{d}\right)$, which is faster than the optimal rate $\max\left(\frac1{\sqrt{d}}, \frac{s}{d}\log\left(\frac{ed}{s}\right)\right)$ for the class of sub-Gaussian noise. In other words, the phenomenon of improved rate is not due to the Gaussian character of the noise but rather to the fact that the noise distribution is known. \item Our findings show that there is a dramatic difference between the behavior of optimal estimators of ${\boldsymbol \t}$ in the sparse sequence model and in the sparse linear regression model with "well spread" regressors.
It is known from \cite{Gautier2013, Belloni2014} that in sparse linear regression with "well spread" regressors (that is, having positive variance), the rates of estimating ${\boldsymbol \t}$ are the same for the noise with sub-Gaussian and polynomial tails. We show that the situation is quite different in the sparse sequence model, where the optimal rates are much slower and depend on the polynomial index of the noise. \item The rates shown in Table 1 for the classes $\mathcal{G}_{a,\tau}$ and $\mathcal{P}_{a,\tau}$ are achieved by estimators that are adaptive to the sparsity index $s$. Thus, knowing or not knowing $s$ does not influence the optimal rates of estimation when the distribution of $\xi$ and the noise level are unknown. \end{itemize} We conclude this section by a discussion of related work. \citet*{ChenGaoRen2015} explore the problem of robust estimation of the variance and of the covariance matrix under Huber's contamination model. As explained in Section \ref{sec:variance} below, this problem has similarities with estimation of the noise level in our setting. The main difference is that, instead of fixing in advance the Gaussian nominal distribution of the contamination model, we assume that it belongs to a class of distributions, such as \eqref{definition_subgaussian} or \eqref{definition_polynomial}. Therefore, the corresponding results in Section \ref{sec:variance} can be viewed as results on robust estimation of scale where, in contrast to the classical setting, we are interested in adaptation to the unknown nominal law. Another aspect of robust estimation of scale is analyzed by \citet{MinskerWei2017}, who consider classes of distributions similar to $\mathcal{P}_{a,\tau}$ rather than the contamination model. The main aim in \cite{MinskerWei2017} is to construct estimators having sub-Gaussian deviations under weak moment assumptions.
Our setting is different in that we consider the sparsity class $\Theta_s$ of vectors ${\boldsymbol \t}$ and the rates that we obtain depend on $s$. Estimation of the variance in the sparse linear model is discussed in \cite{SunZhang2012}, where some upper bounds on the rates are given. We also mention the recent paper \cite{GolubevKrymova2017} that deals with estimation of the variance in linear regression in a framework that does not involve sparsity, as well as the work on estimation of signal-to-noise ratio functionals in settings involving sparsity \cite{VerzelenGassiat2018,GuoCai2018} and not involving sparsity \cite{JansonBarberCandes2017}. Papers \cite{CollierCommingesTsybakovVerzelen2017,CarpentierVerzelen2019} discuss estimation of functionals other than the $\ell_2$-norm $\|{\boldsymbol \t}\|_2$ in the sparse vector model when the noise is Gaussian with unknown variance. {\bf Notation.} For $x>0$, let $\lfloor x \rfloor$ denote the maximal integer smaller than $x$. For a finite set $A$, we denote by $|A|$ its cardinality. Let $\inf_{\hat{T}}$ denote the infimum over all estimators. The notation $C$, $C^{\prime}$, $c$, $c^{\prime}$ will be used for positive constants that can depend only on $a$ and $\tau$ and can vary from line to line. \section{Estimation of sparse vector ${\boldsymbol \t}$}\label{sec:vector} In this section, we study the problem of estimating a sparse vector ${\boldsymbol \t}$ in the $\ell_2$-norm when the noise level $\sigma$ and the distribution of $\xi_i$ are both unknown. We only assume that the noise distribution belongs to a given class, which can be either a class of distributions with polynomial tails, $\mathcal{P}_{a,\tau}$, or a class $\mathcal{G}_{a,\tau}$ with exponential decay of the tails. First, we introduce a preliminary estimator $\tilde{\sigma}^2$ of $\sigma^2$ that will be used to define an estimator of~${\boldsymbol \t}$. Let $\gamma\in(0,1/2]$ be a constant that will be chosen small enough and depending only on $a$ and~$\tau$.
Divide $\{ 1,\ldots, d\}$ into $m=\lfloor \gamma d \rfloor$ disjoint subsets $B_1,\dots,B_m$, each of cardinality $|B_i|\ge k:=\lfloor d/m \rfloor \ge {1}/{\gamma}-1$. Consider the median-of-means estimator \begin{equation}\label{mom} \tilde{\sigma}^2 = {\sf med}(\bar{\sigma}_1^2,\dots, \bar{\sigma}_m^2), \ \text{where} \ \bar{\sigma}_i^2= \frac{1}{|B_i|} \sum_{j\in B_i} Y_j^2, \quad i=1,\dots,m. \end{equation} Here, ${\sf med}(\bar{\sigma}_1^2,\dots, \bar{\sigma}_m^2)$ denotes the median of $\bar{\sigma}_1^2,\dots, \bar{\sigma}_m^2$. The next proposition shows that the estimator $\tilde{\sigma}^2$ recovers $\sigma^2$ to within a constant factor. \begin{proposition}\label{proposition_over} { Let $\tau>0,a>2$. There exist constants $\gamma\in(0,1/2]$, $c>0$ and $C>0$ depending only on $a$ and $\tau$ such that for any integers $s$ and $d$ satisfying $1\le s< \lfloor \gamma d\rfloor /4$ we have \begin{equation*} \inf_{P_\xi\in\mathcal{P}_{a,\tau}} \inf_{\sigma>0} \inf_{\|{\boldsymbol \t}\|_0\le s}\mathbf P_{{\boldsymbol \t},P_\xi,\sigma} \Big( 1/2\le \frac{\tilde{\sigma}^2}{\sigma^2}\le 3/2\Big) \ge 1-\exp(-c d), \end{equation*} \begin{equation*} \sup_{P_\xi\in\mathcal{P}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s}\mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \left| \tilde{\sigma}^2 - \sigma^2 \right| \le C\sigma^{2}, \end{equation*} and for $a>4$, \begin{equation*} \sup_{P_\xi\in\mathcal{P}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s}\mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \left( \tilde{\sigma}^2 - \sigma^2 \right)^{2} \le C\sigma^{4}. \end{equation*} } \end{proposition} Note that the result of Proposition~\ref{proposition_over} also holds for the class $\mathcal{G}_{a,\tau}$ for all $a>0$ and $\tau>0$. Indeed, $\mathcal{G}_{a,\tau}\subset\mathcal{P}_{a,\tau}$ for all $a> 2$ and $\tau>0$, while for any $0<a\le 2$ and $\tau>0$, there exist $a'> 4$ and $\tau'>0$ such that $\mathcal{G}_{a,\tau}\subset\mathcal{P}_{a',\tau'}$. 
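For concreteness, the median-of-means construction \eqref{mom} can be sketched in a few lines of Python. This is a minimal illustration, not code accompanying the paper; the value of $\gamma$ and the test signal below are arbitrary choices made only for the demonstration.

```python
import numpy as np

def median_of_means_variance(y, gamma=0.1):
    """Median-of-means estimator of sigma^2: split {1,...,d} into
    m = floor(gamma*d) disjoint blocks, average Y_j^2 within each
    block, and return the median of the block means."""
    y = np.asarray(y, dtype=float)
    d = y.size
    m = max(1, int(gamma * d))
    blocks = np.array_split(np.arange(d), m)   # near-equal disjoint blocks
    block_means = np.array([np.mean(y[b] ** 2) for b in blocks])
    return float(np.median(block_means))

# Illustrative use on a 5-sparse signal with heavy-tailed (Student t) noise:
rng = np.random.default_rng(0)
d, s, sigma = 1000, 5, 2.0
theta = np.zeros(d); theta[:s] = 10.0          # the s "outliers"
xi = rng.standard_t(df=5, size=d)
xi /= np.sqrt(5 / 3)                           # rescale so E[xi^2] = 1
y = theta + sigma * xi
est = median_of_means_variance(y)              # roughly sigma^2 = 4
```

The few blocks contaminated by the non-zero $\theta_i$ are voted out by the median, which is exactly the robustness mechanism behind Proposition~\ref{proposition_over}.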
We further note that assuming $s< cd$ for some $0<c<1$ is natural in the context of variance estimation since $\sigma$ is not identifiable when $s=d$. In what follows, all upper bounds on the risks of estimators will be obtained under this assumption. Consider now an estimator ${\hat {\boldsymbol \t}}$ defined as follows: \begin{equation}\label{def_estimateur_mom} \hat{{\boldsymbol \t}} \in \text{arg}\min_{{\boldsymbol \t}\in \mathbb R^d} \Big(\sum_{i=1}^d (Y_i-\theta_i)^2+\tilde{\sigma}\|{\boldsymbol \t}\|_*\Big). \end{equation} Here, $\|\cdot\|_*$ denotes the sorted $\ell_1$-norm: \begin{equation}\label{sorted_norm} \|{\boldsymbol \t}\|_*=\sum_{i=1}^d \lambda_i |\theta|_{(d-i+1)}, \end{equation} where $|\theta|_{(1)}\le \cdots\le |\theta|_{(d)}$ are the order statistics of $|\theta_1|,\ldots,|\theta_d|$, and $\lambda_1\ge\cdots\ge\lambda_d>0$ are tuning parameters. Set \begin{equation}\label{opt_rate} \phi_{\sf exp}^*(s,d)=\sqrt{s}\log^{1/a}(ed/s), \qquad \phi_{\sf pol}^*(s,d)= \sqrt{s}(d/s)^{1/a}. \end{equation} The next theorem shows that $\hat{{\boldsymbol \t}}$ estimates ${\boldsymbol \t}$ with the rates $\phi_{\sf exp}^*(s,d)$ and $\phi_{\sf pol}^*(s,d)$ when the noise distribution belongs to the class $\mathcal{G}_{a,\tau}$ and to the class $\mathcal{P}_{a,\tau}$, respectively. \begin{theorem}\label{theorem_adaptiveupperbound} Let $s$ and $d$ be integers satisfying $1\le s< \lfloor \gamma d\rfloor/4$, where $\gamma\in(0,1/2]$ is the tuning parameter in the definition of $\tilde \sigma^2$. Then for the estimator $\hat{{\boldsymbol \t}}$ defined by \eqref{def_estimateur_mom} the following holds. \begin{enumerate} { \item Let $\tau>0$, $a>0$.
There exist constants $c,C>0$ and $\gamma\in(0,1/2]$ depending only on $(a,\tau)$ such that if $\lambda_j=c \log^{1/a}(ed/j), j=1,\ldots, d,$ we have \begin{equation*} \sup_{ P_\xi\in\mathcal{G}_{a,\tau}}\sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \left(\|\hat{{\boldsymbol \t}}-{\boldsymbol \t}\|^{2}_2 \right) \le C\sigma^{2}\left(\phi_{\sf exp}^{*}(s,d)\right)^2. \end{equation*} \item Let $\tau>0,a>2$. There exist constants $c,C>0$ and $\gamma\in(0,1/2]$ depending only on $(a,\tau)$ such that if $\lambda_j=c ({d}/{j})^{1/a}, j=1,\ldots, d,$ we have \begin{equation*} \sup_{ P_\xi\in\mathcal{P}_{a,\tau}}\sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \left(\|\hat{{\boldsymbol \t}}-{\boldsymbol \t}\|^{2}_2 \right) \le C\sigma^{2}\left(\phi_{\sf pol}^{*}(s,d)\right)^2. \end{equation*} } \end{enumerate} \end{theorem} Furthermore, it follows from the lower bound of Theorem~\ref{theorem_lowerbound_norm_subgaussian} in Section \ref{sec:l2_norm} that the rates $\phi_{\sf exp}^*(s,d)$ and $\phi_{\sf pol}^*(s,d)$ cannot be improved in a minimax sense. Thus, the estimator $\hat{{\boldsymbol \t}}$ defined in \eqref{def_estimateur_mom} achieves the optimal rates in a minimax sense. From Theorem \ref{theorem_adaptiveupperbound}, we can conclude that the optimal rate $\phi_{\sf pol}^*$ under polynomially decaying noise is very different from the optimal rate $\phi_{\sf exp}^*$ under exponential tails, in particular, from the rate under the sub-Gaussian noise. At first sight, this phenomenon seems to contradict some results in the literature on sparse regression model. Indeed, \citet*{Gautier2013} consider sparse linear regression with unknown noise level $\sigma$ and show that the Self-Tuned Dantzig estimator can achieve the same rate as in the case of Gaussian noise (up to a logarithmic factor) under the assumption that the noise is symmetric and has only a bounded moment of order $a>2$. 
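As an aside, the sorted $\ell_1$ penalty \eqref{sorted_norm} and the two weight sequences used in Theorem~\ref{theorem_adaptiveupperbound} are straightforward to evaluate. The sketch below is purely illustrative (the constant $c$, which the theory fixes only up to its dependence on $a$ and $\tau$, is set to $1$ by default):

```python
import numpy as np

def sorted_l1_norm(theta, lam):
    """Sorted l1 norm: sum_i lam_i * |theta|_(d-i+1), i.e. the largest
    weight lam_1 multiplies the largest magnitude |theta|_(d)."""
    lam = np.asarray(lam, dtype=float)
    mags = np.sort(np.abs(theta))[::-1]   # magnitudes, decreasing
    return float(np.dot(lam, mags))

def weights_exp(d, a, c=1.0):
    """Weights lambda_j = c * log^{1/a}(e*d/j) for the class G_{a,tau}."""
    j = np.arange(1, d + 1)
    return c * np.log(np.e * d / j) ** (1.0 / a)

def weights_pol(d, a, c=1.0):
    """Weights lambda_j = c * (d/j)^{1/a} for the class P_{a,tau}."""
    j = np.arange(1, d + 1)
    return c * (d / j) ** (1.0 / a)
```

Both sequences are non-increasing in $j$, as required in \eqref{sorted_norm}; with constant weights the penalty reduces to an ordinary (scaled) $\ell_1$ norm.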
\citet*{Belloni2014} show for the same model that a square-root Lasso estimator achieves analogous behavior under the assumption that the noise has a bounded moment of order $a>2$. However, a crucial condition in \cite{Belloni2014} is that the design is "well spread", that is all components of the design vectors are random with positive variance. The same type of condition is needed in \cite{Gautier2013} to obtain a sub-Gaussian rate. This condition of "well spreadness" is not satisfied in the sparse sequence model that we are considering here. In this model viewed as a special case of linear regression, the design is deterministic, with only one non-zero component. We see that such a degenerate design turns out to be the least favorable from the point of view of the convergence rate, while the "well spread" design is the best one. An interesting general conclusion of comparing our findings to \cite{Gautier2013} and \cite{Belloni2014} is that the optimal rate of convergence of estimators under sparsity when the noise level is unknown depends dramatically on the properties of the design. There is a whole spectrum of possibilities between the degenerate and "well spread" designs where a variety of new rates can arise depending on the properties of the design. Studying them remains an open problem. \section{Estimation of the $\ell_2$-norm}\label{sec:l2_norm} In this section, we consider the problem of estimation of the $\ell_2$-norm of a sparse vector when the variance of the noise and the form of its distribution are both unknown. We show that the rates $\phi_{\sf exp}^*(s,d)$ and $\phi_{\sf pol}^*(s,d)$ are optimal in a minimax sense on the classes $\mathcal{G}_{a,\tau}$ and $\mathcal{P}_{a,\tau}$, respectively. We first provide a lower bound on the risks of any estimators of the $\ell_2$-norm when the noise level $\sigma$ is unknown and the unknown noise distribution $P_\xi$ belongs either to $\mathcal{G}_{a,\tau}$ or $\mathcal{P}_{a,\tau}$. 
{ We denote by $\mathcal L$ the set of all monotone non-decreasing functions $\ell:[0, \infty)\to [0, \infty)$ such that $\ell(0)=0$ and $\ell\not\equiv 0$. } \begin{theorem}\label{theorem_lowerbound_norm_subgaussian} Let $s,d$ be integers satisfying $1\le s\le d$. Let $\ell(\cdot)$ be any loss function in the class $\mathcal L$. Then, for any $a>0,\tau>0$, \begin{equation}\label{lowerbound1:norm} \inf_{\hat{T}} \sup_{P_\xi\in\mathcal{G}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\,\ell \Big( c(\phi_{\sf exp}^*(s,d))^{-1} \Big| \frac{\hat{T}-\|{\boldsymbol \t}\|_2}{\sigma}\Big| \Big)\ge c', \end{equation} and, for any $a\ge 2,\tau>0$, \begin{equation}\label{lowerbound2:norm} \inf_{\hat{T}} \sup_{P_\xi\in\mathcal{P}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\,\ell \Big( \bar c(\phi_{\sf pol}^*(s,d))^{-1} \Big| \frac{\hat{T}-\|{\boldsymbol \t}\|_2}{\sigma}\Big| \Big)\ge \bar c'. \end{equation} Here, $\inf_{\hat{T}}$ denotes the infimum over all estimators, and $c, \bar c>0$, $c', \bar c'>0$ are constants that can depend only on $\ell(\cdot)$, $\tau$ and $a$. \end{theorem} The lower bound~\eqref{lowerbound2:norm} implies that the rate of estimation of the $\ell_2$-norm of a sparse vector deteriorates dramatically if the bounded moment assumption is imposed on the noise instead, for example, of the sub-Gaussian assumption. Note also that \eqref{lowerbound1:norm} and \eqref{lowerbound2:norm} immediately imply lower bounds with the same rates $\phi_{\sf exp}^* $ and $\phi_{\sf pol}^*$ for the estimation of the $s$-sparse vector ${\boldsymbol \t}$ under the $\ell_2$-norm. 
Given the upper bounds of Theorem \ref{theorem_adaptiveupperbound}, the lower bounds~\eqref{lowerbound1:norm} and~\eqref{lowerbound2:norm} are tight for the quadratic loss, and are achieved by the following plug-in estimator independent of $s$ or $\sigma$: \begin{equation}\label{definition_norm_mom} \hat{N} = \|\hat{{\boldsymbol \t}}\|_2 \end{equation} where $\hat{{\boldsymbol \t}}$ is defined in~\eqref{def_estimateur_mom}. In conclusion, when both $P_\xi$ and $\sigma$ are unknown, the rates $\phi_{\sf exp}^*$ and $\phi_{\sf pol}^*$ defined in \eqref{opt_rate} are minimax optimal both for estimation of ${\boldsymbol \t}$ and of the norm $\|{\boldsymbol \t}\|_2$. We now compare these results with the findings in~\cite{CollierCommingesTsybakov2017} regarding the (nonadaptive) estimation of $\|{\boldsymbol \t}\|_2$ when $\xi_i$ have the standard Gaussian distribution ($P_\xi = {\cal N}(0,1)$) and $\sigma$ is known. It is shown in~\cite{CollierCommingesTsybakov2017} that in this case the optimal rate of estimation of $\|{\boldsymbol \t}\|_2$ has the form $$ \phi_{{\cal N}(0,1)}(s,d)= \min\left\{\sqrt{s\log(1+\sqrt{d}/s)},d^{1/4}\right\}. $$ Namely, the following proposition holds. \begin{proposition}[Gaussian noise, known $\sigma$ \cite{CollierCommingesTsybakov2017}]\label{prop:lower:gaussian} For any $\sigma>0$ and any integers $s,d$ satisfying $1\le s\le d$, we have \begin{equation*} c \sigma^2 \phi_{{\cal N}(0,1)}^2(s,d)\le \inf_{\hat{T}} \sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},{\cal N}(0,1),\sigma} \big(\hat{T}-\|{\boldsymbol \t}\|_2\big)^2 \le C \sigma^2 \phi_{{\cal N}(0,1)}^2(s,d), \end{equation*} where $c>0$ and $C>0$ are absolute constants and $\inf_{\hat{T}}$ denotes the infimum over all estimators. \end{proposition} We have seen that, in contrast to this result, in the case of unknown $P_\xi$ and $\sigma$ the optimal rates \eqref{opt_rate} do not exhibit an elbow at $s=\sqrt{d}$ between the "sparse" and "dense" regimes.
Another conclusion is that, in the "dense" zone $s>\sqrt{d}$, adaptation to $P_\xi$ and $\sigma$ is only possible with a significant deterioration of the rate. On the other hand, for the sub-Gaussian class $\mathcal{G}_{2,\tau}$, in the "sparse" zone $s\le \sqrt{d}$ the non-adaptive rate $\sqrt{s\log(1+\sqrt{d}/s)}$ differs only slightly from the adaptive sub-Gaussian rate $\sqrt{s\log(ed/s)}$; in fact, this difference in the rate appears only in a vicinity of $s=\sqrt{d}$. A natural question is whether such a deterioration of the rate is caused by the ignorance of $\sigma$ or by the ignorance of the distribution of $\xi_i$ within the sub-Gaussian class $\mathcal{G}_{2,\tau}$. The answer is that both are responsible. It turns out that if only one of the two ingredients ($\sigma$ or the noise distribution) is unknown, then a rate faster than the adaptive sub-Gaussian rate $\phi_{\sf exp}^*(s,d) = \sqrt{s\log(ed/s)}$ can be achieved. This is detailed in the next two propositions. { Consider first the case of Gaussian noise and unknown $\sigma$. Set $$ \phi_{{\cal N}(0,1)}^*(s,d)= \max\left\{\sqrt{s\log(1+\sqrt{d}/s)},\sqrt{\frac{s}{1+ \log_+(s^{2}/d)}}\right\}, $$ where $\log_+(x)=\max(0,\log(x))$ for any $x>0$. We divide the set $\{1, \dots, d\}$ into two disjoint subsets $I_{1}$ and $I_{2}$ with $\min\left(|I_{1}|,|I_{2}|\right)\geq \lfloor {d}/{2}\rfloor$. Let $\hat{\sigma}^{2}$ be the variance estimator defined by \eqref{definition_noisevarianceestimator_gauss}, cf. Section \ref{sec:median} below, and let $\hat{\sigma}^{2}_{\sf med,1}, \hat{\sigma}^{2}_{\sf med,2}$ be the median estimators \eqref{definition_median} corresponding to the samples $(Y_{i})_{i \in I_{1}}$ and $(Y_{i})_{i \in I_{2}}$, respectively. 
Consider the estimator \begin{equation}\label{eq:C} \hat{N}^* = \left\{ \begin{array}{lcl} \sqrt{ \Big| \sum_{j=1}^d (Y_j^2~\mathds{1}_{\{ |Y_j|>\rho_{j} \}}) -d\alpha \hat{\sigma}^2\Big|}& \text{if}& s\le \sqrt{d},\\ \sqrt{ \Big| \sum_{j=1}^d Y_j^2 -d \hat{\sigma}^2\Big|}\phantom{~\mathds{1}_{\{ |Y_j|>\ \rho \}}}& \text{if} & s> \sqrt{d}, \end{array} \right. \end{equation} where $\rho_{j}= 2 \hat{\sigma}_{\sf med,1} \sqrt{2\log (1+d/s^2)}$ if $j \in I_{2}$, $\rho_{j}= 2 \hat{\sigma}_{\sf med,2} \sqrt{2\log (1+d/s^2)}$ if $j \in I_{1}$ and $\alpha = \mathbf E\left(\xi_1^2~\mathds{1}_{\{ |\xi_1|>2 \sqrt{2\log (1+d/s^2)} \}}\right)$. Note that $Y_j$ is independent of $\rho_j$ for every $j$. Note also that the estimator $\hat{N}^*$ depends on the preliminary estimator ${\tilde \sigma}^2$ since $\hat{\sigma}>0$ defined in \eqref{definition_noisevarianceestimator_gauss} depends on it. } \begin{proposition}[Gaussian noise, unknown $\sigma$]\label{prop:norm:gauss} The following two properties hold. \begin{itemize} \item[(i)] { Let $s$ and $d$ be integers satisfying $1\le s< \lfloor \gamma d\rfloor/4$, where $\gamma\in(0,1/2]$ is the tuning parameter in the definition of $\tilde \sigma^2$. There exist absolute constants $C>0$ and $\gamma\in(0,1/2]$ such that \begin{equation*}\label{upperbound:norm:gauss} \sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t}, {\cal N}(0,1),\sigma} \left( \hat{N}^*-\|{\boldsymbol \t}\|_2 \right)^{2} \le C\sigma^{2}\left(\phi_{{\cal N}(0,1)}^{*}(s,d)\right)^2. \end{equation*} \item[(ii)] Let $s$ and $d$ be integers satisfying $1\le s\le d$ and let $\ell(\cdot)$ be any loss function in the class $\mathcal L$. 
Then, \begin{equation*}\label{lowerbound:norm:gauss} \inf_{\hat{T}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},{\cal N}(0,1),\sigma}\,\ell \bigg( c(\phi_{{\cal N}(0,1)}^*(s,d))^{-1} \bigg|\frac{ \hat{T}-\|{\boldsymbol \t}\|_2}{\sigma}\bigg| \bigg)\ge c^{\prime}, \end{equation*} where $\inf_{\hat{T}}$ denotes the infimum over all estimators, and $c>0$, $c^{\prime}>0$ are constants that can depend only on $\ell(\cdot)$. } \end{itemize} \end{proposition} The proof of item (ii) of Proposition~\ref{prop:norm:gauss} (the lower bound) is given in the Supplementary material. Proposition~\ref{prop:norm:gauss} establishes the minimax optimality of the rate $\phi_{{\cal N}(0,1)}^{*}(s,d)$. It also shows that if $\sigma$ is unknown, the knowledge of the Gaussian character of the noise leads to an improvement of the rate compared to the adaptive sub-Gaussian rate $\sqrt{s\log(ed/s)}$. However, the improvement is only in a logarithmic factor. { Consider now the case of unknown noise distribution in $\mathcal{G}_{a,\tau}$ and known $\sigma$. We show in the next proposition that in this case the minimax rate is of the form $$ \phi_{\sf exp}^\circ(s,d)= \min\{\sqrt{s} \log^{\frac{1}{a}}(ed/s),d^{1/4}\} $$ and it is achieved by the estimator $$ \hat{N}^\circ_{\sf exp} = \left\{ \begin{array}{lcl} \phantom{\sum_{j=1}^d Y }\|\hat{{\boldsymbol \t}}\|_2& \text{if}& s\le \frac{\sqrt{d}}{\log^{\frac{2}{a}}(ed)} ,\\ \Big| \sum_{j=1}^d Y_j^2 -d \sigma^2\Big|^{1/2}& \text{if} & s> \frac{\sqrt{d}}{\log^{\frac{2}{a}}(ed)} , \end{array} \right. $$ where $\hat{{\boldsymbol \t}}$ is defined in~\eqref{def_estimateur_mom}. Note $\phi_{\sf exp}^\circ(s,d)$ can be written equivalently (up to absolute constants) as $\min\{\sqrt{s}\log^{\frac{1}{a}}(ed),d^{1/4}\}$. \begin{proposition}[Unknown noise in $\mathcal{G}_{a,\tau}$, known $\sigma$]\label{prop:norm:known_sigma} { Let $a,\tau>0$. The following two properties hold. 
} \begin{itemize} { \item[(i)] Let $s$ and $d$ be integers satisfying $1\le s< \lfloor \gamma d\rfloor/4$, where $\gamma\in(0,1/2]$ is the tuning parameter in the definition of $\tilde \sigma^2$. There exist constants $c, C>0$, and $\gamma\in(0,1/2]$ depending only on $(a,\tau)$ such that if $\hat{{\boldsymbol \t}}$ is the estimator defined in~\eqref{def_estimateur_mom} with $\lambda_j= c\log^{\frac{1}{a}}(ed/j)$ , $j=1,\dots,d$, then \begin{equation*}\label{upperbound:norm:subgauss} \sup_{P_\xi \in \mathcal{G}_{a,\tau}}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t}, P_\xi,\sigma} \left( \hat{N}_{\sf exp}^\circ-\|{\boldsymbol \t}\|_2 \right)^{2}\le C\sigma^{2} \left(\phi_{\sf exp}^{\circ}(s,d)\right)^2. \end{equation*} } \item[(ii)] Let $s$ and $d$ be integers satisfying $1\le s \le d$ and let $\ell(\cdot)$ be any loss function in the class $\mathcal L$. Then, there exist constants $c>0$, $c^{\prime}>0$ depending only on $\ell(\cdot)$, $a$ and $\tau$ such that \begin{equation*}\label{lowerbound:norm:subgauss} \inf_{\hat{T}} \sup_{P_\xi \in \mathcal{G}_{a,\tau}} \sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\,\ell \bigg( c(\phi_{\sf exp}^\circ(s,d))^{-1} \bigg| \frac{ \hat{T}-\|{\boldsymbol \t}\|_2}{\sigma}\bigg| \bigg)\ge c', \end{equation*} where $\inf_{\hat{T}}$ denotes the infimum over all estimators. \end{itemize} \end{proposition} Proposition~\ref{prop:norm:known_sigma} establishes the minimax optimality of the rate $\phi_{\sf exp}^\circ(s,d)$. It also shows that if the noise distribution is unknown and belongs to $\mathcal{G}_{a,\tau}$, the knowledge of $\sigma$ leads to an improvement of the rate compared to the case when $\sigma$ is unknown. In contrast to the case of Proposition~\ref{prop:norm:gauss} (Gaussian noise), the improvement here is substantial; it results not only in a logarithmic but in a polynomial factor in the dense zone $s> \frac{\sqrt{d}}{\log^{\frac{2}{a}}(ed)}$. 
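The two-regime structure of $\hat{N}^\circ_{\sf exp}$ translates directly into code. In the minimal sketch below, the sparse-zone input \texttt{theta\_hat} stands for a solution of \eqref{def_estimateur_mom}, which we do not recompute here; only the regime switch and the dense-zone statistic follow the definition above, and the numerical inputs are hypothetical.

```python
import numpy as np

def norm_estimator_exp(y, theta_hat, s, sigma, a):
    """Estimator \hat N^circ_exp with known sigma: plug-in norm
    ||theta_hat||_2 in the sparse zone s <= sqrt(d)/log^{2/a}(e*d),
    and |sum_j Y_j^2 - d*sigma^2|^{1/2} in the dense zone."""
    y = np.asarray(y, dtype=float)
    d = y.size
    threshold = np.sqrt(d) / np.log(np.e * d) ** (2.0 / a)
    if s <= threshold:
        return float(np.linalg.norm(theta_hat))          # sparse zone
    return float(np.sqrt(abs(np.sum(y ** 2) - d * sigma ** 2)))  # dense zone
```

The dense-zone branch simply debiases the sample second moment and takes a square root, which is why knowledge of $\sigma$ buys the polynomial improvement discussed after Proposition~\ref{prop:norm:known_sigma}.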
We end this section by considering the case of unknown noise with polynomially decaying tails and known $\sigma$. The next proposition shows that in this case the minimax rate, for a given $a>4$, is of the form $$ \phi_{\sf pol}^\circ(s,d)= \min\{\sqrt{s} (d/s)^{\frac{1}{a}},d^{1/4}\} $$ and it is achieved by the estimator $$ \hat{N}^\circ_{\sf pol} = \left\{ \begin{array}{lcl} \phantom{\sum_{j=1}^d Y }\|\hat{{\boldsymbol \t}}\|_2& \text{if}& s\le d^{\frac{1}{2}-\frac{1}{a-2}} ,\\ \Big| \sum_{j=1}^d Y_j^2 -d \sigma^2\Big|^{1/2}& \text{if} & s> d^{\frac{1}{2}-\frac{1}{a-2}} , \end{array} \right. $$ where $\hat{{\boldsymbol \t}}$ is defined in~\eqref{def_estimateur_mom}. \begin{proposition}[Unknown noise in $\mathcal{P}_{a,\tau}$, known $\sigma$]\label{prop:norm:poly:known_sigma} { Let $\tau>0, a>4$. The following two properties hold. } \begin{itemize} { \item[(i)] Let $s$ and $d$ be integers satisfying $1\le s< \lfloor \gamma d\rfloor/4$, where $\gamma\in(0,1/2]$ is the tuning parameter in the definition of $\tilde \sigma^2$. There exist constants $c, C>0$, and $\gamma\in(0,1/2]$ depending only on $(a,\tau)$ such that if $\hat{{\boldsymbol \t}}$ is the estimator defined in~\eqref{def_estimateur_mom} with $\lambda_j= c(d/j)^{\frac{1}{a}}$, $j=1,\dots,d$, then \begin{equation*}\label{upperbound:norm:poly} \sup_{P_\xi \in \mathcal{P}_{a,\tau}}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t}, P_\xi,\sigma} \left( \hat{N}_{\sf pol}^\circ-\|{\boldsymbol \t}\|_2 \right)^{2}\le C\sigma^{2} \left(\phi_{\sf pol}^{\circ}(s,d)\right)^2. \end{equation*} } \item[(ii)] Let $s$ and $d$ be integers satisfying $1\le s \le d$ and let $\ell(\cdot)$ be any loss function in the class $\mathcal L$.
Then, there exist constants $c>0$, $c^{\prime}>0$ depending only on $\ell(\cdot)$, $a$ and $\tau$ such that \begin{equation*}\label{lowerbound:norm:poly} \inf_{\hat{T}} \sup_{P_\xi \in \mathcal{P}_{a,\tau}} \sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\,\ell \bigg( c(\phi_{\sf pol}^\circ(s,d))^{-1} \bigg|\frac{ \hat{T}-\|{\boldsymbol \t}\|_2}{\sigma}\bigg| \bigg)\ge c^{\prime}, \end{equation*} where $\inf_{\hat{T}}$ denotes the infimum over all estimators. \end{itemize} \end{proposition} } Note that here, similarly to Proposition~\ref{prop:norm:known_sigma}, the improvement over the case of unknown $\sigma$ is in a polynomial factor in the dense zone $s> d^{\frac{1}{2}-\frac{1}{a-2}}$. \section{Estimating the variance of the noise}\label{sec:variance} \subsection{Estimating $\sigma^2$ when the distribution $P_\xi$ is known}\label{sec:median} In the sparse setting when $\|{\boldsymbol \t}\|_0$ is small, estimation of the noise level can be viewed as a problem of robust estimation of scale. Indeed, our aim is to recover the second moment of~$\sigma\xi_1$ but the sample second moment cannot be used as an estimator because of the presence of a small number of outliers~$\theta_i\ne 0$. Thus, the models in robustness and sparsity problems are quite similar but the questions of interest are different. When robust estimation of $\sigma^2$ is considered, the object of interest is the pure noise component of the sparsity model, while the non-zero components $\theta_i$ that are of major interest in the sparsity model play the role of a nuisance. In the context of robustness, it is known that the estimator based on the sample median can be successfully applied. Recall that, when ${\boldsymbol \t}=0$, the median $M$-estimator of scale (\textit{cf.
} \cite{Huber1981}) is defined as \begin{equation}\label{definition_median} \hat{\sigma}_{\sf med}^2 = \frac{\hat{M}}{\beta} \end{equation} where $\hat{M}$ is the sample median of $(Y_1^2,\dots,Y_d^2)$, that is \begin{equation*} \hat{M} \in \arg\min_{x>0} \big|F_d(x)-1/2\big|, \end{equation*} and $\beta$ is the median of the distribution of $\xi_1^2$. Here, $F_d$ denotes the empirical c.d.f. of $(Y_1^2,\dots,Y_d^2)$. { If $F$ denotes the c.d.f. of $\xi^{2}_1$, it is easy to see that \begin{equation}\label{equation_quantile} \beta=F^{-1}(1/2). \end{equation} } The following proposition specifies the rate of convergence of the estimator $\hat{\sigma}_{\sf med}^2$. \begin{proposition}\label{prop:gao} { Let $\xi_1^{2}$ have a c.d.f. $F$ with positive density, and let $\beta$ be given by \eqref{equation_quantile}. There exist constants $\gamma\in(0,1/8)$, $c>0$, $c_*>0$ and $C>0$ depending only on $F$ such that for any integers $s$ and $d$ satisfying $1\le s< \gamma d$ and any $t>0$ we have \begin{equation*} \sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf P_{{\boldsymbol \t}, F,\sigma} \left( \Big|\frac{\hat{\sigma}_{\sf med}^2}{\sigma^2}-1\Big| \ge c_*\left(\sqrt{\frac{t}{d}}+\frac{s}{d}\right)\right) \le 2(e^{-t} + e^{-c d}). \end{equation*} Moreover, if $\mathbf E|\xi_{1}|^{2+\epsilon}<\infty$ for some $\epsilon>0$, then \begin{equation*} \sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \frac{\mathbf E_{{\boldsymbol \t}, F,\sigma} \left| \hat{\sigma}_{\sf med}^2 - \sigma^{2} \right|}{\sigma^{2}} \le C\max\left(\frac1{\sqrt{d}}, \frac{s}{d}\right). \end{equation*} } \end{proposition} { The main message of Proposition~\ref{prop:gao} is that the rate of convergence of $\hat{\sigma}_{\sf med}^2$ in probability and in expectation is as fast as \begin{equation}\label{rate:gao} \max\left(\frac1{\sqrt{d}}, \frac{s}{d}\right) \end{equation} and it does not depend on $F$ when $F$ varies in a large class.
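For instance, for standard Gaussian noise the estimator \eqref{definition_median} is immediate to implement, since $F(x)=2\Phi(\sqrt{x})-1$ gives $\beta=(\Phi^{-1}(3/4))^2\approx 0.455$. The following minimal sketch (our illustration; the contamination pattern and all numerical values are arbitrary) shows its robustness to a small number of outliers:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

def sigma2_med(y):
    # Median M-estimator of scale: sample median of Y_i^2 divided by
    # beta, the median of xi_1^2.  For xi_1 ~ N(0,1), beta = (Phi^{-1}(3/4))^2.
    beta = NormalDist().inv_cdf(0.75) ** 2
    return np.median(y ** 2) / beta

d, s, sigma = 20_000, 200, 2.0
y = sigma * rng.standard_normal(d)
y[:s] += 50.0                       # s gross outliers (theta_i != 0)
est = sigma2_med(y)
# est stays close to sigma^2 = 4: the s outliers shift the empirical
# median by O(s/d) only, in line with the s/d term of the rate above.
```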
The role of Proposition~\ref{prop:gao} is to provide a contrast with the subsequent results of this section, which deal with an unknown noise distribution and exhibit slower rates. It emphasizes the fact that the knowledge of the noise distribution is crucial, as it leads to an improvement of the rate of estimating the variance. However, the rate \eqref{rate:gao} achieved by the median estimator is not necessarily optimal. As shown in the next proposition, in the case of Gaussian noise the optimal rate is even better: $$ \phi_{{\cal N}(0,1)}(s,d)= \max\left\{\frac{1}{\sqrt{d}},\frac{s}{d(1+\log_{+}(s^{2}/d))}\right\}. $$ This rate is attained by an estimator that we are going to define now. We use the observation that, in the Gaussian case, the modulus of the empirical characteristic function $\varphi_d(t)=\frac{1}{d}\sum_{j=1}^d e^{itY_j}$ stays within a constant factor of the Gaussian characteristic function $\exp(-\frac{t^2\sigma^{2}}{2})$ for any $t$. This suggests the estimator $$ \tilde{v}^{2} = -\frac{2\log(|\varphi_{d}(\hat{t}_{1})|)}{\hat{t}_{1}^{2}}, $$ with a suitable choice of $t=\hat{t}_{1}$, which we set as follows: $$ \hat{t}_{1} = \frac1{\tilde{\sigma}}\sqrt{\log\big(4(es/\sqrt{d}+1)\big)}, $$ where $\tilde{\sigma}$ is the preliminary estimator \eqref{mom} with some tuning parameter $\gamma\in(0,1/2]$. The final variance estimator is defined as a truncated version of $\tilde{v}^{2}$: \begin{equation}\label{definition_noisevarianceestimator_gauss} \hat{\sigma}^{2} = \left\{ \begin{array}{ll} \tilde{v}^{2} & \ \text{if} \ |\varphi_{d}(\hat{t}_{1})|> (es/\sqrt{d}+1)^{-1}/4 ,\\ \tilde{\sigma}^2 & \ \text{otherwise} . \end{array} \right. \end{equation} \begin{proposition}[Gaussian noise]\label{prop:variance:gauss} The following two properties hold. \begin{itemize} \item[(i)] { Let $s$ and $d$ be integers satisfying $1\le s< \lfloor \gamma d\rfloor/4$, where $\gamma\in(0,1/2]$ is the tuning parameter in the definition of $\tilde \sigma^2$.
There exist absolute constants $C>0$ and $\gamma\in(0,1/2]$ such that the estimator $\hat{\sigma}^2$ defined in~(\ref{definition_noisevarianceestimator_gauss}) satisfies \begin{equation*}\label{upperbound:variance:gauss} \sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \frac{\mathbf E_{{\boldsymbol \t}, {\cal N}(0,1),\sigma} \left| \hat{\sigma}^2-\sigma^{2} \right|}{\sigma^{2}} \le C\phi_{{\cal N}(0,1)}(s,d). \end{equation*} \item[(ii)] Let $s$ and $d$ be integers satisfying $1\le s\le d$ and let $\ell(\cdot)$ be any loss function in the class $\mathcal L$. Then, \begin{equation*}\label{lowerbound:variance:gauss} \inf_{\hat{T}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},{\cal N}(0,1),\sigma}\,\ell \bigg( c(\phi_{{\cal N}(0,1)}(s,d))^{-1} \bigg|\frac{ \hat{T}}{\sigma^{2}}-1\bigg| \bigg)\ge c^{\prime}, \end{equation*} where $\inf_{\hat{T}}$ denotes the infimum over all estimators, and $c>0$, $c^{\prime}>0$ are constants that can depend only on $\ell(\cdot)$. } \end{itemize} \end{proposition} } Estimators of the variance or the covariance matrix based on the empirical characteristic function have been studied in several papers \cite{ButuceaMatias2005,CaiJin2010,BelomestnyTrabsTsybakov2017,CarpentierVerzelen2019}. The setting in \cite{ButuceaMatias2005,CaiJin2010,BelomestnyTrabsTsybakov2017} is different from ours, as those papers deal with the model where the non-zero components of ${\boldsymbol \t}$ are random with a smooth distribution density. The estimators in \cite{ButuceaMatias2005,CaiJin2010} are also quite different. On the other hand, \cite{BelomestnyTrabsTsybakov2017,CarpentierVerzelen2019} consider estimators close to $\tilde{v}^{2}$. In particular, \cite{CarpentierVerzelen2019} uses a similar pilot estimator for testing in the sparse vector model where it is assumed that $\sigma\in [\sigma_{-},\sigma_+]$, $0<\sigma_{-}<\sigma_+<\infty$, and the estimator depends on $\sigma_+$.
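Returning to the estimator \eqref{definition_noisevarianceestimator_gauss}, a small simulation sketch (ours, not code from the paper) may help fix ideas; for simplicity the oracle value of $\sigma$ plays the role of the pilot estimator $\tilde\sigma$, which in the text is obtained from the preliminary median-of-means step:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigma2_charfn(y, s, sigma_pilot):
    # Estimator based on the modulus of the empirical characteristic
    # function, evaluated at the frequency hat{t}_1 of the text.
    d = y.size
    t1 = np.sqrt(np.log(4.0 * (np.e * s / np.sqrt(d) + 1.0))) / sigma_pilot
    phi = np.abs(np.mean(np.exp(1j * t1 * y)))
    if phi <= 0.25 / (np.e * s / np.sqrt(d) + 1.0):
        return sigma_pilot ** 2     # truncation: fall back to the pilot
    return -2.0 * np.log(phi) / t1 ** 2

d, s, sigma = 200_000, 100, 1.5
theta = np.zeros(d)
theta[:s] = 10.0
y = theta + sigma * rng.standard_normal(d)
est = sigma2_charfn(y, s, sigma)    # close to sigma^2 = 2.25
```

The sparse components perturb $|\varphi_d(\hat t_1)|$ by at most $s/d$ in modulus, which is why the frequency $\hat t_1$ is tuned so that the Gaussian factor $\exp(-\hat t_1^2\sigma^2/2)$ stays well above that perturbation.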
Although \cite{CarpentierVerzelen2019} does not provide an explicitly stated result about the rate of this estimator, the proofs in \cite{CarpentierVerzelen2019} come close to it and we believe that it satisfies an upper bound as in item (i) of Proposition \ref{prop:variance:gauss} with $\sup_{\sigma>0}$ replaced by $\sup_{\sigma\in[\sigma_{-},\sigma_+]}$. \subsection{Distribution-free variance estimators} \label{sec:free} The main drawback of the estimator $\hat{\sigma}_{\sf med}^2$ is the dependence on the parameter $\beta$. It reflects the fact that the estimator is tailored for a given and known distribution of noise $F$. Furthermore, as shown below, the rate~\eqref{rate:gao} cannot be achieved if it is only known that $F$ belongs to one of the classes of distributions that we consider in this paper. Instead of using one particular quantile, like the median in Section \ref{sec:median}, one can estimate $\sigma^2$ by an integral over all quantiles, which allows one to avoid considering distribution-dependent quantities like~(\ref{equation_quantile}). Indeed, with the notation $q_\alpha=G^{-1}(\alpha)$ where $G$ is the c.d.f. of $(\sigma\xi_1)^2$ and $0<\alpha<1$, the variance of the noise can be expressed as \begin{equation*} \sigma^2 = \mathbf E(\sigma\xi_1)^2 = \int_0^1 q_\alpha\,d\alpha. \end{equation*} Discarding the higher order quantiles, which are dubious in the presence of outliers, and replacing $q_\alpha$ by the empirical quantile $\hat{q}_\alpha$ of level $\alpha$, we obtain the following estimator \begin{equation}\label{definition_noisevarianceestimator} \hat{\sigma}^2 = \int_0^{1-s/d} \hat{q}_\alpha\,d\alpha = \frac1d \sum_{k=1}^{d-s} Y^2_{(k)}, \end{equation} where $Y^2_{(1)}\le\ldots\le Y^2_{(d)}$ are the ordered values of the squared observations $Y_1^2,\dots, Y_d^2$. Note that $\hat{\sigma}^2$ is an $L$-estimator, \textit{cf. }\cite{Huber1981}.
Also, up to a constant factor, $\hat{\sigma}^2$ coincides with the statistic used in~\citet*{CollierCommingesTsybakov2017}. The following theorem provides an upper bound on the risk of the estimator $\hat{\sigma}^2$ under the assumption that the noise belongs to the class~$\mathcal{G}_{a,\tau}$. Set $$ \phi_{\sf exp}(s,d) = \max\left(\frac1{\sqrt{d}}, \frac{s}{d}\log^{2/a}\left(\frac{ed}{s}\right)\right). $$ \begin{theorem}\label{theorem_upperbound_noise_subgaussian} Let $\tau>0$, $a>0$, and let $s,d$ be integers satisfying $1\le s < d/2$. Then, the estimator $\hat{\sigma}^2$ defined in~(\ref{definition_noisevarianceestimator}) satisfies \begin{equation} \sup_{P_\xi\in\mathcal{G}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s} \frac{\mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \big(\hat{\sigma}^2-\sigma^2\big)^2}{\sigma^4} \le C \phi_{\sf exp}^2(s,d), \end{equation} where $C>0$ is a constant depending only on $a$ and $\tau$. \end{theorem} The next theorem establishes the performance of variance estimation in the case of distributions with polynomially decaying tails. Set $$ \phi_{\sf pol}(s,d) = \max\left(\frac1{\sqrt{d}}, \Big(\frac{s}{d}\Big)^{1-\frac{2}{a}} \right). $$ \begin{theorem}\label{theorem_upperbound_noise_polynomial} Let $\tau>0,a>4$, and let $s,d$ be integers satisfying $1\le s< d/2$. Then, the estimator $\hat{\sigma}^2$ defined in~(\ref{definition_noisevarianceestimator}) satisfies \begin{equation} \sup_{P_\xi\in\mathcal{P}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s} \frac{\mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \big(\hat{\sigma}^2-\sigma^2\big)^2}{\sigma^4} \le C \phi_{\sf pol}^2(s,d), \end{equation} where $C>0$ is a constant depending only on $a$ and $\tau$. \end{theorem} We assume here that the noise distribution has a moment of order greater than 4, which is close to the minimum requirement since we deal with the expected squared error of a quadratic function of the observations.
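A short numerical sketch (our illustration; the parameter values are arbitrary) of the estimator \eqref{definition_noisevarianceestimator}, which simply averages all but the $s$ largest squared observations:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigma2_trunc(y, s):
    # (1/d) times the sum of the d - s smallest squared observations.
    d = y.size
    y2 = np.sort(y ** 2)
    return np.sum(y2[: d - s]) / d

d, s, sigma = 10_000, 100, 1.0
theta = np.zeros(d)
theta[:s] = 20.0
y = theta + sigma * rng.standard_normal(d)
est = sigma2_trunc(y, s)
# The s outliers land among the discarded top order statistics, so est
# under-estimates sigma^2 only slightly, by a term of order
# (s/d) * log^{2/a}(ed/s) for noise in G_{a,tau}.
```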
We now state the lower bounds matching the results of Theorems~\ref{theorem_upperbound_noise_subgaussian} and \ref{theorem_upperbound_noise_polynomial}. \begin{theorem}\label{theorem_lowerbound_noise_subgaussian} Let $\tau>0$, $a>0$, and let $s,d$ be integers satisfying $1\le s\le d$. Let $\ell(\cdot)$ be any loss function in the class $\mathcal L$. Then, \begin{equation} \inf_{\hat{T}} \sup_{P_\xi\in\mathcal{G}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s}\mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\,\ell \Big( c(\phi_{\sf exp}(s,d))^{-1} \Big| \frac{ \hat{T}}{\sigma^2} -1\Big| \Big)\ge c', \end{equation} where $\inf_{\hat{T}}$ denotes the infimum over all estimators and $c>0$, $c'>0$ are constants depending only on $\ell(\cdot)$, $a$ and $\tau$. \end{theorem} Theorems~\ref{theorem_upperbound_noise_subgaussian} and~\ref{theorem_lowerbound_noise_subgaussian} imply that the estimator $\hat{\sigma}^2$ is rate optimal in a minimax sense when the noise belongs to $\mathcal{G}_{a,\tau}$, in particular when it is sub-Gaussian. Interestingly, an extra logarithmic factor appears in the optimal rate when passing from the pure Gaussian distribution of $\xi_i$'s (\textit{cf. } Proposition~\ref{prop:variance:gauss}) to the class of all sub-Gaussian distributions. This factor can be seen as the price to pay for the lack of information regarding the exact form of the distribution. Also note that this logarithmic factor vanishes as $a\to\infty$. Under a polynomial tail assumption on the noise, we have the following minimax lower bound. \begin{theorem}\label{theorem_lowerbound_noise_polynomial} Let $\tau>0$, $a \ge 2$, and let $s,d$ be integers satisfying $1\le s\le d$. Let $\ell(\cdot)$ be any loss function in the class $\mathcal L$.
Then, \begin{equation} \inf_{\hat{T}} \sup_{P_\xi\in\mathcal{P}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\,\ell \Big( c(\phi_{\sf pol}(s,d))^{-1} \Big| \frac{ \hat{T}}{\sigma^2} -1\Big| \Big)\ge c', \end{equation} where $\inf_{\hat{T}}$ denotes the infimum over all estimators and $c>0$, $c'>0$ are constants depending only on $\ell(\cdot)$, $a$ and $\tau$. \end{theorem} This theorem shows that the rate $\phi_{\sf pol}(s,d)$ obtained in Theorem~\ref{theorem_upperbound_noise_polynomial} cannot be improved in a minimax sense. A drawback of the estimator defined in~(\ref{definition_noisevarianceestimator}) is its lack of adaptivity to the sparsity parameter $s$. At first sight, it may seem that the estimator \begin{equation}\label{naive_estimator} \hat{\sigma}_*^2 = \frac2d \sum_{1\le k \le d/2} Y^2_{(k)} \end{equation} could be taken as its adaptive version. However, $\hat{\sigma}_*^2$ is not a good estimator of $\sigma^2$ as can be seen from the following proposition. \begin{proposition}\label{proposition_suboptimality} Define $\hat{\sigma}_*^2$ as in~(\ref{naive_estimator}). Let $\tau>0$, $a\ge 2$, and let $s,d$ be integers satisfying $1\le s\le d$, and $d=4k$ for an integer $k$. Then, \begin{equation*} \sup_{P_\xi\in\mathcal{G}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s} \frac{\mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \big(\hat{\sigma}_*^2-\sigma^2\big)^2}{\sigma^4} \ge \frac{1}{64}. \end{equation*} \end{proposition} On the other hand, it turns out that a simple plug-in estimator \begin{equation}\label{definition_adaptive_variance} \hat{\sigma}^2 = \frac1d \|{\boldsymbol Y}-\hat{{\boldsymbol \t}}\|_2^2 \end{equation} with $\hat{{\boldsymbol \t}}$ chosen as in Section \ref{sec:vector} achieves rate optimality adaptively to the noise distribution and to the sparsity parameter $s$. This is detailed in the next theorem.
% \begin{theorem}\label{theorem_adaptiveupperbound_variance} Let $s$ and $d$ be integers satisfying $1\le s< \lfloor \gamma d\rfloor/4$, where $\gamma\in(0,1/2]$ is the tuning parameter in the definition of $\tilde \sigma^2$. Let $\hat{\sigma}^2$ be the estimator defined by~\eqref{definition_adaptive_variance} where $\hat{{\boldsymbol \t}}$ is defined in~\eqref{def_estimateur_mom}. Then the following properties hold. \begin{enumerate} \item Let $\tau>0, a>0$. There exist constants $c,C>0$ and $\gamma\in(0,1/2]$ depending only on $(a,\tau)$ such that if $\lambda_j=c \log^{1/a}(ed/j), j=1,\dots,d $, we have \begin{align*} \sup_{ P_\xi\in\mathcal{G}_{a,\tau}}\sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \big|\hat{\sigma}^2-\sigma^2 \big| \le C \sigma^{2}\phi_{\sf exp}(s,d) . \end{align*} \item Let $\tau>0, a> 4$. There exist constants $c,C>0$ and $\gamma\in(0,1/2]$ depending only on $(a,\tau)$ such that if $\lambda_j=c ({d}/{j})^{1/a}, j=1,\dots,d$, we have \begin{align*} \sup_{ P_\xi\in\mathcal{P}_{a,\tau}}\sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \big|\hat{\sigma}^2-\sigma^2 \big| \le C \sigma^{2}\phi_{\sf pol}(s,d) . \end{align*} \end{enumerate} } \end{theorem} \section{Proofs of the upper bounds} \subsection{\it Proof of Proposition~\ref{proposition_over}} Fix ${\boldsymbol \t} \in \Theta_s$ and let $S$ be the support of ${\boldsymbol \t}$. We will call outliers the observations $Y_i$ with $i\in S$. There are at least $m-s$ blocks $B_i$ that do not contain outliers. Denote by $J$ a set of $m-s$ indices $i$ for which $B_i$ contains no outliers.} As $a>2$, there exist constants $L=L(a,\tau)$ and $r=r(a,\tau)\in (1,2]$ such that $\mathbf E| \xi_1^2-1|^r\le L$. Using the von Bahr--Esseen inequality (\textit{cf.
}\cite{petrov}) and the fact that $|B_i|\ge k$ we get $$\mathbf P\Big( \Big|\frac{1}{|B_i|}\sum_{j\in B_i} \xi_j^2-1 \Big| > 1/2\Big) \le \frac{2^{r+1}L}{ k^{r-1}} , \quad i=1,\dots,m.$$ Hence, there exists a constant $C_1=C_1(a,\tau)$ such that if $k\ge C_1$ (i.e., if $\gamma$ is small enough depending on $a$ and $\tau$), then \begin{align}\label{pprob} \mathbf P_{{\boldsymbol \t},P_\xi,\sigma}(\bar{\sigma}_i^2\notin I)\le \frac{1}{4}, \quad i=1,\dots,m, \end{align} where $I=[\frac{\sigma^2}{2}, \frac{3\sigma^2}{2}]$. Next, by the definition of the median, for any interval $I\subseteq \mathbb R$ we have \begin{align} \mathbf P_{{\boldsymbol \t},P_\xi,\sigma}(\tilde{\sigma}^2\notin I)&\le\mathbf P_{{\boldsymbol \t},P_\xi,\sigma}\Big( \sum_{i=1}^{m} \mathds{1}_{\bar{\sigma}_i^2 \notin I} \ge \frac{m}{2} \Big)\le \mathbf P_{{\boldsymbol \t},P_\xi,\sigma}\Big( \sum_{i\in J} \mathds{1}_{\bar{\sigma}_i^2 \notin I} \ge \frac{m}{2}-s \Big). \end{align} Now, $s\le \frac{\lfloor \gamma d \rfloor}{4}=\frac{m}{4}$, so that $\frac{m}{2}-s\ge \frac{m-s}{3}$. Set $\eta_i= \mathds{1}_{\bar{\sigma}_i^2 \notin I}$, $i\in J$. Due to \eqref{pprob} we have $\mathbf E(\eta_i)\le 1/4$, and $(\eta_i, i\in J)$ are independent. Using these remarks and Hoeffding's inequality we find $$ \mathbf P\Big( \sum_{i\in J} \eta_i \ge \frac{m}{2}-s \Big)\le \mathbf P\Big( \sum_{i\in J} (\eta_i - \mathbf E(\eta_i))\ge \frac{m-s}{12}\Big) \le \exp(-C (m-s)). $$ Note that $|J|=m-s\ge 3m/4=3{\lfloor \gamma d \rfloor}/4$. Thus, if $\gamma$ is chosen small enough depending only on $a$ and $\tau$ then $$\mathbf P_{{\boldsymbol \t},P_\xi,\sigma}(\tilde{\sigma}^2\notin I )\le \exp(- C d).$$ This proves the desired bound in probability. To obtain the bounds in expectation, set $Z=\left|\tilde{\sigma}^{2} - \sigma^{2}\right|$. Let first $a>4$ and take some $r\in (1, a/4)$. 
Then \begin{align*} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left( Z^{2} \right) & \leq \frac{\sigma^{4}}{4} + \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left( Z^2 \mathds{1}_{Z \ge \frac{\sigma^{2}}{2}} \right) \\ & \le \frac{9\sigma^{4}}{4} + 2\left(\mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left( \tilde{\sigma}^{4r} \right)\right)^{1/r} \left(\mathbf P_{{\boldsymbol \t},P_\xi,\sigma}\left( Z \ge {\sigma^{2}}/{2} \right)\right)^{1-1/r} \\ & \le \frac{9\sigma^{4}}{4} + 2\left(\mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left( \tilde{\sigma}^{4r} \right)\right)^{1/r} \exp(- C d). \end{align*} Since $m \ge 4s$, we can easily argue that $ \tilde{\sigma}^{4r} \leq \sum_{i\in J}\bar{\sigma}_{i}^{4r} $. It follows that $$ \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left( \tilde{\sigma}^{4r} \right) \le C\sigma^{4r}d^{2}. $$ Hence $ \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left( Z^{2} \right) \leq C\sigma^{4}. $ Similarly, if $a>2$, then $ \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left( Z \right) \leq C\sigma^{2}. $ } \subsection{Proof of Theorem~\ref{theorem_adaptiveupperbound}} Set ${\boldsymbol u}=\hat{{\boldsymbol \t}}-{\boldsymbol \t}$. It follows from Lemma A.2 in \cite{BellecLecueTsybakov2017} that $$ 2\|{\boldsymbol u}\|_2^2\le 2 \sigma \sum_{i=1}^d \xi_i u_i+\tilde{\sigma}\| {\boldsymbol \t}\|_*-\tilde{\sigma}\| \hat{{\boldsymbol \t}}\|_*,$$ where $u_i$ are the components of ${\boldsymbol u}$. Next, Lemma A.1 in \cite{BellecLecueTsybakov2017} yields \begin{equation*}\label{slope_pol_2} \| {\boldsymbol \t}\|_*-\| \hat{{\boldsymbol \t}}\|_*\le \Big(\sum_{j=1}^s \lambda_j^2\Big)^{1/2} \| {\boldsymbol u}\|_2 -\sum_{j=s+1}^d \lambda_j |u|_{(d-j+1)} \end{equation*} where $|u|_{(k)}$ is the $k$th order statistic of $|u_1|,\dots,|u_d|$.
Combining these two inequalities, we get \begin{equation}\label{combination} 2\|{\boldsymbol u}\|_2^2\le 2 \sigma \sum_{j=1}^d \xi_j u_j+\tilde{\sigma}\Big\{ \Big(\sum_{j=1}^s \lambda_j^2\Big)^{1/2} \| {\boldsymbol u}\|_2 -\sum_{j=s+1}^d \lambda_j |u|_{(d-j+1)} \Big\}. \end{equation} For some permutation $(\varphi(1), \dots, \varphi(d))$ of $(1,\ldots,d)$, we have \begin{equation}\label{permutation} \Big|\sum_{j=1}^d \xi_j u_j \Big| \le \sum_{j=1}^d |\xi|_{(d-j+1)} |u_{\varphi(j)}| \le \sum_{j=1}^d |\xi|_{(d-j+1)} |u|_{(d-j+1)}, \end{equation} where the last inequality is due to the fact that the sequence $|\xi|_{(d-j+1)}$ is non-increasing. Hence \begin{align*} 2\|{\boldsymbol u}\|_2^2 &\le 2 \sigma \sum_{j=1}^s |\xi|_{(d-j+1)} |u|_{(d-j+1)}+\tilde{\sigma} \Big(\sum_{j=1}^s \lambda_j^2\Big)^{1/2} \| {\boldsymbol u}\|_2 + \sum_{j=s+1}^d \left(2\sigma |\xi|_{(d-j+1)} - \tilde{\sigma}\lambda_{j} \right)|u|_{(d-j+1)}\\ &\le \left\{ 2 \sigma\Big(\sum_{j=1}^s |\xi|_{(d-j+1)} ^2\Big)^{1/2} + \tilde{\sigma}\Big(\sum_{j=1}^s \lambda_j^2\Big)^{1/2} + \Big(\sum_{j=s+1}^d \left(2\sigma |\xi|_{(d-j+1)} - \tilde{\sigma}\lambda_{j} \right)^{2}_{+} \Big)^{1/2}\right\} \| {\boldsymbol u}\|_2. \end{align*} This implies $$ \|{\boldsymbol u}\|^{2}_2 \le C \left\{ \sigma^{2}\sum_{j=1}^s |\xi|_{(d-j+1)} ^2 + \tilde{\sigma}^{2}\sum_{j=1}^s \lambda_j^2 + \sum_{j=s+1}^d \left(2\sigma |\xi|_{(d-j+1)} - \tilde{\sigma}\lambda_{j} \right)^{2}_{+} \right\}. $$ From Lemmas \ref{lemma_esp4_subgaussian} and \ref{lemma_esp4_polynomial} we have $\mathbf E(|\xi|_{(d-j+1)} ^2)\le C\lambda_j^2$. Using this and Proposition \ref{proposition_over} we obtain \begin{equation}\label{eqq} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left(\|{\boldsymbol u}\|^{2}_2\right) \le C \left( \sigma^{2}\sum_{j=1}^s \lambda_j^2 + \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\Bigg(\sum_{j=s+1}^d \left(2\sigma |\xi|_{(d-j+1)} - \tilde{\sigma}\lambda_{j} \right)^{2}_{+}\Bigg)\right) .
\end{equation} Define the events $\mathcal{A}_{j}=\Big\{ |\xi|_{(d-j+1)}\le {\lambda_j}/{4} \Big\}\cap \Big\{ 1/2 \le {\tilde{\sigma}^2}/{\sigma^2}\le 3/2\Big\}$ for $j=s+1,\ldots,d$. Then $$ \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left(\sum_{j=s+1}^d \left(2\sigma |\xi|_{(d-j+1)} - \tilde{\sigma}\lambda_{j} \right)^{2}_{+}\right) \le 4\sigma^{2}\mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left(\sum_{j=s+1}^d|\xi|^{2}_{(d-j+1)}\mathds{1}_{\mathcal{A}_{j}^{c}} \right). $$ Fixing some $1<r<a/2 $ we get $$ \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left(\sum_{j=s+1}^d \left(2\sigma |\xi|_{(d-j+1)} - \tilde{\sigma}\lambda_{j} \right)^{2}_{+}\right) \le 4\sigma^{2}\sum_{j=s+1}^d\mathbf E\left(|\xi|^{2r}_{(d-j+1)}\right)^{1/r}\mathbf P_{{\boldsymbol \t},P_\xi,\sigma}\left(\mathcal{A}_{j}^{c}\right)^{1-1/r}. $$ Lemmas~\ref{lemma_esp4_subgaussian}, \ref{lemma_esp4_polynomial} and the definitions of parameters $\lambda_j$ imply that $$ \mathbf E\left(|\xi|^{2r}_{(d-j+1)}\right)^{1/r} \le C\lambda_{s}^2, \quad j=s+1,\dots,d. $$ Furthermore, it follows from the proofs of Lemmas~\ref{lemma_esp4_subgaussian} and \ref{lemma_esp4_polynomial} that if the constant $c$ in the definition of $\lambda_{j}$ is chosen large enough, then $\mathbf P (|\xi|_{(d-j+1)} >\lambda_{j}/4) \le q^{j}$ for some $q<1/2$ depending only on $a$ and $\tau$. This and Proposition \ref{proposition_over} imply that $ \mathbf P_{{\boldsymbol \t},P_\xi,\sigma}(\mathcal{A}_{j}^{c}) \le e^{-cd} + q^{j}. $ Hence, $$ \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left(\sum_{j=s+1}^d \left(2\sigma |\xi|_{(d-j+1)} - \tilde{\sigma}\lambda_{j} \right)^{2}_{+}\right) \le C\sigma^{2}\lambda^{2}_{s}\sum_{j=s+1}^d(e^{-cd} + q^{j})^{1-1/r} \le C^{\prime} \sigma^{2}\sum_{j=1}^s \lambda_j^2. $$ Combining this inequality with \eqref{eqq} we obtain \begin{equation}\label{eqq1} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\left(\|{\boldsymbol u}\|^{2}_2\right) \le C \sigma^{2}\sum_{j=1}^s \lambda_j^2. 
\end{equation} To complete the proof, it remains to note that $\sum_{j=1}^s \lambda_j^2\le C (\phi_{\sf pol}^*(s,d))^2$ in the polynomial case and $\sum_{j=1}^s \lambda_j^2\le C (\phi_{\sf exp}^*(s,d))^2$ in the exponential case, cf. Lemma \ref{lemma:sum}. \subsection{Proof of part (i) of Proposition~\ref{prop:norm:gauss}} We consider separately the ``dense'' zone $s>\sqrt{d}$ and the ``sparse'' zone $s\le\sqrt{d}$. Assume first that $s>\sqrt{d}$. Then the rate $\phi_{{\cal N}(0,1)}^*(s,d)$ is of order $\sqrt{\frac{s}{1+\log_+(s^{2}/d)}}$. Thus, for $s>\sqrt{d}$ we need to prove that \begin{equation}\label{g1} \sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t}, {\cal N}(0,1),\sigma} \bigg( \bigg|\frac{ \hat{N}^*-\|{\boldsymbol \t}\|_2}{\sigma}\bigg|^{2} \bigg)\le \frac{Cs}{1+\log_+(s^{2}/d)}. \end{equation} Denoting ${\boldsymbol \xi}=(\xi_1,\dots,\xi_d)$ we have \begin{eqnarray}\label{g2} \Big| \hat{N}^* - \|{\boldsymbol \t}\|_2\Big| &=& \bigg| \Big| \sum_{j=1}^d Y_j^2 -d\hat{\sigma}^2\Big|^{1/2} - \|{\boldsymbol \t}\|_2\bigg|\\ \nonumber &=&\bigg| \sqrt{ \big| \|{\boldsymbol \t}\|_2^2+2\sigma({\boldsymbol \t},{\boldsymbol \xi}) + \sigma^2\|{\boldsymbol \xi}\|_2^2 -d\hat{\sigma}^2\big|} - \|{\boldsymbol \t}\|_2\bigg| \\ \nonumber &\le&\bigg| \sqrt{ \big| \|{\boldsymbol \t}\|_2^2+2\sigma({\boldsymbol \t},{\boldsymbol \xi}) \big|} - \|{\boldsymbol \t}\|_2\bigg| + \sigma \sqrt{\big|\|{\boldsymbol \xi}\|_2^2 -d\big| } +\sqrt{d |\sigma^2 - \hat{\sigma}^2 |}.
\end{eqnarray} The first term in the last line vanishes if ${\boldsymbol \t}= 0$, while for ${\boldsymbol \t}\ne 0$ it is bounded as follows: \begin{equation}\label{g3} \bigg| \sqrt{ \big| \|{\boldsymbol \t}\|_2^2+2\sigma({\boldsymbol \t},{\boldsymbol \xi}) \big|} - \|{\boldsymbol \t}\|_2\bigg| = \|{\boldsymbol \t}\|_2\bigg| \sqrt{ \Big| 1+\frac{2\sigma({\boldsymbol \t},{\boldsymbol \xi})}{\|{\boldsymbol \t}\|_2^2} \Big|} - 1\bigg|\le \frac{2\sigma|({\boldsymbol \t},{\boldsymbol \xi})|}{\|{\boldsymbol \t}\|_2} \end{equation} where we have used the inequality $| \sqrt{ | 1+x|} - 1|\le |x|$, $\forall x\in \mathbb R$. Since here $({\boldsymbol \t},{\boldsymbol \xi})/\|{\boldsymbol \t}\|_2\sim {\cal N}(0,1)$ we have, for all ${\boldsymbol \t}$, \begin{equation}\label{g31} \mathbf E\left( \left| \sqrt{ \big| \|{\boldsymbol \t}\|_2^2+2\sigma({\boldsymbol \t},{\boldsymbol \xi}) \big|} - \|{\boldsymbol \t}\|_2\right|^{2} \right) \le 4 \sigma^{2}, \end{equation} and since $ \|{\boldsymbol \xi}\|_2^2$ has a chi-square distribution with $d$ degrees of freedom we have \begin{eqnarray}\label{g4} \mathbf E\Big( \big| \|{\boldsymbol \xi}\|_2^2-d \big| \Big)&\le & \left(\mathbf E\Big( \big| \|{\boldsymbol \xi}\|_2^2-d \big|^{2} \Big)\right)^{1/2} = \sqrt{2d}. \end{eqnarray} Next, by Proposition~\ref{prop:variance:gauss} we have that, for $s> \sqrt{d}$, \begin{equation}\label{gf1} \sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t}, {\cal N}(0,1),\sigma} \left( \Big|\frac{\hat{\sigma}^2}{\sigma^2}-1\Big| \right) \le \frac{Cs}{d(1+\log_+(s^{2}/d))} \end{equation} for some absolute constant $C>0$. Combining \eqref{g2} -- \eqref{g4} yields \eqref{g1}. Assume now that $s\le \sqrt{d}$. Then the rate $\phi_{{\cal N}(0,1)}^*(s,d)$ is of order $\sqrt{s\log(1+d/s^{2})}$.
Thus, for $s \le \sqrt{d}$ we need to prove that \begin{equation}\label{gf10} \sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t}, {\cal N}(0,1),\sigma} \bigg( \bigg|\frac{ \hat{N}^*-\|{\boldsymbol \t}\|_2}{\sigma}\bigg|^{2} \bigg)\le C s\log(1+d/s^{2}). \end{equation} We have \begin{eqnarray}\label{gf2} \quad \Big| \hat{N}^* - \|{\boldsymbol \t}\|_2\Big| &=& \bigg| \Big| \sum_{j=1}^d (Y_j^2~\mathds{1}_{\{ |Y_j|>\rho_{j} \}}) -d\alpha \hat{\sigma}^2\Big|^{1/2} - \|{\boldsymbol \t}\|_2\bigg|\\ \nonumber &=& \bigg| \Big| \sum_{j \in S}(Y_j^2~\mathds{1}_{\{ |Y_j|>\rho_{j} \}}) + \sigma^{2}\sum_{j \not\in S}(\xi_{j}^2~\mathds{1}_{\{ \sigma|\xi_j|>\rho_{j} \}}) - d\alpha \hat{\sigma}^2\Big|^{1/2} - \|{\boldsymbol \t}\|_2\bigg|\\ \nonumber &\le&\left| \sqrt{ \sum_{j \in S}(Y_j^2~\mathds{1}_{\{ |Y_j|>\rho_{j} \}})} - \|{\boldsymbol \t}\|_2 \right| + \left| \sigma^{2}\sum_{j \not\in S}(\xi_{j}^2~\mathds{1}_{\{ \sigma|\xi_j|>\rho_{j} \}}) - d\alpha \hat{\sigma}^2\right|^{1/2}. \end{eqnarray} Here, \begin{eqnarray}\label{gf3} \left| \sqrt{ \sum_{j \in S}(Y_j^2~\mathds{1}_{\{ |Y_j|>\rho_{j} \}})} - \|{\boldsymbol \t}\|_2 \right| &\leq& \left| \sqrt{ \sum_{j \in S}(Y_j~\mathds{1}_{\{ |Y_j|>\rho_{j} \}} - \theta_{j})^{2}} \right| \\ \nonumber &\leq& \sqrt{\sum_{j\in S}\rho_{j}^2} + \sigma \sqrt{ \sum_{j \in S} \xi_{j}^{2}}. 
\end{eqnarray} Hence, writing for brevity $\mathbf E_{{\boldsymbol \t}, {\cal N}(0,1),\sigma}=\mathbf E$, we get \begin{eqnarray}\label{gf3a}\nonumber \mathbf E \left( \left| \sqrt{ \sum_{j \in S}(Y_j^2~\mathds{1}_{\{ |Y_j|>\rho_{j} \}})} - \|{\boldsymbol \t}\|_2 \right|^{2}\right) &\leq& 16 \mathbf E\left(\hat{\sigma}_{\sf med, 1}^{2}+\hat{\sigma}_{\sf med,2}^{2}\right) s \log\big(1+{d}/{s^{2}}\big) + 2\sigma^{2}s \\ & \leq& C\sigma^{2}s\log(1+d/s^{2}),\nonumber \end{eqnarray} where we have used the fact that $\mathbf E\left(|\hat{\sigma}_{\sf med, k}^{2} - \sigma^{2}| \right) \leq C \sigma^{2}$, ${\sf k}=1,2$, by Proposition~\ref{prop:gao}. Next, we study the term $\Gamma= \left| \sigma^{2}\sum_{j \not\in S}(\xi_{j}^2~\mathds{1}_{\{ \sigma|\xi_j|>\rho_{j} \}}) - d\alpha \hat{\sigma}^2\right| $. We first write \begin{eqnarray}\label{gf4} \Gamma &\leq& \left| \sigma^{2}\sum_{j \not\in S}\xi_{j}^2(~\mathds{1}_{\{ \sigma|\xi_j|>\rho_{j} \}} - ~\mathds{1}_{\{ \sigma|\xi_j|>t_{*} \}}) \right| + \left|\sigma^{2}\sum_{j \not\in S}(\xi_{j}^2 ~\mathds{1}_{\{ \sigma|\xi_j|>t_{*} \}}) - d\alpha\hat{\sigma}^2 \right|, \end{eqnarray} where $t_{*}= 2\sigma \sqrt{2\log (1+d/s^2)}$. For the second summand on the right hand side of \eqref{gf4} we have $$ \left|\sigma^{2}\sum_{j \not\in S}(\xi_{j}^2 ~\mathds{1}_{\{ \sigma|\xi_j|>t_{*} \}}) - d\alpha\hat{\sigma}^2 \right| \leq \sigma^{2}\left|\sum_{j \not\in S}(\xi_{j}^2 ~\mathds{1}_{\{ \sigma|\xi_j|>t_{*} \}}) - (d-|S|)\alpha \right| + \left| \sigma^{2}-\hat{\sigma}^2 \right|d\alpha+|S|\alpha\sigma^{2}, $$ where $|S|$ denotes the cardinality of $S$. By Proposition~\ref{prop:variance:gauss} we have $\mathbf E( |\hat{\sigma}^2 - \sigma^2|) \le C/\sqrt{d}$ for $s\le \sqrt{d}$. 
Hence, $$ \mathbf E\left|\sigma^{2}\sum_{j \not\in S}(\xi_{j}^2 ~\mathds{1}_{\{ \sigma|\xi_j|>t_{*} \}}) - d\alpha\hat{\sigma}^2 \right| \leq \sigma^{2}\sqrt{d\mathbf E\left(\xi_1^4~\mathds{1}_{\{ |\xi_1|> \sqrt{2\log (1+d/s^2)} \}}\right)} + C\alpha\sigma^{2}\left( \sqrt{d} + s\right). $$ It is not hard to check (cf., e.g., \cite[Lemma 4]{CollierCommingesTsybakov2017}) that, for $s\le \sqrt{d}$, $$\alpha \leq C (\log\left(1+d/s^{2} \right))^{1/2}\frac{s^{2}}{d},$$ and $$ \mathbf E\left(\xi_1^4~\mathds{1}_{\{ |\xi_1|> \sqrt{2\log (1+d/s^2)} \}}\right) \leq C (\log\left(1+d/s^{2}\right))^{3/2}\frac{s^{2}}{d}, $$ so that $$ \mathbf E\left|\sigma^{2}\sum_{j \not\in S}(\xi_{j}^2 ~\mathds{1}_{\{ \sigma|\xi_j|>t_{*} \}}) - d\alpha\hat{\sigma}^2 \right| \leq C\sigma^{2}s\log (1+d/s^2). $$ Thus, to complete the proof it remains to show that \begin{equation}\label{eqq2} \sigma^{2} \sum_{j \not\in S}\mathbf E\left|\xi_{j}^2(\mathds{1}_{\{ \sigma|\xi_j|>\rho_{j} \}} - \mathds{1}_{\{ \sigma|\xi_j|>t_{*} \}}) \right| \le C\sigma^{2}s\log (1+d/s^2). \end{equation} Recall that $\rho_{j}$ is independent of $\xi_{j}$. Hence, conditioning on $\rho_{j}$, we obtain \begin{align}\label{eqq4} \sigma^{2}\mathbf E\left(\left|\xi_{j}^2(\mathds{1}_{\{ \sigma|\xi_j|>\rho_{j} \}} - \mathds{1}_{\{ \sigma|\xi_j|>t_{*} \}}) \right| \,\middle|\, \rho_{j}\right) \le |\rho_{j}^{2}-t_{*}^2|e^{-t_{*}^2/(8\sigma^{2})} + \sigma^{2}\mathds{1}_{\{\rho_{j}< t_{*}/2\}}, \end{align} where we have used the fact that, for $b>a>0$, $$ \int_a^b x^2 e^{-x^2/2}dx \le \int_a^b x e^{-x^2/4}dx\le |b^2 - a^2| e^{-\min (a^2, b^2)/4}/2. $$ Using Proposition~\ref{prop:gao} and the definitions of $\rho_{j}$ and $t_{*}$, we get that, for $s\le \sqrt{d}$, \begin{align}\label{eqq5} \mathbf E\left(|\rho_{j}^{2}-t_{*}^2|\right) e^{-t_{*}^2/(8\sigma^{2})} &\le 8 \max_{{\sf k}=1,2}\mathbf E( |\hat{\sigma}^{2}_{\sf med,k} - \sigma^{2}|) \frac{s^2}{d}\log(1+d/s^{2}) \\ \nonumber &\le C\sigma^{2} \frac{s}{d}\log(1+d/s^{2}).
\end{align} Next, it follows from Proposition~\ref{prop:gao} that there exists $\gamma\in (0,1/8)$ small enough such that for $s\le \gamma d$ we have $\max_{{\sf k}=1,2}\mathbf P( \hat{\sigma}^{2}_{\sf med,k} < \sigma^{2}/2)\le 2 e^{-c_\gamma d}$, where $c_\gamma>0$ is a constant. Thus, $ \sigma^{2}\mathbf P(\rho_{j} < t_{*}/2) \le 2 \sigma^{2} e^{-c_\gamma d} \le C\sigma^{2}(s/d)\log (1+d/s^2). $ Combining this with \eqref{eqq4} and \eqref{eqq5} proves \eqref{eqq2}. \subsection{Proof of part (i) of Proposition~\ref{prop:norm:known_sigma} and part (i) of Proposition~\ref{prop:norm:poly:known_sigma}} We only prove Proposition~\ref{prop:norm:known_sigma} since the proof of Proposition~\ref{prop:norm:poly:known_sigma} is similar, taking into account that $\mathbf{E}(\xi_{1}^{4})<\infty$. We consider separately the ``dense'' zone $s>\frac{\sqrt{d}}{\log^{\frac{2}{a}}(ed)}$ and the ``sparse'' zone $s \le \frac{\sqrt{d}}{\log^{\frac{2}{a}}(ed)}$. Consider first the case $s>\frac{\sqrt{d}}{\log^{\frac{2}{a}}(ed)}$. Then the rate $\phi_{\sf exp}^\circ(s,d)$ is of order $d^{1/4}$ and thus we need to prove that \begin{equation*} \sup_{P_\xi \in \mathcal{G}_{a,\tau}}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t}, P_\xi,\sigma} \big( | \hat{N}_{\sf exp}^{\circ} - \|{\boldsymbol \t}\|_2 |^{2} \big)\le C\sigma^{2}\sqrt{d}. \end{equation*} Since $\sigma$ is known, arguing similarly to \eqref{g2}--\eqref{g3} we find $$ | \hat{N}_{\sf exp}^{\circ} - \|{\boldsymbol \t}\|_2| \le \left|\frac{2\sigma|({\boldsymbol \t},{\boldsymbol \xi})|}{\|{\boldsymbol \t}\|_2}\right|\mathds{1}_{{\boldsymbol \t} \neq 0} + \sigma \sqrt{\big|\|{\boldsymbol \xi}\|_2^2 -d\big| }. $$ As $\mathbf{E}(\xi_{1}^{4})<\infty$, this implies $$ \mathbf E_{{\boldsymbol \t}, P_\xi,\sigma} \big( | \hat{N}_{\sf exp}^{\circ} - \|{\boldsymbol \t}\|_2 |^{2} \big) \le 8 \sigma^{2} + C\sigma^{2}\sqrt{d}, $$ which proves the result in the dense case.
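For completeness, we spell out the step behind the last bound. It only uses $(a+b)^2\le 2a^2+2b^2$ together with the facts that the $\xi_j$ are i.i.d. zero-mean with $\mathbf E(\xi_1^2)=1$ and $\mathbf E(\xi_1^4)<\infty$: for ${\boldsymbol \t}\neq 0$,
\begin{equation*}
\mathbf E_{{\boldsymbol \t}, P_\xi,\sigma}\Big(\frac{4\sigma^{2}({\boldsymbol \t},{\boldsymbol \xi})^{2}}{\|{\boldsymbol \t}\|_2^{2}}\Big) = \frac{4\sigma^{2}}{\|{\boldsymbol \t}\|_2^{2}}\sum_{j=1}^{d}\theta_j^{2}\,\mathbf E(\xi_j^{2}) = 4\sigma^{2},
\qquad
\mathbf E_{{\boldsymbol \t}, P_\xi,\sigma}\big|\|{\boldsymbol \xi}\|_2^2 -d\big| \le \big(d\,\mathrm{Var}(\xi_1^{2})\big)^{1/2} \le C\sqrt{d}.
\end{equation*}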
Next, in the sparse case $s \leq \frac{\sqrt{d}}{\log^{\frac{2}{a}}(ed)}$, we need to prove that \begin{equation*} \sup_{P_\xi \in \mathcal{G}_{a,\tau}}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf E_{{\boldsymbol \t}, P_\xi,\sigma} \big( | \hat{N}_{\sf exp}^{\circ} - \|{\boldsymbol \t}\|_2 |^{2} \big) \le C \sigma^{2}s\log^{\frac{2}{a}}(ed). \end{equation*} This is immediate by Theorem~\ref{theorem_adaptiveupperbound} and the fact that $ | \hat{N}_{\sf exp}^{\circ} - \|{\boldsymbol \t}\|_2 |^{2} \le \| \hat{{\boldsymbol \t}} - {\boldsymbol \t}\|^2_2 $ for the plug-in estimator $\hat{N}_{\sf exp}^\circ=\|\hat {\boldsymbol \t}\|_2$. } \subsection{Proof of Proposition~\ref{prop:gao}} Denote by $G$ the cdf of $(\sigma\xi_1)^2$ and by $G_d$ the empirical cdf of $((\sigma\xi_i)^2: i\not\in S)$, where $S$ is the support of ${\boldsymbol \t}$. Let $M$ be the median of $G$, that is, $G(M)=1/2$. By the definition of $\hat{M}$, \begin{equation}\label{prop:gao_1} |F_d(\hat{M})-1/2|\le |F_d(M)-1/2|. \end{equation} It is easy to check that $|F_d(x)-G_d(x)|\le s/d$ for all $x>0$. Therefore, \begin{equation}\label{prop:gao_2} |G_d(\hat{M})-1/2|\le |G_d(M)-1/2| +2s/d. \end{equation} The DKW inequality \cite[page 99]{Wasserman} yields that $ \mathbf P(\sup_{x\in \mathbb R}|G_d(x)-G(x)|\ge u)\le 2e^{-2u^2(d-s)}$ for all $u>0$. Fix $t>0$ such that $\sqrt{\frac{t}{d}}+ \frac{s}{d} \le 1/8$, and consider the event $$\mathcal A:=\left\{\sup_{x\in \mathbb R}|G_d(x)-G(x)|\le \sqrt{\frac{t}{2(d-s)}}\right\}. $$ Then, $\mathbf P(\mathcal A) \ge 1-2e^{-t}$. On the event $\mathcal A$, we have \begin{equation}\label{prop:gao_3} |G(\hat{M})-1/2|\le |G(M)-1/2| +2\left( \sqrt{\frac{t}{2(d-s)}}+ \frac{s}{d}\right) \le 2\left( \sqrt{\frac{t}{d}}+ \frac{s}{d}\right) \le \frac14, \end{equation} where the last two inequalities are due to the fact that $G(M)=1/2$ and to the assumption about $t$.
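For the reader's convenience, we spell out why $|F_d(x)-G_d(x)|\le s/d$, with $F_d$ denoting, as in the definition of $\hat M$, the empirical cdf of $(Y_i^2: i\le d)$. Writing $N(x)=\sum_{i\not\in S}\mathds{1}_{\{(\sigma\xi_i)^2\le x\}}$ and $M(x)=\sum_{i\in S}\mathds{1}_{\{Y_i^2\le x\}}\in[0,|S|]$, we have
\begin{equation*}
F_d(x)-G_d(x) = \frac{N(x)+M(x)}{d}-\frac{N(x)}{d-|S|} = \frac{M(x)}{d}-\frac{|S|}{d}\cdot\frac{N(x)}{d-|S|} \in \Big[-\frac{|S|}{d},\frac{|S|}{d}\Big],
\end{equation*}
and $|S|\le s$.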
Notice that \begin{equation}\label{prop:gao_4} |G(\hat{M})-1/2|= |G(\hat{M})-G(M)| = \big|F(\hat{M}/\sigma^{2})-F(M/\sigma^{2})\big|. \end{equation} Using \eqref{prop:gao_3}, \eqref{prop:gao_4} and the fact that $M= \sigma^2F^{-1}(1/2)$ we obtain that, on the event $\mathcal A$, \begin{equation}\label{prop:gao_5} F^{-1}(1/4) \le \hat{M}/\sigma^{2} \le F^{-1}(3/4). \end{equation} This and \eqref{prop:gao_4} imply \begin{equation}\label{prop:gao_6} |G(\hat{M})-1/2|\ge c_{**}\big|\hat{M}/\sigma^{2}- M/\sigma^{2}\big|= c_{**}\beta\,\big|\hat{\sigma}_{\sf med}^{2}/\sigma^{2}- 1\big|, \end{equation} where $c_{**}= \min_{x\in[ F^{-1}(1/4), F^{-1}(3/4)]}F'(x)>0$, and $\beta=F^{-1}(1/2)$. Combining the last inequality with \eqref{prop:gao_3} we get that, on the event $\mathcal A$, $$ \big|\hat{\sigma}_{\sf med}^{2}/\sigma^{2}- 1\big|\le 2c_{**}^{-1}\beta^{-1}\left( \sqrt{\frac{t}{d}}+\frac{s}{d}\right). $$ Recall that we assumed that $\sqrt{\frac{t}{d}}+ \frac{s}{d} \le 1/8$. Thus, there exists a constant $c_*>0$ depending only on $F$ such that for $t>0$ and integers $s,d$ satisfying $\sqrt{\frac{t}{d}}+ \frac{s}{d} \le 1/8$ we have \begin{equation}\label{eq:var:prob} \sup_{\sigma>0}\sup_{\|{\boldsymbol \t}\|_0\le s} \mathbf P_{{\boldsymbol \t}, F,\sigma} \left( \Big|\frac{\hat{\sigma}_{\sf med}^2}{\sigma^2}-1\Big| \ge c_*\left(\sqrt{\frac{t}{d}}+\frac{s}{d}\right)\right) \le 2e^{-t}. \end{equation} This and the assumption that $\frac{s}{d} \le \gamma<1/8$ imply the result of the proposition in probability. We now prove the result in expectation. Set $Z=\left|\hat{\sigma}_{\sf med}^2-\sigma^{2}\right|/\sigma^{2}$. We have $$ \mathbf E_{{\boldsymbol \t}, F,\sigma}\left(Z \right) \le c_{*}s/d + \int_{c_{*}s/d}^{c_{*}/8} \mathbf P_{{\boldsymbol \t}, F,\sigma} \left( Z> u \right)du + \mathbf E_{{\boldsymbol \t}, F,\sigma}\left(Z\mathds{1}_{Z\ge c_{*}/8 } \right).
$$ Using \eqref{eq:var:prob}, we get $$ \int_{c_{*}s/d}^{c_{*}/8} \mathbf P_{{\boldsymbol \t}, F,\sigma} \left( Z > u \right)du \leq \frac{2c_{*}}{\sqrt{d}}\int_{0}^{\infty}e^{-t^{2}}dt \leq \frac{C}{\sqrt{d}}. $$ As $s < d/2$, one may check that $\hat{\sigma}_{\sf med}^{2+\epsilon} \le \big(\max_{i\not\in S} (\sigma\xi_i)^2/\beta\big)^{1+\epsilon/2} \leq (\sigma^{2}/\beta)^{1+\epsilon/2} \sum_{i=1}^{d}|\xi_{i}|^{2+\epsilon} $. Since $\mathbf E|\xi_{1}|^{2+\epsilon}<\infty$, this yields $ \mathbf E_{{\boldsymbol \t}, F,\sigma}\left(Z^{1+\epsilon/2}\right) \le C d. $ It follows that \begin{align*} \mathbf E_{{\boldsymbol \t}, F,\sigma}\left(Z\mathds{1}_{Z\ge c_{*}/8 } \right) &\le \left(\mathbf E_{{\boldsymbol \t}, F,\sigma}\left(Z^{1+\epsilon/2}\right) \right)^{1/(1+\epsilon/2)} \mathbf P_{{\boldsymbol \t}, F,\sigma}\left(Z\ge c_{*}/8 \right)^{(\epsilon/2)/(1+\epsilon/2)} \le C d e^{-d/C}. \end{align*} Combining the last three displays yields the desired bound in expectation. \subsection{Proof of part (i) of Proposition~\ref{prop:variance:gauss}} In this proof, we write for brevity $\mathbf E=\mathbf E_{{\boldsymbol \t}, \sigma, \mathcal N(0,1)}$ and $\mathbf P=\mathbf P_{{\boldsymbol \t}, \sigma, \mathcal N(0,1)}$. Set $$\varphi_d(t)=\frac{1}{d}\sum_{j=1}^d e^{itY_j},\quad \varphi(t)= \mathbf E (\varphi_d(t)), \quad \varphi_0(t)=e^{-\frac{t^2\sigma^{2}}{2}}.$$ Since $s/d<1/8$ and $\varphi(t)=\varphi_0(t)\big( 1-\frac{|S|}{d}+\frac{1}{d}\sum_{j\in S} \exp(i\theta_j t)\big)$, we have \begin{equation}\label{propfourier1} \frac34\varphi_{0}(t)\le\Big(1-\frac{2s}{d}\Big)\varphi_0(t)\le |\varphi(t)|\le \varphi_0(t).
\end{equation} Consider the events $$\mathcal B_1=\Big\{ \sigma^{2}/2 \leq \tilde{\sigma}^{2}\leq 3\sigma^{2}/2 \Big\}\quad\text{and}\quad \mathcal A_u=\Big\{ \sup_{v\in \mathbb R} |\varphi_d(v) -\varphi(v)|\le \sqrt{\frac{u}{d}}\Big\}, \quad u>0.$$ By Proposition \ref{proposition_over}, $\mathcal B_1$ holds with probability at least $1-e^{-cd}$ if the tuning parameter $\gamma$ in the definition of $\tilde{\sigma}^{2}$ is small enough. Using Hoeffding's inequality, it is not hard to check that $\mathcal A_u$ holds with probability at least $1-4e^{-u}$. Moreover, \begin{equation}\label{emp} \mathbf E\Big(\sqrt{d}\sup_{v\in \mathbb R} |\varphi_d(v) -\varphi(v)|\Big)\le C. \end{equation} Notice that on the event ${\cal D} = \{|\varphi_{d}(\hat{t}_{1})|> (es/\sqrt{d}+1)^{-1}/4\}$ we have $\hat{\sigma}^{2}=\tilde v^2 \le 2 \tilde{\sigma}^{2}$. First, we bound the risk restricted to ${\cal D}\cap \mathcal B_1^{c}$. We have $$ \mathbf{E}\big(|\hat{\sigma}^{2}-\sigma^{2}|\mathds{1}_{{\cal D}\cap\mathcal B_{1}^{c}}\big) \leq \mathbf{E}\big(|2\tilde{\sigma}^{2}+\sigma^{2}|\mathds{1}_{\mathcal B_{1}^{c}}\big). $$ Thus, using the Cauchy-Schwarz inequality and Proposition \ref{proposition_over} we find \begin{equation}\label{eq:B_comp} \mathbf{E}\big(|\hat{\sigma}^{2}-\sigma^{2}|\mathds{1}_{{\cal D}\cap\mathcal B_{1}^{c}}\big) \leq C\sigma^{2}e^{-d/C}\leq \frac{C'\sigma^{2}}{\sqrt{d}}. \end{equation} Next, we bound the risk restricted to ${\cal D}^{c}$. It will be useful to note that $ \mathcal A_{\log{d}} \cap \mathcal B_{1} \subset {\cal D}$. Indeed, on $ \mathcal A_{\log{d}} \cap \mathcal B_{1}$, using the assumption $s<d/8$ we have $$ |\varphi_{d}(\hat{t}_{1})| \geq \frac34\varphi_{0}(\hat{t}_{1}) - \sqrt{\frac{\log{d}}{d}} \geq \frac{3}{4({es}/{\sqrt{d}}+1)^{1/3}} - \sqrt{\frac{\log{d}}{d}} > \frac{1}{4({es}/{\sqrt{d}}+1)} . 
$$ Thus, applying again the Cauchy-Schwarz inequality and Proposition \ref{proposition_over} we find \begin{align}\label{eq:sigma_nul} \mathbf{E}\big(|\hat{\sigma}^{2}-\sigma^{2}|\mathds{1}_{{\cal D}^{c}}\big) &=\mathbf{E}\big(|\tilde{\sigma}^{2}-\sigma^{2}|\mathds{1}_{{\cal D}^{c}}\big) \le \left(\mathbf{E}\big(|\tilde{\sigma}^{2}-\sigma^{2}|^2\big)\right)^{1/2}\left(\mathbf P ({\cal D}^{c})\right)^{1/2} \\ \nonumber &\leq C\sigma^{2}\sqrt{ \mathbf P (\mathcal A_{\log{d}}^{c})+\mathbf P (\mathcal B_{1}^{c}) } \le C\sigma^{2}\sqrt{ \frac{4}{d} + e^{-cd} }\leq \frac{C'\sigma^{2}}{\sqrt{d}} . \end{align} To complete the proof, it remains to handle the risk restricted to the event $\mathcal{C} = {\cal D}\cap\mathcal B_{1}$. We will use the following decomposition: \begin{equation}\label{propfourier3} |\hat{\sigma}^2-\sigma^2|\le \Big|\frac{2\log (|\varphi_d(\hat{t}_1)|)}{\hat{t}_1^2}- \frac{2 \log (|\varphi(\hat{t}_1)|)}{\hat{t}_1^2}\Big|+ \Big|-\frac{2 \log (|\varphi(\hat{t}_1)|)}{\hat{t}_1^2}-\sigma^2\Big|. \end{equation} Since $-{2 \log (|\varphi_0(\hat{t}_1)|)}/{\hat{t}_1^2}=\sigma^2$, it follows from \eqref{propfourier1} that \begin{equation*} \Big|-\frac{2 \log (|\varphi(\hat{t}_1)|)}{\hat{t}_1^2}-\sigma^2\Big|\le \frac{Cs}{d\,\hat{t}_1^2} = \frac{Cs\tilde{\sigma}^{2}}{d\log(4(es/\sqrt{d}+1))}. \end{equation*} Therefore, \begin{equation}\label{eq:part1} \mathbf{E}\Big(\Big|-\frac{2 \log (|\varphi(\hat{t}_1)|)}{\hat{t}_1^2}-\sigma^2\Big| \mathds{1}_{\mathcal{C}}\Big) \leq \frac{Cs\sigma^{2}}{d\log(es/\sqrt{d}+1)}.
\end{equation} Next, using the inequality \begin{equation*} \big|\log (|\varphi_d(t)|)-\log (|\varphi(t)|)\big|\le \frac{|\varphi_d(t)-\varphi(t)|} {|\varphi(t)| \wedge |\varphi_{d}(t)| }\,,\quad \forall t\in \mathbb R, \end{equation*} we find \begin{align*} \Big|\frac{\log (|\varphi_d(\hat{t}_1)|)}{\hat{t}_1^2}-\frac{ \log (|\varphi(\hat{t}_1)|)}{\hat{t}_1^2}\Big| \mathds{1}_{\mathcal{C}} &\le\frac{\sup_{v\in \mathbb R} |\varphi_d(v) -\varphi(v)|} {\hat{t}^{2}_{1}|\varphi(\hat{t}_{1})| \wedge |\varphi_{d}(\hat{t}_{1})|}\mathds{1}_{\mathcal{C}} \\ &\leq \frac{C\sigma^{2}U}{\sqrt{d}\,\log(es/\sqrt{d}+1)}\left(\frac{es}{\sqrt{d}}+1\right)\, , \end{align*} where $U=\sqrt{d}\,\sup_{v\in \mathbb R} |\varphi_d(v) -\varphi(v)|$. Bounding $\mathbf E(U)$ by \eqref{emp} we finally get \begin{equation}\label{eq:part3} \mathbf E\left[\Big|\frac{\log (|\varphi_d(\hat{t}_1)|)}{\hat{t}_1^2}- \frac{ \log (|\varphi(\hat{t}_1)|)}{\hat{t}_1^2}\Big|\mathds{1}_{\mathcal{C}}\right]\le C\sigma^2\max\left(\frac1{\sqrt{d}},\frac{s}{d\log (es/\sqrt{d}+1)}\right) . \end{equation} We conclude by combining inequalities \eqref{eq:B_comp}--\eqref{eq:part3}. } \subsection{Proof of Theorems~\ref{theorem_upperbound_noise_subgaussian} and~\ref{theorem_upperbound_noise_polynomial}} Let $\|{\boldsymbol \t}\|_0\le s$ and denote by $S$ the support of ${\boldsymbol \t}$. Note first that, by the definition of $\hat{\sigma}^2$, \begin{equation}\label{upper_crucial} \frac{\sigma^2}{d}\sum_{i=1}^{d-2s} \xi_{(i)}^2 \le \hat{\sigma}^2\le \frac{\sigma^2}{d}\sum_{i\in S^c} \xi_{i}^2, \end{equation} where $\xi_{(1)}^2\le \cdots\le \xi_{(d)}^2$ are the ordered values of $\xi_1^2,\dots,\xi_d^2$. Indeed, the right hand inequality in \eqref{upper_crucial} follows from the relations $$ \sum_{k=1}^{d-s} Y_{(k)}^2 = \min_{J: |J|=d-s} \sum_{i\in J} Y_{i}^2 \le \sum_{i\in S^c}Y_{i}^2 = \sum_{i\in S^c} \sigma^2\xi_{i}^2.
$$ To show the left hand inequality in \eqref{upper_crucial}, notice that at least $d-2s$ among the $d-s$ order statistics $Y_{(1)}^2, \dots,Y_{(d-s)}^2$ correspond to observations $Y_k$ of pure noise, \textit{i.e.}, $Y_k=\sigma \xi_k$. The sum of squares of such observations is bounded from below by the sum of the smallest $d-2s$ values $\sigma^2\xi_{(1)}^2, \dots, \sigma^2\xi_{(d-2s)}^2$ among $\sigma^2\xi_{1}^2, \dots, \sigma^2\xi_{d}^2$. Using \eqref{upper_crucial} we get \begin{equation*} \Big(\hat{\sigma}^2-\frac{\sigma^2}{d}\sum_{i=1}^d \xi_i^2 \Big)^2 \le \frac{\sigma^4}{d^2} \Big( \sum_{i=d-2s+1}^d \xi_{(i)}^2 \Big)^2, \end{equation*} so that \begin{equation*} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \Big(\hat{\sigma}^2-\frac{\sigma^2}{d}\sum_{i=1}^d \xi_i^2 \Big)^2 \le \frac{\sigma^4}{d^2} \Big(\sum_{i= 1}^{2s} \sqrt{ \mathbf E \xi_{(d-i+1)}^4} \Big)^2. \end{equation*} Then \begin{eqnarray*} \mathbf E_{{\boldsymbol \t},P_\xi,\sigma} (\hat{\sigma}^2-\sigma^2)^2 &\le& 2\mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \Big(\hat{\sigma}^2-\frac{\sigma^2}{d}\sum_{i=1}^d \xi_i^2 \Big)^2+ 2\mathbf E_{{\boldsymbol \t},P_\xi,\sigma} \Big(\frac{\sigma^2}{d}\sum_{i=1}^d \xi_i^2 -\sigma^2\Big)^2 \\ &\le& \frac{2\sigma^4}{d^2} \Big(\sum_{i= 1}^{2s} \sqrt{ \mathbf E \xi_{(d-i+1)}^4} \Big)^2 + \frac{2\sigma^4 \mathbf E(\xi_1^4)}{d}. \end{eqnarray*} Note that under assumption \eqref{definition_subgaussian} we have $\mathbf E(\xi_1^4)<\infty$ and Lemmas~\ref{lemma_esp4_subgaussian} and \ref{lemma:sum} yield \begin{align*} \sum_{i= 1}^{2s} \sqrt{ \mathbf E \xi_{(d-i+1)}^4} \le \sqrt{C} \sum_{i=1}^{2s} \log^{2/a}\big(ed/i\big) \le C'\sqrt{C} s \log^{2/a}\Big(\frac{ed}{2s}\Big). \end{align*} This proves Theorem~\ref{theorem_upperbound_noise_subgaussian}. To prove Theorem~\ref{theorem_upperbound_noise_polynomial}, we argue analogously, using Lemma~\ref{lemma_esp4_polynomial} and the fact that $\mathbf E(\xi_1^4)<\infty$ under assumption \eqref{definition_polynomial} with $a>4$.
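A step worth making explicit: the bound on $\mathbf E_{{\boldsymbol \t},P_\xi,\sigma}\big(\hat{\sigma}^2-\frac{\sigma^2}{d}\sum_{i=1}^d \xi_i^2\big)^2$ above relies on Minkowski's inequality (the triangle inequality in $L_2$):
\begin{equation*}
\Big(\mathbf E\Big(\sum_{i=d-2s+1}^{d}\xi_{(i)}^{2}\Big)^{2}\Big)^{1/2} \le \sum_{i=d-2s+1}^{d}\big(\mathbf E\,\xi_{(i)}^{4}\big)^{1/2} = \sum_{i=1}^{2s}\sqrt{\mathbf E\,\xi_{(d-i+1)}^{4}}.
\end{equation*}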
\subsection{Proof of Theorem~\ref{theorem_adaptiveupperbound_variance}} With the same notation as in the proof of Theorem~\ref{theorem_adaptiveupperbound}, we have \begin{equation}\label{eq54} \hat{\sigma}^2-\sigma^2 = \frac{\sigma^2}d \big( \|{\boldsymbol \xi}\|_2^2-d \big) + \frac1d \left(\|{\boldsymbol u}\|_2^2 - 2\sigma {\boldsymbol u}^T {\boldsymbol \xi}\right). \end{equation} It follows from \eqref{combination} that $$ \|{\boldsymbol u}\|_2^2 + 2\sigma |{\boldsymbol u}^T {\boldsymbol \xi} | \le 3 \sigma |{\boldsymbol u}^T {\boldsymbol \xi} | +\frac{\tilde{\sigma}}{2}\Big\{ \Big(\sum_{j=1}^s \lambda_j^2\Big)^{1/2} \| {\boldsymbol u}\|_2 -\sum_{j=s+1}^d \lambda_j |u|_{(d-j+1)} \Big\}. $$ Arguing as in the proof of Theorem~\ref{theorem_adaptiveupperbound}, we obtain $$ \|{\boldsymbol u}\|_2^2 + 2\sigma |{\boldsymbol u}^T {\boldsymbol \xi} | \le \Big( U_1 + \frac{\tilde{\sigma}}{2}\Big(\sum_{j=1}^s \lambda_j^2\Big)^{1/2} + U_2\Big) \| {\boldsymbol u}\|_2, $$ where $$ U_1=3 \sigma \Big(\sum_{j=1}^s |\xi|_{(d-j+1)} ^2\Big)^{1/2}, \quad U_2= \Big(\sum_{j=s+1}^d \left(3\sigma |\xi|_{(d-j+1)} - \frac{\tilde{\sigma}}{2}\lambda_{j} \right)^{2}_{+} \Big)^{1/2}. $$ Using the Cauchy-Schwarz inequality, Proposition \ref{proposition_over} and \eqref{eqq1}, and writing for brevity $\mathbf E=\mathbf E_{{\boldsymbol \t},P_\xi,\sigma}$ we find \begin{equation*} \mathbf E\Big(\tilde{\sigma} \Big(\sum_{j=1}^s \lambda_j^2\Big)^{1/2} \|{\boldsymbol u}\|_2\Big)\le \Big(\sum_{j=1}^s \lambda_j^2\Big)^{1/2} \sqrt{\mathbf E(\tilde{\sigma}^2)} \sqrt{\mathbf E(\|{\boldsymbol u}\|^{2}_2)} \le C \sigma^{2}\sum_{j=1}^s \lambda_j^2. \end{equation*} Since $\mathbf{E}(\xi_{1}^{4})<\infty$ we also have $\mathbf E\big| \|{\boldsymbol \xi}\|_2^2-d \big|\le C\sqrt{d}$.
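The last claim follows from the Cauchy-Schwarz inequality:
\begin{equation*}
\mathbf E\big|\|{\boldsymbol \xi}\|_2^2-d\big| \le \Big(\mathbf E\big(\|{\boldsymbol \xi}\|_2^2-d\big)^{2}\Big)^{1/2} = \big(d\,\mathrm{Var}(\xi_1^{2})\big)^{1/2} \le C\sqrt{d},
\end{equation*}
since the $\xi_i^{2}$ are i.i.d. with $\mathbf E(\xi_1^{2})=1$ and $\mathrm{Var}(\xi_1^{2})\le \mathbf E(\xi_1^{4})<\infty$.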
Finally, using again \eqref{eqq1} we get, for $k=1,2$, $$ \mathbf E(U_k \|{\boldsymbol u}\|_{2})\le \sqrt{\mathbf E(\|{\boldsymbol u}\|^{2}_2)} \sqrt{\mathbf E(U_k^2)} \le \sigma \Big(\sum_{j=1}^s \lambda_j^2\Big)^{1/2} \sqrt{\mathbf E(U_k^2)}\le C \sigma^{2}\sum_{j=1}^s \lambda_j^2, $$ where the last inequality follows from the same argument as in the proof of Theorem~\ref{theorem_adaptiveupperbound}. These remarks together with \eqref{eq54} imply $$ \mathbf E\left(|\hat{\sigma}^2-\sigma^2 |\right) \leq \frac{C}{d}\Big(\sigma^{2}\sqrt{d} + \sigma^{2}\sum_{j=1}^s \lambda_{j}^{2}\Big). $$ We conclude the proof by bounding $\sum_{j=1}^s \lambda_{j}^{2}$ in the same way as at the end of the proof of Theorem~\ref{theorem_adaptiveupperbound}. } \section{Proofs of the lower bounds} \subsection{Proof of Theorems~\ref{theorem_lowerbound_noise_subgaussian} and~\ref{theorem_lowerbound_noise_polynomial} and part (ii) of Proposition~\ref{prop:variance:gauss}}\label{subsection_proof_lowerbound_noise} Since we have $\ell(t)\ge \ell(A)\mathds{1}_{t\ge A}$ for any $A>0$, it is enough to prove the theorems for the indicator loss $\ell(t)=\mathds{1}_{t\ge 1}$. This remark is valid for all the proofs of this section and will not be further repeated. (i) We first prove the lower bounds with the rate $1/{\sqrt{d}}$ in Theorems~\ref{theorem_lowerbound_noise_subgaussian} and~\ref{theorem_lowerbound_noise_polynomial}. Let $f_0:\mathbb R\to [0, \infty)$ be a probability density with the following properties: $f_0$ is continuously differentiable, symmetric about 0, supported on $[-3/2,3/2]$, with variance 1 and finite Fisher information $I_{f_0}= \int (f_0'(x))^2(f_0(x))^{-1}dx$. The existence of such $f_0$ is shown in Lemma~\ref{lemma_density}. Denote by $F_0$ the probability distribution corresponding to $f_0$.
Since $F_0$ is zero-mean, with variance 1 and supported on $[-3/2,3/2]$, it belongs to $\mathcal{G}_{a,\tau}$ with any $\tau>0$, $a>0$, and to $\mathcal{P}_{a,\tau}$ with any $\tau>0$, $a\ge 2$. Define $\mathbf P_0=\mathbf P_{0,F_0,1}$, $\mathbf P_1=\mathbf P_{0,F_0,\sigma_1}$ where $\sigma_1^2=1+c_0/\sqrt{d}$ and $c_0>0$ is a small constant to be fixed later. Denote by $H(\mathbf P_1,\mathbf P_0)$ the Hellinger distance between $\mathbf P_1$ and $\mathbf P_0$. We have \begin{equation}\label{hellgr} H^2(\mathbf P_1,\mathbf P_0) = 2\big(1-(1-h^2/2)^d\big) \end{equation} where $h^2=\int (\sqrt{f_0(x)}-\sqrt{f_0(x/\sigma_1)/\sigma_1})^2 dx$. By Theorem 7.6 in~\citet*{Ibragimov}, $$ h^2 \le \frac{(1-\sigma_1)^2}{4}\sup_{t\in [1,\sigma_1]} I(t) $$ where $I(t)$ is the Fisher information corresponding to the density $f_0(x/t)/t$, that is, $I(t)= t^{-2}I_{f_0}$. It follows that $h^2\le {\bar c}c_0^2/d$ where ${\bar c}>0$ is a constant. This and \eqref{hellgr} imply that for $c_0$ small enough we have $H(\mathbf P_1,\mathbf P_0)\le 1/2$. Finally, choosing such a small $c_0$ and using Theorem~2.2(ii) in~\citet*{Tsybakov2009} we obtain \begin{eqnarray*} &&\inf_{\hat{T}} \max\Big\{ \mathbf P_0 \Big(\Big|\hat{T}-1\Big|\ge \frac{c_0}{2(1+c_0)\sqrt{d}}\Big), \mathbf P_1 \Big(\Big|\frac{\hat{T}}{\sigma_1^2}-1\Big|\ge\frac{c_0}{2(1+c_0)\sqrt{d}}\Big)\Big\}\\ && \ge \inf_{\hat{T}} \max\Big\{ \mathbf P_0 \Big(|\hat{T}-1|\ge \frac{c_0}{2\sqrt{d}}\Big), \mathbf P_1 \Big(|\hat{T}-\sigma_1^2|\ge \frac{c_0}{2\sqrt{d}}\Big)\Big\} \ge \frac{1-H(\mathbf P_1,\mathbf P_0)}{2}\ge \frac14. \end{eqnarray*} (ii) We now prove the lower bound with the rate $\frac{s}{d}\log^{2/a}(ed/s)$ in Theorem~\ref{theorem_lowerbound_noise_subgaussian}. It is enough to conduct the proof for $s\ge s_0$ where $s_0>0$ is an arbitrary absolute constant.
Indeed, for $s\le s_0$ we have $\frac{s}{d}\log^{2/a}(ed/s) \le C/\sqrt{d}$ where $C>0$ is an absolute constant, and thus Theorem~\ref{theorem_lowerbound_noise_subgaussian} follows already from the lower bound with the rate $1/\sqrt{d}$ proved in item (i). Therefore, in the rest of this proof we assume without loss of generality that $s\ge 32$. We take $P_\xi= U$ where $U$ is the Rademacher distribution, that is, the uniform distribution on $\{-1,1\}$. Clearly, $U\in\mathcal{G}_{a,\tau}$. Let $\delta_1,\ldots,\delta_d$ be i.i.d. Bernoulli random variables with probability of success $\mathbf P (\delta_1=1)=\frac{s}{2d}$, and let $ \varepsilon_1,\ldots,\varepsilon_d $ be i.i.d. Rademacher random variables that are independent of $(\delta_1,\ldots,\delta_d)$. Denote by $\mu$ the distribution of $(\alpha\delta_1\varepsilon_1,\ldots,\alpha\delta_d\varepsilon_d)$ where $\alpha=(\tau/2)\log^{1/a}(ed/s)$. Note that $\mu$ is not necessarily supported on $\Theta_s=\{{\boldsymbol \t}\in\mathbb R^d\,|\,\|{\boldsymbol \t}\|_0\le s\}$ as the number of nonzero components of a vector drawn from $\mu$ can be larger than $s$. Therefore, we consider the restriction of $\mu$ to $\Theta_s$, defined by \begin{equation}\label{definition_barmu} \bar{\mu}(A) = \frac{\mu(A\cap\Theta_s)}{\mu(\Theta_s)} \end{equation} for all Borel subsets $A$ of $\mathbb R^d$. Finally, we introduce two mixture probability measures \begin{equation}\label{definition_apriori} {\mathbb P}_\mu = \int \mathbf P_{{\boldsymbol \t},U,1} \, \mu(d{\boldsymbol \t}) \quad\text{and}\quad {\mathbb P}_{\bar{\mu}} = \int \mathbf P_{{\boldsymbol \t},U,1} \, \bar\mu(d{\boldsymbol \t}). \end{equation} Notice that there exists a probability measure $\tilde P\in \mathcal{G}_{a,\tau}$ such that \begin{equation}\label{crucial} {\mathbb P}_\mu = \mathbf P_{0,\tilde P,\sigma_0} \end{equation} where $\sigma_0>0$ is defined by \begin{equation}\label{sigma0} \sigma_0^2=1+\frac{\tau^2 s}{8 d}\log^{2/a}(ed/s) \le 1+\frac{\tau^2}{8}.
\end{equation} Indeed, $\sigma_0^2=1+\frac{\alpha^2s}{2d}$ is the variance of the zero-mean random variable $\alpha\delta\varepsilon+\xi$, where $\xi\sim U$, $\varepsilon\sim U$, $\delta\sim \mathcal{B}\big(\frac{s}{2d}\big)$ and $\varepsilon,\xi,\delta$ are jointly independent. Thus, to prove \eqref{crucial} it is enough to show that, for all $t\ge 2$, \begin{equation}\label{probb} \mathbf P\big((\tau/2)\log^{1/a}(ed/s) \,\delta\varepsilon + \xi>t \sigma_0\big) \le e^{-(t/\tau)^a}. \end{equation} But this inequality immediately follows from the fact that for $t\ge 2$ the probability in \eqref{probb} is smaller than \begin{align} \mathbf P(\varepsilon=1, \delta=1)\,\mathds{1}_{(\tau/2)\log^{1/a}(ed/s)>t-1} \le \frac{s}{4d}\mathds{1}_{\tau\log^{1/a}(ed/s)>t} \le e^{-(t/\tau)^a}. \end{align} Now, for any estimator $\hat T$ and any $u>0$ we have \begin{eqnarray}\nonumber &&\sup_{P_\xi\in\mathcal{G}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s}\mathbf P_{{\boldsymbol \t},P_\xi,\sigma} \Big( \Big| \frac{ \hat{T}}{\sigma^2} -1\Big| \ge u \Big) \\ \nonumber &&\qquad \ge \max\Big\{ \mathbf P_{0,\tilde P,\sigma_0} ( | \hat{T} -\sigma_0^2| \ge \sigma_0^2u), \int \mathbf P_{{\boldsymbol \t},U,1} ( | \hat{T} -1| \ge u) {\bar \mu}(d{\boldsymbol \t})\Big \}\\ &&\qquad \ge \max\Big\{ {\mathbb P}_\mu( | \hat{T} -\sigma_0^2| \ge \sigma_0^2u), {\mathbb P}_{\bar \mu} ( | \hat{T} -1| \ge \sigma_0^2 u) \Big \} \label{lowerr} \end{eqnarray} where the last inequality uses \eqref{crucial}. Write $\sigma_0^2=1+2\phi$ where $\phi= \frac{\tau^2 s}{16 d}\log^{2/a}(ed/s)$ and choose $u=\phi/\sigma_0^2 \ge \phi/(1+\tau^2/8)$.
Then, the expression in \eqref{lowerr} is bounded from below by the probability of error in the problem of distinguishing between two simple hypotheses ${\mathbb P}_{\mu}$ and ${\mathbb P}_{\bar \mu}$, for which Theorem~2.2 in~\citet*{Tsybakov2009} yields \begin{eqnarray} \max\Big\{ {\mathbb P}_\mu( | \hat{T} -\sigma_0^2| \ge \phi), {\mathbb P}_{\bar \mu} ( | \hat{T} -1| \ge \phi) \Big \} \ge \frac{1-V({\mathbb P}_{ \mu},{\mathbb P}_{\bar\mu})}{2} \label{lowerr1} \end{eqnarray} where $V({\mathbb P}_{ \mu},{\mathbb P}_{\bar\mu})$ is the total variation distance between ${\mathbb P}_{\mu}$ and ${\mathbb P}_{\bar \mu}$. The desired lower bound follows from \eqref{lowerr1} and Lemma~\ref{lemma_TV} for any $s\ge 32$. (iii) Finally, we prove the lower bound with the rate $\tau^2(s/d)^{1-2/a}$ in Theorem~\ref{theorem_lowerbound_noise_polynomial}. Again, we do not consider the case $s\le 32$ since in this case the rate $1/\sqrt{d}$ dominates and Theorem~\ref{theorem_lowerbound_noise_polynomial} follows from item (i) above. For $s\ge 32$, the proof uses the same argument as in item (ii) above, but we choose $\alpha=(\tau/2)(d/s)^{1/a}$. Then the variance of $\alpha\delta\varepsilon+\xi$ is equal to $$\sigma_0^2=1+ \frac{\tau^2(s/d)^{1-2/a}}{8}. $$ Furthermore, with this definition of $\sigma_0^2$ there exists $\tilde P\in {\cal P}_{a,\tau}$ such that \eqref{crucial} holds. Indeed, analogously to \eqref{probb} we now have, for all $t\ge 2$, \begin{align} \mathbf P\big(\alpha \,\delta\varepsilon + \xi>t \sigma_0\big) &\le \mathbf P(\varepsilon=1, \delta=1)\,\mathds{1}_{(\tau/2)(d/s)^{1/a}>t-1} \le \frac{s}{4d}\mathds{1}_{\tau(d/s)^{1/a}>t} \le (\tau/t)^{a}.
\end{align} To finish the proof, it remains to repeat the argument of \eqref{lowerr} and \eqref{lowerr1} with $\phi=\frac{\tau^2(s/d)^{1-2/a}}{16}$. \subsection{Proof of Theorem~\ref{theorem_lowerbound_norm_subgaussian}} We argue similarly to the proof of Theorems~\ref{theorem_lowerbound_noise_subgaussian} and~\ref{theorem_lowerbound_noise_polynomial}; in particular, we set $\alpha=(\tau/2)\log^{1/a}(ed/s)$ when proving the bound on the class ${\cal G}_{a,\tau}$, and $\alpha=(\tau/2)(d/s)^{1/a}$ when proving the bound on~${\cal P}_{a,\tau}$. In what follows, we only deal with the class ${\cal G}_{a,\tau}$ since the proof for~${\cal P}_{a,\tau}$ is analogous. Consider the measures $\mu$, $\bar{\mu}$, ${\mathbb P}_{\mu}$, ${\mathbb P}_{\bar{\mu}}$ and $\tilde{P}$ defined in Section~\ref{subsection_proof_lowerbound_noise}. Similarly to \eqref{lowerr}, for any estimator $\hat T$ and any $u>0$ we have \begin{eqnarray}\nonumber &&\sup_{P_\xi\in\mathcal{G}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s}\mathbf P_{{\boldsymbol \t},P_\xi,\sigma} \big( | \hat{T} -\|{\boldsymbol \t}\|_2| \ge \sigma u \big) \\ \nonumber &&\qquad \ge \max\Big\{ \mathbf P_{0,\tilde P,\sigma_0} ( | \hat{T}| \ge \sigma_0 u), \int \mathbf P_{{\boldsymbol \t},U,1} ( | \hat{T} -\|{\boldsymbol \t}\|_2| \ge u) {\bar \mu}(d{\boldsymbol \t})\Big \}\\ &&\qquad \ge \max\Big\{ {\mathbb P}_\mu( | \hat{T}| \ge \sigma_0u), {\mathbb P}_{\bar \mu} ( | \hat{T} -\|{\boldsymbol \t}\|_2| \ge \sigma_0u) \Big \} \nonumber \\ &&\qquad \ge \max\Big\{ {\mathbb P}_\mu( | \hat{T}| \ge \sigma_0u), {\mathbb P}_{\bar \mu} ( | \hat{T}| < \sigma_0u, \|{\boldsymbol \t}\|_2\ge 2\sigma_0u) \Big \}\nonumber \\ &&\qquad \ge \min_{B}\,\max\big\{ {\mathbb P}_\mu( B), {\mathbb P}_{\bar \mu} ( B^c) - {\bar \mu}( \|{\boldsymbol \t}\|_2< 2\sigma_0u)\big \} \nonumber \\ &&\qquad \ge \min_{B}\,\frac{ {\mathbb P}_\mu( B) + {\mathbb P}_{\bar \mu} ( B^c)}{2} - \frac{{\bar \mu}( \|{\boldsymbol \t}\|_2< 2\sigma_0u)}2 \phantom{\Big\}}
\label{lowerr2} \end{eqnarray} where $\sigma_0$ is defined in \eqref{sigma0}, $U$ denotes the Rademacher law and $\min_{B}$ is the minimum over all Borel sets. The third line in the last display is due to \eqref{crucial} and to the inequality $\sigma_0\ge1$. Since $\min_{B}\,\big\{ {\mathbb P}_\mu ( B) + {\mathbb P}_{\bar \mu} ( B^c)\big\} = 1-V({\mathbb P}_{ \mu},{\mathbb P}_{\bar\mu})$, we get \begin{eqnarray}\label{lowerr3aa} &&\sup_{P_\xi\in\mathcal{G}_{a,\tau}} \sup_{\sigma>0} \sup_{\|{\boldsymbol \t}\|_0\le s}\mathbf P_{{\boldsymbol \t},P_\xi,\sigma} \big( | \hat{T} -\|{\boldsymbol \t}\|_2|/\sigma \ge u\big) \ge \frac{1-V({\mathbb P}_{ \mu},{\mathbb P}_{\bar\mu}) - \bar \mu (\|{\boldsymbol \t}\|_2 < 2\sigma_0u)}{2}. \end{eqnarray} Consider first the case $s\ge 32$. Set $u=\frac{\alpha\sqrt{s}}{4\sigma_0}$. Then \eqref{eq1:lemma_TV} and \eqref{eq1:lemma_concentration_barmu} imply that $$ V({\mathbb P}_{ \mu},{\mathbb P}_{\bar\mu}) \le e^{-\frac{3s}{16}}, \quad \bar \mu (\|{\boldsymbol \t}\|_2 < 2\sigma_0u)\le 2e^{-\frac{s}{16}}, $$ which, together with \eqref{lowerr3aa} and the fact that $s\ge 32$, yields the result. Now let $s< 32$. Then we set $u=\frac{\alpha\sqrt{s}}{8\sqrt{2}\sigma_0}$. It follows from \eqref{eq2:lemma_TV} and \eqref{eq2:lemma_concentration_barmu} that \begin{eqnarray*} 1-V({\mathbb P}_{ \mu},{\mathbb P}_{\bar\mu}) - \bar \mu (\|{\boldsymbol \t}\|_2 < 2\sigma_0u)\ge \mathbf P\Big(\mathcal{B}\big(d,\frac{s}{2d}\big)= 1\Big) = \frac{s}{2}\Big(1-\frac{s}{2d}\Big)^{d-1}. \end{eqnarray*} It is not hard to check that the minimum of the last expression over all integers $s,d$ such that $1\le s < 32$, $s\le d$, is bounded from below by a positive number independent of $d$. We conclude by combining these remarks with \eqref{lowerr3aa}.
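One way to verify the last claim is to use $\log(1-x)\ge -2x$ for $x\in[0,1/2]$: since $s\le d$ implies $\frac{s}{2d}\le \frac12$, we have, for all $1\le s<32$,
\begin{equation*}
\frac{s}{2}\Big(1-\frac{s}{2d}\Big)^{d-1} \ge \frac{s}{2}\,\exp\Big(-\frac{s(d-1)}{d}\Big) \ge \frac{1}{2}\,e^{-32}.
\end{equation*}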
\subsection{Proof of part (ii) of Proposition~\ref{prop:norm:known_sigma} and part (ii) of Proposition~\ref{prop:norm:poly:known_sigma}} We argue similarly to the proof of Theorems~\ref{theorem_lowerbound_noise_subgaussian} and~\ref{theorem_lowerbound_noise_polynomial}; in particular, we set $\alpha=(\tau/2)\log^{1/a}(ed/s)$ when proving the bound on the class ${\cal G}_{a,\tau}$, and $\alpha=(\tau/2)(d/s)^{1/a}$ when proving the bound on~${\cal P}_{a,\tau}$. In what follows, we only deal with the class ${\cal G}_{a,\tau}$ since the proof for~${\cal P}_{a,\tau}$ is analogous. Without loss of generality we assume that $\sigma=1$. To prove the lower bound with the rate $\phi^{\circ}_{\sf exp}(s,d)$, we only need to prove it for $s$ such that $(\phi^{\circ}_{\sf exp}(s,d))^{2} \le c_{0}\sqrt{d}/\log^{2/a}(ed)$ with any small absolute constant $c_{0}>0$, since the rate is increasing with $s$. Consider the measures $\mu$, $\bar{\mu}$, ${\mathbb P}_{\mu}$, ${\mathbb P}_{\bar{\mu}}$ defined in Section~\ref{subsection_proof_lowerbound_noise} with $\sigma_0=1$. Let $\xi_1$ be distributed according to the distribution $F_0$ defined in item (i) of the proof of Theorems~\ref{theorem_lowerbound_noise_subgaussian} and~\ref{theorem_lowerbound_noise_polynomial}. Using the same notation as in the proof of Theorems~\ref{theorem_lowerbound_noise_subgaussian} and~\ref{theorem_lowerbound_noise_polynomial}, we define $\tilde{P}$ as the distribution of $\tilde{\xi}_1=\sigma_1\xi_1+\alpha\delta_1 \varepsilon_1$ with $\sigma_1^2=(1+\alpha^2s/(2d))^{-1}$ where now $\delta_1$ is the Bernoulli random variable with $\mathbf P(\delta_1=1)=\frac{s}{2d}(1+\alpha^2s/(2d))^{-1}$. By construction, $\mathbf E \tilde{\xi}_1=0$ and $\mathbf E \tilde{\xi}_1^2=1$. Since the support of $F_0$ is in $[-{3}/{2}, {3}/{2}]$ one can check as in item (ii) of the proof of Theorems~\ref{theorem_lowerbound_noise_subgaussian} and~\ref{theorem_lowerbound_noise_polynomial} that $\tilde{P}\in \mathcal{G}_{a,\tau}$.
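For completeness, the normalization of $\tilde{\xi}_1$ can be checked directly: by the independence of $\xi_1,\delta_1,\varepsilon_1$ and the fact that $\xi_1$ and $\varepsilon_1$ are zero-mean,
\begin{equation*}
\mathbf E (\tilde{\xi}_1^{2})=\sigma_1^{2}\,\mathbf E(\xi_1^{2})+\alpha^{2}\,\mathbf P(\delta_1=1) = \sigma_1^{2}+\alpha^{2}\,\frac{s}{2d}\Big(1+\frac{\alpha^{2}s}{2d}\Big)^{-1} = \sigma_1^{2}\Big(1+\frac{\alpha^{2}s}{2d}\Big)=1.
\end{equation*}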
Next, analogously to \eqref{lowerr2}--\eqref{lowerr3aa} we obtain that, for any $u>0$, $$\sup_{P_\xi \in \mathcal{G}_{a,\tau}}\sup_{\|\theta\|_0\le s}\mathbf P_{{\boldsymbol \t},P_\xi,1} \big( | \hat{T} -\|{\boldsymbol \t}\|_2|\ge u\big) \ge \frac{1-V({\mathbb P}_{ \bar \mu},P_{0,\tilde{P},1}) - \bar \mu (\|{\boldsymbol \t}\|_2 < 2u)}{2}.$$ Let $\mathbf{P}_0$ and $\mathbf{P}_1$ denote the distributions of $(\xi_1,\ldots,\xi_d)$ and of $(\sigma_1\xi_1,\ldots,\sigma_1\xi_d)$, respectively. Acting as in item (i) of the proof of Theorems~\ref{theorem_lowerbound_noise_subgaussian} and~\ref{theorem_lowerbound_noise_polynomial} and using the bound $$|1-\sigma_1|\le {\alpha^{2}s}/{d} = \frac{\tau^2}{4} \frac{s}{d}\log^{2/a}(ed/s)\le C c_0 /\sqrt{d} $$ we find that $V(\mathbf{P}_0,\mathbf{P}_1)\le H(\mathbf{P}_0,\mathbf{P}_1) \le 2\kappa c_{0}^{2}$ for some $\kappa >0$. Therefore, $V(\mathbb{P}_{\mu},P_{0,\tilde{P},1})= V(\mathbf{P}_0*\mathbf{Q}, \mathbf{P}_1*\mathbf{Q})\le V(\mathbf{P}_0,\mathbf{P}_1)\le 2\kappa c_{0}^{2}$ where $\mathbf{Q}$ denotes the distribution of $(\alpha\delta_1 \varepsilon_1, \ldots, \alpha\delta_d \varepsilon_d)$. This bound and the fact that $V(\mathbb{P}_{\bar{\mu}}, P_{0,\tilde{P},1})\le V(\mathbb{P}_{\bar{\mu}},\mathbb{P}_{\mu} )+ V( \mathbb{P}_{\mu},P_{0,\tilde{P},1})$ imply $$ \sup_{P_\xi \in \mathcal{G}_{a,\tau}}\sup_{\|\theta\|_0\le s}\mathbf P_{{\boldsymbol \t},P_\xi,1} \big( | \hat{T} -\|{\boldsymbol \t}\|_2|\ge u\big) \ge \frac{1-V({\mathbb P}_{ \mu},{\mathbb P}_{\bar\mu}) - \bar \mu (\|{\boldsymbol \t}\|_2 < 2u)}{2} - \kappa c_{0}^{2}. $$ We conclude by repeating the argument after \eqref{lowerr3aa} in the proof of Theorem~\ref{theorem_lowerbound_norm_subgaussian} and choosing $c_{0}>0$ small enough to guarantee that the right-hand side of the last display is positive. 
\subsection{Proof of part (ii) of Proposition~\ref{prop:variance:gauss}} The lower bound with the rate $1/{\sqrt{d}}$ follows from the same argument as in item (i) of the proof of Theorems~\ref{theorem_lowerbound_noise_subgaussian} and~\ref{theorem_lowerbound_noise_polynomial} if we replace $F_{0}$ there by the standard Gaussian distribution. The lower bound with the rate $\frac{s}{d(1+\log_{+}(s^{2}/d))}$ follows from Lemma~\ref{lemma:lowerbound:norm:variance} and the lower bound for estimation of $\|{\boldsymbol \t}\|_{2}$ in Proposition~\ref{prop:norm:gauss}. \subsection{Proof of Proposition~\ref{proposition_suboptimality}} Assume that ${\boldsymbol \t}=0$, $\sigma=1$ and set \begin{equation*} \xi_i = \sqrt3 \varepsilon_i u_i, \end{equation*} where the $\varepsilon_i$'s and the $u_i$'s are independent, with Rademacher and uniform distribution on $[0,1]$, respectively. Then note that \begin{align}\label{43} \mathbf E_{0,P_\xi,1} \big(\hat{\sigma}_*^2-1\big)^2 &\ge \big(\mathbf E_{0,P_\xi,1} (\hat{\sigma}_*^2)-1\big)^2 = \Big(\mathbf E_{0,P_\xi,1} \Big\{ \hat{\sigma}_*^2-\frac3d \sum_{i=1}^d u_i^2\Big\} \Big)^2, \end{align} since $\mathbf E(u_i^2)=1/3$. Note also that $\hat{\sigma}_*^2=\frac{3}{d/2}\sum_{i=1}^{d/2} u_{(i)}^2$. Now, \begin{align*} \frac{1}{d/2}\sum_{i=1}^{d/2} u_{(i)}^2-\frac{1}{d}\sum_{i=1}^d u_i^2&=\frac{1}{d}\sum_{i=1}^{d/2} u_{(i)}^2-\frac{1}{d}\sum_{i=d/2+1}^d u_{(i)}^2\\ &\le \frac{1}{d}\sum_{i=1}^{d/4} u_{(i)}^2-\frac{1}{d}\sum_{i=3d/4+1}^d u_{(i)}^2\\ &\le \frac{1}{4}(u^2_{(d/4)}-u^2_{(3d/4)}). \end{align*} Since $u_{(i)}$ follows a Beta distribution with parameters $(i,d-i+1)$, we have $\mathbf E(u_{(i)}^2)=\frac{i(i+1)}{(d+1)(d+2)}$, and \begin{align*} \mathbf E_{0,P_\xi,1}\Big( \frac{1}{d/2}\sum_{i=1}^{d/2} u_{(i)}^2-\frac{1}{d}\sum_{i=1}^d u_i^2\Big)&\le \frac{1}{4}\mathbf E_{0,P_\xi,1}(u^2_{(d/4)}-u^2_{(3d/4)}) = -\frac{d}{8(d+2)} \le -\frac{1}{24}. \end{align*} This and \eqref{43} prove the proposition. 
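The bias established above can be checked numerically. The sketch below (the value $d=1000$ and the number of Monte Carlo repetitions are our illustrative choices, not from the paper) computes $\mathbf E\,\hat{\sigma}_*^2$ both exactly, via the Beta moment formula $\mathbf E(u_{(i)}^2)=\frac{i(i+1)}{(d+1)(d+2)}$, and by simulation; both values are close to $1/4$, so the bias is indeed bounded away from zero.

```python
import numpy as np

# Numerical check of the bias argument above (d and the number of Monte Carlo
# repetitions are our illustrative choices).  Under theta = 0, sigma = 1 we have
# xi_i = sqrt(3) eps_i u_i, and sigma_*^2 = (3/(d/2)) * sum of the d/2 smallest u_(i)^2.
d = 1000

# Exact expectation via E[u_(i)^2] = i(i+1)/((d+1)(d+2)), u_(i) ~ Beta(i, d-i+1).
i = np.arange(1, d // 2 + 1)
exact = (6.0 / d) * np.sum(i * (i + 1.0)) / ((d + 1) * (d + 2))

# Monte Carlo cross-check of the same quantity.
rng = np.random.default_rng(0)
u = rng.random((2000, d))
sigma_star2 = (3.0 / (d / 2)) * np.sort(u**2, axis=1)[:, : d // 2].sum(axis=1)
mc = sigma_star2.mean()
```

Both `exact` and `mc` come out near $0.25$, which is consistent with (and much stronger than) the bound $\mathbf E\,\hat{\sigma}_*^2-1\le-1/8$ implied by the proof.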
\section{Lemmas} \subsection{Lemmas for the upper bounds} \begin{lemma}\label{lemma_esp4_subgaussian} Let $z_1,\ldots, z_d\overset{iid}{\sim} P$ with $P \in\mathcal{G}_{a,\tau}$ for some $a,\tau>0$ and let $z_{(1)}\le\cdots\le z_{(d)}$ be the order statistics of $|z_1|,\ldots, |z_d|$. Then for $u>2^{1/a}\tau\vee 2$, we have \begin{equation}\label{eq:lemma_esp4_subgaussianA} \mathbf P \Big( z_{(d-j+1)}\le u \log^{1/a}\big(ed/j\big) , \forall\; j=1,\ldots, d \Big) \ge 1 - 4e^{-u^a/2}, \end{equation} and, for any $r>0$, \begin{equation}\label{eq:lemma_esp4_subgaussian} \mathbf E \big( z_{(d-j+1)}^r \big)\le C \log^{r/a}\big(ed/j\big), \qquad j=1,\dots, d, \end{equation} where $C>0$ is a constant depending only on $\tau$, $a$ and $r$. \end{lemma} \begin{proof} Using the definition of $\mathcal{G}_{a,\tau}$, we get that, for any $t\ge2$, \begin{equation*} \mathbf P\big( z_{(d-j+1)}\ge t\big)\le \binom{d}{j}\mathbf P^j(|z_1|\ge t)\le 2\Big(\frac{ed}{j}\Big)^j e^{-j(t/\tau)^a},\qquad j=1,\dots, d. \end{equation*} Thus, for $v\ge 2^{1/a}\vee (2/\tau)$ we have \begin{equation}\label{eqxx} \mathbf P ( z_{(d-j+1)}\ge v\tau \log^{1/a}({ed}/{j}))\le 2\Big(\frac{ed}{j}\Big)^{j(1-v^a)} \le 2 e^{-jv^a/2},\qquad j=1,\dots, d, \end{equation} and $$ \mathbf P \Big( \exists \ j\in\{1,\ldots, d\}: z_{(d-j+1)}\ge v \tau\log^{1/a}(ed/j) \Big)\le 2\sum_{j=1}^d e^{-jv^a/2}\le 4e^{-v^a/2} $$ implying \eqref{eq:lemma_esp4_subgaussianA}. Finally, \eqref{eq:lemma_esp4_subgaussian} follows by integrating \eqref{eqxx}. \end{proof} \begin{lemma}\label{lemma_esp4_polynomial} Let $z_1,\ldots, z_d\overset{iid}{\sim} P$ with $P \in\mathcal{P}_{a,\tau}$ for some $a,\tau>0$ and let $z_{(1)}\le\cdots\le z_{(d)}$ be the order statistics of $|z_1|,\ldots, |z_d|$. 
Then for $u> (2 e)^{1/a} \tau\vee 2$, we have \begin{equation}\label{eq:lemma_esp4_polynomial} \mathbf P \Big( z_{(d-j+1)}\le u \Big(\frac{d}{j} \Big)^{1/a} , \forall\; j=1,\ldots, d \Big) \ge 1-\frac{2 e \tau^a}{u^a} \end{equation} and, for any $r\in (0,a)$, \begin{equation}\label{eq2:lemma_esp4_polynomial} \mathbf E \big( z_{(d-j+1)}^r \big)\le C \Big(\frac{d}{j}\Big)^{r/a}, \qquad j=1,\dots, d, \end{equation} where $C>0$ is a constant depending only on $\tau$, $a$ and $r$. \end{lemma} \begin{proof} Using the definition of $\mathcal{P}_{a,\tau}$, we get that, for any $t\ge2$, \begin{equation*} \mathbf P\big( z_{(d-j+1)}\ge t\big)\le \Big(\frac{ed}{j}\Big)^j \Big(\frac{\tau}{t}\Big)^{ja}. \end{equation*} Set $t_j=u \Big(\frac{d}{j} \Big)^{1/a}$ and $q=e(\tau/u)^a$. The assumption on $u$ yields that $q<1/2$, so that $$ \mathbf P \Big( \exists \ j\in\{1,\ldots, d\}: z_{(d-j+1)}\ge u \Big(\frac{d}{j} \Big)^{1/a} \Big)\le \sum_{j=1}^d\Big(\frac{ed}{j}\Big)^j \Big(\frac{\tau}{t_j}\Big)^{ja} = \sum_{j=1}^d q^j\le 2q. $$ This proves \eqref{eq:lemma_esp4_polynomial}. The proof of \eqref{eq2:lemma_esp4_polynomial} is analogous to that of~\eqref{eq:lemma_esp4_subgaussian}. \end{proof} \begin{lemma}\label{lemma:sum} For all $a>0$ and all integers $1\le s\le d$, $$\sum_{i=1}^{s} \log^{2/a}\big(ed/i\big) \le Cs \log^{2/a}\Big(\frac{ed}{s}\Big)$$ where $C>0$ depends only on $a$. \end{lemma} The proof is simple and we omit it. \subsection{Lemmas for the lower bounds} For two probability measures ${\rm P}_1$ and ${\rm P}_2$ on a measurable space $(\Omega, \mathcal{U})$, we denote by $V({\rm P}_1,{\rm P}_2)$ the total variation distance between ${\rm P}_1$ and ${\rm P}_2$: $$V({\rm P}_1,{\rm P}_2)=\sup_{B\in \mathcal{U}}\left|{\rm P}_1(B)-{\rm P}_2(B)\right|.$$ \begin{lemma}[Deviations of the binomial distribution]\label{binomial} Let $\mathcal{B}(d,p)$ denote the binomial random variable with parameters $d$ and~$p\in (0,1)$. 
Then, for any $\lambda>0$, \begin{align}\label{binomial1} &\mathbf P\big(\mathcal{B}(d,p)\ge\lambda \sqrt{d}+dp\big) \le \exp\bigg(-\frac{\lambda^{2}}{2p(1-p)\big(1+\frac{\lambda}{3p\sqrt{d}}\big)}\bigg),\\ \label{binomial2} &\mathbf P\big(\mathcal{B}(d,p)\le -\lambda \sqrt{d}+dp\big) \le \exp\bigg(-\frac{\lambda^{2}}{2p(1-p)}\bigg). \end{align} \end{lemma} Inequality \eqref{binomial1} is a combination of formulas (3) and (10) on pages 440--441 in~\cite{ShorackWellner1986}. Inequality \eqref{binomial2} is formula (6) on page 440 in~\cite{ShorackWellner1986}. \begin{lemma}\label{lemma_TV} Let ${\mathbb P}_\mu$ and ${\mathbb P}_{\bar{\mu}}$ be the probability measures defined in~(\ref{definition_apriori}). The total variation distance between these two measures satisfies \begin{equation}\label{eq1:lemma_TV} V({\mathbb P}_\mu,{\mathbb P}_{\bar{\mu}}) \le \mathbf P\Big(\mathcal{B}\Big(d,\frac{s}{2d}\Big)>s\Big) \le e^{-\frac{3s}{16}}, \end{equation} and \begin{equation}\label{eq2:lemma_TV} V({\mathbb P}_\mu,{\mathbb P}_{\bar{\mu}}) \le 1- \mathbf P\Big(\mathcal{B}\Big(d,\frac{s}{2d}\Big)= 0\Big)- \mathbf P\Big(\mathcal{B}\Big(d,\frac{s}{2d}\Big)= 1\Big). \end{equation} \end{lemma} \begin{proof} We have $$ V({\mathbb P}_\mu,{\mathbb P}_{\bar{\mu}}) = \sup_{B}\left|\int\mathbf P_{{\boldsymbol \t},U,1} (B)d\mu({\boldsymbol \t})-\int \mathbf P_{{\boldsymbol \t},U,1}(B)d\bar{\mu}({\boldsymbol \t})\right| \le \sup_{0\le f\le 1}\left|\int f d\mu-\int f d\bar{\mu}\right| = V(\mu,\bar{\mu}). $$ Furthermore, $V(\mu,\bar{\mu}) \le \mu(\Theta_s^c)$ since for any Borel subset $B$ of $\mathbb R^d$ we have $\big|\mu(B)-\bar{\mu}(B)\big|\le \mu(B\cap\Theta_s^c)$. Indeed, $$ \mu(B)-\bar{\mu}(B)\le \mu(B)-\mu(B\cap \Theta_s)= \mu(B\cap \Theta_s^c) $$ and $$ \bar{\mu}(B)-\mu(B) = \frac{\mu(B\cap \Theta_s)}{\mu(\Theta_s)}-\mu(B\cap \Theta_s)- \mu(B\cap \Theta_s^c) \ge - \mu(B\cap \Theta_s^c). 
$$ Thus, \begin{equation}\label{eq3:lemma_TV} V({\mathbb P}_\mu,{\mathbb P}_{\bar{\mu}}) \le \mu(\Theta_s^c) = \mathbf P\Big(\mathcal{B}\Big(d,\frac{s}{2d}\Big)>s\Big). \end{equation} Combining this inequality with \eqref{binomial1} we obtain \eqref{eq1:lemma_TV}. To prove \eqref{eq2:lemma_TV}, we use again \eqref{eq3:lemma_TV} and notice that $\mathbf P\Big(\mathcal{B}\Big(d,\frac{s}{2d}\Big)>s\Big)\le \mathbf P\Big(\mathcal{B}\Big(d,\frac{s}{2d}\Big)\ge 2\Big)$ for any integer $s\ge 1$. \end{proof} \begin{lemma}\label{lemma_concentration_barmu} Let $\bar{\mu}$ be defined in \eqref{definition_barmu} with some $\alpha>0$. Then \begin{equation}\label{eq1:lemma_concentration_barmu} \bar{\mu} \Big(\|{\boldsymbol \t}\|_2< \frac{\alpha}{2}\sqrt{s} \Big) \le 2e^{-\frac{s}{16}}, \end{equation} and, for all $s\le 32$, \begin{equation}\label{eq2:lemma_concentration_barmu} \bar{\mu} \Big(\|{\boldsymbol \t}\|_2< \frac{\alpha\sqrt{s}}{4\sqrt{2}} \Big) = \mathbf P\Big(\mathcal{B}\big(d,\frac{s}{2d}\big)= 0\Big). \end{equation} \end{lemma} \begin{proof} First, note that \begin{equation}\label{eq:lem:barmu} \mu \Big(\|{\boldsymbol \t}\|_2< \frac{\alpha}{2}\sqrt{s} \Big) = \mathbf P\Big(\mathcal{B}\big(d,\frac{s}{2d}\big)< \frac{s}{4}\Big) \le e^{-\frac{s}{16}} \end{equation} where the last inequality follows from \eqref{binomial2}. Next, inspection of the proof of Lemma~\ref{lemma_TV} yields that $\bar{\mu}(B)\le {\mu}(B) + e^{-\frac{3s}{16}}$ for any Borel set~$B$. Taking here $B= \{\|{\boldsymbol \t}\|_2< \alpha\sqrt{s}/2\}$ and using~\eqref{eq:lem:barmu} proves \eqref{eq1:lemma_concentration_barmu}. To prove \eqref{eq2:lemma_concentration_barmu}, it suffices to note that $\mu \Big(\|{\boldsymbol \t}\|_2< \frac{\alpha\sqrt{s}}{4\sqrt{2}} \Big) = \mathbf P\Big(\mathcal{B}\big(d,\frac{s}{2d}\big)< \frac{s}{32}\Big)$. 
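The first step \eqref{eq:lem:barmu} can be illustrated by simulation. Consistently with the identity $\mu\big(\|{\boldsymbol \t}\|_2<\alpha\sqrt{s}/2\big)=\mathbf P\big(\mathcal{B}(d,\frac{s}{2d})<\frac{s}{4}\big)$, each coordinate of ${\boldsymbol \t}$ is drawn as $\alpha\delta_j\varepsilon_j$ with $\delta_j$ Bernoulli of parameter $s/(2d)$ and $\varepsilon_j$ Rademacher, so that $\|{\boldsymbol \t}\|_2=\alpha\sqrt{N}$ with $N\sim\mathcal{B}(d,\frac{s}{2d})$. A sketch (the parameter values below are our illustrative choices):

```python
import numpy as np

# Simulation of the prior: theta_j = alpha * delta_j * eps_j, so that
# ||theta||_2 = alpha * sqrt(N) with N ~ Binomial(d, s/(2d)).
# The values of d, s, alpha and the number of repetitions are ours.
rng = np.random.default_rng(2)
d, s, alpha, reps = 500, 32, 1.0, 20000
delta = (rng.random((reps, d)) < s / (2 * d)).astype(float)
eps = rng.choice([-1.0, 1.0], size=(reps, d))
norms = np.linalg.norm(alpha * delta * eps, axis=1)
# Empirical frequency of the small-norm event, to be compared with e^{-s/16}.
frac_small = np.mean(norms < alpha * np.sqrt(s) / 2)
```

The empirical frequency `frac_small` stays well below the bound $e^{-s/16}\approx 0.135$ for these parameters.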
\end{proof} \begin{lemma}\label{lemma_density} There exists a probability density $f_0:\mathbb R\to [0, \infty)$ with the following properties: $f_0$ is continuously differentiable, symmetric about 0, supported on $[-3/2,3/2]$, with variance~1 and finite Fisher information $I_{f_0}= \int (f_0'(x))^2(f_0(x))^{-1}dx$. \end{lemma} \begin{proof} Let $K:\mathbb R\to [0, \infty)$ be any probability density, which is continuously differentiable, symmetric about 0, supported on $[-1,1]$, and has finite Fisher information $I_K$, for example, the density $K(x)= \cos^2(\pi x/2) \mathds{1}_{|x|\le 1}$. Define $f_0(x)= [K_h(x+(1-\varepsilon)) + K_h(x-(1-\varepsilon))]/2$ where $h>0$ and $\varepsilon \in (0,1)$ are constants to be chosen, and $K_h(u)=K(u/h)/h$. Clearly, we have $I_{f_0}<\infty$ since $I_{K}<\infty$. It is straightforward to check that the variance of $f_0$ satisfies $\int x^2 f_0(x) dx = (1-\varepsilon)^2 + h^2 \sigma_K^2$ where $\sigma_K^2 = \int u^2 K(u)du$. Choosing $h=\sqrt{2\varepsilon-\varepsilon^2}/\sigma_K$ and $\varepsilon \le \sigma_K^2/8$ guarantees that $\int x^2 f_0(x) dx = 1$ and the support of $f_0$ is contained in $[-3/2,3/2]$. \end{proof} \begin{lemma}\label{lemma:lowerbound:norm:variance} Let $\tau>0$, $a>4$ and let $s,d$ be integers satisfying $1\leq s \leq d$. Let $\mathcal{P}$ be any subset of $\mathcal{P}_{a,\tau}$. 
Assume that for some function $\phi(s,d)$ of $s$ and $d$ and for some positive constants $c_{1},c_{2},c'_{1},c'_{2}$ we have \begin{equation}\label{eq1:lemma:lowerbound:norm:variance} \underset{\hat{T}}{\inf} \underset{P_\xi \in \mathcal{P}}{\sup}\, \underset{\sigma>0}{\sup}\,\underset{\|{\boldsymbol \t}\|_{0}\leq s}{\sup}\mathbf{P}_{{\boldsymbol \t}, P_{\xi},\sigma} \left( \left| \frac{\hat{T}}{\sigma^{2}}-1\right|\geq \frac{c_{1}}{\sqrt{d}}\right) \geq c_1^{'}, \end{equation} and \begin{equation}\label{eq2:lemma:lowerbound:norm:variance} \underset{\hat{T}}{\inf} \underset{P_\xi \in \mathcal{P}}{\sup}\, \underset{\sigma>0}{\sup}\, \underset{\|{\boldsymbol \t}\|_{0}\leq s}{\sup}\mathbf{P}_{{\boldsymbol \t}, P_{\xi},\sigma}\left( \left| \frac{\hat{T}-\|{\boldsymbol \t}\|_{2}}{\sigma}\right|\geq c_{2}\phi(s,d)\right) \geq c_2^{'}. \end{equation} Then $$ \underset{\hat{T}}{\inf} \underset{P_\xi \in \mathcal{P}}{\sup}\, \underset{\sigma>0}{\sup}\, \underset{\|{\boldsymbol \t}\|_{0}\leq s}{\sup}\mathbf{P}_{{\boldsymbol \t}, P_{\xi},\sigma}\left( \left| \frac{\hat{T}}{\sigma^{2}}-1\right|\geq c_{3}\max\left(\frac{1}{\sqrt{d}},\frac{\phi^{2}(s,d)}{d}\right)\right) \geq c_3^{'} $$ for some constants $c_{3},c_{3}'>0$. \end{lemma} \begin{proof} Let $\hat{\sigma}^{2}$ be an arbitrary estimator of $\sigma^{2}$. Based on $\hat{\sigma}^{2}$, we can construct an estimator $\hat{T}= \hat{N}^*$ of $\|{\boldsymbol \t}\|_{2}$ defined by formula \eqref{eq:C}, case $s>\sqrt{d}$. 
It follows from \eqref{g2}, \eqref{g3} and \eqref{eq2:lemma:lowerbound:norm:variance} that \begin{align*} c_{2}'&\leq \mathbf{P}\left(2|({\boldsymbol \t},{\boldsymbol \xi})| \geq c_{2}\|{\boldsymbol \t}\|_{2}\phi(s,d)/3\right) + \mathbf{P}\left(\sqrt{|\|{\boldsymbol \xi}\|_{2}^{2}-d|} \geq c_{2}\phi(s,d)/3\right) \\ & \qquad + \mathbf{P}\left( \sqrt{d\left|\frac{\hat{\sigma}^{2}}{\sigma^{2}}-1\right|} \geq c_{2}\phi(s,d)/3\right), \end{align*} where we write for brevity $\mathbf{P} =\mathbf{P}_{{\boldsymbol \t}, P_{\xi},\sigma}$. Hence $$ \mathbf{P}\left( \left|\frac{\hat{\sigma}^{2}}{\sigma^{2}}-1\right| \geq c_{2}^{2}\phi^{2}(s,d)/(9d)\right) \geq c_{2}'-c^{*}\max\left(\frac{d}{\phi^{4}(s,d)}, \frac{1}{\phi^{2}(s,d)}\right) $$ for some constant $c^{*}>0$ depending only on $a$ and $\tau$. If $\phi^{2}(s,d) > \max\left( \sqrt{\frac{2c^{*}d}{c_{2}'}},\frac{2c^{*}}{c_{2}'} \right)$, then $$ \mathbf{P}\left( \left|\frac{\hat{\sigma}^{2}}{\sigma^{2}}-1\right|\geq C \max\left( \frac{1}{\sqrt{d}},\frac{\phi^{2}(s,d)}{d}\right)\right) \geq c_{2}'/2. $$ If $\phi^{2}(s,d) \le \max\left( \sqrt{\frac{2c^{*}d}{c_{2}'}},\frac{2c^{*}}{c_{2}'} \right)$, then $\max\left( \frac{1}{\sqrt{d}},\frac{\phi^{2}(s,d)}{d}\right)$ is of order $\frac{1}{\sqrt{d}}$ and the result follows from \eqref{eq1:lemma:lowerbound:norm:variance}. \end{proof} \section{Acknowledgements} The work of O.~Collier has been conducted as part of the project Labex MME-DII (ANR11-LBX-0023-01). The work of M.~Ndaoud and A.~B.~Tsybakov was supported by GENES and by the French National Research Agency (ANR) under the grants IPANEMA (ANR-13-BSH1-0004-02) and Labex Ecodec (ANR-11-LABEX-0047).
\section{Introduction} The theory of automata on infinite trees is rooted in Rabin's seminal theorem, which establishes an effective correspondence between the monadic second order logic (MSO) theory of the infinite binary tree and non-deterministic automata on this tree~\cite{rabinsem}. In this correspondence, the satisfiability problem for the logic is dual to the emptiness problem for the automata, and the two problems are mutually reducible. This elegant setting has been partially extended to probabilistic logics~\cite{LS1982,brazdil2008controller,DBLP:conf/lfcs/MichalewskiM16,DBLP:journals/corr/MichalewskiMB16,DBLP:conf/icalp/Bojanczyk16} and to automata with probabilistic winning conditions~\cite{rabinsem, pazbook,DBLP:journals/jacm/BaierGB12,DBLP:journals/tocl/CarayolHS14,DBLP:conf/icalp/Bojanczyk16}. In this paper we make another step in this direction: we show a correspondence between the logic \ctls\allop\ and alternating nonzero automata with limited choice. Moreover, we show that the emptiness problem for these automata is decidable and obtain as a corollary the decidability of the satisfiability of the logic. \paragraph*{Automata.} Alternating nonzero automata are an alternating version of \emph{non-deterministic nonzero automata} introduced in~\cite{DBLP:journals/corr/BojanczykGK17}, which themselves are equivalent to \emph{non-deterministic zero automata} introduced in~\cite{DBLP:conf/icalp/Bojanczyk16}. An alternating nonzero automaton takes as input a binary tree. Some states of the automaton are controlled by Eve, while other states are controlled by Adam, and the player controlling the current state chooses the next transition. 
Some transitions are \emph{local transitions}, in which case the automaton stays on the same node of the input tree, while others are \emph{split transitions}, in which case the automaton proceeds to the left son or to the right son of the current node with equal probability $\frac{1}{2}$. This interaction between Eve and Adam is seen as a game where Eve and Adam play according to some strategies. Once the strategies are fixed, one obtains a Markov chain whose trajectories are all possible plays consistent with the strategies. The winner is determined with respect to winning conditions introduced in~\cite{DBLP:conf/icalp/Bojanczyk16,DBLP:journals/corr/BojanczykGK17}, using a total order on the set of states (used to compute the limsup of a play, which is the largest state seen infinitely often during the play) and three subsets of states, respectively called the \emph{sure}, \emph{almost-sure} and \emph{positive states}. Eve wins if and only if the three acceptance conditions hold: {\noindent \bf sure winning:} every play has limsup in sure states; and {\noindent \bf almost-sure winning:} almost-every play has limsup in almost-sure states; and {\noindent \bf positive winning:} whenever the play enters a positive state, there is positive probability that the play never exits positive states. The input tree is accepted by the alternating automaton iff Eve has a winning strategy. Alternating nonzero automata generalize classical alternating automata with parity conditions~\cite{Chandra:1981:ALT:322234.322243, MULLER1987267} (when all states are almost-sure and positive) as well as non-deterministic nonzero automata~\cite{DBLP:journals/corr/BojanczykGK17} (when Eve controls all states). We do not know whether the emptiness problem for these automata is decidable or not; however, we show that the answer is positive for the subclass of alternating nonzero automata with \emph{\limch\ for Adam}. 
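For ultimately periodic plays the limsup is easy to compute: it is the largest state occurring in the repeated cycle, the states of the prefix being irrelevant since they occur only finitely often. A toy sketch (states are modelled as integers, matching the total order on $Q$; the encoding is ours):

```python
# Limsup of an ultimately periodic play given as (prefix, cycle): the prefix
# states occur finitely often, so only the cycle matters.
def limsup_state(prefix, cycle):
    return max(cycle)

# Sure winning condition on a finite family of ultimately periodic plays:
# every play must have its limsup in F_forall.
def sure_condition_holds(plays, F_forall):
    return all(limsup_state(p, c) in F_forall for p, c in plays)
```

For example, a play with prefix $[5,4]$ and cycle $[1,2]$ has limsup $2$, even though the larger state $5$ is visited once.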
In these automata, some choices of Adam are canonical, at most one in every state, and Adam may perform at most a bounded number of non-canonical choices during a single play. We establish some properties of alternating nonzero automata with \limch\ for Adam. \begin{itemize} \item First, we show that the emptiness problem for alternating nonzero automata with \limch\ for Adam\ is in {\sc nexptime} $\cap$ co-{\sc nexptime}\ (Theorem~\ref{theo:recogalt}). The proof is an {\sc exptime}\ reduction to the emptiness problem for non-deterministic automata. This proof relies on the positional determinacy of the acceptance games for Eve (Lemma~\ref{lem:det}) and a characterization of positional winning strategies for Eve (Lemmas~\ref{caracsure},~\ref{defindex} and~\ref{lem:caracnonzero}). \item Second, we show that in the particular case where the sure winning condition is a B\"uchi condition, emptiness of non-deterministic nonzero automata is in {\sc ptime}\ (Theorem~\ref{theo:complexitiyemptiness}); hence, in case of a B\"uchi sure winning condition, emptiness of alternating nonzero automata with \limch\ for Adam\ is in {\sc exptime}\ (Theorem~\ref{theo:recogalt}). \end{itemize} \paragraph*{Logic.} The temporal logic {\ctl}$^*$, introduced by Emerson and Halpern~\cite{emerson1986sometimes}, and its fragments {CTL}\ and LTL are prominent tools to specify properties of discrete event systems. A variant of {\ctl}$^*$\ is the logic p\ctls~\cite{hansson1994logic}, in which the universal and existential path quantifiers are \emph{replaced} by probabilistic path quantifiers which set upper or lower bounds on the probability of a path property in a Markov chain. For example, the formula $\Proba_{\geq \frac{1}{2}}(FGa)$ specifies that with probability at least $\frac{1}{2}$ eventually all the visited states are labelled with $a$. To our knowledge, the satisfiability problem for this logic is open. 
However, for the qualitative fragment of p\ctls, where only two probabilistic quantifiers $\Proba_{>0}$ and $\Proba_{=1}$ are available, the satisfiability is decidable~\cite{brazdil2008controller}. In a variant of p\ctls\ called pECTL, the path subformulas are replaced by deterministic B\"uchi automata, and the satisfiability of the qualitative fragment is 2-{\sc exptime}-complete~\cite{brazdil2008controller}, the same complexity as for {\ctl}$^*$~\cite{vardi1985improved}. Remark that neither p\ctls\ nor pECTL\ includes the path operators $\forall$ and $\exists$; thus these two logics are incomparable in expressivity with {\ctl}$^*$. For example, on the alphabet $\{a,b\}$, the {\ctl}$^*$\ formula $\phi_1=\forall F G \neg b$ and the p\ctls\ formula $\phi_2=\proba_{=1}( F G \neg b)$ specify that \emph{every} branch, respectively \emph{almost-every} branch, of the model has finitely many $b$'s. Neither can $\phi_1$ be expressed in p{\ctl}$^*$, nor $\phi_2$ in {\ctl}$^*$. \smallskip In this paper, we consider the logic \ctls\allop, which extends both {\ctl}$^*$\ and qualitative p\ctls, and we establish several properties of this logic. \begin{itemize} \item The satisfiability by an arbitrary $\Sigma$-labelled Markov chain reduces to the satisfiability by a $(\Sigma\cup \{\circ\})$-labelled binary tree, with $\circ$ a fresh letter (Theorem~\ref{theo:reduc}). \item The satisfiability of \ctls\allop\ reduces to the emptiness of alternating nonzero automata with finite choice for Adam; thus it is decidable in 3-{\sc nexptime} $\cap$ co-3-{\sc nexptime}. In the variant \ECTL\allop, where path formulas are deterministic B\"uchi automata, this reduction gives a 2-{\sc nexptime} $\cap$ co-2-{\sc nexptime}\ complexity and for the fragment {CTL}$[\PCTLE,\PCTLA,\PCTLPp,\PCTLPo]$\ the complexity is {\sc nexptime} $\cap$ co-{\sc nexptime}\ (Theorem~\ref{theo:pctls}). 
\item For the fragments {\ctl}$^*$$[\PCTLPp,\PCTLPo]$, ECTL$[\PCTLPp,\PCTLPo]$\ and {CTL}$[\PCTLPp,\PCTLPo]$\ (i.e. qualitative p{\ctl}$^*$, pECTL\ and p{CTL}{} respectively), the $F_\forall$ acceptance condition of the automaton is a B\"uchi condition and we retrieve the optimal complexity bounds of~\cite{brazdil2008controller,Brazdil2008}, i.e. 3-{\sc exptime}, 2-{\sc exptime}\ and {\sc exptime}, respectively. \end{itemize} \paragraph*{Organization of the paper.} Section~\ref{sec:nonzero} introduces alternating nonzero automata; an example is given in Section~\ref{sec:example}. Section~\ref{sec:nondet} focuses on non-deterministic automata and provides an optimal algorithm to decide emptiness when the $F_\forall$ condition is B\"uchi. In Section~\ref{sec:emptiness} we prove that emptiness is decidable (Theorem~\ref{theo:recogalt}) when Adam has limited choice. Section~\ref{sec:pctls} presents our complexity results for the satisfiability of \ctls\allop\ and its variants and fragments. \section{Alternating nonzero automata}\label{sec:nonzero} An alternating nonzero automaton on a finite alphabet $\Sigma$ is a finite-state machine processing binary trees, equipped with a game semantics: every tree is either accepted or rejected by the machine depending on who wins the acceptance game on the tree. \paragraph*{Trees.} A $\Sigma$-labelled binary tree is a function $t:\{0,1\}^*\to\Sigma$. An element $n\in\{0,1\}^*$ is called a \emph{node} of the tree and has exactly two sons $n0$ and $n1$. We use the usual notions of ancestors and descendants. A node $n'$ is \emph{(strictly) below} $n$ if $n$ is a (strict) prefix of $n'$. A \emph{path} in the tree is a finite or infinite sequence of nodes $n_0,n_1,\ldots$ such that for every $k$ the node $n_{k+1}$ is a son of the node $n_k$. A branch $b$ is an element of $\{0,1\}^\omega$. If a node $n$ is a prefix of $b$, we say that $n$ \emph{belongs} to $b$ or that $b$ \emph{visits} $n$. 
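Under the uniform measure on branches (an infinite random walk choosing each direction with probability $\frac{1}{2}$ at every step), a branch visits a fixed node $n$ with probability $2^{-|n|}$. A quick Monte Carlo sketch (the node and sample size are our illustrative choices):

```python
import random

# A random branch prefix of given depth: each bit is chosen uniformly, so a
# fixed node n of depth |n| is visited with probability 2^{-|n|}.
def random_branch_prefix(depth, rng):
    return "".join(str(rng.randint(0, 1)) for _ in range(depth))

rng = random.Random(3)
n = "0110"  # a node of depth 4
hits = sum(random_branch_prefix(4, rng) == n for _ in range(100000)) / 100000
# hits is close to 2**-4 = 0.0625
```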
The set of branches is equipped with the uniform probability measure, denoted $\mu$, corresponding to an infinite random walk taking at each step either direction $0$ or $1$ with equal probability $\frac{1}{2}$. \paragraph*{Automata.} An alternating nonzero automaton on alphabet $\Sigma$ is presented as a tuple \[ \mathcal{A}=(\Astates,q_0,\Astates_E,\Astates_A,\to, F_\forall, F_1,F_{>0} )\text{ where:} \] \begin{itemize} \item $Q$ is a finite set of states, equipped with a total order $\leq$, containing the initial state $q_0$. \item $(\Astates_E,\Astates_A)$ is a partition of $\Astates$ into Eve and Adam states. \item $\to$ is the set of transitions; there are two types of transitions: \begin{itemize} \item \emph{local transitions} are tuples $(q,a,q')$ with $q,q'\in\Astates$ and $a\in \Sigma$, denoted $q\to_a q'$. \item \emph{split transitions} are tuples $(q,a,q_0,q_1)\in \Astates\times \Sigma\times Q^2$, denoted $q\to_a (q_0,q_1)$. \end{itemize} \item $F_\forall$, $F_1$ and $F_{>0}$ are subsets of $Q$ defining the acceptance condition. \end{itemize} The input of such an automaton is an infinite binary tree $t : \{0,1\}^* \to \Sigma$. The source (resp.\ the target) of a local transition $q\to_a q'$ is $q$ (resp.\ $q'$). The source (resp.\ the targets) of a split transition $q \to_a (q_0,q_1)$ is $q$ (resp.\ $q_0$ and $q_1$). A state is said to be controlled by Eve or Adam depending on whether it belongs to $\Astates_E$ or $\Astates_A$. The controller of a transition is the controller of its source state. We always assume that \begin{itemize} \item[{\bf (HC)}] the automaton is {\bf complete}: for every state $q$ and letter $a$ there is at least one transition with source $q$ on $a$. \end{itemize} The (HC) condition makes it easier to define the game semantics of the automaton. \paragraph*{Game semantics.} The acceptance of an input binary tree by the automaton is defined by means of a stochastic game between Eve and Adam called the \emph{acceptance game}. 
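The tuple defining the automaton can be encoded directly. The following sketch (all names are ours, not the paper's notation) stores the components and checks the completeness hypothesis {\bf (HC)}:

```python
from dataclasses import dataclass

# Illustrative container for the tuple (Q, q0, Q_E, Q_A, ->, F_forall, F_1, F_>0).
@dataclass
class NonzeroAutomaton:
    states: tuple          # listed in increasing order w.r.t. the total order
    q0: str
    eve: frozenset         # states controlled by Eve
    adam: frozenset        # states controlled by Adam
    local: frozenset       # local transitions (q, a, q2)
    split: frozenset       # split transitions (q, a, r0, r1)
    F_forall: frozenset
    F_1: frozenset
    F_pos: frozenset

    def is_complete(self, alphabet):
        """Hypothesis (HC): every (state, letter) pair has an outgoing transition."""
        return all(
            any(t[0] == q and t[1] == a for t in self.local)
            or any(t[0] == q and t[1] == a for t in self.split)
            for q in self.states for a in alphabet
        )

# A one-state automaton over {a, b} that splits on every letter.
A = NonzeroAutomaton(
    states=("q",), q0="q", eve=frozenset({"q"}), adam=frozenset(),
    local=frozenset(),
    split=frozenset({("q", "a", "q", "q"), ("q", "b", "q", "q")}),
    F_forall=frozenset({"q"}), F_1=frozenset({"q"}), F_pos=frozenset({"q"}),
)
```

The automaton `A` is complete over $\{a,b\}$ but, as expected, not over any larger alphabet.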
The game of acceptance of a binary tree $t:\{0,1\}^*\to \Sigma$ by $\mathcal{A}$ is a two-player stochastic game with perfect information played by two strategic players Eve and Adam. The vertices of the game are all pairs $(n,q)$ where $n\in\{0,1\}^*$ is a node of the infinite binary tree and $q$ is a state of the automaton. The game starts in the initial vertex $(\epsilon,q_0)$. Each vertex $(n,q)$ is controlled by either Eve or Adam depending on whether $q\in \Astates_E$ or $q \in \Astates_A$. The controller of the current state chooses any transition with source $q$ and letter $t(n)$. Intuitively, depending on whether the transition is a local or a split transition, the automaton stays on the current node $n$ or moves with equal probability $\frac{1}{2}$ to either node $n0$ or $n1$. If the transition is a local transition $q\to_{t(n)} q'$, the new vertex of the game is $(n,q')$. If the transition is a split transition $q \to_{t(n)} (r_0,r_1)$ then the new vertex is chosen randomly with equal probability $\frac{1}{2}$ between vertices $(n0,r_0)$ or $(n1,r_1)$. A play is a finite or infinite sequence of vertices $\pi=(n_0,q_0)(n_1,q_1)\ldots $. We denote $\first(\pi) = (n_0,q_0)$ and $\last(\pi) = (n_k,q_k)$ (for finite plays). A strategy for Eve associates with every finite play whose last vertex $(n_k,q_k)$ is controlled by Eve a transition with source $q_k$ and letter $t(n_k)$ (such a transition always exists since the automaton is complete). Strategies for Adam are defined in a symmetric way. Strategies of Eve are usually denoted $\sigma$ while strategies for Adam are denoted $\tau$. \paragraph*{Measuring probabilities.} Once both players Eve and Adam have chosen some strategies $\sigma$ and $\tau$, this naturally defines a non-homogeneous Markov chain whose states are the vertices of the game. 
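The dynamics just described can be sketched by a toy simulation (the encoding is ours, not from the paper): a local transition keeps the current node, while a split transition appends a uniformly random direction bit to it.

```python
import random

# Toy simulation of a play: strategies are collapsed into a single map delta
# from (state, letter) to a transition, encoded as ('local', q2) or
# ('split', r0, r1).  Nodes are bit strings, the tree is a function node -> letter.
def simulate_play(tree, delta, q0, steps, rng):
    node, q = "", q0
    play = [(node, q)]
    for _ in range(steps):
        tr = delta[(q, tree(node))]
        if tr[0] == "local":
            q = tr[1]                      # stay on the same node
        else:
            d = rng.randint(0, 1)          # split: random son, probability 1/2 each
            node, q = node + str(d), tr[1 + d]
        play.append((node, q))
    return play

# With only split transitions, after N steps the play sits at depth N,
# and each finite play has probability 2^{-N}.
rng = random.Random(0)
delta = {("q", "a"): ("split", "q", "q")}
play = simulate_play(lambda n: "a", delta, "q", 10, rng)
```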
According to the Ionescu-Tulcea theorem, if we equip the set of plays with the $\sigma$-field generated by cylinders, then there is a unique probability measure $\mathbb{P}^{\sigma,\tau}$ such that after a play $\pi=(n_0,q_0)\ldots (n_k,q_k)$, if $\delta(\pi)$ denotes the transition chosen by Eve or Adam after $\pi$ (depending on whether $q_k \in \Astates_E$ or $q_k \in \Astates_A$), the probability to go to vertex $(n_{k+1},q_{k+1})$ is: \[ \begin{cases} 1 & \text{ if $\delta(\pi)$ is the local transition $q_k\to_{t(n_k)} q_{k+1}$}\enspace,\\ \frac{1}{2} & \text{ if $\delta(\pi)$ is the split transition $q_k\to_{t(n_k)} (r_0,r_1)$ and }\\ &\hspace{2cm}\begin{cases} \text{$n_{k+1}=n_k0$ and $q_{k+1}= r_0$}\enspace; or\\ \text{$n_{k+1}=n_k1$ and $q_{k+1}=r_1$}\enspace. \end{cases}\\ 0 & \text{ otherwise\enspace.} \end{cases} \] This way we obtain a probability measure $\mathbb{P}^{\sigma,\tau}$ on the set of infinite plays. \paragraph*{Consistency and reachability.} If a finite play $\pi$ is the prefix of another finite or infinite play $\pi'$ we say that $\pi'$ is a \emph{continuation} of $\pi$. A finite play $\pi$ is \emph{consistent} with a strategy $\sigma$ or, more simply, is a \emph{$\sigma$-play} if there exists a strategy $\tau$ such that $\pi$ may occur in the non-homogeneous Markov chain induced by $\sigma$ and $\tau$. In this case, the number $N$ of split transitions which occurred in $\pi$ is exactly the depth of the node of $\last(\pi)$ and \[ \mathbb{P}^{\sigma,\tau}(\{ \text{ continuations of $\pi$ }\}) = 2^{-N}\enspace. \] A vertex $w$ is \emph{$\sigma$-reachable} if there exists a finite $\sigma$-play from the initial vertex to $w$. An infinite play is consistent with $\sigma$ if all its prefixes are. \paragraph*{Bounded vs. 
unbounded plays.} There are two kinds of infinite plays: \emph{bounded plays} are plays whose sequence of nodes is ultimately constant, or equivalently which ultimately use only local transitions, while \emph{unbounded plays} use infinitely many split transitions. Bounded plays consistent with $\sigma$ and $\tau$ are the atoms of $\mathbb{P}^{\sigma,\tau}$: a play $\pi$ is bounded and consistent with $\sigma$ and $\tau$ iff $\mathbb{P}^{\sigma,\tau}(\{\pi\})>0$. In this paper we will focus on subclasses of automata whose structural restrictions forbid the existence of bounded plays (see the {\bf (NLL)} hypothesis below). So in practice, every play $\pi=(n_0,q_0)(n_1,q_1)\ldots$ we consider will visit a sequence of nodes $n_0,n_1,n_2,\ldots$ which enumerates all finite prefixes of an infinite branch $b\in\{0,1\}^\omega$ of the binary tree, in a weakly increasing order: for every index $i$ either $n_{i+1}=n_{i}$ (the player controlling $(n_i,q_i)$ played a local transition) or $n_{i+1}=n_i d$ for some $d\in \{0,1\}$ (the player controlling $(n_i,q_i)$ played a split transition and the play followed direction $d$). \paragraph*{Winning strategies.} Whether Eve wins the game is defined as follows. The \emph{limsup} of an infinite play $(n_0,q_0)(n_1,q_1)\ldots$ is $\limsup_i q_i$, i.e.\ the largest automaton state visited infinitely often. An infinite play $\pi'$ is a \emph{positive continuation} of $\pi$ if all states of $\pi'$ visited after $\pi$ belong to $F_{>0}$. Eve wins with $\sigma$ against $\tau$ if the three following conditions are satisfied. \begin{itemize} \item {\bf Sure winning.} Every play consistent with $\sigma$ and $\tau$ has limsup in $F_\forall$. \item {\bf Almost-sure winning.} Almost-every play consistent with $\sigma$ and $\tau$ has limsup in $F_1$. \item{\bf Positive winning.} For every finite play $\pi$ consistent with $\sigma$ and $\tau$ whose last state belongs to $F_{>0}$, the set of positive continuations of $\pi$ has nonzero probability. 
\end{itemize} We say that \emph{Eve wins} the acceptance game if she has a \emph{winning strategy}, i.e. a strategy which wins the acceptance game against any strategy of Adam. \paragraph*{B\"uchi conditions.} A B\"uchi condition is a set of states $R\subseteq Q$ which is upper-closed with respect to $\leq$\enspace. Then a play has limsup in $R$ iff it visits $R$ infinitely often. \paragraph*{Language of an automaton.} \begin{definition}[Acceptance and language] A binary tree is \emph{accepted} by the automaton if Eve has a winning strategy in the acceptance game. The language of the automaton is the set of its accepted trees. \end{definition} We are interested in the following decision problem: \medskip {\bf Emptiness problem: } Given an automaton, decide whether its language is empty or not. \medskip The use of game semantics makes the following closure properties trivial. \begin{lemma}[Closure properties]\label{lem:closure} The class of languages recognized by alternating nonzero automata is closed under union and intersection. \end{lemma} \paragraph*{Normalization.} We assume all automata to be normalized in the sense that they satisfy: \begin{itemize} \item {\bf (N1)} every split transition whose source is in $F_{>0}$ has at least one successor in $F_{>0}$; and \item {\bf (N2)} every local transition whose source is in $F_{>0}$ has its target in $F_{>0}$ as well. \end{itemize} We can normalize an arbitrary automaton by removing all transitions violating {\bf (N1)} and {\bf (N2)}. This will not change the language because such transitions are never used by positively winning strategies of Eve. This normalization could lead to a violation of the completeness hypothesis {\bf (HC)}. In this case we can also delete the corresponding states without modifying the language of the automaton.
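For illustration, the normalization step can be phrased as a small fixpoint computation. The following Python sketch uses an assumed toy encoding that is ours, not the paper's (letters are ignored for brevity): local transitions are pairs $(q,r)$, split transitions are pairs $(q,(r_0,r_1))$, and \texttt{f\_pos} stands for $F_{>0}$; a state left without any outgoing transition is treated as a violation of {\bf (HC)} and deleted.

```python
# Toy sketch of the (N1)/(N2) normalization; the encoding is illustrative.

def normalize(local, split, f_pos):
    # Drop transitions violating (N2): a local transition leaving F_{>0}
    # must stay in F_{>0}.
    local = {(q, r) for (q, r) in local if q not in f_pos or r in f_pos}
    # Drop transitions violating (N1): a split transition leaving F_{>0}
    # needs at least one successor in F_{>0}.
    split = {(q, (r0, r1)) for (q, (r0, r1)) in split
             if q not in f_pos or r0 in f_pos or r1 in f_pos}
    # Removing transitions may leave states with no outgoing transition,
    # violating completeness (HC): delete such states together with the
    # transitions using them, and iterate until a fixpoint is reached.
    while True:
        states = ({q for (q, _) in local} | {r for (_, r) in local}
                  | {q for (q, _) in split}
                  | {r for (_, (r0, r1)) in split for r in (r0, r1)})
        sources = {q for (q, _) in local} | {q for (q, _) in split}
        dead = states - sources
        if not dead:
            return local, split
        local = {(q, r) for (q, r) in local if q not in dead and r not in dead}
        split = {(q, (r0, r1)) for (q, (r0, r1)) in split
                 if q not in dead and r0 not in dead and r1 not in dead}
```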
If {\bf (HC)} were dropped then the game graph might have dead-ends and the rules of the game would have to be extended to handle this case: typically, the player controlling the state in the dead-end loses the game. This extension does not bring any extra expressiveness to our model of automaton: we can always make an automaton complete by adding local transitions leading to losing absorbing states. Moreover, we assume: \begin{itemize} \item {\bf (N3)} $F_1 \subseteq F_\forall\enspace.$ \end{itemize} This is w.l.o.g. since replacing $F_1$ with $F_1 \cap F_\forall$ does not modify the language of the automaton. \section{An example: the language of PUCE trees\label{sec:example}} A tree $t$ on the alphabet $\set{a,b}$ is \emph{positively ultimately constant everywhere} (\emph{PUCE} for short) if for every node $n$, \begin{enumerate} \item[i)] the set of branches visiting $n$ and with finitely many $a$-nodes has $>0$ probability; and \item[ii)] the set of branches visiting $n$ and with finitely many $b$-nodes has $>0$ probability. \end{enumerate} \paragraph*{No regular tree is PUCE.} There are two cases. If the regular tree has a node $n$ which is the root of a subtree labelled with only $a$ or only $b$ then clearly the tree is not PUCE. Otherwise, by a standard pumping argument, every node labelled $a$ (resp. $b$) has a descendant labelled $b$ (resp. $a$) at some depth $\leq |S|$, where $S$ is the set of states of the regular tree. But in this second case, from every node $n$ there is probability at least $\frac{1}{2^{|S|}}$ to reach a descendant with a different label, thus almost-every branch of the regular tree has infinitely many $a$-nodes and $b$-nodes, and the tree is not PUCE either. \paragraph*{There exists a PUCE tree.} However, it is possible to build a non-regular tree $t$ whose every node satisfies both $i)$ and $ii)$. For that, we combine together two partial non-regular trees.
Let $H\subseteq \{0,1\}^*$ be a subset of nodes such that a) the set of branches which visit no node in $H$ has probability $\frac{1}{2}$, b) no node of $H$ is a strict ancestor of another node in $H$ ($H$ is a cut), c) every node in $\{0,1\}^*$ is either a descendant or an ancestor of a node in $H$. For example we can choose $ H= \{00, 100, 0100, 11000, 011000, 1010000, 11100000, \\010100000,011100000,\ldots \} $. We first define two partial trees $t_a$ and $t_b$, both with domain the set of nodes having no strict ancestor in $H$: $t_a$ is fully labelled with $a$ while $t_b$ is fully labelled with $b$. Since $H$ is a cut, the nodes in $H$ are exactly the leaves of $t_a$ and $t_b$. To obtain $t$, we plug a copy of $t_b$ on every leaf of $t_a$ and a copy of $t_a$ on every leaf of $t_b$. Then from every node, according to c) there is non-zero probability to enter either $t_a$ or $t_b$ and according to a) there is non-zero probability to stay in there forever. \paragraph*{An automaton recognizing PUCE trees.} We can design one automaton for each of the two conditions and combine them together with an extra state controlled by Adam (cf. the proof of Lemma~\ref{lem:closure}). We provide an alternating nonzero automaton checking condition ii); the automaton for condition i) is symmetric. The state space is: \[ Q = \set{s < w < g < \sharp}\enspace. \] Intuitively, Adam uses state $s$ to search for a node $n$ from which condition ii) does not hold. Once on node $n$, Adam switches to state $w$ and challenges Eve to find a path to an $a$-node $n'$ which is the root of an $a$-labelled subtree $T_{n'}$ of $>0$ probability. For that Eve navigates the tree in state $w$ to node $n'$, switches to state $g$ on node $n'$, stays in $g$ as long as the play stays in $T_{n'}$ and switches definitively to $\sharp$ whenever leaving $T_{n'}$. Formally, the only state controlled by Adam is $s$, i.e.
$\Astates_A=\{s\}$, from which Adam can choose, independently of the current letter, between two split transitions $s \to (s,\sharp)$ and $s\to (\sharp,s)$ and a local transition $s \to w$. The state $\sharp$ is absorbing. From state $w$, Eve can guess the path to $n'$ using the split transitions: \[ w \to (\sharp, w) \quad w \to (w,\sharp)\enspace. \] Once $n'$ is reached, Eve can switch to state $g$ with a local transition $w \to g$ and, whenever the current node is an $a$-node, she can choose among three split transitions: \[ g \to_a (g,g) \quad g \to_a (g,\sharp) \quad g \to_a (\sharp,g) \enspace. \] The acceptance conditions are: \begin{align*} F_\forall=F_1=Q \setminus \{w\} \quad \quad F_{>0}= \set{ g }\enspace, \end{align*} so that from $w$ Eve is forced to eventually switch to $g$ (otherwise $\limsup=w\not\in F_\forall$) and the $a$-subtree labelled by $g$ must have positive probability for Eve to win. Adam may also never exit the pathfinding state $s$, in which case Eve wins. \section{Non-deterministic nonzero automata\label{sec:nondet}} Non-deterministic \emph{zero} automata were introduced in~\cite{DBLP:conf/icalp/Bojanczyk16}, followed by an equally expressive variant, non-deterministic \emph{nonzero} automata~\cite[Lemma 5]{DBLP:conf/icalp/BojanczykGK17}. In those automata, Adam is a dummy player, i.e. $\Astates_A=\emptyset$, and moreover all transitions are split transitions. \begin{theorem}\label{theo:complexitiyemptiness} The emptiness problem for non-deterministic nonzero automata is in {\sc np}$ \cap ${\normalfont co}{\sc np}. If $F_\forall$ is a B\"uchi condition then emptiness can be decided in {\sc ptime}. \end{theorem} The first statement is established in~\cite[Theorem 3]{DBLP:journals/corr/BojanczykGK17}. The second statement is proved in the appendix. The proof idea is as follows. Assume the alphabet to be a singleton, which is w.l.o.g. for non-deterministic automata.
The existence of a winning strategy for Eve can be witnessed by a subset $W\subseteq Q$ which contains the initial state and two positional winning strategies $\sigma_1,\sigma_2: W \to W\times W$. Strategy $\sigma_1$ should be almost-surely and positively winning while strategy $\sigma_2$ should be surely winning. These two strategies can be combined into a (non-positional) strategy for Eve which satisfies the three objectives, thus witnesses non-emptiness of the automaton. \section{Deciding emptiness of automata with \limch\ for Adam\label{sec:emptiness}} In this section, we introduce the class of automata with \emph{\limch\ for Adam}, and show that emptiness of these automata is decidable. For that we rely on a characterization of positional strategies of Eve which satisfy the surely and almost-surely winning conditions (Lemma~\ref{caracsure}, Lemma~\ref{defindex}) and the positively winning condition (Lemma~\ref{lem:caracnonzero}). Then we represent the positional strategies of Eve as labelled trees, called \emph{strategic trees} (Definition~\ref{def:st}). Finally we show that the language of strategic trees whose corresponding positional strategy is winning can be recognized by a non-deterministic nonzero automaton (Theorem~\ref{theo:recogstrat}). \subsection{Automata with \limch\ for Adam} In the rest of the paper, we focus on the class of automata with \limch\ for Adam. Our motivation is that these automata capture the logic we are interested in and their acceptance games have good properties. In particular the existence of positional winning strategies for Eve is one of the key properties used to decide emptiness. To define the class of automata with limited choice\ for Adam, we rely on the transition graph of the automaton. \begin{definition}[Equivalent and transient states] The transitions of the automaton define a directed graph called the \emph{transition graph} and denoted $G_\to$. 
The vertices of $G_\to$ are $\Astates$ and the edges are labelled with $\Sigma$: these are all the triples $(q,a,r)$ such that $q\to_a r$ is a local transition or such that $q\to_a(r,q')$ or $q\to_a(q',r)$ is a split transition for some state $q'$. Two states $q,r$ are \emph{equivalent}, denoted $q\equiv r$, if they are in the same connected component of $G_\to$. A state is \emph{transient} if it does not belong to any connected component of $G_\to$, or equivalently if there is no cycle on this state in $G_\to$. \end{definition} \begin{definition} An automaton has \limch\ for Adam\ if for every state $q$ controlled by Adam, \begin{itemize} \item all transitions with source $q$ are local transitions; and \item for every letter $a$, at most one of the (local) transitions $q\to_a q'$ satisfies $q \equiv q'$. Such a transition is called a \emph{canonical} transition. \end{itemize} \end{definition} In a \limch\ for Adam\ automaton, the only freedom of choice of Adam, apart from playing canonical transitions, is deciding to go to a lower connected component of the transition graph. This non-canonical decision can be made only finitely many times, hence the name \emph{limited choice}. In the classical (non-probabilistic) theory of alternating automata, similar notions of limited alternation have already been considered, for example \emph{hesitant alternating automata}~\cite{ltl}. \begin{definition}[Canonical plays and transient vertices] A \emph{canonical play} is a play in which Adam only plays canonical transitions. A vertex $(n,q)$ of an acceptance game is \emph{transient} if it has no immediate successor $(n',q')$ (by a local or a split transition) such that $q \equiv q'$. \end{definition} In the acceptance game of an automaton with \limch\ for Adam, every infinite play visits finitely many transient vertices and has a canonical suffix.
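Concretely, equivalence and transience are plain reachability questions on $G_\to$. The following Python sketch uses an assumed adjacency-set encoding \texttt{edges[q]} of $G_\to$ with letters forgotten; the encoding and names are ours, for illustration only.

```python
# Toy sketch of "equivalent" and "transient" states on the transition graph.

def reachable(edges, q):
    """States reachable from q by a nonempty path of G_->."""
    seen, stack = set(), list(edges.get(q, ()))
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(edges.get(r, ()))
    return seen

def equivalent(edges, q, r):
    # q and r are equivalent iff they lie on a common cycle of G_->;
    # in particular q is equivalent to itself only if q lies on a cycle.
    if q == r:
        return q in reachable(edges, q)
    return r in reachable(edges, q) and q in reachable(edges, r)

def transient(edges, q):
    # Transient: no cycle through q, i.e. q is not reachable from itself.
    return q not in reachable(edges, q)
```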
\paragraph*{The no local loop assumption.} We assume that every automaton with \limch\ for Adam\ also satisfies: \begin{itemize} \item {\bf (NLL)} the automaton has {\bf no local loop}: there is no letter $a$ and sequence of local transitions $q_0 \to_a q_1 \to_a \cdots \to_a q_i$ such that $q_0=q_i$. \end{itemize} Under the hypothesis (NLL), for every infinite play $\pi$ there is a unique branch of the binary tree $b\in\{0,1\}^\omega$ whose every prefix is visited by $\pi$. We say that $\pi$ \emph{projects} to $b$. Assuming (NLL) does not reduce expressiveness. \begin{lemma}\label{NLL} Given an automaton $\mathcal{A}$ with \limch\ for Adam\ and set of states $Q$ one can effectively construct another automaton $\mathcal{A}'$ with \limch\ for Adam\ satisfying {\bf (NLL)} and recognizing the same language. \end{lemma} The interest of the {\bf (NLL)} assumption is to make the acceptance game acyclic, which in turn guarantees positional determinacy for Eve, as shown in the next section. The transformation performed in the proof of Lemma~\ref{NLL} creates an exponential blowup of the state space of the automaton, which is bad for complexity. We could do without this blowup by dropping the {\bf (NLL)} assumption, in which case Eve might need one extra bit of memory in order to implement local loops with priority in $F_\forall\setminus F_1$. However, we prefer sticking to the {\bf (NLL)} assumption, which makes the alternating automata and their acceptance games simpler and is anyway not restrictive when it comes to translating temporal logics into alternating automata: the natural translation produces automata with no local loop. Another interest of the {\bf (NLL)} assumption is: \begin{lemma}\label{probmu} Assume the automaton has the (NLL) property. Let $\mu$ be the uniform measure on the set of branches of the infinite binary tree, equipped with the usual Borel $\sigma$-field.
Let $t$ be an input tree, $\sigma$ and $\tau$ be two strategies in the corresponding acceptance game and $X$ be a measurable set of plays consistent with $\sigma$ and $\tau$. Let $Y\subseteq \{0,1\}^\omega$ be the set of infinite branches that $X$ projects to. Then $Y$ is measurable and \[ \mathbb{P}^{\sigma,\tau}(X) = \mu(Y) \enspace. \] \end{lemma} \subsection{Positional determinacy of the acceptance game} A crucial property of automata with \limch\ for Adam\ is that their acceptance games are positionally determined for Eve. \begin{definition}[Positional strategies] A strategy $\sigma$ of Eve in an acceptance game is \emph{positional} if for all finite plays $\pi,\pi'$ whose last vertices are controlled by Eve and coincide, i.e. $\last(\pi)=\last(\pi')\in \{0,1\}^*\times \Astates_E$, we have $\sigma(\pi)=\sigma(\pi')$. \end{definition} \begin{lemma}[Positional determinacy for Eve]\label{lem:det} Every acceptance game of an automaton with \limch\ for Adam\ is positionally determined for Eve: if Eve wins then she has a positional winning strategy. \end{lemma} \begin{proof}[Sketch of proof] Since the (NLL) hypothesis is assumed, the underlying acceptance game is acyclic. The construction of a positional winning strategy $\sigma'$ from a (non-positional) winning strategy $\sigma$ relies on selecting, for every $\sigma$-reachable vertex $w$, a canonical $\sigma$-play $\pi(w)$ reaching it, and setting $\sigma'(w)=\sigma(\pi(w))$. \end{proof} \subsection{On winning positional strategies of Eve} In the next section we show how to use automata-based techniques to decide the existence of a (positional) winning strategy for Eve. These techniques rely on characterizing whether a positional strategy of Eve is surely, almost-surely and positively winning. \subsubsection{Surely and almost-surely winning conditions} We characterize (almost-)surely winning strategies. \begin{definition}[$q$-branches] Let $q \in Q$ and let $\sigma$ be a strategy.
An infinite branch of the binary tree is a $q$-branch in $\sigma$ if at least one $\sigma$-play which projects to this branch has limsup $q$. \end{definition} \begin{lemma}\label{caracsure} Assume the automaton has \limch\ for Adam. Let $\sigma$ be a positional strategy for Eve. Then $\sigma$ is surely winning iff for every $q \in (Q\setminus F_\forall)$ there is no $q$-branch in $\sigma$. Moreover $\sigma$ is almost-surely winning iff for every $q \in (Q\setminus F_1)$ the set of $q$-branches in $\sigma$ has measure $0$. \end{lemma} \begin{proof} We denote by $\mu$ the uniform probability measure on $\{0,1\}^\omega$. For every state $q$, $Y_q$ denotes the set of $q$-branches in $\sigma$. We show the first statement about sure winning. For every $\sigma$-play $\pi$ there exists a strategy $\tau$ of Adam such that $\pi$ is consistent both with $\sigma$ and $\tau$. Thus there is $q\in (Q\setminus F_\forall)$ such that $Y_q \neq \emptyset$ iff there is a strategy $\tau$ of Adam and a play consistent with $\sigma$ and $\tau$ with limsup in $Q\setminus F_\forall$, iff $\sigma$ is \emph{not} surely winning. We show that the condition $\forall q \in Q \setminus F_1, \mu(Y_q) = 0$ is sufficient for $\sigma$ to be almost-surely winning. Let $\tau$ be a strategy of Adam and $Y'$ the set of branches of plays consistent with $\sigma$ and $\tau$ which have limsup in $Q \setminus F_1$. Then $Y' \subseteq \bigcup_{q\in Q \setminus F_1} Y_q$. According to Lemma~\ref{probmu}, $\mathbb{P}^{\sigma,\tau}(\limsup \not \in F_1) = \mu(Y')\leq \mu\left(\bigcup_{q\in Q \setminus F_1} Y_q\right) = 0$. Thus $\sigma$ is almost-surely winning. We show that the condition $\mu(Y_q)>0$ for some $q\in Q \setminus F_1$ is sufficient for $\sigma$ \emph{not} to be almost-surely winning. For every infinite branch $b\in Y_q$ choose one $\sigma$-play $\pi_b$ which projects to $b$ and has $\limsup \in Q\setminus F_1$. Since the automaton has \limch\ for Adam, a suffix of $\pi_b$ is canonical; let $w_b$ be the first vertex of this suffix.
For every $\sigma$-reachable vertex $w$ denote $Z_{w} =\{ b \in Y_q \mid w_b = w \}$. Since $Y_q$ is the countable union of the sets $(Z_w)_{w\text{ $\sigma$-reachable}}$ there is at least one $\sigma$-reachable vertex $w$ such that $\mu(Z_{w})>0$. Let $\pi_w$ be a finite $\sigma$-play to $w$. Let $\tau_w$ be a strategy for Adam which enforces $\pi_w$ with positive probability and plays canonically in every continuation of $\pi_w$ whenever possible. We show that $\mathbb{P}^{\sigma,\tau_w}(\limsup \not \in F_1)>0$. Let $X_w$ be the set of continuations of $\pi_w$ consistent with $\sigma$ and $\tau_w$ whose branch belongs to $Z_w$. An easy induction shows that every play $\pi'\in X_w$ with branch $b$ coincides with $\pi_b$ after $w_b$ ($\sigma$ is positional and $\tau_w$ plays only canonical moves). Thus every play in $X_w$ has $\limsup \in Q \setminus F_1$. Then $\mathbb{P}^{\sigma,\tau_w}(\limsup \not \in F_1) \geq \mathbb{P}^{\sigma,\tau_w}(X_w) = \mu(Z_w) > 0$, according to Lemma~\ref{probmu}. \end{proof} Whether a branch is a $q$-branch can be checked by computing a system of $\sigma$-indexes. Intuitively, every $\sigma$-reachable vertex receives a finite index, such that along a $\sigma$-play the index does not change except when Adam performs a non-canonical move or when two plays merge on the same vertex, in which case the smallest index is kept. After a non-canonical move of Adam, a new play may start, in which case it receives a fresh index not yet used in the current node nor in its parent. For this, $2|Q|+1$ indices suffice. The important properties of $\sigma$-indexes are: \begin{lemma}[Characterization of $q$-branches]\label{defindex} Every positional strategy $\sigma$ of Eve can be associated with a function \[ \index_\sigma:\{0,1\}^* \to \left\{0,1,\ldots ,2|Q|, \infty\right\}^Q \] with the following properties. First, $\index_\sigma$ can be computed on-the-fly along a branch.
For every node $n$ denote $\sigma_n$ the restriction of $\sigma$ on $\{n\}\times Q$. Then $\index_\sigma(\epsilon)$ only depends on $\sigma_\epsilon$. And for every node $n$ and $d\in\{0,1\}$, $\index_\sigma(nd)$ only depends on $\index_\sigma(n)$ and $\sigma_{nd}$. Second, a vertex $(n,q)$ is reachable from the initial vertex by a $\sigma$-play iff $\index_\sigma(n)(q)$ is finite. Third, let $b\in\{0,1\}^\omega$ be an infinite branch of the binary tree, visiting successively the nodes $n_0,n_1,n_2, \ldots$. Denote $R^\infty(b)$ the set of pairs $(k,q)\in \{0,\ldots, 2|Q|\}\times Q$ such that: \begin{itemize} \item $k \in \index_\sigma(n_i)(Q)$ for every $i\in \nats$ except finitely many; \item and $k= \index_\sigma(n_i)(q)$ for infinitely many $i\in \nats$. \end{itemize} Then for every state $q$, the branch $b$ is a $q$-branch if and only if there exists $k\in \{0,1,\ldots ,2|Q|\}$ such that $q=\max \{ r \in Q \mid (k,r) \in R^\infty(b)\}$. \end{lemma} \subsubsection{Checking the positively winning condition} In order to check with a non-deterministic automaton whether a positional strategy is positively winning, we rely on the notion of \emph{positive witnesses}. The point of positive witnesses is to turn the verification of up to $|Q|$ positively-winning conditions (depending on the decisions of Adam, there may be up to $|Q|$ different $\sigma$-reachable vertices on a given node) into a single one. This single condition can then be checked by a non-deterministic nonzero automaton equipped with a single positively-winning condition. \paragraph*{Everywhere thick subtrees.} We need the notion of everywhere thick subtrees. We measure sets of infinite branches with the uniform probability measure $\mu$ on $\{0,1\}^\omega$. \begin{definition}[Subtree] A set of nodes $T\subseteq \{0,1\}^*$ is a \emph{subtree} if it contains a node $r$, called the root of $T$, such that every node $n\in T$ is a descendant of $r$ and $T$ contains all nodes on the path from $r$ to $n$.
\end{definition} \begin{definition}[Everywhere thick sets of nodes] For every set $T\subseteq \{0,1\}^*$ of nodes denote $\vec{T}$ the set of branches in $\{0,1\}^\omega$ whose every prefix belongs to $T$. Then $T$ is \emph{everywhere thick} if starting from every node $n \in T$ there is nonzero probability to stay in $T$, i.e. if $\mu\left(\vec{T} \cap n\{0,1\}^\omega\right)>0$. \end{definition} Everywhere thick subtrees are almost everywhere. \begin{lemma} \label{lem:thick} Let $P\subseteq \{0,1\}^\omega$ be a measurable set of infinite branches. Assume $\mu(P)>0$. Then there exists an everywhere thick subtree $T$, with root $\epsilon$, such that $\vec{T} \subseteq P$. \end{lemma} The proof relies on the inner-regularity of $\mu$: $P$ can be assumed to be a closed set, i.e. the set of branches of a subtree, from which we can prune every node whose subtree carries probability $0$. \paragraph*{Positive witnesses.} Positive witnesses can be used to check whether a strategy is positively winning: \begin{definition}[Positive plays and witnesses] \label{defnonzero} Let $t$ be a $\Sigma$-labelled binary tree and $\sigma$ a positional strategy of Eve in the acceptance game of $t$. Let $Z$ be the set of $\sigma$-reachable vertices whose state is in $F_{>0}$. A play is positive if all vertices it visits belong to $\{0,1\}^*\times F_{>0}$. A positive witness for $\sigma$ is a pair $(W,E)$ where: \begin{align*} &W \subseteq Z \text{ are the \emph{active} vertices},\\ &E \subseteq \{0,1\}^* \times \{0,1\}\text{ is the set of \emph{positive edges},} \enspace \end{align*} and they have the following properties. \begin{itemize} \item[a)] From every vertex $z \in Z$ there is a positive and canonical finite $\sigma$-play starting in $z$ which reaches a vertex in $W$ or a transient vertex. \item[b)] Let $z=(n,q) \in W$. Then $(n,0)\in E$ or $(n,1)\in E$, or both.
If $z \to z'$ is a local transition then $z' \in W$ as well whenever ($q\in Q_E$ and $z \to z'$ is consistent with $\sigma$) or ($q\in Q_A$ and $z\to z'$ is canonical). If $z$ is controlled by Eve and $\sigma(z)$ is a split transition $q \to (q_0,q_1)$ then $ ((n,0)\in E \implies (n0,q_0) \in W)$ and $((n,1)\in E \implies (n1,q_1) \in W)$. \item[c)] The set of nodes $\{ nd\in\{0,1\}^* \mid (n,d) \in E \}$ is everywhere thick. \end{itemize} \end{definition} \begin{lemma} [Characterization of positively winning strategies] \label{lem:caracnonzero} Assume the automaton has \limch\ for Adam. A positional strategy $\sigma$ for Eve is positively winning iff there exists a positive witness for $\sigma$. \end{lemma} \subsection{Deciding emptiness} A $\Sigma$-labelled binary tree $t$ and a positional strategy $\sigma$ in the corresponding acceptance game generate a tree \[ T_{t,\sigma} : \{0,1\}^* \to (Q \cup Q\times Q)^{Q_E}\enspace. \] For every vertex $(n,q)$ controlled by Eve, if $\sigma(n,q)$ is a local transition $q \to_{t(n)} q'$ then $T_{t,\sigma}(n)(q)=q'$ and if $\sigma(n,q)$ is a split transition $q \to_{t(n)} (q_0,q_1)$ then $T_{t,\sigma}(n)(q)=(q_0,q_1)$. \begin{definition}[Strategic tree]\label{def:st} A tree $ T: \{0,1\}^* \to (Q \cup Q\times Q)^{Q_E}$ is \emph{strategic} if there exists a tree $t:\{0,1\}^* \to \Sigma$ and a positional strategy $\sigma$ for Eve such that $T=T_{t,\sigma}$\enspace. \end{definition} We are interested in the strategic trees associated to winning strategies. The rest of the section is dedicated to the proof of the following theorem. \begin{theorem}\label{theo:recogstrat} Fix an alternating nonzero automaton with limited choice for Adam. The language of strategic trees $T_{t,\sigma}$ such that $\sigma$ wins the acceptance game of $t$ can be recognized by a non-deterministic nonzero automaton of size exponential in $|Q|$.
If $F_\forall=Q$ in the alternating automaton, then the sure condition of the non-deterministic automaton is B\"uchi. \end{theorem} \begin{proof} The characterizations of surely, almost-surely and positively winning strategies given in lemmas~\ref{caracsure},~\ref{defindex} and~\ref{lem:caracnonzero} can be merged as follows. \begin{corollary}\label{carac} Let $\sigma$ be a positional strategy for Eve. For every branch $b$ denote \[ M(b)=\{ \max\{q \mid (k,q)\in R^\infty(b)\} \mid k\in \{0,\ldots, 2|Q|\}\} \enspace. \] Then $\sigma$ is winning if and only if \begin{itemize} \item for every branch $b$, $M(b)\subseteq F_\forall$; \item and for almost-every branch $b$, $M(b)\subseteq F_1$; \item and there exists a positive witness for $\sigma$. \end{itemize} \end{corollary} First of all, the non-deterministic automaton $\mathcal{B}$ checks whether the input tree is a strategic tree; for that it guesses on the fly the input tree $t :\{0,1\}^*\to \Sigma$ by guessing on node $n$ the value of $t(n)$ and checking that for every $q\in\Astates_E$, $q\to_{t(n)} T(n)(q)$ is a transition of the automaton. On top of that $\mathcal{B}$ checks the three conditions of Corollary~\ref{carac}. For the first two conditions, it computes (asymptotically) along every branch $b$ the value of $R^\infty(b)$ and thus of $M(b)$. For that the automaton relies on a Last Appearance Record (LAR) memory~\cite{Gurevich:1982:TAG:800070.802177} whose essential properties are: \begin{lemma}[LAR memory~\cite{Gurevich:1982:TAG:800070.802177}] \label{LAR} Let $C$ be a finite set of symbols. There exists a deterministic automaton on $C$ called the \emph{LAR memory on $C$} with the following properties. First, the set of states, denoted $Q$, has size $\leq |C|^{|C|+1}$ and is totally ordered. Second, for every $u\in C^\omega$ denote $L^\infty(u)$ the set of letters seen infinitely often in $u$ and $\limsup_{\text{LAR}}(u)$ the largest state seen infinitely often during the computation on $u$.
Then $L^\infty(u)$ can be inferred from $\limsup_{\text{LAR}}(u)$, precisely there is a mapping $\phi : Q \to 2^C$ such that: $ \forall u\in C^\omega, L^\infty(u) = \phi(\limsup_{\text{LAR}}( u) )\enspace. $ \end{lemma} In order to compute $R^\infty(b)$ along a branch $b$, the non-deterministic automaton $\mathcal{B}$ computes deterministically on the fly the $\sigma$-index of the current node $n$, as defined in Lemma~\ref{defindex}, and implements a LAR memory on the alphabet \[ C=\{0,\ldots, 2|Q|\}\times( Q \cup \{\bot\})\enspace. \] When visiting node $n$, $\mathcal{B}$ injects into the LAR memory all pairs $(\index_\sigma(n)(q),q)$ such that $q\in Q$ and $\index_\sigma(n)(q)\neq \infty$ plus all pairs $(k,\bot)$ such that $k \not \in \index_\sigma(n)(Q)$. For every branch $b$, the set $R^\infty(b)$ is the set of all pairs $(k,q)$ seen infinitely often such that $(k,\bot)$ is seen only finitely often. Thus, the LAR memory can be used to check the first two conditions of Corollary~\ref{carac}; more details are given at the end of the proof. \smallskip For now, we describe how the non-deterministic automaton $\mathcal{B}$ checks whether there exists a positive witness $(W,E)$ (Definition~\ref{defnonzero}). Denote by $Z$ the set of $\sigma$-reachable vertices whose state is in $F_{>0}$. On node $n$ the automaton guesses (resp. computes) the vertices of $W$ (resp. $Z$) on the current node and guesses the elements of $E$ by storing three sets of states: \begin{align*} &W_n=\{ q \in Q \mid (n,q) \in W\}\\ &Z_n = \{ q \in F_{>0} \mid \index_\sigma(n)(q) < \infty \}\\ &E_n=\{ b \in \{0,1\} \mid (n,b) \in E\} \enspace. \end{align*} Then $\mathcal{B}$ checks all three conditions a), b) and c) in the definition of a positive witness as follows.
\smallskip $\mathcal{B}$ checks condition a) in the definition of a positive witness by guessing on the fly, for every vertex in $Z$, a canonical positive $\sigma$-play to a vertex which is either transient or in $W$, in which case we say the canonical positive play \emph{terminates}. For that $\mathcal{B}$ maintains an ordered list $P_n$ of states. On the root node, $P_\epsilon$ is $Z_\epsilon \setminus W_\epsilon$. When the automaton performs a transition, it guesses for each state $q$ in $P_n$ a direction $b_q$ and a successor $s_q$, such that $(nb_q,s_q)$ can be reached from $(n,q)$ by a positive canonical $\sigma$-play. In direction $b$, every state $q$ for which $b_q \neq b$ is removed from the list, while every state $q$ for which $b_q = b$ is replaced by the corresponding $s_q$. Then all states in $Z_{nb}$ are added at the end of the list. In case of duplicate copies of the same state in the list, only the first copy is kept. In case the head of the list is in $W_{nb}$ or is transient, a B\"uchi condition is triggered and the head is moved to the back of the list. Finally all entries of the list which are in $W_{nb}$ are removed. This way, condition a) holds iff the B\"uchi condition is triggered infinitely often on every branch. We discuss below how to integrate this B\"uchi condition in the sure accepting condition of the automaton. \smallskip $\mathcal{B}$ checks condition b) in the definition of a positive witness by entering an absorbing error state as soon as \begin{enumerate}[1)] \item there is some local transition $(n,q)\to_{t(n)} (n,q')$ such that $q\in W_n$, $q'\not\in W_n$ and either ($q\in Q_E$ and the transition is consistent with $\sigma$) or ($q\in Q_A$ and the transition is canonical); or \item there is some $q\in W_n$ controlled by Eve and $b\in E_n$ such that $\sigma(n,q)$ is a split transition $q \to_{t(n)}(q_0,q_1)$ but $q_b\not\in W_{nb}$. \end{enumerate} The guessed sets $W_n$ are constrained so that condition 1) never occurs, while condition 2) is checked by storing a subset of $Q$.
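One transition step of the list bookkeeping used for condition a) can be sketched as follows. The encoding is ours and purely illustrative: \texttt{p} is the ordered list $P_n$, \texttt{guess[q]} is the guessed pair $(b_q,s_q)$, \texttt{d} is the direction actually taken, \texttt{z\_next}/\texttt{w\_next} stand for $Z_{nd}$/$W_{nd}$, and \texttt{transient} tells whether a state is transient on the child node.

```python
# Toy sketch of one list-update step; names are illustrative, not the paper's.

def step(p, guess, d, z_next, w_next, transient):
    # Keep the states that guessed direction d, replaced by their successors,
    # then append the fresh obligations coming from Z_{nd}.
    new = [guess[q][1] for q in p if guess[q][0] == d] + list(z_next)
    # In case of duplicate copies of the same state, keep only the first one.
    seen, dedup = set(), []
    for q in new:
        if q not in seen:
            seen.add(q)
            dedup.append(q)
    # Buchi condition: the head has terminated (it is in W_{nd} or transient);
    # the head is then moved to the back of the list.
    buchi = bool(dedup) and (dedup[0] in w_next or transient(dedup[0]))
    if buchi:
        dedup = dedup[1:] + dedup[:1]
    # Finally drop every entry already in W_{nd}.
    return [q for q in dedup if q not in w_next], buchi
```

Condition a) then amounts to this Büchi flag being raised infinitely often along every branch, as stated above.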
\smallskip $\mathcal{B}$ checks condition c) in the definition of a positive witness by triggering the positive acceptance condition whenever it moves in direction $b$ on a node $n$ such that $b \in E_n$. \smallskip The sure and almost-sure acceptance conditions are defined as follows. The B\"uchi condition necessary for checking condition a) in the definition of a positive witness is integrated in the LAR memory: for that we add to the alphabet $C$ of the LAR memory a new symbol $\top$ which is injected into the LAR memory whenever the B\"uchi condition is triggered. The order between states of $\mathcal{B}$ is induced by the order of the LAR memory. This way, according to Lemma~\ref{LAR}, the largest state seen infinitely often along a branch $b$ reveals whether $\top$ was seen infinitely often, and reveals the value of $R^\infty(b)$ (the set of pairs $(k,q)$ seen infinitely often such that $(k,\bot)$ was seen finitely often), hence of $M(b)$ as well. The state is surely (resp. almost-surely) accepting iff $\top$ was seen infinitely often and $M(b)\subseteq F_\forall$ (resp. $M(b)\subseteq F_1$). In case $F_\forall=Q$ in the alternating automaton, the sure condition boils down to the B\"uchi condition. According to Corollary~\ref{carac}, and by construction of $\mathcal{B}$, the computation of $\mathcal{B}$ is accepting iff the input is a strategic tree whose corresponding strategy of Eve is winning. \end{proof} \begin{theorem}\label{theo:recogalt} Emptiness of alternating nonzero automata with limited choice for Adam is decidable in {\sc nexptime}$\cap${\normalfont co}-{\sc nexptime}. If $F_\forall=Q$, emptiness can be decided in {\sc exptime}. \end{theorem} \begin{proof} Emptiness of an alternating automaton reduces to the emptiness of a non-deterministic automaton of exponential size.
This non-deterministic automaton guesses on-the-fly a tree $\{0,1\}^* \to (Q \cup Q\times Q)^{Q_E}$ and checks it is a winning strategic tree, using the automaton given by Theorem~\ref{theo:recogstrat}. In case the alternating automaton is $F_\forall$-trivial, the sure condition of the non-deterministic automaton is B\"uchi (Theorem~\ref{theo:recogstrat}). We conclude with Theorem~\ref{theo:complexitiyemptiness}. \end{proof} \section{Satisfiability of \ctls\allop\ }\label{sec:pctls} Our result on alternating nonzero automata can be applied to decide the satisfiability of the logic \ctls\allop, a generalization of CTL* which integrates both deterministic and probabilistic state quantifiers. \paragraph*{Markov chains.} The models of \ctls\allop\ formulas are Markov chains. A Markov chain with alphabet $\Sigma$ is a tuple $\mathcal{M}=(S,p,t)$ where $S$ is the (countable) set of \emph{states}, $p : S \to \mathcal{D}{(S)}$ are the \emph{transition probabilities} and $t:S\to \Sigma$ is the \emph{labelling function}. For every state $s\in S$, there is a unique probability measure denoted $\mathbb{P}_{\mathcal{M},s}$ on $S^\omega$ such that $\mathbb{P}_{\mathcal{M},s}(s S^\omega)=1$ and for every sequence $s_0\cdots s_n s_{n+1}\in S^*$, $\mathbb{P}_{\mathcal{M},s}(s_0\cdots s_n s_{n+1} S^\omega)=p(s_n,s_{n+1})\cdot \mathbb{P}_{\mathcal{M},s}(s_0s_1\cdots s_n S^\omega)$. When $\mathcal{M}$ is clear from the context this probability measure is simply denoted $\Proba_s$. A \emph{path} in $\mathcal{M}$ is a finite or infinite sequence of states $s_0s_1\cdots$ such that $ \forall n \in \nats, p(s_n,s_{n+1})>0\enspace. $ We denote by $\pathes_{\mathcal{M}}(s_0)$ the set of such paths. A binary tree $t:\{0,1\}^*\to \Sigma$ is seen as a specific type of Markov chain, where from every node $n\in \{0,1\}^*$ there is equal probability $\frac{1}{2}$ to perform transitions to $n0$ or $n1$.
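As a quick illustration, the probability of a cylinder $s_0\cdots s_n S^\omega$ multiplies the transition probabilities along the prefix; a minimal sketch (the dictionary representation and state names are our own, purely illustrative):

```python
def cylinder_prob(p, path):
    """Probability P_{M,s}(s_0 s_1 ... s_n S^omega) for s = path[0],
    given transition probabilities p[s][s'] (missing entries default to 0)."""
    prob = 1.0  # base case: P(s S^omega) = 1
    for s, s_next in zip(path, path[1:]):
        prob *= p[s].get(s_next, 0.0)
    return prob

# a binary-tree Markov chain: the root moves to each child with probability 1/2
p = {"n": {"n0": 0.5, "n1": 0.5}, "n0": {}, "n1": {}}
```

For instance, `cylinder_prob(p, ["n", "n0"])` is $1/2$, matching the binary-tree convention above.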
\paragraph*{Syntax.} For a fixed alphabet $\Sigma$, there are two kinds of formulas: state formulas (typically denoted $\sf$) and path formulas (denoted $\pathFormula$), generated by the following grammar: \begin{align*} \sf ::=& \top \mid \bot \mid a\in \Sigma \mid \sf \wedge\sf\mid \sf \vee\sf\mid \neg \sf \\ &\mid \exists \pathFormula \mid \forall \pathFormula \mid \mathbb{P}_{> 0}(\pathFormula) \mid \mathbb{P}_{= 1}(\pathFormula)\\ \pathFormula ::=& \sf\mid\neg \pathFormula \mid \pathFormula \wedge \pathFormula \mid \pathFormula\vee \pathFormula \mid \next \pathFormula \mid \pathFormula U \pathFormula \mid G\pathFormula\enspace. \end{align*} \paragraph*{Semantics.} Let $\mathcal{M}=(S,p,t)$ be a Markov chain. We define simultaneously and inductively the satisfaction $ \mathcal{M},s \models \sf $ of a state formula $\sf$ by a state $s\in S$ and the satisfaction $ \mathcal{M},w \models \pathFormula $ of a path formula $\pathFormula$ by a path $w\in\pathes_{\mathcal{M}}$. When $\mathcal{M}$ is clear from the context, we simply write $s \models\sf$ and $w \models \pathFormula$. If a state formula is produced by one of the rules $\top \mid \bot \mid a \mid \sf \wedge\sf\mid \sf \vee\sf\mid \neg \sf$, its satisfaction is defined as usual. If $\pathFormula$ is a path formula, the state quantifiers $\exists\pathFormula$, $\forall \pathFormula$, $\mathbb{P}_{> 0}(\pathFormula)$ and $\mathbb{P}_{=1}(\pathFormula)$ are interpreted as follows: \begin{align*} & s \models \exists \pathFormula &\text{ if } &\exists w \in \pathes_\mathcal{M}(s), w \models \pathFormula \\ & s \models \forall \pathFormula &\text{ if } &\forall w \in \pathes_\mathcal{M}(s), w \models \pathFormula\\ & s \models \mathbb{P}_{\sim b}( \pathFormula) &\text{ if } &\mathbb{P}_{\mathcal{M},s}(w\in \pathes_\mathcal{M}(s)\mid w\models \pathFormula)\sim b\enspace. \end{align*} The satisfaction of a path formula $\pathFormula$ by an infinite path $w=s_0s_1\dots\in \pathes_{\mathcal{M}}(s_0)$ is defined as follows.
If $\pathFormula$ is produced by one of the rules $\neg \pathFormula \mid \pathFormula \wedge \pathFormula \mid \pathFormula\vee \pathFormula$ then its satisfaction is defined as usual. If $\pathFormula$ is a state formula (rule $\pathFormula ::= \sf$) then $ w \models \sf$ if $s_0 \models \sf\enspace. $ Otherwise, $\pathFormula \in \{ \next \pathFormula',G \pathFormula', \pathFormula_1U \pathFormula_2 \}$ where $\pathFormula', \pathFormula_1$ and $\pathFormula_2$ are path formulas. For every integer $k$, we denote by $w[k]$ the path $s_ks_{k+1}\dots\in \pathes_{\mathcal{M}}(s_k)$. Then: \begin{align*} &w \models \next \pathFormula' &\text{if } &w[1] \models \pathFormula'\\ &w \models G \pathFormula' &\text{if } &\forall i\in\mathbb{N}, w[i]\models \pathFormula'\\ &w \models \pathFormula_1U \pathFormula_2 &\text{if }&\exists n \in \mathbb{N}, \left(\forall 0\leq i < n, w[i] \models \pathFormula_1\right) \land w[n] \models \pathFormula_2. \end{align*} The Markov chain given in Figure~\ref{fig:mc} satisfies the formula $(\forall( G \exists (\top U a) ))\wedge (\proba_{>0}( G \neg a))$.
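To see why the chain of Figure~\ref{fig:mc} satisfies $\proba_{>0}( G \neg a)$: the run that always takes the $c$-branch has probability $\prod_{n\geq 2}(1-2^{-n})$, whose partial products stay bounded away from zero, while every state can still reach an $a$-state with positive probability. A quick numeric check (pure Python; the truncation points are arbitrary):

```python
def prob_never_a(N):
    """Partial product prod_{n=2}^{N} (1 - 2**-n): the probability of
    avoiding the a-states for the first N-1 branchings of Figure fig:mc."""
    prob = 1.0
    for n in range(2, N + 1):
        prob *= 1.0 - 2.0 ** -n
    return prob
```

The partial products decrease but converge quickly to a limit near $0.578 > 0$, so the event $G\neg a$ indeed has positive probability.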
\begin{figure} \begin{tikzpicture}[node distance=1.3cm] \node (1) {b}; \node (11) [above right of=1] {a}; \node (12) [below right of=1] {c}; \node (2) [below right of=11]{b}; \node (21) [above right of=2] {a}; \node (22) [below right of=2] {c}; \node (d) [below right of=21]{\dots}; \node (311) [right of=d]{b}; \node (111) [above right of=311] {a}; \node (121) [below right of=311] {c}; \node (d) [below right of=111]{\dots}; \path[->,sloped,above] (1) edge node {$\frac{1}{2^2}$}(11) (1) edge node {$1-\frac{1}{2^2}$}(12) (11) edge (2) (12) edge (2) (2) edge node {$\frac{1}{2^3}$}(21) (2) edge node {$1-\frac{1}{2^3}$}(22) (311) edge node {$\frac{1}{2^n}$}(111) (311) edge node {$1-\frac{1}{2^n}$}(121) ; \end{tikzpicture} \caption{A model of $(\forall( G \exists (\top U a) ))\wedge (\proba_{>0}( G \neg a))$\label{fig:mc}} \end{figure} \paragraph*{Variants and fragments.} A formula of \ctls\allop\ belongs to the fragment {CTL}\ if in each of its state subformulas $\sf$ of type $\exists \pathFormula \mid \forall \pathFormula \mid \mathbb{P}_{> 0}(\pathFormula) \mid \mathbb{P}_{= 1}(\pathFormula)$ the path formula $\pathFormula$ has type $\next \sf' \mid \sf'U \sf'' \mid G\sf'$ where $\sf'$ and $\sf''$ are state subformulas. In the variant ECTL, every path formula $\pathFormula$ is described as the composition of a deterministic B\"uchi automaton on some alphabet $\{0,1\}^k$ with $k$ state subformulas. A path satisfies $\pathFormula$ if the B\"uchi automaton accepts the sequence of letters obtained by evaluating the $k$ state subformulas on every state along the path. This variant augments both the expressivity and the conciseness of the logic at the cost of a less intuitive syntax. For more details see~\cite{brazdil2008controller}. We are also interested in the fragments where the operators $\exists$ and $\forall$ are not used, i.e. the qualitative fragments of the logics p\ctls, pECTL\ and p{CTL}.
\paragraph*{Satisfiability problem.} A Markov chain $\mathcal{M}$ \emph{satisfies} a formula $\xi$ at state $s$, or equivalently $(\mathcal{M},s)$ \emph{is a model} of $\xi$, if $\mathcal{M},s \models\xi$. We are interested in the problem: \smallskip \noindent{\bf MC-SAT}: given a formula, does it have a model? \smallskip The satisfiability of {\sc tmso+zero} is known to be decidable~\cite{DBLP:conf/icalp/Bojanczyk16, DBLP:journals/corr/BojanczykGK17}. Since \ctls\allop\ is a fragment of {\sc tmso+zero}, {\bf MC-SAT}\ is decidable with non-elementary complexity. A reduction to the emptiness of alternating nonzero automata gives better complexity: \begin{theorem} \label{theo:pctls} For \ctls\allop\ the satisfiability problem is in 3-{\sc nexptime}\ $\cap$ co-3-{\sc nexptime}. The following table summarizes complexities of the satisfiability problem for various fragments and variants of \ctls\allop: \smallskip \begin{tabular}{|c|c|c|} \hline & $[\PCTLE,\PCTLA,\PCTLPp,\PCTLPo]$ & $[\PCTLPp,\PCTLPo]$\\ \hline {\ctl}$^*$ & \makecell{3-{\sc nexptime}\\$\cap$ co-3-{\sc nexptime}} & \makecell{3-{\sc exptime}~\cite{brazdil2008controller}\\ (qualitative p\ctls)}\\ \hline ECTL & \makecell{2-{\sc nexptime}\\$\cap$ co-2-{\sc nexptime}} & \makecell{2-{\sc exptime}~\cite{brazdil2008controller}\\(qualitative pECTL)}\\ \hline {CTL} & \makecell{{\sc nexptime}\\$\cap$ co-{\sc nexptime}} & \makecell{{\sc exptime}~\cite{Brazdil2008}\\ (qualitative p{CTL})}\\ \hline \end{tabular} \end{theorem} According to~\cite{Brazdil2008,brazdil2008controller}, the complexities for ECTL$[\PCTLPp,\PCTLPo]$\ and {CTL}$[\PCTLPp,\PCTLPo]$\ are optimal. The first step in the proof of Theorem~\ref{theo:pctls} is a linear-time reduction from {\bf MC-SAT}\ to: \smallskip \noindent {\bf BIN-SAT}: given a formula, does it have a model among binary trees? 
\begin{theorem}\label{theo:reduc} Any formula $\pCTLFormula$ of \ctls\allop\ on alphabet $\Sigma$ can be effectively transformed into a formula $\pCTLFormula'$ of linear size on alphabet $\Sigma\cup \{\circ\}$ such that $\pCTLFormula$ has a model iff $\pCTLFormula'$ has a model among binary trees. As a consequence, {\bf MC-SAT}\ linearly reduces to {\bf BIN-SAT}. This transformation stabilizes the fragment {\ctl}$^*$$[\proba_{>0},\proba_{=1}]$. \end{theorem} The second step is a standard translation from logic to alternating automata~\cite{ltl}. \begin{lemma}\label{lem:pctltobc} For every formula $\xi$ of \ctls\allop\ (resp. \ECTL\allop), there is an alternating automaton $\mathcal{A}$ with \limch\ for Adam\ whose language is the set of binary trees satisfying the formula at the root. The automaton is effectively computable, of size $O(2^{2^{|\xi|}})$ (resp. $\mathcal{O}({2^{|\xi|}})$). If $\xi$ is a {CTL}\ formula, the size of $\mathcal{A}$ is $\mathcal{O}({{|\xi|}})$. In case the formula does not use the $\exists$ and $\forall$ operators, the $F_\forall$ condition is trivial i.e. $F_\forall=\Astates$. \end{lemma} \iffalse \begin{proof}[Sketch of proof] The construction is an adaptation of the standard classical case~\cite{ltl}; it is given in the appendix. The state of the alternating automaton stores the current path formula being verified and the state of a B\"uchi automaton corresponding to the formula. Eve chooses in which directions the formula should be verified (all of them for $\proba_{=1}$ and $\forall$ formulas, one of them for $\exists$-formulas, one or both for $\proba_{>0}$-formulas). Moreover, Eve proposes to Adam valuations of the state subformulas of the current path formula. The canonical move of Adam is to accept the valuation, in which case the alternating automaton proceeds to the next nodes. The other option for Adam is to refute one of the valuations of the subformulas, in which case the automaton proceeds in the corresponding path subformula.
This can happen no more times than the depth of $\xi$, thus Adam has limited choice. \end{proof} \fi \begin{proof}[Proof of Theorem~\ref{theo:pctls}] All the complexity results are obtained by reduction of {\bf MC-SAT}\ to the emptiness problem for an alternating nonzero automaton with limited choice for Adam, which is decidable in {\sc nexptime}$\cap$co-{\sc nexptime}\ (Theorem~\ref{theo:recogalt}). The size of the automaton varies from doubly-exponential to linear depending on whether the formula is in {\ctl}$^*$, ECTL\ or {CTL}\ (Lemma~\ref{lem:pctltobc}). In case the formula does not use the deterministic operators $\exists$ and $\forall$ (i.e. for qualitative p\ctls, pECTL\ and p{CTL}) the $F_\forall$ condition of the alternating automaton is trivial thus its emptiness is decidable in {\sc exptime}\ (Theorem~\ref{theo:recogalt}). \end{proof} \section*{Conclusion} We have introduced the class of \emph{alternating nonzero} automata, proved decidability of the emptiness problem for the subclass of automata with \limch\ for Adam\ and obtained as a corollary algorithms for the satisfiability of a temporal logic extending both CTL* and the qualitative fragment of pCTL*. A natural direction for future work is to find more general classes of alternating nonzero automata with a decidable emptiness problem, which requires some more insight on the properties of the acceptance games.
\section*{Abstract} We approach the problem of combining top-ranking association statistics or P\nobreakdash-values{} from a new perspective which leads to a remarkably simple and powerful method. Statistical methods, such as the Rank Truncated Product (RTP), have been developed for combining top-ranking associations and this general strategy proved to be useful in applications for detecting combined effects of multiple disease components. To increase power, these methods aggregate signals across top ranking SNPs, while adjusting for their total number assessed in a study. Analytic expressions for combined top statistics or P\nobreakdash-values{} tend to be unwieldy, which complicates interpretation, practical implementation, and hinders further developments. Here, we propose the Augmented Rank Truncation (ART) method that retains main characteristics of the RTP but is substantially simpler to implement. ART leads to an efficient form of the adaptive algorithm, an approach where the number of top ranking SNPs is varied to optimize power. We illustrate our methods by strengthening previously reported associations of $\mu$-opioid receptor variants with sensitivity to pain. \\ \clearpage \section*{Introduction} Complex diseases are influenced by multiple environmental and genetic risk factors. A specific factor, such as a single mutation, may convey a high risk, but population frequencies of high risk factors are usually low, and substantial contribution to disease incidence can be attributable to accumulation of multiple but weak determinants within individuals. Genetic determinants of complex diseases that had been identified by genetic association studies tend to carry modest effects, yet power to detect such variants, as well as accuracy of identifying individuals at risk, can be improved by combining multiple weak predictors. 
The main challenge in detecting specific variants is low statistical power, but the overall accumulated effect of many individually weak signals can be much stronger. It is convenient to combine statistical summaries of associations, for example, P\nobreakdash-values{}, and this approach can be nearly as efficient as analysis of raw data.\cite{DanyuLinMetaNoGain2009} In observational research, methods for combining P\nobreakdash-values{} are commonly associated with meta-analyses that pool results of multiple experiments studying the same hypothesis. The combined P-value then aggregates signals across all $L$ studies, potentially providing a higher level of assurance that the studied risk factor is associated with disease. Furthermore, if samples in those studies are taken from populations that are similar with respect to the effect size magnitude, the combined meta-analytic P\nobreakdash-value{} will well approximate the one that would have been obtained by pooling together all raw data and performing a single test.\cite{zaykin2011optimally} P\nobreakdash-values{} can also be combined when the $L$ hypotheses are distinct, and when the interest is in detecting the overall signal. Such applications are common and include gene set and pathway analyses. Specifically, a typical strategy in computation of gene- and pathway-scores includes (1) mapping individual SNPs to genes, followed by combining their association P-values into gene-scores, and (2) grouping genes into pathways and combining gene-scores into pathway-scores. Existing tools for combining P\nobreakdash-value{}s ($P_i, i=1,\dots, L$) are often based on the sum of $P_i$'s transformed by some function $H$. 
For example, the Fisher test\cite{fisher1932statistical} is based on the log-transformed P\nobreakdash-values{}, $H(P_i) = -2 \ln(P_i)$, which are then added up to form a test statistic $T = \sum_{i=1}^{L} H(P_i) \sim \chi^2_{(2L)}$, where $\chi^2_{(2L)}$ has a chi-square distribution with $2L$ degrees of freedom. When a portion of $L$ distinct associations is expected to be spurious, it is advantageous to combine only some of the predictors using a truncated variation of combined P\nobreakdash-value{} methods. For instance, Zaykin et al.\cite{Zaykin2002} proposed the Truncated Product Method (TPM) as a variation of the Fisher test, trimmed by the indicator function, $I(P_i \le \alpha)$, that is equal to zero if $P_i > \alpha$, and one if $P_i \le \alpha$; $0 < \alpha \le 1$ is a truncation threshold. The combined P\nobreakdash-value{}, $P_{\text{TPM}}$, is then given by the cumulative distribution function (CDF) of $W = \sum_{i=1}^{L} \ln(P_i) I(P_i \le \alpha)$. With the TPM approach, the threshold $\alpha$ is fixed while the number of P\nobreakdash-values{} that form the sum $W$ is random. A related popular method for combining top-ranking P\nobreakdash-values{} is the Rank Truncated Product (RTP).\cite{dudbridge2003,zaykin2007combining,ZaykinThesis} In RTP, the number of P\nobreakdash-values{} to be combined, $k$, is fixed, rather than the P\nobreakdash-value{} threshold, as in TPM. The resulting combined P\nobreakdash-value{} can be found from the cumulative distribution of the product: \begin{eqnarray*} P_{\text{RTP}} &=& \Pr \left\{ \prod_{i=1}^{k} P_{(i)} \le w \right\} = 1 - \Pr \left\{ \sum_{i=1}^{k} \ln \left[P_{(i)}\right] > \ln \left[w\right] \right\}, \end{eqnarray*} where $P_{(i)}$ is the $i$th smallest P\nobreakdash-value{}, $i = 1, \ldots, k$.
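Because Fisher's statistic has an even number of degrees of freedom, its survival function reduces to a finite Poisson-type sum, $\Pr(\chi^2_{(2L)} > T) = e^{-T/2}\sum_{i=0}^{L-1}(T/2)^i/i!$, so the combined P-value needs only elementary functions. A small illustrative implementation (ours, not from the paper):

```python
import math

def fisher_combined(pvalues):
    """Fisher's combined p-value: T = -2*sum(log p_i) follows a chi-square
    distribution with 2L degrees of freedom under the null; for even df the
    survival function is exp(-T/2) * sum_{i<L} (T/2)**i / i!."""
    L = len(pvalues)
    half_t = -sum(math.log(p) for p in pvalues)  # this is T/2
    return math.exp(-half_t) * sum(half_t ** i / math.factorial(i)
                                   for i in range(L))
```

A sanity check: combining a single P-value returns it unchanged, and small inputs yield a small combined P-value.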
RTP leads to an appealing extension, where $k$ can be chosen adaptively, to maximize statistical power.\cite{Hoh2001,Yu2009,zhang2013} Adaptive rank truncated product (aRTP) variations optimize selection of the truncation point $k$ among all (or a subset) of possible values $1 \le k \le L$. Adaptive extensions for TPM are not as straightforward because the threshold $\alpha$ is a continuous variable, but one can resort to evaluating the distribution over a set of grid points.\cite{sheng2013adaptive} In adaptive extensions of TPM and RTP, the final test statistic is the minimum P\nobreakdash-value{} observed at various candidate truncation points. The RTP null distribution is considerably more complicated than that of TPM. Complexity of the RTP distribution is due to dependency between ordered P\nobreakdash-values{}. When $k = L$, this dependency is inconsequential because a statistic is formed as a sum of $L$ terms and its value does not change if the terms are re-ordered. In fact, when $k=L$, the RTP P\nobreakdash-value{} is the same as the Fisher combined P\nobreakdash-value{}, derived via a CDF of a sum of independent chi-square variables. However, if $1 < k < L$, the $k$ smallest $P$-values remain correlated and dependent even if these $k$ values are randomly shuffled. The dependency is induced through $P_{(k+1)}$ being a random variable: when $P_{(k+1)}$ happens to be relatively small, the $k$ P\nobreakdash-values{} have to squeeze into a relatively small interval from zero to that value. This induces positive dependency between random sets of $k$ smallest P\nobreakdash-values{}, similar to the clustering effect in random effects models. Although the linear correlation can be eliminated by scaling the largest P-value, $P_{(k)}$, the $k$ values remain dependent, as illustrated in Figure \ref{fig:hole} (see ``Appendix (A-1)'' for discussion). 
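The positive dependence among the $k$ smallest of $L$ uniform P-values described above is easy to see numerically; the following simulation (sample size and seed are arbitrary choices of ours) estimates the correlation between the two smallest of $L$ independent uniforms:

```python
import random

def smallest_two_correlation(L=10, sims=20000, seed=1):
    """Pearson correlation between the two smallest of L iid U(0,1) values,
    estimated by simulation."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(sims):
        u = sorted(rng.random() for _ in range(L))
        xs.append(u[0])  # P_(1)
        ys.append(u[1])  # P_(2)
    mx = sum(xs) / sims
    my = sum(ys) / sims
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sims
    vx = sum((x - mx) ** 2 for x in xs) / sims
    vy = sum((y - my) ** 2 for y in ys) / sims
    return cov / (vx * vy) ** 0.5
```

For $L=10$ the estimated correlation is clearly positive (around 0.67), illustrating the clustering effect induced through the randomness of $P_{(k+1)}$.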
Applications of combining independent P\nobreakdash-values{} remain important in statistical research, and there is clear preference among practitioners for methods that are based on simple and transparent approaches, such as the Fisher or the inverse normal (Stouffer's) tests.\cite{stouffer1949american,fisher1932statistical,zaykin2011optimally,whitlock2005combining,loughin2004systematic,won09} Here, we derive a simple, easily implemented theoretical form of the RTP distribution for independent P\nobreakdash-values{} which further leads to derivation of a new statistic. The new statistic, which we call the Augmented RTP, or ART, is based on the product of the $k$ smallest P\nobreakdash-values{}, just like the RTP but, unlike the RTP, the distribution of the new statistic is given by standard functions and its computation avoids explicit integration. Despite its simplicity, ART is at least as powerful as RTP, according to our simulation studies. Moreover, the ART leads to an adaptive statistic, where the number of the smallest P\nobreakdash-values{} to combine can be determined analytically to maximize power. Next, we extend our methods by allowing dependence in the observed P\nobreakdash-values{}. In genetic association studies, P\nobreakdash-values{} are often correlated due to linkage disequilibrium (LD). The LD correlation is typically accounted for through permutational or other resampling approaches, where P\nobreakdash-values{} are simulated under the null hypothesis while preserving LD between genetic variants. While such approaches are practical and easy to implement, it is also possible to de-correlate P\nobreakdash-values{} before combining them and then use any of the approaches developed under the independence assumption. Surprisingly, we find that the decorrelation step often improves power.
In particular, we find that when association with disease is markedly different among variants within a gene, statistical power of standard methods (without the decorrelation step) approaches a plateau as a function of LD and does not improve as the number of SNPs increases. In contrast, power of our proposed decorrelation method increases steadily with the number of SNPs. Our analytical results as well as simulation experiments demonstrate this property for both ART (where $k$ is chosen beforehand and fixed) and for the adaptive variations of RTP and ART (aRTP and ART-A). Finally, we illustrate usefulness of the proposed methods by strengthening an overall, gene-based association via combining previously reported P\nobreakdash-values{} between pain sensitivity and individual SNPs within the $\mu$-opioid receptor. \section*{Material and Methods} \subsection*{Theoretical RTP distribution and Augmented RTP, the ART} Even when P\nobreakdash-values{} are independent, previously proposed theoretical forms of the RTP distribution are cumbersome and result in expressions that involve repeated integration.\cite{dudbridge2003,ZaykinThesis,CombPval07,Nagaraja2006} For example, Nagaraja\cite{Nagaraja2006} gives the cumulative distribution for the statistic $W_{k} = - \sum_{i=1}^k \ln P_{(i)}$, $k<L$, as: \begin{eqnarray} \Pr(W_k > w) &=& \sum_{j=1}^k w_j \exp\left\{-\frac{c_j w}{c_{k+1}}\right\} \frac{1}{(L-k-1)!} \int_0^w \exp\left\{y \, d_j\right\} y^{L-k-1} d y \nonumber \\ &+& \sum_{s=0}^{L-k-1} \exp\left\{ -w \right\} \frac{w^s}{s!}, \quad \text{where}\label{nagaraja} \\ c_j &=& L-j+1, \nonumber\\ d_j &=& \frac{k+1-j}{L-k}, \nonumber\\ w_j &=& \frac{1}{L-j+1} \frac{L!}{(L-k)!} \frac{(-1)^{k-j}}{(j-1)!(k-j)!}. \nonumber \end{eqnarray} Theoretical forms of the RTP distribution (e.g., Eq. \ref{nagaraja}) may retain order-specific terms.
Here, we proceed to a simpler representation by noting that every random realization of $k$ smallest P\nobreakdash-values{} can be shuffled. This step does not change the value of the product, $W_k$ (or its logarithm), which is our statistic of interest, but implies that we can treat the joint $k$-variate distribution as governed by the same pair-wise dependence for every pair of variables. Moreover, variables of that shuffled distribution are identical marginally. The dependency is induced completely through randomness of $P_{(k+1)}$, and conditionally on its value, the $\{W_k \mid p_{(k+1)}\}$ distribution is given by standard independence results. Then, $P_\text{RTP}$ is given by the marginal CDF of $W_k$. Based on this conceptual model, we derived the following representation of RTP where a single integral is evaluated in a bounded interval $(0,1)$, which allows one to evaluate the RTP distribution as a simple average of standard functions. Specifically, we derive a simple expression for the RTP distribution as the expectation of a function of a uniform (0 to 1) random variable: \begin{eqnarray} P_{\text{RTP}}(k) &=& \Pr(W_{k} \le w) = 1-\int_0^1 G_{k}\left\{\ln\left(\frac{\left[B^{-1}_{k+1}(u)\right]^{k}}{w}\right)\right\} du \label{eq:wk}\\ &=& E\left\{H(U \mid k, w)\right\}, \nonumber \end{eqnarray} where $B^{-1}_{k+1}(\cdot)$ is inverse CDF of $\text{Beta}(k+1, L-k)$ distribution, $G_k(\cdot)$ is CDF of Gamma$(k,1)$, and $H(u \mid k, w)=G_k\left(\ln \left(\frac{\left[B^{-1}_{k+1}(u)\right]^{k}}{w}\right)\right)$. $P_{\text{RTP}}(k)$ is the combined RTP P\nobreakdash-value{}. Notably, given the value of the product of $k$ P\nobreakdash-values{}, $W=w$, we can simultaneously evaluate $P_{\text{RTP}}(k+1)$: \begin{eqnarray} P_{\text{RTP}}(k+1) &=& \Pr(W_{k+1} \le w) = 1-\int_0^1 G_{k}\left\{\ln\left(\frac{\left[B^{-1}_{k+1}(u)\right]^{k+1}}{w}\right)\right\} du \label{eq:wk1}. 
\end{eqnarray} Details and the derivation are given in ``Appendix (A-2).'' The conditional independence of the $k-1$ smallest P\nobreakdash-values{}, given a value of the beta-distributed $k$-th smallest P\nobreakdash-value{} (Eq. \ref{eq:X}, \ref{eq:Y}), leads to a simple statistic which (just as RTP) is a function of the product of the $k$ smallest P\nobreakdash-values{}. This statistic and its distribution are not an approximation to $W_k$ and the RTP distribution. However, similarly to RTP, the new statistic is designed to capture information contained in the first $k$ smallest $P$-values. To construct the new statistic, we propose the following transformation that involves the product $W_{k-1}$ and the variable $P_{(k)}$. These transformations yield three independent variables that are then added together to give a gamma-distributed random variable, \begin{eqnarray} A_k &=& -\ln \left\{ W_{k-1} \right\} + (k-1) \ln \left\{ P_{(k)} \right\} + G_\lambda^{-1} \left\{1 - B_k(P_{(k)})\right\}, \label{ak.stat} \end{eqnarray} where $G_\lambda^{-1}(\cdot)$ is the inverse CDF of the Gamma$(\lambda,1)$ distribution, $$\lambda = (k-1) \times E\left\{-\ln\left(P_{(k)}\right)\right\} = (k-1) ( \Gamma' (L+1)/\Gamma (L+1) - \Gamma'(k)/ \Gamma(k)),$$ $\Gamma'$ is the first derivative of a gamma function; and $B_k(x)$ is the CDF of $\text{Beta}(k,L-k+1)$ distribution evaluated at $x$. The shape parameter $\lambda$ is chosen so that the two last terms in Eq. \ref{ak.stat} (that are both transformations of $P_{(k)}$) have the same expectation. Given the observed value $A_k = a_k$, the combined P\nobreakdash-value{} is \begin{eqnarray} \text{ART} = \Pr(A_k \le a_k) = 1 - G_{k + \lambda - 1} (a_k). \label{ak} \end{eqnarray} Under the null hypothesis, as illustrated by Figure \ref{fig:Ak_RTP}, combined P-values based on the proposed method (ART) are very similar to $P_{\text{RTP}}$, and approach $P_{\text{RTP}}$ as $k$ increases.
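Both the single-integral RTP representation and the ART P-value above can be evaluated with elementary functions when $k$ and $L$ are integers. The sketch below (pure Python; the bisection inversions, grid size, and tail bound are our own choices, not the authors' reference implementation) uses the binomial-sum form of the integer-parameter Beta CDF and the power series of the regularized incomplete gamma function:

```python
import math

def gamma_cdf(x, a):
    """Regularized lower incomplete gamma P(a, x) via its power series;
    valid for any real shape a > 0."""
    if x <= 0:
        return 0.0
    term = 1.0 / a
    total = term
    n = 0
    while term > 1e-15 * total:
        n += 1
        term *= x / (a + n)
        total += term
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

def gamma_ppf(u, a):
    """Inverse of gamma_cdf in x by bisection (the upper bound is adequate
    for the tail probabilities used here)."""
    lo, hi = 0.0, a + 40.0 + 10.0 * math.sqrt(a)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if gamma_cdf(mid, a) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def beta_cdf(x, a, b):
    """CDF of Beta(a, b) at x for integer a, b (binomial-sum form)."""
    n = a + b - 1
    return sum(math.comb(n, j) * x ** j * (1.0 - x) ** (n - j)
               for j in range(a, n + 1))

def beta_ppf(u, a, b):
    """Inverse Beta CDF by bisection (the CDF is monotone in x)."""
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if beta_cdf(mid, a, b) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def p_rtp(w, k, L, grid=400):
    """P_RTP(k) = Pr(W_k <= w), with W_k the product of the k smallest of L
    independent uniform P-values, via the single bounded integral over (0,1)
    evaluated with a midpoint rule; requires 1 <= k < L."""
    total = 0.0
    for i in range(grid):
        u = (i + 0.5) / grid
        q = beta_ppf(u, k + 1, L - k)   # B^{-1}_{k+1}(u)
        total += gamma_cdf(k * math.log(q) - math.log(w), k)
    return 1.0 - total / grid

def art_pvalue(pvalues, k):
    """Combined ART P-value for the k smallest of the input P-values;
    requires 2 <= k < L."""
    L = len(pvalues)
    p = sorted(pvalues)
    # lambda = (k-1)(psi(L+1) - psi(k)) = (k-1) * sum_{j=k}^{L} 1/j
    lam = (k - 1) * sum(1.0 / j for j in range(k, L + 1))
    a_k = (-sum(math.log(x) for x in p[:k - 1])      # -ln W_{k-1}
           + (k - 1) * math.log(p[k - 1])            # (k-1) ln P_(k)
           + gamma_ppf(1.0 - beta_cdf(p[k - 1], k, L - k + 1), lam))
    return 1.0 - gamma_cdf(a_k, k + lam - 1.0)
```

For $k=1$ the RTP distribution has the exact closed form $\Pr(P_{(1)} \le w) = 1-(1-w)^L$, which provides a direct check of the numerical integral.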
However, under the alternative, we find (as described in ``Results'' section) that ART has either the same or higher power than RTP. Furthermore, the combined P\nobreakdash-value{}, $\text{ART}$, can be easily computed in R using its standard functions. A short example and an R code are given in ``Appendix (A-3).'' \subsection*{Adaptive ART method, ART-A} As we discussed earlier in Introduction, the number $k$ of P\nobreakdash-values{} to be combined by the RTP method (or ART) is fixed and needs to be pre-specified. The choice of $k$ is somewhat arbitrary, so a researcher may wish to evaluate $\text{ART}$ at several values of $k$, consequently choosing $k$ that corresponds to the smallest combined P\nobreakdash-value{}. However, this additional step creates another layer of multiple comparisons, which needs to be accounted for. Yu et al. \cite{Yu2009} proposed an empirical procedure to evaluate the adaptive RTP (aRTP) method based on the minimum P\nobreakdash-value{} computed over various candidate truncation points. To avoid a cumbersome two-level permutation procedure, they built on the method suggested by Ge et al.\cite{Ge2003} to reduce computational time. While computationally efficient, the method requires storing a large $B \times L$ matrix, with every row containing $L$ P\nobreakdash-values{} generated under the null distribution over $B$ simulated experiments. Zhang et al.\cite{zhang2013} derived an analytic but mathematically complex aRTP distribution, which needs to be evaluated using high-dimensional integration. Here, we propose a new and easily implemented adaptive version of ART, ART-A. The method exploits the fact that ordered P\nobreakdash-values{} can be represented as functions of the same number of independent uniform random variables (Appendix (A-4)).
The two main ideas behind ART-A are: first, to approximate the Gamma distribution with a large shape parameter by the normal distribution, and second, to use the fact that the joint distribution of the partial normal sums follows a multivariate normal distribution. \subsection*{Correlated $P$-values} We further extend the proposed methods to combine correlated P\nobreakdash-values{} via the Decorrelation by Orthogonal Transformation approach, DOT. Let $L$ correlated $P$-values, $(p_1, p_2, \ldots, p_L)$, originate from statistics that jointly follow a multivariate normal distribution, $\mathbf{y} \sim \text{MVN}\left(\boldsymbol{\mu} = \matr{0}, \matr{\Sigma}\right)$, under $H_0$. Dependent variables can be transformed into independent variables by using the eigendecomposition of $\matr{\Sigma}$, such that $\matr{\Sigma} = \matr{Q}\matr{\Lambda}\matr{Q}^{-1}$, where $\matr{Q}$ is a square matrix, with $i$th column containing eigenvector $\mathbf{q}_i$ of $\matr{\Sigma}$, and $\matr{\Lambda}$ is the diagonal matrix of eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_L$. Next, define the matrix $\matr{H} = \matr{Q} \matr{\Lambda} ^{-1/2}\matr{Q}^T$ and $\matr{y}_e = \matr{H}^{T} \matr{y}$. P\nobreakdash-values{} are decorrelated as $1 - \Phi(\matr{y}_e)$, where $\Phi$ is the standard normal CDF applied elementwise. Then, the first $k$ smallest decorrelated P\nobreakdash-values{} can be used to calculate various combined statistics. The choice of this particular transformation is motivated by its ``invariance to order'' property. Briefly, in the equicorrelation case, including the special case of $\rho=0$, i.e., independence, a permutation of $\matr{y}$ should yield the same (possibly permuted) values in the decorrelated vector, $\matr{y}_e$. Extensive evaluations of the decorrelation approach are presented elsewhere.
\cite{VegasPreprint} \section*{Results} \subsection*{Simulation study results} We used simulation experiments to evaluate the Type I error rate and power of the proposed methods relative to the previously studied RTP (defined for a fixed $k$) and to the adaptive RTP (where $k$ is varied and the distribution is evaluated by single-layer simulations as in Yu et al., 2009).\cite{Ge2003,Yu2009} Performance of various methods was evaluated using the first $k$ ordered P\nobreakdash-values{}, with $k=\{10, 100\}$ and $L = \{100, 200, 500\}$. Details of the simulation design are given in ``Appendix (A-5).'' Tables \ref{tab1}-\ref{tabcor1} present Type I error rates for independent and decorrelated P\nobreakdash-values{}, respectively. In the tables, rows labeled ``ART-A'' refer to our newly proposed adaptive ART method, while ``aRTP'' rows label the results of the conventional approach.\cite{Yu2009} For the adaptive methods, the sequence of truncation points varied from 1 to $k$ or from 1 to $L$, if $k=L$. Both tables confirm that all methods maintain the correct Type I error rate. Tables \ref{tab2}-\ref{tab5} summarize a set of power simulations for independent P\nobreakdash-values{}. Results presented in Table \ref{tab2} were obtained under the assumption that all $L$ statistics had the same underlying effect size ($\mu = 0.5$). From this table, it is evident that our ART has the highest power, closely followed by RTP. In general, the ART P\nobreakdash-values{} tend to be similar to the P\nobreakdash-values{} obtained by the RTP, and we show their similarity graphically in Figure \ref{fig:Ak_RTP}. The Simes method has the lowest power, which is expected due to homogeneity in effect sizes across $L$ tests and absence of true nulls. For the results in Table \ref{tab3}, the effect size was allowed to randomly vary throughout the range from 0.05 to 0.45. In both of these tables, the ART method has the highest power, while the Simes method has the lowest power.
The power of the two adaptive methods is very similar, but lower than that of the methods based on a fixed $k$ (RTP and ART). Nonetheless, in practice, a good choice for $k$ may not be immediately clear, so a small sacrifice in power may be preferable to an arbitrary and possibly poor choice of $k$. However, when $L$ is large, it can be impractical or unreasonable to vary candidate truncation points all the way up to $L$. Finally, Table \ref{tab5} summarizes results for simulations when some of the $L$ hypotheses were true nulls ($\mu=0$), while the remaining hypotheses were true signals ($\mu=0.5$). The results follow the same pattern as in the previous tables, with ART having the highest power. Table \ref{tabcor4} summarizes a set of power simulations for correlated P\nobreakdash-values{}. The effect sizes were randomly varied between -0.45 and 1.3 in each simulation. The correlation matrices were generated as described in ``Appendix (A-5).'' This set of simulations assumes that the P\nobreakdash-values{} were obtained from the same data set as the sample estimate of the correlation matrix. Under heterogeneous effect sizes (Table \ref{tabcor4}) the empirical versions of the tests (``RTP'', ``ART-A'') show nearly identical (and low) power for various combinations of $k$ and $L$ values. However, decorrelation-based methods become quite powerful, and their power increases with $k$ and $L$. The steady power increase is due to the decorrelation effect on the combined noncentrality, which involves the sum $\sum^L_{i \ne j} (\mu_i - \mu_j)^2$ and therefore increases with the heterogeneity of $\mu$.
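The effect of heterogeneity on the combined noncentrality can be illustrated with a small numerical check (our illustration, assuming an equicorrelated $\matr{\Sigma}$; for a chi-square-type combination of the decorrelated scores $\matr{y}_e = \matr{H}\matr{y}$, the noncentrality is $\boldsymbol{\mu}^T\matr{\Sigma}^{-1}\boldsymbol{\mu}$):

```python
# Illustration (ours): after decorrelation, a chi-square-type combination of the
# scores y_e = H y has noncentrality mu^T Sigma^{-1} mu, which grows with the
# heterogeneity of mu even when the sum of the effect sizes is held fixed.
import numpy as np

L, rho = 4, 0.5
Sigma = np.full((L, L), rho) + (1.0 - rho) * np.eye(L)  # assumed equicorrelation
Sinv = np.linalg.inv(Sigma)

mu_hom = np.array([0.5, 0.5, 0.5, 0.5])    # homogeneous effects, sum = 2.0
mu_het = np.array([-0.4, 0.1, 0.6, 1.7])   # heterogeneous effects, same sum = 2.0

nc_hom = mu_hom @ Sinv @ mu_hom            # approx 0.4
nc_het = mu_het @ Sinv @ mu_het            # approx 5.24: heterogeneity boosts power
```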
More details on the performance of the decorrelation approach are given elsewhere,\cite{VegasPreprint} but here we briefly note that this finding is practically relevant because substantial heterogeneity of associations is expected among genetic variants, leading to a substantial power boost, as we next illustrate via re-analysis of published associations of genetic variants within the $\mu$-opioid gene with pain sensitivity. \subsection*{Real data analysis} In several popular variations of the gene-based approach,\cite{neale2004future} association statistics or P\nobreakdash-values{} are combined across variants within a gene.\cite{Yu2009,peng2010gene,liu2010versatile,li2011gates} Gene-based approaches have some advantages over methods based on individual SNPs or haplotypes. In particular, gene-based P\nobreakdash-values{} may facilitate subsequent meta-analysis of separate studies and can be less susceptible to erroneous findings.\cite{neale2004future} To obtain a gene-based P\nobreakdash-value{}, one would need to account for LD among variants. The matrix of LD correlation coefficients can be obtained conveniently without access to individual genotypes if frequencies of haplotypes for SNPs within the genetic region of interest are available. The LD for alleles $i$ and $j$ is defined as the difference between the di-locus haplotype frequency, $P_{ij}$, and the product of the two allele frequencies, $D_{ij} = P_{ij} - p_{i}p_{j}$. The LD-correlation for SNPs $i$ and $j$ is $r_{ij} = \frac{D_{ij}}{\sqrt{p_i(1-p_i)p_j(1-p_j)}}$. Shabalina et al.\cite{shabalina2008expansion} and Kuo et al.\cite{kuo2014discovering} reported SNP-based P\nobreakdash-values{} (Table \ref{SNP.Pval}), as well as results of several haplotype-based tests for genetic association of variants within the $\mu$-opioid receptor (\textit{MOR}) with pain sensitivity. Kuo et al.
also reported estimated frequencies for 11-SNP haplotypes within the \textit{MOR} gene,\cite{kuo2014discovering} from which the $11 \times 11$ LD correlation matrix can be computed. The $P_{ij}$ frequencies are given by the sum of frequencies of those 11-SNP haplotypes that contain both of the minor alleles for SNPs $i$ and $j$. Similarly, the allele frequency $p_i$ is the sum of the frequencies of haplotypes that carry the minor allele of SNP $i$. The LD correlations within the \textit{MOR} region spanned by the 11 SNPs ranged from -0.82 to 0.99, with the average absolute value $\approx 0.55$ and the median absolute value $\approx 0.66$. Half of the pairwise LD correlations were smaller than -0.23 or larger than 0.82. Our analysis (Table \ref{muopioid}) showcases the utility of the proposed methods. The columns show combined P\nobreakdash-values{}, for $k$ varying from two up to all eleven SNPs ($k$=1 is equivalent to the Bonferroni correction, i.e., 0.007$\times$11). Similar to what we found via simulation experiments, where correlation is controlled by reshuffling the phenotype values while keeping the original LD structure intact, RTP and aRTP (without the decorrelation step) do not benefit from inclusion of additional SNPs. P\nobreakdash-values{} in the ART column are very similar to those in the RTP column, in line with our theoretical expectations. In contrast to previously proposed methods that control correlation by resampling (i.e., RTP, aRTP and ART), the combined P\nobreakdash-values{} in the columns marked ``decorr'' are substantially lower. In these columns, we used the proposed transformation to independence, which gives much stronger combined P\nobreakdash-values{}. In all ``decorr'' columns, $k$=7 results in the smallest combined P\nobreakdash-value{}, implying that the number of real effects (including proxy associations) is at least seven. \section*{Discussion} Complex diseases are influenced by collective effects of environmental exposures and genetic determinants.
There can be numerous weak but biologically meaningful risk factors. The challenge is to distinguish between real and spurious statistical signals in the presence of multiple comparisons and low detection power. When the number of potential real associations is expected to be small, compared to the total number of variants evaluated within a study, it is advantageous to focus on the top-ranking associations. The rank truncated product method (RTP) has been designed with this objective in mind. The RTP and related approaches have been shown to be valuable tools in the analysis of genetic associations with disease. In this article, we derive a mathematically simple form of the RTP distribution that leads to a new method, ART, and its adaptive version, ART-A, which searches through a number of candidate values of the truncation point and finds an optimal one in terms of the combined P\nobreakdash-value{}. The ART is designed with the same objectives in mind as RTP and TPM: to facilitate detection of possibly weak signals among top-ranking predictors that could be missed unless combined into a single score. The ART is trivial to implement in terms of standard functions provided by statistical software such as R, and its power characteristics are close to those of RTP, or higher, in all studied settings. Analytical forms of ART and ART-A are derived under independence. To accommodate LD, we propose a decorrelation step, by transformation of P\nobreakdash-values{} to independence. Our Decorrelation by Orthogonal Transformation approach (DOT) is analogous to the Mahalanobis transformation.\cite{hardle2007applied} We found DOT to be surprisingly powerful in many settings, compared to the usual method of evaluating the distribution of the product of correlated P\nobreakdash-values{} under the null hypothesis.
Theoretical properties and an extensive numerical evaluation of DOT will be published elsewhere; currently, these findings are available as a preprint.\cite{VegasPreprint} Further, we illustrate an application of our methods with analyses of variants within the $\mu$-opioid gene that have been shown to affect sensitivity to pain. We find strengthened evidence of overall association within the 11-SNP block. In this application, the LD correlation matrix was reconstructed from the haplotype frequencies, which might be slightly different from the correlation of (0,1,2) values between pairs of SNPs.\cite{zaykin2004bounds} Further studies are needed to investigate whether approaches such as this, or utilization of reference panel (external) data as a source of LD information, may lead to substantial bias. \section*{Declaration of Interests} The authors declare no competing interests. \section*{Acknowledgments} This research was supported in part by the Intramural Research Program of the NIH, National Institute of Environmental Health Sciences. \section*{Web Resources} Software referenced in this article is available at: \mbox{}\\ \noindent {\small\url{https://github.com/dmitri-zaykin/Total_Decor}} \clearpage
\section{Introduction} The Loewner equation \begin{equation}\label{slit0} \frac{\partial g_t(z)}{\partial t} = \frac{1}{g_t(z) - U(t)}, \quad g_0(z)=z\in\Ha:=\{z\in\mathbb{C}\,|\, \Im(z)>0\}, \end{equation} where $U:[0,\infty)\to \mathbb{R}$ is continuous, is usually interpreted as describing a family $(g_t)_{t\geq 0}$ of conformal mappings $g_t:\Ha\setminus K_t\to \Ha$, where $(K_t)_{t\geq 0}$ is a family of growing, bounded subsets $K_t\subset \Ha,$ also called \emph{hulls}. \\ The most important example is the Schramm-Loewner evolution SLE$(\kappa)$, which is defined via \eqref{slit0} with $U(t)=\sqrt{\kappa/2}B_t$, where $B_t$ is a standard Brownian motion and $\kappa\geq0$.\\ A more general version for the growth of bounded hulls $(K_t)_{t\geq0}$ via conformal mappings\\ $g_t:\Ha\setminus K_t\to \Ha$ is given by the Loewner equation \begin{equation}\label{slitooo} \frac{\partial g_{t}(z)}{\partial t} = \int_\mathbb{R}\frac{\nu_t(du)}{g_t(z)-u} \quad \text{for a.e. $t\geq0$, $g_{0}(z)=z\in \Ha,$} \end{equation} where $(\nu_t)_{t\geq0}$ is a family of probability measures having some additional regularity properties.\\ Besides this analytic-geometric view, we might regard equation \eqref{slitooo} also as an evolution equation for a family $(\mu_t)_{t\geq 0}$ of probability measures on $\mathbb{R}$ defined via \[\frac1{g_t^{-1}(z)} = \int_\mathbb{R} \frac1{z-u} \mu_t(du).\] This interpretation is justified by quantum probability theory: Such families $(\mu_t)_{t\geq 0}$ arise as the distributions of certain quantum processes $(X_t)_{t\geq0}$ with monotonically independent increments. Here, a quantum process is simply a family of self-adjoint linear operators on a fixed Hilbert space. For the notions ``distribution of $X_t$'' and ``monotone independence'', we refer to Section \ref{mon_sec}.\\ For $U(t)\equiv 0$, the mappings $g_t$ from \eqref{slit0} are given as $g_t(z)=\sqrt{z^2+2t}$ and $K_t$ is the straight line segment between $0$ and $\sqrt{2t}i$. 
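This closed form is easy to verify numerically; the sketch below (ours; \texttt{solve\_loewner} is an illustrative helper, not from the paper) integrates \eqref{slit0} with $U\equiv 0$ using a standard RK4 scheme and compares the result with $g_t(z)=\sqrt{z^2+2t}$:

```python
# Numerically integrate the Loewner ODE dg/dt = 1/(g - U(t)) with U = 0
# and compare with the closed-form solution g_t(z) = sqrt(z^2 + 2t).
import cmath

def solve_loewner(z, T, n_steps=1000, U=lambda t: 0.0):
    """Classical RK4 integration of dg/dt = 1/(g - U(t)), g(0) = z."""
    g, dt = z, T / n_steps
    f = lambda s, w: 1.0 / (w - U(s))
    for i in range(n_steps):
        t = i * dt
        k1 = f(t, g)
        k2 = f(t + dt / 2, g + dt / 2 * k1)
        k3 = f(t + dt / 2, g + dt / 2 * k2)
        k4 = f(t + dt, g + dt * k3)
        g += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return g

z, T = 1 + 1j, 1.0
numeric = solve_loewner(z, T)
exact = cmath.sqrt(z * z + 2 * T)  # principal branch suffices for this z
```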
The corresponding measure $\mu_t$ is an arcsine distribution with mean 0 and variance $t$. In this case, the associated process $(X_t)$ is called a \emph{monotone Brownian motion}. We thus have the following different viewpoints on the dynamics of the Loewner equation with $U(t)\equiv 0$: \begin{center} \begin{tabular}{|l|l|l|l|} \hline Conformal mappings & Growing sets & Distributions $\mu_t$ & Quantum process $(X_t)$ {\color{white}$\frac{\binom{8}{9}}{\binom{8}{9}}$}\\[1mm] \hline $g_t(z)=\sqrt{z^2+2t}$ & $K_t=[0,\sqrt{2t}i]$ & $\frac{dx}{\pi\sqrt{2t-x^2}}, x\in(-\sqrt{2t}, \sqrt{2t})$ & monotone Brownian motion {\color{white}$\frac{\binom{8}{9}}{\binom{8}{9}}$}\\ \hline \end{tabular} \end{center} \vspace{3mm} While the correspondence between the conformal mappings, the growing sets, and the distributions is derived from simple calculations, the construction of a monotone Brownian motion is rather non-trivial. \\ \begin{itemize} \item[(1)] Muraki constructed a monotone Brownian motion on a certain Fock space in \cite{MR1462227} (before he introduced the notion of monotone independence around the year 2000). \end{itemize} \vspace{2mm} Just as a classical Brownian motion can be approximated by a random walk, one can construct a sequence of growing graphs, a ``monotone quantum random walk'', which approximates a monotone Brownian motion:\\ \begin{itemize} \item[(2)] In \cite[Theorem 5.1]{acc}, the authors construct a sequence of undirected graphs $G_1, G_2, ...,$ whose adjacency matrices $A_1,A_2,...$ can be interpreted as a discrete approximation of a monotone Brownian motion. The graph $G_{n-1}$ is a subgraph of $G_{n}$, and $A_n$ can be regarded as a self-adjoint operator on the Hilbert space $l^2(V_n)$, where $V_n$ denotes the vertex set of $G_n$. \\ Thus, the growing graph $(G_n)_{n\in\mathbb{N}}$ can be thought of as a ``monotone quantum random walk'', and the moments of $A_n$ (scaled in a suitable way) converge to the moments of a monotone Brownian motion.
\end{itemize} \vspace{2mm} It is natural to ask whether the constructions (1) and (2) can be extended to more general processes. The construction of quantum processes with monotonically independent increments associated to \eqref{slitooo} has been established in the recent works \cite[Theorem 6.8]{jek17} and \cite[Theorem 1.14]{iu}. Both works treat even more general settings.\\ In this paper we are concerned with (2). O. Bauer already noted in \cite[Section A]{bauer03} that a discrete L\"owner evolution can be thought of as a monotone quantum random walk. Our main results explicitly describe these random walks based on the construction from \cite{acc}. \\ \textbf{Outline of this work:}\\ In Section 2 we recall some facts about Loewner's differential equation and we explain its relation to monotone probability theory in Section 3. In Section 4 we recall the comb product of graphs and look at certain spidernets.\\ In Section 5 we then find discrete approximations as in (2) via comb products of those spidernets for equation \eqref{slit0} with continuous non-negative driving functions (Theorem \ref{theorem10}) and for equation \eqref{slit2} with measures $\nu_t$ with $\supp \nu_t\subset [0,M]$ for some $M>0$ (Theorem \ref{theorem11}). \newpage \section{Loewner's differential equation} \subsection{The slit Loewner equation}${}$\\[-2mm] The slit Loewner equation is given by \begin{equation}\label{slit} \frac{\partial g_t(z)}{\partial t} = \frac{1}{g_t(z) - U(t)}, \quad g_0(z)=z\in\Ha=\{z\in\mathbb{C}\,|\, \Im(z)>0\}, \end{equation} with a continuous driving function $U:[0,\infty)\to \mathbb{R}$. \\ The solution yields a family $(g_t)_{t\geq0}$ of conformal mappings $g_t:\Ha\setminus K_t\to \Ha$ with a strictly growing family $(K_t)_{t\geq0}$ of bounded sets, i.e. $K_s\subsetneq K_t$ whenever $0\leq s<t.$ The initial condition implies $K_0=\emptyset.$\\ Let $f_t=g_t^{-1}.$ The family $(f_t)_{t\geq 0}$ is also called a decreasing Loewner chain.
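The probabilistic reading of the introduction can also be checked numerically: for $U\equiv 0$, the Stieltjes--Perron inversion of the Cauchy transform $1/f_t$ recovers the arcsine density of $\mu_t$. The following sketch is ours (in particular the branch choice $\sqrt{z-\sqrt{2t}}\,\sqrt{z+\sqrt{2t}}$, which keeps $f_t:\Ha\to\Ha$):

```python
# Recover the arcsine density of mu_t from f_t(z) = sqrt(z^2 - 2t) (case U = 0)
# via Stieltjes-Perron inversion: density(x) = -Im(1/f_t(x + i*eps)) / pi.
import cmath
from math import pi, sqrt

def f_t(z, t):
    """F-transform sqrt(z^2 - 2t), with the branch mapping the upper
    half-plane into itself (product of two principal square roots)."""
    a = cmath.sqrt(2 * t)
    return cmath.sqrt(z - a) * cmath.sqrt(z + a)

def density(x, t, eps=1e-8):
    cauchy = 1.0 / f_t(complex(x, eps), t)  # Cauchy transform of mu_t
    return -cauchy.imag / pi

t, x = 0.5, 0.3
approx = density(x, t)
exact = 1.0 / (pi * sqrt(2 * t - x * x))  # arcsine density on (-sqrt(2t), sqrt(2t))
```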
From \eqref{slit} it follows that $(f_t)$ satisfies the following partial differential equation: \begin{equation}\label{slit2} \frac{\partial}{\partial t} f_{t}(z) = -\frac{\partial}{\partial z}f_{t}(z)\cdot \frac{1}{z-U(t)}, \quad f_{0}(z)=z\in \Ha. \end{equation} Each $f_t$ has hydrodynamic normalization. More precisely, \begin{equation}\label{hydro} f_t(z) = z - \frac{t}{z} + {\scriptstyle\mathcal{O}}(|z|^{-1})\end{equation} as $|z|\to\infty$ in the sense of a non-tangential limit. \begin{figure}[ht] \rule{0pt}{0pt} \centering \includegraphics[width=12cm]{1.jpg} \caption{The mappings $g_t$ and $f_t$.} \label{fig1} \end{figure} \begin{example}\label{ex_1} For $U(t)\equiv u\in\mathbb{R}$, we obtain \[g_t(z)=\sqrt{(z-u)^2+2t}+u \qquad \text{and} \qquad f_t(z)=\sqrt{(z-u)^2-2t}+u, \] where the square roots are chosen such that the functions map into the upper half-plane $\Ha$. We have $K_t=[u,u+\sqrt{2t}i]$, i.e. we describe the growth of a straight line starting at $u$. \hfill $\bigstar$ \end{example} \begin{remark}Assume that $K_t$ is a slit, i.e. $K_t=\gamma(0,t]$ for a simple curve $\gamma$ as in the previous example. Then $U$ is continuous and $g_t$ can be extended continuously to the tip $\gamma(t)$ of the slit $K_t$ and we have $U(t)=g_t(\gamma(t))$, see \cite[Lemma 4.2]{Lawler:2005}. \\ Not every continuous $U$ generates slits. However, if $U$ is sufficiently smooth, then $K_t$ is a slit, see \cite{LindMR:2010, Lind:2005, MarshallRohde:2005}. \hfill $\bigstar$ \end{remark} The celebrated Schramm-Loewner evolution can be defined as follows:\\ Let $\kappa\geq 0.$ Then SLE($\kappa$) is defined as the random family $(K_t)_{t\geq 0}$ obtained by \eqref{slit} with $U(t)=\sqrt{\kappa/2}B_t$, where $B_t$ is a standard Brownian motion. Fix some $T>0$. 
Then the random hull $K_T$ is a slit almost surely if and only if $\kappa\in[0,4]$.\\ The corresponding random growth process $(K_t)_{t\geq0}$ was shown to be the scaling limit of random curves from different statistical models. For SLE and the slit Loewner equation, we refer the interested reader to the book \cite{Lawler:2005}. \subsection{A more general Loewner equation for bounded hulls} Now we consider a more general version of equation \eqref{slit2}. \begin{definition} Let $(\nu_t)_{t\geq0}$ be a family of probability measures on $\mathbb{R}$ such that $t\mapsto H(t,z):=\int_\mathbb{R}\frac{\nu_t(du)}{z-u}$ is measurable for every $z\in\Ha$, and assume that there exists $M>0$ such that $\supp \nu_t \subset [-M,M]$ for all $t\geq0$. We call the function $H(t,z)$ a \emph{Herglotz vector field} and we denote the set of all such Herglotz vector fields by $\mathcal{H}_M$. \end{definition} \begin{definition} A \emph{decreasing Loewner chain} on $\Ha$ is a family $(f_t)_{t\geq0}$ of univalent mappings $f_t:\Ha\to\Ha$ such that $f_0$ is the identity, $f_t(\Ha)\subset f_s(\Ha)$ whenever $0\leq s\leq t$, and $t\mapsto f_t$ is continuous with respect to locally uniform convergence. \end{definition} Let $H\in \mathcal{H}_M$ and consider the Loewner equation \begin{equation}\label{slit33} \frac{\partial}{\partial t} f_{t}(z) = -\frac{\partial}{\partial z}f_{t}(z)\cdot H(t,z) \quad \text{for a.e. $t\geq 0$, $f_{0}(z)=z\in \Ha.$} \end{equation} \begin{theorem}\label{Houston} There exists a unique solution $(f_t)_{t\geq0}$ of equation \eqref{slit33}, which is a decreasing Loewner chain with normalization \eqref{hydro}. Furthermore, each $f_t$ maps $\Ha$ conformally onto $\Ha\setminus K_t$ for a bounded set $K_t\subset {\Ha}$. \\ There exists a bound $C(t,M)>0$ such that $\sup_{z\in K_t} |z|< C(t,M)$. 
\end{theorem} \begin{proof} The first statement follows from \cite[Theorem 4]{MR1201130}, see also \cite[Section 3]{iu}.\\ Furthermore, the condition $\supp \nu_t \subset [-M,M]$ can be used to show that there is a bound $A(t,M)>0$ such that every $f_t$ extends conformally onto $I(t,M):=\mathbb{R}\setminus[-A(t,M),A(t,M)]$ with $f_t(I(t,M))\subset \mathbb{R}$, see, e.g., \cite[Theorem 5.11]{jek17}.\\ This implies that there exists a bound $C(t,M)>0$ such that $\sup_{z\in K_t} |z|< C(t,M)$, see \cite[Inequality (3.14) on p.74]{Lawler:2005}. \end{proof} The following convergence result is standard in Loewner theory, see e.g. \cite[Lemma 4.12]{ghkk} for a slightly different setting. \begin{lemma}\label{aprox_lemma} Fix $T>0$. For every $n\in\mathbb{N}$, let $H_n(t,z)\in \mathcal{H}_M$. Assume that there exists $H(t,z)\in \mathcal{H}_M$ such that \[\int_0^t H_n(s,z)ds \to \int_0^t H(s,z)ds\] for every $t\in[0,T]$ locally uniformly in $\Ha$ as $n\to\infty$.\\ Let $f_{n,t}$ and $f_t$ be the solutions to \eqref{slit33} for the Herglotz vector fields $H_n(t,z)$ and $H(t,z)$ respectively. Then $f_{n,t}\to f_t$ for every $t\in[0,T]$ locally uniformly in $\Ha$. \end{lemma} \begin{proof}It is easy to see that the set $\{\int_\mathbb{R} \frac{\nu(du)}{z-u}\,|\, \text{$\nu$ is a prob. measure with $\supp \nu\subset [-M,M]$}\}$ is a normal family. Thus, if $G\in \mathcal{H}_M$ and $K\subset \Ha$ is a compact set, then there exists $L(K)>0$ such that $|G(t,z)-G(t,w)|\leq L(K)|z-w|$ for all $z,w\in K$ and all $t\in[0,T]$.\\ We now look at $g_{n,t}:=f_{n,t}^{-1}$, $g_t:=f_t^{-1}$. These functions satisfy \eqref{slitooo} and we have \begin{equation*} g_{n,t}(z) = z + \int_0^t H_n(s,g_{n,s}(z)) ds,\quad g_{t}(z) = z+\int_0^t H(s,g_{s}(z)) ds. \end{equation*} Now let $K\subset \Ha$ be a compact set on which all $g_{n,t}$ and $g_t$ are defined. The set $\{g_{n,t}\,|\, t\in[0,T],n\in\mathbb{N}\}\cup \{g_t\,|\, t\in[0,T]\}$ is also a normal family due to Theorem \ref{Houston}. 
Hence there exists a second compact set $K'\subset \Ha$, $K\subset K'$, such that $g_{n,t}(z), g_{t}(z)\in K'$ for all $z\in K$, $n\in\mathbb{N}$, and $t\in[0,T]$.\\ We know that $\int_0^t H_n(s,z)ds$ converges uniformly on $K'$ to $\int_0^t H(s,z)ds$ for all $t\in[0,T]$. Now fix $t\in[0,T]$. For $z\in K$ we have \begin{align*} |g_{n,t}(z)-g_t(z)| &\leq \left| \int_0^t H_n(s,g_{n,s}(z))-H_n(s,g_{s}(z))\, ds \right| + \left| \int_0^t H_n(s,g_{s}(z))-H(s,g_{s}(z))\, ds \right| \\ &\leq L(K') \int_0^t |g_{n,s}(z)-g_{s}(z)|\, ds + \eps_n, \end{align*} for a sequence $(\eps_n)_n$ converging to $0$. Gronwall's lemma implies that $g_{n,t}\to g_t$ uniformly on $K$. Hence also $f_{n,t}\to f_t$ locally uniformly in $\Ha$. \end{proof} We can now prove the following result, which will reduce our problem of constructing graphs for equation \eqref{slit33} to equation \eqref{slit2}. \begin{lemma}\label{Whitney}Let $H(t,z)=\int_\mathbb{R}\frac{\nu_t(du)}{z-u}\in \mathcal{H}_M$ and let $(f_t)_{t\geq0}$ be the corresponding solution to \eqref{slit33}. Furthermore, assume that $\supp \nu_t \subset [0,M]$ for all $t\geq0$.\\ Fix $T>0$. Then there exists a sequence $U_n:[0,T]\to [0,M]$ of continuous non-negative driving functions such that the corresponding solutions $(f_{n,t})_{t\geq0}$ to \eqref{slit2} converge locally uniformly to $f_t$ for every $t\in[0,T]$ as $n\to\infty$. \end{lemma} \begin{proof} Step 1: Assume that $H(t,z)=\frac1{z-U(t)}$ for a piecewise continuous and non-negative driving function $U$. Then we can clearly approximate $H(t,z)$ by a sequence $H_{n}(t,z)=\frac1{z-U_n(t)}$ with continuous non-negative driving functions $U_n:[0,T]\to [0,M]$ in the sense of Lemma \ref{aprox_lemma}.\\ Step 2: Next we consider the multi-slit equation, i.e.
$H(t,z) = \sum_{k=1}^N\frac{\lambda_k(t)}{z-V_k(t)}$, where $\lambda_1,...,\lambda_N:[0,T]\to[0,1]$ are continuous weight functions with $\sum_{k=1}^N \lambda_k(t) = 1$ for all $t\in[0,T]$, and all driving functions $V_1,...,V_N:[0,T]\to[0,M]$ are continuous.\\ This Herglotz vector field can be approximated by a single-slit equation with a piecewise continuous non-negative driving function. We choose $m\in\mathbb{N}$ and divide the interval $[0,T]$ into $m$ intervals of length $T/m$: $I_1:=[0,\frac{T}{m}], I_2:=(\frac{T}{m},\frac{2T}{m}],...,I_m:=(T-\frac{T}{m},T]$. We define the driving function $U_m$ on $I_1$ as follows: \begin{eqnarray*} U_m(t)&=&V_1(t) \text{\quad on \quad} \left[0,T/m\cdot \lambda_1\left(T/m\right)\right],\nonumber\\ U_m(t)&=&V_2(t) \text{\quad on \quad} \left(T/m\cdot \lambda_1(T/m),T/m\cdot (\lambda_1(T/m)+\lambda_2(T/m))\right], ..., \nonumber\\ U_m(t)&=&V_N(t) \text{\quad on \quad} \left(T/m\cdot (\lambda_1(T/m)+...+\lambda_{N-1}(T/m)), T/m\right]. \end{eqnarray*} We now repeat this construction for $I_2$,...,$I_m$. \\ Define $H_m(t,z)=\frac1{z-U_m(t)}$. Then $H_m(t,z)$ approximates $H(t,z)$ in the sense of Lemma \ref{aprox_lemma}. Together with step 1, we see that this multi-slit equation can be approximated by continuous non-negative driving functions.\\ Step 3: Next we consider $H(t,z) = \sum_{k=1}^N\frac{\lambda_k(t)}{z-V_k(t)}$, where $\lambda_1,...,\lambda_N:[0,T]\to[0,1]$ are measurable weight functions with $\sum_{k=1}^N \lambda_k(t) = 1$ for all $t\in[0,T]$, and all driving functions $V_1,...,V_N:[0,T]\to[0,M]$ are continuous.\\ For $m\in\mathbb{N}$, we let $H_m(t,z) = \sum_{k=1}^N\frac{\lambda_{k,m}(t)}{z-V_k(t)}$, where each $\lambda_{k,m}:[0,T]\to[0,1]$ is continuous, $\sum_{k=1}^N \lambda_{k,m}(t) = 1$ for all $t\in[0,T]$ and all $m\in\mathbb{N}$, and $\lambda_{k,m}\to \lambda_k$ in the $L^1$-norm as $m\to\infty$.
Then $H_m(t,z)$ approximates $H(t,z)$ in the sense of Lemma \ref{aprox_lemma} as $m\to\infty$.\\ Step 4: Finally, assume that $H(t,z)=\int_\mathbb{R}\frac{\nu_t(du)}{z-u}\in\mathcal{H}_M$ is a general Herglotz vector field. Divide $[0,M]$ into $m\in\mathbb{N}$ intervals: $I_{1,m}=[0,M/m], I_{2,m}=(M/m,2M/m],...,I_{m,m}=((m-1)M/m,M]$. For $k=1,...,m$, define $\lambda_{k,m}(t)=\nu_t(I_{k,m})$ and let $V_{k,m}(t)$ be the midpoint of $I_{k,m}$ for all $t\in[0,T]$. Each $\lambda_{k,m}$ is measurable, which follows from the Stieltjes-Perron inversion formula and the fact that $t\mapsto H(t,z)$ is measurable. The Herglotz vector field $H_m(t,z) = \sum_{k=1}^m\frac{\lambda_{k,m}(t)}{z-V_{k,m}(t)}$ approximates $H(t,z)$ in the sense of Lemma \ref{aprox_lemma} as $m\to\infty$. \end{proof} \subsection{Probabilistic interpretation of Loewner's equation} While the geometric interpretation of Loewner's equation focuses on the growing sets $(K_t)_{t\geq 0}$ (or the mappings $(f_t)_{t\geq0}$), we now switch to a probabilistic point of view, which regards a family $(\mu_t)_{t\geq 0}$ of probability measures on $\mathbb{R}$ instead.\\ Let $\mu$ be a probability measure on $\mathbb{R}$. The $F$-transform $F_\mu$ of $\mu$ is defined as the multiplicative inverse of the Cauchy transform of $\mu$, i.e. as the mapping \[F_\mu:\Ha\to \Ha, \quad F_\mu(z) := \left(\int_{\mathbb{R}}\frac1{z-u}\, \mu({\rm d}u)\right)^{-1}.\] The measure $\mu$ can be recovered from $F_\mu$ via the Stieltjes-Perron inversion formula. We have the following simple characterization.
\begin{lemma}${}$\label{prop0} \begin{itemize} \item[(a)] A holomorphic function $F:\Ha\to\mathbb{C}$ is the $F$-transform of a probability measure $\mu$ on $\mathbb{R}$ if and only if $F(\Ha)\subseteq \Ha$ and $F'(\infty)=1$ (as a nontangential derivative).\\ Furthermore, $\mu$ has mean $0$ and variance $\sigma^2$ if and only if \begin{equation*} F_\mu(z) = z - \frac{\sigma^2}{z} + {\scriptstyle\mathcal{O}}(|z|^{-1}) \end{equation*} as $|z|\to\infty$ in the sense of a non-tangential limit. \item[(b)] Let $\mu$, $\mu_n$, with $n\in\mathbb{N}$, be probability measures on $\mathbb{R}$. Then $\mu_n\to \mu$ with respect to weak convergence if and only if $F_{\mu_n}\to F_\mu$ locally uniformly on $\Ha$. \end{itemize} \end{lemma} \begin{proof} The first statement in (a) follows from the Nevanlinna representation formula and \cite[Prop. 2.1]{M92} and the second statement follows from \cite[Prop. 2.2]{M92}. Statement (b) follows from \cite[Theorem 2.5]{M92}. \end{proof} We can now reformulate Theorem \ref{Houston} in the following way. \begin{theorem}\label{Steve}Let $H\in \mathcal{H}_M$. Then there exists a unique family $(\mu_t)_{t\geq0}$ of probability measures such that $(f_t:=F_{\mu_t})_{t\geq0}$ solves \eqref{slit33}. Furthermore, each $\mu_t$ has compact support, mean $0$, and variance $t$. \\ There exists a bound $C(t,M)>0$ such that $\supp \mu_t \subset [-C(t,M),C(t,M)]$. \end{theorem} \begin{proof}The first statement follows from combining Lemma \ref{prop0} (a) and Theorem \ref{Houston}, see \cite[Theorem 3.6]{monotone}. The compactness of $\supp \mu_t$ and the existence of the uniform bound follow from Theorem \ref{Houston} and the Stieltjes-Perron inversion formula. A proof can also be found in \cite[Theorem 5.11]{jek17}. \end{proof} \begin{remark} Consider the more general Loewner equation \begin{equation}\label{EV_Loewner} \frac{\partial}{\partial t} f_{t}(z) = -\frac{\partial}{\partial z}f_{t}(z)\cdot M(z,t) \quad \text{for a.e. 
$t\geq 0$, $f_{0}(z)=z\in \Ha,$} \end{equation} where, for a.e. $t\geq 0,$ $M(\cdot, t)$ has the form \begin{equation*} M(z,t)=a_t + \int_\mathbb{R}\frac{1+xz}{x-z} \tau_t({\rm d}x), \end{equation*} with $a_t\in\mathbb{R}$ and $\tau_t$ is a finite, non-negative Borel measure on $\mathbb{R}$. Furthermore, $(z,t)\mapsto M(z,t)$ needs to satisfy certain regularity conditions.\\ Again, the solution $(f_t)$ is a family of univalent mappings $f_t:\Ha\to\Ha$ with $f_t(\Ha)\subseteq f_s(\Ha)$ for all $0\leq s\leq t$ and each $f_t$ is the $F$-transform of a probability measure on $\mathbb{R}.$\\ The following embedding result is proved in \cite[Theorem 1.16]{iu}: If $F_\mu$ is univalent, then there exists $T\geq 0$ and a function $M(z,t)$ of the above form such that the solution $(f_t)$ of \eqref{EV_Loewner} satisfies $f_T=F_\mu$. \hfill $\bigstar$ \end{remark} \begin{example}\label{arcs} The arcsine distribution $\mu_{Arc,t}$ with mean 0 and variance $t$ is given by the density \[\frac{dx}{\pi\sqrt{2t-x^2}}, \qquad x\in(-\sqrt{2t}, \sqrt{2t}).\] We have $F_{\mu_{Arc,t}}(z)=\sqrt{z^2-2t},$ which are the mappings from Example \ref{ex_1} for $u=0$.\hfill $\bigstar$ \end{example} The following simple scaling relation will be useful later on. \begin{lemma}\label{scale}Let $c,d>0$ and let $f_t=F_{\mu_t}$ be the solution to \eqref{slit2} with a piecewise continuous driving function $U(t)$. Consider the scaled measures $\nu_t(B)= \mu_{d\cdot t}(c\cdot B)$. Let $h_t=F_{\nu_t}$. 
Then $h_t$ solves \[\frac{\partial}{\partial t}h_{t}(z) = -\frac{\partial}{\partial z}h_{t}(z)\cdot \frac{d/c^2}{z-U(d\cdot t)/c}.\] \end{lemma} \begin{proof} We have \[h_t(z) = \left(\int_\mathbb{R} \frac1{z-u} \mu_{d\cdot t}(c\cdot du)\right)^{-1} = \left(\int_\mathbb{R} \frac1{z-u/c} \mu_{d\cdot t}(du)\right)^{-1} = \left(\int_\mathbb{R} \frac{c}{cz-u} \mu_{d\cdot t}(du)\right)^{-1}=f_{dt}(cz)/c.\] Hence $\frac{\partial}{\partial z}h_{t}(z) = f_{dt}'(cz)$, and \eqref{slit2} leads to \[\frac{\partial}{\partial t}h_{t}(z) = \frac{d}{c}\, \dot{f}_{dt}(cz) = -\frac{d}{c}\, f_{dt}'(cz)\cdot \frac{1}{cz-U(d\cdot t)}= -\frac{\partial}{\partial z}h_{t}(z)\cdot \frac{d/c^2}{z-U(d\cdot t)/c},\] where $\dot{f}_s$ denotes the derivative of $f_s$ with respect to $s$. \end{proof} The reason why it makes sense to consider Loewner's differential equation in this way is given by monotone probability theory, more precisely, by monotone increment processes. \section{Monotone increment processes}\label{mon_sec} Let $H$ be a Hilbert space and denote by $B(H)$ the space of all bounded linear operators on $H.$ In quantum probability theory, elements of $B(H)$ are regarded as non-commutative random variables in the following way. \\ Fix a unit vector $\xi\in H.$ Then we can define a so-called \emph{state} $\Phi$ as the $\mathbb{C}$-linear mapping \[\Phi:B(H)\to\mathbb{C}, \qquad \Phi(X)=\langle\xi, X\xi\rangle.\] Motivated by quantum mechanics, we can think of $\Phi(a)$ as the expectation of the quantum random variable $a\in B(H).$ \begin{definition}We call $(H,\xi)$ a \emph{quantum probability space}.\\ A self-adjoint element $a\in B(H)$ is called a \emph{quantum random variable}. There exists a unique probability measure $\mu$ on $\mathbb{R}$ such that the moments of $\mu$ are given by $\Phi(a^n)$, i.e. $\int_\mathbb{R} x^n \mu(dx) = \Phi(a^n)$ for all $n\in\mathbb{N}.$ We call $\mu$ the \emph{distribution} of $a.$ \end{definition} The notion of independence is of vital importance for classical probability theory.
In a certain sense, there are only five suitable notions of independence in the non-commutative setting: tensor, Boolean, free, monotone and anti-monotone independence; see \cite{MR2016316}.\\ In all five cases, independence of two elements $a,b\in B(H)$ is expressed algebraically by computation rules for mixed moments. We consider monotone independence, introduced by N. Muraki (\cite{MR1853184}, \cite{MR1824472}). \begin{definition} Let $X_1,...,X_N\in B(H)$ be self-adjoint random variables in the quantum probability space $(H,\xi)$. The tuple $(X_1,X_2,...,X_N)$ is called \emph{monotonically independent} if $$\Phi(X_{i_1}^{p_1}\dots X_{i_k}^{p_k} \dots X_{i_m}^{p_m})=\Phi(X_{i_k}^{p_k})\cdot \Phi(X_{i_1}^{p_1}\dots X_{i_{k-1}}^{p_{k-1}} X_{i_{k+1}}^{p_{k+1}} \dots X_{i_m}^{p_m})$$ for all $m\in\mathbb{N}$, $p_1,...,p_m\in \mathbb{N}_0$, whenever $i_{k-1}<i_k>i_{k+1}$ (one of the inequalities is eliminated when $k=1$ or $k=m$). \end{definition} \begin{remark}We note that sometimes, e.g. in \cite{MR1853184}, \cite{MR1824472}, a stronger condition is imposed in the definition of monotone independence. As noted in \cite[Remark 3.2 (c)]{franz07b}, both definitions coincide if $\xi$ is cyclic with respect to $X_1,...,X_N$. \hfill $\bigstar$ \end{remark} Assume that $(X,Y)$ is a pair of monotonically independent self-adjoint random variables. If $\alpha$ and $\beta$ are the distributions of $X$ and $Y$ respectively, then it can be shown that the distribution $\gamma$ of $Z=X+Y$ can be computed by $$F_\gamma = F_\alpha \circ F_\beta,$$ see, e.g., \cite[Theorem 3.10]{franz07b}. 
This relation defines the additive monotone convolution $\alpha \rhd \beta := \gamma.$ \begin{remark}[Literature] For quantum probability theory (including its important relations to random matrices), we refer the reader to introductions such as \cite{Att, DNV92, meyer, musp}.\\ The five notions lead to central limit theorems, the investigation of quantum stochastic processes with independent increments, and to quantum stochastic differential equations. The latter topics are treated in detail in the books \cite{MR2132092, MR2213451}.\\ Finally, we also refer to \cite{Oba17}, where the author shows how quantum probability theory can be applied to the spectral analysis of graphs. The different notions of independence appear in connection with certain products for graphs. \hfill $\bigstar$ \end{remark} We now explain the relation of monotone independence to the Loewner equation. Let $(f_t)_{t\geq0}$ be the solution to \eqref{slit2} and let $0\leq s\leq t$. Then $f_t=f_s\circ f_{s,t}$ for some univalent function $f_{s,t}:\Ha\to\Ha$, as the image domains $f_t(\Ha)$ are decreasing.\\ As $f_0$ is the identity, we have $f_t=f_{0,t}$. We can apply Lemma \ref{prop0} to see that we can write $f_{s,t}=F_{\mu_{s,t}}$ for a probability measure $\mu_{s,t}$ on $\mathbb{R}$. Hence, we have \begin{equation}\label{triv} \mu_{0,t} = \mu_{0,s}\rhd \mu_{s,t}, \end{equation} which suggests that there might be an underlying family $(X_t)_{t\geq0}$ of self-adjoint operators such that $X_0=0$, $X_s$ and $X_t-X_s$ are independent for $s\leq t$, and $\mu_{s,t}$ is the distribution of $X_t-X_s.$ Equation \eqref{triv} would then follow from \[X_t = X_s + (X_t-X_s).\] This leads us to the following definition. \begin{definition}\label{def_saip} Let $(H,\xi)$ be a quantum probability space and $(X_t)_{t\ge 0}$ a family of bounded self-adjoint operators on $H$ with $X_0=0$. 
We call $(X_t)$ a \emph{self-adjoint operator-valued additive monotone increment process (SAMIP)} if the following conditions are satisfied: \begin{itemize} \item[(a)] For every $s\geq 0,$ the mapping $t\mapsto \mu_{s,t}$ is continuous w.r.t.\ weak convergence, where $\mu_{s,t}$ denotes the distribution of $X_t-X_s$. \item[(b)]The tuples \[ (X_{t_1},X_{t_2}-X_{t_1},\ldots,X_{t_n}-X_{t_{n-1}}) \] are monotonically independent for all $n\in\mathbb{N}$ and all $t_1,\ldots,t_n\in\mathbb{R}$ s.t.\ $0\le t_1\le t_2\le\cdots\le t_n$. \end{itemize} \end{definition} We also write $\mu_t$ instead of $\mu_{0,t}$ for the distribution of $X_t$. \begin{example}[Monotone Brownian motion] \label{ex_m_b_m} Recall the arcsine distribution $\mu_{Arc,t}$ with mean 0 and variance $t$ from Example \ref{arcs}. The normalized distribution $\mu_{Arc,1}$ is the monotone analogue of the normal distribution from classical probability, as it is the limit distribution in the central limit theorem of monotone probability theory, see \cite[Theorem 2]{MR1853184}. \\ A SAMIP $(X_t)$ with distributions $\mu_t=\mu_{Arc,t}$ is thus called a \emph{monotone Brownian motion}. We have $F_{\mu_{Arc,t}}(z)=\sqrt{z^2-2t}.$ These mappings simply describe the growth of a straight line starting at $0$, see Example \ref{ex_1}. In \cite{MR1462227}, Muraki constructed a monotone Brownian motion on a certain Fock space. \hfill $\bigstar$ \end{example} The following result follows from \cite[Theorem 6.8]{jek17} or \cite[Theorem 1.14]{iu}. \begin{theorem}\label{khl}Let $H\in\mathcal{H}_M$ and let $(f_t)_{t\geq0}$ be the solution to \eqref{slit33}. 
Write $f_t=f_s\circ f_{s,t}$ and define $\mu_{s,t}$ by $f_{s,t}=F_{\mu_{s,t}}.$ Then there exists a SAMIP $(X_t)_{t\geq 0}$ on a quantum probability space $(H,\xi)$ such that the distribution of $X_t-X_s$ is given by $\mu_{s,t}.$ \end{theorem} \newpage \section{Spidernets and comb products} We now follow the work \cite{acc} and modify its main result (Theorem 5.1), which can be interpreted as a discrete approximation of a monotone Brownian motion, a ``monotone quantum random walk'', via adjacency matrices of certain graphs.\\ Let $V$ be a vertex set, finite or countably infinite, with a distinguished vertex $o\in V$.\\ Let $A:V\times V\to \{0,1\}$ be a symmetric matrix with $A_{xx}=0$ for all $x\in V$. \\ We can interpret $A$ as the adjacency matrix of an undirected (loop-free) graph with vertex set $V$, where $A_{xy}=1$ if and only if $x\sim y$, i.e. $x$ and $y$ are connected by an edge. \begin{definition} We define a \emph{(rooted)} \emph{graph} as such a triple $G=(V,A,o)$.\\ For $x\in V$, the degree $\deg(x)$ of $x$ is defined as $\sum_{y\in V} A_{xy}$. The degree of the graph is defined as $\deg(G):=\deg(A):=\sup_{x\in V}\deg(x)$. \end{definition} If $\deg(A)<\infty$, then $A$ can be regarded as a bounded self-adjoint operator on the Hilbert space $l^2(V)$, see \cite[Theorem 3.1]{mw89}. The distinguished vertex $o\in V$ enables us to regard $A$ as a quantum random variable on the quantum probability space $(l^2(V), \delta_o)$, where $\delta_o\in l^2(V)$ with $(\delta_o)(o)=1, (\delta_o)(x)=0$ for $x\not=o$. \begin{example}\label{zet} Let $V=\mathbb{Z}$ with $A_{jk}=1$ if and only if $|j-k|=1$ and $0$ otherwise. Choose $o=0.$ Then the distribution of $A$ within the quantum probability space $(l^2(\mathbb{Z}), \delta_0)$ is given by the arcsine distribution with mean 0 and variance 2, see \cite[Section 6.1]{acc}. \hfill $\bigstar$ \end{example} Let $G_1=(V_1, A^1, o_1), G_2=(V_2, A^2, o_2)$ be two graphs.
Then the comb product $G_1 \rhd G_2 = (V_3,A^3,o_3)$ (with respect to $o_2$) is defined as the graph with vertices $V_3=V_1\times V_2$, distinguished vertex $o_3=(o_1,o_2)$, and \begin{equation}\label{comb_p} A^3_{(x,y)(x',y')} = A^1_{xx'}\delta_{yo_2}\delta_{y'o_2} + \delta_{xx'}A^2_{yy'}. \end{equation} Here we use the symbol $\delta_{xy}=1$ if $x=y$, $\delta_{xy}=0$ if $x\not=y$. Since our graphs are loop-free, it can be verified that $(x,y)\sim (x',y')$ if and only if \begin{itemize} \item $x\sim x'$ and $y=y'=o_2$, or \item $x=x'$ and $y\sim y'$. \end{itemize} \begin{figure}[ht] \rule{0pt}{0pt} \centering \includegraphics[width=8cm]{2.jpg} \caption{The comb product of two graphs.} \end{figure} If $\deg(G_1), \deg(G_2)<\infty$, then the adjacency matrix $A^3$ of $G_1 \rhd G_2$ acts on $l^2(V_1\times V_2)\simeq l^2(V_1) \otimes l^2(V_2).$ The following lemma is a slightly more general version of \cite[Theorem 3.1]{acc}. Its proof follows from definition \eqref{comb_p} by induction. \begin{lemma}\label{dreidrei} Let $G_1=(V_1,A^1,o_1),...,G_n=(V_n,A^n,o_n)$ be graphs. Denote by $I^k$ the identity on $l^2(V_k)$ and by $P^k$ the projection from $l^2(V_k)$ onto the subspace spanned by $\delta_{o_k}$, i.e. $(P^k(\psi))(y) = \delta_{yo_k}\psi(o_k).$ Denote by $B$ the adjacency matrix of the graph $G_1 \rhd G_2 \rhd ... \rhd G_n$. Then \begin{equation}\label{sumii} B= \sum_{j=1}^n I^1 \otimes ... \otimes I^{j-1} \otimes A^j \otimes P^{j+1} \otimes ... \otimes P^n. \end{equation} \end{lemma} Assume that $\sup\{\deg(v)\,|\, v\in V_j\}<\infty$ for all $j=1,...,n$. Then the adjacency matrix $B$ can be regarded as a quantum random variable in $(l^2(V_1\times ... \times V_n), \delta_{o_1}\otimes ... \otimes \delta_{o_n}).$ By \cite[Proposition 4.1]{acc}, the random variables $(I^1 \otimes ... \otimes I^{j-1} \otimes A^j \otimes P^{j+1} \otimes ... \otimes P^n)_{j=1,...,n}$ are monotonically independent.
Thus the distribution of $B$ is given by the monotone convolution of the distributions of the summands in \eqref{sumii}. Furthermore, it is easy to see that the moments of $I^1 \otimes ... \otimes I^{j-1} \otimes A^j \otimes P^{j+1} \otimes ... \otimes P^n$ with respect to $(l^2(V_1\times ... \times V_n), \delta_{o_1}\otimes ... \otimes \delta_{o_n})$ agree with the moments of $A^j$ within $(l^2(V_j), \delta_{o_j}).$ Thus we obtain: \begin{lemma}\label{viervier}Assume that $\sup\{\deg(v)\,|\, v\in V_j\}<\infty$ for all $j=1,...,n$. Then the random variables $(I^1 \otimes ... \otimes I^{j-1} \otimes A^j \otimes P^{j+1} \otimes ... \otimes P^n)_{j=1,...,n}$ are monotonically independent in the quantum probability space $(l^2(V_1\times ... \times V_n), \delta_{o_1}\otimes ... \otimes \delta_{o_n})$. Let $\mu_j$ be the distribution of $A^j$ within $(l^2(V_j), \delta_{o_j})$. Then $B$ has the distribution \[ \mu_1 \rhd \mu_2 \rhd ... \rhd \mu_n.\] \end{lemma} We now construct special graphs whose distributions will be related to the Loewner equation.\\ We denote by $d(x,y)$ the length of the shortest walk within a graph connecting $x$ and $y$. For $\eps\in\{-1,0,+1\}$, we define for any $x\in V$, \[\omega_{\eps}(x)=|\{y\in V\,|\, y\sim x, d(o,y)=d(o,x)+\eps\}|.\] Let $a\in\mathbb{N}, b\in\mathbb{N}\setminus\{1\}$ and $c\in \mathbb{N}$ with $c \leq b-1$. A \emph{spidernet with data $(a,b,c)$} (see \cite[Def. 4.25]{hora}) is a graph $(V,A,o)$ with root $o\in V$ such that \[ \omega_{+1}(o)=a,\quad \omega_{-1}(o)=\omega_0(o)=0, \quad \text{and} \quad \omega_{+1}(x)=c,\quad \omega_{-1}(x)=1,\quad \omega_{0}(x)=b-1-c\] for all $x\in V\setminus\{o\}$ (and $A_{xy}\in\{0,1\}$ for all $x,y\in V$). \begin{figure}[ht] \rule{0pt}{0pt} \centering \includegraphics[width=8cm]{3.jpg} \includegraphics[width=8cm]{4.jpg} \caption{Two spidernets with data $(4,4,2).$} \end{figure} \begin{lemma}[See Thm.
4.29 in \cite{hora}.]\label{lemma0} The distribution of the adjacency matrix of a spidernet w.r.t. the quantum probability space $(l^2(V), \delta_o)$ is the free Meixner law $m_{a,c,b-1-c}$. \end{lemma} The free Meixner law is described in \cite[Section 4.5]{hora}. We will only need the following property. \begin{lemma}\label{lemma000} Let $n\in\mathbb{N}, u\in\mathbb{N}_0$. Then the distribution $m_{2n, n, u}$ has $F$-transform $\sqrt{(z-u)^2-4n}+u$. It has mean $0$ and variance $2n$. \end{lemma} \begin{proof} This can easily be verified by using the explicit formula \cite[Equation (B.1)]{io}. \end{proof} We now combine the following two observations: \begin{itemize} \item[(A)] On the one hand, by Lemma \ref{lemma0}, $m_{2n, n, u}$ is the distribution of a spidernet with data $(2n, n+1+u, n)$, provided such a spidernet exists.\\ From looking at the $2n$ vertices with $d(o,x)=1$, we get the necessary condition $b-1-c = u \leq 2n-1$. Conversely, one can verify that for each $n\in\mathbb{N}$ and every $u\in\{0,...,2n-1\}$ there exists a spidernet with data $(2n, n+1+u, n)$. We denote by $S_{n,u}$ a fixed spidernet with such data. \item[(B)] On the other hand, we obtain $F_{m_{2n,n,u}}(z)=\sqrt{(z-u)^2-4n}+u$ as the solution of the Loewner equation with $U(t)\equiv u$ at $t=2n$, see Example \ref{ex_1}. Obviously, we can also write $m_{2n,n,u}=\delta_{-u} \rhd \mu_{Arc,2n}\rhd \delta_{u}$, see Example \ref{ex_m_b_m}. \end{itemize} \vspace{2mm} Hence, approximating a driving function by piecewise constant driving functions is related to approximating the corresponding measures by distributions of spidernets. \begin{figure}[ht] \rule{0pt}{0pt} \centering \includegraphics[width=7.5cm]{5.pdf} \includegraphics[width=7.5cm]{6.pdf} \caption{Left: The free Meixner law $m_{4,2,0}$ is simply the arcsine distribution.
Right: The density of $m_{4,2,1}$ in $[1-2\sqrt{2},1+2\sqrt{2}]$ and its atom at $-2$.} \end{figure} % \newpage \section{Approximation via spidernets} \subsection{Slit equation with continuous non-negative driving functions}\label{mons0}${}$\\[-2mm] We now consider a driving function $U:[0,\infty)\to\mathbb{R}$ which is continuous and non-negative. \\ Let $(f_t)_{t\geq0}$ be the solution to \eqref{slit2} and denote by $(\mu_t)_{t\geq0}$ the probability measures with $F_{\mu_t}=f_t$. Furthermore, let $(X_t)_{t\geq 0}$ be a corresponding SAMIP process given by Theorem \ref{khl}.\\ Fix some $T>0$. We would like to approximate $(X_t)_{t\in[0,T]}$ by a discrete quantum process, where each random variable is the adjacency matrix of a graph. By means of the lemmas above, we can now proceed as follows.\\ Choose $n_0\in\mathbb{N}$ such that \begin{equation}\label{est0}0 \leq U(t) \leq \sqrt{\frac{T}{2}}\left(2\sqrt{n}-\frac1{\sqrt{n^3}}\right) \quad \text{on $[0,T]$} \end{equation} for all $n\geq n_0$.\\ Now assume that $n\geq n_0$. For $k=1,...,n$, we define \[u_{n,k}= \lfloor \sqrt{2T}\sqrt{n}\cdot \frac{U(k/n\cdot T)}{\frac{T}{n}}\rfloor \in \{0, ..., 2n^2-1\}.\] Here, $\lfloor x \rfloor$ denotes the largest $m\in \mathbb{N}_0$ with $m\leq x$. Note that \eqref{est0} implies that the spidernet $S_{n^2,u_{n,k}}$ exists for all $k=1,...,n$. We denote by $V_{n,k}$ the vertex set and by $o_{n,k}$ the root of $S_{n^2,u_{n,k}}$. \begin{theorem}\label{theorem10} For $k=1,...,n$, let $\mathcal{C}_{n,k}$ be the graph \[ \mathcal{C}_{n,k} := S_{n^2, u_{n,1}} \rhd S_{n^2, u_{n,2}} \rhd ... \rhd S_{n^2, u_{n,k}}.\] Then $(\mathcal{C}_{n,k})_{k=1,...,n}$ is an approximation of the quantum process $(X_t)_{t\in[0,T]}$ in the following sense: \begin{itemize} \item[(a)]Let $A_{n,k}$ be the adjacency matrix of $\mathcal{C}_{n,k}$. Denote by $\mu_{n,k}$ the distribution of $A_{n,k}$ with respect to the quantum probability space $(l^2(V_{n,1}\times ...
\times V_{n,k}),\delta_{o_{n,1}} \otimes ... \otimes \delta_{o_{n,k}})$. Then \[ \lim_{n\to\infty} \mu_{n,\lfloor tn/T \rfloor}(\sqrt{2n^3/T}\; \cdot ) = \mu_t(\cdot) \] with respect to weak convergence for all $t\in[0,T]$. The limit also holds true with respect to the convergence of all moments. \item[(b)] Consider the quantum probability space $(l^2(V_{n,1}\times ... \times V_{n,n}),\delta_{o_{n,1}} \otimes ... \otimes \delta_{o_{n,n}})$. Extend $A_{n,k}$ to $l^2(V_{n,1}\times ... \times V_{n,n})$ by $\mathcal{A}_{n,k}:=A_{n,k}\otimes P^{n,k+1}\otimes ... \otimes P^{n,n},$ where $P^{n,j}$ denotes the projection in $l^2(V_{n,j})$ onto $\delta_{o_{n,j}}$. Then the increments $(\mathcal{A}_{n,1}, \mathcal{A}_{n,2}-\mathcal{A}_{n,1}, ..., \mathcal{A}_{n,n}-\mathcal{A}_{n,n-1})$ are monotonically independent. \end{itemize} \end{theorem} \begin{remark}Note that the graph that corresponds to $\mathcal{A}_{n,k}$ is simply an embedding of $\mathcal{C}_{n,k}$ within a larger vertex set. \hfill $\bigstar$ \end{remark} \begin{proof}Statement (b) follows directly from Lemmas \ref{dreidrei} and \ref{viervier}.\\ Let $U_n:[0,2n^3]\to\mathbb{R}$ be the function which is constant $u_{n,1}$ on $[0,2n^2]$, constant $u_{n,2}$ on $(2n^2,4n^2]$, etc.\\ Let $f_{n,t}$ be the solution to \eqref{slit2} with this driving function and define the measures $\alpha_{n,t}$ by $F_{\alpha_{n,t}}=f_{n,t}$. By Example \ref{ex_1} and Lemma \ref{lemma000} we have \[\alpha_{n,2n^2} = m_{2n^2,n^2,u_{n,1}}.\] Starting the Loewner equation \eqref{slit2} for $h_t$ at $t=2n^2$ with initial value $h_{2n^2}(z)=z$ and driving function $U_n(t)$ yields the mappings $(h_t)$ that satisfy $f_{n,t} = f_{n,2n^2} \circ h_{t}.$ Obviously, $h_{4n^2}=F_{m_{2n^2,n^2,u_{n,2}}}$ and thus $\alpha_{n,4n^2} =m_{2n^2,n^2,u_{n,1}}\rhd m_{2n^2,n^2,u_{n,2}}.$ By induction we obtain \begin{equation*} \alpha_{n,2kn^2} = \rhd_{j=1}^{k} m_{2n^2,n^2,u_{n,j}}. 
\end{equation*} On the other hand, Lemmas \ref{dreidrei}, \ref{viervier}, \ref{lemma0} imply \begin{equation}\label{uu00}\mu_{n,k}=\rhd_{j=1}^{k} m_{2n^2,n^2,u_{n,j}} \end{equation} for all $k=1,...,n$. The function $V_n:[0,T]\to\mathbb{R}, V_n(t):=\sqrt{\frac{T}{2n^3}} \cdot U_n(t/T\cdot 2n^3)$ is constant on the intervals $(\frac{(k-1)T}{n}, \frac{kT}{n}],$ $k=1,...,n$. We have \begin{eqnarray*} && U(k/n\cdot T)- V_n(k/n\cdot T) = U(k/n\cdot T)- \sqrt{\frac{T}{2n^3}} \cdot U_n(k\cdot 2n^2)=\nonumber\\ &&U(k/n\cdot T)- \sqrt{\frac{T}{2n^3}} \cdot \lfloor \sqrt{2T}\sqrt{n}\cdot \frac{U(k/n\cdot T)}{\frac{T}{n}}\rfloor \leq \sqrt{\frac{T}{2n^3}}. \end{eqnarray*} Now let $t\in(\frac{(k-1)T}{n}, \frac{kT}{n})$ and denote by $\omega:[0,T]\to[0,\infty)$ a modulus of continuity of $U$ for $[0,T]$, i.e. $|U(x)-U(y)|\leq \omega(|x-y|)$ for all $x,y\in[0,T]$, and $\omega$ is increasing, vanishes at $0$, and is continuous at $0$. We have \begin{eqnarray*} &&|U(t)-V_n(t)|=|U(t)-V_n(kT/n)|\leq \nonumber \\ &&|U(t)-U(kT/n)| + |U(kT/n)-V_n(kT/n)|\leq \omega\left(\frac{T}{n}\right) + \sqrt{\frac{T}{2n^3}}. \end{eqnarray*} Finally, for $t=0$ we have $V_n(0)=V_n(T/n)$ and thus \[|U(0)-V_n(0)|\leq|U(0)-U(T/n)| + |U(T/n)-V_n(T/n)|\leq \omega\left(\frac{T}{n}\right) + \sqrt{\frac{T}{2n^3}}. \] Hence, we obtain \begin{eqnarray}\label{conv0} \sup_{t\in[0,T]}|U(t)-V_n(t)|\to 0 \quad \text{as $n\to\infty$}. \end{eqnarray} Let $(h_{n,t})_{t\in [0,T]}$ be the Loewner chain that corresponds to $V_n$. Define the measures $\nu_{n,t}$ by $h_{n,t}=F_{\nu_{n,t}}.$ Note that $V_n$ has the form $V_n = U_n(d \cdot t)/c$ with $d=c^2$. Hence, by Lemma \ref{scale} we have \[\nu_{n,t}(M) = \alpha_{n,t/T\cdot 2n^3}(\sqrt{2n^3/T}\cdot M)\] for all $t\geq0$ and all Borel subsets $M\subset \mathbb{R}$.
If $t$ has the form $t=kT/n, k=1,...,n,$ then \eqref{uu00} gives \begin{eqnarray*}\nu_{n,t}(M) &=& \mu_{n,k}(\sqrt{2n^3/T}\cdot M) = (\rhd_{j=1}^{k} m_{2n^2,n^2,u_{n,j}} )(\sqrt{2n^3/T}\cdot M) \nonumber\\ &=& (\rhd_{j=1}^{tn/T} m_{2n^2,n^2,u_{n,j}})(\sqrt{2n^3/T}\cdot M).\end{eqnarray*} For every $t\in[0,T]$ we have $h_{n,t} \to f_t$ locally uniformly because of \eqref{conv0} and Lemma \ref{aprox_lemma}. By Lemma \ref{prop0} (b) we have $\nu_{n,t} \to \mu_t$ with respect to weak convergence, or \[ \mu_{n,\lfloor tn/T \rfloor}(\sqrt{2n^3/T}\cdot ) = (\rhd_{j=1}^{\lfloor tn/T \rfloor} m_{2n^2,n^2,u_{n,j}})(\sqrt{2n^3/T}\, \cdot ) \to \mu_t(\cdot).\] It remains to show that this limit also holds with respect to convergence of all moments.\\ As there is a uniform bound for the family $(V_n)_{n}$ on $[0,T]$, Theorem \ref{Steve} implies that there exists $C(t)>0$ such that $\supp \nu_{n,t}\subset[-C(t),C(t)]$ for all $n$ and all $t\in[0,T]$. Thus, weak convergence of $\nu_{n,t}$ is equivalent to convergence of all its moments. \end{proof} \subsection{General Loewner equation}\label{mons1}${}$\\[-2mm] Consider equation \eqref{slit33} with the additional condition that $\supp \nu_t \subset [0,M]$ for all $t\geq0$. Let $(f_t)_{t\geq0}$ be the solution to the corresponding Loewner equation and denote by $(\mu_t)_{t\geq0}$ the probability measures with $F_{\mu_t}=f_t$. Furthermore, let $(X_t)_{t\geq 0}$ be a corresponding SAMIP process given by Theorem \ref{khl}. The process $(X_t)_{t\in[0,T]}$ can be approximated by graphs in the following way. \begin{theorem}\label{theorem11} Choose $n_0\in\mathbb{N}$ such that $M\leq\sqrt{\frac{T}{2}}\left(2\sqrt{n}-\frac1{\sqrt{n^3}}\right)$ for all $n\geq n_0$. There exists a family $(\mathcal{C}_{n,k})_{n\geq n_0, k=1,...,n}$ of rooted graphs such that: \begin{itemize} \item[(a)] For each $n\geq n_0$, $(\mathcal{C}_{n,k})_{k=1,...,n}$ can be considered as graphs with common vertex set $V_n$ and common root $o_n$. 
Let $A_{n,k}$ be the adjacency matrix of $\mathcal{C}_{n,k}$. Then the increments $(A_{n,1}, A_{n,2}-A_{n,1}, ..., A_{n,n}-A_{n,n-1})$ are monotonically independent with respect to the quantum probability space $(l^2(V_n),\delta_{o_n})$. \item[(b)] Denote by $\mu_{n,k}$ the distribution of $A_{n,k}$. Then \[ \lim_{n\to\infty} \mu_{n,\lfloor tn/T \rfloor}(\sqrt{2n^3/T}\; \cdot ) = \mu_t(\cdot) \] with respect to weak convergence for all $t\in[0,T]$. The limit also holds true with respect to the convergence of all moments. \end{itemize} \end{theorem} \begin{remark}\label{Joachim} By Theorem \ref{Steve} we know that $\int_\mathbb{R} x \mu_t(dx) = 0$ and $\int_\mathbb{R} x^2 \mu_t(dx) = t$ for all $t\geq0$. Theorem \ref{theorem11} implies that $\int_\mathbb{R} x^k \mu_t(dx) \geq 0$ for all $k\geq 3$, as the distributions $\mu_{n,k}$ of the adjacency matrices (whose entries are either $0$ or $1$) obviously have non-negative moments. \hfill $\bigstar$ \end{remark} \begin{proof} Due to Lemma \ref{Whitney} there exists a sequence of continuous non-negative driving functions $U_m:[0,T]\to[0,M]$ such that the corresponding solution $f_{m,t}$ to \eqref{slit2} converges locally uniformly to $f_t$ for all $t\geq0$ as $m\to\infty$. Write $f_{m,t}=F_{\mu_{m,t}}$. Then Lemma \ref{prop0} (b) implies that $\lim_{m\to\infty}\mu_{m,t}=\mu_t$.\\ Let $\mathcal{C}_{n,k;m}$ be the graphs from Theorem \ref{theorem10} for the driving function $U_m$ with distributions $\mu_{n,k;m}$. Note that $n\geq n_0$ and \eqref{est0} together with the bound $U_m(t)\leq M$ imply that $n$ is large enough to construct these graphs. Then \[ \lim_{n\to\infty} \mu_{n,\lfloor tn/T \rfloor;m}(\sqrt{2n^3/T}\; \cdot ) = \mu_{m,t}(\cdot). 
\] A diagonalization argument (note that there is a metric for probability measures on $\mathbb{R}$ which is compatible with weak convergence, e.g.\ the L\'evy-Prokhorov distance) gives us a sequence $m(n)$ converging to $\infty$ such that \[ \lim_{n\to\infty} \mu_{n,\lfloor tn/T \rfloor;m(n)}(\sqrt{2n^3/T}\; \cdot ) = \mu_{t}(\cdot). \] Hence, the graphs $\mathcal{C}_{n,k}:=\mathcal{C}_{n,k;m(n)}$ (where $\mathcal{C}_{n,k}$ is regarded as a subgraph of $\mathcal{C}_{n,n}$) satisfy all required conditions. \end{proof}
\section{Introduction}\label{intro} In this paper we consider a generalized pseudo-relativistic Hartree equation \begin{equation}\label{original} \sqrt{-\Delta+m^2}\, u+Vu=\left(W*F(u)\right)f(u)\ \ \text{in }\ \mathbb{R}^{N},\end{equation} where $N\geq 2$, $F(t)=\int_0^t f(s)\mathrm{d} s$, assuming that the nonlinearity $f$ is a $C^1$ function, non-negative in $[0,\infty)$, that satisfies \begin{enumerate} \item [($f_1$)] $\displaystyle\lim_{t\to 0}\frac{|f(t)|}{t}=0$; \item [($f_2$)] $\displaystyle\lim_{t\to \infty}\frac{f(t)}{t^{\theta-1}}=0$ for some $2<\theta<2^{\#}=\frac{2N}{N-1}$; \item [($f_3$)] $\displaystyle\frac{f(t)}{t}$ is increasing for all $t>0$. \end{enumerate} We also postulate \begin{enumerate} \item [($V_1$)] $V$ is continuous and satisfies $V(y)+V_0\geq 0$ for every $y\in\mathbb{R}^{N}$ and some constant $V_0\in (0,m)$; \item [($V_2$)] $V_\infty=\displaystyle\lim_{|y|\to\infty}V(y)>0$; \item [($V_3$)] $V(y)\leq V_\infty$ for all $y\in\mathbb{R}^N$, $V(y)\neq V_\infty$; \item [($W_h$)] $0\leq W=W_1+W_2\in L^r(\mathbb{R}^{N})+L^\infty(\mathbb{R}^{N})$ is radial, with $r>\frac{N}{N(2-\theta)+\theta}$. \end{enumerate} With these hypotheses, we aim to generalize the results obtained by Coti Zelati and Nolasco \cite{ZelatiNolasco} and by Cingolani and Secchi \cite{Cingolani}. In the latter paper, the authors studied the equation \[\sqrt{-\Delta+m^2}\, u+Vu=\left(W*u^\theta\right)|u|^{\theta-2}u,\] assuming, in addition to our hypotheses, that the potential $V$ is continuous and has a horizontal asymptote, for $N\geq 3$. For $k\in \mathbb{N}$, our work covers the case \[W(x)=\frac{|x|^k}{1+|x|^k},\] while the hypothesis $W(y)\to 0$ when $|y|\to \infty$ is explicitly assumed in \cite[Section 7]{Cingolani}. Furthermore, the homogeneity of the equation is a key ingredient in the proofs presented there. So, applying different methods, we generalize \cite{Cingolani}. A careful reading of our paper will also show that it generalizes \cite{ZelatiNolasco}.
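As a quick symbolic sanity check (a sympy sketch of ours, with $k=3$ as an illustrative choice) that the model kernel above is bounded but does not tend to $0$ at infinity — which is why it escapes the hypothesis $W(y)\to 0$ of \cite[Section 7]{Cingolani} while still fitting ($W_h$) with $W_1=0$, $W_2=W$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)  # r = |x|, the radial variable
k = 3                               # any fixed k in N; k = 3 for illustration
W = r**k / (1 + r**k)               # radial profile of W(x) = |x|^k/(1+|x|^k)

assert sp.limit(W, r, sp.oo) == 1   # W tends to 1 at infinity, not to 0
assert sp.limit(W, r, 0) == 0       # W vanishes at the origin
# Moreover 0 <= W < 1 everywhere, so W lies in L^infty(R^N).
```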
The equation \begin{equation}\label{original1}\left\{ \begin{array}{c} i \partial_t u =\sqrt{-\Delta+m^2}\, u+ G(u)\ \ \text{in }\ \mathbb{R}^{N}, \\ u(x,0)=\phi(x),\ \ x\in \mathbb{R}^{N} \end{array} \right.\end{equation} where $N\geq 2$, $G$ is a nonlinearity of Hartree type and $m>0$ denotes the mass of the bosons (in suitable units), was used to describe the dynamics of pseudo-relativistic boson stars in astrophysics. See \cite{Cho,Elgart,CSS,Lieb2} for more details. For the semiclassical analysis of non-relativistic Hartree equations we would like to quote the papers \cite{CCS,Frohlich,Moroz,Wei} and the recent work \cite{Cingolani2} as well. For the Hartree equation without external potential $V$, we cite \cite{Lieb2} for a radial ground state solution, \cite{Lenzmann} for uniqueness and nondegeneracy of ground state solutions, and \cite{ZelatiNolasco,ZelatiNolasco2} for the existence of positive and radially symmetric solutions. In \cite{Melgaard}, a Hartree problem is treated under the assumption that the external potential $V$ is radial, while in \cite{Cingolani} this condition is dropped. By considering an extension problem from $\mathbb{R}^{N}$ to $\mathbb{R}^{N+1}_+$, one obtains a well-known alternative definition of $\sqrt{-\Delta+m^2}$ (see \cite{ZelatiNolasco} or \cite{Caffarelli}), so that equation \eqref{original} can be written as \begin{equation}\label{P} \left\{\begin{aligned} -\Delta u +m^2u&=0, &&\mbox{in} \ \mathbb{R}^{N+1}_+,\\ -\displaystyle\frac{\partial u}{\partial x}(0,y)&=-V(y)u(0,y)+\left(W(y)*F(u(0,y))\right)f(u(0,y)) &&\mbox{in} \ \mathbb{R}^{N}.\end{aligned}\right. \end{equation} We summarize our results: \begin{theorem}\label{t1} Suppose that conditions \textup{($f_1$)-($f_3$)}, \textup{($V_1$)} and \textup{($W_h$)} are valid. Then, problem \eqref{P} has a non-negative ground-state solution $w\in H^1(\mathbb{R}^{N+1}_+)$.
\end{theorem} \begin{theorem}\label{classical} Assuming that the hypotheses already stated are satisfied by $f$, $V$ and $W$, any solution $v$ of problem \eqref{P} satisfies \[v\in C^{1,\alpha}(\overline{\mathbb{R}^{N+1}_+})\cap C^2(\mathbb{R}^{N+1}_+)\] and therefore is a classical solution of \eqref{P}. \end{theorem} We also prove that the ground state solution has exponential decay: \begin{theorem}\label{t3} Let $w$ be the ground state solution obtained in Theorem \ref{t1}. Then $w(x,y)>0$ in $[0,\infty)\times\mathbb{R}^{N}$ and, for any $\alpha\in (V_0,m)$ there exists $C>0$ such that \[0<w(x,y)\leq Ce^{-(m-\alpha)\sqrt{x^2+|y|^2}}e^{\alpha x}\] for any $(x,y)\in [0,\infty)\times\mathbb{R}^{N}$. In particular, \[0<w(0,y)\leq Ce^{-\delta|y|},\quad\forall\ y\in\mathbb{R}^{N},\] where $0<\delta<m-V_0$. \end{theorem} The natural setting for problem \eqref{P} is the Sobolev space \[H^1(\mathbb{R}^{N+1}_+)=\left\{u\in L^2(\mathbb{R}^{N+1}_+)\,:\, \iint_{\mathbb{R}^{N+1}_+}|\nabla u|^2\mathrm{d} x\mathrm{d} y<\infty \right\}\] endowed with the norm \[\|u\|^2=\iint_{\mathbb{R}^{N+1}_+}\left(|\nabla u|^2+u^2\right)\mathrm{d} x\mathrm{d} y.\] \noindent\textbf{Notation.} The norm in the space $H^1(\mathbb{R}^{N+1}_+)$ will be denoted by $\|\cdot\|$. For all $q\in [1,\infty]$, we denote by $|\cdot|_q$ the norm in the space $L^q(\mathbb{R}^{N})$ and by $\|\cdot\|_q$ the norm in the space $L^{q}(\mathbb{R}^{N+1}_+)$. \vspace*{.3cm} It is well-known that traces of functions in $H^1(\mathbb{R}^{N+1}_+)$ are in $H^{1/2}(\mathbb{R}^{N})$ and that every function in $H^{1/2}(\mathbb{R}^{N})$ is the trace of a function in $H^1(\mathbb{R}^{N+1}_+)$, see \cite{Tartar}. Denoting by $\gamma\colon H^1(\mathbb{R}^{N+1}_+)\to H^{1/2}(\mathbb{R}^{N})$ the linear map that assigns to each $v\in H^1(\mathbb{R}^{N+1}_+)$ its trace $\gamma(v)\in H^{1/2}(\mathbb{R}^{N})$, we have $\ker\,\gamma=H^1_0(\mathbb{R}^{N+1}_+)$.
The immersions \begin{align}\label{immersions}H^1(\mathbb{R}^{N+1}_+)&\hookrightarrow L^q(\mathbb{R}^{N+1}_+)\\ H^{1/2}(\mathbb{R}^{N})&\hookrightarrow L^q(\mathbb{R}^{N})\end{align} are continuous for any $q\in [2,2^*]$ and $q\in[2,2^{\#}]$, respectively, where \begin{equation}\label{2*}2^{*}=\frac{2(N+1)}{N-1}\qquad\textrm{and}\qquad 2^{\#}=\frac{2N}{N-1}.\end{equation} The space $H^{1/2}(\mathbb{R}^{N})$ is defined by means of Fourier transforms; therefore, we cannot replace $\mathbb{R}^{N}$ by a bounded open set $\Omega\subset\mathbb{R}^{N}$. However (see \cite{Demengel}), $H^{1/2}(\mathbb{R}^{N})=W^{1/2,2}(\mathbb{R}^{N})$ and $W^{1/2,2}(\Omega)$ is well-defined for an open set $\Omega\subset\mathbb{R}^{N}$. We recall its definition. Let $\Omega$ be a bounded open set (which, in the sequel, we suppose to have Lipschitz boundary) and $u\colon \Omega\to \mathbb{R}$ a measurable function. Denoting \[[u]^2_{\Omega}=\int_\Omega\int_\Omega\frac{|u(x)-u(y)|^2}{|x-y|^{N+1}}\mathrm{d} x\mathrm{d} y\] and \begin{align*}W^{1/2,2}(\Omega)&=\left\{u\in L^2(\Omega)\,:\,[u]^2_{\Omega}<\infty\right\}, \end{align*} then $W^{1/2,2}(\Omega)$ is a reflexive Banach space (see, e.g., \cite{Demengel} and \cite{Guide}) endowed with the norm \[\|u\|_{W^{1/2,2}(\Omega)}=|u|_{L^2(\Omega)}+[u]_{\Omega}.\]\goodbreak The proof of the next result can be found in \cite[Theorem 4.54]{Demengel}. \begin{theorem}\label{immersionW} The immersion $W^{1/2,2}(\Omega)\hookrightarrow L^q(\Omega)$ is compact for any $q\in \left[1,2^{\#}\right)$. \end{theorem} As usual, the immersion $W^{1/2,2}(\Omega)\hookrightarrow L^{2^{\#}}(\Omega)$ is continuous: see \cite[Corollary 4.53]{Demengel}. We denote the norm in the space $L^q(\Omega)$ by $|\cdot|_{L^q(\Omega)}$. \section{Preliminaries} Let us suppose that $u\in H^1(\mathbb{R}^{N+1})\cap C^\infty_0(\mathbb{R}^{N+1}_+)$ and $u(x,y)\geq 0$.
Let us proceed heuristically: since \[|u(0,y)|^t=\int_{\infty}^{0}\frac{\partial}{\partial x}|u(x,y)|^t\mathrm{d} x=\int_{\infty}^{0}t|u(x,y)|^{t-2}u(x,y)\frac{\partial}{\partial x}u(x,y)\mathrm{d} x,\] it follows from Hölder's inequality \begin{align}\label{Heu}\int_{\mathbb{R}^{N}}|\gamma(u)|^t=\int_{\mathbb{R}^{N}}|u(0,y)|^t\mathrm{d} y&\leq \int_{\mathbb{R}^{N}}\int_0^\infty t|u(x,y)|^{t-1}|\nabla u(x,y)|\mathrm{d} x\mathrm{d} y\nonumber\\ &\leq t\left(\int_{\mathbb{R}^{N+1}_+}|u|^{2(t-1)}\right)^{1/2}\left(\int_{\mathbb{R}^{N+1}_+}|\nabla u|^2\right)^{1/2}\nonumber\\ &\leq t\|u\|_{2(t-1)}^{t-1}\|\nabla u\|_{2}. \end{align} So, in order to apply the immersion $H^1(\mathbb{R}^{N+1}_+)\hookrightarrow L^q(\mathbb{R}^{N+1}_+)$ we must have $2\leq 2(t-1)\leq \frac{2(N+1)}{N-1}$, that is, \begin{equation}\label{p} 2\leq t\leq\frac{2N}{N-1}=2^{\#}. \end{equation} By density of $H^1(\mathbb{R}^{N+1})\cap C^\infty_0(\mathbb{R}^{N+1}_+)$ in $H^1(\mathbb{R}^{N+1}_+)$, the estimate \eqref{Heu} is valid for all $u\in H^1(\mathbb{R}^{N+1}_+)$. Taking into account \eqref{immersions}, Young's inequality applied to \eqref{Heu} yields \begin{align}\label{casep}|\gamma(u)|_{t}&\leq \|u\|_{2(t-1)}^{(t-1)/t}\left(t\|\nabla u\|_{2}\right)^{1/t}\\ &\leq \frac{t-1}{t}\|u\|_{2(t-1)}+\|\nabla u\|_{2}\nonumber\\ &\leq C_t\|u\|,\nonumber \end{align} where $C_t$ is a constant. We summarize: \begin{equation}\label{gammav} |\gamma(u)|\in {L^t(\mathbb{R}^{N})},\ \ \forall\ t\in [2,2^{\#}].\end{equation} The inequality \eqref{casep} will also be valuable in the special case $t=2$: \begin{align}\label{p=2} |\gamma(u)|^2_{2}&\leq \|u\|_{2}\left(2\|\nabla u\|_{2}\right)\nonumber\\ &\leq \lambda\iint_{\mathbb{R}^{N+1}_+}u^2+\frac{1}{\lambda}\iint_{\mathbb{R}^{N+1}_+}|\nabla u|^2 \end{align} where $\lambda>0$ is a parameter, the last inequality being a consequence of Young's inequality. 
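The exponent bookkeeping behind \eqref{p} can be checked symbolically (a sympy sketch of ours, with the notation of \eqref{2*}):

```python
import sympy as sp

N, t = sp.symbols('N t', positive=True)
two_star = 2 * (N + 1) / (N - 1)   # 2* = 2(N+1)/(N-1), critical exponent in R^{N+1}_+
two_sharp = 2 * N / (N - 1)        # 2# = 2N/(N-1), critical trace exponent

# The constraint 2 <= 2(t-1) <= 2* is exactly 2 <= t <= 2#.
t_low = sp.solve(sp.Eq(2 * (t - 1), 2), t)[0]
t_high = sp.solve(sp.Eq(2 * (t - 1), two_star), t)[0]
assert t_low == 2
assert sp.simplify(t_high - two_sharp) == 0
```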
\begin{remark}\label{obs1} It follows from \textup{($f_3$)} that $f$ satisfies the Ambrosetti-Rabinowitz inequality $2F(t)\leq f(t)t$, for all $t>0$: indeed, since $f(s)/s$ is increasing, $F(t)=\int_0^t \frac{f(s)}{s}\,s\,\mathrm{d} s\leq \frac{f(t)}{t}\int_0^t s\,\mathrm{d} s=\frac{f(t)t}{2}$. Furthermore, it follows from \textup{($f_1$)} and \textup{($f_2$)} that, for any fixed $\xi>0$, there exists a constant $C_\xi$ such that \begin{equation}\label{boundf}|f(t)|\leq\xi t+C_\xi t^{\theta-1},\quad\forall\ t\geq 0\end{equation} and analogously \begin{equation}\label{boundF}|F(t)|\leq\xi t^2+C_\xi t^{\theta}\leq C(t^2+t^\theta),\quad\forall\ t\geq 0.\end{equation} Observe that $\gamma(u)\in L^\theta(\mathbb{R}^{N})$ and $\gamma(u)\in L^2(\mathbb{R}^{N})$ imply $F(\gamma(u))\in L^1(\mathbb{R}^{N})$. \end{remark} \begin{proposition}[Hausdorff-Young]\label{HYoung} Assume that, for $1\leq p, q, s \leq \infty$, we have $f\in L^p (\mathbb{R}^{N})$, $g\in L^q (\mathbb{R}^{N})$ and \[\frac{1}{p}+\frac{1}{q}= 1 +\frac{1}{s}.\] Then \[|f*g|_{s} \leq |f|_{p}|g|_{q}.\] \end{proposition} We now enhance the result given by \eqref{gammav}. Observe that $\frac{N}{N(2-\theta)+\theta}\geq 1$ and $\frac{N}{N(2-\theta)+\theta}=1$ if, and only if $N=\theta=2$. The results in the sequel will be useful when addressing the regularity of the solution of problem \eqref{P}. \begin{lemma}\label{hipW} Concerning hypothesis $(W_h)$ we have: \begin{enumerate} \item [($i$)] if $r\in \left(\displaystyle\frac{N}{N(2-\theta)+\theta},\frac{2N}{N(2-\theta)+\theta}\right]$, there exists $\displaystyle p\in \left[1,\frac{2N}{(N-1)\theta}\right)$ such that \[|\gamma(u)|^\theta\in L^p(\mathbb{R}^{N})\] and \[\frac{1}{p}+\frac{1}{r}=1+\frac{N(2-\theta)+\theta}{2N}.\] Furthermore, $F(\gamma(u))\in L^p(\mathbb{R}^{N})$ and \[|W_1*F(\gamma(u))|=:g\in {L^{2N/[N(2-\theta)+\theta]}(\mathbb{R}^{N})}.\] \item [($ii$)] if $r'$ denotes the conjugate exponent of $r$ and $r>\displaystyle\frac{2N}{N(2-\theta)+\theta}$, then $F(\gamma(u))\in L^{r'}(\mathbb{R}^{N})$ and $W_1*F(\gamma(u))\in L^\infty(\mathbb{R}^{N})$.
\end{enumerate} \end{lemma} \noindent\begin{proof}($i$) We first determine the values of $r$ for which the equality \[\frac{1}{p}+\frac{1}{r}=1+\frac{N(2-\theta)+\theta}{2N}\] can hold. Observe that $r\in \left(\frac{N}{N(2-\theta)+\theta},\frac{2N}{N(2-\theta)+\theta}\right]$ if, and only if, $p\in \left[1,\frac{2N}{(N-1)\theta}\right]$. As a consequence of \eqref{gammav}, $|\gamma(u)|^\theta\in L^p(\mathbb{R}^{N})$ and thus $|\gamma(u)|^2\in L^p(\mathbb{R}^{N})$, and \eqref{boundF} yields $F(\gamma(u))\in L^p(\mathbb{R}^{N})$. So, $|W_1*F(\gamma(u))|=g\in {L^{2N/[N(2-\theta)+\theta]}(\mathbb{R}^{N})}$ follows from the Hausdorff-Young inequality. ($ii$) Since $W_1\in L^r(\mathbb{R}^{N})$ for $r=\frac{2N}{N(2-\theta)+\theta}$ and $r'=\frac{r}{r-1}=\frac{2N}{(N-1)\theta}$, applying ($i$) we conclude that $F(\gamma(u))\in L^{r'}(\mathbb{R}^{N})$; then $W_1*F(\gamma(u))\in L^\infty(\mathbb{R}^{N})$ is a consequence of Proposition \ref{HYoung}. $\hfill\Box$\end{proof} \begin{corollary}\label{cor}We have $|W*F(\gamma(u))|\leq C+g$ with $g\in L^{{2N/[N(2-\theta)+\theta]}}(\mathbb{R}^{N})$. \end{corollary} \noindent\begin{proof}An immediate consequence of Lemma \ref{hipW}, since $W_2\in L^\infty(\mathbb{R}^{N})$. $\hfill\Box$\end{proof}\vspace*{.4cm} Following arguments in \cite{ZelatiNolasco}, we have: \begin{lemma}\label{c1} For all $\theta\in \left(2,\frac{2N}{N-1}\right)$, we have $|\gamma(u)|^{\theta-2}\leq 1+g_2$, where $g_2\in L^N(\mathbb{R}^{N})$. \end{lemma} \noindent\begin{proof}We have \[|\gamma(u)|^{\theta-2}=|\gamma(u)|^{\theta-2}\chi_{\{|\gamma(u)|\leq 1\}}+|\gamma(u)|^{\theta-2}\chi_{\{|\gamma(u)|>1\}}\leq 1+g_2,\] with $g_2=|\gamma(u)|^{\theta-2}\chi_{\{|\gamma(u)|>1\}}$. 
If $(\theta-2)N\leq 2$, then \[\int_{\mathbb{R}^{N}}|\gamma(u)|^{(\theta-2)N}\chi_{\{|\gamma(u)|>1\}}\leq \int_{\mathbb{R}^{N}}|\gamma(u)|^2\chi_{\{|\gamma(u)|>1\}}\leq\int_{\mathbb{R}^{N}}|\gamma(u)|^2<\infty.\] When $2<(\theta-2)N$, then $(\theta-2)N\in \left(2,\frac{2N}{N-1}\right)$ and $|\gamma(u)|^{\theta-2}\in L^N(\mathbb{R}^{N})$ as a consequence of \eqref{gammav}. $\hfill\Box$\end{proof} \begin{lemma}\label{c2} For all $\theta\in \left(2,\frac{2N}{N-1}\right)$ we have $h=g|\gamma(u)|^{\theta-2}\in L^N(\mathbb{R}^{N})$, where $g$ is the function of Lemma \ref{hipW}. \end{lemma} \noindent\begin{proof} Application of the Hölder inequality yields \[\int_{\mathbb{R}^{N}}\left(g|\gamma(u)|^{\theta-2}\right)^N\leq \left(\int_{\mathbb{R}^{N}}g^{N\alpha}\right)^{\frac{1}{\alpha}}\left(\int_{\mathbb{R}^{N}}\left(|\gamma(u)|^{(\theta-2)N}\right)^{\alpha'}\right)^{\frac{1}{\alpha'}},\] if we define $\alpha$ so that $\alpha N=2N/[N(2-\theta)+\theta]$. Thus, $\alpha'=2/[(N-1)(\theta-2)]$ and we have $\alpha'N(\theta-2)=2N/[N-1]$. Since both integrals on the right-hand side of the last inequality are finite, we are done. $\hfill\Box$\end{proof}\hspace*{.2cm} We now show that the ``energy'' functional is well defined. We denote by $L^q_w(\mathbb{R}^{N})$ the weak $L^q(\mathbb{R}^{N})$ space and by $|\cdot|_{q_w}$ its usual norm (see \cite{Lieb}). 
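For the reader's convenience, the exponents in the proof of Lemma \ref{c2} can be checked directly: with $\alpha=2/[N(2-\theta)+\theta]$ and $\alpha'=2/[(N-1)(\theta-2)]$,

```latex
% Verification that \alpha and \alpha' are conjugate exponents:
\[
\frac{1}{\alpha}+\frac{1}{\alpha'}
=\frac{N(2-\theta)+\theta}{2}+\frac{(N-1)(\theta-2)}{2}
=\frac{2N-\theta(N-1)+\theta(N-1)-2(N-1)}{2}
=\frac{2}{2}=1,
\]
% and the trace exponent appearing on the right-hand side:
\[
\alpha' N(\theta-2)=\frac{2N(\theta-2)}{(N-1)(\theta-2)}=\frac{2N}{N-1}=2^{\#},
\]
```

which lies in the range covered by \eqref{gammav}.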
The next result is a generalized version of the Hardy-Littlewood-Sobolev inequality: \begin{proposition}[Lieb \cite{Lieb}]\label{pLieb} Assume that $p,q,r\in(1,\infty)$ and \[\frac{1}{p}+\frac{1}{q}+\frac{1}{r}=2.\] Then, for some constant $N_{p,q,r}>0$ and for any $f\in L^p(\mathbb{R}^{N})$, $g\in L^r(\mathbb{R}^{N})$ and $h\in L^q_w(\mathbb{R}^{N})$, we have the inequality \[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}f(x)h(x-y)g(y)\mathrm{d} x\mathrm{d} y\leq N_{p,q,r}|f|_{p}|g|_{r} |h|_{q_w}.\] \end{proposition} \begin{lemma}\label{estconv} There exists a positive constant $C$ such that \[\left|\frac{1}{2}\int_{\mathbb{R}^{N}}\big(W*F(\gamma(u))\big)F(\gamma(u))\right|\leq C\left(\|u\|^{2}+\|u\|^{\theta}\right)^2.\] \end{lemma} \noindent\begin{proof}Let us denote \[\Psi(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}\big[W*F(\gamma(u))\big]F(\gamma(u)).\] Since $W=W_1+W_2$, \begin{align}\label{I}\Psi(u)&=\frac{1}{2}\int_{\mathbb{R}^{N}}\big[W_1*F(\gamma(u))\big]F(\gamma(u))+\frac{1}{2}\int_{\mathbb{R}^{N}}\big[W_2*F(\gamma(u))\big]F(\gamma(u))\nonumber\\ &=:J_1(u)+J_2(u).\end{align} Let us suppose that $|\gamma(u)|^\theta\in L^t(\mathbb{R}^{N})$ for some $t\geq 1$. Then $|\gamma(u)|^2\in L^t(\mathbb{R}^{N})$ and $F(\gamma(u))\in L^t(\mathbb{R}^{N})$ (as a consequence of \eqref{boundF}). Application of Proposition \ref{pLieb} yields \[|J_1(u)|=\left|\frac{1}{2}\int_{\mathbb{R}^{N}}W_1*F(\gamma(u))\,F(\gamma(u))\right|\leq C\,|W_1|_{r}|F(\gamma(u))|_{t}|F(\gamma(u))|_t.\] Since $\frac{1}{r}+\frac{2}{t}=2$ implies $t=\frac{2r}{2r-1}$, we have \begin{align}\label{J1} |J_1(u)| &\leq C|F(\gamma(u))|_{\frac{2r}{2r-1}}|F(\gamma(u))|_{\frac{2r}{2r-1}} \leq C' \left( \left\| u \right\|^2 +\left\| u \right\|^\theta\right)^2 <\infty. \end{align} (Observe that, in order to apply the embedding $H^1(\mathbb{R}^{N+1}_+)\hookrightarrow L^q(\mathbb{R}^{N+1}_+)$, we must have $t\theta<2N/(N-1)$, that is, $r>N/[N(2-\theta)+\theta]$.) 
Since $W_2\in L^\infty(\mathbb{R}^{N})$, we can take $t=1$, and therefore \begin{align}\label{J2} |J_2(u)|&=\left|\frac{1}{2}\int_{\mathbb{R}^{N}}\big[W_2*F(\gamma(u))\big]F(\gamma(u))\right|\leq C\left(|\gamma(u)|^2_{2}+|\gamma(u)|^\theta_{\theta}\right)^2\nonumber\\ &\leq C''\left(\|u\|^2+\|u\|^\theta\right)^2. \end{align} The claim results from \eqref{J1} and \eqref{J2}. $\hfill\Box$\end{proof}\vspace*{.3cm} \begin{lemma}\label{I1+I2}The functional \begin{align*} I(u) =&\frac{1}{2}\iint_{\mathbb{R}^{N+1}_+}\left(|\nabla u|^2+m^2u^2\right)+\frac{1}{2}\int_{\mathbb{R}^{N}}V(y)[\gamma(u(y))]^2\nonumber\\ &\quad-\frac{1}{2}\int_{\mathbb{R}^{N}}\big(W*F(\gamma(u))\big)F(\gamma(u))\nonumber\\ =&:I_1(u)+I_2(u)-\Psi(u) \end{align*} is well-defined. \end{lemma} \noindent\begin{proof}Of course \begin{align*} I_1(u)=\frac{1}{2}\iint_{\mathbb{R}^{N+1}_+}\left(|\nabla u|^2+m^2u^2\right)\leq \frac{k}{2}\|u\|^2<\infty, \end{align*} if we take $k=\max\{1,m^2\}$. Since hypothesis ($V_1$) implies $|V(y)|<C$, we have \begin{align*} |I_2(u)|=\left|\frac{1}{2}\int_{\mathbb{R}^{N}}V(y)[\gamma(u(y))]^2\right|\leq \frac{C}{2}\int_{\mathbb{R}^{N}}|\gamma(u)|^2=C'|\gamma(u)|^2_2\leq C'' \|u\|^2. \end{align*} Taking into account Lemma \ref{estconv}, the proof is complete. $\hfill\Box$\end{proof}\vspace*{.2cm} Since the derivative of the energy functional is given by \begin{align}\label{derivative} I'(u)\cdot \varphi =&\iint_{\mathbb{R}^{N+1}_+}\left[\nabla u\cdot\nabla \varphi+m^2u\varphi\right]+\int_{\mathbb{R}^{N}}V(y)\gamma(u)\gamma(\varphi)\nonumber\\ &\quad-\int_{\mathbb{R}^{N}}\left(W*F(\gamma(u))\right)f(\gamma(u))\gamma(\varphi),\ \forall\ \varphi\in H^1(\mathbb{R}^{N+1}_+), \end{align} we see that critical points of $I$ are weak solutions of \eqref{P}. Because we are looking for a positive solution, we suppose that $f(t)=0$ for $t<0$. 
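Heuristically, \eqref{derivative} encodes both the equation and the boundary condition in \eqref{P}: for $u$ smooth and sufficiently decaying, the divergence theorem on the half-space (whose outward normal on $\{0\}\times\mathbb{R}^{N}$ is $-e_1$) gives

```latex
\[
\iint_{\mathbb{R}^{N+1}_+}\nabla u\cdot\nabla\varphi
=-\iint_{\mathbb{R}^{N+1}_+}(\Delta u)\,\varphi
-\int_{\mathbb{R}^{N}}\frac{\partial u}{\partial x}(0,y)\,\gamma(\varphi)\,\mathrm{d} y,
\]
% so I'(u)\cdot\varphi = 0 for all \varphi forces -\Delta u + m^2 u = 0 in the
% half-space, together with the boundary condition
\[
-\frac{\partial u}{\partial x}(0,y)
=-V(y)\gamma(u)+\big(W*F(\gamma(u))\big)f(\gamma(u))
\quad\text{on }\mathbb{R}^{N},
\]
```

which is precisely the boundary condition in \eqref{P}.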
\begin{proposition}The quadratic form \[u\mapsto \frac{1}{2}\iint_{\mathbb{R}^{N+1}_+}\left(|\nabla u|^2+m^2u^2\right)+\frac{1}{2}\int_{\mathbb{R}^N}V(y)[\gamma(u(y))]^2\] induces a norm on the space $H^1(\mathbb{R}^{N+1}_+)$, which is equivalent to the norm $\|\cdot\|$. \end{proposition} \begin{proof}We keep the notation already introduced and note that $I_2(u)\geq -(1/2)V_0\int_{\mathbb{R}^N}|\gamma(u)|^2$. Furthermore, as a consequence of \eqref{p=2}, we have \begin{align}\label{g2a}\int_{\mathbb{R}^{N}}|\gamma(u)|^2\leq m\iint_{\mathbb{R}^{N+1}_+}|u|^2+\frac{1}{m}\iint_{\mathbb{R}^{N+1}_+}|\nabla u|^2. \end{align} Therefore, \begin{align*} I_1(u)+I_2(u)&\geq \frac{1}{2}\iint_{\mathbb{R}^{N+1}_+}\left(|\nabla u|^2+m^2u^2\right)-\frac{V_0m}{2}\iint_{\mathbb{R}^{N+1}_+}|u|^2-\frac{V_0}{2m}\iint_{\mathbb{R}^{N+1}_+}|\nabla u|^2\\ &=\frac{1}{2}\left(1-\frac{V_0}{m}\right)\iint_{\mathbb{R}^{N+1}_+}|\nabla u|^2+\frac{1}{2}m(m-V_0)\iint_{\mathbb{R}^{N+1}_+}|u|^2. \end{align*} Defining $K=\min\left\{\frac{1}{2}\left(1-\frac{V_0}{m}\right),\frac{1}{2}m(m-V_0)\right\}>0$, we conclude that \[I_1(u)+I_2(u)\geq K\|u\|^2.\] By applying \eqref{g2a} it easily follows that \begin{align}\label{supbound} I_1(u)+I_2(u)&\leq \frac{1}{2}\left(1+\frac{V_\infty}{m}\right)\iint_{\mathbb{R}^{N+1}_+}|\nabla u|^2+\frac{1}{2}\left(m^2+V_\infty m\right)\iint_{\mathbb{R}^{N+1}_+}|u|^2\nonumber\\ &\leq C\|u\|^2 \end{align} for a constant $C>0$. We are done. $\hfill\Box$\end{proof} \section{Mountain pass geometry and Nehari manifold}\label{mpg} \begin{lemma}\label{gpm} $I$ satisfies the mountain pass theorem geometry. More precisely, \begin{enumerate} \item [$(i)$] There exist $\rho,\delta>0$ such that $I(u)\geq \delta>0$ for all $u\in S$, where \[S=\left\{u\in H^1(\mathbb{R}^{N+1}_+)\,:\, \|u\|=\rho\right\}.\] \item [$(ii)$] For each $u_0\in H^1(\mathbb{R}^{N+1}_+)$ such that $(u_0)_+\neq 0$, there exists $\tau>0$ satisfying $\|\tau u_0\|>\rho$ and $I(\tau u_0) <0$. 
\end{enumerate} \end{lemma} \noindent \begin{proof} Since we have already shown that \begin{align}\label{I+}I_1(u)+I_2(u)\geq K\|u\|^2\end{align} and so $I(u)\geq K\|u\|^2-\Psi(u)\geq K\|u\|^2-C\left(\|u\|^2+\|u\|^\theta\right)^2$, we obtain ($i$) by choosing $\rho>0$ small enough. In order to prove ($ii$), fix $u_0\in H^1(\mathbb{R}^{N+1}_+)\setminus\{0\}$ such that $u_0\geq 0$. For all $t>0$ consider the function $g_{u_0}\colon(0,\infty)\to\mathbb{R}$ defined by \[g_{u_0}(t)=\Psi\left(\frac{tu_0}{\|u_0\|}\right)\] where, as before, \[\Psi(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}\big(W*F(\gamma(u))\big)F(\gamma(u)).\] A straightforward calculation shows that \begin{align*} g'_{u_0}(t)&=\frac{1}{t}\int_{\mathbb{R}^{N}} \left(W*F\left(\gamma\left(\frac{tu_0}{\|u_0\|}\right)\right)\right)f\left(\gamma\left(\frac{tu_0}{\|u_0\|}\right)\right)\gamma\left(\frac{tu_0}{\|u_0\|}\right)\geq\frac{4}{t}g_{u_0}(t),\end{align*} the last inequality being a consequence of the Ambrosetti-Rabinowitz inequality $2F(s)\leq f(s)s$. Observe that $g'_{u_0}(t)>0$ for $t>0$. Thus, integrating, we obtain \begin{align*} \ln g_{u_0}(t)\Big|_1^{\tau\|u_0\|}\geq 4\ln t\Big|_1^{\tau\|u_0\|}\quad\Rightarrow\quad \frac{g_{u_0}(\tau\|u_0\|)}{g_{u_0}(1)}\geq \left(\tau\|u_0\|\right)^{4}, \end{align*} proving that \begin{align}\label{H} \Psi(\tau u_0)=g_{u_0}(\tau\|u_0\|)\geq D\left(\tau\|u_0\|\right)^{4}\end{align} for a constant $D>0$. It follows from \eqref{supbound} that \begin{align*}I(\tau u_0)&\leq C\tau^2\|u_0\|^2-D\tau^4\|u_0\|^4. \end{align*} Thus, it suffices to take $\tau$ large enough. $\hfill\Box$\end{proof}\vspace*{.4cm} The existence of a Palais-Smale sequence $(u_n)\subset H^1(\mathbb{R}^{N+1}_+)$ such that \[I'(u_n)\to 0\qquad\textrm{and}\qquad I(u_n)\to c,\] where \[c=\inf_{\alpha\in \Gamma}\max_{t\in [0,1]}I(\alpha(t))\] and \[\Gamma=\left\{\alpha\in C\left([0,1],H^1(\mathbb{R}^{N+1}_+)\right)\,:\,\alpha(0)=0,\,I(\alpha(1))<0\right\},\] results from the mountain pass theorem without the PS condition. 
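The choice of $\rho$ in item ($i$) of Lemma \ref{gpm} can be made explicit; with the constants $K$ and $C$ of the proof, a routine computation (using only $\theta>2$) gives:

```latex
% For \|u\| = \rho \le 1 we have \rho^\theta \le \rho^2, hence
% (\rho^2 + \rho^\theta)^2 \le 4\rho^4 and
\[
I(u)\;\geq\;K\rho^{2}-C\bigl(\rho^{2}+\rho^{\theta}\bigr)^{2}
\;\geq\;K\rho^{2}-4C\rho^{4}\;\geq\;\frac{K}{2}\,\rho^{2}=:\delta>0
\quad\text{whenever}\quad
\rho\leq\min\Bigl\{1,\sqrt{K/(8C)}\Bigr\}.
\]
```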
\vspace*{.2cm} We now consider the Nehari manifold \begin{align*}\mathcal{N}&=\left\{u\in H^1(\mathbb{R}^{N+1}_+)\setminus\{0\}\,:\,I'(u)\cdot u=0\right\}. \end{align*} It is not difficult to see that $\mathcal{N}$ is a manifold in $H^1(\mathbb{R}^{N+1}_+)\setminus\{0\}$. The next result, which follows immediately from our estimates, proves that $\mathcal{N}$ is a closed manifold in $H^1(\mathbb{R}^{N+1}_+)$: \begin{lemma}\label{lN} There exists $\beta>0$ such that $\|u\|\geq \beta$ for all $u\in \mathcal{N}$. \end{lemma} An alternative characterization of $c$ is obtained by a standard method: for $u_+\neq 0$, consider the function $\Phi(t)=I(tu)=I_1(tu)+I_2(tu)-\Psi(tu)$, preserving the notation of Lemma \ref{gpm}. The proof of Lemma \ref{gpm} assures that $\Phi(t)>0$ for $t$ small enough, $\Phi(t)<0$ for $t$ large enough and $g'_u(t)>0$ if $t>0$. Therefore, $\max_{t\geq 0}\Phi(t)$ is achieved at a unique $t_u=t(u)>0$, with $\Phi'(t)>0$ for $t<t_u$ and $\Phi'(t)<0$ for $t>t_u$. Furthermore, $\Phi'(t_u)=0$ implies that $t_uu\in \mathcal{N}$. The map $u\mapsto t_u$ ($u\neq 0$) is continuous and $c=c^*$, where \[c^*=\inf_{u\in H^1(\mathbb{R}^{N+1}_+)\setminus\{0\}}\max_{t\geq 0} I(tu).\] For details, see \cite[Section 3]{Rabinowitz} or \cite{Felmer}. Standard arguments prove the next assertion: \begin{lemma}\label{bounded} Let $(u_n)\subset H^1(\mathbb{R}^{N+1}_+)$ be a sequence such that $I(u_n)\to c$ and $I'(u_n)\to 0$, where \[c=\inf_{u\in H^1(\mathbb{R}^{N+1}_+)\setminus\{0\}}\max_{t\geq 0}I(tu).\] Then $(u_n)$ is bounded and (for a subsequence) $u_n\rightharpoonup u$ in $H^1(\mathbb{R}^{N+1}_+)$. \end{lemma} \begin{lemma}\label{lK} Let $U\subseteq \mathbb{R}^{N}$ be any open set. For $1<p<\infty$, let $(f_n)$ be a bounded sequence in $L^p(U)$ such that $f_n(x)\to f(x)$ a.e. Then $f_n\rightharpoonup f$ in $L^p(U)$. 
\end{lemma} The proof of Lemma \ref{lK} can be found, e.g., in \cite[Lemme 4.8, Chapitre 1]{Kavian}.\vspace*{.2cm} \section{The limit problem} In this section we consider a variant of problem \eqref{P}, obtained by replacing the potential $V(y)$ by the constant $V_\infty$. \begin{theorem} \label{teo ground state} Assuming $(f_1)$, $(f_2)$, $(f_3)$ and $(W_h)$, problem \begin{equation}\left\{\begin{array}{l} -\Delta u +m^2u=0 \ \ \text{in} \ \ \mathbb{R}^{N+1}_+ \\ \\ \displaystyle-\frac{\partial u}{\partial x}= -V_\infty u+ \left[W*F(u)\right]f(u), \ (x,y)\in \left\{ 0\right\} \times \mathbb{R}^N \simeq\mathbb{R}^N, \end{array}\right.\tag{$P_\infty$}\label{Pinfty} \end{equation} has a non-negative ground state solution. \end{theorem} \begin{proof}Let $(u_n)$ be the Palais-Smale sequence for $I_\infty$ given by Lemma \ref{gpm}. Then, there exist $R,\delta>0$ and a sequence $(z_n)\subset\mathbb{R}^{N}$ such that \begin{equation}\label{Lions}\liminf_{n\to\infty}\int_{B_R(z_n)}|\gamma(u_n)|^2\geq \delta.\end{equation} Indeed, if this were false, a result of Lions (see \cite{CC}) would guarantee that $\gamma(u_n)\to 0$ in $L^q(\mathbb{R}^{N})$ for $2<q<2^{\#}$, thus implying that \[\int_{\mathbb{R}^N} (W*F(\gamma(u_n)))f(\gamma(u_n)) \gamma(u_n) \to 0,\] contradicting Lemma \ref{lN}. We define \[v_n(x,y)=u_n(x,y+z_n).\] From \eqref{Lions} we derive that, for $n$ large, \[\int_{B_R(0)}|\gamma(v_n)|^2\geq \frac{\delta}{2}.\] We observe that the energy functional \begin{align*}I_\infty(u) &= \frac{1}{2} \iint_{\mathbb{R}^{N+1}_+} \left(|\nabla u|^2 + m^2u^2\right) +\frac{1}{2}\int_{\mathbb{R}^N} V_\infty|\gamma(u)|^2\\ &\qquad-\frac{1}{2}\int_{\mathbb{R}^N}\big[W*F(\gamma(u))\big]F(\gamma(u)) \end{align*} and its derivative are invariant under translations in the $y$ variable. Therefore, it also holds that \[I_\infty'(v_n) \to 0\quad\textrm{ and }\quad I_\infty(v_n) \to c_\infty,\] where \[c_\infty=\inf_{u\in H^1(\mathbb{R}^{N+1}_+)\setminus\{0\}}\max_{t\geq 0} I_\infty(tu).\] (Observe that all reasoning in Section \ref{mpg} is valid for $I_\infty$ and its Palais-Smale sequence.) 
Since $(v_n)$ is bounded (see Lemma \ref{bounded}) it follows that $v_n\rightharpoonup v$. A standard argument shows that we can suppose $v_n(x)\to v(x)$ a.e. in $\mathbb{R}^{N+1}_+$, $v_n\to v$ in $L^s_{loc}(\mathbb{R}^{N+1}_+)$ for all $s\in [2,2^*)$, $\gamma(v_n(x))\to \gamma(v(x))$ a.e. in $\mathbb{R}^{N}$ and $\gamma(v_n)\to \gamma(v)$ in $L^q_{loc}(\mathbb{R}^{N})$, for all $q\in [2,2^{\#})$. We will show that $v\in \mathcal{N}_\infty=\{u\in H^1(\mathbb{R}^{N+1}_+)\setminus\{0\}\,:\,I'_\infty(u)\cdot u=0\}$. For all $\varphi \in C^\infty_0(\mathbb{R}^{N+1}_+)$, let us consider $\psi_n=(v_n - v)\varphi\in H^1(\mathbb{R}^{N+1}_+)$. We have \begin{align}\label{testfunction} \langle I'_\infty(v_n) , \psi_n\rangle&= \iint_{\mathbb{R}^{N+1}_+} \nabla v_n \cdot \nabla\psi_n+ \iint_{\mathbb{R}^{N+1}_+}m^2v_n\psi_n+ \int_{\mathbb{R}^N} V_\infty\gamma(v_n) \gamma(\psi_n) \nonumber \\ {}&\qquad-\int_{\mathbb{R}^N} \big( W * F(\gamma(v_n)) \big) f(\gamma(v_n)) \gamma(\psi_n)\nonumber\\ &=J_1+J_2+J_3-J_4. \end{align} We start considering \begin{align*} J_4=\int_{\mathbb{R}^N} \big( W * F(\gamma(v_n)) \big) f(\gamma(v_n)) \gamma(\psi_n). \end{align*} Because $\displaystyle\lim_{n\to\infty}\langle I'_\infty(v_n) , (v_n - v)\varphi \rangle=0$, it follows from \cite[Lemma 3.5]{Ackermann} that $J_4\to 0$ when $n\to\infty$, and thus it is easily verified that $J_2+J_3-J_4\to 0$ when $n\to\infty$. We now consider $J_1$: \begin{align*} J_1&=\iint_{\mathbb{R}^{N+1}_+} \nabla v_n \cdot \nabla ( (v_n-v)\varphi)\\ &=\iint_{\mathbb{R}^{N+1}_+} \nabla v_n \cdot \varphi\nabla (v_n - v) + \iint_{\mathbb{R}^{N+1}_+} \nabla v_n\cdot(v_n-v) \nabla \varphi\\ &=\iint_{\mathbb{R}^{N+1}_+}\left[|\nabla(v_n-v)|^2\varphi+\varphi \nabla v\cdot \nabla(v_n-v)+(v_n-v)\nabla v_n\cdot \nabla \varphi\right]. 
\end{align*} We infer that \begin{align*}\lim_{n\to\infty}\iint_{\mathbb{R}^{N+1}_+} |\nabla(v_n-v)|^2 \varphi &=-\lim_{n\to\infty} \iint_{\mathbb{R}^{N+1}_+} \varphi\nabla v \cdot\nabla(v_n-v) \\ &\qquad - \lim_{n\to\infty}\iint_{\mathbb{R}^{N+1}_+} (v_n-v)\nabla v_n \cdot\nabla \varphi.\end{align*} Since \[\lim_{n\to\infty} \iint_{\mathbb{R}^{N+1}_+} \varphi\nabla v\cdot \nabla( v_n - v) =0\ \text{ and }\ \lim_{n\to\infty}\iint_{\mathbb{R}^{N+1}_+} (v_n - v)\nabla v_n \cdot \nabla \varphi =0\] (the first limit by the weak convergence $v_n\rightharpoonup v$, the second because $(\nabla v_n)$ is bounded and $v_n\to v$ in $L^2_{loc}(\mathbb{R}^{N+1}_+)$), we deduce that \[ \nabla v_n \rightarrow \nabla v \quad \ \mbox{a.e. in} \quad \mathbb{R}^{N+1}_+.\] Thus \[I'_\infty(v)\cdot v =0.\] Moreover, $v\neq 0$, since the local convergence of the traces yields $\int_{B_R(0)}|\gamma(v)|^2\geq \delta/2$; therefore $v \in \mathcal{N}_\infty$. We now turn our attention to the non-negativity of $v$. Seeing that \[\iint_{\mathbb{R}^{N+1}_+}\left(\nabla v\cdot \nabla\varphi+m^2v\varphi\right)+\!\int_{\mathbb{R}^{N}}V_\infty\gamma(v)\gamma(\varphi)=\!\int_{\mathbb{R}^{N}}[W*F(\gamma(v))]f(\gamma(v))\gamma(\varphi)\] for every $\varphi\in H^1(\mathbb{R}^{N+1}_+)$, we choose $\varphi=v^-=\min\{v,0\}$. Since $f(t)=0$ for $t\leq 0$, the right-hand side vanishes, while \eqref{I+} applied to $I_\infty$ shows that the left-hand side is bounded below by a positive multiple of $\|v^-\|^2$. Hence $v^-=0$, that is, $v\geq 0$. We are done. $\hfill\Box$\end{proof} \section{Proof of Theorem \ref{t1}} In order to consider the general case of the potential $V(y)$, we state a well-known result due to M. Struwe: \begin{lemma}[Splitting Lemma]\label{Struwe} Let $(u_n)\subset H^1(\mathbb{R}^{N+1}_+)$ be such that \[I(u_n)\to c,\qquad I'(u_n)\to 0\] and $u_n\rightharpoonup u_0$ weakly in $H^1(\mathbb{R}^{N+1}_+)$. 
Then $I'(u_0)=0$ and we have \emph{either} \begin{enumerate} \item [($i$)] $u_n\to u_0$ strongly in $H^1(\mathbb{R}^{N+1}_+)$; \item [($ii$)] there exist $k\in\mathbb{N}$, sequences $(y^j_n)\subset\mathbb{R}^N$ such that $|y^j_n|\to\infty$ for $j\in \{1,\ldots,k\}$, and nontrivial solutions $u^1,\ldots,u^k$ of problem \eqref{Pinfty} so that \[I(u_n)\to I(u_0)+\sum_{j=1}^k I_\infty (u^j)\] and \[\left\|u_n-u_0-\sum_{j=1}^ku^j(\cdot-y^j_n)\right\|\to 0.\] \end{enumerate} \end{lemma} \begin{lemma}\label{PS} The functional $I$ satisfies $(PS)_c$ for any $0\leq c<c_\infty$. \end{lemma} \begin{proof}Let us suppose that $(u_n)$ satisfies \[I(u_n)\to c<c_\infty\qquad\text{and}\qquad I'(u_n)\to 0.\] We can suppose that the sequence $(u_n)$ is bounded, according to Lemma \ref{bounded}. Therefore, for a subsequence, we have $u_n\rightharpoonup u_0$ in $H^1(\mathbb{R}^{N+1}_+)$. It follows from the Splitting Lemma (Lemma \ref{Struwe}) that $I'(u_0)=0$. Since \begin{align*} I'(u_0)\cdot u_0&=\iint_{\mathbb{R}^{N+1}_+}\left(|\nabla u_0|^2+m^2u^2_0\right)+\int_{\mathbb{R}^{N}}V(y)|\gamma(u_0)|^2\\ &\qquad-\int_{\mathbb{R}^{N}}[W*F(\gamma(u_0))]f(\gamma(u_0))\gamma(u_0)\\ \intertext{and} I(u_0)&=\frac{1}{2}\iint_{\mathbb{R}^{N+1}_+}\left(|\nabla u_0|^2+m^2u^2_0\right)+\frac{1}{2}\int_{\mathbb{R}^{N}}V(y)|\gamma(u_0)|^2\\ &\qquad-\frac{1}{2}\int_{\mathbb{R}^{N}}[W*F(\gamma(u_0))]F(\gamma(u_0)), \end{align*} we conclude that \begin{equation}\label{Iu0} I(u_0)=\int_{\mathbb{R}^{N}}[W*F(\gamma(u_0))]\left(\frac{1}{2}f(\gamma(u_0))\gamma(u_0)-F(\gamma(u_0))\right)\geq 0,\end{equation} as a consequence of the Ambrosetti-Rabinowitz condition. If $u_n\not\to u_0$ in $H^1(\mathbb{R}^{N+1}_+)$, by applying again the Splitting Lemma we guarantee the existence of $k\in\mathbb{N}$ and nontrivial solutions $u^1,\ldots,u^k$ of problem \eqref{Pinfty} satisfying \[\lim_{n\to\infty}I(u_n)=c=I(u_0)+\sum_{j=1}^kI_\infty(u^j)\geq kc_\infty\geq c_\infty,\] contradicting our hypothesis. We are done. 
$\hfill\Box$\end{proof}\vspace*{.2cm} We prove the next result by adapting the proof given in Furtado, Maia and Medeiros \cite{FMM}: \begin{lemma}\label{ccinfty}Suppose that $V(y)$ satisfies $(V_3)$. Then \[0<c<c_\infty,\] where $c$ is characterized in Lemma \ref{bounded}. \end{lemma} \begin{proof}Let $\bar u\in \mathcal{N}_\infty$ be the weak solution of \eqref{Pinfty} given by Theorem \ref{teo ground state} and $t_{\bar u}>0$ be the unique number such that $t_{\bar u}\bar u\in \mathcal{N}$. We claim that $t_{\bar u}<1$. Indeed, since $t_{\bar u}\bar u\in \mathcal{N}$, $V(y)<V_\infty$ and $\bar u\in\mathcal{N}_\infty$, \begin{align*} \int_{\mathbb{R}^{N}}[W*F(\gamma(t_{\bar u}\bar u))]f(\gamma(t_{\bar u}\bar u))\gamma(t_{\bar u}\bar u)&=t^2_{\bar u}\iint_{\mathbb{R}^{N+1}_+}\left(|\nabla {\bar u}|^2+m^2{\bar u}^2\right)+t^2_{\bar u}\int_{\mathbb{R}^{N}}V(y)|\gamma(\bar{u})|^2\\ &< t^2_{\bar u}\iint_{\mathbb{R}^{N+1}_+}\left(|\nabla {\bar u}|^2+m^2{\bar u}^2\right)+t^2_{\bar u}\int_{\mathbb{R}^{N}}V_\infty|\gamma(\bar{u})|^2\\ &=t^2_{\bar u}\int_{\mathbb{R}^{N}}[W*F(\gamma(\bar u))]f(\gamma(\bar u))\gamma(\bar u). \end{align*} Adding and subtracting $t^2_{\bar u}\int_{\mathbb{R}^{N}}[W*F(\gamma(t_{\bar u}\bar u))]f(\gamma(\bar u))\gamma(\bar u)$ on the right-hand side, this yields \begin{align*} 0&>t^2_{\bar u}\int_{\mathbb{R}^{N}}[W*F(\gamma(t_{\bar u}\bar u))]\left(\frac{f(\gamma(t_{\bar u}\bar u))}{\gamma(t_{\bar u}\bar u)}-\frac{f(\gamma(\bar u))}{\gamma(\bar u)}\right)|\gamma(\bar u)|^2\\ &\qquad +t^2_{\bar u}\int_{\mathbb{R}^{N}}\left[W*\left(F(\gamma(t_{\bar u}\bar u))-F(\gamma(\bar u))\right)\right]f(\gamma(\bar u))\gamma(\bar u). \end{align*} If $t_{\bar u}\geq 1$, since $f(s)/s$ is increasing, the first integral is non-negative and, since $F$ is increasing, the second integral as well, which is impossible. We conclude that $t_{\bar u}<1$. 
Lemma \ref{bounded} and its previous comments show that \[c\leq \max_{t\geq 0}I(t\bar u)=I(t_{\bar u}\bar u)=\int_{\mathbb{R}^{N}}[W*F(\gamma(t_{\bar u}\bar u))]\left(\frac{1}{2}f(\gamma(t_{\bar u}\bar u))\gamma(t_{\bar u}\bar u)-F(\gamma(t_{\bar u}\bar u))\right).\] Since \[g(t)=\int_{\mathbb{R}^{N}}[W*F(\gamma(t\bar u))]\left(\frac{1}{2}f(\gamma(t\bar u))\gamma(t\bar u)-F(\gamma(t\bar u))\right) \] is a strictly increasing function of $t$ (since $s\mapsto f(s)/s$ is increasing, both $F(s)$ and $\frac{1}{2}f(s)s-F(s)$ are increasing in $s>0$), we conclude that \[c\leq g(t_{\bar u})<g(1)=\int_{\mathbb{R}^{N}}[W*F(\gamma(\bar u))]\left(\frac{1}{2}f(\gamma(\bar u))\gamma(\bar u)-F(\gamma(\bar u))\right)=c_\infty,\] proving our result. $\hfill\Box$\end{proof} \noindent\textit{Proof of Theorem \ref{t1}}. Let $(u_n)$ be the Palais-Smale sequence given by Lemma \ref{gpm}. It follows from Lemmas \ref{PS} and \ref{ccinfty} that $u_n\to u$ in $H^1(\mathbb{R}^{N+1}_+)$, with $I(u)=c$ and $I'(u)=0$. We now turn our attention to the non-negativity of $u$. Seeing that \[\iint_{\mathbb{R}^{N+1}_+}\left(\nabla u\cdot \nabla\varphi+m^2u\varphi\right)+\int_{\mathbb{R}^{N}}V(y)\gamma(u)\gamma(\varphi)=\int_{\mathbb{R}^{N}}[W*F(\gamma(u))]f(\gamma(u))\gamma(\varphi)\] for every $\varphi\in H^1(\mathbb{R}^{N+1}_+)$, we choose $\varphi=u^-=\min\{u,0\}$. Since $f(t)=0$ for $t\leq 0$, the right-hand side vanishes, while \eqref{I+} shows that the left-hand side is bounded below by a positive multiple of $\|u^-\|^2$. Hence $u^-=0$, that is, $u\geq 0$. The proof is complete. $\hfill\Box$ \section{Proof of Theorem \ref{classical}} The proof of the next result adapts arguments in \cite{Cabre} and \cite{ZelatiNolasco}. 
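The computations below repeatedly use an elementary identity for the truncations $v_T=\min\{v_+,T\}$; writing $D_T=\{0<v\leq T\}$ (so that $v=v_T$ and $\nabla v_T=\nabla v$ on $D_T$, while $\nabla v_T=0$ off $D_T$), expanding $\nabla(vv_T^{\beta})=v_T^{\beta}\nabla v+\beta vv_T^{\beta-1}\nabla v_T$ gives

```latex
% Cross term and last term are supported on D_T,
% where v = v_T and \nabla v_T = \nabla v:
\[
|\nabla(vv_T^{\beta})|^{2}
=v_T^{2\beta}|\nabla v|^{2}
+\bigl(2\beta+\beta^{2}\bigr)\,v_T^{2\beta}|\nabla v|^{2}\chi_{D_T}.
\]
```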
\begin{proposition}\label{p1} For all $\beta>0$ it holds \begin{align*} \hspace*{-.25cm}|\gamma(v_+)^{1+\beta}|^2_{2^{\#}}&\leq 2C^2_{2^{\#}}C_\beta\left[\left(|V|_\infty+CC_1(2+M)\right)|\gamma(v_+)^{1+\beta}|^2_{2}\right.\nonumber\\ &\qquad\qquad\quad\left.+CC_1|g|_{2N/[N(2-\theta)+\theta]}|\gamma(v_+)^{1+\beta}|^2_{2^{\#}(2/\theta)}\right], \end{align*} where $C_\beta=\max\left\{m^{-2},1+\frac{\beta}{2}\right\}$, $C$, $C_1$ and $M=M(\beta)$ are positive constants and $g=|W_1*F(\gamma(v))|$ is the function given by Lemma \ref{hipW}. \end{proposition} \noindent\begin{proof} Choosing $\varphi=\varphi_{\beta,T}=vv^{2\beta}_T$ in \eqref{derivative}, where $v_T=\min\{v_+,T\}$ and $\beta>0$, we have $0\leq \varphi_{\beta,T}\in H^1(\mathbb{R}^{N+1}_+)$ and \begin{multline}\label{varphibetaT} \iint_{\mathbb{R}^{N+1}_+}\nabla v\cdot\nabla \varphi_{\beta,T}+m^2v\varphi_{\beta,T}\\ =-\int_{\mathbb{R}^{N}}V(y)\gamma(v)\gamma(\varphi_{\beta,T})+\int_{\mathbb{R}^{N}}\left(W*F(\gamma(v))\right)f(\gamma(v))\gamma(\varphi_{\beta,T}). \end{multline} Since $\nabla\varphi_{\beta,T}=v^{2\beta}_T\nabla v+2\beta vv^{2\beta-1}_T\nabla v_T$, the left-hand side of \eqref{varphibetaT} is given by \begin{multline}\label{varphibetaTl} \iint_{\mathbb{R}^{N+1}_+}\nabla v\cdot \left(v^{2\beta}_T\nabla v+2\beta vv^{2\beta-1}_T\nabla v_T\right)+m^2v\left(vv^{2\beta}_T\right)\\ =\iint_{\mathbb{R}^{N+1}_+}v^{2\beta}_T\left[|\nabla v|^2+m^2v^2\right]+2\beta\iint_{D_T}v^{2\beta}_T|\nabla v|^2, \end{multline} where $D_T=\{(x,y)\in \mathbb{R}^{N+1}_+\,:\, 0<v(x,y)\leq T\}$, so that $v=v_T$ and $\nabla v_T=\nabla v$ on $D_T$, while $\nabla v_T=0$ elsewhere. Now we express \eqref{varphibetaTl} in terms of $\|vv^\beta_T\|^2$. For this, we note that $\nabla(vv^\beta_T)=v^\beta_T\nabla v+\beta vv^{\beta-1}_T\nabla v_T$. 
Therefore, \[\iint_{\mathbb{R}^{N+1}_+}|\nabla (vv^\beta_T)|^2=\iint_{\mathbb{R}^{N+1}_+}v^{2\beta}_T|\nabla v|^2+(2\beta+\beta^2)\iint_{D_T}v^{2\beta}_T|\nabla v|^2,\] thus yielding \begin{align}\label{norm} \|vv^\beta_T\|^2&=\left(\iint_{\mathbb{R}^{N+1}_+}v^{2\beta}_T|\nabla v|^2+(2\beta+\beta^2)\iint_{D_T}v^{2\beta}_T|\nabla v|^2\right)+\iint_{\mathbb{R}^{N+1}_+}(vv^\beta_T)^2\nonumber\\ &=\iint_{\mathbb{R}^{N+1}_+}v^{2\beta}_T\left(|\nabla v|^2+|v|^2\right)+2\beta\left(1+\frac{\beta}{2}\right)\iint_{D_T}v^{2\beta}_T|\nabla v|^2\nonumber\\ &\leq C_\beta\left[\iint_{\mathbb{R}^{N+1}_+}v^{2\beta}_T\left(|\nabla v|^2+m^2|v|^2\right)+2\beta\iint_{D_T}v^{2\beta}_T|\nabla v|^2\right], \end{align} where $C_\beta=\max\left\{m^{-2},\left(1+\frac{\beta}{2}\right)\right\}$. Gathering \eqref{varphibetaT}, \eqref{varphibetaTl} and \eqref{norm}, we obtain \begin{align}\label{norm=r} \|vv^\beta_T\|^2\leq& C_\beta\left[-\int_{\mathbb{R}^{N}}V\gamma(v)^2\gamma(v_T)^{2\beta}\right.\nonumber\\ &\qquad+\left.\int_{\mathbb{R}^{N}}\left(W*F(\gamma(v))\right)f(\gamma(v))\gamma(v)\gamma(v_T)^{2\beta}\right]. \end{align} We now start to consider the right-hand side of \eqref{norm=r}. 
Since $|f(t)|\leq C_1(|t|+|t|^{\theta-1})$, Corollary \ref{cor} shows that the right-hand side of \eqref{norm=r} can be estimated as follows: \begin{align}\label{rhs} \leq& C_\beta\left[|V|_\infty\int_{\mathbb{R}^{N}}\gamma(vv_T^\beta)^{2}+\int_{\mathbb{R}^{N}}(C+g)|f(\gamma(v))|\,|\gamma(v)|\gamma(v_T)^{2\beta}\right]\nonumber\\ \leq&C_\beta\left[|V|_\infty\int_{\mathbb{R}^{N}}\gamma(vv_T^\beta)^{2}+C\int_{\mathbb{R}^{N}}C_1\left(|\gamma(v)|+|\gamma(v)|^{\theta-1}\right)|\gamma(v)|\gamma(v_T)^{2\beta}\right.\nonumber\\ &\qquad+\left.C_1\int_{\mathbb{R}^{N}}g\left(|\gamma(v)|+|\gamma(v)|^{\theta-1}\right)\,|\gamma(v)|\gamma(v_T)^{2\beta}\right]\nonumber\\ \leq& C_\beta\left[\left(|V|_\infty+CC_1\right)\int_{\mathbb{R}^{N}}\gamma(vv_T^\beta)^{2}+CC_1\int_{\mathbb{R}^{N}}|\gamma(v)|^{\theta-2}\gamma(v)^2\gamma(v_T)^{2\beta}\right.\nonumber\\ &\qquad+\left.C_1\int_{\mathbb{R}^{N}}g\gamma(vv_T^\beta)^{2}+C_1\int_{\mathbb{R}^{N}}g|\gamma(v)|^{\theta-2}\gamma(vv_T^\beta)^{2}\right]. \end{align} Applying Lemmas \ref{c1} and \ref{c2}, inequality \eqref{rhs} becomes \begin{align* \leq&C_\beta\left[\left(|V|_\infty+CC_1\right)\int_{\mathbb{R}^{N}}\gamma(vv_T^\beta)^{2}+CC_1\int_{\mathbb{R}^{N}}\left(1+g_2\right)\gamma(vv_T^\beta)^{2}\right.\nonumber\\ &\qquad+\left.C_1\int_{\mathbb{R}^{N}}g\gamma(vv_T^\beta)^{2}+C_1\int_{\mathbb{R}^{N}}h\gamma(vv_T^\beta)^{2}\right]\nonumber\\ \leq&C_\beta\left[\left(|V|_\infty+2CC_1\right)\!\!\int_{\mathbb{R}^{N}}\gamma(vv_T^\beta)^{2}+CC_1\int_{\mathbb{R}^{N}}g\gamma(vv_T^\beta)^{2}+CC_1\int_{\mathbb{R}^{N}}G\gamma(vv_T^\beta)^{2}\right], \end{align*} where $G=g_2+h\in L^N(\mathbb{R}^{N})$, and we assume, without loss of generality, that $C\geq 1$, so that $CC_1\geq C_1$. 
Because $|\gamma(u)|_{2^\#}\leq C_{2^{\#}}\|u\|$ for all $u\in H^1(\mathbb{R}^{N+1}_+)$, the last inequality implies \begin{align}\label{rhs3} |\gamma(vv^{\beta}_T)|^2_{2^{\#}}&\leq C^2_{2^{\#}}C_\beta\left[\left(|V|_\infty+2CC_1\right)\!\!\int_{\mathbb{R}^{N}}\gamma(vv_T^\beta)^{2}+CC_1\int_{\mathbb{R}^{N}}g\gamma(vv_T^\beta)^{2}\right.\nonumber\\ &\qquad\qquad\left. +CC_1\int_{\mathbb{R}^{N}}G\gamma(vv_T^\beta)^{2}\right]. \end{align} Let us consider the last integral in the right-hand side of \eqref{rhs3}. For all $M>0$, define $A_1=\{G\leq M\}$ and $A_2=\{G>M\}$. Then, since $G\in L^N(\mathbb{R}^{N})$, \begin{align* \int_{\mathbb{R}^{N}}G\gamma(vv_T^\beta)^{2}\leq& M\int_{A_1}\gamma(vv_T^\beta)^{2}+\left(\int_{A_2}G^N\right)^{\frac{1}{N}}\left(\int_{A_2}\gamma(vv_T^\beta)^{2\frac{N}{N-1}}\right)^{\frac{N-1}{N}}\nonumber\\ \leq&M\int_{\mathbb{R}^{N}}\gamma(vv_T^\beta)^{2}+\epsilon(M)\left(\int_{\mathbb{R}^{N}}\gamma(vv_T^\beta)^{2^{\#}}\right)^{\frac{N-1}{N}}, \end{align*} where $\epsilon(M)=\left(\int_{A_2}G^N\right)^{1/N}\to 0$ when $M\to\infty$. If $M$ is taken so that $\epsilon(M)C^2_{2^{\#}}C_\beta CC_1<1/2$, we have \begin{align}\label{rhs4} |\gamma(vv^{\beta}_T)|^2_{2^{\#}}&\leq 2C^2_{2^{\#}}C_\beta \left[\left(|V|_\infty+CC_1(2+M)\right)\int_{\mathbb{R}^{N}}\gamma(vv_T^\beta)^{2}\right.\nonumber\\ &\qquad\qquad\quad\left.+CC_1\int_{\mathbb{R}^{N}}g\gamma(vv_T^\beta)^{2}\right]. 
\end{align} The Hölder inequality guarantees that \begin{align*} \int_{\mathbb{R}^{N}}g\gamma(vv^{\beta}_T)^{2}\leq |g|_{2N/[N(2-\theta)+\theta]}\left(\int_{\mathbb{R}^{N}}\gamma(vv^{\beta}_T)^{2\alpha'}\right)^{1/\alpha'}, \end{align*} where \[\alpha'=\frac{\frac{2N}{N(2-\theta)+\theta}}{\frac{2N}{N(2-\theta)+\theta}-1}=\frac{2N}{(N-1)\theta}=\frac{2^{\#}}{\theta}.\] Thus, \begin{align* \int_{\mathbb{R}^{N}}g\gamma(vv^{\beta}_T)^{2}\leq |g|_{2N/[N(2-\theta)+\theta]}\,|\gamma(vv^{\beta}_T)|^2_{2^{\#}(2/\theta)} \end{align*} and substitution on the right-hand side of \eqref{rhs4} yields \begin{align}\label{rhs5} \hspace*{-.25cm}|\gamma(vv^{\beta}_T)|^2_{2^{\#}}&\leq 2C^2_{2^{\#}}C_\beta\left[\left(|V|_\infty+CC_1(2+M)\right)|\gamma(vv^{\beta}_T)|^2_{2}\right.\nonumber\\ &\qquad\qquad\quad\left.+CC_1|g|_{2N/[N(2-\theta)+\theta]}|\gamma(vv^{\beta}_T)|^2_{2^{\#}(2/\theta)}\right]. \end{align} Letting $T\to\infty$, since $vv^\beta_T\to v^{1+\beta}_+$, it follows from \eqref{rhs5} and the monotone convergence theorem that \begin{align*} \hspace*{-.25cm}|\gamma(v_+)^{1+\beta}|^2_{2^{\#}}&\leq 2C^2_{2^{\#}}C_\beta\left[\left(|V|_\infty+CC_1(2+M)\right)|\gamma(v_+)^{1+\beta}|^2_{2}\right.\\ &\qquad\qquad\quad\left.+CC_1|g|_{2N/[N(2-\theta)+\theta]}|\gamma(v_+)^{1+\beta}|^2_{2^{\#}(2/\theta)}\right], \end{align*} and we are done. (Observe, however, that $M$ depends on $\beta$.) $\hfill\Box$\end{proof}\vspace*{.2cm} \begin{proposition}\label{p2} For all $p\in [2,\infty)$ we have $\gamma(v)\in L^p(\mathbb{R}^{N})$. \end{proposition} \noindent\begin{proof} Since $\frac{2N}{N-1}\frac{2}{\theta}\leq 2$ never occurs, we have $2<\frac{2^{\#}2}{\theta}=\frac{2N}{N-1}\frac{2}{\theta}<2^{\#}$. According to Proposition \ref{p1}, we have \begin{align}\label{bs1} |\gamma(v_+)^{1+\beta}|^2_{2^{\#}}&\leq \left[D_1|\gamma(v_+)^{1+\beta}|^2_{2}+E_1|\gamma(v_+)^{1+\beta}|^2_{2^{\#}(2/\theta)}\right], \end{align} where $D_1$ and $E_1$ are positive constants. 
Choosing $\beta_1$ so that $\beta_1+1=\frac{\theta}{2}>1$, it follows from \eqref{gammav} that \[|\gamma(v_+)^{1+\beta_1}|^{2}_{2^{\#}(2/\theta)}=|\gamma(v_+)|^{\theta}_{\frac{2N}{N-1}}<\infty,\] from which it follows that the right-hand side of \eqref{bs1} is finite. We conclude that $\gamma(v_+)\in {L^{\frac{2N}{N-1}\frac{\theta}{2}}}(\mathbb{R}^{N})$. Now, we choose $\beta_2$ so that $\beta_2+1=(\theta/2)^2$ and conclude that \[\gamma(v_+)\in L^{\frac{2N}{N-1}\frac{\theta^2}{2^2}}(\mathbb{R}^{N}).\] After $k$ iterations we obtain \[\gamma(v_+)\in L^{\frac{2N}{N-1}\frac{\theta^k}{2^k}}(\mathbb{R}^{N}),\] and, since $(\theta/2)^k\to\infty$, it follows that $\gamma(v_+)\in L^p(\mathbb{R}^{N})$ for all $p\in [2,\infty)$. Since the same arguments are valid for $v_-$, we have $\gamma(v)\in L^p(\mathbb{R}^{N})$ for all $p\in [2,\infty)$. $\hfill\Box$\end{proof}\vspace*{.2cm} Adapting the proof given in \cite{ZelatiNolasco}, we present, for the convenience of the reader, the proof of our next result:\vspace*{.2cm} \begin{proposition}\label{t2} Let $v\in H^1(\mathbb{R}^{N+1}_+)$ be a weak solution of \eqref{P}. Then $\gamma(v)\in L^p(\mathbb{R}^{N})$ for all $p\in [2,\infty]$ and $v\in L^\infty(\mathbb{R}^{N+1}_+)$. \end{proposition} \noindent\begin{proof} We recall equation \eqref{norm=r}: \begin{align*} \|vv^\beta_T\|^2\leq& C_\beta\left[-\int_{\mathbb{R}^{N}}V\gamma(vv_T^\beta)^2 +\int_{\mathbb{R}^{N}}\left(W*F(\gamma(v))\right)f(\gamma(v))\gamma(v)\gamma(v_T)^{2\beta}\right], \end{align*} where $C_\beta=\max\left\{m^{-2},1+\frac{\beta}{2}\right\}$. It follows that $W*F(\gamma(v))\in L^\infty(\mathbb{R}^{N})$, since $\gamma(v)\in L^p(\mathbb{R}^{N})$ for all $p\geq 2$, by Proposition \ref{p2}. We also know that $|f(t)|\leq C_1(|t|+|t|^{\theta-1})$ and $V$ is bounded. 
Therefore, if $C=\max\{|V|_\infty, C_1|W*F(\gamma)|_\infty\}$, we have \begin{align*} \|vv^\beta_T\|^2&\leq C_\beta C\left[\int_{\mathbb{R}^{N}}\gamma(vv_T^\beta)^2+\int_{\mathbb{R}^{N}}\left(|\gamma(v)|+|\gamma(v)|^{\theta-1}\right)\gamma(v)\gamma(v_T)^{2\beta}\right]\\ &\leq C_\beta\left[2 C\int_{\mathbb{R}^{N}}\gamma(vv_T^\beta)^2+ C\int_{\mathbb{R}^{N}}|\gamma(v)|^{\theta-2}\gamma(vv_T^{\beta})^2\right]. \end{align*} Since $|\gamma(v)|^{\theta-2}=|\gamma(v)|^{\theta-2}\chi_{\{|\gamma(v)|\leq 1\}}+|\gamma(v)|^{\theta-2}\chi_{\{|\gamma(v)|> 1\}}$, the fact that \[|\gamma(v)|^{\theta-2}\chi_{\{|\gamma(v)|> 1\}}=:g_3\in L^{2N}(\mathbb{R}^{N})\] allows us to conclude that \begin{align*}2 C\gamma(vv_T^\beta)^2+ C|\gamma(v)|^{\theta-2}\gamma(vv_T^{\beta})^2\leq (C_3+g_3)\gamma(vv_T^\beta)^2 \end{align*} for a positive constant $C_3$ and a positive function $g_3\in L^{2N}(\mathbb{R}^{N})$ that depends neither on $T$ nor on $\beta$. Therefore, \begin{align*} \|vv^\beta_T\|^2&\leq C_\beta\int_{\mathbb{R}^{N}}(C_3+g_3)\gamma(vv_T^\beta)^2 \end{align*} and \begin{align*} \|v^{\beta+1}_+\|^2&\leq C_\beta\int_{\mathbb{R}^{N}}(C_3+g_3)\gamma(v^{\beta+1}_+)^2.
\end{align*} Since \begin{align*} \int_{\mathbb{R}^{N}}g_3\gamma(v^{\beta+1}_+)^2&\leq |g_3|_{2N}\,|\gamma(v_+)^{1+\beta}|_2\, |\gamma(v_+)^{1+\beta}|_{2^{\#}}\\ &\leq |g_3|_{2N}\left(\lambda|\gamma(v_+)^{1+\beta}|^2_2+\frac{1}{\lambda}|\gamma(v_+)^{1+\beta}|^2_{2^{\#}}\right), \end{align*} we conclude that \begin{align*} |\gamma(v_+)^{1+\beta}|^2_{2^{\#}}&\leq C^2_{2^{\#}} \|v^{\beta+1}_+\|^2\\ &\leq C^2_{2^{\#}}C_\beta\left(C_3+\lambda\,|g_3|_{2N}\right)|\gamma(v_+)^{1+\beta}|^2_2+\frac{C^2_{2^{\#}}C_\beta\,|g_3|_{2N}}{\lambda}|\gamma(v_+)^{1+\beta}|^2_{2^{\#}} \end{align*} and, by taking $\lambda>0$ so that \[\frac{C^2_{2^{\#}}C_\beta\,|g_3|_{2N}}{\lambda}<\frac{1}{2},\] we obtain \begin{align}\label{fest} |\gamma(v_+)^{1+\beta}|^2_{2^{\#}}&\leq C_\beta\left(2C^2_{2^{\#}}C_3+2C^2_{2^{\#}}\lambda\,|g_3|_{2N}\right)|\gamma(v_+)^{1+\beta}|^2_2\nonumber\\ &\leq C_4C_{\beta}|\gamma(v_+)^{1+\beta}|^2_2. \end{align} Since \[C_4C_\beta\leq C_4(m^{-2}+1+\beta)\leq M^2e^{2\sqrt{1+\beta}}\] for a positive constant $M$, it follows from \eqref{fest} that \begin{align*} |\gamma(v_+)|_{2^{\#}(1+\beta)}&\leq M^{1/(1+\beta)}e^{1/\sqrt{1+\beta}}|\gamma(v_+)|_{2(1+\beta)}. \end{align*} We now apply an iteration argument, taking $2(1+\beta_{n+1})=2^{\#}(1+\beta_n)$ and starting with $\beta_0=0$. This produces \[|\gamma(v_+)|_{2^{\#}(1+\beta_n)}\leq M^{1/(1+\beta_n)}e^{1/\sqrt{1+\beta_n}}|\gamma(v_+)|_{2(1+\beta_n)}.\] Because $(1+\beta_n)=\left(\frac{2^{\#}}{2}\right)^n=\left(\frac{N}{N-1}\right)^n$, we have \[\sum_{n=0}^\infty \frac{1}{1+\beta_n}<\infty\qquad\textrm{and}\qquad \sum_{n=0}^\infty\frac{1}{\sqrt{1+\beta_n}}<\infty.\] Thus, \[|\gamma(v_+)|_\infty=\lim_{n\to\infty}|\gamma(v_+)|_{2^{\#}(1+\beta_n)}<\infty,\] from which it follows that $|\gamma(v_+)|_p<\infty$ for all $p\in [2,\infty]$. The same argument applies to $\gamma(v_-)$, proving that $\gamma(v)\in L^p(\mathbb{R}^{N})$ for all $p\in [2,\infty]$.
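The bookkeeping behind this iteration can be checked numerically. The sketch below (illustrative only; the values of $N$ and of the constant $M$ are sample assumptions) verifies that the exponent series above converge, which is exactly what keeps the product of iterated constants bounded:

```python
import math

# Moser-type iteration bookkeeping (illustrative sketch).
# Exponent recursion: 2(1 + beta_{n+1}) = 2#(1 + beta_n), beta_0 = 0,
# so 1 + beta_n = (2#/2)^n = (N/(N-1))^n.  Sample dimension:
N = 3
ratio = N / (N - 1)                      # 2#/2
one_plus_beta = [ratio**n for n in range(200)]

s1 = sum(1.0 / b for b in one_plus_beta)             # sum 1/(1+beta_n)
s2 = sum(1.0 / math.sqrt(b) for b in one_plus_beta)  # sum 1/sqrt(1+beta_n)

# Geometric series: sum ((N-1)/N)^n = N, so s1 approaches N.
assert abs(s1 - N) < 1e-6
# s2 is geometric in sqrt((N-1)/N), hence also finite.
assert s2 < 1.0 / (1.0 - math.sqrt((N - 1) / N)) + 1e-9

# Therefore M^s1 * e^s2 is a finite constant (M is a sample value here),
# which is what makes lim |gamma(v_+)|_{2#(1+beta_n)} finite.
M = 7.0
bound_factor = M**s1 * math.exp(s2)
assert math.isfinite(bound_factor)
```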
Taking $\lambda=1$ and using that $|\gamma(v_+)^{1+\beta}|_{p}<C_5$ for all $p$, we obtain, for any $\beta>0$, \begin{align}\label{final0} \|v^{\beta+1}_+\|^2 &\leq C_\beta\left(C_3+|g_3|_{2N}\right)C^{2}_5+C_\beta\,|g_3|_{2N}C^{2}_5. \end{align} But $\|v_+\|^{1+\beta}_{2^*(1+\beta)}=\|v_+^{1+\beta}\|_{2^*}\leq C_{2^*}\|v_+^{1+\beta}\|$ and, for a positive constant $\tilde{c}$, it follows from \eqref{final0} that \[\|v_+\|^{2(1+\beta)}_{2^*(1+\beta)}\leq \tilde{c}C_\beta C^{2(1+\beta)}_5.\] Thus, \[\|v_+\|_{2^*(1+\beta)}\leq \tilde{c}^{1/[2(1+\beta)]}C_\beta^{1/[2(1+\beta)]}C_5 \] and the right-hand side of the last inequality is uniformly bounded for all $\beta>0$. We are done. $\hfill\Box$\end{proof} We now state \cite[Proposition 3.9]{ZelatiNolasco}: \begin{proposition}\label{regZN}Suppose that $v\in H^1(\mathbb{R}^{N+1}_+)\cap L^\infty(\mathbb{R}^{N+1}_+)$ is a weak solution of \begin{equation}\label{C}\left\{\begin{aligned} -\Delta v +m^2v&=0, &&\mbox{in} \ \mathbb{R}^{N+1}_+,\\ -\displaystyle\frac{\partial v}{\partial x}(0,y)&=h(y) &&\mbox{for all} \ y\in\mathbb{R}^{N},\end{aligned}\right.\end{equation} where $h\in L^p(\mathbb{R}^{N})$ for all $p\in [2,\infty]$. Then $v\in C^{\alpha}([0,\infty)\times\mathbb{R}^{N})\cap W^{1,q}((0,R)\times\mathbb{R}^{N})$ for all $q\in [2,\infty)$ and $R>0$. In addition, if $h\in C^\alpha(\mathbb{R}^N)$, then $v\in C^{1,\alpha}([0,\infty)\times\mathbb{R}^N)\cap C^2(\mathbb{R}^{N+1}_+)$ is a classical solution of \textup{\eqref{C}}.
\end{proposition} \noindent\textit{Proof of Theorem \ref{classical}.} In the proof of Proposition \ref{regZN} (see \cite[Proposition 3.9]{ZelatiNolasco}), defining \[\rho(x,y)=\int_0^x v(t,y)\mathrm{d} t\] and taking the odd extension of $h$ and $\rho$ to the whole $\mathbb{R}^{N+1}$ (which we still denote simply by $h$ and $\rho$), it is shown in \cite{ZelatiNolasco} that $\rho$ satisfies the equation \begin{equation}\label{rho}-\Delta\rho+m^2\rho=h\quad\text{in }\ \mathbb{R}^{N+1}\end{equation} and that $\rho\in C^{1,\alpha}(\mathbb{R}^{N+1})$ for all $\alpha\in(0,1)$, by applying Sobolev's embedding. Therefore, $v(x,y)=\frac{\partial \rho}{\partial x}(x,y)\in C^\alpha(\mathbb{R}^{N+1})$. In our case, \[h(y)=-V(y)v(0,y)+\left(W*F\left(v(0,y)\right)\right)f\left(v(0,y)\right).\] We now rewrite equation \eqref{rho} as \[-\Delta \rho+V(y)\frac{\partial \rho}{\partial x}(0,y)+m^2\rho=\left(W*F\left(\frac{\partial \rho}{\partial x}(0,y)\right)\right)f\left(\frac{\partial \rho}{\partial x}(0,y)\right).\] Since $f\in C^1$ and $\frac{\partial \rho}{\partial x}(x,y)$ is bounded, the right-hand side of the last equality belongs to $C^\alpha(\mathbb{R}^{N+1})$. Thus, classical elliptic boundary regularity yields \[\rho\in C^{2,\alpha}(\mathbb{R}^{N+1})\quad\Rightarrow\quad v\in C^{1,\alpha}(\mathbb{R}^{N+1}_+).\] Hence, by applying classical interior elliptic regularity directly to $v$, we deduce that $v\in C^{1,\alpha}(\mathbb{R}^{N+1}_+)\cap C^{2}(\mathbb{R}^{N+1}_+)$ is a classical solution of problem \eqref{P}. $\hfill\Box$ \section{Proof of Theorem \ref{t3}} We now adjust \cite[Theorem 3.14]{ZelatiNolasco} to our needs. The original statement guarantees that $v\in C^\infty([0,\infty)\times\mathbb{R}^{N})$, a result that depends on the function $h$ (of Proposition \ref{regZN}) considered in that paper.
For the convenience of the reader, we present the proof of the next result: \begin{theorem}\label{314} If $v\in H^1(\mathbb{R}^{N+1}_+)$ is a critical point of the energy functional $I$, then \[|v(x,y)|e^{\lambda x}\to 0\] as $x+|y|\to \infty$, for any $\lambda<m$. \end{theorem} \noindent \begin{proof} Let us consider a solution $v$ of the problem \[\left\{\begin{array}{ll} -\Delta v+m^2v=0 &\text{in}\ \mathbb{R}^{N+1}_+\\ v(0,y)=v_0(y)\in L^2(\mathbb{R}^{N}), &y\in \mathbb{R}^{N}=\partial \mathbb{R}^{N+1}_+. \end{array}\right.\] By applying the Fourier transform with respect to the variable $y\in\mathbb{R}^{N}$ we obtain \[\mathcal{F}v(x,k)=e^{-\sqrt{|2\pi k|^2+m^2}\,x}\mathcal{F}v_0(k),\] from which it follows that \[\sup_{y\in\mathbb{R}^{N}}|v(x,y)|\leq C|v_0|_2e^{-mx}.\] Since Proposition \ref{regZN} shows that $v\in W^{1,q}((0,R)\times\mathbb{R}^{N})$ for all $q\in [2,\infty)$ and $R>0$, we conclude that $|v(x,y)|\to 0$ when $|y|\to\infty$ for any $x$ and $|v(x,y)|e^{\lambda x}\to 0$ as $x+|y|\to \infty$ for any $\lambda<m$. $\hfill\Box$\end{proof}\hspace*{.2cm} We now adapt the proof of \cite[Theorem 5.1]{ZelatiNolasco}. In that paper it is assumed that $W(y)\to 0$ as $|y|\to\infty$, a condition that is not necessary here.\\ \noindent\textit{Proof of Theorem \ref{t3}}. We denote \[K(y)=W*F\left(\frac{\partial w}{\partial x}(0,y)\right).\] It follows easily that $K$ is bounded. By Theorem \ref{t1} we have $w(x,y)\geq 0$. Applying Harnack's inequality, we conclude that $w$ is strictly positive. Following \cite{ZelatiNolasco}, for any $R>0$ we denote \begin{align*} B^+_R&=\{(x,y)\in \mathbb{R}^{N+1}_+\,:\, \sqrt{x^2+|y|^2}<R\}\\ \Omega^+_R&=\{(x,y)\in \mathbb{R}^{N+1}_+\,:\, \sqrt{x^2+|y|^2}>R\}\\ \Gamma^+_R&=\{(0,y)\in \partial\mathbb{R}^{N+1}_+\,:\, |y|>R\} \end{align*} and define \[f_R(x,y)=C_Re^{-\alpha x}e^{-(m-\alpha)\sqrt{x^2+|y|^2}},\] where the positive constants $C_R$ and $\alpha\in (V_0,m)$ will be chosen later on.
A simple computation shows that \[\Delta f_R=\left(\alpha^2+(m-\alpha)^2+\frac{2\alpha(m-\alpha)x}{\sqrt{x^2+|y|^2}}-\frac{N(m-\alpha)}{\sqrt{x^2+|y|^2}}\right)f_R.\] Thus, for $R$ large enough, we have \[\left\{\begin{array}{ll} -\Delta f_R+m^2f_R\geq 0 &\text{in }\ \Omega^+_R\\ -\frac{\partial f_R}{\partial x}=\frac{\partial f_R}{\partial \eta}=\alpha f_R &\text{on }\ \Gamma^+_R.\end{array} \right.\] We now define \[\rho(x,y)=f_R(x,y)-w(x,y).\] We clearly have $-\Delta\rho(x,y)+m^2\rho(x,y)\geq 0$ in $\Omega^+_R$. Choosing \[C_R=e^{mR}\max_{\partial B^+_R}w,\] we also have $\rho(x,y)\geq 0$ on $\partial B^+_R$ and $\rho(x,y)\to 0$ when $x+|y|\to\infty$. We claim that $\rho(x,y)\geq 0$ in $\overline{\Omega}^+_R$. Suppose, to the contrary, that $\inf_{\overline{\Omega}^+_R} \rho(x,y)<0$. By the strong maximum principle, there exists $(0,y_0)\in \Gamma^+_R$ such that $\rho(0,y_0)=\inf_{\overline{\Omega}^+_R} \rho(x,y)<\rho(x,y)$ for all $(x,y)\in \Omega^+_R$. Defining \[z(x,y)=\rho(x,y)e^{\lambda x}\] for some $\lambda\in (V_0,m)$, a straightforward calculation shows that \[-\Delta \rho+m^2\rho=e^{-\lambda x}\left(-\Delta z+2\lambda \partial_xz+(m^2-\lambda^2)z\right).\] Since $-\Delta \rho+m^2\rho\geq 0$, we conclude that $-\Delta z+2\lambda \partial_xz+(m^2-\lambda^2)z\geq0$.
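The Laplacian formula for $f_R$ computed above can be verified by finite differences. A minimal pure-Python sketch, with sample values of $\alpha$, $m$ and $N$ (all chosen for illustration only), taking $C_R=1$:

```python
import math

# Finite-difference check of
#   Delta f_R = (alpha^2 + (m-alpha)^2 + 2 alpha (m-alpha) x / r
#                - N (m-alpha) / r) f_R     in R^{N+1},  r = |(x, y)|,
# for f_R(x, y) = exp(-alpha x) * exp(-(m - alpha) r)   (C_R = 1).
alpha, m, N = 0.4, 1.3, 2          # sample parameters; coords (x, y1, y2)

def f(p):
    x = p[0]
    r = math.sqrt(sum(c * c for c in p))
    return math.exp(-alpha * x - (m - alpha) * r)

def laplacian(p, h=1e-4):
    # central second differences summed over all N+1 coordinates
    total = 0.0
    for i in range(len(p)):
        pp, pm = list(p), list(p)
        pp[i] += h
        pm[i] -= h
        total += (f(pp) - 2.0 * f(p) + f(pm)) / h**2
    return total

p = [0.7, 1.1, -0.5]               # a point away from the origin
x, r = p[0], math.sqrt(sum(c * c for c in p))
formula = (alpha**2 + (m - alpha)**2
           + 2 * alpha * (m - alpha) * x / r
           - N * (m - alpha) / r) * f(p)

assert abs(laplacian(p) - formula) < 1e-5
```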
Another application of the strong maximum principle yields \[z(0,y_0)=\inf_{\Gamma^+_R}z=\inf_{\Gamma^+_R}\rho=\rho(0,y_0)<0.\] An application of Hopf's lemma produces $\frac{\partial z}{\partial \eta}(0,y_0)<0$, that is, \begin{equation}\label{z}-\frac{\partial z}{\partial x}(0,y_0)<0.\end{equation} Since $\frac{\partial z}{\partial x}=\frac{\partial \rho}{\partial x}e^{\lambda x}+\lambda \rho e^{\lambda x}$, we conclude that \[\frac{\partial z}{\partial x}(0,y_0)=\frac{\partial \rho}{\partial x}(0,y_0)+\lambda \rho(0,y_0)\] and so \begin{align*}-\frac{\partial z}{\partial x}(0,y_0) &=-\frac{\partial f_R}{\partial x}(0,y_0)+\frac{\partial w}{\partial x}(0,y_0)-\lambda f_R(0,y_0)+\lambda w(0,y_0)\\ &=(\alpha-\lambda)f_R(0,y_0)+V(y_0)w(0,y_0)-K(y_0)f(w(0,y_0))\\&\qquad+\lambda w(0,y_0)\\ &=(\alpha-\lambda)f_R(0,y_0)+(V(y_0)+V_0)w(0,y_0)-K(y_0)f(w(0,y_0))\\ &\qquad+(\lambda-V_0)w(0,y_0). \end{align*} Now, choosing $\alpha=\lambda$, since $\lambda>V_0$ (so that the last term above is non-negative), the positiveness of $(V(y_0)+V_0)w(0,y_0)$ and hypothesis $(f_1)$ guarantee that $-\frac{\partial z}{\partial x}(0,y_0)>0$, thus contradicting \eqref{z}. $\hfill\Box$
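The exponential decay mechanism behind Theorem \ref{314} can also be illustrated numerically: since $\sqrt{|2\pi k|^2+m^2}\geq m$, every Fourier mode decays at least like $e^{-mx}$. A minimal sketch in one boundary dimension, with an assumed Gaussian Fourier datum (purely illustrative):

```python
import math

# Illustration of the decay bound: in Fourier space
#   v-hat(x, k) = exp(-sqrt(k^2 + m^2) x) * v0-hat(k),
# and sqrt(k^2 + m^2) >= m gives decay at least e^{-m x}.
# Sample mass and an assumed Gaussian datum v0-hat(k) = exp(-k^2).
m = 1.5

def v_at(x, K=12.0, n=4000):
    # trapezoid rule for  v(x, 0) = int exp(-sqrt(k^2+m^2) x - k^2) dk
    h = 2.0 * K / n
    total = 0.0
    for i in range(n + 1):
        k = -K + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-math.sqrt(k * k + m * m) * x - k * k)
    return total * h

# pointwise domination of the integrand gives v(x) <= v(0) e^{-m x}
for x in (0.5, 1.0, 2.0, 4.0):
    assert v_at(x) <= v_at(0.0) * math.exp(-m * x) + 1e-12
```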
\subsection{The double sum in the energy correction} In this part we provide an integral representation for the double sum \begin{equation} \sum_{k_{1}\in\mathbb{Z}}\sum_{k_{2}\neq n_{q}}D_{1}\left(k_{1},k_{2}\right) \end{equation} with \begin{align} D_{1}(k_{1},k_{2}) =&\; \frac{1}{\omega_{k_{1}} \omega_{k_{2}} \omega_{k_{1}+k_{2}-n_{q}}} \left( \frac{1}{\omega_{k_{1}} + \omega_{k_{2}} + \omega_{k_{1}+k_{2}-n_{q}} + \omega_{n_{q}}} \right. \cr & \left. + \frac{1}{\omega_{k_{1}} + \omega_{k_{2}} + \omega_{k_{1}+k_{2}-n_{q}} - \omega_{n_{q}}} \right). \end{align} We start by applying the residue method to the $k_{1}$ variable. The analytically continued function $D_{1}\left(z,k_{2}\right)$ has two pairs of branch cuts in the complex $z$ plane, starting at $\pm i\mu$ and at $q-\frac{2\pi k_{2}}{L}\pm i\mu$ and running from the real axis towards complex infinity in the imaginary direction. The integrals coming from the cuts below the real axis can be combined neatly with those above the real axis after a change of integration variable.
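As a toy instance of the residue method invoked here (a classical example, not the function $D_{1}$ itself), the conversion of a sum over integer modes into residue data can be checked numerically:

```python
import math

# Toy instance of the residue trick used on the k1 sum: summing
# f(k) over integers by closing a contour against pi*cot(pi*z) gives,
# for f(z) = 1/(z^2 + a^2), the classical closed form
#   sum_{k in Z} 1/(k^2 + a^2) = (pi/a) * coth(pi*a).
a = 0.8                                    # sample parameter
K = 200000
partial = sum(1.0 / (k * k + a * a) for k in range(-K, K + 1))
closed = (math.pi / a) / math.tanh(math.pi * a)

# the neglected tail beyond |k| = K is ~ 2/K
assert abs(partial - closed) < 3.0 / K
```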
Introducing $\kappa_{2}=2\pi k_{2}L^{-1}$, the resulting integral can be written as \begin{eqnarray} \sum_{k_{1}\in\mathbb{Z}}\sum_{k_{2}\neq n_{q}}D_{1}\left(k_{1},k_{2}\right) & = & \sum_{k_{2}\neq n_{q}}\intop_{1}^{\infty}du\:i\mu\frac{L}{2\pi}\coth\left(\frac{\mu Lu}{2}\right)\left(\Theta_{q,\mu}\left(\kappa_{2}-q,u\right)+\Delta_{q,\mu}\left(\kappa_{2},u\right)\right.\nonumber \\ & & \left.+\Theta_{q,\mu}\left(\kappa_{2}-q,-u\right)+\Delta_{q,\mu}\left(\kappa_{2},-u\right)\right)\label{eq:dsum1} \end{eqnarray} where \begin{equation} \Theta_{q,\mu}\left(\kappa,u\right)=\frac{\kappa\left(\mu+iqu\right)-\mu q\left(u^{2}-1\right)}{\kappa\mu^{2}\left(\kappa+q+i\mu u\right)\left(iq+\mu u\right)\sqrt{\left(u^{2}-1\right)\left(\mu^{2}+\left(\kappa+i\mu u\right)^{2}\right)}} \end{equation} and \begin{equation} \Delta_{q,\mu}\left(\kappa,u\right)=-\frac{\kappa qu+\mu^{2}u+i\mu q-i\kappa\mu}{\mu^{2}\left(\kappa-q\right)\left(q-i\mu u\right)\left(\kappa+i\mu u\right)\sqrt{\left(u^{2}-1\right)\left(\mu^{2}+\kappa^{2}\right)}}. \end{equation} We now turn the remaining summation into an integration. We note that in (\ref{eq:dsum1}) the terms $\Theta_{q,\mu}$ and $\Delta_{q,\mu}$ with argument $+u$ are the contributions of the branch cuts starting at $+i\mu$ and $q-\kappa_2-i\mu$, whereas the terms with argument $-u$ correspond to the other two cuts.
Interchanging the $k_{2}$ sum with the integral, the remaining sums to be evaluated have the form \begin{eqnarray} S_{\Theta} & = & \sum_{k_{2}\neq n_{q}}\left[\Theta_{q,\mu}\left(\kappa_{2}-q,u\right)+\Theta_{q,\mu}\left(\kappa_{2}-q,-u\right)\right]\cr & = & \sum_{k_{2}\neq0}\left[\Theta_{q,\mu}\left(\kappa_{2},u\right)+\Theta_{q,\mu}\left(\kappa_{2},-u\right)\right]\cr & = & \sum_{k_{2}\neq0} \left[ \Theta_{q,\mu} \left(\kappa_{2},u\right) + \Theta_{q,\mu} \left( -\kappa_{2},-u \right) \right] \cr & = & \sum_{k_{2}\neq0} \frac{2u\left(\mu^{2}+q^{2}\right) \left(\kappa_{2}+i\mu u\right)}{\mu^{2} (q^{2}+\mu^{2}u^{2}) \left[ (\kappa_{2}+i\mu u)^2 - q^2 \right] \sqrt{\left(u^{2}-1\right) \left[ \mu^{2}+\left(\kappa_{2}+i\mu u\right)^{2} \right]}} \label{eq:stheta} \end{eqnarray} and \begin{eqnarray} S_{\Delta} & = & \sum_{k_{2}\neq n_{q}}\left[\Delta_{q,\mu}\left(\kappa_{2},u\right)+\Delta_{q,\mu}\left(\kappa_{2},-u\right)\right]\cr & = & -\sum_{k_{2}\neq n_{q}}\frac{2i\kappa_{2}q\sqrt{u^{2}-1}}{\mu\omega_{k_{2}}\left(\kappa_{2}^{2}+\mu^{2}u^{2}\right)\left(q^{2}+\mu^{2}u^{2}\right)}\cr & = & \frac{2iq^{2}\sqrt{u^{2}-1}}{\mu\omega_{n_{q}}\left(q^{2}+\mu^{2}u^{2}\right)^{2}}\label{eq:sdelta} \end{eqnarray} where in the last step we used the antisymmetry of the summand. We now proceed by obtaining an integral representation of the sum $S_{\Theta}$. Analytically continuing the summand of (\ref{eq:stheta}) into the complex $\kappa_{2}$ plane, we find a pair of branch cuts and two simple poles. However, this time the branch points of the cuts lie at $i\mu\left(\pm1-u\right)$, and since $u>1$, the upper cut intersects the real axis. The $k_{2}=n_{q}$ terms of the double sum in (\ref{eq:separsum}) were separated for precisely this reason.
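The antisymmetry step leading to (\ref{eq:sdelta}) can be verified numerically: the summand is odd under $\kappa_{2}\rightarrow-\kappa_{2}$, so the sum over all of $\mathbb{Z}$ vanishes and removing the $k_{2}=n_{q}$ term leaves exactly minus that term. A sketch with sample values of $\mu$, $L$, $u$ and $n_{q}$ (illustrative assumptions):

```python
import math

# Summand of S_Delta before the last step of (eq:sdelta):
#   d(k2) = -2i kappa2 q sqrt(u^2-1)
#           / (mu * omega_{k2} * (kappa2^2 + mu^2 u^2) * (q^2 + mu^2 u^2)),
# odd in kappa2.  Sample parameters:
mu, L, n_q, u = 1.0, 2.0, 3, 1.7
q = 2.0 * math.pi * n_q / L

def d(k2):
    kap = 2.0 * math.pi * k2 / L
    om = math.sqrt(mu**2 + kap**2)
    return (-2j * kap * q * math.sqrt(u**2 - 1)
            / (mu * om * (kap**2 + mu**2 * u**2) * (q**2 + mu**2 * u**2)))

K = 50   # any symmetric range containing n_q: odd terms cancel pairwise
s = sum(d(k) for k in range(-K, K + 1) if k != n_q)

om_nq = math.sqrt(mu**2 + q**2)
closed = 2j * q**2 * math.sqrt(u**2 - 1) / (mu * om_nq * (q**2 + mu**2 * u**2)**2)

assert abs(s - closed) < 1e-12
```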
Now an integral representation can be achieved by writing $S_{\Theta}$ as a sum of two contour integrals \begin{equation} S_{\Theta}=\frac{L}{2\pi}\left[\intop_{C_{1}}dz\frac{e^{iLz}}{e^{iLz}-1}f_{\Theta}\left(z\right) + \intop_{C_{2}}dz\frac{e^{iLz}}{e^{iLz}-1}f_{\Theta}\left(z\right)\right]\label{eq:contour2} \end{equation} with \begin{equation} f_{\Theta}\left(z\right)=\frac{2u\left(\mu^{2}+q^{2}\right)\left(z+i\mu u\right)}{\mu^{2}\left(z-q+i\mu u\right)\left(z+q+i\mu u\right)\left(q^{2}+\mu^{2}u^{2}\right)\sqrt{\left(u^{2}-1\right)\left[\mu^{2}+\left(z+i\mu u\right)^{2}\right]}}. \end{equation} The closed contours $C_{1}$ and $C_{2}$ are chosen such that $C_{1}$ goes from $-\infty-i\epsilon$ to $-2\pi L^{-1}-i\epsilon$ just below the real axis, then from $-2\pi L^{-1}+i\epsilon$ back to $-\infty+i\epsilon$ just above the real axis, while $C_{2}$ is the mirror image of $C_{1}$ with respect to the imaginary axis, except that it is also directed counterclockwise. Both contours can now be inflated so that they wrap tightly around the cuts. As a result of the deformation, the poles of $f_{\Theta}$ at $z=\pm q-i\mu u$ become encircled in the negative direction, which results in additional residual terms.
After the variable changes $u\rightarrow\cosh u$, $v\rightarrow\cosh v$, and extending the integration domain to the whole real line by symmetrization\footnote{The region around the branch-overlapped pole needs special treatment.}, we get an integral representation of $S_{\Theta}$ as \begin{eqnarray} S_{\Theta} & = & \frac{2L}{i\mu^{2}}\frac{e^{\mu Lu}}{e^{\mu Lu}-1} \frac{\sqrt{\mu^{2}+q^{2}}u}{\left(q^{2}+\mu^{2}u^{2}\right)\sqrt{u^{2}-1}}+\frac{L}{i\pi\mu}\intop_{-\infty}^{\infty}dv \left(\lambda\left(u,v\right)s\left(u,v\right)+\mathrm{sing}_{\Theta}\left(u,v\right)\right) \quad \quad \label{eq:sthetint} \end{eqnarray} where \begin{eqnarray} \lambda\left(u,v\right) & = & \frac{e^{\mu L\cosh u}}{e^{\mu L\cosh v}-e^{\mu L\cosh u}}-\frac{e^{\mu L\left(\cosh u+\cosh v\right)}}{e^{\mu L\left(\cosh u+\cosh v\right)}-1}, \\ s\left(u,v\right) & = & \frac{\left(\mu^{2}+q^{2}\right)\cosh u\cosh v}{\left(q^{2}+\mu^{2}\cosh^{2}u\right)\left(q^{2}+\mu^{2}\cosh^{2}v\right)}, \\ \label{Theta singular part} \mathrm{sing}_{\Theta}\left(u,v\right) & = & \frac{2}{\mu L}\frac{1}{u^{2}-v^{2}}\frac{s\left(u,u\right)u}{\sinh u}. \end{eqnarray} The term (\ref{Theta singular part}) comes from the neighbourhood of the branch-overlapped pole. Note that both $\lambda\left(u,v\right)$ and $\mathrm{sing}_{\Theta}\left(u,v\right)$ are singular along the lines $u=\pm v$; their sum is, however, finite everywhere.
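The cancellation of the singularities of $\lambda\left(u,v\right)s\left(u,v\right)$ and $\mathrm{sing}_{\Theta}\left(u,v\right)$ along $u=v$ can be checked numerically; a sketch with sample values of $\mu$, $L$ and $q$ (illustrative assumptions):

```python
import math

# Check that lambda(u,v)*s(u,v) + sing_Theta(u,v) stays bounded as v -> u,
# although each term separately blows up like 1/(v - u).  Sample parameters:
mu, L, q = 1.0, 2.0, 0.7

def s(u, v):
    return ((mu**2 + q**2) * math.cosh(u) * math.cosh(v)
            / ((q**2 + mu**2 * math.cosh(u)**2)
               * (q**2 + mu**2 * math.cosh(v)**2)))

def lam(u, v):
    a, b = mu * L * math.cosh(u), mu * L * math.cosh(v)
    return (math.exp(a) / (math.exp(b) - math.exp(a))
            - math.exp(a + b) / (math.exp(a + b) - 1.0))

def sing_theta(u, v):
    return (2.0 / (mu * L)) * s(u, u) * u / (math.sinh(u) * (u**2 - v**2))

u = 0.9
vals = [lam(u, u + d) * s(u, u + d) + sing_theta(u, u + d)
        for d in (1e-2, 1e-3, 1e-4)]

# each piece alone is huge close to the diagonal...
assert abs(sing_theta(u, u + 1e-4)) > 1e3
# ...but the combination remains of order one
assert all(abs(x) < 50.0 for x in vals)
```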
At this stage, we can represent the original double sum as a formula containing the following double integral \begin{eqnarray} \sum_{k_{1},k_{2}\in\mathbb{Z}}D_{1}\left(k_{1},k_{2}\right) & = & L^{2}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\intop_{-\infty}^{\infty}\frac{dv}{2\pi}\coth\left(\frac{\mu L}{2}\cosh u\right)\left(\lambda\left(u,v\right)s\left(u,v\right)+\mathrm{sing}_{\Theta}\left(u,v\right)\right)\nonumber \\ & & +\left(\text{other terms}\right)\label{eq:DintPV} \end{eqnarray} Since the double integral is absolutely convergent, we can perform a symmetrization of the integrand as \begin{equation} \intop_{-\infty}^{\infty}\intop_{-\infty}^{\infty}dudv\:f\left(u,v\right)=\frac{1}{2}\intop_{-\infty}^{\infty} \intop_{-\infty}^{\infty}dudv\:\left(f\left(u,v\right)+f\left(v,u\right)\right) \label{eq:symmetriz} \end{equation} which leaves the value of the integral unchanged. Upon this transformation, the first term of (\ref{eq:DintPV}) becomes \begin{equation} L^{2}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\intop_{-\infty}^{\infty}\frac{dv}{2\pi}\frac{1+e^{\mu L\cosh u}+e^{\mu L\cosh v}-3e^{\mu L\left(\cosh u+\cosh v\right)}}{2\left(e^{\mu L\cosh u}-1\right)\left(e^{\mu L\cosh v}-1\right)}s\left(u,v\right)\label{eq:dterm} \end{equation} which can be further simplified as \begin{eqnarray} L^{2}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\intop_{-\infty}^{\infty}\frac{dv}{2\pi}\frac{1+e^{\mu L\cosh u}+e^{\mu L\cosh v}-3e^{\mu L\left(\cosh u+\cosh v\right)}}{2\left(e^{\mu L\cosh u}-1\right)\left(e^{\mu L\cosh v}-1\right)}s\left(u,v\right) & =\nonumber \\ -\frac{L^{2}}{\mu^{2}}\left(\frac{3}{8}+\intop_{-\infty}^{\infty}\frac{du}{2\pi}\frac{1}{e^{\mu L\cosh u}-1}\frac{\mu\sqrt{\mu^{2}+q^{2}}\cosh u}{q^{2}+\mu^{2}\cosh^{2}u}\right)\label{eq:orig_result} \end{eqnarray} if we note that \begin{equation} \frac{1+e^{\mu L\cosh u}+e^{\mu L\cosh v}-3e^{\mu L\left(\cosh u+\cosh v\right)}}{2\left(e^{\mu L\cosh u}-1\right)\left(e^{\mu L\cosh v}-1\right)}=-\frac{3}{2}-\frac{1}{e^{\mu 
L\cosh u}-1}-\frac{1}{e^{\mu L\cosh v}-1} \end{equation} We now calculate the integral of the symmetrized second term of the integrand. For brevity, we introduce the function \[ \mathrm{sing}\left(u,v\right)=\coth\left(\frac{\mu L}{2}\cosh u\right)\mathrm{sing}_{\Theta}\left(u,v\right). \] In this notation, the symmetrized integral has the following form \[ \frac{L^{2}}{2}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\intop_{-\infty}^{\infty}\frac{dv}{2\pi}\left(\mathrm{sing}\left(u,v\right)+\mathrm{sing}\left(v,u\right)\right). \] Both terms of this integrand are divergent by themselves; their sum, however, is finite everywhere. To perform the integrations, we notice that due to the symmetry of the integrand, \begin{equation} \frac{L^{2}}{2}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\intop_{-\infty}^{\infty}\frac{dv}{2\pi}\left(\mathrm{sing}\left(u,v\right)+\mathrm{sing}\left(v,u\right)\right)=\frac{4L^{2}}{2}\intop_{0}^{\infty}\frac{du}{2\pi}\intop_{-u}^{u}\frac{dv}{2\pi}\left(\mathrm{sing}\left(u,v\right)+\mathrm{sing}\left(v,u\right)\right) \end{equation} which we regularize\footnote{This regularization comes from regularizing the contour integral around the overlapped pole and then performing the change of variables.} as \begin{eqnarray} 2L^{2}\lim_{\epsilon\rightarrow0}\left[\intop_{\epsilon}^{\infty}\frac{du}{2\pi}\intop_{-u+\epsilon}^{u-\epsilon}\frac{dv}{2\pi}\mathrm{sing}\left(u,v\right)+\intop_{\epsilon}^{\infty}\frac{du}{2\pi}\intop_{-u+\epsilon}^{u-\epsilon}\frac{dv}{2\pi}\mathrm{sing}\left(v,u\right)\right] & =\nonumber \\ 2L^{2}\lim_{\epsilon\rightarrow0}\left[\intop_{\epsilon}^{\infty}\frac{du}{2\pi}\intop_{-u+\epsilon}^{u-\epsilon}\frac{dv}{2\pi}\mathrm{sing}\left(u,v\right)+2\intop_{0}^{\infty}\frac{du}{2\pi}\intop_{u+\epsilon}^{\infty}\frac{dv}{2\pi}\mathrm{sing}\left(u,v\right)\right]\label{eq:singint} \end{eqnarray} In the last step we made use of the identity \begin{equation} 
\intop_{\epsilon}^{\infty}\frac{du}{2\pi}\intop_{-u+\epsilon}^{u-\epsilon}\frac{dv}{2\pi}=\intop_{-\infty}^{\infty}\frac{dv}{2\pi}\intop_{v+\epsilon}^{\infty}\frac{du}{2\pi}, \end{equation} and the fact that $\mathrm{sing}\left(u,v\right)$ is even in $v$, together with the freedom to switch the labeling of integration variables. Now the integrals over $v$ in (\ref{eq:singint}) can be performed. Combining the remaining $u$ integrals, we obtain \begin{equation} \frac{8L}{\mu}\lim_{\epsilon\rightarrow0}\left[-\intop_{0}^{\epsilon}\frac{du}{\left(2\pi\right)^{2}}\frac{\tilde{s}\left(u\right)}{\sinh u}\mathrm{arctanh}\left(\frac{u}{u+\epsilon}\right)+\frac{1}{2}\intop_{\epsilon}^{\infty}\frac{du}{\left(2\pi\right)^{2}}\frac{\tilde{s}\left(u\right)}{\sinh u}\ln\left(1-\frac{2\epsilon}{\epsilon+2u}\right)\right] \end{equation} with \begin{equation} \tilde{s}\left(u\right)=s\left(u,u\right)\coth\left(\frac{\mu L}{2}\cosh u\right). \end{equation} Both integrands approximate Dirac $\delta$-like peaks centered at $u=0$ in the $\epsilon\rightarrow0$ limit. Thus, we approximate the regular part $\tilde{s}\left(u\right)$ with its value at the top of the peaks, and integrate the singular part analytically. Finally, taking the $\epsilon\rightarrow0$ limit, we get \begin{align} \frac{L^{2}}{2}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\intop_{-\infty}^{\infty}\frac{dv}{2\pi} \left(\mathrm{sing}\left(u,v\right)+\mathrm{sing}\left(v,u\right)\right) =\;& \frac{L}{\mu\left(\mu^{2}+q^{2}\right)\pi^{2}}\coth\left(\frac{\mu L}{2}\right) \bigg( \mathrm{Li}_{2}\left(-2\right) \cr & +\frac{1}{2}\mathrm{Li}_{2}\left(\frac{1}{4}\right)-\frac{\pi^{2}}{6}+\left(\ln\:2\right)^{2}\bigg) \label{eq:sing_result} \end{align} where $\mathrm{Li}_{2}\left(x\right)$ is the dilogarithm function \begin{equation} \mathrm{Li}_{2}\left(x\right)=\intop_{0}^{\infty}\frac{t\,dt}{e^{t}/x-1},\quad x\in\mathbb{C}\setminus[1,\infty).
\end{equation} Using the above integral representation, the Abel identity \begin{equation} \mathrm{Li}_{2}\left(\frac{x}{1-y}\right)+\mathrm{Li}_{2}\left(\frac{y}{1-x}\right)-\mathrm{Li}_{2}\left(\frac{xy}{\left(1-x\right)\left(1-y\right)}\right)=\mathrm{Li}_{2}\left(x\right)+\mathrm{Li}_{2}\left(y\right)+\ln\left(1-x\right)\ln\left(1-y\right) \end{equation} with $x=-1$ and $y=\frac{1}{2}$, and the special value \begin{equation} \mathrm{Li}_{2}\left(-1\right)=-\frac{\pi^{2}}{12}, \end{equation} we obtain \begin{equation} \mathrm{Li}_{2}\left(-2\right)+\frac{1}{2}\mathrm{Li}_{2}\left(\frac{1}{4}\right)-\frac{\pi^{2}}{6}+\left(\ln\:2\right)^{2}=-\frac{\pi^{2}}{4}.\label{eq:Li_id} \end{equation} Putting everything together, the double sum of (\ref{eq:separsum}) can be written as \begin{equation} \sum_{k_{1},k_{2}\in\mathbb{Z}}D_{1}\left(k_{1},k_{2}\right)=C_{1}+C_{2}+C_{\Delta}+C_{r}+C_{ord}+C_{sing}\label{eq:sumc} \end{equation} where: \begin{itemize} \item $C_{1}$ and $C_{2}$ contain the single sums separated in (\ref{eq:separsum}).
Using (\ref{eq:singsum1}) and (\ref{eq:singsum2}), \begin{eqnarray} C_{1} & = & \frac{L}{\omega_{n_{q}}\mu^{2}}\left(\frac{1}{2\pi}+\mu L\intop_{-\infty}^{\infty}\frac{du}{2\pi}\frac{e^{\mu L\cosh u}}{\left(e^{\mu L\cosh u}-1\right)^{2}}\cosh u\right)\\ C_{2} & = & \frac{L}{4\omega_{n_{q}}^{2}\mu}\coth\left(\frac{\mu L}{2}\right)-\frac{L}{2\omega_{n_{q}}}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\frac{\coth\left(\frac{\mu L}{2}\cosh u\right)}{\omega_{n_{q}}^{2}+\mu^{2}\sinh^{2}u} \end{eqnarray} \item $C_{\Delta}$ stands for the term coming from (\ref{eq:sdelta}) \begin{equation} C_{\Delta}=-\intop_{-\infty}^{\infty}\frac{du}{2\pi}\:\frac{L}{\omega_{n_{q}}}\coth\left(\frac{\mu L\cosh u}{2}\right)\frac{q^{2}\sinh^{2}u}{\left(q^{2}+\mu^{2}\cosh^{2}u\right)^{2}} \end{equation} \item $C_{r}$ contains the residual terms emerging from the contour deformation of the integral representation of $S_{\Theta}$ appearing in (\ref{eq:sthetint}) \begin{equation} C_{r}=\intop_{-\infty}^{\infty}\frac{du}{2\pi}\:\frac{L^{2}}{\mu}\coth\left(\frac{\mu L\cosh u}{2}\right)\frac{e^{\mu L\cosh u}}{e^{\mu L\cosh u}-1}\frac{\sqrt{\mu^{2}+q^{2}}\cosh u}{\left(q^{2}+\mu^{2}\cosh^{2}u\right)} \end{equation} \item $C_{ord}$ is the symmetrized double integral contribution (\ref{eq:orig_result}) \begin{equation} C_{ord}=-\frac{L^{2}}{\mu^{2}}\left(\frac{3}{8}+\intop_{-\infty}^{\infty}\frac{du}{2\pi}\frac{1}{e^{\mu L\cosh u}-1}\frac{\mu\sqrt{\mu^{2}+q^{2}}\cosh u}{q^{2}+\mu^{2}\cosh^{2}u}\right) \end{equation} \item Finally, $C_{sing}$ is the symmetrized singular contribution (\ref{eq:sing_result}) \begin{equation} C_{sing}=-\frac{L}{4\mu\left(\mu^{2}+q^{2}\right)}\coth\left(\frac{\mu L}{2}\right). \end{equation} \end{itemize} Combining these terms, significant simplifications can be achieved. First of all, notice that $C_{sing}$ is cancelled by a similar term appearing in $C_{2}$. As a next step, we combine $C_{\Delta}$ and the integral part of $C_{2}$, and perform integration by parts.
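The dilogarithm combination (\ref{eq:Li_id}) entering $C_{sing}$ can be verified numerically with a simple quadrature for $\mathrm{Li}_{2}$; a pure-Python sketch:

```python
import math

# Numeric check of (eq:Li_id):
#   Li2(-2) + Li2(1/4)/2 - pi^2/6 + (ln 2)^2 = -pi^2/4.
# We use Li2(x) = -\int_0^1 ln(1 - x s)/s ds (valid for real x <= 1),
# evaluated with composite Simpson; the s -> 0 limit of the integrand is x.
def li2(x, n=20000):
    def g(s):
        return x if s == 0.0 else -math.log(1.0 - x * s) / s
    h = 1.0 / n
    total = g(0.0) + g(1.0)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * g(i * h)
    return total * h / 3.0

lhs = li2(-2.0) + 0.5 * li2(0.25) - math.pi**2 / 6 + math.log(2.0)**2
assert abs(lhs + math.pi**2 / 4) < 1e-9
```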
The resulting boundary term cancels the explicit term appearing in $C_{1}$. $C_{r}$ contains an infinite-volume term that can be separated and integrated analytically. Then, the remaining part of the integral in $C_{r}$, the integral part of $C_{ord}$, the integral appearing in $C_{1}$ and the result of the previous integration by parts can be combined and lead to the following compact representation of the full double sum: \begin{equation} \sum_{k_{1},k_{2}}D_{1}\left(k_{1},k_{2}\right)=\frac{L^{2}}{\mu^{2}}\left(\frac{1}{8}+3\intop_{-\infty}^{\infty}\frac{du}{2\pi}\frac{e^{\mu L\cosh u}}{\left(e^{\mu L\cosh u}-1\right)^{2}}\frac{1}{\cosh\left(u-\theta\right)}\right)\label{eq:dsum1-int-1} \end{equation} where we introduced the rapidity variable $\theta$ via $q=\mu\sinh\theta$. \subsection{Expansion of the form factor} We first provide an integral representation of the double sum $\sum_{k_{1},k_{2}}D_{2}\left(k_{1},k_{2}\right)$; we then calculate the large volume expansion of the full form factor $\left\langle 0\left(b\right)\left|\varphi\right|q\left(b\right)\right\rangle $. The transformation of \begin{equation} \sum_{k_{1},k_{2}}D_{2}\left(k_{1},k_{2}\right) \end{equation} with \begin{align} D_{2}\left(k_{1},k_{2}\right) =\;& \frac{1}{\omega_{k_{1}}\omega_{k_{2}}\omega_{k_{1}+k_{2}-n_{q}}} \bigg[ \Bigl(\frac{1}{\omega_{k_{1}} +\omega_{k_{2}}+\omega_{k_{1}+k_{2}-n_{q}}+\omega_{n_{q}}}\Bigr)^{2} \cr & - \Bigl(\frac{1}{\omega_{k_{1}}+\omega_{k_{2}}+\omega_{k_{1}+k_{2}-n_{q}}-\omega_{n_{q}}}\Bigr)^{2}\bigg] \end{align} proceeds in parallel with the steps performed for $D_{1}\left(k_{1},k_{2}\right)$.
We first separate the $k_{2}=n_{q}$ terms analogously to (\ref{eq:separsum}) \begin{equation} \sum_{k_{1},k_{2}\in\mathbb{Z}}D_{2}\left(k_{1},k_{2}\right)=\sum_{k_{1}\in\mathbb{Z}}\sum_{k_{2}\neq n_{q}}D_{2}\left(k_{1},k_{2}\right)-\frac{1}{4\omega_{n_{q}}}\sum_{k_{1}\in\mathbb{Z}}\frac{1}{\omega_{k_{1}}^{4}} +\frac{1}{4\omega_{n_{q}}}\sum_{k_{1}\in\mathbb{Z}}\frac{1}{\omega_{k_{1}}^{2}}\frac{1}{\left(\omega_{k_{1}} +\omega_{n_{q}}\right)^{2}}\label{eq:separsum2} \end{equation} These separated terms can be easily calculated using the derivative of (\ref{eq:singsum2}) with respect to $A$ and the formula \begin{equation} \sum_{k\in\mathbb{Z}}\frac{1}{\omega_{k}^{4}}=\frac{2L\coth\left(\frac{\mu L}{2}\right)+\mu L^{2}\mathrm{csch}^{2}\left(\frac{\mu L}{2}\right)}{8\mu^{3}}.\label{eq:omega4} \end{equation} Now we turn the sum over $k_{1}$ into an integral. This can be done in a straightforward manner. Using the variable $\kappa_{2}=2\pi k_{2}L^{-1}$, we get \begin{align} \sum_{k_{1}\in\mathbb{Z}}\sum_{k_{2}\neq n_{q}}D_{2}\left(k_{1},k_{2}\right) = & \sum_{k_{2}\neq n_{q}}\frac{L}{2\pi}\intop_{-\infty}^{\infty}du\coth\left(\frac{\mu L}{2}\cosh u\right) \left(\Xi\left(k_{2},\omega_{n_{q}},q\right) \right. \cr & \left. + \Xi\left(k_{2},\omega_{n_{q}},-q\right)-\Xi\left(k_{2},-\omega_{n_{q}},q\right)-\Xi\left(k_{2}, -\omega_{n_{q}},-q\right)\right) \label{eq:D2} \end{align} with \begin{equation} \Xi\left(k_{2},A,q\right)=\frac{\left(\sqrt{\mu^{2}+\left(q-\kappa_{2}\right)^{2}}+\sqrt{\mu^{2}+\left(\kappa_{2}+i\mu\cosh u\right)^{2}}-i\mu\sinh u+A\right)^{-2}}{\sqrt{\mu^{2}+\left(q-\kappa_{2}\right)^{2}}\sqrt{\mu^{2}+\left(\kappa_{2}+i\mu\cosh u\right)^{2}}}. 
\end{equation} By means of equivalent transformations including a shift of the summation variable $\kappa_{2}=\tilde{\kappa}_{2}+q$, symmetrization of the integrand and algebraic manipulations, (\ref{eq:D2}) simplifies miraculously to a sum of two terms, plus the same sum with the sign of $q$ switched, each term containing only a single pair of branch cuts: \begin{align}\label{eq:D2sumG} \sum_{\substack{k_{1}\in\mathbb{Z} \\ k_2 \neq n_q}} D_{2}(k_{1},k_{2}) =& -\frac{L\omega_{n_{q}}}{16\mu^{2}} \sum_{k_2\neq0} \intop_{-\infty}^{\infty} \frac{du}{2\pi} \bigg[ \frac{\coth \big(\frac{\mu L}{2}\cosh u \big)}{\kappa_{2}^{2} (\kappa_{2}-q+i\mu\cosh u )^{2} (q+i\mu\cosh u )^{2}} \cr & \times \Big( \mathcal{G}(\kappa_{2}-q,q) + \mathcal{G}(-\kappa_{2}-i\mu\cosh u,q) \Big) + (q \rightarrow - q ) \bigg] \end{align} where \begin{eqnarray} \mathcal{G}\left(x,q\right) & =& \frac{4\mu^{2}\left(x+q\right)^{2}-2i\mu P_{1}\left(x,q\right)\cosh u-2P_{2}\left(x,q\right)\left[2\left(x+q\right)\cosh2u+i\mu\cosh3u\right]}{\sqrt{\mu^{2}+x^{2}}}\quad \quad \cr P_{1}\left(x,q\right) & =& 4x^{3}+6x^{2}q+x\left(\mu^{2}+4q^{2}\right)-\mu^{2}q \cr P_{2}\left(x,q\right) & =& 2x^{2}q-x\mu^{2}+\mu^{2}q \nonumber \end{eqnarray} Now we proceed to transform the remaining sum (over $k_{2}$) in these terms. Let us first examine the sum containing $\mathcal{G}\left(\kappa_{2}-q,q\right)$. The resulting complex function contains the usual set of poles on the real line and a pair of branch cuts running from $z=\pm i\mu+q$ to complex infinity. It also has one pole of order 3 at $z=0$ and another of order 2 at $z=q-i\mu\cosh u$. The latter is overlapped by the lower branch cut. After the deformation of the contour, the pole of order 3 is encircled in the clockwise direction. Other finite terms come from the overlapped second-order pole, which cancel the divergences of the branch cut integral.
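The closed form (\ref{eq:omega4}) for $\sum_{k}\omega_{k}^{-4}$ used above can be checked numerically against a direct partial sum; a sketch with sample values of $\mu$ and $L$:

```python
import math

# Numeric check of (eq:omega4):
#   sum_{k in Z} 1/omega_k^4
#     = [2 L coth(mu L/2) + mu L^2 csch(mu L/2)^2] / (8 mu^3),
# with omega_k = sqrt(mu^2 + (2 pi k / L)^2).  Sample values:
mu, L = 1.3, 2.4

def omega(k):
    return math.sqrt(mu**2 + (2.0 * math.pi * k / L)**2)

K = 20000
partial = sum(1.0 / omega(k)**4 for k in range(-K, K + 1))

x = mu * L / 2.0
coth, csch = 1.0 / math.tanh(x), 1.0 / math.sinh(x)
closed = (2.0 * L * coth + mu * L**2 * csch**2) / (8.0 * mu**3)

# the tail beyond |k| = K is O(1/K^3), far below this tolerance
assert abs(partial - closed) < 1e-9
```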
We obtain \begin{equation} -\frac{L\omega_{n_{q}}}{16\mu^{2}}\sum_{k_{2}\neq0}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\frac{\coth\left(\frac{\mu L}{2}\cosh u\right)\mathcal{G}\left(\kappa_{2}-q,q\right)}{\kappa_{2}^{2}\left(\kappa_{2}-q+i\mu\cosh u\right)^{2}\left(q+i\mu\cosh u\right)^{2}}=I_{1}^{+}\left(q\right)+I_{1}^{-}\left(q\right)+R_{1}\left(q\right)\label{eq:dsum2int2q} \end{equation} where \begin{align} I_{1}^{-}\left(q\right) =\;& -\frac{L^{2}\omega_{n_{q}}}{8\mu^{2}}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\intop_{-\infty}^{\infty}\frac{dv}{2\pi} \bigg[ \coth\Bigl(\frac{\mu L}{2}\cosh u\Bigr) \frac{e^{\mu L\cosh v}}{e^{\mu L\cosh v}-1} \frac{G\left(\cosh u,\cosh v,q\right)}{\left(\cosh u-\cosh v\right)^{2}} \cr & +\mathrm{sing}_{1}\left(u,v,q\right) \bigg] \label{eq:I1minus} \end{align} \begin{align} I_{1}^{+}\left(q\right) =\;& -\frac{L^{2}\omega_{n_{q}}}{8\mu^{2}}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\intop_{-\infty}^{\infty}\frac{dv}{2\pi}\frac{\coth\left(\frac{\mu L}{2}\cosh u\right)}{e^{\mu L\cosh v}-1}\frac{G\left(\cosh u,-\cosh v,q\right)}{\left(\cosh u+\cosh v\right)^{2}} \label{eq:I1plus} \\ R_{1}\left(q\right) =\;& \frac{L}{192\mu^{2}}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\frac{\coth\left(\frac{\mu L}{2}\cosh u\right)P_{R1}\left(\cosh u,q\right)}{\left(q-i\mu\cosh u\right)^{4}\left(q+i\mu\cosh u\right)^{2}} \end{align} with \begin{eqnarray} G\left(x,y,q\right) & = & \frac{-4}{\left(q+i\mu x\right)^{2}\left(q-i\mu y\right)^{2}}\left[\mu^{2}xy\left(-1+x^{2}-xy+y^{2}\right)+q^{2}\left(1-xy-y^{2}+x^{2}\left(-1+2y^{2}\right)\right)\right.\nonumber \\ & & \left.+i\mu q\left(x-y+y^{3}-2x^{2}y^{3}+x^{3}\left(-1+2y^{2}\right)\right)\right] \end{eqnarray} \begin{align} \mathrm{sing}_{1}\left(u,v,q\right) =& -2\coth\left( \frac{\mu L}{2}\cosh u \right) \frac{e^{\mu L\cosh u}}{e^{\mu L\cosh u}-1} \left\{ \frac{u^{2}+v^{2}}{\left(u^{2}-v^{2}\right)^{2}}\frac{G\left(\cosh u,\cosh u,q\right)}{\sinh^{2}u} \right.
\cr & + \frac{u}{\sinh u}\frac{1}{u^{2}-v^{2}} \left[ \left(\frac{\mu L}{e^{\mu L\cosh u}-1}+\frac{\cosh u}{\sinh^{2}u} \right) G(\cosh u,\cosh u,q) \right. \cr & \left. \left. - \frac{\partial G\left(\cosh u,y,q\right)}{\partial y}\bigg|_{y=\cosh u} \right] \right\} \end{align} and $P_{R1}\left(x,q\right)$ is a complicated polynomial in $x$, $q$, $L$ and $\mu$. We can immediately calculate the infinite volume limit and the first L\"uscher correction of (\ref{eq:dsum2int2q}) and its opposite momentum pair. For the infinite volume limit, both integrals in (\ref{eq:I1minus}) and (\ref{eq:I1plus}) can be taken analytically. For the first L\"uscher correction, one integral can be done analytically, and we are led to a formula containing only a single integral, as expected. We note that higher L\"uscher corrections seem to be much harder to obtain in the form of explicit single-integral formulas. In the following, we use the rapidity parametrization $q=\mu\cosh\theta$. The term $I_{1}^{+}\left(q\right)+I_{1}^{+}\left(-q\right)$ does not contribute to the infinite volume limit, and gives a first-order L\"uscher contribution \begin{eqnarray} \widetilde{I_{1}^{+}} & = & \frac{L^{2}}{\pi\mu^{3}} \intop_{-\infty}^{\infty} \frac{du}{2\pi} \frac{e^{-\mu L\cosh u}}{\left(\cosh2\theta +\cosh2u\right)^{3}} \bigg[ - \cosh\theta \Big( (3+\cosh4\theta) \cosh2u + \cosh2\theta (3+\cosh4u) \Big) \nonumber \\ & & + \pi\cosh u (-3 + \cosh4\theta - 2\cosh2\theta\cosh2u) \sinh^{2}u \nonumber \\ & & - u\cosh\theta (-3 + \cosh4\theta - 2\cosh2\theta\cosh2u ) \sinh2u \nonumber \\ & & - \theta\cosh\theta (-3 + \cosh4u - 2\cosh2\theta\cosh2u ) \sinh2\theta \bigg] \label{eq:I1+L} \end{eqnarray} The term $I_{1}^{-}\left(q\right)+I_{1}^{-}\left(-q\right)$ has the infinite volume limit \begin{equation} I_{1}^{\infty}=\frac{L^{2}\left(\pi^{2}-4\right)\cosh\theta}{8\pi^{2}\mu^{3}}\label{eq:I1inf} \end{equation} and admits a first L\"uscher correction \begin{eqnarray} \widetilde{I_{1}^{-}} & = &
\frac{3L^{2}}{\pi\mu^{3}}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\frac{e^{-\mu L\cosh u}}{\left(\cosh2\theta+\cosh2u\right)^{3}} \bigg[ -\cosh\theta \Big( \left(3+\cosh4\theta \right) \cosh2u + \cosh2\theta \left(3+\cosh4u\right) \Big) \nonumber \\ & & - \pi\cosh u \left(-3+\cosh4\theta-2\cosh2\theta\cosh2u\right)\sinh^{2}u\nonumber \\ & & - u \cosh\theta \left(-3+\cosh4\theta-2\cosh2\theta\cosh2u\right)\sinh2u\nonumber \\ & & - \theta \cosh\theta \left(-3+\cosh4u-2\cosh2\theta\cosh2u\right)\sinh2\theta \bigg] \label{eq:I1-L} \end{eqnarray} The residual part $R_{1}\left(q\right)+R_{1}\left(-q\right)$ contributes to the infinite volume limit with \begin{equation} R_{1}^{\infty}=-\frac{L^{2}\cosh\theta}{8\mu^{3}}+\frac{L\left(-1+2\theta\coth2\theta\right)}{\mu^{4}\pi\sinh^{2}2\theta}\label{eq:R1inf} \end{equation} while its first L\"uscher correction is \begin{align} \widetilde{R_{1}} =\;& -\frac{L}{\mu^{4}} \intop_{-\infty}^{\infty} \frac{du}{2\pi} \frac{e^{-\mu L \cosh u}}{\left(\cosh2u+\cosh2\theta\right)^4} \bigg[ -16 + 2\cosh4\theta + \cosh6u\cosh2\theta \cr & + \cosh4u (2-4\cosh4\theta) + \cosh2u\left(-18\cosh2\theta+\cosh6\theta\right) \bigg] \cr & - \frac{L^{2}}{\mu^{3}}\intop_{-\infty}^{\infty}\frac{du}{2\pi} \frac{e^{-\mu L \cosh u}}{(\cosh2u+\cosh2\theta)^4} 2\cosh u \sinh^2 u \cr & \times \bigg[4\cosh2u + \cosh2\theta (4 + \cosh4u - \cosh4\theta) \bigg] \label{eq:R1L} \end{align} To deal with the other sums of (\ref{eq:D2sumG}) containing $\mathcal{G}\left(-\kappa_{2}-i\mu\cosh u,q\right)$, it is expedient to desingularize the summand with a small auxiliary parameter $a$, in the following way: \begin{eqnarray} && -\frac{L\omega_{n_{q}}}{16\mu^{2}}\sum_{k_{2}\neq0}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\frac{\coth\left(\frac{\mu L}{2}\cosh u\right)\mathcal{G}\left(-\kappa_{2}-i\mu\cosh u,q\right)}{\left(\kappa_{2}-\mu a\right)\left(\kappa_{2}+\mu a\right)\left(\kappa_{2}-q+i\mu\cosh u\right)^{2}\left(q+i\mu\cosh u\right)^{2}} \cr &=& 
I_{2a}^{+}\left(q\right)+I_{2a}^{-}\left(q\right)+R_{2a}\left(q\right)\label{eq:dsum2int2q-1} \end{eqnarray} Here $I_{2a}^{+}\left(q\right)$ contains the upper branch cut integral surrounding a single pole, similar to the one arising in the computation of $D_{1}\left(k_{1},k_{2}\right)$, plus the sum of residues $r_{2a}\left(u,v,q\right)$ of the regularized poles at $z=\pm\mu a$. $I_{2a}^{-}\left(q\right)$ denotes the lower, regular branch cut integral, while $R_{2a}\left(q\right)$ is the residual term coming from the pole at $z=q-i\mu\cosh u$. The limit $a\rightarrow0$ can immediately be taken for $I_{2a}^{-}\left(q\right)$ and $R_{2a}\left(q\right)$, and the corresponding L\"uscher and infinite volume corrections are easily obtained (after the final momentum combination) as \begin{equation} I_{2}^{-,\infty}=-\frac{L^{2}\left(4+\pi^{2}\right)\cosh\theta}{8\mu^{3}\pi^{2}}\label{eq:I2-inf} \end{equation} \begin{eqnarray} \widetilde{I_{2}^{-}} & = & 2\widetilde{I_{1}^{+}}\label{eq:I2-L} \end{eqnarray} \begin{equation} R_{2}^{\infty}=\frac{L^{2}\cosh\theta}{4\mu^{3}}\label{eq:R2inf} \end{equation} \begin{align} \widetilde{R_{2}} =&\, \frac{6L^{2}}{\mu^{3}} \intop_{-\infty}^{\infty} \frac{du}{2\pi} \frac{e^{-\mu L \cosh u}}{\left(\cosh2u + \cosh2\theta \right)^{3}} \cosh u\sinh^{2}u \cr & \times \bigg[ 3+\cosh\left(2\left(u-\theta\right)\right) - \cosh4\theta + \cosh\left(2\left(u+\theta\right)\right) \bigg].\label{eq:R2L} \end{align} Extracting the finite volume corrections of $I_{2a}^{+}\left(q\right)$ is a harder task. The form of the term is \begin{align} I_{2a}^{+}\left(q\right) =\;& \frac{L^{2}\omega_{n_{q}}}{8\mu^{2}} \intop_{-\infty}^{\infty} \frac{du}{2\pi} \intop_{-\infty}^{\infty} \frac{dv}{2\pi} \left[\coth\left(\frac{\mu L}{2}\cosh u\right) \frac{e^{\mu L\cosh u}}{e^{\mu L\cosh u} - e^{\mu L\cosh v}} \right. \cr & \left.
\times \frac{G\left(\cosh u,\cosh v,q\right)}{ a^{2}+\left(\cosh u-\cosh v\right)^2 } + \mathrm{sing}_{2a}\left(u,v,q\right) \right] + r_{2a}\left(q\right)\label{eq:I2aplus} \end{align} with \begin{equation} \mathrm{sing}_{2a}\left(u,v,q\right)=-\frac{2}{a^{2}\mu L}\coth\left(\frac{\mu L}{2}\cosh u\right)\frac{u}{\sinh u}\frac{G\left(\cosh u,\cosh u,q\right)}{u^{2}-v^{2}} \end{equation} To obtain the first L\"uscher correction, we apply the symmetrization transformation (\ref{eq:symmetriz}) to the double integral appearing in (\ref{eq:I2aplus}). The form of the resulting integral is analogous to the one we already encountered in the case of the one-particle energy. In contrast to that calculation, the part involving the function $\mathrm{sing}_{2a}\left(u,v,q\right)$ now does not contribute to the value of the integral, since a factor $\sinh^{2}u$ coming from $G\left(\cosh u,\cosh u,q\right)$ ensures that the regular part of the integrand at $u=0$ vanishes. The symmetrization removes the singularity of the integrand, and one finds that there is no first-order L\"uscher term in the resulting integral. This means that any first-order L\"uscher correction of $I_{2a}^{+}\left(q\right)$ must come solely from the residual term $r_{2a}\left(q\right)$. The complication is that both $r_{2a}\left(q\right)$ and the double integral part of $I_{2a}^{+}\left(q\right)$ are divergent in the $a\rightarrow0$ limit, even after symmetrization. In the following, we outline how to circumvent this problem. In the case of the double integral part, the root of the problem is that even though the series expansion starts at $\mathcal{O}\left(a^{0}\right)$, the integrand becomes divergent as $v\rightarrow\pm u$. At this point, it is convenient to reintroduce the variables $x=\cosh u$, $y=\cosh v$.
In terms of these, we can write \begin{equation} I_{2a}^{+}-r_{2a}\left(q\right)=\intop_{1}^{\infty}dx\intop_{1}^{\infty}dy\frac{f\left(x,y\right)}{a^{2}+\left(x-y\right)^{2}} \end{equation} and we separate this integral as \begin{equation} \intop_{1}^{\infty}dx\intop_{1}^{\infty}dy\frac{f\left(x,y\right)}{a^{2}+\left(x-y\right)^{2}} = J_{1}+J_{2}\label{eq:divsep1} \end{equation} with \begin{eqnarray} J_{1} & = & \intop_{1}^{\infty}dx\intop_{1}^{\infty}dy\frac{1}{a^{2}+\left(x-y\right)^{2}}\left(f\left(x,y\right)-f\left(x,x\right)-\left(y-x\right)\frac{\partial f}{\partial y}\left(x,y=x\right)\right)\label{eq:divsep2}\\ J_{2} & = & \intop_{1}^{\infty}dx\intop_{1}^{\infty}dy\frac{1}{a^{2}+\left(x-y\right)^{2}}\left(f\left(x,x\right)+\left(y-x\right)\frac{\partial f}{\partial y}\left(x,y=x\right)\right)\label{eq:divsep3} \end{eqnarray} The integrand of $J_{1}$ now remains regular in the $a\rightarrow0$ limit, and the related integrals can be performed analytically. On the other hand, $J_{2}$ can be transformed further by returning to the integration measure $\intop_{-\infty}^{\infty}dv$ and shifting the contour, corresponding to the change of variables $v=\tilde{v}-i\pi$. Upon shifting the contour, we have to encircle another pair of poles, appearing at $z=-\mathrm{acosh}\left(-ia+x\right)$ and $z=\mathrm{acosh}\left(ia+x\right)$, respectively. We will call their contribution $r_{3a}\left(q\right)$. Aside from the residual terms, the shifted integrals are again finite as $a\rightarrow0$ and can be evaluated analytically. After momentum combination, these integrals yield the simple infinite-volume contributions \begin{eqnarray} J_{1} & = & -\frac{L^{2}\cosh\theta}{16\mu^{3}}\label{eq:J1inf}\\ J_{2}-r_{3a}\left(q\right) & = & \frac{L^{2}\cosh\theta}{4\pi^{2}\mu^{3}}\label{eq:J2inf} \end{eqnarray} All that remains to be done is the evaluation of the $a\rightarrow0$ limit of the residual contribution $r_{2a}\left(q\right)+r_{3a}\left(q\right)$.
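As a side remark, the regularity of the $J_{1}$ integrand can be checked numerically. The sketch below is illustrative only: it uses an assumed toy integrand $f(x,y)=e^{-(x+y)}$ in place of the actual $f$ of the text (whose explicit form is irrelevant here, since only smoothness near the diagonal matters), and verifies that subtracting the first-order Taylor expansion around $y=x$ removes the diagonal singularity at $a=0$.

```python
import math

# Toy stand-in for the integrand f(x, y) of the separation (divsep1)-(divsep3);
# only its smoothness near the diagonal y = x is relevant for the argument.
def f(x, y):
    return math.exp(-(x + y))

def df_dy(x, y):
    return -math.exp(-(x + y))

def j1_integrand(x, y, a):
    # Integrand of J1: f minus its first-order Taylor expansion around y = x,
    # divided by the regularized kernel a^2 + (x - y)^2.
    num = f(x, y) - f(x, x) - (y - x) * df_dy(x, x)
    return num / (a**2 + (x - y)**2)

x, h = 1.5, 1e-4            # probe the diagonal at y = x + h
raw = f(x, x + h) / h**2    # unsubtracted kernel at a = 0: singular as h -> 0
sub = j1_integrand(x, x + h, 0.0)  # J1 integrand at a = 0: stays finite
limit = 0.5 * math.exp(-2 * x)     # analytic diagonal limit: f_yy(x, x) / 2

print(raw > 1e6)
print(abs(sub - limit) / limit < 1e-3)
```

With the subtraction in place, the $a\to0$ limit can be taken under the integral in $J_{1}$, which is what permits its analytic evaluation above.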
These terms can be expressed in terms of the integrals \begin{equation} \mathcal{J}_{k}\left(a,q\right)=\intop_{1}^{\infty}dy\:j_{k}\left(a,q,y\right)\label{eq:bigJ} \end{equation} \begin{equation} j_{k}\left(a,q,y\right)=\frac{\left(y-1\right)^{k}}{q^{2}+\mu^{2}y^{2}}\frac{1}{\sqrt{1+a^{2}+2iay-y^{2}}}\frac{1}{\sqrt{y^{2}-1}} \end{equation} as follows: \begin{eqnarray}\label{eq:singres} r_{2a}\left(q\right)+r_{3a}\left(q\right) & = & \intop_{1}^{\infty}dy \bigg[ \frac{1}{a^{2}} \bigg( \xi_{22}\left(q\right)\Re\mathrm{e} \left[j_{2}\left(a,q,y\right)\right] + \xi_{21} \left(q\right)\Re\mathrm{e} \left[j_{1}\left(a,q,y\right) \right] \bigg) \cr & & + \frac{1}{a} \bigg( \xi_{11}\left(q\right)\Im\mathrm{m}\left[j_{1}\left(a,q,y\right)\right] + \xi_{10}\left(q\right)\Im\mathrm{m} \left[j_{0}\left(a,q,y\right)\right] \bigg) \cr & & + \xi_{0}\left(q\right)\Re\mathrm{e}\left[j_{0}\left(a,q,y\right)\right] \bigg] +\mathcal{O} \left(a\right) \end{eqnarray} Next, we can separate these integrals analogously to (\ref{eq:divsep1})-(\ref{eq:divsep3}) into a part which can be expanded in $a$ up to $\mathcal{O}\left(a^{0}\right)$ while preserving finiteness, and another part that needs to be calculated explicitly in the regularization. Fortunately, the indefinite integrals $\intop dy\:j_{k}\left(a,q,y\right)$ can be expressed for $k=0,1,2$ in terms of elliptic integrals: \begin{align} \intop dy\:j_{0}\left(a,q,y\right) =\; & \frac{2}{q\mu^{2}\cosh^{2}\theta\sqrt{4+a^{2}}} \left\{ qF\left(c_{2}\left(a\right)\mid c_{3}\left(a\right)\right) \right. \cr & \left. + i\mu \left[ \Pi\left(c_{1}\left(a,q\right), c_{2}\left(a\right) \mid c_{3} \left(a\right) \right) - \Pi\left(c_{1}\left(a,-q\right),c_{2}\left(a\right)\mid c_{3} \left(a\right) \right) \right] \right\} \\ \intop dy\:j_{1}\left(a,q,y\right) =\; & -\frac{2}{q\mu^{2}\cosh^{2}\theta\sqrt{4+a^{2}}} \left[ \left(i\mu+q\right) \Pi\left(c_{1}\left(a,q\right),c_{2}\left(a\right)\mid c_{3}\left(a\right) \right) \right. \cr & \left.
+\left(-i\mu+q\right)\Pi\left(c_{1}\left(a,-q\right),c_{2}\left(a\right)\mid c_{3}\left(a\right)\right)\right] \\ \intop dy\:j_{2}\left(a,q,y\right) =\; & \frac{2i}{q\mu^3 \cosh^{2}\theta \sqrt{4+a^{2}}} \left[ \left(\mu-iq\right)^{2} \Pi\left(c_{1}\left(a,q\right),c_{2}\left(a\right)\mid c_{3}\left(a\right)\right) \right. \cr & \left. -\left(\mu+iq\right)^{2}\Pi\left(c_{1}\left(a,-q\right),c_{2}\left(a\right)\mid c_{3}\left(a\right)\right)\right] \end{align} These formulas enable us to calculate the definite integrals (\ref{eq:bigJ}) by taking the appropriate limits. In taking these limits, it is sometimes useful to apply the identity \begin{equation} \Pi\left(n,i\sinh^{-1}\left(\tan z\right)\mid1-m\right)=\frac{i}{1-n}\left[F\left(z\mid m\right)-n\Pi\left(1-n,z\mid m\right)\right] \end{equation} It should be noted, however, that the Newton--Leibniz formula assumes that the starting and ending points of the integration lie on the same Riemann sheet of the function. Any branch cuts crossed along the line of integration need to be taken care of by hand. In the above formulas, \begin{eqnarray} c_{1}\left(a,q\right) & = & -\frac{a\left(\mu-iq\right)}{\left(2i+a\right)\left(\mu+iq\right)}\\ c_{2}\left(a\right) & = & \sin^{-1}\left(\sqrt{\frac{\left(2i+a\right)\left(1+y\right)}{a\left(-1+y\right)}}\right)\\ c_{3}\left(a\right) & = & \frac{a^{2}}{4+a^{2}} \end{eqnarray} One then needs to make a series expansion of the $\mathcal{J}_{k}$ in $a$, which can be performed by a lengthy calculation.
This yields \begin{eqnarray} \Re e\mathcal{J}_{0} & = & \frac{\pi}{4\mu^{2}\cosh^{2}\theta} +\frac{a\left(1+\ln\left(\frac{a}{2\cosh^{2}\theta}\right)\right)}{2\mu^{2}\cosh^{4}\theta} -\frac{a^{2}\pi\left(21-2\sinh^{2}\theta+\sinh^{4}\theta\right)}{64\mu^{2}\cosh^{6}\theta}+\mathcal{O}\left(a^{3}\right)\nonumber \\ \Re e\mathcal{J}_{1} & = & -\frac{a\left[4\sinh\theta\tan^{-1}\left(\sinh\theta\right)+\sinh^{2}\theta\ln\left(\frac{a}{8}\right) +\ln\left(\frac{2a}{\cosh^{4}\theta}\right)\right]}{4\mu^{2}\cosh^{4}\theta} +\frac{a^{2}\pi\left(12+\cosh^{2}\theta\right)}{64\mu^{2}\cosh^{4}\theta} \cr && +\mathcal{O}\left(a^{3}\right) \cr \Re e\mathcal{J}_{2} & = & -\frac{a\left[\cosh^{2}\theta-4\sinh\theta\tan^{-1}\left(\sinh\theta\right) +2\left(1-\sinh^{2}\theta\right) \ln\left(\frac{\cosh\theta}{2}\right)\right]}{2\mu^{2}\cosh^{4}\theta}-\frac{3\pi a^{2}}{32\mu^{2}\cosh^{2}\theta} \cr && + \mathcal{O}\left(a^{3}\right) \\ \Im m\mathcal{J}_{0} & = & \frac{2\tan^{-1}\left(\sinh\theta\right)+\sinh\theta\ln\left(\frac{a}{8}\right)}{2\mu^{2}\sinh\theta\cosh^{2}\theta} -\frac{a\pi}{4\mu^{2}\cosh^{4}\theta}+\mathcal{O}\left(a^{2}\right)\nonumber \\ \Im m\mathcal{J}_{1} & = & -\frac{\tan^{-1}\left(\sinh\theta\right)+\sinh\theta\ln\left(\frac{\cosh\theta}{2}\right)}{\mu^{2}\sinh\theta\cosh^{2}\theta} +\frac{a\pi}{8\mu^{2}\cosh^{2}\theta}+\mathcal{O}\left(a^{2}\right) \end{eqnarray} The expanded part of (\ref{eq:singres}) can be integrated analytically (surprisingly, even the terms containing the L\"uscher correction $e^{-\mu L\cosh u}$ possess explicit integral formulas in terms of exponential integrals).
As a result, all singular terms cancel, and we arrive at \begin{equation} r_{2a}\left(q\right)+r_{3a}\left(q\right)=-\frac{L\left(1-\sinh^{2}\theta\right)}{16\mu^{4}\cosh^{3}\theta}+\frac{e^{-\mu L}L\left(-1+\sinh^{2}\theta+L\mu\cosh^{2}\theta\right)}{8\mu^{4}\cosh^{3}\theta}\label{eq:r2ar3a} \end{equation} At this point, we are in a position to collect all contributions to the form factor (\ref{eq:formfac}). First of all, up to first L\"uscher order, the explicit terms in $N_{0}$ do not contribute. The sum $\sum_{k}S_{1}\left(k\right)$ can be transformed using (\ref{eq:singsum1}), (\ref{eq:singsum2}) and (\ref{eq:singsum3}). Since the sum is multiplied by $\bar{\rho}$, we only need to consider its infinite volume part, leading to terms of first L\"uscher order. After some cancellations, we find \begin{equation} \sum_{k\in\mathbb{Z}}S_{1}\left(k\right)=\frac{2L}{\mu^{4}\pi\cosh^{2}\theta}+\mathcal{O}\left(e^{-\mu L}\right) \end{equation} According to (\ref{eq:dsum1-int}), the double sum $\sum_{k_{1},k_{2}}D_{1}\left(k_{1},k_{2}\right)$ admits the L\"uscher expansion \begin{equation} \sum_{k_{1},k_{2}}D_{1}\left(k_{1},k_{2}\right)=\frac{L^{2}}{8\mu^{2}}+\frac{3L^{2}}{\mu^{2}}\intop_{-\infty}^{\infty}\frac{du}{2\pi}\frac{e^{-\mu L\cosh u}}{\cosh\left(u-\theta\right)}+\mathcal{O}\left(e^{-2\mu L}\right) \end{equation} To deal with the nontrivial sum $\sum_{k_{1},k_{2}}D_{2}\left(k_{1},k_{2}\right)$, we first collect the explicit terms (i.e.\ those that can be expressed without integrals) appearing in the expansion. These terms come from the following places: \begin{itemize} \item The single sums separated in (\ref{eq:separsum2}) as well as the regularized residual term $r_{2a}\left(q\right)+r_{3a}\left(q\right)$ of (\ref{eq:r2ar3a}) contain explicit terms proportional to $L$, $Le^{-\mu L}$ and $L^{2}e^{-\mu L}$. \item The residual term $R_{1}^{\infty}$ defined in (\ref{eq:R1inf}) contains terms proportional to $L$ and $L^{2}$.
\item The terms coming from the quantities $I_{1}^\infty$, $I_{2}^{-,\infty}$, $R_{2}^{\infty}$, $J_{1}$, $J_{2}-r_{3a}\left(q\right)$ (appearing in (\ref{eq:I1inf}), (\ref{eq:I2-inf}), (\ref{eq:R2inf}), (\ref{eq:J1inf}) and (\ref{eq:J2inf}), respectively) only contain terms proportional to $L^{2}$. \end{itemize} The above terms combine nicely so that all explicit contributions proportional to $L$, $Le^{-\mu L}$ and $L^{2}e^{-\mu L}$ cancel. All other terms sum up to \begin{equation} \sum_{k_{1},k_{2}}D_{2}\left(k_{1},k_{2}\right)=\frac{3L^{2}}{\mu^{3}}\left(\frac{1}{48}-\frac{1}{4\pi^{2}}\right) \cosh\theta + \mathcal{O} \left(e^{-\mu L}\right) \end{equation} We now proceed to combine the various integral contributions into a more transparent form. First, $\widetilde{I_{1}^{-}}$, $\widetilde{I_{1}^{+}}$ and $\widetilde{I_{2}^{-}}$ of (\ref{eq:I1-L}), (\ref{eq:I1+L}) and (\ref{eq:I2-L}) can be combined; after trigonometric manipulations and exploiting the symmetry of the integration domain, we find \begin{equation} \widetilde{I_{1}^{+}}+\widetilde{I_{1}^{-}}+\widetilde{I_{2}^{-}}=\frac{3L^{2}\cosh\theta}{\mu^{3}\pi^{2}}\intop_{-\infty}^{\infty}du\left(\frac{w\sinh w}{\cosh^{3}w}-\frac{1}{\cosh^{2}w}\right) e^{-\mu L\cosh u} \end{equation} where \begin{equation} w=u-\theta. \end{equation} The integrand of the residual term $\widetilde{R_{1}}$ of (\ref{eq:R1L}) contains a part proportional to $Le^{-\mu L\cosh u}$. This term can be combined with a similar integrand coming from the first L\"uscher correction $\widetilde{Z}$ of the separated single sum $\sum_{k_{1}\in\mathbb{Z}}\frac{1}{\omega_{k_{1}}^{2}}\frac{1}{\left(\omega_{k_{1}}+\omega_{n_{q}}\right)^{2}}$. These can be integrated by parts, yielding an integrand that is proportional to $L^{2}e^{-\mu L\cosh u}$.
This resulting integrand can be combined with the remaining part of $\widetilde{R_{1}}$ and also with $\widetilde{R_{2}}$ appearing in (\ref{eq:R2L}); using trigonometric identities and the symmetry of the integration domain, the result can be brought to the following form \begin{equation} \widetilde{R_{1}}+\widetilde{R_{2}}+\widetilde{Z}=\frac{3L^{2}}{2\pi\mu^{3}}\intop_{-\infty}^{\infty}du\left(\frac{\cosh u}{\cosh^{2}w}-\frac{\cosh\theta}{\cosh^{3}w}\right) e^{-\mu L\cosh u} \end{equation} Using the above formulas, we can express the form factor as a function of the S-matrix parameter $\alpha$ \begin{eqnarray} \left\langle 0\left(b\right)\left|\varphi\right|q\left(b\right)\right\rangle & = & \frac{1}{\sqrt{2L\mu\cosh\theta}} \bigg\{ 1-\alpha\intop\frac{du}{2\pi} \left[\frac{e^{-\mu L\cosh u}}{\cosh^{2}\theta}\right] + \alpha^{2}\left(\frac{1}{48} + \frac{1}{24\cosh^{2}\theta} - \frac{1}{4\pi^{2}}\right) \cr && + \alpha^{2}\intop_{-\infty}^{\infty}\frac{du}{2\pi} e^{-\mu L\cosh u} \left[ \frac{\sinh u\sinh\theta}{\cosh^{2}\theta\cosh^{2}w} +\frac{2}{\cosh^{2}\theta\cosh w} - \frac{1}{\cosh^{3}w} \right. \cr && \left. +\frac{2}{\pi}\left(\frac{w\sinh w}{\cosh^{3}w}-\frac{1}{\cosh^{2}w}\right) \right] \bigg\} \label{eq:form_final1-1} \end{eqnarray} \section{Equivalence of finite and infinite volume regularizations} In this appendix we show that the heuristic regularization we used in the bulk of the paper is in fact completely equivalent to the finite volume regularization. Finite volume regularization for the finite temperature two-point function was suggested in \cite{Pozsgay:2010cr,Pozsgay:2014gza} and implemented for states with small particle numbers.
Here, on the one hand, we follow their calculations for the term containing a finite volume one-particle and a two-particle state, while on the other hand, we recover the analogous terms from our infinite volume regularization% \footnote{In order to be comparable to the calculations of \cite{Pozsgay:2010cr,Pozsgay:2014gza} we use their normalization for form factors, which corresponds to the normalization $\langle\theta\vert\theta'\rangle=2\pi\delta(\theta'-\theta)$. % }. In calculating the finite temperature two-point function \begin{equation} \langle\mathcal{O}(x,t)\mathcal{O}\rangle_{L}=\Theta(x)\frac{\mbox{Tr}[\mathcal{O}(0,t)e^{-Hx}\mathcal{O}e^{-H(L-x)}]}{\mbox{Tr}[e^{-HL}]}+\Theta(-x)\frac{\mbox{Tr}[\mathcal{O}e^{Hx}\mathcal{O}(0,t)e^{-H(L+x)}]}{\mbox{Tr}[e^{-HL}]}\label{eq:2pt} \end{equation} we need to insert two complete systems of states. We focus on the term which contains a one-particle and a two-particle state. \subsection{Finite volume regularization} In the finite volume regularization scheme, space is compactified on a circle of circumference $R$ with periodic boundary conditions. Terms are organized in powers of $R$, and those with positive powers cancel against the corresponding terms from the denominator, leading to a finite result in the $R\to\infty$ limit.
The rapidity $u_{n}$ of a finite volume one-particle state satisfies the free quantization condition \begin{equation} e^{ip(u_{n})R}=1\quad;\qquad\phi(u_{n})\equiv p(u_{n})R=2\pi n \end{equation} The rapidities $\beta_{1}$, $\beta_{2}$ of a two-particle state satisfy the Bethe-Yang equations \begin{equation} e^{ip_{1}R}S_{12}\equiv e^{ip(\beta_{1})R}S(\beta_{1}-\beta_{2})=1\quad;\qquad e^{ip_{2}R}S_{21}=1 \end{equation} Taking the logarithm, the states are labelled by the quantum numbers $n_{1}$ and $n_{2}$: \begin{equation} \phi_{1}\equiv p_{1}R-i\log S_{12}=2\pi n_{1}\quad;\qquad\phi_{2}\equiv p_{2}R-i\log S_{21}=2\pi n_{2} \end{equation} The contribution of these one- and two-particle states to the numerator of the two-point function (\ref{eq:2pt}) has the structure \begin{equation} I=\sum_{n,n_{1}<n_{2}}\vert\langle u\vert\mathcal{O}\vert\beta_{1},\beta_{2}\rangle_{R}\vert^{2}g(u,\beta_{1},\beta_{2}) \end{equation} where $u\equiv u_{n}$, $\beta_{i}\equiv\beta_{i}(n_{1},n_{2})$ and we used the finite volume matrix element $\langle u\vert\mathcal{O}\vert\beta_{1},\beta_{2}\rangle_{R}$. In the calculation we remain quite general and do not specify $g$. It can differ for the two-point function and for its Fourier transform, but the equivalence between the finite and infinite volume regularizations is not sensitive to it. Since for generic volumes the quantized rapidities of one- and two-particle states never agree, we can use the finite volume non-diagonal form factor formula (\ref{FVFF}): \begin{equation} \langle u\vert\mathcal{O}\vert\beta_{1},\beta_{2}\rangle_{R}=\frac{F_{3}(u+i\pi,\beta_{1},\beta_{2})}{\sqrt{\rho_{1}(u)\rho_{2}(\beta_{1},\beta_{2})}}+O(e^{-mR}) \end{equation} which is valid up to exponentially small volume corrections, negligible in the $R\to\infty$ limit.
The relevant quantity we would like to evaluate is then \begin{equation} I=\sum_{n,n_{1}<n_{2}}\frac{F_{3}(u+i\pi,\beta_{1},\beta_{2})F_{3}(\beta_{2}+i\pi,\beta_{1}+i\pi,u)}{\rho_{1}(u)\rho_{2}(\beta_{1},\beta_{2})}g(u,\beta_{1},\beta_{2}) \end{equation} where the densities of states are \begin{equation} \rho_{1}(u)=\phi'(u)\quad;\qquad\rho_{2}(\beta_{1},\beta_{2})=\frac{\partial\phi_{1}}{\partial\beta_{1}}\frac{\partial\phi_{2}}{\partial\beta_{2}}-\frac{\partial\phi_{1}}{\partial\beta_{2}}\frac{\partial\phi_{2}}{\partial\beta_{1}} \end{equation} We further use that for scalar operators the form factor axioms relate $F_{3}(\beta_{2}+i\pi,\beta_{1}+i\pi,u)$ to $F_{3}(u+i\pi,\beta_{1},\beta_{2})$ as \begin{equation} F_{3}(\beta_{2}+i\pi,\beta_{1}+i\pi,u)=F_{3}(u,\beta_{2}-i\pi,\beta_{1}-i\pi)=F_{3}(u+i\pi,\beta_{2},\beta_{1})=S_{21}F_{3}(u+i\pi,\beta_{1},\beta_{2}) \end{equation} In the following we turn the sums into integrals. We start with the sum over $n$, using the identity \begin{equation} \sum_{n}\frac{h(u_{n})}{\rho_{1}(u_{n})}=\sum_{n}\oint_{C_{n}}\frac{du}{2\pi i}\frac{ip'(u)R}{1-e^{-ip(u)R}}\frac{h(u)}{\rho_{1}(u)}=\sum_{n}\oint_{C_{n}}\frac{du}{2\pi}\frac{h(u)}{1-e^{-ip(u)R}} \end{equation} where the contour $C_{n}$ surrounds the $pR=2\pi n$ singularity. We would then like to open up the contours into $C_{\pm}$, which lie just above and below the real axis. In doing this contour deformation, singularities of the form factor on the real line at $u=\beta_{i}$ have to be taken into account. We will collect these terms later; for now we focus on the shifted integrals. Taking the $R\to\infty$ limit, on the upper contour we have $\frac{1}{1-e^{-ip(u)R}}\to0$, so this term does not contribute, while on the lower contour we have $\frac{1}{1-e^{-ip(u)R}}\to1$ and the shifted integral $u\to u-i\eta$ remains.
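For completeness, the first equality in the identity above follows because near a quantization point $u_{n}$ one has
\begin{equation*}
1-e^{-ip(u)R}=ip'(u_{n})R\,(u-u_{n})+\mathcal{O}\!\left((u-u_{n})^{2}\right),
\end{equation*}
so the kernel $\frac{ip'(u)R}{1-e^{-ip(u)R}}$ has unit residue at $u=u_{n}$; since $\rho_{1}(u)=\phi'(u)=p'(u)R$, the contour integral around $C_{n}$ reproduces the summand $h(u_{n})/\rho_{1}(u_{n})$.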
On this shifted $u$-contour the form factor $F_{3}(u+i\pi,\beta_{1},\beta_{2})$ has no singularity for any real $\beta_{i}$; thus the other two summations, in the $R\to\infty$ limit, can be safely turned into integrations $\sum_{n_{1}<n_{2}}\to\frac{1}{2}\int\frac{d\beta_{1}}{2\pi}\frac{d\beta_{2}}{2\pi}\rho_{2}(\beta_{1},\beta_{2})$ leading to the shifted finite integrals \begin{equation} I_{-}=\frac{1}{2}\int\frac{du}{2\pi}\frac{d\beta_{1}}{2\pi}\frac{d\beta_{2}}{2\pi}S(\beta_{2}-\beta_{1}) F_{3}(u+i\pi-i\eta,\beta_{1},\beta_{2})^{2}g(u-i\eta,\beta_{1},\beta_{2}) \end{equation} Now we focus on the singularities coming from the form factor at $u=\beta_{i}$, which can be written (in the normalization used in this appendix) as \begin{equation} F_{3}(u+i\pi,\beta_{1},\beta_{2})=\frac{i}{u-\beta_{1}}(1-S_{12})F_{1}+\frac{i}{u-\beta_{2}}(S_{12}-1)F_{1}+F_{3}^{c}(u+i\pi,\beta_{1},\beta_{2})\label{eq:Fconn} \end{equation} where the connected form factor was defined in eq. (\ref{Fcon}). Let us start with the singularity at $u=\beta_{1}$. We have simple and double poles: \begin{eqnarray} S_{21}F_{3}(u+i\pi,\beta_{1},\beta_{2})^{2} & = & -\frac{S_{21}(1-S_{12})^{2}F_{1}^{2}}{(u-\beta_{1})^{2}}+\\ & & \hspace{-1cm}\frac{2S_{21}i}{(u-\beta_{1})}(1-S_{12})\left(\frac{i}{u-\beta_{2}}(S_{12}-1)F_{1}+F_{3}^{c}(\beta_{1}+i\pi,\beta_{1},\beta_{2})\right)F_{1}+\dots\nonumber \end{eqnarray} where the dots represent terms regular at $u=\beta_1$.
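The decomposition (\ref{eq:Fconn}) is simply the subtracted form of the standard kinematical residue axiom, which in the present normalization reads
\begin{equation*}
-i\lim_{u\to\beta_{1}}\left(u-\beta_{1}\right)F_{3}(u+i\pi,\beta_{1},\beta_{2})=\left(1-S_{12}\right)F_{1}\,,
\end{equation*}
with the analogous relation at $u=\beta_{2}$; the connected part $F_{3}^{c}$ is by construction regular at $u=\beta_{i}$.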
Using that at the pole position $1-e^{-ip(\beta_{1})R}=1-S_{12}$, the contribution of the simple pole term at $u=\beta_{1}$ gives \begin{equation} -2S_{21}\left(\frac{i}{\beta_{1}-\beta_{2}}(S_{12}-1)F_{1}+F_{3}^{c}(\beta_{1}+i\pi,\beta_{1},\beta_{2})\right)\frac{F_{1}g(\beta_{1},\beta_{1},\beta_{2})}{\rho_{2}(\beta_{1},\beta_{2})} \end{equation} In the double pole term we calculate the derivative $\partial_{u}\frac{g(u,\beta_{1},\beta_{2})}{1-e^{-ip(u)R}}\vert_{u=\beta_{1}}$, leading to \begin{equation} i\frac{(1-S_{21})F_{1}^{2}}{\rho_{2}(\beta_{1},\beta_{2})}\partial_{u}g(u,\beta_{1},\beta_{2})\vert_{u=\beta_{1}}-\frac{F_{1}^{2}\rho_{1}(\beta_{1})}{\rho_{2}(\beta_{1},\beta_{2})}g(\beta_{1},\beta_{1},\beta_{2}) \end{equation} Observe that $\rho_{1}(\beta_{1})=mR\cosh\beta_{1}$ grows linearly with the volume and is thus leading among all the terms. Its contribution is cancelled by a diagonal one-particle term in the denominator of the two-point function (\ref{eq:2pt}). Similar calculations for the pole at $u=\beta_{2}$ lead to expressions which can be obtained from the previous ones by the $\beta_{1}\leftrightarrow\beta_{2}$ replacement% \footnote{Actually, $g(\beta_{1},\beta_{1},\beta_{2})$ should be replaced with $g(\beta_{2},\beta_{1},\beta_{2})$. In all the cases we considered, however, $g(u,\beta_{1},\beta_{2})$ was symmetric in $\beta_{1}$ and $\beta_{2}$.% }. Now we have to turn the remaining summations over $n_{1}<n_{2}$ into integrations.
We should be careful with the diagonal terms and use \begin{eqnarray} \sum_{n_{1}<n_{2}}f(\beta_{1},\beta_{2}) & = & \frac{1}{2}\sum_{n_{1},n_{2}}f(\beta_{1},\beta_{2})-\frac{1}{2}\sum_{n_{1}=n_{2}}f(\beta_{1},\beta_{1})\\ & \to & \frac{1}{2}\int\frac{d\beta_{1}}{2\pi}\frac{d\beta_{2}}{2\pi}\rho_{2}(\beta_{1},\beta_{2})f(\beta_{1},\beta_{2})-\frac{1}{2}\int\frac{d\beta_{1}}{2\pi}\rho_{1}(\beta_{1})f(\beta_{1},\beta_{1})\nonumber \end{eqnarray} Clearly, diagonal terms are suppressed in the $R\to\infty$ limit only if the summand is not proportional to $\rho_{1}$. That is, the term divergent in the $R\to\infty$ limit, \begin{equation} I_{R}=-\frac{F_{1}^{2}}{2}\int\frac{d\beta_{1}}{2\pi}\frac{d\beta_{2}}{2\pi}\left(\rho_{1}(\beta_{1})g(\beta_{1},\beta_{1},\beta_{2})+\rho_{1}(\beta_{2})g(\beta_{2},\beta_{1},\beta_{2})\right) \end{equation} which eventually will be cancelled by a term from the denominator, will lead to a finite diagonal contribution \begin{equation} \frac{1}{2}\int\frac{d\beta_{1}}{2\pi}\rho_{1}(\beta_{1})\frac{F_{1}^{2}}{\rho_{2}(\beta_{1},\beta_{2})}\left(\rho_{1}(\beta_{1})g(\beta_{1},\beta_{1},\beta_{2})+\rho_{1}(\beta_{2})g(\beta_{1},\beta_{1},\beta_{2})\right)\to I_{d}=F_{1}^{2}\int\frac{d\beta}{2\pi}g(\beta,\beta,\beta) \end{equation} In the remaining terms we have \begin{eqnarray*} I_{r}=\int\frac{d\beta_{1}}{2\pi}\frac{d\beta_{2}}{2\pi}F_{1}\Biggl[\frac{i}{\beta_{1}-\beta_{2}}(1-S_{12})F_{1}\left(S_{21}g(\beta_{1},\beta_{1},\beta_{2})+g(\beta_{2},\beta_{1},\beta_{2})\right)\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\\ -S_{21}F_{3}^{c}(\beta_{1}+i\pi,\beta_{1},\beta_{2})g(\beta_{1},\beta_{1},\beta_{2})-F_{3}^{c}(\beta_{2}+i\pi,\beta_{1},\beta_{2})g(\beta_{2},\beta_{1},\beta_{2})\Biggr]+ \end{eqnarray*} \begin{equation} \int\frac{d\beta_{1}}{2\pi}\frac{d\beta_{2}}{2\pi}i(1-S_{21})F_{1}^{2}\left(\partial_{u}g(u,\beta_{1},\beta_{2})\vert_{u=\beta_{1}}-S_{12}\partial_{u}g(u,\beta_{1},\beta_{2})\vert_{u=\beta_{2}}\right) \end{equation}
Observe that, since $S(\beta_{1}-\beta_{2})=-1$ for $\beta_{1}=\beta_{2}$, the integrand is not singular at all. The terms $I=I_{-}+I_{R}+I_{d}+I_{r}$ are the generalizations of the results of \cite{Pozsgay:2010cr,Pozsgay:2014gza} to generic functions $g(u,\beta_{1},\beta_{2})$. In the following we show how these contributions can be extracted from an infinite volume calculation. \subsection{Infinite volume calculation} Let us calculate directly the contribution of the term having a one-particle and a two-particle state in infinite volume: \begin{equation} I=\frac{1}{2}\int\frac{du}{2\pi}\frac{d\beta_{1}}{2\pi}\frac{d\beta_{2}}{2\pi}\vert\langle u\vert\phi\vert\beta_{1},\beta_{2}\rangle\vert^{2}g(u,\beta_{1},\beta_{2}) \end{equation} The crossing relation of form factors is understood in the distributional sense \cite{Smirnov:1992vz}: \begin{equation} \langle u\vert\phi\vert\beta_{1},\beta_{2}\rangle=2\pi\delta(u-\beta_{1})F_{1}+S_{21}2\pi\delta(u-\beta_{2})F_{1}+F_{3}(u+i\pi-i\epsilon,\beta_{1},\beta_{2}) \end{equation} As introduced in the bulk of the paper, we regulate the $\delta$-functions as \begin{equation} 2\pi\delta(x)=\frac{i}{x+i\epsilon}-\frac{i}{x-i\epsilon} \end{equation} We now show that this is equivalent to the finite volume regularization. Using the definition of the connected form factor (\ref{eq:Fconn}), the pole contributions combine nicely: \begin{eqnarray} \langle u\vert\phi\vert\beta_{1},\beta_{2}\rangle= & & F_{3}^{c}(u+i\pi-i\epsilon,\beta_{1},\beta_{2})+ \\ & & F_{1}\left(\frac{i}{u-\beta_{1}+i\epsilon}+\frac{iS_{12}}{u-\beta_{2}+i\epsilon}-\frac{iS_{12}}{u-\beta_{1}-i\epsilon}-\frac{i}{u-\beta_{2}-i\epsilon}\right) \nonumber \end{eqnarray} In order to make contact with the finite volume calculation we shift the $u$ contour to $-i\eta$, with $\eta>\epsilon$. This is a contour deformation different from the one we used in Section 3, but it is a completely equivalent regularization.
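As a quick numerical sanity check of the regularization $2\pi\delta(x)=\frac{i}{x+i\epsilon}-\frac{i}{x-i\epsilon}$ (an illustrative sketch only; the test function $g$ below is an arbitrary choice, not anything appearing in the derivation): the regulated combination equals the Lorentzian $\frac{2\epsilon}{x^{2}+\epsilon^{2}}$, and smearing it against a smooth test function reproduces $2\pi$ times the value at the origin as $\epsilon\to0$.

```python
import math

def delta_reg(x, eps):
    # The regulated delta: i/(x + i*eps) - i/(x - i*eps) = 2*eps/(x^2 + eps^2)
    return (1j / (x + 1j * eps) - 1j / (x - 1j * eps)).real

def smear(g, eps, n=200001):
    # Evaluate (1/2pi) * integral of g(x) * 2*eps/(x^2 + eps^2) dx using the
    # substitution x = eps*tan(t), which flattens the Lorentzian to 2*dt and
    # so resolves its narrow peak exactly.
    dt = math.pi / n
    total = 0.0
    for k in range(n):
        t = -math.pi / 2 + (k + 0.5) * dt
        total += 2.0 * g(eps * math.tan(t)) * dt
    return total / (2 * math.pi)

g = lambda x: math.cos(x) * math.exp(-x * x)
lorentzian_ok = abs(delta_reg(0.3, 0.01) - 2 * 0.01 / (0.3**2 + 0.01**2)) < 1e-12
approx = smear(g, eps=1e-4)

print(lorentzian_ok)
print(abs(approx - g(0.0)) < 1e-2)  # smeared regulator tends to g(0)
```

The same Lorentzian picture underlies the principal-value identity used below: the real and imaginary parts of $\frac{1}{x\mp i\epsilon}$ separate into a principal value and a delta contribution.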
On the shifted contour we can take the $\epsilon\to0$ limit, which eliminates the $\delta$-functions, and we arrive at: \begin{equation} \frac{1}{2}\int\frac{du}{2\pi}\frac{d\beta_{1}}{2\pi}\frac{d\beta_{2}}{2\pi}S(\beta_{2}-\beta_{1})F_{3}(u+i\pi-i\eta,\beta_{1},\beta_{2})^{2}g(u-i\eta,\beta_{1},\beta_{2}) \end{equation} which is exactly the contribution $I_{-}$ of the surviving contour $C_{-}$. In the following we compare the remaining terms. In shifting the contour we pick up the contributions of the poles at $u=\beta_{1}-i\epsilon$ and at $u=\beta_{2}-i\epsilon$. In the following we focus on the integrand only. It is understood that the expressions are integrated over $\beta_{1}$ and $\beta_{2}$. The pole at $u=\beta_{1}-i\epsilon$ has the structure \begin{eqnarray} S(\beta_{2}-\beta_{1})\langle u\vert\phi\vert\beta_{1},\beta_{2}\rangle^{2} & = & -\frac{S_{21}F_{1}^{2}}{(u-\beta_{1}+i\epsilon)^{2}}+\frac{2iS_{21}F_{1}}{u-\beta_{1}+i\epsilon}\times\\ & & \hspace{-1cm}\left(F_{3}^{c}(u+i\pi-i\epsilon,\beta_{1},\beta_{2})+\frac{iF_{1}S_{12}}{u-\beta_{2}+i\epsilon}-\frac{iF_{1}S_{12}}{u-\beta_{1}-i\epsilon}-\frac{iF_{1}}{u-\beta_{2}-i\epsilon}\right)\nonumber \end{eqnarray} The contribution of the double pole is \begin{equation} -\frac{1}{2}iS_{21}F_{1}^{2}\partial_{u}g(u,\beta_{1},\beta_{2})\vert_{u=\beta_{1}-i\epsilon} \end{equation} while the simple pole gives \begin{equation} -F_{1}S_{21}\left(F_{3}^{c}(\beta_{1}+i\pi-2i\epsilon,\beta_{1},\beta_{2})+\frac{iF_{1}S_{12}}{\beta_{1}-\beta_{2}}+\frac{F_{1}S_{12}}{2\epsilon}-\frac{iF_{1}}{\beta_{1}-\beta_{2}-2i\epsilon}\right)g(\beta_{1}-i\epsilon,\beta_{1},\beta_{2}) \end{equation} We also have similar contributions from the pole at $u=\beta_{2}-i\epsilon$, which can be obtained by the $\beta_{1}\leftrightarrow\beta_{2}$ transformation (where we do not exchange the last two arguments of $g$). 
The divergent term in this formalism appears as \begin{equation} -\frac{F_{1}^{2}}{2\epsilon}\left(g(\beta_{1},\beta_{1},\beta_{2})+g(\beta_{2},\beta_{1},\beta_{2})\right) \end{equation} which is the analogue of $I_{R}$. This term is cancelled by a diagonal one-particle term in the denominator of the two-point function (\ref{eq:2pt}). By expanding the function $g$ in $\epsilon$ and combining with the double pole terms we get \begin{equation} \frac{iF_{1}^{2}}{2}(1-S_{21})\partial_{u}g(u,\beta_{1},\beta_{2})\vert_{u=\beta_{1}}+\frac{iF_{1}^{2}}{2}(1-S_{12})\partial_{u}g(u,\beta_{1},\beta_{2})\vert_{u=\beta_{2}} \end{equation} The contributions of the connected form factors are \begin{equation} -F_{1}\left(S_{21}F_{3}^{c}(\beta_{1}+i\pi,\beta_{1},\beta_{2})g(\beta_{1},\beta_{1},\beta_{2})+F_{3}^{c}(\beta_{2}+i\pi,\beta_{1},\beta_{2})g(\beta_{2},\beta_{1},\beta_{2})\right) \end{equation} In two terms we have to be careful with the $\epsilon$ prescription; there we use \begin{equation} \frac{1}{\beta_{1}-\beta_{2}\mp2i\epsilon}=\mathrm{P}\frac{1}{\beta_{1}-\beta_{2}}\pm i\pi\delta(\beta_{1}-\beta_{2}) \end{equation} The contribution of the $\delta$-function is \begin{equation} 2\pi F_{1}^{2}\delta(\beta_{1}-\beta_{2})g(\beta_{1},\beta_{1},\beta_{1}) \end{equation} which is equivalent to the term $I_{d}$. In the remaining terms the principal value prescription can be omitted, as the full integrand is regular at $\beta_{1}=\beta_{2}$: \begin{equation} iF_{1}^{2}\frac{1}{\beta_{1}-\beta_{2}}\left((1-S_{12})g(\beta_{2},\beta_{1},\beta_{2})-(1-S_{21})g(\beta_{1},\beta_{1},\beta_{2})\right) \end{equation} Summing up these results, we find complete agreement with the integrand of the finite volume regularization. 
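The principal-value splitting used above can likewise be verified numerically. A small Python sketch with the arbitrarily chosen test function $f(x)=1/(1+(x-1)^{2})$, for which $\int dx\, f(x)/(x-i0^{+})=\pi/2+i\pi/2$ by contour integration:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0/(1.0 + (x - 1.0)**2)   # test function, f(0) = 1/2
eps = 1e-3

# real and imaginary parts of  f(x)/(x - i*eps) = f(x)*(x + i*eps)/(x^2 + eps^2)
re = lambda x: f(x)*x/(x**2 + eps**2)
im = lambda x: f(x)*eps/(x**2 + eps**2)

# split at x = 0 so the adaptive quadrature resolves the eps-scale structure
I_re = quad(re, -np.inf, 0)[0] + quad(re, 0, np.inf)[0]
I_im = quad(im, -np.inf, 0)[0] + quad(im, 0, np.inf)[0]

# Sokhotski-Plemelj: 1/(x - i*eps) -> P(1/x) + i*pi*delta(x), hence
#   I_re -> PV integral = pi/2,   I_im -> pi*f(0) = pi/2
print(I_re, I_im)
```

Both parts converge linearly in $\epsilon$ to the principal value and the $\delta$-function contribution, respectively.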
\section{Introduction} \input intro.tex \section{Mirror representation} \input mirror \section{Sinh-Gordon form factors} \input sinhG \section{Hamiltonian perturbation theory} \label{RTPT} \input Hampert \section{Conclusions} \input{conc.tex} \vspace{5ex} \begin{center} {\large\bf Acknowledgments} \end{center} This investigation was supported by the Hungarian National Science Fund NKFIH (under K116505) and by a Lend\"ulet Grant. \par\bigskip \subsection{Finite volume form of the Hamiltonian} The sinh-Gordon model in a finite volume $L$ is described by a Hamiltonian $H_{L}$ of the form \begin{equation} H_{L}=H_{L}^{0}+V_{L} \end{equation} where $H_{L}^{0}=\intop_{0}^{L}dx\::\:\frac{1}{2}\pi^{2}+\frac{1}{2}\left(\partial_{x}\varphi\right)^{2}+\frac{1}{2}\mu^{2}\varphi^{2}\::_{\mu,L}$ is a free Hamiltonian, and \begin{equation} V_{L}=\intop_{0}^{L}dx\::\:\frac{\mu^{2}}{8\pi b^{2}}\left(\cosh\left(\sqrt{8\pi}b\varphi\right)-1\right)-\frac{\mu^{2}}{2}\varphi^{2}\::_{\mu,L}+\mathcal{O}\left(e^{-mL}\right)\label{eq:VLexp} \end{equation} contains the interaction. The field operator admits a mode expansion \begin{equation} \varphi\left(x,t\right)=\sum_{n\in\mathbb{Z}}\frac{1}{\sqrt{2L\omega_{n}}}\left(a_{n}e^{i\left(k_{n}x-\omega_{n}t\right)}+a_{n}^{\dagger}e^{-i\left(k_{n}x-\omega_{n}t\right)}\right)\label{eq:modeexp} \end{equation} \begin{equation} \omega_{n}=\sqrt{\mu^{2}+k_{n}^{2}},\quad k_{n}=\frac{2\pi n}{L} \end{equation} where the ladder operators satisfy the usual bosonic commutation relations $\left[a_{n},a_{m}^{\dagger}\right]=\delta_{n,m}$ and $\left[a_{n},a_{m}\right]=0$. The normal ordering $:\::_{\mu,L}$ is understood in the sense that these creation operators (creating a particle of mass $\mu$ in the free theory of volume $L$) are arranged to the left of the annihilation operators. 
The spectrum of $H_{L}^{0}$ is generated by acting with creation operators on the lowest energy state, the vacuum $\left|0\right\rangle $: \begin{eqnarray} \left|N_{n_{1}},N_{n_{2}},\dots,N_{n_{k}}\right\rangle & = & \frac{1}{\mathcal{N}\left(\left\{ N_{n_{i}}\right\} \right)}\prod_{i=1}^{k}\left(a_{n_{i}}^{\dagger}\right)^{N_{n_{i}}}\left|0\right\rangle ,\quad n_{i}\in\mathbb{Z}\label{eq:fockv}\\ H_{L}^{0}\left|N_{n_{1}},N_{n_{2}},\dots,N_{n_{k}}\right\rangle & = & \sum_{i}N_{n_{i}}\omega_{n_{i}}\left|N_{n_{1}},N_{n_{2}},\dots,N_{n_{k}}\right\rangle \end{eqnarray} where we have introduced the symbol \begin{equation} \mathcal{N}\left(\left\{ N_{n_{i}}\right\} \right)=\sqrt{\prod_{i=1}^{k}N_{n_{i}}!} \end{equation} The exponential corrections to $V_{L}$ indicated in (\ref{eq:VLexp}) are due to the following reason \cite{Rychkov:2014eea}. In infinite volume, the sinh-Gordon Hamiltonian is naturally expressed in terms of operators normal ordered with respect to the ladder operators of an infinite volume free theory. As we decrease the volume, we want to keep the UV behaviour of the theory unaffected, which means leaving the coefficients of the \emph{bare} fields unchanged (instead of the normal ordered ones) in the Hamiltonian density. By temporarily introducing a UV regulator $\Lambda$, normal ordered powers of the field can be expressed in terms of the bare powers by utilizing Wick's theorem: \begin{equation} \varphi^{n}\left(x,t\right) = \sum_{k=0}^{\left\lfloor n/2\right\rfloor } \frac{n!}{2^{k}k! \left(n-2k\right)!} \left( \left\langle 0\left|\varphi^{2}\right|0\right\rangle _{\mu,L} \right)^k \::\:\varphi^{n-2k}\::_{\mu,L} \label{eq:wick} \end{equation} where $\left|0\right\rangle $ is the ground state of $H_{L}^{0}$. 
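Formula (\ref{eq:wick}) can also be checked symbolically: for a Gaussian theory with variance $c=\left\langle 0\left|\varphi^{2}\right|0\right\rangle $, normal ordered powers are probabilists' Hermite polynomials, $:\!\varphi^{m}\!:\,=c^{m/2}\mathrm{He}_{m}(\varphi/\sqrt{c})$, so that (\ref{eq:wick}) becomes the classical inverse Hermite expansion. A minimal sympy sketch (setting $c=1$, which loses no generality):

```python
from sympy import symbols, hermite, sqrt, Rational, factorial, expand, simplify

x = symbols('x')

def He(m, y):
    # probabilists' Hermite polynomial, via the physicists' one: He_m(y) = 2^(-m/2) H_m(y/sqrt(2))
    return 2**Rational(-m, 2)*hermite(m, y/sqrt(2))

# eq. (wick) with c = <phi^2> = 1 reads:  x^n = sum_k n!/(2^k k! (n-2k)!) He_{n-2k}(x)
checks = []
for n in range(9):
    rhs = sum(factorial(n)/(2**k*factorial(k)*factorial(n - 2*k))*He(n - 2*k, x)
              for k in range(n//2 + 1))
    checks.append(simplify(expand(rhs - x**n)) == 0)
print(checks)   # all True for n = 0..8
```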
Moreover, \begin{eqnarray} \left\langle 0\left|\varphi^{2}\right|0\right\rangle _{\mu,L} & = & \left[\varphi_{+},\varphi_{-}\right]=\frac{1}{2L}\sum_{n=-N_{\Lambda}}^{N_{\Lambda}}\frac{1}{\omega_{n}}\label{eq:vacexpL}\\ \left\langle 0\left|\varphi^{2}\right|0\right\rangle _{\mu,\infty} & = & \left[\varphi_{+},\varphi_{-}\right]=\frac{1}{4\pi}\intop_{-\Lambda}^{\Lambda}\frac{dk}{\omega_{k}}.\label{eq:vacexpI} \end{eqnarray} Equation (\ref{eq:wick}) together with (\ref{eq:vacexpL}) and (\ref{eq:vacexpI}) can be used to derive the exponential corrections arising from the different normal ordering prescriptions at finite and infinite volume. After eliminating the cutoff $\Lambda$ we arrive at the following exact form of the finite volume interaction term \begin{equation} V_{L}=\intop_{0}^{L}dx\::\:\frac{\mu^{2}e^{\pi\bar{\rho}b^{2}}}{8\pi b^{2}}\left(\cosh\left(\sqrt{8\pi}b\varphi\right)-1\right)-\frac{\mu^{2}}{2}\varphi^{2}\::_{\mu,L}+E_{0}\left(L\right)\label{eq:intterm} \end{equation} where \begin{equation} \bar{\rho}=\frac{2}{\pi}\intop_{-\infty}^{\infty}du\frac{1}{e^{\mu L\cosh u}-1} \end{equation} (the bar indicates that now the bare Lagrangian parameter $\mu$ appears in the exponent) and $E_{0}$ is a (scalar) Casimir term whose value can be calculated exactly but affects neither the masses nor the form factors; we therefore neglect it from now on. The interaction term (\ref{eq:intterm}) can be expanded in the coupling $b$ to yield \begin{equation} V_{L}=b^{2}V_{L}^{(1)}+b^{4}V_{L}^{(2)}+\mathcal{O}\left(b^{6}\right) \end{equation} \begin{equation} V_{L}^{(1)}=2\pi\mu^{2}\left(\frac{1}{6}O_{4}+\frac{\bar{\rho}}{4}O_{2}\right)\quad;\qquad V_{L}^{(2)}=4\pi^{2}\mu^{2}\left(\frac{1}{45}O_{6}+\frac{\bar{\rho}}{12}O_{4}+\frac{\bar{\rho}^{2}}{16}O_{2}\right) \end{equation} where \begin{equation} O_{n}=\intop_{0}^{L}\::\:\varphi^{n}\left(x\right)\::_{\mu,L}dx. 
\end{equation} \subsection{Time-independent perturbation theory} Since $H_{L}$ has a discrete spectrum, one can treat it as a conventional quantum mechanical Hamiltonian and attempt to approximate the eigenvalues and eigenvectors by means of time-independent perturbation theory. In this framework the corrections to the energies (of states non-degenerate in the free theory) up to order $b^{4}$ can be written in the form \begin{eqnarray} E_{n} & = & E_{n}^{\left(0\right)}+b^{2}E_{n}^{\left(1\right)}+b^{4}E_{n}^{\left(2\right)}+\mathcal{O}\left(b^{6}\right),\\ E_{n}^{\left(1\right)} & = & \left\langle n\left|V_{L}^{(1)}\right|n\right\rangle \equiv V_{nn}^{(1)}\\ E_{n}^{\left(2\right)} & = & V_{nn}^{(2)} + { \sum_{\left|k\right\rangle \in \mathcal H} }' \frac{\left|V_{kn}^{(1)}\right|^{2}}{E_{nk}^{(0)}} \quad; \qquad E_{nk}^{(0)} = E_{n}^{(0)}-E_{k}^{(0)}\quad;\qquad V_{kn}^{(i)}=\left\langle k\left|V_{L}^{(i)}\right|n\right\rangle \label{eq:2ndenergy} \end{eqnarray} In the above, $E_{n}^{\left(0\right)}$ denotes the energy of the $n$th lowest energy state in the free theory, the state vectors $\left|n\right\rangle ,\left|k\right\rangle $ are understood to be the eigenvectors of $H_{L}^{0}$, and the sum in (\ref{eq:2ndenergy}) runs over all elements of an eigenbasis of $H_{L}^{0}$, except $\left|n\right\rangle $ itself, which is indicated by $\sum'$. 
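Formulas (\ref{eq:2ndenergy}) are the standard Rayleigh--Schr\"odinger expressions. As a minimal finite-dimensional illustration (with an arbitrary $3\times3$ toy Hamiltonian in place of $H_{L}$, and $g$ playing the role of $b^{2}$), the second-order formula can be compared against exact diagonalization:

```python
import numpy as np

# toy model: H(g) = H0 + g*V1 + g^2*V2, mimicking V_L = b^2 V1 + b^4 V2 with g = b^2
H0 = np.diag([0.0, 1.0, 3.0])
V1 = np.array([[0., 1., 1.],
               [1., 0., 1.],
               [1., 1., 0.]])
V2 = np.diag([1.0, -1.0, 2.0])

n = 0                       # perturb the lowest (non-degenerate) level
E0 = H0[n, n]
E1 = V1[n, n]               # first order
E2 = V2[n, n] + sum(V1[k, n]**2/(E0 - H0[k, k])   # second order, sum' over k != n
                    for k in range(3) if k != n)

g = 0.01
exact = np.linalg.eigvalsh(H0 + g*V1 + g*g*V2)[n]
approx = E0 + g*E1 + g*g*E2
print(exact, approx)        # agree up to O(g^3)
```

Here $E^{(2)}=1-1-1/3=-1/3$, and the residual mismatch with the exact eigenvalue is of third order in $g$.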
Correspondingly, the expansion of the interacting eigenvectors has the form \begin{align} \left|n\left(b\right)\right\rangle = & \left|n\right\rangle +b^{2}\left|n^{\left(1\right)}\right\rangle +b^{4}\left|n^{\left(2\right)}\right\rangle +\mathcal{O}\left(b^{6}\right),\\ \left|n^{\left(1\right)}\right\rangle = & {\sum_{\left|k\right\rangle \in\mathcal H}}' \frac{V_{kn}^{(1)}}{E_{nk}^{(0)}}\left|k\right\rangle \\ \left|n^{\left(2\right)}\right\rangle = & {\sum_{\left|k\right\rangle \in\mathcal H}}' \frac{V_{kn}^{(2)}}{E_{nk}^{(0)}}\left|k\right\rangle + {\sum_{\left|k\right\rangle ,\left|l\right\rangle \in\mathcal H}}'' \frac{V_{kl}^{(1)}V_{ln}^{(1)}}{E_{nk}^{(0)}E_{nl}^{(0)}}\left|k\right\rangle - {\sum_{\left|k\right\rangle \in\mathcal H}}' \frac{V_{nn}^{(1)}V_{kn}^{(1)}}{(E_{nk}^{(0)})^{2}}\left|k\right\rangle \cr &- \frac{1}{2} {\sum_{\left|k\right\rangle \in\mathcal H}}' \frac{V_{nk}^{(1)}V_{kn}^{(1)}}{(E_{nk}^{(0)})^{2}}\left|n\right\rangle \label{eq:eigenvec} \end{align} where $\sum''$ indicates that the $\vert k\rangle=\vert n\rangle$ and $\vert l\rangle=\vert n\rangle$ terms are left out of the sum. The vector $\left|n\left(b\right)\right\rangle $ is normalized to $1$ at each order. \subsection{Corrections to the one-particle energy} In the following we use perturbation theory to calculate the energy corrections to a one-particle state at leading and next-to-leading order. \subsubsection{$\mathcal{O}\left(b^{2}\right)$ correction} For a one-particle state $\left|q\left(b\right)\right\rangle $, having momentum $q=2\pi n_{q}L^{-1}$ in the free theory, the sole first-order ($\mathcal{O}\left(b^{2}\right)$) contribution to the energy difference $E\left(q\right)-E_{0}$ comes from the expectation value \begin{equation} \frac{\pi}{2}\mu^{2}b^{2}\bar{\rho}\left\langle n_{q}\left|O_{2}\right|n_{q}\right\rangle . 
\end{equation} The matrix element is easily evaluated using the mode expansion (\ref{eq:modeexp}), the explicit form of Fock vectors (\ref{eq:fockv}) and the commutation relations of ladder operators. As a result, one gets \begin{equation} E\left(q\right)-E_{0}=\omega_{n_{q}}+b^{2}\frac{\pi\mu^{2}\bar{\rho}}{2\omega_{n_{q}}}+\mathcal{O}\left(b^{4}\right) \end{equation} \subsubsection{$\mathcal{O}\left(b^{4}\right)$ correction} The second-order corrections can be obtained by a longer but largely straightforward calculation. The general scheme of the computation can be summarized in the following steps. \begin{enumerate} \item First, observe that due to the absolute square appearing in (\ref{eq:2ndenergy}) and the fact that $V_{L}$ starts with terms proportional to $b^{2}$, only $V_{L}^{(1)}$ contributes to this order. In addition, $O_{2}$ and $O_{4}$ will only have nonzero matrix elements between states of equal overall momentum. Furthermore, due to normal ordering, the following restrictions apply to the Hilbert space sum in the above formula: \begin{enumerate} \item $\left\langle k\left|O_{2}\right|0\right\rangle $ is only nonzero if $\left|k\right\rangle $ is a two-particle state; \item $\left\langle k\left|O_{4}\right|0\right\rangle $ is only nonzero if $\left|k\right\rangle $ is a four-particle state; \item $\left\langle k\left|O_{2}\right|n_{q}\right\rangle $ is only nonzero if $\left|k\right\rangle $ is a three-particle state (for a one-particle state, $\left|k\right\rangle $ should be equal to $\left|n_{q}\right\rangle $ due to momentum conservation. However, this term is excluded from the sum); \item $\left\langle k\left|O_{4}\right|n_{q}\right\rangle $ is only nonzero for $\left|k\right\rangle $ containing either 3 or 5 particles. 
\end{enumerate} \item One then evaluates the relevant matrix elements $\left\langle k_{1},k_{2}\left|O_{2}\right|0\right\rangle $, $\left\langle k_{1},k_{2},k_{3},k_{4}\left|O_{4}\right|0\right\rangle $, $\left\langle k_{1},k_{2},k_{3}\left|O_{2}\right|n_{q}\right\rangle $,\linebreak{} $\left\langle k_{1},k_{2},k_{3}\left|O_{4}\right|n_{q}\right\rangle $ and $\left\langle k_{1},k_{2},k_{3},k_{4},k_{5}\left|O_{4}\right|n_{q}\right\rangle $ by commuting creation and annihilation operators. After collecting the symmetry factors arising from the fact that some subsets of $\left\{ k_{i}\right\} $ might be equal, these matrix elements turn out to be simply proportional to the symbol $1/\mathcal{N}\left(\left\{ k_{i}\right\} \right)$ (which actually comes from normalization), and generally consist of a sum of multiple terms containing a product of momentum-dependent Kronecker-deltas. For example, \[ \left\langle k_{1},k_{2},k_{3}\left|O_{2}\right|n_{q}\right\rangle =\frac{1}{\mathcal{N}\left(k_{1},k_{2},k_{3}\right)}\left(\frac{\delta_{k_{3},n_{q}}\delta_{k_{1}+k_{2},0}}{\omega_{k_{1}}}+\frac{\delta_{k_{2},n_{q}}\delta_{k_{1}+k_{3},0}}{\omega_{k_{1}}}+\frac{\delta_{k_{1},n_{q}}\delta_{k_{2}+k_{3},0}}{\omega_{k_{2}}}\right) \] \item Finally, these matrix elements are substituted back into (\ref{eq:2ndenergy}) and the appropriately restricted sums are reduced with the aid of the Kronecker deltas. Here another symmetry factor arises due to the fact that permutations of the order of different momenta as written inside a Fock vector $\left|k_{1},\dots,k_{i}\right\rangle $ denote the same vector in the Hilbert space. This symmetry factor actually cancels the $1/\mathcal{N}$s coming from the matrix elements. \end{enumerate} This calculation leads to a linear combination of single, double and triple sums. 
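The ladder-operator algebra behind such matrix elements can be verified on a small truncated Fock space. The Python sketch below (the values of $\mu$ and $L$ are arbitrary) uses the representation $O_{2}=\sum_{n}\left[\frac{1}{2\omega_{n}}\left(a_{n}a_{-n}+a_{n}^{\dagger}a_{-n}^{\dagger}\right)+\frac{1}{\omega_{n}}a_{n}^{\dagger}a_{n}\right]$, which follows from inserting the mode expansion (\ref{eq:modeexp}) into $O_{2}$ and integrating over $x$; it reproduces the example matrix element above, including a case with a nontrivial symmetry factor:

```python
import numpy as np

mu, L = 1.0, 10.0            # arbitrary parameters
modes = [-2, -1, 0, 1, 2]
d = 3                        # occupation cutoff 2 per mode, enough for the states below

def omega(n):
    return np.sqrt(mu**2 + (2*np.pi*n/L)**2)

a1 = np.diag(np.sqrt(np.arange(1.0, d)), k=1)   # a|n> = sqrt(n)|n-1>
I1 = np.eye(d)

def embed(op, mode):
    """Kronecker-embed a single-mode operator into the multi-mode space."""
    out = np.array([[1.0]])
    for m in modes:
        out = np.kron(out, op if m == mode else I1)
    return out

a = {m: embed(a1, m) for m in modes}
ad = {m: a[m].T for m in modes}

# O2 = sum_n [ (a_n a_{-n} + a_n^+ a_{-n}^+)/(2 w_n) + a_n^+ a_n / w_n ]
O2 = sum((a[m] @ a[-m] + ad[m] @ ad[-m])/(2*omega(m)) + ad[m] @ a[m]/omega(m)
         for m in modes)

vac = np.zeros(d**len(modes)); vac[0] = 1.0

def state(*ms):
    """normalized Fock state created by one a^+ per listed mode"""
    v = vac.copy()
    for m in ms:
        v = ad[m] @ v
    return v/np.linalg.norm(v)

q = state(1)                       # |n_q = 1>
me1 = state(2, -2, 1) @ O2 @ q     # formula predicts 1/w_2
me2 = state(1, 1, -1) @ O2 @ q     # formula predicts sqrt(2)/w_1 (symmetry factor)
print(me1, me2)
```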
The triple sums fortunately cancel, and we arrive at the following $\mathcal{O}\left(b^{4}\right)$ result: \begin{eqnarray} E\left(q\right)-E_{0} & = & \omega_{n_{q}}+b^{2}\frac{\pi}{2}\frac{\mu^{2}}{\omega_{n_{q}}}\bar{\rho}+b^{4}\left(\frac{\pi^{2}}{4}\frac{\mu^{2}}{\omega_{n_{q}}}\bar{\rho}^{2}-\frac{\pi^{2}}{8}\frac{\mu^{4}}{\omega_{n_{q}}^{3}}\bar{\rho}^{2}\right)-b^{4}\frac{\pi^{2}}{2}\frac{\mu^{4}}{\omega_{n_{q}}}\bar{\rho}\frac{1}{L}\sum_{k\in\mathbb{Z}}\frac{1}{\omega_{k}^{3}}\nonumber \\ & & -b^{4}\frac{2\pi^{2}}{3}\frac{\mu^{4}}{\omega_{n_{q}}}\frac{1}{L^{2}}\sum_{k_{1},k_{2}\in\mathbb{Z}}D_{1}\left(k_{1},k_{2}\right)+\mathcal{O}\left(b^{6}\right)\label{eq:energyb4} \end{eqnarray} where \begin{align} D_{1}(k_{1},k_{2}) =&\; \frac{1}{\omega_{k_{1}} \omega_{k_{2}} \omega_{k_{1}+k_{2}-n_{q}}} \left( \frac{1}{\omega_{k_{1}}+\omega_{k_{2}} + \omega_{k_{1} + k_{2}-n_{q}} + \omega_{n_{q}}} \right. \cr & \left. + \frac{1}{\omega_{k_{1}} + \omega_{k_{2}} + \omega_{k_{1}+k_{2}-n_{q}} -\omega_{n_{q}}} \right). \label{eq:D1} \end{align} \subsubsection{Extracting L\"uscher corrections} The single and double sums appearing in (\ref{eq:energyb4}) can be transformed into integrals by a method well-known from complex analysis. The idea is to introduce a complex function with an infinite number of poles at appropriate positions along the real line, such that one can reproduce a sum by means of a contour integral. In general, for a sum $\sum_{n\in\mathbb{Z}}f\left(\frac{2\pi n}{L}\right)$, one such integral representation is provided by \begin{equation} \sum_{n\in\mathbb{Z}}f\left(\frac{2\pi n}{L}\right)=\frac{L}{2\pi}\intop_{C}dz\frac{e^{iLz}}{e^{iLz}-1}f\left(z\right) \end{equation} where the closed contour $C$ moves from $-\infty-i\epsilon$ to $+\infty-i\epsilon$ infinitesimally below the real line, then from $\infty+i\epsilon$ to $-\infty+i\epsilon$ just above the real line (additional care is needed when $f\left(z\right)$ is not holomorphic in the region enclosed by $C$). 
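This representation can be tested directly in Python by evaluating the two straight-line pieces of $C$ a finite distance $\epsilon$ from the real axis (the Gaussian test function and all parameters are arbitrary choices):

```python
import numpy as np

L, eps = 2.0, 0.4
f = lambda z: np.exp(-z**2/4)                        # entire, fast-decaying test function
w = lambda z: np.exp(1j*L*z)/(np.exp(1j*L*z) - 1)    # poles at z = 2*pi*n/L

lhs = sum(f(2*np.pi*n/L) for n in range(-30, 31))    # the plain mode sum

# contour: lower line left-to-right, upper line right-to-left (hence the minus sign)
dx = 0.001
x = np.arange(-12.0, 12.0, dx)
lower = np.sum(w(x - 1j*eps)*f(x - 1j*eps))*dx
upper = np.sum(w(x + 1j*eps)*f(x + 1j*eps))*dx
rhs = L/(2*np.pi)*(lower - upper)
print(lhs, rhs)      # rhs is real up to rounding and equals lhs
```

Each pole of the weight carries residue $f(2\pi n/L)/(iL)$, so the counterclockwise contour reproduces the sum.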
Then the contour $C$ can be blown up assuming $f\left(z\right)$ is analytic on the complex plane, except possibly for a number of poles and branch cuts. If $f\left(z\right)$ decays rapidly enough at complex infinity, the original sum can be turned into a combination of residue terms and integrals corresponding to the poles and branch cuts of $f\left(z\right)$. For example, for the sum $\sum_{n\in\mathbb{Z}}\frac{1}{\omega_{n}^{3}}$ the function $f\left(z\right)=\left(\mu^{2}+z^{2}\right)^{-3/2}$ has no additional poles. It has, however, two branch points at $z=\pm i\mu$. The two branch cuts lie along the imaginary axis of the $z$ plane. One connects $i\mu$ and $i\infty$, the other starts at $-i\mu$ and goes down to $-i\infty$. Upon deforming the contour, the neighborhood of the branch cut singularities needs careful analysis. The integrals coming from tightening the contour to the lower and upper branch cuts can be mapped onto each other. Then, after the variable change $z=i\mu\cosh u$ and symmetrization in the integration domain, one gets \begin{equation} \sum_{k=-\infty}^{\infty}\frac{1}{\omega_{k}^{3}}=\frac{L}{\pi\mu^{2}}\left(1+\mu L\intop_{-\infty}^{\infty}du\frac{e^{\mu L\cosh u}}{\left(e^{\mu L\cosh u}-1\right)^{2}}\cosh u\right)\label{eq:singsum1} \end{equation} Transforming the double sum is more complicated. 
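Identity (\ref{eq:singsum1}) is exact in the volume and lends itself to a direct numerical check (a Python sketch with arbitrary $\mu$ and $L$; the integrand is rewritten as $e^{-\mu L\cosh u}\cosh u/(1-e^{-\mu L\cosh u})^{2}$ to avoid overflow):

```python
import numpy as np
from scipy.integrate import quad

mu, L = 1.0, 2.0
k = np.arange(-10**5, 10**5 + 1)
w = np.sqrt(mu**2 + (2*np.pi*k/L)**2)
lhs = np.sum(w**-3.0)                      # truncated mode sum, tail ~ 1/N^2

def integrand(u):
    # overflow-safe form of exp(mu L cosh u)/(exp(mu L cosh u) - 1)^2 * cosh u
    e = np.exp(-mu*L*np.cosh(u))
    return e/(1.0 - e)**2*np.cosh(u)

integral, _ = quad(integrand, -10, 10)
rhs = L/(np.pi*mu**2)*(1.0 + mu*L*integral)
print(lhs, rhs)
```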
It is advantageous for later purposes to separate the $k_{2}=n_{q}$ part of the sum: \begin{equation} \sum_{k_{1},k_{2}\in\mathbb{Z}}D_{1}\left(k_{1},k_{2}\right)=\sum_{k_{1}\in\mathbb{Z}}\sum_{k_{2}\neq n_{q}}D_{1}\left(k_{1},k_{2}\right)+\frac{1}{2\omega_{n_{q}}}\sum_{k_{1}\in\mathbb{Z}}\frac{1}{\omega_{k_{1}}^{3}}+\frac{1}{2\omega_{n_{q}}}\sum_{k_{1}\in\mathbb{Z}}\frac{1}{\omega_{k_{1}}^{2}}\frac{1}{\omega_{k_{1}}+\omega_{n_{q}}}\label{eq:separsum} \end{equation} The last term is easily seen to be a special case of the integral formula \begin{equation} \sum_{k_{1}\in\mathbb{Z}}\frac{1}{\omega_{k_{1}}^{2}}\frac{1}{A+\omega_{k_{1}}}=\frac{L}{2A\mu}\coth\frac{\mu L}{2}-\frac{L}{2\pi}\intop_{-\infty}^{\infty}du\frac{\coth\left(\frac{\mu L}{2}\cosh u\right)}{A^{2}+\mu^{2}\sinh^{2}u}.\label{eq:singsum2} \end{equation} After a lengthy calculation, which we spell out in detail in Appendix B, we obtain the following nice representation of the double sum: \begin{equation} \sum_{k_{1},k_{2}}D_{1}\left(k_{1},k_{2}\right)=\frac{L^{2}}{\mu^{2}}\left(\frac{1}{8}+3\intop_{-\infty}^{\infty}\frac{du}{2\pi}\frac{e^{\mu L\cosh u}}{\left(e^{\mu L\cosh u}-1\right)^{2}}\frac{1}{\cosh\left(u-\theta\right)}\right)\label{eq:dsum1-int} \end{equation} where we introduced the rapidity variable $\theta$ through $q=\mu\sinh\theta$. 
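Formula (\ref{eq:singsum2}) can be checked numerically in the same way (a Python sketch; the values of $\mu$, $L$ and $A$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

mu, L, A = 1.0, 2.0, 1.7
k = np.arange(-10**5, 10**5 + 1)
w = np.sqrt(mu**2 + (2*np.pi*k/L)**2)
lhs = np.sum(1.0/(w**2*(A + w)))           # truncated mode sum

coth = lambda x: 1.0/np.tanh(x)
integrand = lambda u: coth(mu*L/2*np.cosh(u))/(A**2 + mu**2*np.sinh(u)**2)
integral, _ = quad(integrand, -30, 30)
rhs = L/(2*A*mu)*coth(mu*L/2) - L/(2*np.pi)*integral
print(lhs, rhs)
```

The first term on the right-hand side is just $\frac{1}{A}\sum_{k}\omega_{k}^{-2}$, while the $\coth$ in the integrand resums all exponential volume corrections.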
We can now give an integral representation of the $\mathcal{O}\left(b^{4}\right)$ one-particle energy, from which all the $\mathcal{O}\left(b^{4}\right)$ L\"uscher corrections can be read off directly: \begin{align} E\left(\theta\right)-E_{0} =&\, \mu\cosh\theta + b^{2}\frac{\pi}{2} \frac{\mu}{\cosh\theta}\bar{\rho} + b^{4}\left(\frac{\pi^{2}}{4}\frac{\mu}{\cosh\theta}\bar{\rho}^{2} - \frac{\pi^{2}}{8}\frac{\mu}{\cosh^{3}\theta}\bar{\rho}^{2}\right) \cr & - b^{4}\pi^{2} \frac{\mu}{\cosh\theta}\bar{\rho}\frac{1}{2\pi} - b^{4}\pi^{2}\frac{\mu}{\cosh\theta}\bar{\rho}\mu L\intop_{-\infty}^{\infty} \frac{du}{2\pi} \frac{e^{\mu L\cosh u}}{\left(e^{\mu L\cosh u}-1\right)^{2}}\cosh u - b^{4} \frac{\pi^{2}}{12} \frac{\mu}{\cosh\theta} \cr & - b^{4}2\pi^{2}\frac{\mu}{\cosh\theta} \intop_{-\infty}^{\infty}\frac{du}{2\pi} \frac{e^{\mu L\cosh u}}{\left(e^{\mu L\cosh u}-1\right)^{2}} \frac{1}{\cosh\left(u-\theta\right)} + \mathcal{O}\left(b^{6}\right) \label{eq:energb} \end{align} As a final step we expand this result in the bootstrap parameter $\alpha$ \begin{equation} \alpha=\sin\frac{\pi b^{2}}{1+b^{2}}\Leftrightarrow b^{2}=\frac{\alpha}{\pi}+\frac{\alpha^{2}}{\pi^{2}}+\mathcal{O}\left(\alpha^{3}\right) \end{equation} up to $\mathcal{O}\left(\alpha^{2}\right)$. The $\mathcal{O}\left(\alpha^{2}\right)$ term arising from the $\mathcal{O}\left(b^{2}\right)$ correction of the energy cancels with another term in (\ref{eq:energb}). Using the hyperbolic identity \begin{equation} \frac{\cosh u}{\cosh\theta}=\frac{1}{\cosh\left(u-\theta\right)}+\frac{\sinh u}{\cosh\theta}\tanh\left(u-\theta\right) \end{equation} and performing an integration by parts, we arrive at \begin{align}\label{luscher correction simplified} E\left(\theta\right)-E_{0} =\; & \mu\cosh\theta+\alpha\frac{\mu\bar{\rho}}{2\cosh\theta}-\frac{\alpha^{2}}{12}\frac{\mu}{\cosh\theta} + \frac{\alpha^{2}\mu}{\cosh\theta} \left[ \left(1+\tanh^{2}\theta\right)\frac{\bar{\rho}^{2}}{8} \right. \cr & \left. 
- \left(\frac{\mu L\bar{\rho}}{2}\cosh\theta +1\right)\bar{\xi_{1}} \left(\theta\right)-\frac{\bar{\rho}}{2}\bar{f_{2}}\left(\theta\right)\right] + \mathcal{O}\left(\alpha^{3} \right) \end{align} where we introduced the functions \begin{eqnarray} \bar{\xi_{1}}\left(\theta\right) & = & \intop_{-\infty}^{\infty}\frac{du}{\pi}\frac{e^{\mu L\cosh u}}{\left(e^{\mu L\cosh u}-1\right)^{2}}\frac{1}{\cosh\left(u-\theta\right)}\\ \bar{f_{k}}\left(\theta\right) & = & \intop_{-\infty}^{\infty}\frac{du}{\pi}\frac{1}{e^{\mu L\cosh u}-1}\frac{1}{\cosh^{k}\left(u-\theta\right)} \end{eqnarray} However, since the first correction to the physical mass is of order $\alpha^{2}$, in an $\mathcal{O}\left(\alpha^{2}\right)$ formula we can simply omit the bars, and we arrive at the TBA result derived in Appendix A. \subsection{Corrections to the form factor $\left\langle 0\left(b\right)\left|\varphi\right|q\left(b\right)\right\rangle $} By an analogous, albeit more cumbersome calculation, we can obtain the coupling-expanded finite volume form factors and extract their first L\"uscher correction. Using the eigenstate expansion (\ref{eq:eigenvec}), we can expand the form factor as \begin{eqnarray} \left\langle 0\left(b\right)\left|\varphi\right|q\left(b\right)\right\rangle = & \left\langle 0\left|\varphi\right|q\right\rangle & \quad\left(\text{order }b^{0}\right)\\ & +\left\langle 0^{\left(1\right)}\left|\varphi\right|q\right\rangle +\left\langle 0\left|\varphi\right|q^{\left(1\right)}\right\rangle & \quad\left(\text{order }b^{2}\right)\\ & +\left\langle 0^{\left(2\right)}\left|\varphi\right|q\right\rangle +\left\langle 0^{\left(1\right)}\left|\varphi\right|q^{\left(1\right)}\right\rangle +\left\langle 0\left|\varphi\right|q^{\left(2\right)}\right\rangle & \quad\left(\text{order }b^{4}\right)\\ & +\mathcal{O}\left(b^{6}\right). 
\end{eqnarray} Note that since we are effectively working in the Schr\"odinger picture, operators are time-independent, and as a consequence, we can use the free field operator (\ref{eq:modeexp}) in calculating these matrix elements. \subsubsection{$\mathcal{O}\left(b^{2}\right)$ correction} The zeroth-order term $\left\langle 0\left|\varphi\right|q\right\rangle $ is easily evaluated and in our normalization its value is \begin{equation} \left\langle 0\left|\varphi\right|q\right\rangle =\frac{1}{\sqrt{2L\omega_{n_{q}}}}. \end{equation} The first order contribution comes solely from the $\left\langle 0\left|\varphi\right|q^{\left(1\right)}\right\rangle $ term. It takes the form \begin{equation} \left\langle 0\left|\varphi\right|q^{\left(1\right)}\right\rangle = - \frac{1}{\sqrt{2L\omega_{n_{q}}}} \frac{\mu^{2}b^{2}\pi\bar\rho}{4\omega_{n_{q}}^{2}} \end{equation} \subsubsection{$\mathcal{O}\left(b^{4}\right)$ correction} Following the steps outlined in subsection 5.3.2, an even lengthier calculation leads us to an explicit (volume-exact) order $b^{4}$ correction to the form factor in the form of single, double and triple sums.\footnote{ It should be noted that no $O_{6}$ matrix element contributes to the result; therefore the infinite-volume limit of the results obtained here is the same as in the $\varphi^{4}$ theory. 
The finite-volume behaviour is, however, different from the $\varphi^{4}$ case because $V_{L}$ gets different corrections.} Again, the triple sums cancel, leading to \begin{align} \left\langle 0\left(b\right)\left|\varphi\right|q\left(b\right)\right\rangle =\;& \frac{1}{\sqrt{2L\omega_{n_{q}}}}\Bigl \{ 1-\frac{\mu^{2}b^{2}\pi\bar{\rho}}{4\omega_{n_{q}}^{2}} + N_{0} + \frac{\mu^{4}\pi^{2}b^{4}\bar{\rho}}{8L}\sum_{k}S_{1}\left(k\right) \cr & + \frac{\mu^{4}\pi^{2}b^{4}}{3L^{2}} \sum_{k_{1},k_{2}} \Bigl( \frac{D_{1}\left(k_{1},k_{2}\right)}{\omega_{n_{q}}^{2}} + \frac{D_{2}\left( k_{1},k_{2} \right)}{\omega_{n_{q}}} \Bigr ) \Bigr\} \label{eq:formfac} \end{align} where \begin{eqnarray} N_{0}&=&\frac{\pi^{2}b^{4}\bar{\rho}^{2}}{8}\left(\frac{5\mu^{4}}{4\omega_{n_{q}}^{4}}-\frac{\mu^{2}}{\omega_{n_{q}}^{2}}\right) \cr S_{1}\left(k\right) & = & \frac{1}{\omega_{n_{q}}^{2}\omega_{k}^{3}}+\frac{1}{\omega_{n_{q}}^{2}\omega_{k}^{2}\left(\omega_{n_{q}}+\omega_{k}\right)} +\frac{1}{\omega_{n_{q}}\omega_{k}^{3}\left(\omega_{n_{q}}+\omega_{k}\right)} \cr D_{2}\left(k_{1},k_{2}\right) & = & \frac{1}{\omega_{k_{1}}\omega_{k_{2}}\omega_{k_{1}+k_{2}-n_{q}}}\Bigl(\Bigl(\frac{1}{\omega_{k_{1}}+\omega_{k_{2}} +\omega_{k_{1}+k_{2}-n_{q}}+\omega_{n_{q}}}\Bigr)^{2} \cr &-& \Bigl(\frac{1}{\omega_{k_{1}}+\omega_{k_{2}}+\omega_{k_{1}+k_{2}-n_{q}}-\omega_{n_{q}}}\Bigr)^{2}\Bigr) \end{eqnarray} and $D_{1}\left(k_{1},k_{2}\right)$ was defined in (\ref{eq:D1}). \subsubsection{Extracting first L\"uscher correction} We proceed with the complex-analytic method presented previously to transform the sums into integrals, from which the L\"uscher corrections can be obtained. The integral representations of some (parts) of these sums have already been presented in formulas (\ref{eq:singsum1}), (\ref{eq:singsum2}) and (\ref{eq:dsum1-int}). 
The only single sum appearing in $\sum_{k}S_{1}\left(k\right)$ not covered before is a special case of the following sum, which possesses the integral representation \begin{eqnarray} \sum_{k}\frac{1}{\omega_{k}^{3}\left(A+\omega_{k}\right)} & = & -\frac{L}{2\mu A^{2}}\coth\left(\frac{\mu L}{2}\right) +\frac{LA}{\mu^{2}}\intop_{-\infty}^{\infty}\frac{du}{2\pi} \left[\frac{\mu L\cosh u}{2\sinh^{2}\left(\frac{\mu L}{2}\cosh u\right)\left(A^{2}+\mu^{2}\sinh^{2}u\right)} \right. \cr & & \left. + \frac{2\mu^{2}\cosh^{2}u\coth\left(\frac{\mu L}{2}\cosh u\right)}{\left(A^{2}+\mu^{2}\sinh^{2}u\right)^{2}}\right] \label{eq:singsum3} \end{eqnarray} The transformation of the double sum $\sum_{k_{1},k_{2}}D_{2}\left(k_{1},k_{2}\right)$ into integrals can be done similarly to the case of $\sum_{k_{1},k_{2}}D_{1}\left(k_{1},k_{2}\right)$, which then can be expanded for large volumes. The detailed calculations are relegated to Appendix B and result in (using the shorthand $w=u-\theta$) \begin{align} \left\langle 0\left(b\right)\left|\varphi\right|q\left(b\right)\right\rangle =\; & \frac{1}{\sqrt{2L\mu\cosh\theta}}\left\{ 1-\alpha\intop\frac{du}{2\pi}\left[\frac{e^{-\mu L\cosh u}}{\cosh^{2}\theta}\right]+\alpha^{2}\left(\frac{1}{48}+\frac{1}{24\cosh^{2}\theta} \right.\right. \cr & \left. -\frac{1}{4\pi^{2}} \right) + \alpha^{2}\intop_{-\infty}^{\infty}\frac{du}{2\pi}e^{-\mu L\cosh u}\left[\frac{\sinh u\sinh\theta}{\cosh^{2}\theta\cosh^{2}w}+\frac{2}{\cosh^{2}\theta\cosh w} \right. \cr & \left. 
-\frac{1}{\cosh^{3}w} + \frac{2}{\pi}\left(\frac{w\sinh w}{\cosh^{3}w}-\frac{1}{\cosh^{2}w} \right) \right] +\dots \label{eq:form_final1} \end{align} Finally, expressing the rhs in terms of the physical mass \begin{equation} m=\mu-\frac{\alpha^{2}}{12}\mu+\mathcal{O}\left(\alpha^{3}\right) \end{equation} we obtain our final result: \begin{align}\label{FF luscher correction} \left\langle 0\left(b\right)\left|\varphi\right|q\left(b\right)\right\rangle =\; & \frac{1}{\sqrt{2Lm\cosh\theta}}\left\{ 1-\alpha\intop\frac{du}{2\pi}\left[\frac{e^{-mL\cosh u}}{\cosh^{2}\theta}\right]+\alpha^{2}\left(\frac{1}{48}-\frac{1}{4\pi^{2}}\right)\right. \cr & + \alpha^{2}\intop_{-\infty}^{\infty}\frac{du}{2\pi}e^{-mL\cosh u} \left[ \frac{\sinh u\sinh\theta}{\cosh^{2}\theta\cosh^{2}w} +\frac{2}{\cosh^{2}\theta\cosh w} - \frac{1}{\cosh^{3}w} \right. \cr &\left. + \frac{2}{\pi}\left( \frac{w\sinh w}{\cosh^{3}w} - \frac{1}{\cosh^{2}w} \right) \right] + \dots \end{align} which completely agrees with the perturbative expansion of our exact L\"uscher correction. We perform an alternative check using Lagrangian perturbation theory in Appendix \ref{AppendixLPT}. \section{Overview of the method and summary of the results} In the following we analyze a relativistic integrable QFT in two dimensions with a single particle of mass $m$ and scattering matrix $S(\theta)$, which satisfies unitarity and crossing symmetry $ S(\theta)=S(-\theta)^{-1}=S(i\pi -\theta)$ and does not have any pole in the physical strip. The sinh-Gordon theory is such a theory, and the generalization to several species with diagonal scattering is straightforward. We put this QFT in a finite volume of size $L$ and focus on the finite size energy spectrum and the finite size form factors. \subsection{Finite size energy spectrum} We analyze the energy of an $N$ particle state with rapidities $\theta_k$, $k=1,\dots,N$. 
As explained in the introduction, the polynomial corrections come from the quantization of momenta formulated by the Bethe-Yang equations \begin{equation} \epsilon^{(0)}(\theta_{j}^{(0)}+i\frac{\pi}{2})=i(2n_{j}+1)\pi\quad;\qquad\epsilon^{(0)}(\theta+i\frac{\pi}{2})=imL\sinh\theta+\sum_{k}\log S(\theta-\theta_{k}^{(0)}) \end{equation} where, by the superscript $(0)$, we indicated that only the polynomial volume corrections are kept. Given the integers $n_{1},\dots,n_{N}$, the rapidities $\theta_{1}^{(0)},\dots,\theta_{N}^{(0)}$ can be determined, leading to the energy formula \begin{equation} E_{N}(L)=\sum_{i}m\cosh\theta_{i}^{(0)}+O(e^{-mL}) \end{equation} The leading exponential correction was conjectured in \cite{BaJa} and has two sources. First, one has to take into account how the sea of virtual particles changes the quantization condition \begin{eqnarray} \epsilon^{(1)}(\theta_{j}^{(1)}+i\frac{\pi}{2})=i(2n_{j}+1)\pi\quad;\qquad\epsilon^{(1)}(\theta)=\epsilon^{(0)}(\theta)+\delta\epsilon(\theta)\nonumber \\ \delta\epsilon(\theta) = i\int_{-\infty}^{\infty} \frac{d\theta^{'}}{2\pi} \frac{S'(\theta-\theta^{'})}{S(\theta-\theta^{'})} \prod_{k} S(i\frac{\pi}{2}+\theta_{k}^{(0)}-\theta^{'}) e^{-mL\cosh\theta^{'}} \end{eqnarray} where $S'(\theta)$ denotes $\frac{dS(\theta)}{d\theta}$. We then have to add the direct energy contribution of the virtual particles. 
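As an illustration, the Bethe--Yang equations are easy to solve numerically. The Python sketch below takes the sinh-Gordon S-matrix $S(\theta)=\frac{\sinh\theta-i\sin\pi p}{\sinh\theta+i\sin\pi p}$ as an assumed input (its explicit form is not quoted in this section, and the values of $p$, $m$ and $L$ are arbitrary), for a zero-momentum two-particle state with $n_{1}=-n_{2}=1$, where parity implies $\theta_{2}^{(0)}=-\theta_{1}^{(0)}$:

```python
import numpy as np
from scipy.optimize import brentq

m, L, p = 1.0, 8.0, 0.3          # assumed parameters
def phase(th):
    # scattering phase delta(th) = -i log S(th) for th > 0, with S(0) = -1
    return -2.0*np.arctan(np.sin(np.pi*p)/np.sinh(th))

# Bethe-Yang for n1 = 1, n2 = -1:  m L sinh(th1) + delta(th1 - th2) = 2 pi n1,
# and th2 = -th1 by parity, so a single equation remains
by = lambda th: m*L*np.sinh(th) + phase(2.0*th) - 2.0*np.pi
th1 = brentq(by, 1e-8, 5.0)

E2 = 2.0*m*np.cosh(th1)                  # polynomial-order two-particle energy
th_free = np.arcsinh(2.0*np.pi/(m*L))    # free quantization, for comparison
print(th1, th_free, E2)
```

Since the phase shift is negative, the interacting rapidity is pushed above the free quantization value.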
By expressing all contributions in terms of the leading rapidities, $\theta_{j}^{(0)}$, we have: \begin{eqnarray} E_{N}(L) & = & \sum_{k}m\cosh\theta_{k}^{(0)}+i\sum_{k,j}m\sinh\theta_{k}^{(0)}\left(\bar{\rho}_{N}^{(0)}\right)^{kj}\delta\epsilon(\theta_{j}^{(0)}+i\frac{\pi}{2})\nonumber \\ & & -m\int_{-\infty}^{\infty}\frac{d\theta}{2\pi}\,\cosh\theta\,\prod_{k}S(\frac{i\pi}{2}+\theta-\theta_{k}^{(0)})e^{-mL\cosh\theta} \end{eqnarray} where $\bar{\rho}_{N}^{(0)}$ is the inverse of the matrix $ \rho_N ^{(0)}$ with entries $\rho_{jk}^{(0)}=-i\partial_{\theta_j^{(0)}}\epsilon^{(0)}(\theta_{k}^{(0)}+i\frac{\pi}{2})$. The exact equations come either from an analytical continuation of the groundstate TBA result \cite{Dorey:1996re, Bajnok:2010ke} or from a continuum limit of a solved integrable lattice regularization \cite{Teschner:2007ng}. The quantization condition for the exact rapidities $\theta_{j}$ is \begin{equation} \epsilon(\theta_{j}+i\frac{\pi}{2})=i(2n_{j}+1)\pi \end{equation} where $\epsilon$ satisfies the coupled non-linear integral equation \begin{equation} \epsilon(\theta)=mL\cosh\theta+\sum_{j}\log S(\theta-\theta_{j}-\frac{i\pi}{2})+i\int_{-\infty}^{\infty}\frac{d\theta^{'}}{2\pi}\frac{S'(\theta-\theta^{'})}{S(\theta-\theta^{'})}\log(1+e^{-\epsilon(\theta^{'})}) \end{equation} and the energy is \begin{equation} E_{N}(L)=m\sum_{i}\cosh\theta_{i}-m\int_{-\infty}^{\infty}\frac{d\theta}{2\pi}\,\cosh\theta\,\log(1+e^{-\epsilon(\theta)}) \end{equation} In particular, for a moving one-particle state at leading order we obtain \begin{equation} -i\epsilon^{(0)}(\theta_{1}^{(0)}+i\frac{\pi}{2})=mL\sinh\theta_{1}^{(0)}+\pi=(2n_{1}+1)\pi \end{equation} and the corresponding energy is \begin{equation} E_{1}(L)=m\cosh\theta_{1}^{(0)}+O(e^{-mL}) \end{equation} The leading exponential correction of the quantization condition contains an extra term of the form \begin{eqnarray}\label{eq:leading exponential correction} \delta\epsilon 
\left(\theta_{1}^{(0)}+i\frac{\pi}{2}\right) = i\int_{-\infty}^{\infty} \frac{d\theta^{'}}{2\pi} S'(i\frac{\pi}{2} + \theta_{1}^{(0)} - \theta^{'}) e^{-mL\cosh\theta^{'}} \end{eqnarray} The one-particle energy (measured from the finite volume vacuum) is \cite{KlMe,JaLu}: \begin{eqnarray}\label{eq:1ptenergy} E_{1}(L)-E_{0}(L) & = & m\cosh\theta_{1}^{(0)}-\\ \nonumber && \frac{m}{\cosh\theta_{1}^{(0)}}\int_{-\infty}^{\infty}\frac{d\theta}{2\pi}\,\cosh(\theta-\theta_{1}^{(0)})\,(S(\frac{i\pi}{2} +\theta-\theta_{1}^{(0)})-1)e^{-mL\cosh\theta} \end{eqnarray} We will reproduce this result from the study of the finite volume two-point function. \subsection{Finite size form factors} Form factors are defined as the matrix elements of local operators sandwiched between finite volume energy eigenstates. These states are normalized to Kronecker-$\delta$ functions \begin{equation} \langle n_{1}',\dots , n_{M}'\vert n_{1},\dots,n_{N}\rangle_{L}=\delta_{N,M}\prod_{j}\delta_{n_{j}'n_{j}} \end{equation} as opposed to infinite volume states, which are normalized to Dirac-$\delta$ functions: $\langle\theta'\vert\theta\rangle=\delta(\theta'-\theta)$. The finite volume states can be equivalently labeled by the rapidities $\vert n_{1},\dots,n_{N}\rangle_{L}\equiv\vert\theta_{1},\dots,\theta_{N}\rangle_{L}$. The space-time dependence of the form factors can be easily calculated: \begin{equation} \langle\theta_{1}',\dots,\theta'_{M}\vert\mathcal{O}(x,t)\vert\theta_{1},\dots,\theta_{N}\rangle_{L}=e^{i\Delta Et-i\Delta Px}\langle\theta_{1}',\dots,\theta'_{M}\vert\mathcal{O}\vert\theta_{1},\dots,\theta_{N}\rangle_{L}\label{eq:FFxtdep} \end{equation} where $\Delta E=E_{M}(L)-E_{N}(L)$ and $\Delta P=P_{M}(L)-P_{N}(L)$ with $P_{N}(L)=\frac{2\pi}{L}\sum_{j}n_{j}$, while we simply abbreviated $\mathcal{O}(0,0)$ by $\mathcal{O}$.
The polynomial finite size corrections purely change the normalization of states and give \cite{Pozsgay:2007kn}: \begin{equation} \langle\theta_{1}',\dots,\theta'_{M}\vert\mathcal{O}\vert\theta_{1},\dots,\theta_{N}\rangle_{L}=\frac{F_{M+N}^{\mathcal{O}}(\theta_{1}'+i\pi,\dots,\theta'_{M}+i\pi,\theta_{1},\dots,\theta_{N})}{\sqrt{(2\pi)^{-N-M}\det\rho_{M}^{(0)}\det\rho_{N}^{(0)}}}+O(e^{-mL}) \label{FVFF} \end{equation} where $F_{M+N}^{\mathcal{O}}$ denotes the infinite volume form factor \begin{equation} F_{M+N}^{\mathcal{O}}(\theta_{1}',\dots,\theta'_{M},\theta_{1},\dots,\theta_{N})=\langle0\vert\mathcal{O}\vert\theta_{1}',\dots,\theta'_{M},\theta_{1},\dots,\theta_{N}\rangle \end{equation} and all the rapidities can be taken at their leading order values with superscript $(0)$. Since even the leading exponential correction is not known for these form factors, we develop a systematic method based on the two-point function to calculate them. In particular, for the one-particle form factor the formulae simplify as \begin{equation} \langle0\vert\mathcal{O}\vert\theta_{1}\rangle_{L}=\frac{F_{1}^{\mathcal{O}}(\theta_{1}^{(0)})}{\sqrt{\rho_{1}^{(0)}/(2\pi)}}+O(e^{-mL}) \end{equation} where \begin{equation} \rho_{1}^{(0)}=-i\partial_{\theta_1^{(0)}}\epsilon^{(0)}(\theta_1^{(0)}+i\frac{\pi}{2})=mL\cosh\theta_{1}^{(0)} \end{equation} and the aim of our paper is to calculate the leading exponential corrections to these formulae. \subsection{Finite volume two-point function} Let us focus on the Euclidean finite volume two-point function, which is defined by the path integral% \footnote{We restrict our attention to the case when the two operators are the same.
The generalization for different operators is straightforward.% } \begin{equation} \langle\mathcal{O}(x,t)\mathcal{O}\rangle_{L}=\frac{\int[\mathcal{D}\phi]\mathcal{O}(x,t)\mathcal{O}(0,0)e^{-S[\phi]}}{\int[\mathcal{D}\phi]e^{-S[\phi]}} \end{equation} where configurations are periodic in $x$ with period $L$ and $t\in\mathbb{R}$. The momentum space form is obtained by its Fourier transform \begin{equation} \Gamma(\omega,q)=\frac{1}{L}\int_{-L/2}^{L/2}{\rm d}x\int_{-\infty}^{\infty}{\rm d}t\,{\rm e}^{i(\omega t+qx)}\langle\mathcal{O}(x,t)\mathcal{O}\rangle_{L} \end{equation} where periodicity in $x$ requires that $e^{iqL}=1$. Taking $t$ as Euclidean time, the two-point function is the vacuum expectation value of the time ordered product: \begin{equation} \langle\mathcal{O}(x,t)\mathcal{O}\rangle_{L}=\langle0\vert T(\mathcal{O}(x,t)\mathcal{O})\vert0\rangle_{L}=\Theta(t)\langle0\vert\mathcal{O}(x,t)\mathcal{O}\vert0\rangle_{L}+\Theta(-t)\langle0\vert\mathcal{O}\mathcal{O}(x,t)\vert0\rangle_{L} \end{equation} We can insert a complete system of finite volume energy-momentum eigenstates and use the Euclidean version of the space-time dependence (\ref{eq:FFxtdep}). By performing the integrals we obtain \begin{equation} \Gamma(\omega,q)=\sum_{N}\vert\langle0\vert\mathcal{O}\vert\theta_{1},\dots,\theta_{N}\rangle_{L}\vert^{2}\left\{ \frac{\delta_{q-P_{N}(L)}}{E_{N}(L)-i\omega}+\frac{\delta_{q+P_{N}(L)}}{E_{N}(L)+i\omega}\right\} \end{equation} For a fixed $q$ we can determine the energy levels by searching for poles in the analytically continued $\omega$. For a generic volume and fixed momentum $q$ the energy levels are never degenerate. Thus the poles are located at $\omega=\pm iE_{N}(L)$ with residue \begin{equation} \lim_{\omega\to \pm iE_{N}(L)}(E_{N}(L)\pm i\omega)\Gamma(\omega,\pm P_{N}(L))=\vert\langle0\vert\mathcal{O}\vert\theta_{1},\dots,\theta_{N}\rangle_{L}\vert^{2} \end{equation} which is nothing but the square of the finite volume form factor.
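The residue formula can be illustrated on a toy pole structure; the numbers below are purely hypothetical and only serve to show how the squared form factor is recovered from the pole at $\omega=iE_N(L)$:

```python
# Toy check of the residue formula: a single finite volume pole plus a
# regular background. FF and EN are hypothetical illustrative values.
FF, EN = 0.8, 1.3

def Gamma(omega):
    # Gamma(omega, q) near the pole at omega = i * EN
    return FF**2 / (EN + 1j * omega) + 0.25  # 0.25 mimics the regular terms

# approaching the pole, (EN + i omega) * Gamma(omega) -> |<0|O|state>|^2
for delta in (1e-2, 1e-4, 1e-6):
    omega = 1j * EN + delta
    residue = (EN + 1j * omega) * Gamma(omega)
    assert abs(residue - FF**2) < 2 * abs(delta)
```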
In order to obtain the exponential corrections of these form factors we have to expand the two-point function on the space-time cylinder in $L$. The Euclidean version of this cylinder can be thought of as the large size limit of the torus. On the torus we can exchange the role of the Euclidean time and space and represent the two-point function as \begin{equation} \langle\mathcal{O}(x,t)\mathcal{O}\rangle_{L}=\Theta(x)\frac{\mbox{Tr}[\mathcal{O}(0,t)e^{-Hx}\mathcal{O}e^{-H(L-x)}]}{\mbox{Tr}[e^{-HL}]}+\Theta(-x)\frac{\mbox{Tr}[\mathcal{O}e^{Hx}\mathcal{O}(0,t)e^{-H(L+x)}]}{\mbox{Tr}[e^{-HL}]} \end{equation} Inserting two complete systems of (mirror) states denoted by $\vert\mu\rangle$ and $\vert\nu\rangle$ and exploiting the $\vert\langle\nu\vert\mathcal{O}\vert\mu\rangle\vert=\vert\langle\mu\vert\mathcal{O}\vert\nu\rangle\vert$ symmetry together with $e^{iqL}=1$ we obtain: \begin{equation} Z\Gamma(\omega,q)=\frac{2\pi}{L}\sum_{\mu,\nu}\vert\langle\nu\vert\mathcal{O}\vert\mu\rangle\vert^{2}e^{-E_{\nu}L}\delta(P_{\mu}-P_{\nu}+\omega)\left\{ \frac{1}{E_{\mu}-E_{\nu}-iq}+\frac{1}{E_{\mu}-E_{\nu}+iq}\right\} \label{eq:mirror2pt} \end{equation} where $Z=\mbox{Tr}[e^{-HL}]$. Note that the expansion in $\nu$ naturally corresponds to an expansion in L\"uscher orders. In the bulk of the paper we perform a systematic expansion related to a moving one-particle state. Let us now summarize the results. For a one-particle state we focus on the one-particle finite volume pole \begin{equation} \Gamma(\omega,q)=\frac{\mathcal{F}(q)^{2}}{E(q)+i\omega}+\dots\quad;\qquad\mathcal{F}(q)=\langle0\vert\mathcal{O}\vert q\rangle \end{equation} where $E(q)$ is the exact finite volume energy with momentum $q$ and $\mathcal{F}(q)$ is the corresponding exact finite volume form factor. We choose the phase of the one-particle state so that $\mathcal{F}(q)$ is real and positive.
We use the momentum variable to label the state; it is related to the rapidity as $q=m\sinh\theta_{1}$, such that the corresponding energy is $\mathcal{E}(q)=m\cosh\theta_{1}$. We can expand $\Gamma$ around the large volume Bethe-Yang pole at $\omega=i\mathcal{E}(q)$. At the leading L\"uscher order we have first and second order poles \begin{equation} \Gamma(\omega,q) = \frac{2\pi F_{1}^{2}}{L\mathcal{E}(q)}\frac{-i}{\omega-i\mathcal{E}(q)} + \frac{\mathcal{L}_{0}(q)}{(\omega-i\mathcal{E}(q))^{2}} + \frac{\mathcal{L}_{1}(q)}{\omega-i\mathcal{E}(q)}+\mathrm{regular} \end{equation} such that the leading exponential corrections of the energy and form factor can be written as \begin{equation} E(q)=\mathcal{E}(q)\left\{ 1+\frac{L}{2\pi F_{1}^{2}}\mathcal{L}_{0}(q)+\dots\right\} \quad;\quad\mathcal{F}(q)=\frac{\sqrt{2\pi}F_{1}}{\sqrt{L\mathcal{E}(q)}}\left\{ 1+\frac{iL\mathcal{E}(q)}{4\pi F_{1}^{2}}\mathcal{L}_{1}(q)+\dots\right\} \label{LuscherEFF} \end{equation} We calculate $\Gamma$ in the mirror channel (\ref{eq:mirror2pt}). The leading order result comes from terms where $\langle\nu\vert$ is the vacuum state $\langle0\vert$ and $\vert\mu\rangle$ is a one-particle state. The leading L\"uscher corrections, $\mathcal{L}_{0}$ and $\mathcal{L}_{1}$, come from terms where $\langle\nu\vert$ is a one-particle state and $\vert\mu\rangle$ is either the vacuum or a two-particle state. Having performed the calculations, we reproduce the L\"uscher correction to the one-particle energy (\ref{eq:1ptenergy}).
For the form factors we obtained the result \begin{equation} \mathcal{F}(q)=\frac{\sqrt{2\pi}}{\sqrt{\rho_{1}^{(1)}}}\left\{ F_{1}+\int_{-\infty}^{\infty}d\theta\, F_{3}^{\mathrm{reg}}(\theta+i\pi,\theta,\theta_{1}^{(0)}-i\frac{\pi}{2})e^{-mL\cosh\theta}+\dots\right\} \end{equation} where the density of states at the leading exponential order is \begin{equation} \rho_{1}^{(1)}=-i\partial_{\theta^{(1)}}\epsilon^{(1)}(\theta^{(1)}+i\frac{\pi}{2}) \label{rho1} \end{equation} and the regularized form factor is defined to be \begin{equation} F_{3}^{\mathrm{reg}}(\theta,\theta_{1},\theta_{2}) = F_{3}(\theta,\theta_{1},\theta_{2}) - \frac{iF_1}{2\pi} \frac{ 1 - S(\theta_{1} - \theta_{2}) }{\theta - \theta_{1} - i\pi} + \frac{iF_1}{4\pi} S'(\theta_{1}-\theta_{2}) \label{Freg} \end{equation} In the rest of the paper we derive this result and check it by a second order perturbative calculation in the sinh-Gordon theory. \subsection{Regularization} We will use the regularized delta function \begin{equation} \delta(x)\rightarrow\frac{i}{2\pi}\left(\frac{1}{x+i\epsilon}-\frac{1}{x-i\epsilon}\right) \end{equation} in (\ref{mat}) and take the limit $\epsilon\to0$ only at the end of the calculation. The regularized delta function terms can be nicely combined with those coming from the pole terms in the 3-particle form factor and the regularized matrix element becomes \begin{equation} \begin{split} \langle u\vert{\cal O}\vert\beta_1,\beta_2\rangle^{\rm reg}= \frac{iF_1}{2\pi}&\left[\frac{1}{u-\beta_1+i\epsilon}-\frac{S(\beta_1-\beta_2)}{u-\beta_1-i\epsilon} +\frac{S(\beta_1-\beta_2)}{u-\beta_2+i\epsilon}-\frac{1}{u-\beta_2-i\epsilon}\right]\\ &+F_3^c(u+i\pi-i\epsilon,\beta_1,\beta_2). \end{split} \end{equation} Here $F_3^c$ is the finite part of the form factor, defined by \begin{align} F_3(u,\beta_1,\beta_2) &= F_3^c(u,\beta_1,\beta_2) + \frac{iF_1}{2\pi(u-\beta_1-i\pi)}[1-S(\beta_1-\beta_2)] \cr & + \frac{iF_1}{2\pi(u-\beta_2-i\pi)}[S(\beta_1-\beta_2)-1]. 
\label{Fcon} \end{align} The finite part is obtained by explicitly removing the pole singularities required by the form factor axioms \cite{Smirnov:1992vz}. $F_3^c(u,\beta_1,\beta_2)$ is finite at $u=\beta_1+i\pi$, $u=\beta_2+i\pi$. For later use, we now also define the modified form factor $\hat F_3$: \begin{equation} F_3(u,\beta_1,\beta_2)= \frac{iF_1}{2\pi(u-\beta_1-i\pi)}[1-S(\beta_1-\beta_2)]+\hat F_3(u,\beta_1,\beta_2). \end{equation} $F_3^{\rm reg}$, defined by (\ref{Freg}), can be written as \begin{equation} F_3^{\rm reg}(u,\beta_1,\beta_2)=\hat F_3(u,\beta_1,\beta_2)+ \frac{iF_1}{4\pi}\,S^\prime(\beta_1-\beta_2). \end{equation} Next we introduce the variables $b$, $w$ by \begin{equation} \beta_1=b+w,\qquad\qquad \beta_2=b-w \end{equation} and integrate (\ref{36}) over $b$ using the delta function. This means that after this integration $b$ stands for the solution of \begin{equation} \sinh b=\frac{\sinh u+\sinh\psi}{2\cosh w}. \label{sinhb} \end{equation} We have \begin{equation} j(u,\psi,q)=\int_{-\infty}^\infty{\rm d}w \vert\langle u\vert{\cal O}\vert b+w,b-w\rangle^{\rm reg}\vert^2 \frac{1}{C(C-\cosh u-i\hat q)}, \end{equation} where \begin{equation} C=\cosh(b+w)+\cosh(b-w). \end{equation} Next we make use of the analyticity of the form factors and shift the $w$ integral from real $w$ to $w=v+i\gamma$, where $\gamma>0$ is small. We have to pay attention to the following. \begin{itemize} \item[A)] The right hand side of (\ref{sinhb}) must not cross the cut of the {\tt arcsinh} function (which runs from $i$ to $i\infty$ along the imaginary axis). \item[B)] Avoid points where $C=\cosh u+i\hat q$. \item[C)] Take into account the poles of the regularized matrix elements at $w=\pm(u-b\pm i\epsilon)$. \end{itemize} Problems A) and B) can be easily avoided if $\cosh u<2$ and the parameter $\gamma$ is small enough. The form factor poles can be taken into account explicitly, using the residue theorem. (Only two of the poles lie above the real axis.) 
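Note that the regularized delta function introduced above is just a Lorentzian of width $\epsilon$; a quick numerical sketch of this identity, including the value naively obtained at $x=0$:

```python
import math

def delta_reg(x, eps):
    # i/(2 pi) * (1/(x + i eps) - 1/(x - i eps))
    return (1j / (2 * math.pi)) * (1 / (x + 1j * eps) - 1 / (x - 1j * eps))

eps = 1e-2
# the combination is a real Lorentzian: (eps/pi) / (x^2 + eps^2)
for x in (-1.0, -0.1, 0.0, 0.3, 2.0):
    val = delta_reg(x, eps)
    assert abs(val.imag) < 1e-15
    assert abs(val.real - (eps / math.pi) / (x * x + eps * eps)) < 1e-12

# its total weight is 1: the antiderivative is (1/pi) * atan(x/eps)
weight = (math.atan(100 / eps) - math.atan(-100 / eps)) / math.pi
assert abs(weight - 1) < 1e-3

# naively setting x = 0 gives delta(0) = 1/(pi * eps)
assert abs(delta_reg(0.0, eps).real - 1 / (math.pi * eps)) < 1e-9
```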
After a long computation, we find (up to terms vanishing in the $\epsilon\to0$ limit): \begin{equation} \begin{split} J(u,\psi,q)=&\left(\frac{F_1^2}{2\pi\epsilon}-\delta(0)F_1^2\right) \frac{1}{\cosh\psi(\cosh\psi-i\hat q)}+I(u,\psi,q)\\ &+\frac{F_1}{\cosh\psi(\cosh\psi-i\hat q)}[F_3^c(u+i\pi,u,\psi)+F_3^c(u+i\pi,\psi,u)]\\ &+\frac{iF_1^2}{4\pi}\frac{\sinh(u-\psi)[S(\psi-u)-S(u-\psi)]}{\cosh^2\psi(\cosh\psi-i\hat q)^2}\\ &+\frac{iF_1^2}{4\pi}\frac{1}{\cosh\psi(\cosh\psi-i\hat q)}\Big[ \frac{2[S(u-\psi)-S(\psi-u)]}{u-\psi}+\frac{\nu[S^\prime(\psi-u)+S^\prime(u-\psi)]}{\cosh\psi}\\ &+\frac{\sinh(u-\psi)[S(\psi-u)-S(u-\psi)]}{\nu\cosh\psi}\\ &+\frac{(\sinh\psi+\sinh u)(1+\sinh u\sinh\psi)[S(u-\psi)-S(\psi-u)]}{\nu\cosh^2\psi}\Big]. \end{split} \label{final} \end{equation} Here the notation \begin{equation} \nu=\cosh\psi+\cosh u \end{equation} is used and $I(u,\psi,q)$ is the shifted integral ($w=v+i\gamma$): \begin{align} I(u,\psi,q)=& \int_{-\infty}^\infty\frac{{\rm d}v}{C(C-\cosh u-i\hat q)}S(-2w) \bigg\{ \frac{iF_1}{2\pi} \left[ \frac{1-S(2w)}{u-b-w} + \frac{S(2w)-1}{u-b+w} \right] \cr & + F_3^c(u+i\pi-b,w,-w) \bigg\}^2. \label{shiftI} \end{align} The (negative) divergent term coming from the denominator is accompanied by a (positive) divergent term coming from the calculation of the numerator. They both multiply the same function. Our main assumption is that the divergences cancel\footnote{Note that naively setting $x=0$ in the definition of the regularized delta function gives $\delta(0)=1/(\pi\epsilon)$.} and that the remaining finite terms are correct. Indeed, in appendix \ref{AppendixB} we show that our heuristic regularization is completely equivalent to the well-defined finite volume regularization. We will make the substitution \begin{equation} \left(\frac{1}{2\pi\epsilon}-\delta(0)\right)\rightarrow\Delta, \end{equation} where $\Delta$ is a finite renormalization constant, which will be fixed later.
\subsection{Analytic continuation} Equation (\ref{final}) is our final result for the Fourier space two-point function for real $\omega$. We need to analytically continue this function towards $\omega\to i{\cal E}(q)$. We will do it in two steps. First we extend it to a small region where $\omega$ is just above the real axis. The explicit terms are analytic, so we have to concentrate on the integral $I(u,\psi,q)$. In this region there is no problem with A) and B), but as we increase the imaginary part of $\omega$, the integration contour will cross the double pole at $w=(u-\psi)/2$ coming from the square of the form factor function. We can take into account the effects of this pole explicitly, using the residue theorem. After a second long calculation, we find that adding these new contributions to (\ref{final}) many terms cancel and we have \begin{equation} \begin{split} J(u,\psi,q)&=I_0(u,\psi,q)+\frac{iF_1^2}{2\pi}\frac{\sinh(u-\psi)[1-S(u-\psi)]} {\cosh^2\psi(\cosh\psi-i\hat q)^2}\\ &+\frac{1}{\cosh\psi(\cosh\psi-i\hat q)}\Big\{F_1^2\Delta+2F_1\hat F_3(u+i\pi,u,\psi)+\\ &\frac{iF_1^2}{2\pi}\Big[\frac{\nu S^\prime(u-\psi)}{\cosh\psi}+ \frac{\sinh\psi\cosh u}{\cosh^2\psi}[S(u-\psi)-1]\Big]\Big\}, \end{split} \label{final2} \end{equation} where $I_0(u,\psi,q)$ is the same integral as (\ref{shiftI}), but with the $w$ integration contour moved back to the real axis. (We are allowed to do this once $\omega$ is already above the contour.) In the second step we continue $\omega$ further towards $\omega=i{\cal E}(q)$. We can show (for $\cosh u<2$) that $I_0(u,\psi,q)$ is analytic in $\omega$ in the vicinity of the imaginary axis, except for a cut starting at $\omega=im$. The cut appears as a consequence of the definition $\omega=-m\sinh\psi$, and the limit $\omega\to i{\cal E}(q)$ in the language of the $\psi$ variable becomes \begin{equation} \psi\to-\frac{i\pi}{2}\pm\theta, \end{equation} where $q=m\sinh\theta$.
The sign is $\pm$ according to whether we go around the branch point from the right or from the left. Since no pole terms are coming from the integral, we are left with the explicitly evaluated terms in (\ref{final2}) and the singular terms of the L\"uscher correction can be written \begin{equation} {\cal L}^{\rm sing}(\omega,q)=\frac{4\pi}{L}\int_{\cosh u<2}{\rm d}u\,{\rm e}^{-mL\cosh u} \left\{\frac{\tilde R(\omega,q)}{[\omega^2+{\cal E}^2(q)]^2} +\frac{\tilde Q(\omega,q)}{\omega^2+{\cal E}^2(q)}\right\}, \end{equation} where \begin{equation} \tilde R(\omega,q)=\frac{iF_1^2}{2\pi}\frac{m^2(m^2+\omega^2-q^2)}{m^2+\omega^2} \sinh(u-\psi)[1-S(u-\psi)], \end{equation} \begin{equation} \tilde Q(\omega,q)=F_1^2\Delta+2F_1\hat F_3(u+i\pi,u,\psi)+ \frac{iF_1^2}{2\pi}\left[\frac{\nu S^\prime(u-\psi)}{\cosh\psi}+\frac{\sinh\psi \cosh u} {\cosh^2\psi}[S(u-\psi)-1]\right]. \end{equation} Finally we calculate the residues of the simple and double poles of the L\"uscher term: \begin{equation} {\cal L}_0(q)=\frac{2\pi}{L}\int_{\cosh u<2}{\rm d}u\,{\rm e}^{-mL\cosh u} \,R(i{\cal E}(q),q), \label{L0} \end{equation} \begin{equation} {\cal L}_1(q)=\frac{2\pi}{L}\int_{\cosh u<2}{\rm d}u\,{\rm e}^{-mL\cosh u} \,\left[Q(i{\cal E}(q),q)+\frac{{\rm d}R}{{\rm d}\omega}(i{\cal E}(q),q)\right]. \label{L1} \end{equation} Here \begin{equation} R(\omega,q)=-\frac{1}{2{\cal E}^2(q)}\tilde R(\omega,q),\qquad\quad Q(\omega,q)=-\frac{i}{{\cal E}(q)}\tilde Q(\omega,q) -\frac{i}{2{\cal E}^3(q)}\tilde R(\omega,q). \end{equation} \subsection{L\"uscher's formula} From (\ref{LuscherEFF}) and (\ref{L0}) we can now calculate the L\"uscher (Klassen-Melzer, Janik-Lukowski, Bajnok-Janik) correction \cite{Luscher,KlMe,JaLu,BaJa} to the 1-particle energy: \begin{equation} E(q)={\cal E}(q)-\frac{m}{2\pi\cosh\theta}\int_{\cosh u<2}{\rm d}u\, {\rm e}^{-mL\cosh u}\,\cosh(u\mp\theta)[\Sigma(u\mp\theta)-1]. \end{equation} Here \begin{equation} \Sigma(\Theta)=S\left(\frac{i\pi}{2}+\Theta\right). 
\end{equation} The S-matrix is real analytic and satisfies crossing: \begin{equation} [S(\Theta)]^*=S(-\Theta^*),\qquad\quad S(i\pi-\Theta)=S(\Theta), \end{equation} from which we conclude that for real $\Theta$, $\Sigma(\Theta)$ is real and satisfies \begin{equation} \Sigma(\Theta)=\Sigma(-\Theta). \end{equation} Thus $E(q)$ is real and independent of the $\pm$ sign. \subsection{Finite volume form factor} Finally, using (\ref{LuscherEFF}) and (\ref{L1}), the L\"uscher correction to the finite volume form factor can be written as \begin{equation} {\cal F}(q)=\frac{\sqrt{2\pi}F_1}{\sqrt{L{\cal E}(q)}}\{1+\delta \mathcal{F}(q)+\dots\}, \label{finff} \end{equation} where \begin{equation} \begin{split} \delta\mathcal{F}(q)=\int_{\cosh u<2}{\rm d}u\,&{\rm e}^{-mL\cosh u}\Big\{ \frac{\Delta}{2}+\frac{1}{F_1}F_3^{{\rm reg}}(u+i\pi,u,-\frac{i\pi}{2}\pm\theta)\\ -&\frac{1}{4\pi\cosh\theta}\sinh u\,\Sigma^\prime(u\mp\theta)\mp \frac{\sinh\theta\sinh u}{4\pi\cosh^2\theta}[\Sigma(u\mp\theta)-1]\Big\}. \end{split} \label{deltaq} \end{equation} $\delta \mathcal{F}(q)$ is real and independent of the $\pm$ sign, since using the form factor axioms we can show that \begin{equation} \left\{F_3^{\rm reg}(u+i\pi,u,-\frac{i\pi}{2}+\theta)\right\}^*= F_3^{\rm reg}(-u+i\pi,-u,-\frac{i\pi}{2}-\theta)= F_3^{\rm reg}(u+i\pi,u,-\frac{i\pi}{2}+\theta). \end{equation} If we require that at infinite energy the interaction can be neglected and the form factor is given by its free field value, \begin{equation} \lim_{q\to\infty}\delta\mathcal{F}(q)=0, \end{equation} then this fixes the integration constant to $\Delta=0$.
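These reality and symmetry properties are straightforward to verify numerically for a concrete scattering matrix; a sketch, assuming the sinh-Gordon convention $S(\theta)=(\sinh\theta-i\sin\pi p)/(\sinh\theta+i\sin\pi p)$ with an illustrative value of $p$:

```python
import cmath
import math

P = 0.3   # illustrative sinh-Gordon coupling parameter

def S(theta):
    s = math.sin(math.pi * P)
    return (cmath.sinh(theta) - 1j * s) / (cmath.sinh(theta) + 1j * s)

def Sigma(T):
    # Sigma(T) = S(i pi/2 + T)
    return S(1j * math.pi / 2 + T)

th = 0.7
# real analyticity: S(theta)^* = S(-theta^*)
assert abs(S(th + 0.2j).conjugate() - S(-(th - 0.2j))) < 1e-12
# crossing: S(i pi - theta) = S(theta)
assert abs(S(1j * math.pi - th) - S(th)) < 1e-12
# hence Sigma is real and even on the real line
assert abs(Sigma(th).imag) < 1e-12 and abs(Sigma(th) - Sigma(-th)) < 1e-12
```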
Finally, if we notice that in the first L\"uscher approximation (\ref{rho1}) can be written as \begin{equation} \rho_1^{(1)}(q)=mL\cosh\theta\left\{1+\frac{1}{2\pi}\int_{-\infty}^\infty{\rm d}u\, {\rm e}^{-mL\cosh u}\sinh u\left[\frac{\sinh\theta}{\cosh^2\theta}\Sigma(u-\theta) +\frac{\Sigma^\prime(u-\theta)}{\cosh\theta}\right]\right\} \end{equation} then we can rewrite (\ref{finff}) in the suggestive form \begin{equation} {\cal F}(q)=\sqrt{\frac{2\pi}{\rho_1^{(1)}(q)}}\left\{F_1+\int_{-\infty}^\infty{\rm d}u\, {\rm e}^{-mL\cosh u}\,F_3^{\rm reg}(u+i\pi,u,-\frac{i\pi}{2}+\theta)\right\}. \end{equation}
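As a numerical illustration of these final formulas, the leading exponential corrections can be evaluated for a concrete scattering matrix. A sketch, again assuming the sinh-Gordon convention $\Sigma(\Theta)=(\cosh\Theta-\sin\pi p)/(\cosh\Theta+\sin\pi p)$, with illustrative values of the coupling $p$ and the volume $mL$:

```python
import math

P = 0.3                       # illustrative sinh-Gordon coupling parameter
S_CONST = math.sin(math.pi * P)

def Sigma(x):
    # Sigma(x) = S(i pi/2 + x); real and even in this convention
    return (math.cosh(x) - S_CONST) / (math.cosh(x) + S_CONST)

def dSigma(x):
    # derivative of Sigma in this convention
    return 2 * S_CONST * math.sinh(x) / (math.cosh(x) + S_CONST) ** 2

def trapezoid(f, a, b, n=4000):
    # simple quadrature; the integrands decay like exp(-mL cosh u),
    # so a finite symmetric cutoff is sufficient
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def energy(theta, m=1.0, L=8.0):
    # one-particle energy with its leading Luscher correction
    f = lambda u: math.exp(-m * L * math.cosh(u)) \
        * math.cosh(u - theta) * (Sigma(u - theta) - 1)
    return m * math.cosh(theta) \
        - m / (2 * math.pi * math.cosh(theta)) * trapezoid(f, -4, 4)

def rho1(theta, m=1.0, L=8.0):
    # density of states including the leading exponential correction
    f = lambda u: math.exp(-m * L * math.cosh(u)) * math.sinh(u) * (
        math.sinh(theta) / math.cosh(theta) ** 2 * Sigma(u - theta)
        + dSigma(u - theta) / math.cosh(theta))
    return m * L * math.cosh(theta) * (1 + trapezoid(f, -4, 4) / (2 * math.pi))

theta = 0.5
# Sigma - 1 < 0, so the correction pushes E(q) slightly above m cosh(theta)
assert 0 < energy(theta) - math.cosh(theta) < 1e-3
# the correction to the density of states is a tiny multiplicative shift
assert abs(rho1(theta) / (8.0 * math.cosh(theta)) - 1) < 1e-3
```

Both corrections are of order $e^{-mL}$, as expected from the formulas above.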
\section{Introduction} The inclusion of human-centered approaches in software engineering has received a lot of attention in recent years \cite{da2011user,salah2014systematic,brhel2015exploring,schon2017agile}. Human-centered design (HCD) \cite{dis20099241}, design thinking (DT) \cite{brown2009change} and participatory design (PD) \cite{spinuzzi2005methodology} have been shown to be beneficial for the software design and development process \cite{da2011user}, especially if the designer is aware of their challenges and limitations \cite{salah2014systematic,Bordin2016}. While there is a growing body of research in exploring user research and agile methods, very little has been done in the area of designing software for \textit{vulnerable populations}. The exact definition of the term "vulnerable populations" is the focus of many discussions~\cite{ruof2004vulnerability}. Sometimes it is defined vaguely, other times by extension (listing the conditions of users). In this paper we use it to refer to people potentially exposed to harm or not capable of protecting their own interests. We argue that software engineering methods - and software engineers - are ill-prepared for addressing this type of project. An obvious first consideration is that reasoning on ethics and values tends to be more complex and that each user study requires a very careful design - as well as the need to follow specific guidelines and undergo reviews by an Institutional Review Board (IRB) \cite{national1978belmont}. As we will see, the issues go much beyond the incorporation of an ethical approval process (which we found beneficial, besides being appropriate and required). In this short paper we join the thread of work on the interplay between values and software engineering methods \cite{ferrario2014software,ferrario2016values}.
We report on our experiences in developing applications for institutionalised older adults over the past years and summarise the lessons we have learned, especially in terms of how we adapted the agile processes we used to follow to cope with the scenarios at hand, and translate the lessons into a corresponding set of recommendations. We hope and trust that this will help teams be more effective when dealing with this scenario and avoid mistakes that can be very costly and hard to recover from. \section{Issues and Recommendations} We now list the challenges we had to face when working with vulnerable subjects, along with our recommendations. \textbf{Iterations are limited, errors are costly.} Agile processes allow us to iterate often with users and to correct the course of action as needed. This is often done by pushing "software probes" at early phases (e.g., \cite{weber2016closing}). In our scenario, we found that the number of possible design iterations is limited: there is a relatively small number of institutions willing at the start to go through an adventure with you, the personnel is often under stress, and residents and family members face many challenges. Even those who are enthusiastic at the beginning can become less cooperative (or simply less available) as time goes by. Furthermore, while learning from errors is a positive aspect of agile methods, continuous changes can be disruptive to a population not used to change \cite{simm2014prototyping} and can harm their interest in being involved and participating. More importantly, errors in dealing with vulnerable populations can damage trust, and that is something very difficult to recover from \cite{mara2013ethics}. Here an "error" can be simply giving a hint of suspicion that the system you are building goes in a direction that does not fit the needs of the individual you are speaking with, even if that is not the case.
Once this happens, even reassurances that their feedback will be taken into account might not have the hoped-for effect. Something we found useful and highly recommend here is to observe users and analyse the relation between stakeholders and technology through the lens of \emph{appropriation} \cite{dourish2003appropriation}, which often reveals desired technology features that are satisfied via other means today. Furthermore, we recommend spending more time at the start assessing access to participants, and specifically identifying participants with whom we can have an "agile" style of access and interactions, and participants for which i) we have reduced access and ii) errors are sensitive. For the latter categories, we recommend deeper studies before introducing software probes or mockups. The need for going through \textbf{ethical approval processes} impacts the process in many ways (including positive ones). Obtaining an approval requires time, both to write the necessary documentation and to go through the approval process, which may involve one or more entities. In our case we went through both University and NH committees. Timescales here are typically of 1-3 months, depending on the frequency of committee meetings and on whether clarifications are requested. This timescale is already beyond any modern agile standard. However, the process is also an opportunity to carefully think through and design the study protocol and receive suggestions. Since we have to be very conservative with iterations and user access, this step is actually helpful, especially for a team with an agile mentality which might be tempted to reduce planning and have a bias towards action. A related aspect is the need to re-assess ethical considerations in an "agile" setting. This not only requires continuous adjustments in the research practice \cite{rashid2015managing}, but can also result in further changes to initial study protocols.
Therefore, our recommendation here is to include in the approval process a structured plan of actions, carefully considering and anticipating possible outcomes and designing subsequent study steps accordingly, as opposed to iterating in an agile way. This serves both to solicit more informed feedback and to limit further requests to the ethical committee to hopefully minor modifications of a plan (if at all), which typically result in faster feedback. Related to appropriation and ethics is the issue of \textbf{workarounds} that people (and specifically staff members) adopt, very often with the intent of helping people in need. We found this to happen in all scenarios involving vulnerable subjects, beyond the case of institutionalised older adults. Observing workarounds is very useful in design, but in sensitive settings such as the one investigated, the "creativity", ingenuity, or simply the commitment and dedication of participants might be perceived as deviations from procedures and sometimes even from the law. In this sense, ethical considerations arise about reporting or incorporating such learnings in the design. What we recommend here is to ensure that the team is aware of and follows recommended ethical guidelines and practices for these cases \cite{mara2013ethics}. This is something that even people trained in user studies may not be familiar with, and an error here may cost people their job. \textbf{Participant involvement.} From the perspective of participant involvement \cite{radermacher2006participatory}, our approach qualifies as research-initiated, with decisions shared with the industry partner, and nursing home actors being consulted and informed. This puts us halfway between traditional and more participatory approaches \cite{ferrario2014software}. Incorporating a human-centered approach proved to be successful in identifying needs and materialising technology concepts, but limited in testing more forward-thinking features.
In a scenario where technology should also follow regulations and (sometimes anticipate) changes in policies, following a fully community-driven or human-centered approach is not always feasible \cite{norman2005human}. Thus, rather than taking a decision a priori, we recommend basing the decision on the level of involvement on the design goals, potential bias, conflicting views, and ethical considerations. \textbf{Personas} are designed to evoke emotional responses, creating empathy and keeping the team focused on the target users. In our work we had two challenges related to personas: one is access. Even if it is rather easy to identify personas and in general to cluster users by characteristics and needs, access to personas in different groups is hard to organise for many reasons. The other is that sometimes people had strong feelings about the parties they interact with, which can result in colorful or emotionally provoking personas. We therefore recommend i) toning down the description of personas when discussing with end users and ii) carefully assessing at the start of the project (or after the personas have been identified) the access to different personas and prioritising requests: do not expect to manage to have access to all personas. \textbf{Varying feedback.} Response bias is a widely studied behavior. With vulnerable subjects, we found the problem to be exacerbated. It manifested itself particularly when staff members, in high level discussions, reported an open attitude towards technology supporting staff-FM interactions (in accordance with the management and the political decision makers). However, when we drilled down into understanding how the communication should take place in very specific scenarios, we observed a certain resistance by staff members to some potential features. This occurred despite the research team being competent and trained in how to run studies with vulnerable subjects.
In retrospect, drilling down to details earlier would have spared us work on features that are unlikely to make it into the system. We therefore recommend identifying features that might be contested and drilling down on them early, by providing concrete and realistic examples to validate acceptance. \textbf{Multidisciplinary teams} are essential but difficult to manage. Our core team comprised a multidisciplinary group of: i) sociologists with a background in participatory action research and qualitative research methods, ii) researchers with a background in software engineering and human-computer interaction, iii) product managers with experience in the healthcare sector, and iv) cognitive scientists and psychologists with competence in interactions and stress. Interactions with vulnerable populations require empathy, soft skills, experience in designing studies in a way that is mindful of biases, and the ability to avoid putting participants in an uncomfortable situation. This is hardly something that can be learned in a crash course. Nonetheless, involving software engineers in informal visits -- and user studies when possible -- proved useful in our experience in creating empathy and a more realistic view of the context, which was later helpful when incorporating and discussing the lessons learned. Communication and collaboration in the team were facilitated by the cross-functional team members with experience in software engineering and human factors. This setup is known to facilitate the integration of design and development \cite{brhel2015exploring}, minimising the need to resort to, for example, translations between team members \cite{leonardi2011design}. This was further facilitated by the use of scenarios, personas and mockups that were concrete materialisations of lessons learned. However, coordinating efforts towards activities that would maximise requirement elicitation was more challenging. 
We see this as a consequence of competing views among sociologists and engineers on what qualifies as useful insights. Cross-functional members were fundamental in mediating these differences. Their importance cannot be overstated, and we feel that the lack of such competences can jeopardise the design effort. In summary, what we take home is the need to mix agile approaches with waterfall concepts. With some participants we can follow and iterate with short-lived design or development sprints, while with vulnerable populations we execute much longer sprints, characterised by a thorough design process that anticipates possible alternatives as opposed to designing them iteratively. Participant involvement needs the same flexibility: with some users observation only is appropriate, with others we can leverage PD, and with others again we can follow traditional user research. We find that the challenge of design therefore lies not in the choice of a specific process and model for the project, but in identifying which participants and which tasks are suited to a given process and design approach. \section{The case of residential care} We describe our findings based on a joint university-industry project aiming at designing a set of innovative solutions for the residential care scenario, focused on increasing the emotional well-being of residents, staff, and family members and on facilitating interactions. We also build on previous experience \cite{fiore2017understanding} studying analogous issues in pediatric palliative care. In this section we describe the context and give an overview of the project setup. Transitioning to long-term residential care is one of the more difficult moments in the life of an older adult and their family \cite{lee2002review}. 
This is complicated by the perceived negative social view of residential care that sees institutionalisation as a failure by family members \cite{ryan2000nursing}, due to cultural stereotypes about care systems, resulting in a sense of guilt, loss and abandonment in the family, as well as a challenging work environment for care professionals. The relationship between family members and professional caregivers is not without tensions. On the family members' side, communication is challenged by a perceived lack of meaningful, timely and understandable information \cite{fiore2017understanding}. Professional caregivers instead consider ``dealing'' with family members as part of the job, but can see communications as potentially problematic. This can lead to professionals avoiding family members and to vague communications \cite{hertzberg2003relatives}. Another critical point is the collaboration between these actors. Though collaboration from family members is in principle welcomed by professionals, this is not always translated into practice, as traditional care models are not designed for full collaboration \cite{haesler2004constructive}. Family members can also be overly demanding, having unrealistic expectations about what the professionals have to do, and even taking issue with care practices \cite{vinton1998intervening}. Finally, as we experienced, NH staff members work at full capacity and in rather stressful conditions, both in terms of helping residents and in terms of managing family members' demands and expectations. This is the scenario we set out to work with: very frail participants, emotionally demanding, and possibly conflictive. We followed a mix of agile and human-centered approaches, iterating on three main phases: product discovery, development and validation. 
Agile methodologies do not address the product discovery phase \cite{brhel2015exploring}, but the need for a dedicated phase is evident in human-centered approaches and the software processes that incorporate them (e.g., \cite{ferrario2014software}). As a result of the process we identified and developed three IT-based solutions: i) a reminiscence-based tool to stimulate social interactions between family members and residents, ii) a personalised magazine to build a sense of community and stimulate conversations, and iii) a communication and collaboration tool to facilitate information sharing and family involvement. Describing the tools is outside the scope of this paper; the interested reader is referred to \cite{fiore2017understanding,ibarra2017stimulating,caforio2017viability}. \section{Background} \subsection{Human-centered agile development} A vast literature has investigated how to combine human-centered approaches with agile methodologies. Systematic literature reviews have focused on the principles of user-centered agile development~\cite{brhel2015exploring}, recurring patterns in the integration~\cite{da2011user}, and characteristics of stakeholder involvement \cite{schon2017agile}. The most relevant to our discussion is that of Salah et al.~\cite{salah2014systematic}, which summarises the challenges of integrating agile methodologies with HCD. 
This review analysed 71 articles on the topic and derived the following challenges: i) lack of time for upfront activities, due to the tendency of agile development to encourage responsiveness to change instead of upfront planning; ii) conflicts in prioritising UCD and development activities, given the different views on what constitutes progress; iii) negative work dynamics arising from potentially competing goals and different communication practices; iv) difficulty in organising usability testing and incorporating feedback, due to the time restrictions in agile; and v) lack of documentation, which creates confusion for UCD practitioners, who are used to recording the trace of the design and its rationale. These reviews, and the works they are based on, provide an insightful perspective on the practices and challenges surrounding human-centered approaches in agile development. However, when vulnerable settings are involved, the challenges become amplified because, as we will see, the timing of access to users, the kind of users we can involve, and the nature and number of iterations are subject to constraints, both self-imposed and imposed by the environment. \subsection{Engineering for vulnerable populations} Efforts have been made to incorporate human-centered approaches in sensitive contexts, and especially in the development of healthcare systems. Carroll and Richardson \cite{carroll2016aligning} make a case for the lack of an established framework to guide software developers in identifying requirements in healthcare, and propose integrating design thinking as an entire pre-requirements phase. Another interesting take, by Teixeira et al. \cite{teixeira2011using}, combines more traditional system analysis techniques with UCD and PD, which required a facilitator to translate requirements back and forth between stakeholders. The scenario addressed by both works is certainly sensitive, though no particular emphasis is given to dealing with vulnerable populations. 
In a similar setting, Kieffer et al. \cite{kieffer2017agile} applied agile methods in combination with formative usability testing to the development of an application for patients with diabetes. In reporting the challenges, the authors mention i) the access to users in the medical context, ii) the recruitment process, which took about six months in total, and iii) the time to get the study protocol validated by an ethical committee, which was four months. On these challenges, the authors reflect that the medical expert should have been involved much earlier in the process. These insights give a sense of the practical difficulties of involving vulnerable populations. Knowledge transfer and communication in multidisciplinary teams is another topic investigated. Weber and Price \cite{weber2016closing} propose a knowledge transfer model between clinicians and software engineers to facilitate the development of healthcare systems. The model is comprised of a knowledge tailoring loop with three main phases: i) monitoring and evaluation of software, in order to collect observations in ``real settings'' right from the start, ii) identification of problems, involving a qualitative understanding of previous results, and iii) adaptation and tailoring of software, which involves design sessions with lead users and synthesis of results in multidisciplinary teams. While valid in the setting described, using software upfront to gain insights might produce undesired effects, such as users and stakeholders losing interest in continuing the collaboration, which is why we believe its applicability to vulnerable populations is limited. A more extreme case of team communication was studied by Leonardi et al. \cite{leonardi2011design}, reporting on the experience of team members with backgrounds in HCD and semi-formal requirements engineering. The challenge was framed as an inter-cultural dialogue between professionals from different disciplines. 
The authors stress the importance of mutual learning, especially via the definition of a shared dictionary to bridge the gap between the disciplines. Speedplay \cite{ferrario2014software} is a software project management framework that integrates action research, participatory design and agile development to approach relatively small projects targeting specific community needs. It is particularly targeted at multi-disciplinary projects seeking social innovation, where the community, researchers and engineers work actively together. The process model is comprised of four main steps: prepare, design, build and sustain, and it is characterised by slower cycles at the beginning followed by faster-paced cycles towards the end of the project. The model also promotes mutual learning while assigning responsibilities based on skills. An application of Speedplay is presented by Simm et al. \cite{simm2014prototyping} in the context of a tool for anxiety management for adults with high-functioning autism. In working with this vulnerable population, the authors mention participants reacting poorly to changes and fluctuating participation as some of the challenges to agile research and development. While we found many of the guidelines from the literature quite useful and we recognised the same challenges, in our experience we also stumbled upon difficulties, and derived insights, that we have not seen discussed in depth. We discuss them in the following.
\section*{Acknowledgements} This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement n$^o$ 725594 - time-data), and was supported by the Swiss National Science Foundation (SNSF) under grant number 200021\_178865 / 1. \section{Introduction} Simple zero-sum games have been studied extensively, often from the standpoint of analyzing convergence to the Nash equilibrium. At the equilibrium, the players employ a min-max pair of strategies where no player can improve their pay-off by a unilateral deviation \citep{von1928theory}. In this setting, one can expect the players to arrive at the equilibrium via decentralized, no-regret learning algorithms, whose guarantees hold even in the presence of potentially adversarial behavior and which also better model selfish play. The resulting dynamics are of great interest in optimization and behavioral economics \citep{myerson1999nash}, especially under communication constraints. When the behavior of each player is explained by a no-regret algorithm, it is possible to significantly improve convergence rates beyond the so-called black-box, adversarial dynamics. This observation was first made by \citet{daskalakis2011near}, who tailored a decentralized version of Nesterov's primal-dual method based on the excessive gap condition. Intriguingly, \citet{daskalakis2011near} left open the question of whether there exists a simple algorithm that converges at optimal rates for both the regret and the value of the game in an uncoupled manner, both against honest (i.e., cooperative) and dishonest (i.e., arbitrarily adversarial) behavior. The challenge was partially settled by the modified optimistic mirror descent (OMD) framework in \citep{rakhlin2013optimization}. 
While the framework of \citet{daskalakis2011near} is considered unnatural and involves additional logarithmic factors, similar criticisms apply to \citet{rakhlin2013optimization}'s framework: the modified OMD needs to know the game horizon a priori to determine the step-sizes. Their analysis also results in non-optimal regret and logarithmic factors in the convergence to the value of the game. Besides the aforementioned drawbacks, neither approach can accommodate natural switches between honest and dishonest behavior. In this work, we propose a simple algorithmic framework that closes the gap between upper and lower bounds for adversarial regret as well as for convergence to the value of the game, while maintaining the best known rate for honest regret, thereby resolving the open problem posed by \citet{daskalakis2011near}. We achieve these desiderata as follows: First, we provide a novel analysis of OMD and show that it obtains fast convergence for both the honest regret and the value of the game when both players are honest. Second, we introduce robust optimistic mirror descent (ROMD), which attains optimal adversarial regret without knowing the time horizon. Finally, we propose a simple signaling scheme, which enables us to bridge OMD and ROMD to achieve the best of both worlds and seamlessly handle honest and dishonest behavior. 
\begin{table*}[!h] \centering \begin{tabular}{|l||*{5}{c|}}\hline &\makebox[6em]{\small\textbf{Honest $R_T$}}&\makebox[6em]{\small\textbf{Adversarial $R_T$}}&\makebox[6em]{\small\textbf{Game Value}} &\makebox[6em]{\small\textbf{Oracle}}&\makebox[6em]{\textbf{\small Algorithm}} \\\hline \hline \small\citet{daskalakis2011near} & {\small$O(\log T)$} & \small $O(\sqrt{T})$ & \small$O(T^{-1}{\log^{\frac{3}{2}} T})$ & \small$|A|_{\max}$ &Complicated\\\hline \small\citet{rakhlin2013optimization} & $?$ & \small$O(\sqrt{T}\log T)$ & \small$O\l(T^{-1}{\log T}\r)$ & \small$T, |A|_{\max}$ &Simple \\\hline \small This paper & \small$O(\log T)$ & \small$O(\sqrt{T})$ & \small$O\l(T^{-1}\r)$ & \small$|A|_{\max}$ &Simple \\\hline \end{tabular}\caption{\label{tab:convergence_comparison} Comparison of convergence rates and the assumptions under which they hold. } \end{table*} \input{related_work} \section{Preliminaries and Notation} Let $\psi$ be a mirror map over the convex domain ${ \mathcal{D} }$, and let $D(\cdot, \cdot)$ be the Bregman divergence associated with $\psi$. We will repeatedly use the three-point identity for the Bregman divergence in the sequel: \begin{equation} { \nonumber } D({\mathbf x},{\mathbf y}) + D({\mathbf y},{\mathbf z}) = D({\mathbf x},{\mathbf z}) + \ip{{\mathbf x}-{\mathbf y}}{ \nabla \psi ({\mathbf z}) - \nabla \psi ({\mathbf y}) }. \end{equation} We use the notation ${\mathbf z} = MD_\eta({\mathbf x},{\mathbf g})$ to denote: \begin{align*} {\mathbf z}= \nabla \psi^\star \Big( \nabla\psi({\mathbf x}) - \eta {\mathbf g} \Big) \end{align*}where $\psi^\star$ is the Fenchel dual of $\psi$. Let $\psi$ be 1-strongly convex with respect to the norm $\| \cdot \|$. 
We define \begin{equation} { \nonumber } D^2 \coloneqq \max \left\{ \sup_{{\mathbf x}, {\mathbf x}' \in { \mathcal{D} }} \frac{1}{2}\|{\mathbf x}-{\mathbf x}'\|^2, \sup_{{\mathbf x} \in { \mathcal{D} }}D({\mathbf x}, {\mathbf x}_c) \right\} \end{equation} where ${\mathbf x}_c \coloneqq \argmin_{{\mathbf x}\in { \mathcal{D} }} \psi({\mathbf x})$ is the prox center. Hence $D$ controls both the diameter (in $\|\cdot\|$) and the Bregman divergence to the prox center. We frequently use the fact that \begin{equation} { \nonumber } \ip{{\mathbf x}}{A {\mathbf y}} \leq | A |_{\max} \quad \forall {\mathbf x} \in \Delta_m, \ {\mathbf y} \in \Delta_n \end{equation} where $| A |_{\max}$ is the maximum entry of $A$ in absolute value, and $\Delta_m \coloneqq \{ {\mathbf x} \in {\mathbb R}^m \ | \ \sum_{i=1}^m x_i = 1, x_i \geq 0 \}$ is the standard simplex. On a simplex, we will only consider the entropic mirror map: \begin{equation} { \nonumber } \psi({\mathbf x}) = \sum_{i=1}^k x_i \log x_i, \quad k = m \text{ or } n \end{equation} which is well-known to be 1-strongly convex in $\| \cdot \|_1$. We use $\frac{1}{m}1_m$ to denote the uniform distribution on $\Delta_m$. \section{Problem Formulation and Main Result} An (offline) two-player zero-sum game with payoff matrix $A$ refers to solving the minimax problem: \begin{equation} \label{eq:two_player_zero_sum_game} V \coloneqq \min_{{\mathbf y} \in \Delta_n} \max_{{\mathbf x} \in \Delta_m} \ip{{\mathbf x}}{A{\mathbf y}}. \end{equation} The quantity $ V$ in \eqref{eq:two_player_zero_sum_game} is called the \textbf{value} of the game, or the Nash Equilibrium Value. Any pair $(\bar{{\mathbf x}}, \bar{{\mathbf y}})$ attaining the game value is called an equilibrium strategy. In the decentralized setting (a.k.a.\ the ``strongly uncoupled'' setting), the payoff matrix and the number of the opponent's strategies are unknown to both players, and their goal is to learn a pair of equilibrium strategies through repeated plays of the game. 
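As a concrete illustration (a minimal numerical sketch in our own notation, not part of the formal development): for the entropic mirror map restricted to the simplex, the update ${\mathbf z} = MD_\eta({\mathbf x},{\mathbf g})$ takes the familiar exponentiated-gradient (multiplicative-weights) closed form $z_i \propto x_i \, \mathrm{e}^{-\eta g_i}$.

```python
import numpy as np

def md_step(x, g, eta):
    """Entropic mirror descent step on the simplex:
    MD_eta(x, g)_i is proportional to x_i * exp(-eta * g_i)."""
    z = x * np.exp(-eta * g)
    return z / z.sum()

# The coordinate with the largest loss is down-weighted,
# and the iterate stays on the simplex.
x = np.ones(3) / 3
z = md_step(x, np.array([1.0, 0.0, 0.0]), eta=1.0)
```

Starting from the uniform distribution with loss vector $(1,0,0)$, the first coordinate receives less mass while the other two remain equal.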
Moreover, each player aims to suffer a low individual regret, even in the presence of an adversary or a corrupted channel that distorts the feedback. Specifically, at each round $t$, the players take actions ${\mathbf x}_t$ and ${\mathbf y}_t$, and then receive the loss vectors $-A{\mathbf y}_t$ (for the ${\mathbf x}$-player) and $A^\top {\mathbf x}_t$ (for the ${\mathbf y}$-player). In the honest setting, we assume that the two players take actions according to a prescribed algorithm, and we say the setting is adversarial if only one player (the ${\mathbf x}$-player in this paper) adheres to the prescribed algorithm while the other player acts arbitrarily. As in previous work, we assume that an upper bound $|A|_{\textup{max}}$ on the maximum absolute entry of $A$ is available to both players. The goal is to achieve \begin{align*} \left| V- \ip{{\mathbf x}_T }{A{\mathbf y}_T} \right| &\leq r_1(T), \\ R_T \coloneqq \max_{{\mathbf x} \in \Delta_m} \sum_{t=1}^T \ip{{\mathbf x}_t - {\mathbf x}}{ -A{\mathbf y}_t} & \leq r_2(T) \end{align*}for fast-decaying $r_1$ and sublinear $r_2$ in $T$. The first requirement is to approximate the game value in \eqref{eq:two_player_zero_sum_game}, and the second asks to minimize the regret $R_T$. Our main result can be stated as follows: \begin{theorem}[Main result, informal]\label{thm:informal} For \eqref{eq:two_player_zero_sum_game}, there is a simple decentralized algorithm with non-adaptive step-size such that \begin{align*} r_1(T) = O\l(\frac{1}{T}\r), \quad \quad r_2(T) = O\l(\log T\r), \end{align*}if the opponent is honest (i.e., playing collaboratively to solve the game). Moreover, against any adversary, we have \begin{equation} r_2(T) = O\l(\sqrt{T}\r). { \nonumber } \end{equation} \end{theorem} Except for the $O\l({\log T}\r)$ honest regret, these rates are known to be optimal \citep{cesa2006prediction,daskalakis2015near}. 
We are also the first to remove the $\log T$ factors in the convergence to the value of the game, settling an open question posed by the very first work on learning in decentralized games \citep{daskalakis2011near}. \section{A family of optimistic mirror descents: Classical, Robust, and Let's be Honest} We first illustrate the high-level ideas behind the proof of \textbf{Theorem \ref{thm:informal}} in Section \ref{subsec:ideas}. A novel analysis of OMD in the honest setting is given in Section \ref{subsec:omd}, and we propose a new algorithm for the adversarial setting in Section \ref{subsec:romd}. Finally, the full algorithm is presented in Section \ref{subsec:full}, along with the rigorous version of the main result (\textit{cf.}\ \textbf{Theorem \ref{thm:ADMD}}). \subsection{High-Level Ideas}\label{subsec:ideas} Our algorithms are inspired by iterates of the form: \begin{equation} \label{eq:omd_equiv} \left\{ \begin{array}{ll} {\mathbf x}_{t+1} = MD_{\eta}({\mathbf x}_t, -2A{\mathbf y}_t + A{\mathbf y}_{t-1})\\ {\mathbf y}_{t+1} = MD_{\eta}({\mathbf y}_t, 2A^\top {\mathbf x}_t - A^\top{\mathbf x}_{t-1}) \end{array}, \right. \end{equation} which are equivalent to the OMD in \citep{rakhlin2013optimization} (see Appendix \ref{app:quiv}). It is known that directly applying \eqref{eq:omd_equiv} to \eqref{eq:two_player_zero_sum_game} yields $O\l(\frac{1}{T}\r)$ convergence in the game value, albeit without any guarantee on the regret. To make OMD optimal for zero-sum games, we improve \eqref{eq:omd_equiv} on two fronts. First, in the honest setting, we make the following simple observation: although the iterates ${\mathbf x}_{t}$ are not guaranteed to possess sublinear regret, the averaged iterates $\frac{1}{t}\sum_{i=1}^t {\mathbf x}_i$ do enjoy logarithmic regret, and hence it suffices to play the averaged iterates in the honest setting. 
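To make the self-play dynamics \eqref{eq:omd_equiv} concrete, here is a minimal numerical sketch with the entropic mirror map (illustrative code in our own notation; the rock-paper-scissors payoff matrix and function names are ours, and we take $\eta = \frac{1}{2|A|_{\max}}$ as in our analysis). The duality gap of the averaged iterates shrinks as the play proceeds:

```python
import numpy as np

def md_step(x, g, eta):
    # Entropic mirror descent on the simplex: z_i proportional to x_i exp(-eta g_i).
    z = x * np.exp(-eta * g)
    return z / z.sum()

def omd_selfplay(A, T, eta):
    """Honest self-play with the optimistic iterates of eq. (omd_equiv):
    x_{t+1} = MD_eta(x_t, -2 A y_t + A y_{t-1}),
    y_{t+1} = MD_eta(y_t,  2 A^T x_t - A^T x_{t-1})."""
    m, n = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n
    x_prev, y_prev = x.copy(), y.copy()
    xs, ys = [], []
    for _ in range(T):
        gx = -2.0 * (A @ y) + A @ y_prev
        gy = 2.0 * (A.T @ x) - A.T @ x_prev
        x_prev, y_prev = x, y
        x = md_step(x, gx, eta)
        y = md_step(y, gy, eta)
        xs.append(x)
        ys.append(y)
    # In the honest setting, the *averaged* iterates are the played strategies.
    return np.mean(xs, axis=0), np.mean(ys, axis=0)

# Rock-paper-scissors: game value 0, unique uniform equilibrium.
A = np.array([[0.0, 1.0, -1.0], [-1.0, 0.0, 1.0], [1.0, -1.0, 0.0]])
xbar, ybar = omd_selfplay(A, T=2000, eta=0.5)       # eta = 1/(2 |A|_max)
gap = np.max(A @ ybar) - np.min(xbar @ A)           # duality gap, always >= 0
```

After $T=2000$ rounds the duality gap of the averaged pair is already well below $0.1$, consistent with the $O(1/T)$ rate.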
Second, in order to make OMD robust against any adversary, we utilize the ``mixing steps'' of \citet{rakhlin2013optimization} with an important improvement: our step-sizes do not depend on the time horizon. This new feature is crucial in removing the $\log T$ factors in both the convergence to the game value and the adversarial regret. In fact, our analysis is arguably simpler than that of \citet{rakhlin2013optimization}. \input{HDMD} \input{RDMD} \input{SDMD} \section{Experiments}\label{sec:experiments} The purpose of this section is to provide numerical evidence for the following claims of our theory: \begin{enumerate} \item The LbH algorithm does not require knowing the time horizon beforehand, and our step-sizes are non-adaptive. Therefore, all quantities of interest, such as regrets or the game value, should steadily decrease along the algorithm run. \item The LbH algorithm automatically adjusts to honest and adversarial opponents. \end{enumerate} For comparison, we include the modified OMD (henceforth abbreviated as m-OMD) of \citet{rakhlin2013optimization} in our experiment, for different choices of the time horizon. We generate the entries of $A$ uniformly at random in the interval $[-1, 1]$, and we set $m=200$ and $n=300$. We consider two scenarios: \begin{enumerate} \item \textit{Honest setting}: Both players adhere to the prescribed algorithms and try to reach the Nash equilibrium collaboratively. \item \textit{Adversarial setting}: The ${\mathbf y}$-player greedily maximizes the instantaneous regret of the ${\mathbf x}$-player. 
\end{enumerate} \begin{figure*} \centering \subfigure[{\label{fig:aq1}}Regret comparison.]{{\includegraphics[keepaspectratio=true,scale=0.65]{figs/RegretA4.pdf} }}% \subfigure[{\label{fig:bq1}}Upper bound.]{{\includegraphics[keepaspectratio=true,scale=0.65]{figs/Bound_lbh.pdf} }}% \caption{Adversarial setting.} \label{fig:adv} \end{figure*} \subsection{Honest Setting} The convergence for the honest setting is reported in \textbf{Figure \ref{fig:honest}}, for two different parameter choices of LbH and m-OMD. For both convergence to the game value and individual regret, after a short burn-in period (due to not knowing the $C_1$ in \eqref{eq:ADMD_hold1} and \eqref{eq:ADMD_hold2}), the LbH algorithm enters a steady $O\l(\frac{1}{T}\r)$-decreasing phase, as expected from our theory. On the other hand, as the m-OMD chooses step-sizes according to the time horizon, it eventually saturates in both plots. As noted by \citet{rakhlin2013optimization}, it is possible to prevent the saturation of m-OMD by employing the doubling trick or the techniques in \citep{auer2002adaptive}. However, doing so not only complicates the algorithm, but also introduces extra $\log T$ factors in the convergence of the honest regret, since the doubling trick loses a $\log T$ factor for logarithmic regrets. Such rates are sub-optimal given our results. \subsection{Adversarial Setting} We report the regret comparison in \textbf{Figure \ref{fig:adv}}. In the adversarial setting, the LbH algorithm is essentially running the ROMD, and hence we see a steady $O({T}^{-\frac{1}{2}})$ decrease in the regret, as dictated by our upper bound in \textbf{Theorem \ref{thm:RDMD}}; see \textbf{Figure \ref{fig:adv}}-(b). The parameter choice does not affect the performance. The m-OMD slightly outperforms LbH for a short period, but eventually blows up in regret. We remark that the short-term good empirical performance is due to the adaptive step-sizes of m-OMD, which require additional work per iteration. 
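For reference, the greedy adversary of the second scenario can be sketched as follows (an illustrative snippet in our notation, not the exact experiment code: under our sign convention, at each round the ${\mathbf y}$-player best-responds with the pure strategy that minimizes the ${\mathbf x}$-player's instantaneous payoff):

```python
import numpy as np

def greedy_adversary(A, x):
    """Pick the pure strategy e_j that maximizes the x-player's
    instantaneous loss <x, -A e_j>, i.e. j = argmin_j (A^T x)_j."""
    j = int(np.argmin(A.T @ x))
    y = np.zeros(A.shape[1])
    y[j] = 1.0
    return y

# Example: in a 2x2 identity game, against x = e_1 the adversary
# plays the column that gives the x-player zero payoff.
A = np.eye(2)
y = greedy_adversary(A, np.array([1.0, 0.0]))
```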
Our LbH algorithm is non-adaptive, but is already competitive in terms of empirical performance. \section{Conclusion and Future Work} We studied the problem of zero-sum games in the decentralized setting, and we resolved an open problem of achieving optimal convergence to the game value while maintaining low regrets. Our techniques were based on several simple but novel observations about the game dynamics. Namely, we noticed that the averaged iterates of OMD enjoy logarithmic regret in the honest setting, we provided horizon-independent mixing steps for OMD to achieve optimal adversarial regret, and we designed a signaling scheme to losslessly bridge OMD and ROMD. In essence, we showed that it is not necessary, as done in the work of \citet{rakhlin2013optimization}, to fix the time horizon beforehand and modify OMD accordingly. Our observations were instrumental in removing the $\log T$ terms in all convergence rates. Our framework suggests several research directions. First, instead of assuming that we observe the full loss vector, we may pose our problem in the \emph{bandit} setting, where only the payoff value of the current strategy is observed. Second, for practical purposes, it is interesting to see whether there exists an \emph{adaptive} step-size version of our algorithm. Finally, generalizing our framework to \emph{multiplayer} games is a challenging direction for future work. \section{Equivalent Formulations of Optimistic Mirror Descent}\label{app:quiv} In this appendix, we show that the ${\mathbf x}_t$ iterates in \eqref{eq:omd_equiv} of the main text are equivalent to the following iterates given in \citep{chiang2012online, rakhlin2013online}: \begin{equation} \label{eq:omd_equiv1} \left\{ \begin{array}{ll} {\mathbf x}_t &= MD_\eta\l(\tilde{{\mathbf x}}_t, -A{\mathbf y}_{t-1} \r)\\ \tilde{{\mathbf x}}_{t+1} &= MD_\eta \l(\tilde{{\mathbf x}}_t, -A{\mathbf y}_{t} \r) \end{array}. \right. 
\end{equation} By the optimality condition for \eqref{eq:omd_equiv1}, we have \begin{align} \nabla \psi({\mathbf x}_t) &= \nabla \psi (\tilde{{\mathbf x}}_t) - \eta \l( -A{\mathbf y}_{t-1}\r),\label{eq:equiv_hold1}\\ \nabla \psi(\tilde{{\mathbf x}}_t) &= \nabla \psi (\tilde{{\mathbf x}}_{t-1}) - \eta \l( -A{\mathbf y}_{t-1}\r),\label{eq:equiv_hold2}\\ \nabla \psi(\tilde{{\mathbf x}}_{t-1}) &= \nabla \psi ({\mathbf x}_{t-1}) + \eta \l( -A{\mathbf y}_{t-2}\r). \label{eq:equiv_hold3} \end{align}We hence get \eqref{eq:omd_equiv} by applying \eqref{eq:equiv_hold3} to \eqref{eq:equiv_hold2} and then \eqref{eq:equiv_hold2} to \eqref{eq:equiv_hold1}; explicitly, chaining the three displays gives \begin{equation} { \nonumber } \nabla \psi({\mathbf x}_t) = \nabla \psi ({\mathbf x}_{t-1}) - \eta \l( -2A{\mathbf y}_{t-1} + A{\mathbf y}_{t-2} \r), \end{equation} which is the ${\mathbf x}$-update of \eqref{eq:omd_equiv} with the index shifted by one. \section{Optimistic Mirror Descent}\label{app:HDMD_proof} In this appendix, we prove \textbf{Theorem \ref{thm:HDMD}}, restated below for convenience. \begin{theorem} Suppose two players of a zero-sum game have played $T$ rounds according to \textbf{Algorithm \ref{alg:HDMDx}} and \textbf{\ref{alg:HDMDy}} with $\eta = \frac{1}{2|A|_{\max}}$. Then \begin{enumerate} \item The ${\mathbf x}$-player suffers an $O\l(\log T\r)$ regret: \begin{align} \max_{{\mathbf z}\in\Delta_m}\sum_{t=3}^T \ip{{\mathbf z}_t - {\mathbf z}}{ -A{\mathbf w}_t} &\leq { \Big(\log (T-2) + 1 \Big)\Big(20 + \log m +\log n \Big)|A|_{\max}} \\ &= O\l( {\log T}\r) { \nonumber } \end{align} and similarly for the ${\mathbf y}$-player. \item The strategies $({\mathbf z}_T,{\mathbf w}_T)$ constitute an $O\l(\frac{1}{T} \r)$-approximation to the value of the game: \begin{equation} \left| V - \ip{{\mathbf z}_T}{ A {\mathbf w}_T} \right| \leq \frac{ \Big(20 + \log m +\log n \Big)|A|_{\max}} {T-2} = O\l( \frac{1}{T}\r). \end{equation} \end{enumerate} \end{theorem} \begin{proof} Define ${\mathbf x}^*$ as \begin{align} {\mathbf x}^* = \arg\min_{{\mathbf x} \in \Delta_m} \left\langle {\mathbf x} , -A \l( \frac{1}{T-2}\sum_{t=3}^{T}{\mathbf y}_t \r) \right \rangle. 
\end{align} We define an auxiliary individual regret $\textup{R}_T^{\mathbf x}$ as \begin{equation} \textup{R}_T^{\mathbf x} \coloneqq \sum_{t = 3}^{T} \langle {\mathbf x}_t - {\mathbf x}^*, -A{\mathbf y}_t \rangle. \end{equation} Notice that this is the regret of the ${\mathbf x}_t$ sequence against the ${\mathbf y}_t$ sequence, whereas the algorithm actually plays the ${\mathbf z}_t$'s and ${\mathbf w}_t$'s. We then have \begin{align} \textup{R}_T^{\mathbf x}&= \sum_{t = 3}^{T} \langle {\mathbf x}_t - {\mathbf x}^*, -A{\mathbf y}_t \rangle { \nonumber }\\ &= \langle {\mathbf x}_3 - {\mathbf x}^*,-A {\mathbf y}_3\rangle + \sum_{t=4}^{T} \langle {\mathbf x}_t - {\mathbf x}^* , -A{\mathbf y}_t\rangle { \nonumber }\\ &\leq 2|A|_{\max} + \sum_{t=4}^{T}\langle {\mathbf x}_t - {\mathbf x}^*, -A{\mathbf y}_t - {\mathbf g}_{t-1}\rangle + \sum_{t=4}^{T}\langle {\mathbf x}_t - {\mathbf x}^*, {\mathbf g}_{t-1}\rangle { \nonumber } \end{align} where ${\mathbf g}_t \coloneqq -2(t-2)A {\mathbf w}_t+3(t-3)A{\mathbf w}_{t-1} - (t-4)A{\mathbf w}_{t-2}$. Inserting ${\mathbf w}_t = \frac{1}{t-2}\sum_{i = 3}^{t}{\mathbf y}_i$ into the definition of ${\mathbf g}_t$, we get ${\mathbf g}_t = -2A{\mathbf y}_{t} + A{\mathbf y}_{t-1}$. 
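Spelling out the cancellation behind this identity (an added intermediate step, using $(t-2){\mathbf w}_t = \sum_{i=3}^{t}{\mathbf y}_i$): \begin{align*} {\mathbf g}_t &= -2A\sum_{i=3}^{t}{\mathbf y}_i + 3A\sum_{i=3}^{t-1}{\mathbf y}_i - A\sum_{i=3}^{t-2}{\mathbf y}_i \\ &= -2A{\mathbf y}_t + A\sum_{i=3}^{t-1}{\mathbf y}_i - A\sum_{i=3}^{t-2}{\mathbf y}_i = -2A{\mathbf y}_t + A{\mathbf y}_{t-1}. \end{align*}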
Straightforward calculation then shows: \begin{align} \textup{R}_T^{\mathbf x}&\leq 2|A|_{\max} + \sum_{t=4}^{T}\langle {\mathbf x}_t - {\mathbf x}^*, -A{\mathbf y}_t + 2A{\mathbf y}_{t-1} - A{\mathbf y}_{t-2}\rangle + \sum_{t=4}^{T}\langle {\mathbf x}_t - {\mathbf x}^*, -2A{\mathbf y}_{t-1} + A{\mathbf y}_{t-2}\rangle { \nonumber }\\ &= 2|A|_{\max} + \sum_{t=4}^{T}\langle {\mathbf x}_t - {\mathbf x}^*, (-A{\mathbf y}_t + A{\mathbf y}_{t-1}) - (-A{\mathbf y}_{t-1} + A{\mathbf y}_{t-2})\rangle { \nonumber }\\ &\qquad \quad \qquad \quad \quad \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{1}{\eta}\sum_{t=4}^{T} \Big(D({\mathbf x}^*,{\mathbf x}_{t-1}) - D({\mathbf x}^*,{\mathbf x}_{t}) - D({\mathbf x}_t,{\mathbf x}_{t-1}) \Big) { \nonumber } \\ &= 2|A|_{\max} + \sum_{t=4}^{T-1}\langle {\mathbf x}_t - {\mathbf x}_{t+1}, -A{\mathbf y}_{t} + A{\mathbf y}_{t-1}\rangle + \langle {\mathbf x}_4 - {\mathbf x}^*, A{\mathbf y}_3 - A{\mathbf y}_2\rangle { \nonumber } \\ &\quad \ \ \ \ + \langle {\mathbf x}_T - {\mathbf x}^* , -A{\mathbf y}_T + A{\mathbf y}_{T-1}\rangle + \frac{1}{\eta}\sum_{t=4}^{T} \Big(D({\mathbf x}^*,{\mathbf x}_{t-1}) - D({\mathbf x}^*,{\mathbf x}_{t}) - D({\mathbf x}_t,{\mathbf x}_{t-1}) \Big) { \nonumber } \\ &\leq 10|A|_{\max}+ \sum_{t=4}^{T-1}\langle {\mathbf x}_t - {\mathbf x}_{t+1}, -A{\mathbf y}_{t} + A{\mathbf y}_{t-1}\rangle { \nonumber } \\ & \hspace{150pt} + \frac{1}{\eta}\sum_{t=4}^{T} \Big(D({\mathbf x}^*,{\mathbf x}_{t-1}) - D({\mathbf x}^*,{\mathbf x}_{t}) - D({\mathbf x}_t,{\mathbf x}_{t-1}) \Big) { \nonumber } \\ &\leq 10|A|_{\max} + \sum_{t=4}^{T-1}\|{\mathbf x}_t - {\mathbf x}_{t+1}\|_1 \cdot |A|_{\max} \cdot\| {\mathbf y}_{t} - {\mathbf y}_{t-1}\|_1 { \nonumber } \\ &\hspace{150pt} + \frac{1}{\eta}\Big(D({\mathbf x}^*,{\mathbf x}_3) - D({\mathbf x}^*,{\mathbf x}_T) \Big) +\sum_{t=4}^{T}\frac{-1}{\eta} D({\mathbf x}_t, {\mathbf x}_{t-1}) { \nonumber } \\ &\leq 10|A|_{\max} + \frac{1}{2}\sum_{t=4}^{T-1} \Big( |A|_{\max} \cdot \|{\mathbf x}_t - 
{\mathbf x}_{t+1}\|_1^2 + |A|_{\max} \cdot \|{\mathbf y}_{t} - {\mathbf y}_{t-1} \|_1^2 \Big) { \nonumber }\\ &\hspace{150pt} + \frac{1}{\eta}\Big(D({\mathbf x}^*, {\mathbf x}_3) - D({\mathbf x}^*,{\mathbf x}_T) \Big) +\sum_{t=4}^{T}\frac{-1}{\eta} D({\mathbf x}_t, {\mathbf x}_{t-1}). { \nonumber } \end{align} Using the fact that $\psi$ is 1-strongly convex with respect to the $\ell_1$-norm, we have $ -D({\mathbf x},{\mathbf x}') \leq -\frac{1}{2}\|{\mathbf x}-{\mathbf x}' \|_1^2 \leq 0$. Also, we have $D({\mathbf x}^*,{\mathbf x}_3) \leq \log m$. Combining these facts in the last inequality gives: \begin{align*} \textup{R}_T^{\mathbf x}&\leq 10|A|_{\max} + \frac{\log m}{\eta} + \frac{|A|_{\max}}{2}\sum_{t=4}^{T-1} \|{\mathbf x}_t - {\mathbf x}_{t+1}\|_1^2 { \nonumber } \\ &\hspace{70pt} + \frac{|A|_{\max}}{2}\sum_{t=4}^{T-1} \|{\mathbf y}_t - {\mathbf y}_{t-1}\|_1^2 - \frac{1}{2\eta} \sum_{t=4}^{T} \|{\mathbf x}_{t-1}-{\mathbf x}_t \|_1^2. \end{align*} Similarly, for the second player we define \begin{equation} \textup{R}^{\mathbf y}_T \coloneqq \sum_{t=3}^T \ip{{\mathbf y}_t - {\mathbf y}^*}{ A^\top{\mathbf x}_t} \end{equation} where ${\mathbf y}^* \coloneqq \arg\min_{\mathbf y} \ip{{\mathbf y}}{ A^\top \l( \frac{1}{T-2}\sum_{t=3}^T {\mathbf x}_t \r)}$. We then have \begin{align*} \textup{R}^{\mathbf y}_T &\leq 10|A|_{\max} + \frac{\log n}{\eta} + \frac{|A|_{\max}}{2}\sum_{t=4}^{T-1} \|{\mathbf y}_t - {\mathbf y}_{t+1}\|_1^2 { \nonumber } \\ &\hspace{70pt} + \frac{|A|_{\max}}{2}\sum_{t=4}^{T-1} \|{\mathbf x}_t - {\mathbf x}_{t-1}\|_1^2 - \frac{1}{2\eta} \sum_{t=4}^{T} \|{\mathbf y}_{t-1}-{\mathbf y}_t \|_1^2. \end{align*} Setting $\eta = \frac{1}{2|A|_{\max}}$, we get \begin{equation} \label{eq:HDMD_proof_hold1} \textup{R}_T^{\mathbf x}+ \textup{R}^{\mathbf y}_T \leq \Big(20 + \log m +\log n \Big){|A|_{\max}}. 
\end{equation} Now, recalling that ${\mathbf z}_T = \frac{\sum_{t=3}^T {\mathbf x}_t}{T-2}$ and ${\mathbf w}_T = \frac{\sum_{t=3}^T {\mathbf y}_t}{T-2}$ and using the definition of $\textup{R}_T^{\mathbf x}$ and $\textup{R}^{\mathbf y}_T$, we get \begin{align} \label{eq:HDMD_proof_hold2} \frac{1}{T-2} \Big( \textup{R}_T^{\mathbf x}+ \textup{R}^{\mathbf y}_T \Big) = \max_{{\mathbf x} \in \Delta_m} \ip{{\mathbf x}}{ A{\mathbf w}_T} - \min_{{\mathbf y}\in\Delta_n} \ip{{\mathbf z}_T}{A{\mathbf y}}. \end{align} Furthermore, by the definition of the value of the game, we have \begin{equation} \label{eq:HDMD_proof_hold3} \min_{{\mathbf y} \in \Delta_n} \ip{{\mathbf z}_T}{A{\mathbf y}} \leq V \leq \max_{{\mathbf x} \in \Delta_m} \ip{{\mathbf x}}{A{\mathbf w}_T}. \end{equation} We also trivially have \begin{equation} \label{eq:HDMD_proof_hold4} \min_{{\mathbf y} \in \Delta_n} \ip{{\mathbf z}_T}{A{\mathbf y}} \leq \ip{{\mathbf z}_T}{A{\mathbf w}_T} \leq \max_{{\mathbf x} \in \Delta_m} \ip{{\mathbf x}}{A{\mathbf w}_T}. \end{equation} Combining \eqref{eq:HDMD_proof_hold2} - \eqref{eq:HDMD_proof_hold4} in \eqref{eq:HDMD_proof_hold1} then establishes \eqref{eq:HDMD_value_of_the_game}: \begin{equation} { \nonumber } \left |V - \ip{{\mathbf z}_T}{A{\mathbf w}_T} \right | \leq \frac{ \Big(20 + \log m +\log n \Big)|A|_{\max}} {T-2}. \end{equation} We now turn to \eqref{eq:HDMD_low_regret}. Let $\textup{R}_T^{\mathbf z} \coloneqq \max_{{\mathbf z}\in \Delta_m}\sum_{t=3}^T \ip{{\mathbf z}_t - {\mathbf z}}{ -A{\mathbf w}_t}$ and let $\tilde{\textup{R}}_T^{\mathbf z} \coloneqq \sum_{t=3}^T \ip{{\mathbf z}_t - {\mathbf z}_t^*}{ -A{\mathbf w}_t}$ where ${\mathbf z}^*_t = \arg\min_{{\mathbf z}\in\Delta_m} \ip{{\mathbf z}}{-A{\mathbf w}_t}$. Evidently we have $\textup{R}_T^{\mathbf z} \leq \tilde{\textup{R}}_T^{\mathbf z}$. 
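The sandwich inequalities \eqref{eq:HDMD_proof_hold3} and \eqref{eq:HDMD_proof_hold4} only use that the extrema of a linear function over the simplex are attained at vertices. A minimal numeric check (our own illustration, with arbitrary test data):

```python
import numpy as np

# min_y <z, Ay> <= <z, Aw> <= max_x <x, Aw>: over the simplex, the
# minimum of <z, Ay> is the smallest entry of A^T z and the maximum
# of <x, Aw> is the largest entry of Aw.
rng = np.random.default_rng(1)
m, n = 3, 4
A = rng.standard_normal((m, n))
z = rng.dirichlet(np.ones(m))
w = rng.dirichlet(np.ones(n))

lower = (A.T @ z).min()   # min over y in the simplex of <z, Ay>
upper = (A @ w).max()     # max over x in the simplex of <x, Aw>
assert lower <= float(z @ A @ w) <= upper
```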
Notice that (with ${\mathbf w}_t^*$ similarly defined) \begin{align} \ip{{\mathbf z}_t - {\mathbf z}_t^*}{ -A{\mathbf w}_t} &= \ip{{\mathbf z}^*_t}{ A{\mathbf w}_t} - \ip{{\mathbf z}_t}{ A{\mathbf w}_t} { \nonumber } \\ &\leq \ip{{\mathbf z}^*_t}{ A{\mathbf w}_t} - \ip{{\mathbf z}_t}{ A{\mathbf w}^*_t} { \nonumber }\\ &\leq \frac{\Big(20 + \log m +\log n \Big){|A|_{\max}}}{t-2} \label{eq:HDMD_proof_hold5} \end{align}by \eqref{eq:HDMD_proof_hold1} and \eqref{eq:HDMD_proof_hold2}. Using these inequalities, we get \begin{align*} \frac{1}{T-2} \textup{R}_T^{\mathbf z} \leq \frac{1}{T-2} \tilde{\textup{R}}_T^{\mathbf z} &= \frac{1}{T-2}\sum_{t=3}^T \ip{{\mathbf z}_t - {\mathbf z}^*_t}{ -A{\mathbf w}_t} \\ &\leq \frac{1}{T-2} \sum_{t=3}^T \frac{ \Big(20 + \log m +\log n \Big)|A|_{\max}} {t-2} \\ &\leq \frac{ \Big(\log (T-2) + 1 \Big)\Big(20 + \log m +\log n \Big)|A|_{\max}} {T-2} \end{align*}which finishes the proof. \end{proof} \section{Robust Optimistic Mirror Descent}\label{app:RDMD_proof} In this appendix, we prove \textbf{Theorem \ref{thm:RDMD}}, repeated below for convenience. \begin{theorem}[$O(\sqrt{T})$-Adversarial Regret] Suppose that $\| \nabla f_t \|_* \leq G$ for all $t$. Then playing $T$ rounds of \textbf{Algorithm \ref{alg:RDMD}} with $\eta_t = \frac{1}{G\sqrt{t}}$ against an arbitrary sequence of convex functions has the following guarantee on the regret: \begin{align} \max_{{\mathbf x}\in\Delta_m}\sum_{t=1}^T \ip{{\mathbf x}_t - {\mathbf x}}{ \nabla f_t ({\mathbf x}_t)} &\leq G\sqrt{T}\l( 18+2D^2\r) + GD\l(3\sqrt{2} + 4D \r) { \nonumber }\\ & = O\l( {\sqrt{T} }\r). { \nonumber } \end{align} \end{theorem} \def{ 9 }{{ 9 }} \begin{proof} Define $\textup{R}^{\mathbf x}_T \coloneqq \sum_{t=1}^T \ip{{\mathbf x}_t - {\mathbf x}^*}{ \nabla f_t({\mathbf x}_t)} $ where ${\mathbf x}^* \coloneqq \arg\min_{{\mathbf x}\in\Delta_m} \ip{{\mathbf x}}{\sum_{t=1}^T \nabla f_t({\mathbf x}_t)}$. 
Let $\tilde{\nabla }_t = 2\nabla f_t({\mathbf x}_t)-\nabla f_{t-1}({\mathbf x}_{t-1})$, and let $\eta_t = \frac{1}{\alpha \sqrt{t}}$ for some $\alpha >0$ to be chosen later. Then \begin{align*} \textup{R}^{\mathbf x}_T &= \sum_{t=1}^{T} \langle {\mathbf x}_t - {\mathbf x}^*, \nabla f_t({\mathbf x}_t)\rangle \\ &\leq \sqrt{2}DG + \sum_{t=2}^{T} \langle {\mathbf x}_t - {\mathbf x}^*, \nabla f_t({\mathbf x}_t) - \tilde{\nabla}_{t-1}\rangle +\sum_{t=2}^{T} \langle {\mathbf x}_t - {\mathbf x}^*, \tilde{\nabla}_{t-1}\rangle \\ &\leq \sqrt{2}DG + \sum_{t=2}^{T} \langle {\mathbf x}_t - {\mathbf x}^*, \nabla f_t ({\mathbf x}_t)- \nabla f_{t-1}({\mathbf x}_{t-1})\rangle - \sum_{t=2}^{T} \langle {\mathbf x}_t - {\mathbf x}^*, \nabla f_{t-1}({\mathbf x}_{t-1}) - \nabla f_{t-2}({\mathbf x}_{t-2})\rangle +\sum_{t=2}^{T} \langle {\mathbf x}_t - {\mathbf x}^*, \tilde{\nabla}_{t-1}\rangle \\ &\leq 3\sqrt{2}DG + \sum_{t=2}^{T-1} \langle {\mathbf x}_t - {\mathbf x}_{t+1}, \nabla f_t({\mathbf x}_t) - \nabla f_{t-1}({\mathbf x}_{t-1})\rangle + \sum_{t=2}^{T}\frac{1}{\eta_t}\Big(D({\mathbf x}^*,\tilde{{\mathbf x}}_{t-1})-D({\mathbf x}^*, {\mathbf x}_{t}) - D({\mathbf x}_{t},\tilde{{\mathbf x}}_{t-1}) \Big) \\ &\leq 3\sqrt{2}DG + \sum_{t = 2}^{T-1} \l(\frac{\sqrt{t}G}{9}\|{\mathbf x}_t - {\mathbf x}_{t+1}\|^2 + \frac{9 G}{\sqrt{t}} \r) \\ & \hspace{100pt} + \alpha\sum_{t=2}^{T}\sqrt{t}\Big(D({\mathbf x}^*,\tilde{{\mathbf x}}_{t-1})-D({\mathbf x}^*, {\mathbf x}_{t}) - D({\mathbf x}_{t},\tilde{{\mathbf x}}_{t-1}) \Big) . 
\end{align*} Using the joint convexity of $D({\mathbf x},{\mathbf y})$ in ${\mathbf x}$ and ${\mathbf y}$ and the strong convexity of the entropic mirror map, we get: \begin{align} -D({\mathbf x}_{t+1}, \tilde{{\mathbf x}}_{t}) &\leq - \frac{1}{2} \| \tilde{{\mathbf x}}_t -{\mathbf x}_{t+1} \|^2 { \nonumber }\\ &\leq -\frac{1}{4} \left\|\frac{t-1}{t} ({\mathbf x}_t - {\mathbf x}_{t+1}) \right\|^2 +\frac{1}{2} \l(\frac{1}{t}\r)^2 \left\|{\mathbf x}_c - {\mathbf x}_{t+1} \right\|^2 { \nonumber }\\ &\leq -\frac{(t-1)^2}{4 t^2}\| {\mathbf x}_t - {\mathbf x}_{t+1}\|^2 + \frac{D^2}{t^2} , { \nonumber } \end{align} and \begin{align} &D({\mathbf x}^*,\tilde{{\mathbf x}}_t) \leq \frac{t-1}{t}D({\mathbf x}^*,{\mathbf x}_t)+ \frac{1}{t}D \l({\mathbf x}^*,{\mathbf x}_c \r).{ \nonumber } \end{align} Meanwhile, straightforward calculations show that \begin{align} &\sum_{t=2}^T \frac{D \l({\mathbf x}^*, {\mathbf x}_c \r)}{\sqrt{t}} \leq 2D^2\sqrt{T},{ \nonumber } \end{align} and \begin{align} \sum_{t=2}^T \l(\sqrt{t} \cdot \frac{t-1}{t} D({\mathbf x}^*, {\mathbf x}_{t-1}) - \sqrt{t} D({\mathbf x}^*, {\mathbf x}_t)\r) &\leq \sum_{t=2}^T \l(\sqrt{t-1} D({\mathbf x}^*, {\mathbf x}_{t-1}) - \sqrt{t} D({\mathbf x}^*, {\mathbf x}_t)\r) { \nonumber }\\ &\leq D({\mathbf x}^*, {\mathbf x}_1) \leq D^2.{ \nonumber } \end{align} We can hence continue as \begin{align} \textup{R}^{\mathbf x}_T &\leq 3\sqrt{2}DG + \sum_{t = 2}^{T-1} \l(\frac{\sqrt{t}}{9}G\|{\mathbf x}_t - {\mathbf x}_{t+1}\|^2 + \frac{ 9G}{\sqrt{t}} \r) + 2\alpha D^2\sqrt{T}{ \nonumber } \\ & +\alpha D^2 - \frac{ \alpha }{4}\sum_{t=2}^{T}\sqrt{t} \cdot \l(\frac{t-1}{t}\r)^2 \|{\mathbf x}_{t-1} - {\mathbf x}_{t}\|^2 + \alpha D^2\sum_{t=2}^{T}\frac{\sqrt{t}}{t^2}. \label{eq:RDMD_proof_hold1} \end{align} Elementary calculations further show \begin{align} &\sum_{t = 2}^{T-1}\frac{9G}{\sqrt{t}} \leq 18G\sqrt{T} { \nonumber }, \\ & \sum_{t=2}^T \frac{1}{t\sqrt{t}} \leq 3. 
{ \nonumber } \end{align} Finally, since $(\frac{t-1}{t})^2 \geq \frac{4}{9}$ for $t\geq 3$, we can further bound \eqref{eq:RDMD_proof_hold1} as \begin{align*} \textup{R}^{\mathbf x}_T &\leq 3\sqrt{2}DG + 18G \sqrt{T} + 2\alpha D^2\sqrt{T} + 4\alpha D^2 \\ & \hspace{80pt} + \l( \frac{G}{9}\sum_{t=2}^{T-1} \sqrt{t} \| {\mathbf x}_t - {\mathbf x}_{t+1} \|^2 - \frac{\alpha}{4} \cdot \frac{4}{9}\sum_{t=2}^{T-1} \sqrt{t+1} \| {\mathbf x}_t - {\mathbf x}_{t+1}\|^2 \r). \end{align*} The proof is finished by choosing $\alpha = G.$ \end{proof} \subsection{Optimistic Mirror Descent}\label{subsec:omd} \begin{algorithm} [h] \caption{Optimistic Mirror Descent: ${\mathbf x}$-Player} Set $\eta= \frac{1}{2|A|_{\max}}$\\ Play ${\mathbf z}_1 = {\mathbf z}_2 = {\mathbf z}_3 = \frac{1}{m}1_m $\\ For $t \geq 3$: \begin{algorithmic}[1] \STATE Compute \begin{align*} {\mathbf x}_{t+1} = MD_\eta &({\mathbf x}_t,-2(t-2)A{\mathbf w}_t \\&+3(t-3)A{\mathbf w}_{t-1} - (t-4)A{\mathbf w}_{t-2} ) \end{align*} \STATE Play ${\mathbf z}_{t+1} = \frac{1}{t-1}\sum_{i = 3}^{t+1}{\mathbf x}_i$ \STATE Observe $-A{\mathbf w}_{t+1}$ \end{algorithmic}\label{alg:HDMDx} \end{algorithm} \begin{algorithm}[h] \caption{Optimistic Mirror Descent: ${\mathbf y}$-Player} Set $\eta= \frac{1}{2|A|_{\max}}$\\ Play ${\mathbf w}_1 = {\mathbf w}_2 = {\mathbf w}_3 = \frac{1}{n}1_n$ \\ For $t \geq 3$: \begin{algorithmic}[1] \STATE Compute \begin{align*} {\mathbf y}_{t+1} = MD_\eta &({\mathbf y}_t,2(t-2)A^\top{\mathbf z}_t \\ &-3(t-3)A^\top{\mathbf z}_{t-1} + (t-4)A^\top{\mathbf z}_{t-2}) \end{align*} \STATE Play ${\mathbf w}_{t+1} = \frac{1}{t-1}{\sum_{i = 3}^{t+1}{\mathbf y}_i}$ \STATE Observe $A^\top {\mathbf z}_{t+1}$ \end{algorithmic}\label{alg:HDMDy} \end{algorithm} As alluded to in Section \ref{subsec:ideas}, we will play OMD with the averaged iterates. The algorithms are given explicitly in \textbf{Algorithm \ref{alg:HDMDx}} and \textbf{\ref{alg:HDMDy}}. 
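Under the entropic mirror map, the update $MD_\eta({\mathbf x},{\mathbf g})$ used by both players has a closed form: an exponential (multiplicative-weights) step followed by normalisation. A minimal sketch, assuming exactly this entropic setup; the code is our own illustration, not the paper's implementation:

```python
import numpy as np

def md_step(x, g, eta):
    """Entropic mirror descent on the simplex:
    argmin_{x' in simplex} <x', g> + (1/eta) KL(x' || x)."""
    y = x * np.exp(-eta * g)
    return y / y.sum()

# One step shifts mass toward coordinates with smaller loss, so the
# linear loss <x, g> cannot increase relative to the starting point.
x = np.ones(4) / 4
g = np.array([1.0, 0.5, -0.5, -1.0])
x_new = md_step(x, g, eta=0.5)
assert np.isclose(x_new.sum(), 1.0)
assert x_new @ g <= x @ g
```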
\begin{remark} Note that there is no need to play $\frac{1}{m}1_m$ and $\frac{1}{n}1_n$ three times in \textbf{Algorithm \ref{alg:HDMDx}} and \textbf{\ref{alg:HDMDy}}. The players could play the uniform strategies just once, observe $\l(\frac{1}{m}1_m\r)^\top A \l(\frac{1}{n}1_n\r)$, and would already have enough information to run OMD from ${\mathbf x}_4$ and ${\mathbf y}_4$. Our choice is motivated by notational convenience. \end{remark} We analyze our version of OMD below. The crux of our analysis is to first look at the regrets of the auxiliary sequences ${\mathbf x}_t$ and ${\mathbf y}_t$, and we show that the \emph{sum} of the auxiliary regrets, rather than either of them individually, controls both the convergence to the value of the game and the honest regret for the averaged sequences ${\mathbf z}_t$ and ${\mathbf w}_t$. \begin{theorem}\label{thm:HDMD} Suppose two players of a zero-sum game have played $T$ rounds according to the OMD algorithm with $\eta = \frac{1}{2|A|_{\max}}$. Then \begin{enumerate} \item The ${\mathbf x}$-player suffers an $O\l({\log T}\r)$ regret: \begin{align} \max_{{\mathbf z}\in\Delta_m}\sum_{t=3}^T \ip{{\mathbf z}_t - {\mathbf z}}{ -A{\mathbf w}_t} &\leq \Big(\log (T-2) + 1 \Big)|A|_{\max} \times { \nonumber }\\ &\hspace{25pt} \Big(20 + \log m +\log n \Big) \label{eq:HDMD_low_regret}\\ &= O\l( {\log T}\r) { \nonumber } \end{align} and similarly for the ${\mathbf y}$-player. \item The strategy pair $({\mathbf z}_T,{\mathbf w}_T)$ constitutes an $O\l(\frac{1}{T} \r)$-approximation to the value of the game: \begin{align} \left| V - \ip{{\mathbf z}_T}{ A {\mathbf w}_T} \right| &\leq \frac{ \Big(20 + \log m +\log n \Big)|A|_{\max}} {T-2}\label{eq:HDMD_value_of_the_game}\\ & = O\l( \frac{1}{T}\r). { \nonumber } \end{align} \end{enumerate} \end{theorem} \begin{proof} See \textbf{Appendix \ref{app:HDMD_proof}}. 
\end{proof} \subsection{Robust Optimistic Mirror Descent}\label{subsec:romd} In this section, we introduce \textbf{robust optimistic mirror descent} (ROMD), which is a novel algorithm even for online convex optimization. Let $\psi$ be 1-strongly convex with respect to $\| \cdot \|$, and suppose we are minimizing the regret against an arbitrary sequence of convex functions $f_1, f_2, \ldots $ in a constraint set $\mathcal{D}$. Assume that each function is $G$-Lipschitz in $\| \cdot \|$. Assume also that no Bregman projection is needed (i.e., $MD_{\eta}({\mathbf x},{\mathbf g}) \in { \mathcal{D} }$ for any ${\mathbf x}$ and ${\mathbf g}$); this is, for instance, the case for the entropic mirror map. We state ROMD in the general form in \textbf{Algorithm \ref{alg:RDMD}}. \begin{theorem}[$O(\sqrt{T})$-Adversarial Regret]\label{thm:RDMD} Suppose that $\| \nabla f_t \|_* \leq G$ for all $t$. Then playing $T$ rounds of \textbf{Algorithm \ref{alg:RDMD}} with $\eta_t = \frac{1}{G\sqrt{t}}$ against an arbitrary sequence of convex functions has the following guarantee on the regret: \begin{align} \max_{{\mathbf x}\in\Delta_m}\sum_{t=1}^T \ip{{\mathbf x}_t - {\mathbf x}}{ \nabla f_t ({\mathbf x}_t)} &\leq G\sqrt{T}\l( 18+2D^2\r) { \nonumber }\\ &\hspace{40pt}+ GD\l(3\sqrt{2} + 4D \r) { \nonumber } \\ & = O\l( {\sqrt{T} }\r). { \nonumber } \end{align} \end{theorem} \begin{proof} See \textbf{Appendix~\ref{app:RDMD_proof}}. 
\end{proof} \begin{algorithm}[h] \caption{Robust Optimistic Mirror Descent} \begin{algorithmic}[1] \STATE Initialize ${\mathbf x}_1 = {\mathbf x}_c$, $\nabla f_{0}=0$, $\eta_t = \frac{1}{G\sqrt{t}}$\\ \FOR{$t = 1, 2, ...$, } \STATE $\tilde{{\mathbf x}}_t = (\frac{t-1}{t}) {\mathbf x}_t + \frac{1}{t} {\mathbf x}_c$ \STATE Set $\tilde{\nabla}_t = 2\nabla f_{t}({\mathbf x}_t) - \nabla f_{t-1} ({\mathbf x}_{t-1})$, \\ play ${\mathbf x}_{t+1} = MD_{\eta_t}(\tilde{{\mathbf x}}_t, \tilde{\nabla}_t)$ \STATE Observe $f_{t+1}$ \ENDFOR \end{algorithmic}\label{alg:RDMD} \end{algorithm} When specialized to zero-sum games, it suffices to take ${\mathbf x}_c = \frac{1}{m}1_m$, $G = |A|_{\max}$, $D = \log m$, and $\psi$ the entropic mirror map. \begin{remark} Our analysis of ROMD crucially relies on the assumption that no Bregman projection is needed. We have not been able to generalize our analysis to the case with Bregman projections. \end{remark} \subsection{Related Work} \noindent\textbf{Algorithms for Decentralized Games:} To our knowledge, the only two explicit algorithms capable of solving zero-sum games in the decentralized setting are given by \citep{daskalakis2011near} and \citep{rakhlin2013optimization}, respectively. A comparison of their convergence rates versus ours is presented in Table \ref{tab:convergence_comparison}. The algorithm of \citep{daskalakis2011near} is a decentralized primal-dual method based on Nesterov's excessive gap technique \citep{nesterov2005excessive}. Its convergence guarantees are only slightly worse than ours (\textit{cf.,} Table \ref{tab:convergence_comparison}). 
However, due to the presence of complicated and unnatural scheduling steps, the authors of \citep{daskalakis2011near} were themselves not convinced of the practicality of their algorithm and stated the result as merely an ``existence proof.'' Later on, \citet{rakhlin2013optimization} proposed an algorithm based on Optimistic Mirror Descent (OMD), initially introduced in a special case by \citep{chiang2012online} and also studied in detail by \citep{rakhlin2013online}. While the algorithm is simple, it has several drawbacks. Foremost, it requires knowledge of the time horizon beforehand, which is unsatisfactory. Second, when both players play collaboratively, their regret is sub-optimal. Third, its adversarial regret and its convergence to the game value carry extra $\log T$ factors, which require additional care to remove. Finally, the algorithm uses \emph{adaptive} step-sizes, requiring additional work per iteration. \noindent\textbf{Meta-Algorithms:} There exists some work on ``meta-algorithms'' for games \citep{syrgkanis2015fast, foster2016learning}, which can turn certain learning algorithms into solvers for zero-sum games. For instance, leveraging the framework of \citep{syrgkanis2015fast}, one can modify OMD to achieve $O(T^{\frac{1}{4}})$ honest regret plus $\tilde{O}\l( \sqrt{T}\r)$ adversarial regret. Our algorithm uniformly outperforms these rates. \subsection{Let's be honest: The full framework}\label{subsec:full} We now present our approach for solving \eqref{eq:two_player_zero_sum_game}. To ease the notation, define \begin{equation} {\mathbf z}^*_t \coloneqq \arg\min_{{\mathbf x}\in\Delta_m} \ip{{\mathbf x}}{-A{\mathbf w}_t} { \nonumber } \end{equation} and \begin{equation} {\mathbf w}^*_t \coloneqq \arg\min_{{\mathbf y}\in\Delta_n} \ip{{\mathbf z}_t}{A{\mathbf y}}. 
{ \nonumber } \end{equation} Let constants $C_1, C_2$, and $C_3$ be such that (see \textbf{Theorem \ref{thm:HDMD}}, \textbf{Theorem \ref{thm:RDMD}}, and \eqref{eq:HDMD_proof_hold5}) \begin{align} &\ip{{\mathbf z}_t - {\mathbf z}_t^*}{ -A {\mathbf w}_t} \leq \frac{C_1}{t} ,\ \quad {\mathbf z}_t, {\mathbf w}_t \text{ from OMD}, \label{eq:ADMD_hold1}\\ &\ip{{\mathbf w}_t - {\mathbf w}_t^*}{ A^\top {\mathbf z}_t} \leq \frac{C_1}{t} , \quad {\mathbf z}_t, {\mathbf w}_t \text{ from OMD}, \label{eq:ADMD_hold2}\\ &\sum_{t=1}^T \ip{{\mathbf z}_t - {\mathbf z}^*}{ -A {\mathbf y}_t} \leq C_2 \sqrt{T}, \quad {\mathbf z}_t \text{ from ROMD and } { \nonumber }\\ &\hspace{140pt}{\mathbf y}_t \text{ arbitrary}, \label{eq:ADMD_hold4} \\ &| V - \ip{{\mathbf z}_T}{A {\mathbf w}_T}| \leq \frac{C_3}{T}, \quad {\mathbf z}_T, {\mathbf w}_T \text{ from OMD}. \label{eq:ADMD_hold3} \end{align} At a high level, our approach exploits the following simple observation: Suppose that we know $C_1$ above. If the instantaneous regret bounds \eqref{eq:ADMD_hold1} and \eqref{eq:ADMD_hold2} hold for all $t$, then we trivially have the desired convergence. In contrast, if at any round the bound \eqref{eq:ADMD_hold1} is violated for the ${\mathbf x}$-player, then this must be due to an adversarial play, and we can simply switch to ROMD to get $O(\sqrt{T})$ regret. However, since $C_1$ (\textit{cf.}, \eqref{eq:HDMD_proof_hold5}) involves $n$, the number of the opponent's strategies, the ${\mathbf x}$-player cannot compute it exactly. The situation is similar for the ${\mathbf y}$-player. We hence need to come up with a way to estimate $C_1$ for both players. 
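The detection-and-doubling idea just described can be sketched in a few lines. Everything below (the function name, the synthetic regret sequence) is our own illustration of the mechanism, not the paper's implementation:

```python
# Track a guess b for the unknown constant C1, flag a round whenever the
# observed instantaneous regret exceeds b/t, and double b after each flag.
def run_detection(inst_regrets, b=1.0):
    flags = []
    for t, r in enumerate(inst_regrets, start=1):
        if r > b / t:
            flags.append(t)
            b *= 2.0  # update the estimate of C1
    return b, flags

# An honest opponent with true constant C1 = 3: the regret ~ 3/t triggers
# only finitely many doublings, after which b exceeds C1 and no further
# flag can ever occur.
b, flags = run_detection([3.0 / t for t in range(1, 100)])
assert b >= 3.0
assert all(t <= 2 for t in flags)  # flags only happen while b < 3
```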
\begin{algorithm}[!t] \caption{Let's Be Honest Optimistic Mirror Descent: ${\mathbf x}$-Player} \begin{algorithmic}[1] \STATE Initialize $b=1, t=1, {\mathbf w}_0 = \frac{1}{n}1_n$ and ${\mathbf z}_0 = \frac{1}{m}1_m$ \STATE Play $t$-th round of OMD-${\mathbf x}$, observe $-A{ \mathbf{p} }_t$ \STATE \IF {$G_t^{{\mathbf w}} \coloneqq \ip{{\mathbf w}_{t-1}}{A^\top{\mathbf z}_{t-1}} - \ip{{ \mathbf{p} }_t }{A^\top{\mathbf z}_{t-1}} > \frac{b}{t-1}$ }{\STATE \hspace{20pt} Play $b^4 -1$ rounds of ROMD \STATE \hspace{20pt} $t \leftarrow t+1$ \STATE \hspace{23pt}$b \leftarrow 2b$ \STATE \hspace{20pt} Go to line 2.} \ENDIF \STATE $-A{\mathbf w}_t \leftarrow -A{ \mathbf{p} }_t$ \STATE \IF {$G_t^{{\mathbf z}} \coloneqq \ip{{\mathbf z}_{t}}{-A{\mathbf w}_{t}} - \ip{{\mathbf z}^*_t }{-A {\mathbf w}_{t}} > \frac{b}{t}$ } {\STATE \hspace{20pt} Play ${ \check{\x} }_{t+1} \coloneqq {\mathbf z}_t^*$ \STATE \hspace{20pt} Play $b^4 -1$ rounds of ROMD \STATE \hspace{20pt} $t \leftarrow t+2$ \STATE \hspace{23pt}$b \leftarrow 2b$ \STATE \hspace{20pt} Go to line 2.} \ENDIF \STATE $t \leftarrow t+1$ \STATE Go to line 2. \end{algorithmic}\label{alg:ADMD-x} \end{algorithm} It is important to note that one cannot na\"ively estimate $C_1$ by running a separate binary search for each player. The reason, and the major difficulty with the above approach, is as follows: Since in general $\ip{{\mathbf z}_t - {\mathbf z}_t^*}{ -A {\mathbf w}_t} \neq \ip{{\mathbf w}_t - {\mathbf w}_t^*}{ A^\top {\mathbf z}_t}$, it could be the case that, in the same round, the ${\mathbf x}$-player detects a bad instantaneous regret and switches to ROMD, while the ${\mathbf y}$-player remains in OMD, even though both players are honest. However, our entire analysis of OMD would break down if OMD is not played cohesively. Furthermore, recall that we also want robustness against any adversary. 
Therefore, a bad instantaneous regret indicates the possibility of receiving an adversarial play, and we need to switch to ROMD whenever this occurs. To resolve such issues, we devise a simple \textbf{signaling} scheme ($\check{{\mathbf x}}_t$ and $\check{{\mathbf y}}_t$ below), which synchronizes both players' $C_1$ estimates and also the OMD plays while guaranteeing robustness. In words, our signaling scheme is a ``Let's be honest'' message to the opponent: ``I am having a bad instantaneous regret. Please update your $C_1$ with me, and please pretend that I am adversarial for a small number of rounds, so that we can play honest OMD cohesively.'' It turns out that these extra signaling rounds do not hurt the convergence rates in OMD and ROMD at all. Our full algorithm, termed \textbf{Let's Be Honest} (LbH) \textbf{Optimistic Mirror Descent}, is presented in \textbf{Algorithm \ref{alg:ADMD-x}} and \textbf{\ref{alg:ADMD-y}}. {\remark In \textbf{Algorithm \ref{alg:ADMD-x}} and \textbf{\ref{alg:ADMD-y}}, the role of $b$ is to estimate the constant $C_1$ in \eqref{eq:ADMD_hold1}. Since our analysis requires $b$ to be the same for both players throughout the algorithm run, a simple way to obtain a better initial estimate is to assume that, say, $m=n = 5$, compute the corresponding $\tilde{C}_1$, and set the initial $b\leftarrow\tilde{C}_1$. Doing so indeed improves the constants in our convergence guarantees; we chose $b=1$ only for simplicity.} \begin{remark} There is some degree of freedom in \textbf{Algorithm \ref{alg:ADMD-x}} and \textbf{\ref{alg:ADMD-y}}. For instance, instead of doubling $b$ in Line 16, one can set $b \leftarrow (1+\epsilon)b$ for some $\epsilon >0$. In Line 5, one can also play $b^2-1$ rounds, rather than $b^4-1$. As will become apparent in \textbf{Theorem \ref{thm:ADMD}}, these variants only affect the constants but not the convergence rates. However, they do have an impact on empirical performance; \textit{cf.}, Section \ref{sec:experiments}. 
\end{remark} \begin{algorithm}[!t] \caption{Let's Be Honest Optimistic Mirror Descent: ${\mathbf y}$-Player} \begin{algorithmic}[1] \STATE Initialize $b=1, t=1, {\mathbf w}_0 = \frac{1}{n}1_n$ and ${\mathbf z}_0 = \frac{1}{m}1_m$ \STATE Play $t$-th round of OMD-${\mathbf y}$, observe $A^\top\mathbf{o}_t$ \STATE \IF{$G_t^{{\mathbf z}} \coloneqq \ip{{\mathbf z}_{t-1}}{-A{\mathbf w}_{t-1}} - \ip{\mathbf{o}_t }{-A{\mathbf w}_{t-1}} > \frac{b}{t-1}$ }{\STATE \hspace{20pt} Play $b^4 -1$ rounds of ROMD \STATE \hspace{20pt} $t \leftarrow t+1$ \STATE \hspace{23pt}$b \leftarrow 2b$ \STATE \hspace{20pt} Go to line 2.} \ENDIF \STATE $A^\top{\mathbf z}_t \leftarrow A^\top\mathbf{o}_t$ \STATE \IF {$G_t^{{\mathbf w}} \coloneqq \ip{{\mathbf w}_{t}}{A^\top{\mathbf z}_{t}} - \ip{{\mathbf w}^*_t }{A^\top {\mathbf z}_{t}} > \frac{b}{t}$ } {\STATE \hspace{20pt} Play ${ \check{\y} }_{t+1} \coloneqq {\mathbf w}_t^*$ \STATE \hspace{20pt} Play $b^4 -1$ rounds of ROMD \STATE \hspace{20pt} $t \leftarrow t+2$ \STATE \hspace{23pt}$b \leftarrow 2b$ \STATE \hspace{20pt} Go to line 2.} \ENDIF \STATE $t \leftarrow t+1$ \STATE Go to line 2. \end{algorithmic}\label{alg:ADMD-y} \end{algorithm} The following key lemma ensures that the two players enter the ROMD plays coherently. \begin{lemma}\label{lem:switching_together} If the ${\mathbf y}$-player enters Line 12 of \textbf{Algorithm \ref{alg:ADMD-y}} at the $t$-th round, then the ${\mathbf x}$-player enters Line 4 of \textbf{Algorithm \ref{alg:ADMD-x}} at the $(t+1)$-th round. Conversely, if, at the $t$-th round, the ${\mathbf y}$-player does not enter Line 12 of \textbf{Algorithm \ref{alg:ADMD-y}}, then the ${\mathbf x}$-player does not enter Line 4 of \textbf{Algorithm \ref{alg:ADMD-x}} at the $(t+1)$-th round. Exactly the same statements hold when the roles of the ${\mathbf x}$- and ${\mathbf y}$-players are reversed. 
\end{lemma} \begin{proof} If the ${\mathbf y}$-player enters Line 12 of \textbf{Algorithm \ref{alg:ADMD-y}} at the $t$-th round, then ${ \check{\y} }_{t+1}$ is signalled at the $(t+1)$-th round, and it must be the case that $\ip{{\mathbf w}_{t} - {\mathbf w}_{t}^* }{A^\top{\mathbf z}_{t}} > \frac{b}{t}$ (\textit{cf.}, Line 12 of \textbf{Algorithm \ref{alg:ADMD-y}}). Therefore, at the $(t+1)$-th round, the ${\mathbf x}$-player would receive $-A{ \check{\y} }_{t+1} = -A {\mathbf w}^*_t$ and compute \begin{align*} G^{{\mathbf w}}_{t+1} &= \ip{{\mathbf w}_{t}}{A^\top{\mathbf z}_{t}} - \ip{{ \check{\y} }_{t+1} }{A^\top{\mathbf z}_{t}} \\ &= \ip{{\mathbf w}_{t} - {\mathbf w}_{t}^* }{A^\top{\mathbf z}_{t}}> \frac{b}{t} \end{align*}and hence enter Line 4 of \textbf{Algorithm \ref{alg:ADMD-x}}. Conversely, suppose that the ${\mathbf y}$-player does not enter Line 12 of \textbf{Algorithm \ref{alg:ADMD-y}} at the $t$-th round (or, equivalently, plays OMD at the $(t+1)$-th round). Then $\ip{{\mathbf w}_{t} - {\mathbf w}_{t}^* }{A^\top{\mathbf z}_{t}} \leq \frac{b}{t}$, implying that \begin{align*} G^{{\mathbf w}}_{t+1} &= \ip{{\mathbf w}_{t} - {\mathbf w}_{t+1} }{A^\top{\mathbf z}_{t}} \\ &\leq \ip{{\mathbf w}_{t} - {\mathbf w}_{t}^* }{A^\top{\mathbf z}_{t}} \leq \frac{b}{t} \end{align*}hence preventing the ${\mathbf x}$-player from entering Line 4 of \textbf{Algorithm \ref{alg:ADMD-x}}. Exactly the same computation holds when the roles of the ${\mathbf x}$- and ${\mathbf y}$-players are reversed. \end{proof} Given \textbf{Lemma \ref{lem:switching_together}}, we now know that the ${\mathbf x}$-player switches to ROMD \textbf{if and only if} the ${\mathbf y}$-player does. The rest of the proof then readily follows from \textbf{Theorems \ref{thm:HDMD}} and \textbf{\ref{thm:RDMD}}. \begin{theorem}\label{thm:ADMD} Suppose the ${\mathbf x}$-player plays according to \textbf{Algorithm \ref{alg:ADMD-x}} for $T$ rounds, and let $\textup{R}_T$ be the regret up to time $T$. 
Then \begin{enumerate} \item Let $T = T_1 + T_2 + T_3$ where $T_1$ is the number of OMD plays, $T_2$ is the number of ROMD plays, and $T_3$ is the number of signaling rounds (playing ${ \check{\x} }_t$ or ${ \check{\y} }_t$). Then there are constants $C$ and $C'$, depending only on $m,n$ and $|A|_{\max}$, such that \begin{equation} \label{eq:regret_overall} \frac{1}{T}\textup{R}_T \leq \frac{C\log T_1 + C' \sqrt{T_2}}{T_1+ T_2}. \end{equation} In particular, if the opponent plays honestly, then $\textup{R}_T = O(\log T_1) = O(\log T)$. If the opponent is adversarial, we have $\textup{R}_T = O(\sqrt{T_2}) = O(\sqrt{T})$. \item Suppose that the honest ${\mathbf y}$-player plays \textbf{Algorithm \ref{alg:ADMD-y}}. Then the pair $({\mathbf z}_T, {\mathbf w}_T)$ constitutes an $O\l( \frac{1}{T}\r)$-approximate equilibrium: \begin{equation} \label{eq:value_ADMD} | V - \ip{{\mathbf z}_T}{A{\mathbf w}_T} | \leq \frac{C''}{T} \end{equation} for some constant $C''$. \end{enumerate} \end{theorem} \begin{figure*}[!h] \centering \subfigure[{\label{fig:aq}}Value of the game.]{{\includegraphics[keepaspectratio=true,scale=0.65]{figs/ValueH_comp.pdf} }} \subfigure[{\label{fig:bq}}Regret.]{{\includegraphics[keepaspectratio=true,scale=0.65]{figs/RegretH_comp.pdf} }}% \caption{Honest setting.} \label{fig:honest} \end{figure*} \begin{proof} Suppose first that both players are honest. We first prove the individual regret bound for the ${\mathbf x}$-player. We split the terms as follows: \begin{align} \label{eq:ADMD_hold5} \textup{R}_T &= \textup{R}_{T_1}(\textup{playing OMD}) + \textup{R}_{T_2}(\textup{playing ROMD}) { \nonumber }\\ &\hspace{30pt} + \textup{R}_{T_3}(\textup{signaling}) . \end{align} Recall \eqref{eq:ADMD_hold1}-\eqref{eq:ADMD_hold3}. We claim that \begin{enumerate}[label=(\alph*)] \item $T_3 \leq \lceil \log C_1 \rceil. $ \item $T_2 \leq \frac{16\l(16^{T_3} -1\r)}{15} \eqqcolon C_1'$. 
\end{enumerate} Indeed, after $\lceil \log C_1 \rceil$ signaling rounds, we would have $b = 2^{T_3} > C_1$. Then \eqref{eq:ADMD_hold1} and \eqref{eq:ADMD_hold2} imply that we will never enter Line 12 again. On the other hand, we have \begin{equation} { \nonumber } T_2 \leq \sum_{r=1}^{T_3} 2^{4r} = \frac{16\l(16^{T_3} -1\r)}{15}. \end{equation} Combining (a), (b) and using \eqref{eq:ADMD_hold1}, \eqref{eq:ADMD_hold4} in \eqref{eq:ADMD_hold5}, we conclude that \begin{align*} \textup{R}_T &\leq C_1\log T_1 + C_2\sqrt{T_2} + 2|A|_{\max}T_3 \\ & \leq C_1\log T_1 + C_2 \sqrt{C_1'} + 2|A|_{\max} \lceil \log C_1 \rceil \\ &= O(\log T_1) = O(\log T) \end{align*}which establishes \eqref{eq:regret_overall} in the honest case. For convergence to the value of the game, we have, by \eqref{eq:ADMD_hold3}, \begin{align*} | V - \ip{{\mathbf z}_T}{A{\mathbf w}_T} | \leq \frac{C_3}{T-T_2-T_3} \leq \frac{C_3}{T- C^*} \end{align*}where $C^* = \lceil \log C_1 \rceil + C_1'.$ The proof of \eqref{eq:value_ADMD} is completed by using the fact that $\frac{1}{T-C^*} \leq \frac{C^*}{T}$ when $T\geq \frac{C^{*2}}{C^*-1}$. Finally, we show \eqref{eq:regret_overall} in the adversarial case. Let $T_1, T_2$, and $T_3$ be as before, and we again split the regret into: \begin{align} \textup{R}_T &= \textup{R}_{T_1}(\textup{playing OMD}) + \textup{R}_{T_2}(\textup{playing ROMD}) { \nonumber }\\ &\hspace{30pt} + \textup{R}_{T_3}(\textup{signaling}). { \nonumber } \end{align} Notice that this time the inequalities \eqref{eq:ADMD_hold1} and \eqref{eq:ADMD_hold2} do not apply since the opponent no longer plays OMD collaboratively. However, by Line 12 of \textbf{Algorithm \ref{alg:ADMD-x}}, for every OMD play we must have \begin{align*} \ip{{\mathbf z}_{t}}{-A{\mathbf w}_{t}} - \ip{{\mathbf z}^*_t }{-A {\mathbf w}_{t}} \leq \frac{b}{t} \leq \frac{2^{T_3}}{t}. 
\end{align*} Following the analysis as in the honest setting, we may further write \begin{equation} { \nonumber } \textup{R}_T \leq 2^{T_3}\log T_1 + C_2\sqrt{T_2} + 2|A|_{\max} T_3. \end{equation} It hence suffices to show that \begin{equation} \label{eq:ADMD_hold6} 2^{T_3} \log T_1 \leq C^{**}\sqrt{T_1 + T_2} \end{equation} for some constant $C^{**}$. To see \eqref{eq:ADMD_hold6}, recall that \begin{equation} { \nonumber } T_2 = \frac{16(16^{T_3} -1)}{15} \geq 16^{T_3-1}. \end{equation} But then \begin{align*} \frac{2^{T_3} \log T_1 }{\sqrt{T_1 + T_2}} &\leq \frac{2^{T_3} \log T_1 }{\sqrt{2 \sqrt{T_1 T_2}}} \\ &\leq \frac{2^{T_3} \log T_1 }{2^{T_3-1} \cdot \sqrt{2} \cdot \sqrt[4]{T_1} } \leq C^{**} \end{align*}for some universal constant $C^{**}$. \end{proof} \begin{remark} As is evident from the proof, we have made no attempt to sharpen the constants, and hence our bounds can be numerically loose. \end{remark}
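The closed form of the geometric series bounding the number of ROMD rounds, $\sum_{r=1}^{T_3} 2^{4r} = \frac{16(16^{T_3}-1)}{15}$, is easy to sanity-check numerically. A minimal sketch (our own check, not part of the proof):

```python
# Each r-th switch contributes at most 2^{4r} ROMD rounds; the closed
# form of the resulting geometric series is 16 (16^{T3} - 1) / 15.
for T3 in range(1, 8):
    total = sum(2 ** (4 * r) for r in range(1, T3 + 1))
    assert total == 16 * (16 ** T3 - 1) // 15
```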
\section{Introduction} \label{introduction} We study the maximum of a branching random walk in discrete time. The branching random walk can be described as follows. Given are two ingredients, an offspring distribution on $\N_0$ with weights $(p(k))_{k \in \N_0}$ and a (centred) step size distribution on $\R$. At time $0$ the process starts with one particle at the origin. At every time $n \in \N$ each particle repeats the following two steps, independently of everything else. First, it produces offspring according to the offspring distribution $(p(k))_{k \in \N_0}$, and then it dies. Afterwards, the offspring particles move according to the step size distribution. Let $m$ be the reproduction mean of the offspring distribution. We always assume $m > 1$. We are interested in the position of the rightmost particle at time $n$. Therefore, let $D_n$ be the set of particles at time $n$. For $v \in D_n$ denote by $S_v$ the position of particle $v$ at time $n$. The maximum of the branching random walk is defined as \begin{equation}\label{Mndef} M_n= \max_{v \in D_n} S_v. \end{equation} The asymptotics of the maximum $M_n$ are by now well-understood. Let $I$ be the rate function of the random walk. Conditioned on survival, it is well-known, see Biggins \cite{B76}, Hammersley \cite{H74} and Kingman \cite{K75}, that $M_n$ grows at linear speed $x^*$, where \begin{equation} \label{def_speed} x^*=\sup \bigl\{ x \geq 0 \colon I(x) \leq \log m \bigr\}. \end{equation} Addario-Berry and Reed \cite{ABR09} as well as Hu and Shi \cite{HS09} obtain a logarithmic second term. In \cite{A13} A{\"i}d{\'e}kon finally proves that $M_n - x^*n + c \log n$ converges in distribution, where $c>0$ is an explicit constant. Furthermore, let $\tilde{M}_n$ be the maximum of $|D_n|$ independent random walks. One can check that the speed of $(\tilde{M}_n)_n$ also equals $x^*$, see \cite[Theorem 1]{Z16} or \eqref{proof_thm_ind_RW_4}. 
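For concreteness, the process defined in \eqref{Mndef} is straightforward to simulate. The following sketch is purely our own illustration, with binary branching ($p(2)=1$, so $m=2$) and standard Gaussian steps; none of these parameter choices come from the paper:

```python
import numpy as np

def brw_max(n, rng):
    """Simulate one realisation of the maximum M_n of a branching random
    walk with deterministic binary branching and N(0, 1) increments."""
    positions = np.zeros(1)  # one particle at the origin at time 0
    for _ in range(n):
        # each particle produces exactly two offspring...
        positions = np.repeat(positions, 2)
        # ...and each offspring makes an independent centred step
        positions = positions + rng.standard_normal(positions.size)
    return positions.max()

M10 = brw_max(10, np.random.default_rng(2))  # 2^10 particles at time 10
assert np.isfinite(M10)
```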
However, compared to the branching random walk, there is a larger logarithmic correction term, see e.g.~\cite[Theorem 1]{Z16}. Similar results were proved for branching Brownian motion by Bramson in \cite{B78}. We investigate the exponential decay rates of the probabilities $\P \bigl( \frac{M_n}{n} \geq x \bigr)$ for $x \geq x^*$ and $\P \bigl( \frac{M_n}{n} \leq x \bigr)$ for $x \leq x^*$. Our main result, Theorem \ref{theorem_BRW}, characterises these exponential decay rates. We consider the same question for $\tilde{M}_n$ and determine the exponential decay rates, see Theorem \ref{theorem_ind_RW}. Interestingly, the rate functions coincide for $x \geq x^*$, but in general they do not coincide for $x < x^*$. Similar questions have been studied before. Large deviation estimates for the maximum of branching Brownian motion were first investigated by Chauvin and Rouault in \cite{CR88} and very recently by Derrida and Shi in \cite{DS17} and \cite{DS17_2}. See also \cite{DS16} and \cite{S13} for extensions with coalescence and selection or immigration, respectively. Note that \cite{DS17_2} also treats continuous time branching random walks. The difference from our setup is that in the time-continuous case, the strategies can involve the exponential waiting times, while in our setup, they can involve the branching mechanism given by the offspring distribution. Upper large deviations for the maximum of discrete time branching random walks have been investigated in \cite{R93} in the case where every particle has at least one offspring. Recently, large deviation results for the empirical distribution of the branching random walk have been obtained in \cite{LP15}, \cite{CH17}, \cite{LT17}, but they do not seem to imply our result. We also mention that in the case of a fixed number of offspring, much more precise results (describing not only the exponential decay rates) were derived in \cite{BM17}. Our strategy of proof is rather direct.
We compare $M_n$ and $\tilde{M}_n$ and show that $\tilde{M}_n$ stochastically dominates $M_n$ for all $n \in \N$, see Lemma~\ref{lemma_BRW_vs_ind_RW2}. Let us now introduce the model in a more formal way and fix some notation. Let $(Z_n)_{n \in \N_0}$ be a Galton-Watson process with one initial particle and offspring law given by the weights $(p(k))_{k \in \N_0}$. Let $m=\sum_{k=1}^\infty k p(k) $ be the reproduction mean. The associated Galton-Watson tree is denoted by $\mathcal{T}=(V,E)$, where $V$ is the set of vertices and $E$ is the set of edges. Further, for $n \in \N$ let $D_n$ be the set of vertices in the $n$-th generation of the tree. Then, $|D_n|=Z_n$. For $v \in D_n$ the set of descendants of $v$ in the $(l+n)$-th generation is denoted by $D_l^v$. Note that $|D_l^v|$ equals $|D_l|$ in distribution. The root of $\mathcal{T}$ is called $o \in V$. For $v,w \in V$ define $[v,w]$ as the set of edges along the unique path from $v$ to $w$. We now define the locations of the particles. Let $(X_e)_{e \in E}$ be a collection of i.i.d.~random variables, i.e.~every edge of $\mathcal{T}$ is labelled with a random variable. For $v \in D_n$ the position of the particle $v$ at time $n$ is defined as $S_v=\sum_{w \in [o,v]} X_w$. For $n \in \N$ the position of the rightmost particle at time $n$ is $M_n= \max_{v \in D_n} S_v$ and if $D_n=\emptyset$, we set $M_n = - \infty$. We refer to $(M_n)_{n \in \N}$ as the maximum of the branching random walk. For $v \in D_n$, the rightmost descendant of $v$ at time $l+n$ is defined as $M_l^v=\max_{w \in D_l^v} S_w$. We also introduce a collection of i.i.d.~random variables $(X_i^j)_{i,j \in \N}$, where $X_1^1$ has the same distribution as $X_e$ for some $e \in E$. Moreover, for $j,n \in \N$ define the random walk $S_n^j= \sum_{i=1}^n X_i^j$ and the maximum of independent random walks as \begin{equation}\label{Mntildedef} \tilde{M}_n = \max_{1 \leq j \leq Z_n} S_n^j. 
\end{equation} In analogy to the maximum of the branching random walk, we set $\tilde{M}_n=- \infty$ if $D_n = \emptyset$. Furthermore, for $i \in \N$ let $X_i$ be an independent copy of $X_i^1$ and define $S_n= \sum_{i=1}^n X_i$. Note that for every time $n$ the number of particles in the branching random walk equals the number of random walks considered for $\tilde{M}_n$. However, the positions of the particles in the branching random walk are not independent. Indeed, this dependence is such that the maximum of independent random walks stochastically dominates the maximum of the branching random walk, see Lemma~\ref{lemma_BRW_vs_ind_RW2}. We introduce the measure \begin{equation} \label{def_P*} \P^*( \cdot) = \P( \cdot \vert Z_n > 0 \ \forall n\in \N). \end{equation} The associated expectation is denoted by $\E^*$. Let $(a_n)_{n \in \N}$ be a sequence of positive numbers and let $c \in (0,\infty]$ be a constant. With a slight abuse of notation for $c=\infty$, we write $a_n=\exp(-cn + o(n))$ if $ \lim_{n \to \infty} \frac{1}{n} \log a_n = -c$. Note that $a_n$ decays faster than exponentially in $n$ if $c = \infty$. The stochastic processes considered in this paper are discrete time processes. However, to increase the readability of the paper, we omit integer parts if no confusion arises. The paper is organised as follows. In Section~2 we first introduce the rate function of the random walk and two rate functions concerning the Galton-Watson process. We further describe our assumptions. Then we state our main results in Section~3. We collect some preliminary results in Section~4 and prove the main results in Section~5. \section{Rate functions and assumptions} \label{ratefcts} In this section we introduce the rate functions of the random walk and the Galton-Watson process, which are needed to state our main results.
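To make the conditioning in \eqref{def_P*} concrete, here is a small numerical sketch (the offspring weights $p(0)=0.2$, $p(1)=0.3$, $p(2)=0.5$ are our own illustrative choice, not taken from the paper): the extinction probability is the smallest fixed point of the offspring generating function, and a Monte Carlo run can cross-check it.

```python
import numpy as np

# Illustrative offspring weights (our choice): p(0)=0.2, p(1)=0.3, p(2)=0.5,
# hence m = 1.3 > 1.  The extinction probability is the smallest fixed point
# of f(s) = sum_k p(k) s^k; here f(s) = s gives q = 0.4.

def extinction_prob(p, iterations=200):
    """Iterate s -> f(s) from s = 0; this converges to the smallest fixed point."""
    s = 0.0
    for _ in range(iterations):
        s = sum(p_k * s ** k for k, p_k in enumerate(p))
    return s

def survives(p, rng, cap=1000):
    """One Galton-Watson run: True once the population exceeds `cap` particles
    (from there on, eventual extinction has negligible probability)."""
    z = 1
    while 0 < z <= cap:
        counts = rng.multinomial(z, p)   # how many particles have 0, 1, 2 children
        z = int(sum(k * c for k, c in enumerate(counts)))
    return z > cap

p = [0.2, 0.3, 0.5]
q = extinction_prob(p)
rng = np.random.default_rng(1)
q_mc = 1.0 - np.mean([survives(p, rng) for _ in range(3000)])
```

The fixed-point iteration and the Monte Carlo estimate agree; the measure $\P^*$ then conditions on the complementary event of probability $1-q$.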
\subsection{Rate function of the random walk} For $x \in \R$ the rate function of the random walk $(S_n)_{n \in \N}$ is defined as \begin{equation} \label{def_I} I(x)= \sup_{\lambda \in \R} \bigl(\lambda x - \log \E\left[e^{\lambda X_1} \right]\bigr). \end{equation} \begin{as} \label{assumption_RW} There exists $\varepsilon>0$ such that $\E\bigl[ e^{\lambda X_1} \bigr] < \infty$ for all $\lambda \in (-\varepsilon, \varepsilon)$. Furthermore, for simplicity suppose that $\E[X_1]=0$. \end{as} Note that $\E[X_1]=0$ is not necessary for the results in Section~\ref{results}, since we could derive similar results for the collection of random variables $(X_e-\E[X_1])_{e \in E}$. Assumption \ref{assumption_RW} ensures that $I(x)>0$ for all $x \neq 0$ and $I(x) \to \infty$ as $|x| \to \infty$. If Assumption \ref{assumption_RW} is satisfied, Cramér's theorem implies that the probabilities $\P(S_n \geq xn)$ decay exponentially in $n$ with rate $I(x)$ for $x>0$. A proof can e.g.~be found in \cite[Theorem 3.7.4]{DZ10}. \subsection{Rate functions of the Galton-Watson process} First, we need to introduce some more assumptions before we can state the large deviation results. \begin{as} \label{assumption_supercritical} The Galton-Watson process is supercritical, i.e.~$m>1$. \end{as} Let $ q=\inf \bigl\{s \in [0,1] \colon \E[s^{Z_1}]=s \bigr\}$. Note that $q$ is the extinction probability of the process $(Z_n)_{n \in \N_0}$. Assumption \ref{assumption_supercritical} implies that $q < 1$. \begin{as} \label{assumption_ZlogZ} The Galton-Watson process satisfies $\E[Z_1 \log Z_1] < \infty$. \end{as} First of all, note that Assumption \ref{assumption_ZlogZ} implies $m < \infty$. Together with Assumption \ref{assumption_RW}, this yields that the speed of the branching random walk $x^*$ defined in \eqref{def_speed} is finite. 
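The Legendre transform \eqref{def_I} and the speed \eqref{def_speed} are easy to evaluate numerically. The following sketch (our illustration; Rademacher steps $\P(X_1=1)=\P(X_1=-1)=\tfrac12$ and binary branching, $m=2$, are example choices, not from the paper) compares a grid approximation of $I$ with its closed form and recovers $x^*=1$.

```python
import numpy as np

# Illustration (our choices): Rademacher steps P(X_1 = 1) = P(X_1 = -1) = 1/2,
# so log E[e^{lam X_1}] = log cosh(lam), and binary branching, m = 2.
# Closed form: I(x) = ((1+x)/2) log(1+x) + ((1-x)/2) log(1-x) for |x| < 1.

lam = np.linspace(-20.0, 20.0, 200001)   # grid of lambda values for the sup

def I_numeric(x):
    """Grid approximation of I(x) = sup_lam (lam*x - log E[exp(lam X_1)])."""
    # log cosh(lam) = logaddexp(lam, -lam) - log 2, computed stably
    return float(np.max(lam * x - (np.logaddexp(lam, -lam) - np.log(2.0))))

def I_closed(x):
    return (1 + x) / 2 * np.log(1 + x) + (1 - x) / 2 * np.log(1 - x)

# Speed: x* = sup{x >= 0 : I(x) <= log m}; here I(1) = log 2 = log m, so x* = 1.
xs = np.linspace(0.0, 1.0, 201)
x_star = xs[[I_numeric(x) <= np.log(2.0) for x in xs]].max()
```

In this example the essential supremum of the step distribution equals the speed, which corresponds to the boundary case $I(x^*)=\log m$ discussed below.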
Due to the well-known Kesten-Stigum Theorem, Assumption \ref{assumption_ZlogZ} implies that the Galton-Watson process grows like its expectation, see Theorem~\ref{Lemma_GWP_martingale}. \begin{as} \label{assumption_Schroeder} Every particle has fewer than two children with positive probability, i.e.~it holds that $p(0)+p(1)>0$. \end{as} Assumption \ref{assumption_Schroeder} is often referred to as the Schröder case, whereas the case $p(0)+ p(1)=0$ is called the Böttcher case. If Assumptions \ref{assumption_supercritical} and \ref{assumption_ZlogZ} are satisfied, there is a large deviation result for the probability that $(Z_n)_{n \in \N_0}$ grows at most subexponentially. A sequence $(a_n)_{n \in \N}$ is called subexponential if $a_n e^{-\varepsilon n} \to 0$ as $n \to \infty$ for all $\varepsilon>0$. Define \begin{equation}\label{def_rho} \rho: = -\log \E[Z_1 q^{Z_1-1}] \in (0, \infty]. \end{equation} Note that $\rho=-\log p(1)$ if $p(0)=0$ (and therefore also $q=0$). In particular, $\rho <\infty$ if and only if Assumption \ref{assumption_Schroeder} is satisfied. Consider the set \begin{equation} \label{def_A} A= \bigl\{l \in \N \colon \exists n \in \N \text{ such that } \P(Z_n=l)>0 \bigr\} \end{equation} containing all positive integers $l$ such that there are $l$ particles at some time $n$ with positive probability. \begin{theorem}\label{lemma_LDP_rho} Let Assumptions \ref{assumption_supercritical} and \ref{assumption_ZlogZ} hold. Then, for every $k \in A$ we have \begin{equation*} \lim_{n \to \infty} \frac{1}{n} \log \P^* \bigl(Z_n =k \bigr) = - \rho. \end{equation*} Moreover, for every subexponential sequence $(a_n)_{n \in \N}$ such that $a_n \to \infty$ as $n \to \infty$, \begin{equation*} \lim_{n \to \infty} \frac{1}{n} \log \P^* \bigl(Z_n \leq a_n \bigr) = - \rho. \end{equation*} \end{theorem} A proof of the first statement in Theorem~\ref{lemma_LDP_rho} can be found in \cite[Chapter 1, Section 11, Theorem 3]{AN04}.
The second statement is a consequence of \cite[Theorem 3.1]{BB13}. For $x \in [0, \log m]$ define the rate function of the Galton-Watson process as \begin{equation} \label{def_I_GW} I^{\text{GW}}(x)= \rho \bigl(1-x (\log m)^{-1} \bigr). \end{equation} Note that $I^{\text{GW}}(x)>0$ for all $x < \log m$. \begin{theorem}\label{lemma_LDP_Zn} Under Assumptions \ref{assumption_supercritical} and \ref{assumption_ZlogZ} we have for $x \in [0, \log m]$ \begin{equation*} \lim_{n \to \infty} \frac{1}{n} \log \P^* \bigl( Z_n \leq e^{xn} \bigr) = - I^{\textnormal{GW}}(x). \end{equation*} \end{theorem} This theorem is a consequence of \cite[Theorem 3.2]{BB13}. Note that there is also an upper large deviation result for $\P^* \bigl( Z_n \geq e^{xn} \bigr)$, where $x > \log m$, see e.g.~\cite[Theorem 1]{BB11}. \section{Results} \label{results} After defining the rate functions of the random walk and the Galton-Watson process we are now able to state our main results. Note that $I(x^*)= \log m$ if $I(x)< \infty$ for some $x > x^*$. On the other hand, $I(x^*)< \log m$ already implies $\P(X_1>x^*)=0$. This case leads to a different shape of the rate functions, see Figure \ref{fig_rate_functions}. Let \begin{equation} \label{def_k*} k^*=\inf\{k \geq 1 \colon p(k)>0 \}. \end{equation} Note that $k^*$ is the smallest positive integer such that $\P(Z_n=k^*)>0$ for some $n \in \N$. Define the rate function for the maximum of independent random walks as \begin{equation} \label{def_I_ind} I^{\text{ind}}(x)= \begin{cases} I(x) - \log m & \text{for } x > x^*,\\ 0 & \text{for } x=x^*,\\ \rho \bigl(1 - \tfrac{I(x)}{\log m} \bigr) & \text{for } 0 \leq x < x^*,\\ k^* I(x) + \rho & \text{for } x \leq 0. \end{cases} \end{equation} Note that $\rho \bigl(1 - \tfrac{I(x)}{\log m} \bigr)= I^{\text{GW}}(I(x))$ for $ 0 \leq x < x^*$. Recall the maximum $\tilde{M}_n$ of a random number of independent random walks, defined in \eqref{Mntildedef}.
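For concreteness, \eqref{def_I_ind} can be evaluated directly. The sketch below is our own illustration with example ingredients not taken from the paper: standard Gaussian steps, so $I(x)=x^2/2$, and offspring weights $p(0)=0.2$, $p(1)=0.3$, $p(2)=0.5$, so that $m=1.3$, $q=0.4$, $k^*=1$ and $\rho=-\log(p(1)+2p(2)q)=-\log 0.7$.

```python
import numpy as np

# Illustrative example (our choices, not from the paper): Gaussian steps,
# I(x) = x^2/2, offspring weights p(0)=0.2, p(1)=0.3, p(2)=0.5, so m = 1.3,
# q = 0.4, k* = 1 and rho = -log E[Z_1 q^{Z_1-1}] = -log(0.3 + 2*0.5*0.4).

m = 1.3
rho = -np.log(0.3 + 2 * 0.5 * 0.4)
k_star = 1
x_star = np.sqrt(2 * np.log(m))       # solves I(x*) = log m

def I(x):
    return 0.5 * x * x                # rate function of the Gaussian walk

def I_gw(x):
    return rho * (1 - x / np.log(m))  # I^GW from (def_I_GW)

def I_ind(x):
    """Piecewise rate function (def_I_ind) for the maximum of independent walks."""
    if x > x_star:
        return I(x) - np.log(m)
    if x == x_star:
        return 0.0
    if x >= 0:
        return I_gw(I(x))             # = rho * (1 - I(x)/log m)
    return k_star * I(x) + rho
```

In this example $I$ is finite everywhere and $I(x^*)=\log m$, so $I^{\text{ind}}$ is continuous at $x^*$ and decreases from $\rho$ to $0$ on $[0,x^*]$.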
\begin{theorem}\label{theorem_ind_RW} Suppose that Assumptions \ref{assumption_RW}, \ref{assumption_supercritical} and \ref{assumption_ZlogZ} are satisfied. Then, the laws of $\frac{\tilde{M}_n}{n}$ under $\P^*$ satisfy a large deviation principle with rate function $I^{\text{ind}}$. More precisely, \begin{align*} - I^{\textnormal{ind}}(x)= \begin{cases} \lim_{n \to \infty} \frac{1}{n} \log \P \bigl( \frac{\tilde{M}_n}{n} \geq x \bigr) & \text{for } x \geq x^*,\\ \lim_{n \to \infty} \frac{1}{n} \log \P \bigl( \frac{\tilde{M}_n}{n} \leq x \bigr) & \text{for } x \leq x^*. \end{cases} \end{align*} \end{theorem} In the Böttcher case ($p(0)+p(1)=0$) we have $\rho=\infty$ and therefore $I^{\text{ind}}(x)=\infty$ for all $x<x^*$. Hence, in this case the lower deviation probabilities $\P^*( \tilde{M}_n \leq xn)$ for $x<x^*$ decay faster than exponentially in $n$. Let us now give some intuition for the rate function $I^{\text{ind}}$ and describe the large deviation event $\{\tilde{M}_n \geq xn\}$ for some $x>x^*$, respectively $\{\tilde{M}_n \leq xn\}$ for some $x<x^*$. For $x>x^*$, the number of particles should be larger than or equal to its expectation, i.e.~$Z_n \geq e^{nt}$ for some $t \geq \log m$. The probability of such an event is of order $\exp(-I^{\text{GW}}(t)n + o(n))$. If there are $e^{nt}$ particles at time $n$, the probability that at least one particle reaches $xn$ is of order $\exp(-I(x)n+tn+o(n))$ for $t < I(x)$. Therefore, we need to maximize the product of these two probabilities, which amounts to minimizing $I^{\text{GW}}(t)+ I(x) - t$, where $t$ runs over the interval $[\log m, I(x))$. It turns out that the optimal value is $t=\log m$. This argument also goes through for the maximum of the branching random walk. If $0 \leq x< x^*$, the probability that one particle reaches $xn$ is of order $\exp(-I(x)n+o(n))$.
Hence, for every $\varepsilon>0$, if there are fewer than $e^{(I(x)- \varepsilon) n}$ particles, the probability that none of these particles reaches $xn$ is close to 1. However, if there are more than $e^{(I(x)+ \varepsilon) n}$ particles, this probability decays exponentially in $n$. If $x<0$, already the probability that a single particle is below $xn$ at time $n$ decays exponentially fast in $n$. Hence, if the number of particles $Z_n$ grows exponentially, the probability that all particles are below $xn$ at time $n$ decays faster than exponentially. Therefore, the number of particles needs to grow subexponentially. Since $\rho$ does not depend on the choice of $k$ in Theorem~\ref{lemma_LDP_rho}, the optimal strategy is to have only $k^*$ particles at time $n$ (provided that $\rho < \infty$). Next, we consider the maximum of the branching random walk. For $x < x^*$ let \begin{equation} \label{def_H} H(x)=\inf_{t \in (0,1]} \Bigl\{ t \rho + t I \bigl(t^{-1} \bigl(x-(1-t)x^* \bigr) \bigr) \Bigr\} . \end{equation} Note that for $x > 0$ it suffices to take the infimum over $t \in (0,1-\frac{x}{x^*}]$. Define the rate function of the branching random walk as \begin{equation} \label{def_I_BRW} I^{\text{BRW}}(x)= \begin{cases} I(x) - \log m & \text{for } x > x^*,\\ 0 & \text{for } x=x^*,\\ H(x) & \text{for } x < x^*. \end{cases} \end{equation} \begin{theorem}\label{theorem_BRW} Suppose that Assumptions \ref{assumption_RW}, \ref{assumption_supercritical}, \ref{assumption_ZlogZ} and \ref{assumption_Schroeder} are satisfied. Then, the laws of $ \frac{M_n}{n}$ under $\P^*$ satisfy a large deviation principle with rate function $I^{\textnormal{BRW}}$. More precisely, \begin{align*} - I^{\textnormal{BRW}}(x)= \begin{cases} \lim_{n \to \infty} \frac{1}{n} \log \P \bigl( \frac{M_n}{n} \geq x \bigr) & \text{for } x \geq x^*,\\ \lim_{n \to \infty} \frac{1}{n} \log \P \bigl( \frac{M_n}{n} \leq x \bigr) & \text{for } x \leq x^*.
\end{cases} \end{align*} \end{theorem} In contrast to the case of independent random walks we only consider the Schröder case (Assumption \ref{assumption_Schroeder}) for the branching random walk. \begin{remark} Let us comment on the shape of the rate functions and on Assumption \ref{assumption_Schroeder}. \begin{itemize} \item[(a)] One can check that the rate function $I^{\textnormal{BRW}}$ is convex. Further note that $I^{\textnormal{ind}}$ is concave on the interval $[0,x^*]$. \item[(b)] Assumption \ref{assumption_Schroeder} is only needed for the lower deviations ($x<x^*$) in Theorem \ref{theorem_BRW}. In the Böttcher case, i.e.~if Assumption~\ref{assumption_Schroeder} is not satisfied, the strategy for lower deviations is different. We refer to \cite{CH18} for recent results. \end{itemize} \end{remark} For $x > x^*$ we have $I^{\text{BRW}}(x)=I^{\text{ind}}(x)$. In this case the strategy is the same as for independent random walks. The strategy in the case $x < x^*$ goes as follows. At time $tn$ there are only $k^*$ particles, and the position of one of those particles is smaller than its expectation. All other $k^*-1$ particles are killed at time $tn$. Note that by Assumption \ref{assumption_Schroeder} either $k^*=1$ or particles may have no offspring with positive probability. Afterwards, every particle moves and branches according to its usual behaviour. Notice further that, in contrast to the case of independent random walks, the number of particles can also grow exponentially if $x<0$. It suffices to have a small number of particles at time $tn$ for some $t \in [0,1]$.
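The infimum in \eqref{def_H} is easy to evaluate on a grid. The sketch below is our own illustration; the ingredients (Gaussian steps, so $I(x)=x^2/2$, with $m=1.3$ and $\rho=-\log 0.7$) are example choices, not from the paper. It also exhibits a strict gap $I^{\textnormal{BRW}}(x)<I^{\textnormal{ind}}(x)$ at a point $x<x^*$.

```python
import numpy as np

# Example ingredients (our choices): Gaussian steps, so I(x) = x^2/2,
# reproduction mean m = 1.3 and rho = -log(0.7); then x* = sqrt(2 log m).

m = 1.3
rho = -np.log(0.7)
x_star = np.sqrt(2 * np.log(m))

def I(x):
    return 0.5 * x * x

def H(x):
    """Brute-force evaluation of (def_H): inf over t in (0,1] of
    t*rho + t*I((x - (1-t)*x_star)/t), on a fine t-grid."""
    t = np.linspace(1e-5, 1.0, 200000)
    return float(np.min(t * rho + t * I((x - (1 - t) * x_star) / t)))

def I_brw(x):
    if x > x_star:
        return I(x) - np.log(m)
    if x == x_star:
        return 0.0
    return H(x)

def I_ind_lower(x):                   # I^ind on [0, x*), for comparison
    return rho * (1 - I(x) / np.log(m))
```

Choosing $t=1-x/x^*$ makes the argument of $I$ vanish, so $H(x)\le\rho(1-x/x^*)$, matching the killing strategy described above; in this example the numerical infimum is attained at a slightly smaller $t$ and gives a slightly smaller value.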
\begin{figure}[ht] \centering \begin{tikzpicture}[ baseline=0pt, declare function={I_brw_1(\x)=1.7*(\x)*(\x)+2.5;}, declare function={I_brw_2(\x)=2.5-0.625*(\x)*(\x);}, declare function={I_brw_schwarz_1(\x)=0.3*(\x-2)*(\x-2);}, declare function={I_brw_3(\x)=2*(\x-2)*(\x-2);} ] \begin{axis}[ domain=-1:3, axis y line =box, axis x line =box, xtick={0, 2}, xticklabels={$0$, $x^*$}, ytick={0}, yticklabels={0}, xmin=-1, xmax=3, ymin=0, ymax=6, x=1.2cm, y=0.8cm, smooth, title = {$I(x^*)= \log m$}, ] \addplot [domain=-1:0,dashed] {I_brw_1(x)} ; \label{figure_I_ind} \addplot [domain=0:2,dashed] {I_brw_2(x)} ; \addplot [domain=-1:2] {I_brw_schwarz_1(x)} ; \addplot [domain=2:3] {I_brw_3(x)} ; \label{figure_I_BRW} \end{axis} \end{tikzpicture}% \hskip 20pt \begin{tikzpicture}[ baseline=0pt, declare function={I_brw_3(\x)=1.3*(\x)*(\x)+2.6;}, declare function={I_brw_4(\x)=2.6-0.4*(\x)*(\x);}, declare function={I_brw_schwarz_3(\x)=0.3*(\x-2)*(\x-2);}, declare function={I_brw_5(\x)=0;}, declare function={I_brw_6(\x)=5;}, ] \begin{axis}[ domain=-1:3, axis y line =box, axis x line =box, xtick={0, 2}, xticklabels={$0$, $x^*$}, ytick={0,1,5}, yticklabels={0,$I^\textnormal{GW}(I(x^*))$, $\infty$}, xmin=-1, xmax=3, ymin=0, ymax=6, smooth, x=1.2cm, y=0.8cm, enlarge y limits=false,clip=false, title = {$I(x^*)< \log m$}, ] \addplot [domain=-1:0,dashed] {I_brw_3(x)} ; \addplot [domain=-0:2,dashed] {I_brw_4(x)} ; \addplot [domain=-1:2] {I_brw_schwarz_3(x)} ; \addplot [domain=2:3] {I_brw_6(x)} ; \draw[fill] (axis cs: 2,0) circle [radius=0.06cm]; \draw[fill=white] (axis cs: 2,1) circle [radius=0.06cm]; \draw[fill=white] (axis cs: 2,5) circle [radius=0.06cm]; \end{axis} \end{tikzpicture} \caption{The figure shows the qualitative behaviour of the rate function of the branching random walk (\ref{figure_I_BRW}) and the rate function of independent random walks (\ref{figure_I_ind}). 
} \label{fig_rate_functions} \end{figure} To compare the rate functions, note that the maximum of independent random walks stochastically dominates the maximum of the branching random walk, see Lemma~\ref{lemma_BRW_vs_ind_RW2}. Therefore, $I^{\text{ind}}(x) \leq I^{\text{BRW}}(x)$ for $x>x^*$, respectively $I^{\text{ind}}(x) \geq I^{\text{BRW}}(x)$ for $x<x^*$. For $x<x^*$, the inequality is in general strict. For $x>x^*$, the rate functions coincide, see the argument above. Let us now comment on the shape of the rate functions. If $I(x)=\infty$ for some $x>x^*$, then also $I^{\text{ind}}(x)=I^{\text{BRW}}(x)=\infty$. More precisely, $I(x)=\infty$ already implies $\P(X_1 \geq x- \varepsilon)=0$ for some $\varepsilon>0$ and therefore $M_n \leq (x-\varepsilon)n$, respectively $\tilde{M}_n \leq (x-\varepsilon)n$, almost surely. If $I(x^*)= \log m$, then the rate functions $I^{\text{ind}}(x)$ and $I^{\text{BRW}}(x)$ are continuous at $x=x^*$. However, if $I(x^*)< \log m$, the rate functions $I^{\textnormal{ind}}(x)$ and $I^{\textnormal{BRW}}(x)$ are infinite for $x>x^*$, since $I(x)=\infty$. Therefore, they are not continuous from the right at $x=x^*$. The rate function $I^{\textnormal{BRW}}(x)$ is continuous from the left at $x=x^*$, since $I^{\textnormal{BRW}}(x) \leq \rho \bigl(1-\frac{x}{x^*} \bigr)$ for $x<x^*$. However, $I^{\textnormal{ind}}(x)$ is also not continuous from the left at $x=x^*$. In particular, $ \lim_{x\nearrow x^*} I^{\textnormal{ind}}(x) \in (0, \infty)$ if $\rho< \infty$. An intuitive explanation of this discontinuity is the following. If there are at least $\exp\bigl(I(x^*) n \bigr)$ particles at time $n$, then $\tilde{M}_n = x^*n + o(n)$ with high probability. For a smaller linear term there have to be fewer particles, hence for all $x<x^*$ the probability $\P^*(\tilde{M}_n \leq xn)$ is bounded from below by the probability of having at most $\exp\bigl(I(x^*) n \bigr)$ particles at time $n$, which decays exponentially.
Note that for the branching random walk, in contrast, it suffices to have a small number of particles at the beginning. \section{Preliminaries} \label{preliminaries} Before we prove the main results, we collect some preliminaries which are needed throughout the proofs. As already mentioned in the introduction, conditioned on survival, the linear speed of the branching random walk equals $x^*$, i.e. \begin{equation}\label{linspeed} \lim_{n \to \infty} \frac{M_n}{n} = x^* \quad \P^*\text{-a.s.} \end{equation} We often use the following simple inequalities. \begin{enumerate} \item[(U1)] We have $1-x \geq e^{-ex}$ for $x \in [0,e^{-1}]$. \item[(U2)] We have $1-(1-x)^y \geq xy(1-xy)$ for $x \in [0,1]$ and $y \geq 0$. \end{enumerate} \begin{proof} Both inequalities follow after some elementary calculations. \begin{enumerate} \item[(U1)] The function $f(x)=1-x-e^{-ex}$ is increasing on $[0,e^{-1}]$. The claim follows, as $f(0)=0$. \item[(U2)] The function $g(x)=1-e^{-x}-x(1-x)$ is increasing for $x \geq 0$. As $g(0)=0$, we have $1-e^{-x} \geq x(1-x)$. Using additionally the well known inequality $1-x \leq e^{-x}$, we get \begin{equation*} 1-(1-x)^y \geq 1- e^{-xy} \geq xy(1-xy). \qedhere \end{equation*} \end{enumerate} \end{proof} Furthermore, we need the following simple inequality about the sum of random variables. Note that the random variables in this lemma do not have to be independent. \begin{lemma}\label{lemma_sum_rv} For $i \in \N$ let $X_i$ be a Bernoulli($p_i$) random variable with $\inf_{i\in \N}p_i =:p$. Then for every $a >0$ and $n \in \N$ \begin{equation*} \P \left(\frac{1}{n} \sum_{i=1}^n X_i \geq a \right) \geq p-a. \end{equation*} \end{lemma} A proof can be found in \cite{DGHPW18}. For $i \in \N$ let $(a_n^i)_{n \in \N}$ be a sequence of positive numbers and $a^i= \limsup_{n \to \infty} \frac{1}{n} \log a_n^i$. 
Then, for all $k \in \N$ it holds that \begin{equation}\label{sevterms} \limsup_{n \to \infty} \frac{1}{n} \log \sum_{i=1}^k a_n^i = \max_{i \in \{1, \ldots , k\}} a^i. \end{equation} A proof can e.g.~be found in \cite[Lemma 1.2.15]{DZ10}. We often need to estimate the number of particles at time $n$, which has expectation $m^n$. Let $W_n = \frac{Z_n}{m^n}$ and let $(\mathcal{F}_n)_{n \in \N}$ be the natural filtration of the Galton-Watson process, i.e.~$\mathcal{F}_n= \sigma(Z_1, \ldots, Z_n)$. The process $(W_n)_{n \in \N}$ is a martingale with respect to the filtration $(\mathcal{F}_n)_{n \in \N}$. Therefore, $W_n \to W$ almost surely, where $W$ is an almost surely finite random variable. The following well-known theorem shows that under our assumptions, the limit $W$ is non-trivial, i.e.~$\P(W=0)<1$. \begin{theorem}[Kesten-Stigum] \label{Lemma_GWP_martingale} If Assumption \ref{assumption_supercritical} and Assumption \ref{assumption_ZlogZ} are satisfied, we have \begin{equation*} \E[W]=1 \quad \text{and } \quad \P(W=0)=q < 1. \end{equation*} \end{theorem} A proof can e.g.~be found in \cite[Chapter 1, Section 10, Theorem 1]{AN04}. As already mentioned above, a supercritical Galton-Watson process survives with probability $1-q$. In the proof of Theorem~\ref{theorem_ind_RW} and Theorem~\ref{theorem_BRW} we also need the asymptotics of the survival probability of a critical Galton-Watson process. \begin{theorem} \label{theorem_survival_critical} Let $m=1$ and $p(1)<1$. Then, $\lim_{n \to \infty} n\P(Z_n>0)=\frac{2}{\operatorname{Var}(Z_1)}$. \end{theorem} A proof can be found in \cite[Chapter 1, Section 9, Theorem 1]{AN04}. Cramér's theorem gives the exponential decay rate of the probability $\P(S_n \geq xn)$ for $x>0$. The following theorem gives the precise asymptotics for this probability. \begin{theorem} \label{theorem_RW_precise_asyptotics} Let $x>0$ and $I(x)< \infty$. 
There exists an explicit constant $c>0$ such that \begin{equation*} \lim_{n \to \infty} \sqrt{n}e^{I(x)n} \P \Bigl( \frac{S_n}{n} \geq x \Bigr) = c. \end{equation*} \end{theorem} A proof can be found in \cite[Theorem 3.7.4]{DZ10}. Furthermore, we need some properties of the rate function $I$. \begin{lemma} \label{lemma_prop_I} Assume that there exists $x \in \R$ such that $I(x) < \infty$ and $I(x + \varepsilon) = \infty$ for all $\varepsilon>0$. Then $\P(X_1 >x)=0$ and $\P(X_1=x)=e^{-I(x)}$. \end{lemma} \begin{proof} Let $x \in \R$ be such that $I(x) < \infty$ and $I(x + \varepsilon) = \infty$ for all $\varepsilon>0$. Assume that $\P(X_1 >x)>0$. Then, there exists $\varepsilon>0$ such that $\P(X_1 \geq x+ \varepsilon)>0$. However, \begin{align} \label{proof_prop_I_1} I(x+ \varepsilon)&= \sup_{\lambda \in \R} \bigl(\lambda (x+\varepsilon) - \log \E \bigl[e^{\lambda X_1} \bigr]\bigr) \nonumber\\ &\leq \sup_{\lambda \in \R} \bigl(\lambda (x+\varepsilon)- \log \bigl( e^{\lambda (x+\varepsilon)} \P(X_1 \geq x + \varepsilon) \bigr)\bigr) \nonumber\\ & = - \log \P(X_1 \geq x + \varepsilon) < \infty, \end{align} which leads to a contradiction. It remains to show that $\P(X_1=x)=e^{-I(x)}$. Analogously to \eqref{proof_prop_I_1}, we get \begin{equation*} I(x) \leq \sup_{\lambda \in \R} \bigl(\lambda x- \log \bigl( e^{\lambda x} \P(X_1 =x) \bigr)\bigr) = - \log \P(X_1=x). \end{equation*} Moreover, since $\P(X_1 > x)=0$ we have for all $\varepsilon>0$ \begin{equation*} I(x) \geq \sup_{\lambda \in \R} \Bigl(\lambda x- \log \bigl( e^{\lambda x} \P\bigl(X_1 \in (x-\varepsilon, x]\bigr) + e^{\lambda (x - \varepsilon)} \bigr)\Bigr). \end{equation*} Letting $\lambda \to \infty$ and $\varepsilon \to 0$ shows that \begin{equation*} I(x) \geq - \log \P(X_1=x), \end{equation*} which finishes the proof. \end{proof} \section{Proofs} In this section we prove the main results of the paper. \subsection{Independent random walks} \label{chapter_brw_proofs_indRW} \begin{proof}[Proof of Theorem \ref{theorem_ind_RW}] \textbf{1.
Case}: $x > x^*$ Following the strategy explained in Section~\ref{results}, independence of the random walks and (U2) yield \begin{align} \label{proof_thm_ind_RW_1} \P^* \Bigl( \frac{\tilde{M}_n}{n} \geq x \Bigr) & = \E^* \biggl[ 1- \Bigl(1- \P \Bigl( \frac{S_n}{n} \geq x \Bigr) \Bigr)^{Z_n} \biggr] \nonumber\\ & \geq \P^* \Bigl(Z_n \geq \frac{1}{2} m^n \Bigr) \cdot \E \biggl[ 1- \Bigl(1- \P \Bigl( \frac{S_n}{n} \geq x \Bigr) \Bigr)^{\frac{1}{2} m^n} \biggr] \nonumber \\ & \geq \P^* \Bigl(W_n \geq \frac{1}{2} \Bigr) \P \Bigl( \frac{S_n}{n} \geq x \Bigr) \frac{1}{2} m^n \Bigl( 1- \P \Bigl( \frac{S_n}{n} \geq x \Bigr) \frac{1}{2} m^n \Bigr). \end{align} By Cramér's theorem, $\P \bigl( \frac{S_n}{n} \geq x \bigr) \frac{1}{2} m^n \to 0$ as $n \to \infty$, since $\log m < I(x)$. For the first factor on the right hand side of \eqref{proof_thm_ind_RW_1} we have $\liminf_{n \to \infty} \P^* (W_n \geq \frac{1}{2} ) \geq \P^*(W > \frac{1}{2}) >0$, since $\E^*[W]\geq \E[W]=1$ by Theorem~\ref{Lemma_GWP_martingale}. Together with \eqref{sevterms} we conclude \begin{equation*} \P^* \Bigl( \frac{\tilde{M}_n}{n} \geq x \Bigr) \geq \exp \bigl( - (I(x) - \log m )n + o(n) \bigr), \end{equation*} which yields the lower bound. For the upper bound, the Markov inequality yields \begin{align}\label{proof_thm_ind_RW_2} \P^* \Bigl( \frac{\tilde{M}_n}{n} \geq x \Bigr) = \P^* \Bigl( \sum_{i=1}^{Z_n} \mathds{1}_{\{S_n^i \geq nx\}} \geq 1 \Bigr) \leq \P \Bigl( \frac{S_n}{n} \geq x \Bigr)\E^*[Z_n] \leq \P \Bigl( \frac{S_n}{n} \geq x \Bigr) \frac{m^n}{1-q}, \end{align} which immediately implies the claim. \textbf{2. Case}: $0 < x < x^*$ Since the rate function $I$ is strictly increasing on the interval $[0,x^*]$, we can choose $\varepsilon>0$ such that $\varepsilon < I(x) < \log m - \varepsilon$. We prove the upper bound first.
Using the inequality $1-y \leq e^{-y}$ and Theorem~\ref{lemma_LDP_Zn}, we have for $n$ large enough \begin{align*} \P^* \Bigl(\frac{\tilde{M}_n}{n} \leq x \Bigr) & = \E^* \biggl[\Bigl(1-\P \Bigl(\frac{S_n}{n} > x \Bigr)\Bigr)^{Z_n} \biggr] \leq \E^* \biggl[\exp \Bigl(-\P \Bigl(\frac{S_n}{n} > x \Bigr) Z_n \Bigr) \biggr]\\ & \leq \P^* \Bigl(Z_n \leq e^{(I(x)+\varepsilon)n} \Bigr) + \exp \bigl(-e^{\varepsilon n+o(n)} \bigr)\\ &= \exp \Bigl(- I^{\text{GW}}\bigl(I(x) +\varepsilon \bigr) n + o(n) \Bigr). \end{align*} Letting $\varepsilon \to 0$ yields the upper bound. Note that $I^{\text{GW}}$ defined in \eqref{def_I_GW} is continuous. The proof for the lower bound is similar. More precisely, since $\P (\frac{S_n}{n} > x) < e^{-1}$ for $n$ large enough, (U1) yields for $n$ large enough \begin{align*} \P^* \Bigl(\frac{\tilde{M}_n}{n} \leq x \Bigr) & = \E^* \biggl[\Bigl(1-\P \Bigl(\frac{S_n}{n} > x \Bigr)\Bigr)^{Z_n} \biggr] \geq \E^* \biggl[\exp \Bigl(-e \cdot \P \Bigl(\frac{S_n}{n} > x \Bigr) Z_n \Bigr) \biggr]\\ & \geq \P^* \Bigl(Z_n \leq e^{(I(x)-\varepsilon)n} \Bigr) \cdot \exp \bigl(-e^{-\varepsilon n+o(n)} \bigr)\\ &= \exp \Bigl(- I^{\text{GW}}\bigl(I(x) - \varepsilon \bigr) n + o(n) \Bigr). \end{align*} Letting $\varepsilon \to 0$ yields the lower bound. \textbf{3. Case}: $x \leq 0$ We first consider $x <0$. For the upper bound we have for $K \in \N$ \begin{equation} \label{proof_thm_ind_RW_3} \P^* \Bigl(\frac{\tilde{M}_n}{n} \leq x \Bigr) = \E^* \biggl[\P \Bigl(\frac{S_n}{n} \leq x \Bigr)^{Z_n} \biggr] \leq \sum_{k=1}^K \P \Bigl(\frac{S_n}{n} \leq x \Bigr)^k \P^*(Z_n=k) + \P \Bigl(\frac{S_n}{n} \leq x \Bigr)^K. \end{equation} By Theorem~\ref{lemma_LDP_rho}, the probability $\P^*(Z_n=k)$ is of order $\exp(-\rho n + o(n))$ for all $k \in A$ (defined in \eqref{def_A}) and $\P^*(Z_n=k)=0$ otherwise.
For all $K \in \N$, \eqref{sevterms} yields \begin{equation*} \limsup_{n \to \infty} \frac{1}{n} \log \P^* \Bigl(\frac{\tilde{M}_n}{n} \leq x \Bigr) \leq \max \bigl\{-(k^*I(x)+\rho), -KI(x) \bigr\}. \end{equation*} Hence, letting $K \to \infty$ proves the upper bound. Note that $I(x)>0$ for $x<0$. As in \eqref{proof_thm_ind_RW_3} we have \begin{align*} \P^* \Bigl(\frac{\tilde{M}_n}{n} \leq x \Bigr) & = \E^* \biggl[\P \Bigl(\frac{S_n}{n} \leq x \Bigr)^{Z_n} \biggr] \geq \P \Bigl(\frac{S_n}{n} \leq x \Bigr)^{k^*} \cdot \P^*(Z_n=k^*)\\ &= \exp \bigl( -(k^*I(x)+\rho)n +o(n) \bigr), \end{align*} which shows the lower bound. For $x=0$ the result follows from continuity of the rate function $I^\textnormal{ind}$ at 0. \textbf{4. Case}: $x=x^*$\\ In the same way as in \eqref{proof_thm_ind_RW_2}, \begin{equation} \label{proof_thm_ind_RW_4} \P \Bigl( \frac{\tilde{M}_n}{n} \leq x^* \Bigr) = 1- \P \Bigl( \frac{\tilde{M}_n}{n} > x^* \Bigr) \geq 1- \P \Bigl( \frac{S_n}{n} > x^* \Bigr) \frac{m^n}{1-q}. \end{equation} Now we have to distinguish two cases. If $I(x^*) = \log m$, then the right hand side of \eqref{proof_thm_ind_RW_4} converges to 1 as $n \to \infty$ by Theorem~\ref{theorem_RW_precise_asyptotics}. If $I(x^*) < \log m$, then $I(x)=\infty$ for all $x>x^*$ and therefore $\P(X_1>x^*)=0$ by Lemma~\ref{lemma_prop_I}. Hence, the right hand side of \eqref{proof_thm_ind_RW_4} equals 1. In both cases we get \begin{equation*} \lim_{n \to \infty} \frac{1}{n} \log \P \Bigl( \frac{\tilde{M}_n}{n} \leq x^* \Bigr)=0. \end{equation*} Since $\P (\tilde{M}_n \geq x^* n ) \geq \P (M_n \geq x^* n )$, it remains to show that $\P(M_n \geq x^* n )$ decays slower than exponentially in $n$. If $I(x)< \infty$ for some $x>x^*$, then the rate function $I^{\text{BRW}}(x)$ is continuous from the right at $x=x^*$. Since $I^{\text{BRW}}(x) \to 0$ as $x \searrow x^*$ in this case, the claim follows. Therefore assume that $I(x)= \infty$ for all $x>x^*$.
By Lemma~\ref{lemma_prop_I} we have $\P(X_1=x^*)=e^{-I(x^*)}$ in this case. Consider the following embedded process. Every particle with step size smaller than $x^*$ at any time is killed. Therefore, the reproduction mean in every step is $\P(X_1=x^*) m \geq 1$. Let $q_n$ be the survival probability of this process up to time $n$. By Theorem~\ref{theorem_survival_critical}, $q_n$ decays slower than exponentially in $n$. Since $\P (M_n \geq x^* n ) \geq q_n$, the claim follows. \end{proof} \subsection{Branching random walk} \label{chapter_brw_proofs_BRW} Before proving Theorem~\ref{theorem_BRW}, we first show that the maximum of independent random walks stochastically dominates the maximum of the branching random walk, starting with an auxiliary lemma. \begin{lemma} \label{lemma_BRW_vs_ind_RW1} Let $(X_i)_{i \in \N}$ and $(Y_i)_{i \in \N}$ be independent sequences of (not necessarily independent) random variables. Furthermore, assume that the random variables $Y_i, i \in \N$, have the same distribution. Then we have for all $k \in \N$ and $x \in \R$ \begin{equation*} \P \Bigl( \max_{i \in \{1, \ldots, k\}} \{X_i+Y_i\} \leq x \Bigr) \leq \P \Bigl( \max_{i \in \{1, \ldots, k\}} \{X_i+Y_1\} \leq x \Bigr). \end{equation*} In other words, $\max_{i \in \{1, \ldots, k\}} \{X_i+Y_1\} \preceq \max_{i \in \{1, \ldots, k\}} \{X_i+Y_i\}$ where we write ``$\preceq$'' for the usual stochastic domination. \end{lemma} \begin{proof} Let $i^*$ be the smallest (random) index such that $X_{i^*}=\max_{i \in \{1, \ldots, k\}} X_i$. We have \begin{equation*} \P \Bigl( \max_{i \in \{1, \ldots, k\}} \{X_i+Y_i\} \leq x \Bigr) \leq \P \bigl( X_{i^*}+Y_{i^*} \leq x \bigr) = \P \bigl( X_{i^*}+Y_1 \leq x \bigr). \qedhere \end{equation*} \end{proof} As a consequence we can show that the maximum of independent random walks stochastically dominates the maximum of the branching random walk. \begin{lemma}\label{lemma_BRW_vs_ind_RW2} We have $ M_n \preceq \tilde{M}_n$ for all $n \in \N$.
\end{lemma} \begin{proof} We prove this lemma by induction on $n$. For $n=1$ the inequality is obviously true. Assume that the inequality holds for some $n \in \N$. Let $(\tilde{M}_n^1)_{n \in \N}, (\tilde{M}_n^2)_{n \in \N}, \ldots$ and $(M_n^1)_{n \in \N}, (M_n^2)_{n \in \N}, \ldots$ be independent copies of $(\tilde{M}_n)_{n \in \N}$ and $(M_n)_{n \in \N}$, respectively. Furthermore, let $(X_1^{i,j})_{i,j \in \N}$ be a collection of i.i.d.~random variables such that $X_1^{1,1}$ has the same distribution as $X_1$. For $i \in \{1, \ldots, Z_1\}$, denote by $Z_n^i$ the number of descendants of the $i$-th particle of the first generation at time $n+1$. Note that $Z_n^i$ equals $Z_n$ in distribution. Using first the induction hypothesis and then Lemma~\ref{lemma_BRW_vs_ind_RW1}, \begin{align*} M_{n+1} & \stackrel{d}{=} \max_{i \in \{1, \ldots, Z_1\}} \{X^i_1+M_n^i\}\\ & \preceq \max_{i \in \{1, \ldots, Z_1\}} \{X^i_1+\tilde{M}_n^i\} \\ & \preceq \max_{i \in \{1, \ldots, Z_1\}} \max_{j \in \{1, \ldots, Z_n^i\}} \{ X_1^{i,j} + \tilde{M}_n^i\} \\ & \stackrel{d}{=} \tilde{M}_{n+1} . \qedhere \end{align*} \end{proof} The statement of Lemma~\ref{lemma_BRW_vs_ind_RW2} is also true with respect to $\P^*$. \begin{proof}[Proof of Theorem \ref{theorem_BRW}] \textbf{1. Case}: $x > x^*$\\ Recall that $I^{\text{BRW}}(x) = I^{\text{ind}}(x)$ for $x \geq x^*$. Therefore, the upper bound immediately follows from Theorem~\ref{theorem_ind_RW} and Lemma~\ref{lemma_BRW_vs_ind_RW2}. It remains to prove the lower bound. Let $\varepsilon>0$ be such that $(1-\varepsilon)I(x)> \log m$. Recall that for $v \in D_{\varepsilon n}$ the rightmost descendant of $v$ at time $n$ is denoted by $M^v_{(1-\varepsilon)n}$.
By Lemma~\ref{lemma_BRW_vs_ind_RW1}, \begin{align} \P^* \Bigl(\frac{M_n}{n} \geq x \Bigr) & = \P^* \Bigl( \max \limits_{v \in D_{\varepsilon n}} \frac{M_{(1-\varepsilon)n}^v - S_v}{(1-\varepsilon)n} + \frac{S_v}{(1-\varepsilon)n} \geq \frac{x}{1-\varepsilon} \Bigr) \notag \\ & \geq \P^* \Bigl( \max \limits_{v \in D_{\varepsilon n}} \frac{M_{(1-\varepsilon)n}^v - S_v}{(1-\varepsilon)n} + \frac{S_{\varepsilon n}}{(1-\varepsilon)n} \geq \frac{x}{1-\varepsilon} \Bigr) \notag \\ & \geq \P^* \Bigl(\max \limits_{v \in D_{\varepsilon n}} \frac{M_{(1-\varepsilon)n}^v - S_v}{(1-\varepsilon)n} \geq x \Bigr) \cdot \P \Bigl( \frac{S_{\varepsilon n}}{\varepsilon n} \geq x \Bigr). \label{proof_theorem_BRW_case_x>x*_1} \end{align} It remains to estimate the first probability on the right hand side of \eqref{proof_theorem_BRW_case_x>x*_1}. To this end, let $A_k$ be the set of particles in generation $k$ whose subtrees are infinite, i.e.~$A_k=\{v \in D_k \colon |D_l^v|>0 \ \forall l \in \N \}$. Note that $(M_{(1-\varepsilon)n}^v - S_v)_{v \in D_{\varepsilon n}}$ are independent under $\P^*$ conditioned on $A_{\varepsilon n}$. We can now use estimates similar to those in the proof of Theorem~\ref{theorem_ind_RW}.
More precisely, by independence and (U2) we get \begin{align} \label{proof_theorem_BRW_case_x>x*_2} & \quad \ \P^* \Bigl(\max \limits_{v \in D_{\varepsilon n}} \frac{M_{(1-\varepsilon)n}^v - S_v}{(1-\varepsilon)n} \geq x \Bigr) \notag \\ & = \E^* \Bigl[ \P^* \Bigl(\max \limits_{v \in D_{\varepsilon n}} \frac{M_{(1-\varepsilon)n}^v - S_v}{(1-\varepsilon)n} \geq x \bigm\vert A_{\varepsilon n} \Bigr) \Bigr] \nonumber\\ & = \E^* \Bigl[ 1- \Bigl(1- \P^* \Bigl(\frac{M_{(1-\varepsilon)n}}{(1-\varepsilon)n} \geq x \Bigr) \Bigr)^{|A_{\varepsilon n}|} \Bigr] \nonumber\\ & \geq \P^* \Bigl(Z_{\varepsilon n} \geq \frac{1}{2} m^{\varepsilon n } \Bigr) \cdot \P^*\Bigl(|A_{\varepsilon n}| \geq \frac{(1-q)}{2} Z_{\varepsilon n} \bigm\vert Z_{\varepsilon n} \geq \frac{1}{2} m^{\varepsilon n } \Bigr) \nonumber\\ & \quad \cdot \biggl( 1- \Bigl(1- \P^* \Bigl(\frac{M_{(1-\varepsilon)n}}{(1-\varepsilon)n} \geq x \Bigr) \Bigr)^{\frac{1-q}{4} m^{\varepsilon n}} \biggr) \notag\\ & \geq \P^* \Bigl(W_{\varepsilon n} \geq \frac{1}{2} \Bigr) \cdot \P^*\Bigl(\frac{1}{Z_{\varepsilon n}} \sum_{v \in D_{\varepsilon n}} \mathds{1}_{\{ |D_l^v|>0 \ \forall l \in \N\}} \geq \frac{(1-q)}{2} \bigm\vert Z_{\varepsilon n} \geq \frac{1}{2} m^{\varepsilon n } \Bigr) \nonumber\\ & \quad \cdot \P^* \Bigl(\frac{M_{(1-\varepsilon)n}}{(1-\varepsilon)n} \geq x \Bigr) \frac{1-q}{4} m^{\varepsilon n} \cdot \Bigl(1-\P^* \Bigl(\frac{M_{(1-\varepsilon)n}}{(1-\varepsilon)n} \geq x \Bigr) \frac{1-q}{4} m^{\varepsilon n} \Bigr). \end{align} The first probability on the right hand side of \eqref{proof_theorem_BRW_case_x>x*_2} can be estimated as in \eqref{proof_thm_ind_RW_1}. It holds that $\liminf_{n \to \infty} \P^* (W_{\varepsilon n} \geq \frac{1}{2} ) \geq \P^* (W > \frac{1}{2} )>0$. The second probability is at least $\frac{1-q}{2}$ by Lemma~\ref{lemma_sum_rv}. 
Furthermore, as for \eqref{proof_thm_ind_RW_2}, the Markov inequality and the choice of $\varepsilon$ yield \begin{align} \label{proof_theorem_BRW_case_x>x*_3} 1-\P^* \Bigl(\frac{M_{(1-\varepsilon)n}}{(1-\varepsilon)n} \geq x \Bigr) \frac{1-q}{4} m^{\varepsilon n} & \geq 1 - \P \Bigl(\frac{S_{(1-\varepsilon)n}}{(1-\varepsilon)n} \geq x \Bigr) \frac{m^n}{4} \nonumber \\ & = 1- \exp \Bigl(-n \Bigl((1- \varepsilon)I(x) - \log m \Bigr) +o(n) \Bigr) \to 1. \end{align} Combining \eqref{proof_theorem_BRW_case_x>x*_1}, \eqref{proof_theorem_BRW_case_x>x*_2} and \eqref{proof_theorem_BRW_case_x>x*_3} shows \begin{equation*} \liminf_{n \to \infty} \frac{1}{n} \log \P^* \Bigl(\frac{M_n}{n} \geq x \Bigr) \geq -\varepsilon (I(x) -\log m) + (1- \varepsilon) \liminf_{n \to \infty} \frac{1}{(1- \varepsilon) n} \log \P^* \Bigl(\frac{M_{(1- \varepsilon) n}}{(1-\varepsilon) n} \geq x \Bigr). \end{equation*} This implies the lower bound. \textbf{2. Case}: $x < x^*$\\ Following the strategy explained in Section~3, the idea is that there are only $k^*$ particles at time $tn$ and that the position of one particle is much smaller than its expectation. Afterwards, all particles move and branch as usual. For the lower bound let $t \in (0, \min \{1- \frac{x}{x^*},1 \}]$ and fix $\varepsilon >0$. Note that $t \in (0, 1- \frac{x}{x^*}]$ if $x>0$ and $t \in (0, 1]$ if $x \leq 0$. We have \begin{align} \label{proof_theorem_BRW_case_x<x*_1} \P^* \Bigl(\frac{M_n}{n} \leq x \Bigr) & \geq \P^* \Bigl(\frac{M_n}{n} \leq x \bigm\vert Z_{tn}=k^* \Bigr) \cdot \P^* (Z_{tn}=k^* ) \nonumber\\ & \geq q^{k^*-1} \P^* \Bigl(\frac{S_{tn}+M_{(1-t)n}}{n} \leq x \Bigr) \cdot \P^* (Z_{tn}=k^* ) \nonumber \\ & \geq q^{k^*-1} \P^* \Bigl(\frac{M_{(1-t)n}}{(1-t)n} \leq x^* + \varepsilon \Bigr) \cdot \P\Bigl(\frac{S_{tn}}{n} \leq \left(x-(1-t)(x^*+ \varepsilon) \right) \Bigr) \nonumber\\ & \quad \cdot \P^* (Z_{tn}=k^* ).
\end{align} Since the first probability on the right hand side of \eqref{proof_theorem_BRW_case_x<x*_1} converges to 1 as $n \to \infty$ (recall that $\frac{M_n}{n} \to x^*$ holds $\P^*$-almost surely by \eqref{linspeed}), we get \begin{align*} \P^* \Bigl(\frac{M_n}{n} \leq x \Bigr) \geq \exp \Bigl( -\Bigl[I \Bigl(t^{-1}(x-(1-t)(x^*+ \varepsilon)) \Bigr)+ \rho \Bigr]tn + o(n) \Bigr). \end{align*} Letting $\varepsilon \to 0$ and since this inequality holds for all $t \in (0, \min \{1- \frac{x}{x^*},1 \}]$, we conclude \begin{align*} \liminf_{n \to \infty} \frac{1}{n} \log \P^* \Bigl(\frac{M_n}{n} \leq x \Bigr) \geq \sup_{t \in (0, \min \{1- \frac{x}{x^*},1 \}]} -\Bigl\{t \rho + tI\Bigl(\frac{x-(1-t)x^*}{t} \Bigr) \Bigr\} = -H(x). \end{align*} For the upper bound define \begin{equation*} T_n = \inf \Bigl\{t \geq 0 \colon Z_{tn} \geq n^3 \Bigr\} \end{equation*} and for $\varepsilon_1>0$ introduce the set \begin{equation*} F=F(\varepsilon_1)= \Bigl\{\varepsilon_1, 2 \varepsilon_1, \ldots, \Bigl\lceil \min \Bigl\{\Bigl(1- \frac{x}{x^*} \Bigr), 1 \Bigr\} \varepsilon_1^{-1} \Bigr\rceil \varepsilon_1 \Bigr\}. \end{equation*} By the definition of $T_n$ we then have \begin{align} \label{proof_theorem_BRW_case_x<x*_2} & \quad \ \P^* \Bigl(\frac{M_n}{n} \leq x \Bigr) \nonumber\\ & \leq \P^* \Bigl(T_n > \min \Bigl\{\Bigl(1- \frac{x}{x^*} \Bigr), 1 \Bigr\} \Bigr) + \sum_{t \in F} \P^* \Bigl(\frac{M_n}{n} \leq x \bigm\vert T_n \in \bigl(t - \varepsilon_1, t \bigr] \Bigr) \P^* \bigl(T_n \in \bigl(t - \varepsilon_1, t \bigr] \bigr) \nonumber \\ & \leq \P^* \bigl( Z_{(\min \{(1- \frac{x}{x^*} ), 1 \})n} \leq n^3 \bigr)+ \sum_{t \in F} \P^* \Bigl(\frac{M_n}{n} \leq x \bigm\vert T_n \in \bigl(t - \varepsilon_1, t \bigr] \Bigr) \P^* ( Z_{(t- \varepsilon_1) n} \leq n^3). \end{align} Let $\varepsilon_2>0$. Recall that $A_{tn}$ is the set of particles in generation $tn$ whose subtrees are infinite.
Using Lemma~\ref{lemma_BRW_vs_ind_RW1} and the same estimate as in \eqref{proof_theorem_BRW_case_x>x*_2}, \begin{align} \label{proof_theorem_BRW_case_x<x*_3} & \quad \ \P^* \Bigl(\frac{M_n}{n} \leq x \bigm\vert T_n \in \bigl(t - \varepsilon_1, t \bigr] \Bigr) \nonumber \\ & \leq \P^* \biggl(\max_{v \in D_{tn}} \frac{S_{tn}+ M^v_{(1-t)n}}{n} \leq x \Bigm\vert T_n \in \bigl(t - \varepsilon_1, t \bigr] \biggr) \nonumber \\ & \leq \P \Bigl(\frac{S_{tn}}{n} \leq -\bigl((1-t)(x^*- \varepsilon_2)-x \bigr) \Bigr)+ \P^* \Bigl(\frac{M_{(1-t)n}}{(1-t)n} \leq x^* - \varepsilon_2 \Bigr)^n \notag \\ & \quad \ + \P^* \bigl( Z_{tn} \leq n^2 \bigm\vert T_n \in (t - \varepsilon_1, t] \bigr) + \P^* \bigl( |A_{tn}| \leq n \bigm\vert Z_{tn}>n^2, T_n \in (t - \varepsilon_1, t] \bigr). \end{align} The probability $\P^* \bigl(\frac{M_{(1-t)n}}{(1-t)n} \leq x^* - \varepsilon_2 \bigr)$ converges to 0 as $n \to \infty$, since $\frac{M_n}{n} \to x^*$ holds $\P^*$-almost surely by \eqref{linspeed}. Hence, the second term on the right hand side of \eqref{proof_theorem_BRW_case_x<x*_3} decays faster than exponentially in $n$. For the third term on the right hand side of \eqref{proof_theorem_BRW_case_x<x*_3}, \begin{align} \label{proof_theorem_BRW_case_x<x*_4} \P^* \bigl( Z_{tn} \leq n^2 \bigm\vert T_n \in (t - \varepsilon_1, t] \bigr) & \leq \P^* \bigl(\exists k \in \N \colon Z_k \leq n^2 \bigm\vert Z_0=n^3 \bigr) \notag\\ & \leq \binom{n^3}{n^2} q^{n^3-n^2} \leq \exp \bigl( (n^3-n^2) \log q + 3n^2 \log n \bigr). \end{align} In the second inequality we used the fact that for the event we consider, at most $n^2$ of the initial $n^3$ Galton-Watson trees may survive. Note that every initial particle produces an independent Galton-Watson tree.
Similarly to \eqref{proof_theorem_BRW_case_x<x*_4}, we get for the fourth term on the right hand side of \eqref{proof_theorem_BRW_case_x<x*_3} \begin{equation} \label{proof_theorem_BRW_case_x<x*_5} \P^* \bigl( |A_{tn}| \leq n \bigm\vert Z_{tn}>n^2, T_n \in (t - \varepsilon_1, t] \bigr) \leq \binom{n^2}{n} q^{n^2-n} \leq \exp \bigl( (n^2-n) \log q + 2n \log n \bigr). \end{equation} Combining \eqref{proof_theorem_BRW_case_x<x*_2}, \eqref{proof_theorem_BRW_case_x<x*_3}, \eqref{proof_theorem_BRW_case_x<x*_4} and \eqref{proof_theorem_BRW_case_x<x*_5} and letting $\varepsilon_1, \varepsilon_2 \to 0$, we conclude with \eqref{sevterms} after a straightforward calculation \begin{equation*} \limsup_{n \to \infty} \frac{1}{n} \log \P^* \Bigl(\frac{M_n}{n} \leq x \Bigr) \leq - \inf_{t \in (0,\min \{1- \frac{x}{x^*},1 \}]} \Bigl\{t \rho + tI\Bigl(-\frac{(1-t)x^*-x}{t} \Bigr) \Bigr\} = -H(x). \end{equation*} Note that we could take the limit $\varepsilon_2 \to 0$, since $I$ is continuous from the right on $(0,\infty)$. \textbf{3. Case}: $x = x^*$\\ The proof is analogous to the proof of Theorem~\ref{theorem_ind_RW}. \end{proof} \paragraph{Acknowledgments} We thank the referee for carefully reading the first version of this article, and for pointing out several inaccuracies. Furthermore, we thank Stefan Junk for discussions regarding the proof of Theorem~\ref{theorem_BRW}. \bibliographystyle{amsplain}
\section{Introduction}\label{Sec:Int} Quantum resource theories provide a rigorous structure to characterise the resources present in quantum systems~\cite{horodecki2013quantumness,coecke2016mathematical,brandao2015reversible,gour2017quantum,regula2017convex}. Such resources arise whenever there is a restriction imposed on the available operations that an agent can perform on the quantum system, identifying a set of \emph{free operations} $\mathcal{O}$ which form a subset of the completely positive and trace preserving linear maps~\cite{nielsen2010quantum}. The restriction also identifies a set of \emph{free states} $\mathcal{F}$, forming the largest subset of the set $\mathcal{D(H)}$ of quantum states for which any pair of states can be reversibly converted using free operations alone. Any non-free state is hence a \emph{resource state}, since one must always input resource in the form of a non-free operation to create such a state from a free state. The restricted agent-based approach~\cite{del2015resource} to characterising quantum resources has already been particularly fruitful for understanding quantum entanglement~\cite{horodecki2009quantum,bengtsson2007geometry}, where the restriction is given by the paradigm of local operations and classical communication (LOCC) between spatially separated parties~\cite{bennett1996concentrating,horodecki2009quantum,donald2002uniqueness}. In fact, entanglement theory can act as a progenitor for modelling more general resource theories. For example, the many-copy interconversion between resource states using free operations, first understood for entanglement theory, leads to the general concept of resource distillation and cost~\cite{bennett1996concentrating,bennett1996mixed,rains1999bound,rains1999rigorous,brandao2013resource,anshu_2017}.
The development of general quantum resource theories has led to further understanding of the resources of quantum coherence~\cite{baumgratz2014quantifying,streltsov2017colloquium,aberg2006quantifying}, quantum correlations~\cite{adesso2016measures,bromley2016there,modi2012classical,streltsov2011behavior,hu2012necessary,guo2013necessary,brodutch2012criteria,aaronson2013comparative,spehner2014quantum} and other nonclassical properties~\cite{brandao2015second,gour2015resource,veitch2014resource,schuch2004nonlocal,gour2008resource,de2014nonlocality,liu2016theory,devetak2008resource,theurer2017resource,ahmadi2017quantification}. An important question is to consider how much resource is present in a given state. The free operations allow for a qualitative characterisation: a state $\rho$ is more resourceful than another state $\sigma$ if there exists a free operation $\Lambda \in \mathcal{O}$ so that $\sigma = \Lambda(\rho)$, meaning that the state $\sigma$ can be prepared from $\rho$ without consuming any resource. Such a characterisation can result in a complicated multibranch hierarchy~\cite{dur2000three,verstraete2002four}, where it can be difficult to identify necessary and sufficient conditions for interconvertibility between two resource states~\cite{nielsen2010quantum,nielsen1999conditions,hayden2000locc,chitambar2016critical,du2015conditions,winter2016operational}. However, quantum resource theories also provide the structure to quantitatively measure the resource content of a state~\cite{plenio2007introduction,bengtsson2007geometry,adesso2016measures,streltsov2017colloquium,regula2017convex}. Here, the complicated multibranch hierarchy can be condensed into a single quantitative ordering that preserves the hierarchy within each branch. Since there is not a unique way to impose a quantitative ordering, there is no unique measure of the resources present in a quantum system. 
Although this may appear counterintuitive, one may reconcile the non-uniqueness of resource measures from an operational perspective: we expect to exploit our quantum resource for a variety of different tasks, and each task may value certain resource states over others and hence impose a different ordering. Any \emph{bona fide} measure ${R}$ of a quantum resource must be compatible with the corresponding quantum resource theory by satisfying two universal requirements. First, it must hold that ${R}(\rho)\geq 0$ for all $\rho \in \mathcal{D(H)}$ and ${R}(\rho)= 0$ for all $\rho \in \mathcal{F}$, i.e. that a resource measure is in general non-negative and always zero when there is no resource. Second, it must hold that ${R}(\Lambda(\rho))\leq {R}(\rho)$ for all $\rho \in \mathcal{D(H)}$ and $\Lambda \in \mathcal{O}$. This requirement is known as \emph{resource monotonicity}, and imposes that resource measures should preserve the hierarchy within each branch. Additional properties may also be considered for a given resource, such as strong monotonicity~\cite{horodecki2009quantum} or convexity whenever $\mathcal{F}$ is convex (see e.g.~\cite{bengtsson2007geometry,plenio2007introduction} for comprehensive accounts of the requirements for measures of entanglement). When a bona fide resource measure is selected, one is then presented with the task of evaluating the measure for arbitrary states. This task is typically intractable analytically and difficult numerically, often resulting in strong restrictions on the applicability of the resource measure.
For example, consider the non-trivial optimisations given by the distance-based resource measures \begin{equation}\label{Eq:DistBased} {R}^{D_{\delta}}(\rho) := \inf_{\sigma \in \mathcal{F}} D_{\delta}(\rho,\sigma), \end{equation} where $D_{\delta}$ is a contractive distance on the set of quantum states~\cite{bengtsson2007geometry,vedral1997quantifying,vedral1998entanglement}, as well as by the (generalised) resource robustness~\cite{vidal1999robustness,steiner2003generalized,napoli2016robustness,piani2016robustness,bromley2017navigating,regula2017convex} \begin{equation}\label{Eq:RobustnessGeneral} {R}^{R}(\rho):= \inf_{\tau \in \mathcal{D(H)}} \left\lbrace s \geq 0 \left| \frac{\rho + s \tau}{1+s}=: \sigma \in \mathcal{F} \right\rbrace \right. , \end{equation} which quantifies the resilience of a resource state $\rho$ against mixing. In this paper, we construct a general framework to calculate simplified lower bounds to bona fide resource measures. We begin in Section~\ref{Section:RNIPS} by introducing two of the foundational concepts for our framework: the resource non-increasing projections and the corresponding resource guarantor states, both of which can have wider relevance to quantum resource theory. In Section~\ref{Section:Framework}, we detail the four main steps of our framework. Our approach is not restricted to specific types of resource, since it relies on general concepts using the structure of resource theories. Nevertheless, to verify the usability of our framework, we provide example applications in Section~\ref{Section:Application}, focusing in particular on the resource of multiqubit entanglement. Here we first provide a method for constructing entanglement non-increasing projections and identifying their corresponding entanglement guarantor states. 
By using this construction we define a new family of entanglement guarantor states complementing those highlighted in~\cite{cianciaruso2016accessible}, and proceed to evaluate the robustness of multiqubit entanglement on these states, which can in turn be used to lower bound the robustness of the GHZ state. Finally, we conclude and discuss our findings in Section~\ref{Section:Discussion}. \section{Resource non-increasing projections and resource guarantor states}\label{Section:RNIPS} \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{RNIPs.pdf} \caption{The action of a resource non-increasing projection (RNIP) $\Pi$, which satisfies $\Pi^2 = \Pi \in \mathcal{O}$, is to project the set of states $\mathcal{D(H)}$ onto the set of resource guarantor states (RGSs) $\mathcal{G}$ (dashed orange ellipse). Any state $\rho \in \mathcal{D(H)}$ has a corresponding RGS $\Pi(\rho)\in \mathcal{G}$ (orange circle), with a many-to-one correspondence between general states and RGSs (dotted orange area). The intersection between free states $\mathcal{F}$ (solid blue ellipse) and RGSs $\mathcal{G}$ is the set of free RGSs $\mathcal{F}_{\mathcal{G}}$, so that every free state $\sigma \in \mathcal{F}$ is transformed into a corresponding free RGS $\Pi(\sigma) \in \mathcal{F}_{\mathcal{G}}$.} \label{Figure:RNIPsAndRGSs} \end{figure} We now introduce the two main foundations of our framework. A quantum operation $\Pi$ that satisfies the composition relation $\Pi^2 = \Pi$ is referred to as a \emph{projection}\footnote{Here we restrict to quantum operations that preserve the dimension of the quantum system.}. We define the \emph{resource non-increasing projections} (RNIPs) to be the subset of projections that are also free. Every such $\Pi \in \mathcal{O}$ identifies a corresponding set of \emph{resource guarantor states} (RGSs) $\mathcal{G}$ given by all the states left invariant by $\Pi$, i.e.
\begin{equation} \mathcal{G} = \left\lbrace \rho \in \mathcal{D(H)} \,\,\, \left| \,\,\, \Pi(\rho) = \rho \right\rbrace \right. . \end{equation} It can then be seen that the action of $\Pi$ on the set of quantum states $\mathcal{D(H)}$ is to project every state onto the set of resource guarantor states, so that \begin{align} \begin{split} \Pi(\rho) &\in \mathcal{G} \qquad \forall \,\,\, \rho \in \mathcal{D(H)}, \\ \Pi(\rho) &= \rho \qquad \, \forall \,\,\, \rho \in \mathcal{G}. \end{split} \end{align} Hence, for every $\rho \in \mathcal{D(H)}$ there is a corresponding RGS $\Pi(\rho) \in \mathcal{G}$. Now, for any bona fide measure of a resource ${R}$ we know that \begin{equation}\label{Eq:RGSLowerBound} {R}(\Pi (\rho)) \leq {R}(\rho), \end{equation} which holds since ${R}$ satisfies the resource monotonicity requirement. It can therefore be seen that the state $\Pi(\rho) \in \mathcal{G}$ provides a quantitative guarantee on the resources of $\rho \in \mathcal{D(H)}$ in terms of a lower bound. Figure~\ref{Figure:RNIPsAndRGSs} illustrates the action of RNIPs and the corresponding set of RGSs. We remark that there is generally a many-to-one correspondence between a general state $\rho \in \mathcal{D(H)}$ and the corresponding RGS $\Pi(\rho) \in \mathcal{G}$. Furthermore, RNIPs present a generalisation of the resource destroying maps introduced in~\cite{liu2016theory}, which are an extremal form of RNIPs that destroy all resource, as their RGSs are free states. In general, since $\Pi \in \mathcal{O}$, we know that the action of $\Pi$ on the set of free states $\sigma \in \mathcal{F}$ is to map it to a subset $\mathcal{F}_{\mathcal{G}}$ of free RGSs, i.e. the intersection between $\mathcal{F}$ and $\mathcal{G}$. We have that \begin{equation}\label{Eq:FreeRNGs} \mathcal{F}_{\mathcal{G}} = \left\lbrace \Pi(\rho) \,\,\, \left| \,\,\, \rho \in \mathcal{F} \right\rbrace \right. \subset \mathcal{F}. 
\end{equation} We see in the following that our framework allows for a simplification in the evaluation of resource measures by replacing optimisation over all free states $\mathcal{F}$ with an optimisation over the free RGSs $\mathcal{F}_{\mathcal{G}} \subset \mathcal{F}$, which are typically simpler to characterise. \section{Framework}\label{Section:Framework} We now specify the general framework for providing lower bounds to bona fide resource measures for arbitrary quantum states. Our framework consists of four steps. \begin{description} \item[Step One: ]{Identify an RNIP $\Pi$ and characterise the corresponding set of RGSs $\mathcal{G}$.} \item[Step Two: ]{Characterise the set $\mathcal{F}_{\mathcal{G}}$ of free RGSs.} \item[Step Three: ]{Evaluate ${R}(\varpi)$ for all $\varpi \in \mathcal{G}$.} \item[Step Four: ]{Optimise the lower bound ${R}(\Pi(\mathcal{U}(\rho))) \leq {R}(\rho)$ over free unitaries $\mathcal{U} \in \mathcal{O}$.} \end{description} Each step is now explained in detail. An illustration of the framework is provided in Fig.~\ref{Figure:Framework} and an example of its application to the resource of multiqubit entanglement can be found in Sec.~\ref{Section:Application}. The first step is to identify an RNIP and characterise the corresponding RGSs. This step requires attention to two objectives: on the one hand it is desirable for the RGSs and free RGSs to be simple to characterise, so that ${R}(\varpi)$ can be evaluated for any $\varpi \in \mathcal{G}$. On the other hand, one does not want to pick an RNIP that destroys a lot of resource, as the resultant lower bound in Eq.~(\ref{Eq:RGSLowerBound}) becomes less informative. Indeed, it is possible for $\Pi(\rho) \in \mathcal{F}_{\mathcal{G}}$ even if $\rho \notin \mathcal{F}$, so that the corresponding lower bound is trivial. This is seen in the extreme for the resource destroying maps~\cite{liu2016theory}, for which $\Pi(\rho) \in \mathcal{F}_{\mathcal{G}}$ for all $\rho \in \mathcal{D(H)}$. 
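As an elementary numerical sanity check of these definitions, the following sketch implements an RNIP and verifies both its idempotence and the lower bound of Eq.~(\ref{Eq:RGSLowerBound}). The instance is our own illustrative choice, not taken from the text at this point: the resource is quantum coherence in a fixed basis (treated as an example later), the measure is the $\ell_1$-norm of coherence (a known coherence monotone), and the RNIP is a partial dephasing that keeps only the coherences inside a chosen block.

```python
# Sketch: a resource non-increasing projection (RNIP) for coherence.
# Illustrative assumptions: measure = l1-norm of coherence,
# C(rho) = sum_{i != j} |rho_ij|; RNIP = dephasing outside a block.
import random

def dephase_outside_block(rho, block):
    """Zero every coherence outside `block`; idempotent and incoherent."""
    d = len(rho)
    return [[rho[i][j] if (i == j or (i in block and j in block)) else 0.0
             for j in range(d)] for i in range(d)]

def l1_coherence(rho):
    d = len(rho)
    return sum(abs(rho[i][j]) for i in range(d) for j in range(d) if i != j)

def random_state(d):
    # Random pure-state density matrix |psi><psi| (enough for the check).
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]
    norm = sum(abs(x) ** 2 for x in v) ** 0.5
    v = [x / norm for x in v]
    return [[v[i] * v[j].conjugate() for j in range(d)] for i in range(d)]

random.seed(0)
for _ in range(100):
    rho = random_state(3)
    pi_rho = dephase_outside_block(rho, {0, 1})
    # Pi is a projection: applying it twice changes nothing.
    assert dephase_outside_block(pi_rho, {0, 1}) == pi_rho
    # Eq. (RGSLowerBound): the RGS Pi(rho) lower-bounds the resource of rho.
    assert l1_coherence(pi_rho) <= l1_coherence(rho) + 1e-12
```

The many-to-one correspondence is also visible here: every state sharing the same diagonal and the same $\{0,1\}$-block coherences is mapped to the same RGS.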
\begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{Steps2.pdf} \caption{A zoom-in onto the set $\mathcal{G}$ of RGSs illustrates the four steps of our framework. The first two steps consist of fixing an RNIP $\Pi$ and finding the sets of RGSs $\mathcal{G}$ (dashed orange ellipse) and free RGSs $\mathcal{F}_{\mathcal{G}}$ (given by the intersection with the solid blue ellipse). In the third step, we use the characterisations of $\mathcal{G}$ and $\mathcal{F}_{\mathcal{G}}$ to evaluate a chosen resource measure ${R}(\varpi)$ for any RGS $\varpi \in \mathcal{G}$ (illustrated here for the distance-based measures in Eq.~(\ref{Eq:DistanceBasedRGS})). Finally, the fourth step involves considering the optimised lower bound $\max_{\mathcal{U}} {R}(\Pi(\mathcal{U}(\rho)))\leq {R}(\rho)$ over all free unitary operations $\mathcal{U} \in \mathcal{O}$, with the set $\Pi(\mathcal{U}(\rho)) \in \mathcal{G}$ of RGSs illustrated by the orange ellipse.} \label{Figure:Framework} \end{figure} The second step consists of characterising the set $\mathcal{F}_{\mathcal{G}}$ of free RGSs, i.e. the intersection between $\mathcal{F}$ and $\mathcal{G}$. This can be achieved using Eq.~(\ref{Eq:FreeRNGs}), which tells us that $\mathcal{F}_{\mathcal{G}}$ is simply the result of applying the chosen RNIP onto the set of free states $\mathcal{F}$. In the third step of the framework, we evaluate a chosen resource measure ${R}(\varpi)$ on the set of RGSs, i.e. for all $\varpi \in \mathcal{G}$. Typically, the evaluation of ${R}(\varpi)$ for $\varpi \in \mathcal{G}$ is much more affordable than the evaluation of ${R}(\rho)$ for $\rho \in \mathcal{D(H)}$, since one can employ a number of tricks to simplify the optimisation. For example, consider the distance-based resource measures in Eq.~(\ref{Eq:DistBased}).
It can be seen that for any $\varpi \in \mathcal{G}$ \begin{eqnarray}\label{Eq:DistanceBasedRGS} {R}^{D_{\delta}}(\varpi) &=& \inf_{\sigma \in \mathcal{F}} D_{\delta}(\varpi,\sigma) \nonumber \\ &=& \inf_{\sigma \in \mathcal{F}} D_{\delta}(\Pi(\varpi),\Pi(\sigma)) \nonumber \\ &=& \inf_{\sigma \in \mathcal{F}_{\mathcal{G}}} D_{\delta}(\varpi,\sigma), \end{eqnarray} where in the second equality we use the contractivity of the distance $D_{\delta}$ under any quantum operation, while in the third equality we use Eq.~(\ref{Eq:FreeRNGs}) and the fact that $\Pi(\varpi) = \varpi$. This equation means that the distance-based resources of $\varpi \in \mathcal{G}$ are given simply by the distance to the free RGSs. Alternatively, consider the resource robustness in Eq.~(\ref{Eq:RobustnessGeneral}). Whenever we consider an RGS $\varpi \in \mathcal{G}$, for every mixture $\frac{\varpi + s \tau}{1+s}\in \mathcal{F}$ with $\tau \in \mathcal{D(H)}$, there is a corresponding mixture $\frac{\varpi + s \Pi(\tau)}{1+s}\in \mathcal{F}_{\mathcal{G}}$. Hence, it holds that \begin{equation}\label{Eq:RobustnessRGS} {R}^{R}(\varpi) = \inf_{\tau \in \mathcal{G}} \left\lbrace s \geq 0 \left| \frac{\varpi + s \tau}{1+s}=: \sigma \in \mathcal{F}_{\mathcal{G}} \right\rbrace \right. , \end{equation} so that one needs only to consider mixtures of $\varpi$ with other RGSs to obtain a free RGS. We note that Eq.~(\ref{Eq:RobustnessRGS}) is a convex optimisation problem whenever $\mathcal{G}$ and $\mathcal{F}_{\mathcal{G}}$ are convex sets, and that Eq.~(\ref{Eq:RobustnessRGS}) may be evaluated as the solution of a semidefinite program~\cite{boyd2004convex,piani2016robustness} if $\mathcal{G}$ and $\mathcal{F}_{\mathcal{G}}$ can additionally be characterised with a finite number of linear matrix inequalities.
This can be the case even when Eq.~(\ref{Eq:RobustnessGeneral}) cannot be posed as the solution to a semidefinite program, as we see for the example in Sec.~\ref{Section:Application} of multiqubit entanglement. The final step of our framework is to provide a lower bound to the resource degree of any state $\rho \in \mathcal{D(H)}$, according to Eq.~(\ref{Eq:RGSLowerBound}), by considering the corresponding RGS $\Pi(\rho) \in \mathcal{G}$. However, this lower bound may be optimised over free unitaries: the unitary transformations $\mathcal{U}(\rho):= U \rho U^{\dagger}$ with $U U^{\dagger} = U^{\dagger} U = \mathbb{I}$ such that $\mathcal{U} \in \mathcal{O}$. Indeed, it is straightforward to see that ${R}(\rho) = {R}(\mathcal{U}(\rho))$ for any monotonic resource measure, so that \begin{equation} {R}(\Pi(\mathcal{U}(\rho))) \leq {R}(\mathcal{U}(\rho)) = {R}(\rho). \end{equation} Evaluating the maximum of the left hand side of the above equation over all free unitary operations $\mathcal{U} \in \mathcal{O}$ hence provides an optimised lower bound to the resource contents of $\rho \in \mathcal{D(H)}$. Nevertheless, the set of free unitaries may not always be fully characterised for a given resource, while optimisation over the free unitaries can be computationally intensive. It can thus be more realistic to optimise over a well-characterised subset of free unitaries. As we see in the following, the free unitaries of multiqubit entanglement are local qubit unitaries. An optimisation over the ${\sf SU(2)}$ group is often simple to perform, with some cases amenable to evaluation as the result of a semidefinite program. These four steps compose the main structure of our framework.
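To make the four steps concrete, here is a minimal end-to-end run on a toy instance of our own choosing: single-qutrit coherence with the $\ell_1$-norm measure, a block-dephasing RNIP, and basis permutations standing in for the free unitaries of step four. None of these choices come from the multiqubit application below; they are the simplest setting in which step four visibly improves the bound.

```python
# Toy run of the four-step framework (coherence, l1-norm measure; all
# choices here are illustrative assumptions, not the text's entanglement
# application).
from itertools import permutations

def l1_coherence(rho):
    d = len(rho)
    return sum(abs(rho[i][j]) for i in range(d) for j in range(d) if i != j)

def project(rho):
    # Step 1: RNIP = dephase every coherence outside the {0,1} block.
    d, keep = len(rho), {0, 1}
    return [[rho[i][j] if (i == j or (i in keep and j in keep)) else 0.0
             for j in range(d)] for i in range(d)]

# Step 2: the free RGSs are the diagonal (incoherent) fixed points of
# `project`.  Step 3: on an RGS, the l1-norm is just 2*|rho_01|,
# so the measure is trivial to evaluate there.

def optimised_bound(rho):
    # Step 4: maximise the bound over free unitaries; for coherence the
    # incoherent unitaries include basis permutations, so we relabel the
    # basis before projecting.
    d = len(rho)
    best = 0.0
    for perm in permutations(range(d)):
        rotated = [[rho[perm[i]][perm[j]] for j in range(d)] for i in range(d)]
        best = max(best, l1_coherence(project(rotated)))
    return best

# A state whose only coherence sits outside the preserved block:
rho = [[1/3, 0.0, 0.3],
       [0.0, 1/3, 0.0],
       [0.3, 0.0, 1/3]]
assert l1_coherence(project(rho)) == 0.0          # naive bound is trivial
assert optimised_bound(rho) == l1_coherence(rho)  # step 4 recovers the bound
```

The final assertion illustrates the caveat raised in step one: without the unitary optimisation, $\Pi(\rho)$ is a free RGS even though $\rho$ is resourceful, and the bound is trivial.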
Whilst step one and step four have already been employed for the resource of entanglement~\cite{buchholz2015evaluating,hofmann2014analytical,siewert2012quantifying,eltschka2012quantitative}, our primary contribution here is formalising the framework for general resources, as well as highlighting the simplifications possible in evaluating resource measures by restricting to what we have defined as RGSs, see Eqs.~(\ref{Eq:DistanceBasedRGS}) and~(\ref{Eq:RobustnessRGS}). It is also important to comment on the experimental applicability of our result. One approach to quantify the resource of a state prepared in the laboratory is to perform a full state tomography~\cite{james2001measurement,thew2002qudit,nielsen2010quantum}, requiring an exponential number of measurement settings (with respect to the number of subsystems) in the worst-case scenario, although a less intensive overhead can be achieved by restricting to low-rank states~\cite{gross2010quantum}. Alternatively, one can reconstruct the corresponding RGS, which may require far fewer measurement settings. For example, in the following section we discuss a family of RGSs that are experimentally accessible using only three local measurement settings. Moreover, the optimisation over free unitaries $\mathcal{U} \in \mathcal{O}$ in the fourth step of our framework can be attempted experimentally whenever partial prior knowledge of the target state is available, as is the case in most scenarios. Here, one performs a change of basis before the measurement according to the optimal free unitary of the target state. \section{Applications of the framework}\label{Section:Application} The framework naturally lends itself to the characterisation of a variety of quantum resources. We first briefly discuss some very natural example applications for thermodynamics and quantum coherence, before proceeding to give a detailed example of applying our framework to multiqubit entanglement.
Our first example, the resource theory of thermodynamics (or athermality) \cite{brandao2013resource,gour2015resource}, identifies a unique free state --- the thermal state of a given Hamiltonian $H$ at a fixed temperature --- and the free operations as those which can be implemented by attaching a thermal ancilla and applying a unitary operation which commutes with the total Hamiltonian of the system. Here, one can consider as a resource non-increasing projection the completely dephasing map $\Delta_H(\cdot) = \sum_i \proj{E_i} \cdot \proj{E_i}$ where $\{\ket{E_i}\}$ is the eigenbasis of $H$. This greatly simplifies the evaluation of resource measures, since the resultant states are simply classical probability distributions, and indeed the problem of computing distance-based resource measures reduces to optimising distances between probability distributions. Such projections have already found use in the description of operational tasks in this resource theory \cite{brandao2013resource,horodecki_2013-1,lostaglio_2015}. Another example is quantum coherence, which captures the existence of a quantum system in a superposition of states with respect to a given reference basis, and has relevance in fundamental information processing tasks, metrology, and quantum biology, as well as being a crucial ingredient for the creation of entanglement~\cite{streltsov2017colloquium}. The coherence-free states, or \emph{incoherent states}, can be identified as diagonal when represented in the reference basis $\{\ket{i}\}$, while the free operations can be identified as \emph{incoherent operations}~\cite{baumgratz2014quantifying}. Any projective measurement on subspaces spanned by vectors of the reference basis, representing a decohering map which zeroes some off-diagonal elements of the density matrix, is then a resource non-increasing projection. Identifying particular instances of this type of projection will then provide varying lower bounds for coherent states. 
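As a concrete illustration of the dephasing projection (a minimal numerical sketch of our own, assuming a single qubit with $H = \sigma_3$ and the trace distance as the chosen measure), contractivity of the trace distance together with the invariance of the thermal state under $\Delta_H$ guarantees that the athermality of $\Delta_H(\rho)$ lower-bounds that of $\rho$:

```python
import numpy as np

def trace_distance(a, b):
    # half the sum of the absolute eigenvalues of the Hermitian difference
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

# Thermal (free) state of H = sigma_3 at inverse temperature beta;
# H is diagonal, so the Gibbs weights are elementwise exponentials.
beta = 0.5
boltz = np.exp(-beta * np.array([1.0, -1.0]))
tau = np.diag(boltz / boltz.sum())

def dephase(rho):
    # completely dephasing map Delta_H: zero the off-diagonal terms
    # in the eigenbasis of H (here, the computational basis)
    return np.diag(np.diag(rho))

# a pure state with coherence in the energy eigenbasis
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])
rho = np.outer(psi, psi)

# Contractivity of the trace distance plus Delta_H(tau) = tau imply that
# the dephased state is at least as close to the thermal state as rho is.
d_proj = trace_distance(dephase(rho), tau)
d_full = trace_distance(rho, tau)
assert d_proj <= d_full + 1e-12
```

The same check goes through for any contractive distance, since $\Delta_H$ is a free operation leaving the unique free state invariant.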
Another type of RNIP that has been employed in the resource theory of coherence is an operation reducing all two-qubit states to so-called $\mathcal{M}_{N}^{3}$ states~\cite{silva2016observation}, which we will also encounter in Sec.~\ref{sec:applying_entanglement}. We now provide a more in-depth analysis of the application of our framework to the resource theory of multiqubit entanglement --- a fundamental resource of paramount importance in quantum information \cite{horodecki2009quantum}, although quantitatively very difficult to characterise. We begin by outlining the background details of multiqubit entanglement and then proceed to discuss a general method to construct entanglement non-increasing projections and find their corresponding entanglement guarantor states. This construction is used to identify the $EG_{N} \ $ states, which we then use within our framework. \subsection{Resource theory of multiqubit entanglement} Within the quantum resource theory of multiqubit entanglement, there exists a hierarchy of free states referred to as $M$\textit{-separable states}, with $2 \leq M \leq N$. These states can be written as convex combinations of product states, each of which is factorised with respect to any (possibly different) partition of the $N$ qubits into $M$ subsystems, i.e. \begin{equation}\label{Eq:MSep} \varsigma = \sum_i p_i |\psi_i^{(1)}\rangle\langle\psi_i^{(1)}|\otimes|\psi_i^{(2)}\rangle\langle\psi_i^{(2)}|\otimes\cdots\otimes|\psi_i^{(M)}\rangle\langle\psi_i^{(M)}|, \end{equation} where $|\psi_i^{(\alpha)}\rangle$ is any pure state of the $\alpha$-th subsystem of the $M$-partition corresponding to the $i$-th term (we stress that the $M$-partition is allowed to vary for different values of $i$). The hierarchy of $M$-separable states is as follows: $M$-separability implies $M'$-separability for any $M'<M$, whereas $M$-inseparability implies $M'$-inseparability for any $M'>M$. 
For example, when considering the two extremes of this hierarchy, we have that $N$-separability implies any other form of $M$-separability, and is thus called \textit{full separability}, whereas $2$-inseparability implies any other form of $M$-inseparability, and is thus called \textit{genuine multiqubit entanglement} or \textit{full inseparability}. The free operations are instead given by the single-qubit LOCC, whereby only operations that are local on each of the $N$ qubits are permitted, along with classical communication~\cite{chitambar2014everything}. An important instance of single-qubit LOCC is a convex combination of single-qubit local unitaries, whose action on a state $\rho$ is given by \begin{equation} \sum_{i} p_{i}\ U_{i}^{(1)} \otimes U_{i}^{(2)} \otimes \ldots \otimes U_{i}^{(N)} \rho\ U_{i}^{(1) \dagger} \otimes U_{i}^{(2) \dagger} \otimes \ldots \otimes U_{i}^{(N) \dagger}. \end{equation} It requires only one-way communication and can be physically achieved by allowing one of the qubit parties, e.g. the $\alpha$-th one, to randomly select a local unitary $U_{i}^{(\alpha)}$ by using the probability distribution $\{p_{i}\}$ and then to communicate the result to all the other parties. Having identified the free states and free operations, we can define a bona fide measure $E_{M}$ of $M$-inseparable multiqubit entanglement to be any function satisfying the requirements discussed in Sec.~\ref{Sec:Int}. In particular, the distance-based measures $E_{M}^{D_{\delta}}$ are specified by Eq.~(\ref{Eq:DistBased}) and the entanglement robustness is specified by Eq.~(\ref{Eq:RobustnessGeneral}). 
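A quick numerical sanity check (a sketch of our own, not part of the resource theory itself) confirms that such a mixture of single-qubit local unitaries cannot generate entanglement: applied to a two-qubit product state it returns a convex mixture of product states, which necessarily remains positive under partial transposition (PPT):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    # Haar-distributed unitary via QR of a complex Gaussian matrix
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# two-qubit product state |0><0| (x) |+><+|
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.kron(np.outer(ket0, ket0), np.outer(ketp, ketp)).astype(complex)

# random convex combination of single-qubit local unitaries
weights = rng.dirichlet(np.ones(4))
out = np.zeros((4, 4), dtype=complex)
for w in weights:
    L = np.kron(random_unitary(2), random_unitary(2))
    out += w * L @ rho @ L.conj().T

# partial transpose on the second qubit: a separable state must stay PPT
pt = out.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
min_eig = np.linalg.eigvalsh(pt).min()
assert min_eig > -1e-10
```

For two qubits PPT is equivalent to separability, so the check is conclusive in this dimension.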
\begin{comment} Having introduced the free states and free operations of multiqubit entanglement, we can now list the axioms that a real and non-negative function $E_M$ on the set of states $\rho$ has to satisfy in order to be a fully bona fide measure $E_M$ of $M$-inseparable entanglement~\cite{horodecki2009quantum,vidal2000entanglement,plenio2007introduction}: \begin{itemize} \item (E1) $E_M(\varsigma)=0$ if $\varsigma\in\mathcal{S}_{M}$; \item (E2) $E_M(\Lambda_{LOCC}(\rho))\leq E_M(\rho)$ for any single-qubit LOCC $\Lambda_{LOCC}$; \item (E3) $\sum_i p_i E_M(\rho_i)\leq E_M(\rho)$, with $p_i=\mbox{{\rm Tr}}(K_i \rho K_i^\dagger)$ and $\rho_i=K_i \rho K_i^\dagger/p_i$, for any $\{K_i\}$ such that $\sum_i K_i^\dagger K_i=\mathbb{I}$ and $K_i$ is a single-qubit local operator for any $i$; \item (E4) $E_M(\sum_i p_i \rho_i)\leq \sum_i p_i E_M(\rho_i)$ for any probability distribution $\{p_i\}$ and any set of states $\{\rho_i\}$. \end{itemize} Condition (E1) entails that the separable states have zero resource. Condition (E2) ensures that entanglement cannot increase under the corresponding free operations, i.e. LOCC, and implies that it is invariant under local unitaries, i.e. any local change of basis. Condition (E3) states that entanglement cannot even increase on average under selective LOCC, i.e. non-entangling quantum operations for which the information about the measurement outcomes $\{\rho_i\}$ is retained. This stronger monotonicity requirement is quite important as it allows for sub-selection based on measurement outcomes, a process available in well controlled quantum experiments. Finally, (E4) implies that entanglement cannot increase by classical mixture. Notice, however, that while convexity (E4) is physically desirable as it would mean that entanglement cannot increase by classically mixing states, it is not essential, since there are meaningful entanglement monotones that do not share such property~\cite{plenio2005logarithmic}. 
Two notable examples of fully bona fide quantifiers of $M$-inseparable entanglement of a state $\rho$ are given by the geometric $M$-inseparable entanglement, defined as the minimal distance between $\rho$ and the set of $M$-separable states: \begin{equation}\label{eq:geometricentanglement} E_M^D(\rho) := \inf_{\varsigma\in\mathcal{S}_M} D(\rho,\varsigma), \end{equation} where $D$ is any contractive distance, and the (generalised) robustness of $M$-inseparable entanglement \cite{cavalcanti_2005}, defined as the minimal amount of mixing between $\rho$ and an arbitrary state $\tau$ to get an $M$-separable state $\varsigma$: \begin{equation}\label{eq:robustnessofentanglement} E_M^R(\rho) := \inf_{\tau \in {\mathcal{D}}(\mathcal{H})} \left\{ s\geq 0\ \Big\vert\ \frac{\rho + s\ \tau}{1+s} =: \varsigma \in {\mathcal{S}_M}\right\}. \end{equation} \end{comment} \subsection{Constructing entanglement non-increasing projections}\label{Sec:Systematic} We now introduce a systematic way to build multiqubit entanglement non-increasing projections (ENIPs) and, as a consequence, the corresponding entanglement guarantor states (EGSs). First of all, let us give a shorthand notation for the Bloch representation of a generic $N$-qubit state $\rho$ in the computational basis $\{|0\rangle,|1\rangle \}^{\otimes N}$: \begin{equation}\label{eq:BlochRepresentation} \rho=\frac{1}{2^N} \sum_{\bm{\alpha}\in I_N} T^\rho_{\bm{\alpha}} P_{\bm{\alpha}}, \end{equation} where the set $I_N=\{0,1,2,3\}^{N}$ contains all the $N$-tuples $\bm{\alpha}=(\alpha_1,\alpha_2,\cdots,\alpha_N)$ of indices ranging from $0$ to $3$, $P_{\bm{\alpha}}=\sigma_{\alpha_1}\otimes\sigma_{\alpha_2}\otimes\cdots\otimes\sigma_{\alpha_N}$, with $\sigma_0=\mathbb{I}$ being the $2\times 2$ identity matrix and $\sigma_1,\sigma_2,\sigma_3$ the Pauli matrices, and $T^\rho_{\bm{\alpha}}=\mbox{Tr}(\rho P_{\bm{\alpha}})$ are the so-called correlation tensor elements of $\rho$. 
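Computing the correlation tensor of Eq.~(\ref{eq:BlochRepresentation}) is straightforward; the following sketch (our own, using a two-qubit Bell state as the example) extracts all elements $T^\rho_{\bm{\alpha}}$ and verifies that the Bloch expansion reconstructs $\rho$:

```python
import itertools
import numpy as np

paulis = [np.eye(2),
          np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]

def P(alpha):
    # P_alpha = sigma_{alpha_1} (x) ... (x) sigma_{alpha_N}
    out = np.array([[1.0 + 0j]])
    for a in alpha:
        out = np.kron(out, paulis[a])
    return out

def correlation_tensor(rho, N):
    # T_alpha = Tr(rho P_alpha); real, since rho and P_alpha are Hermitian
    return {a: np.trace(rho @ P(a)).real
            for a in itertools.product(range(4), repeat=N)}

# two-qubit example: the Bell state (|00> + |11>)/sqrt(2)
N = 2
bell = np.zeros(2**N)
bell[0] = bell[-1] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)

T = correlation_tensor(rho, N)
recon = sum(t * P(a) for a, t in T.items()) / 2**N
assert np.allclose(recon, rho)   # the Bloch expansion reconstructs rho
```

The reconstruction works for any $N$, at the usual exponential cost of enumerating all $4^N$ tuples.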
The single-qubit (Hermitian) local unitaries $P_{\bm{\alpha}}$ satisfy several properties that will be extremely useful in the following, see \ref{Appendix:Palpha} for further details. Now we provide a systematic way to project, via single-qubit LOCC, an arbitrary state $\rho$ onto an EGS of the following form: \begin{equation}\label{eq:entanglementguarantorstates} \varpi_\rho = \frac{1}{2^N} \sum_{\bm{\alpha}\in G} T^\rho_{\bm{\alpha}} P_{\bm{\alpha}}, \end{equation} for some instances of $G\subset I_N$. This ENIP consists of setting to zero all the correlation tensor elements $T^\rho_{\bm{\alpha}}$ of $\rho$ such that $\bm{\alpha}\notin G$, while leaving the remaining ones untouched. The number of surviving correlation tensor elements $T^\rho_{\bm{\alpha}}$ is given by the cardinality $|G|$ of the set $G$. One may pick $G$ so that $|G|$ is small, leading to a reduction in the complexity of evaluating the multiqubit entanglement of $\varpi_{\rho}$ as well as a decreased overhead in the number of local measurement settings required to recover $\varpi_{\rho}$ in the laboratory. However, $|G|$ can be large and still tractable, e.g., for the entanglement robustness whenever $\varpi_{\rho}$ and the corresponding free states can be described with a finite number of linear matrix inequalities. One approach to performing the above ENIP is to apply to $\rho$ the following convex combination of single-qubit local unitaries (which is a single-qubit LOCC as we have previously mentioned): \begin{equation}\label{eq:groupavarageprojection} \Pi_{G}(\rho):= \frac{1}{|J_G|} \sum_{\bm{\alpha}\in J_G} P_{\bm{\alpha}}' \rho P_{\bm{\alpha}}'^{\dagger}, \end{equation} where $J_G\subset I_N$ is defined in such a way that \begin{equation}\label{eq:conditionsonUG} \left\{ \begin{array}{c} [P_{\bm{\alpha}},P_{\bm{\beta}}']=0 \ \ \ \ \forall \bm{\alpha}\in G, \ \bm{\beta}\in J_G, \\ \exists \bm{\beta}\in J_G : \{P_{\bm{\alpha}},P_{\bm{\beta}}'\}=0 \ \forall \bm{\alpha} \notin G . 
\end{array} \right. \end{equation} This ENIP is successful, i.e. $\Pi_{G}(\rho)$ can always be written as in Eq.~(\ref{eq:entanglementguarantorstates}), provided that the matrices $P_{\bm{\beta}}'$ for which $\bm{\beta}\in J_G$ form a set that can be written as \begin{equation}\label{eq:groupuptoaphase} \{P_{\bm{\beta}_i}'\}_{i=1}^{2^{n}} = \left \{ \begin{tabular}{c} $\mathbb{I}^{\otimes N}$ \\ $\{P_{\bm{\alpha}_{i_1}}\}_{i_{1}=1}^{n}$ \\ $\{P_{\bm{\alpha}_{i_2}}P_{\bm{\alpha}_{i_1}}\}_{i_{2}>i_{1}=1}^{n}$ \\ $\cdots$ \\ $\{P_{\bm{\alpha}_{i_n}} \ldots P_{\bm{\alpha}_{i_2}}P_{\bm{\alpha}_{i_1}}\}_{i_{n}>\ldots>i_{2}>i_{1}=1}^{n}$ \\ \end{tabular} \right \}, \end{equation} for some family of matrices $\{P_{\bm{\alpha_i}}\}_{i=1}^{n}$. Indeed, by using the properties discussed in \ref{Appendix:Palpha}, one can easily see that \begin{equation} \Pi_{G}(\rho) = \varpi_\rho, \end{equation} where $\varpi_\rho$ is defined in Eq.~(\ref{eq:entanglementguarantorstates}). This is due to the fact that for any $\bm{\alpha} \in I_{N}$ \begin{eqnarray} \mbox{Tr}(\Pi_{G}(\rho)P_{\bm{\alpha}}) &=& \frac{1}{|J_G|} \sum_{\bm{\beta}\in J_G} \mbox{Tr} (P_{\bm{\beta}}' \rho P_{\bm{\beta}}'^{\dagger} P_{\bm{\alpha}}) \nonumber \\ &=& \frac{1}{|J_G|} \sum_{\bm{\beta}\in J_G} \mbox{Tr} (\rho P_{\bm{\beta}}'^{\dagger} P_{\bm{\alpha}} P_{\bm{\beta}}') \nonumber \\ &=& \frac{1}{|J_G|} \left[ \sum_{\bm{\beta}\in J_G^+(\bm{\alpha})} \mbox{Tr} (\rho P_{\bm{\alpha}} P_{\bm{\beta}}'^{\dagger}P_{\bm{\beta}}') - \sum_{\bm{\beta}\in J_G^-(\bm{\alpha})}\mbox{Tr} (\rho P_{\bm{\alpha}} P_{\bm{\beta}}'^{\dagger}P_{\bm{\beta}}')\right] \nonumber \\ &=& \frac{1}{|J_G|} (|J_G^+(\bm{\alpha})|-|J_G^-(\bm{\alpha})|) \mbox{Tr} (\rho P_{\bm{\alpha}}) \nonumber \\ &=& \left(2 \frac{|J_G^+(\bm{\alpha})|}{|J_G|} - 1\right) \mbox{Tr} (\rho P_{\bm{\alpha}}) \nonumber \\ &=& \left\{ \begin{array}{c} \mbox{Tr} (\rho P_{\bm{\alpha}}) \ \ \mbox{if}\ \bm{\alpha}\in G, \\ 0 \ \ \ \ \ \ \ \ \ \ \ \ \mbox{otherwise}, 
\end{array} \right. \end{eqnarray} where the first and second lines are due to the linearity and cyclicity of the trace, respectively, the third line is due to the fact that $J_G=J_G^+(\bm{\alpha}) \cup J_G^-(\bm{\alpha})$, with $J_G^+(\bm{\alpha}) := \{\bm{\beta}\in J_G : [P_{\bm{\beta}}',P_{\bm{\alpha}}]=0\}$ and $J_G^-(\bm{\alpha}) := \{\bm{\beta}\in J_G : \{P_{\bm{\beta}}',P_{\bm{\alpha}}\}=0\}$, the fourth line is due to $P_{\bm{\beta}}'^{\dagger}P_{\bm{\beta}}' = P_{\bm{\beta}}^2 = \mathbb{I}$, the fifth line is due to $|J_G|=|J_G^+(\bm{\alpha})|+ |J_G^-(\bm{\alpha})|$, and finally the sixth line is due to the fact that $|J_G^+(\bm{\alpha})|=|J_G|$ when $\bm{\alpha}\in G$ while $|J_G^+(\bm{\alpha})|=|J_G|/2$ otherwise, which in turn is due to Eqs.~(\ref{eq:conditionsonUG}) and the assumption that the matrices $P_{\bm{\beta}}'$ for which $\bm{\beta}\in J_G$ form a set of the form given in Eq.~(\ref{eq:groupuptoaphase}) for some family of matrices $\{P_{\bm{\alpha_i}}\}_{i=1}^{n}$. An alternative implementation of the above described ENIP can be realised by resorting to the following sequential approach. We begin by considering the $n$ single-qubit local unitaries $\{P_{\bm{\alpha}_i}\}_{i=1}^n$. Then, we fix a sequence of states $\{\rho_{i}\}_{i=0}^n$ defined recursively by \begin{equation} \rho_{i} := \frac{1}{2}\left( \rho_{i-1}+P_{\bm{\alpha}_{i}}\rho_{i-1}P_{\bm{\alpha}_{i}} \right), \end{equation} for $i \in \{1,2,\ldots,n\}$. This can be achieved physically in each step by having one of the qubit parties flip a coin and classically communicate the result to all the other parties, with the result of the flip deciding whether the single-qubit local unitary $P_{\bm{\alpha}_{i}}$ is applied or not. 
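The sequential coin-flip averaging can be sketched numerically. Here we use the two-qubit generator pair $\{\sigma_3\otimes\sigma_3,\ \mathbb{I}\otimes\sigma_2\}$ (the $N=2$ choice quoted later in the text) and check that, on a random state, only the correlators commuting with both generators survive, and survive unchanged:

```python
import itertools
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])
paulis = [I2, sx, sy, sz]

# generator pair for N = 2 (the choice quoted later in the text)
gens = [np.kron(sz, sz), np.kron(I2, sy)]

# random two-qubit density matrix
rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# sequential coin-flip averaging: rho_i = (rho_{i-1} + P rho_{i-1} P)/2
out = rho.copy()
for Pg in gens:
    out = 0.5 * (out + Pg @ out @ Pg)

# only the correlators commuting with both generators survive:
# the identity plus sigma1 (x) sigma2, sigma2 (x) sigma2, sigma3 (x) I
surviving = {(0, 0), (1, 2), (2, 2), (3, 0)}
for a in itertools.product(range(4), repeat=2):
    P_a = np.kron(paulis[a[0]], paulis[a[1]])
    t_out = np.trace(out @ P_a)
    if a in surviving:
        assert abs(t_out - np.trace(rho @ P_a)) < 1e-10  # unchanged
    else:
        assert abs(t_out) < 1e-10                        # projected away
```

Each averaging step halves the correlators anticommuting with its generator and leaves the commuting ones fixed, so after all steps the anticommuting ones have been eliminated entirely.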
Then, by setting $\rho_{0} = \rho$ we can easily see that \begin{equation}\label{Eq:ConvexIteration} \Pi_G(\rho) = \rho_{n}= \frac{1}{2^n}\sum_{i=1}^{2^{n}} P_{\bm{\beta}_i}' \rho P_{\bm{\beta}_i}'^{\dagger}, \end{equation} where the matrices $P_{\bm{\beta}_i}'$ are defined in Eq.~(\ref{eq:groupuptoaphase}). We show in the following a particular realisation of this method for constructing an ENIP for any number of qubits $N$, and hence see how it can be used as a tool within our framework. The identification of alternative ENIPs may proceed by first fixing $N$ and choosing a $G \subset I_{N}$, perhaps based on experimental or physical considerations. One then searches for a family of matrices $\{P_{\bm{\alpha_i}}\}_{i=1}^{n}$ so that Eq.~(\ref{eq:conditionsonUG}) and Eq.~(\ref{eq:groupuptoaphase}) hold. If such a family can be found, then the resultant matrices $\{P_{\bm{\beta}_i}'\}_{i=1}^{2^{n}}$ in Eq.~(\ref{eq:groupuptoaphase}) define a $J_{G}$ that can be used to construct the ENIP according to Eq.~(\ref{eq:groupavarageprojection}). Generally, identifying a valid $G$ and $J_{G}$ can be a difficult task. Nevertheless, it is a process that can be easily automated for small numbers of qubits, where the quantification of multiqubit entanglement still remains an open and relevant problem. \subsection{Applying the framework}\label{sec:applying_entanglement} We are now ready to apply the four-step framework introduced in Section \ref{Section:Framework} to the resource theory of multiqubit entanglement. One realisation of this framework has already been achieved in \cite{cianciaruso2016accessible} by considering a fixed ENIP with corresponding EGSs given by the so-called $\mathcal{M}_{N}^{3}$ states $\omega=(\mathbb{I}^{\otimes N}+\sum_{i=1}^3 c_i \sigma_i^{\otimes N})/2^N$ ($N$-qubit mixed states with all maximally mixed marginals), with the $c_{i}\in \mathbb{R}$ constrained so that $\omega$ is positive semidefinite. 
In the following we will introduce another realisation of our framework based on the so-called $EG_{N} \ $ states. As we shall see, the $EG_{N} \ $ states allow us to derive lower bounds on multiqubit entanglement that are complementary to those provided by the $\mathcal{M}_{N}^{3}$ states. The steps of our framework for $EG_{N} \ $ states are now explained. \begin{description} \item[Step One: ]{Identify an ENIP and characterise the corresponding set of EGSs.} \end{description} In this step we can use the previously discussed method to construct ENIPs and find the corresponding EGSs. By resorting to the following $2(N-1)$ local unitaries \begin{eqnarray}\label{eq:localunitariesgforNgeq3} \{P_{\bm{\alpha}_i}\}_{i=1}^{2(N-1)}=\{(\sigma_{3} \otimes \sigma_{3} \otimes \mathbb{I}^{\otimes N-2}), (\mathbb{I} \otimes \sigma_{3} \otimes \sigma_{3} \otimes \mathbb{I}^{\otimes N-3}),\nonumber \\ \ldots, (\mathbb{I}^{\otimes N-3} \otimes \sigma_{3} \otimes \sigma_{3} \otimes \mathbb{I}), (\mathbb{I}^{\otimes N-2} \otimes \sigma_{3} \otimes \sigma_{3}), \nonumber \\ (\sigma_{2} \otimes \sigma_{2} \otimes \mathbb{I}^{\otimes N-2}), (\mathbb{I} \otimes \sigma_{2} \otimes \sigma_{2} \otimes \mathbb{I}^{\otimes N-3}),\nonumber \\ \ldots, (\mathbb{I}^{\otimes N-3} \otimes \sigma_{2} \otimes \sigma_{2} \otimes \mathbb{I}), (\mathbb{I}^{\otimes N-1} \otimes \sigma_{2}) \}, \end{eqnarray} when $N \geq 3$ and to \begin{equation}\label{eq:localunitariesgforNequal2} \{P_{\bm{\alpha}_i}\}_{i=1}^{2}= \{(\sigma_{3} \otimes \sigma_{3}), (\mathbb{I} \otimes \sigma_{2})\} \end{equation} when $N=2$, as well as to the recursive procedure introduced in Eq.~(\ref{Eq:ConvexIteration}), we identify an ENIP and obtain the family of $N$-qubit EGSs whose matrix representation in the computational basis is given by: \begin{equation}\label{eq:NqubitEGState} \varpi = \frac{1}{2^{N}} \left( \mathbb{I}^{\otimes N} + d_{1} \sigma_{1}^{\otimes N-1}\otimes\sigma_2 + d_2 \sigma_{2}^{\otimes N}+ 
d_3 \sigma_{3}^{\otimes N-1}\otimes\mathbb{I}\right), \end{equation} where $d_{1} = {\rm Tr}\left[\varpi (\sigma_{1}^{\otimes N-1}\otimes \sigma_2)\right]$, $d_{2} = {\rm Tr}\left[\varpi \sigma_{2}^{\otimes N}\right]$, and $d_{3} = {\rm Tr}\left[\varpi (\sigma_{3}^{\otimes N-1}\otimes \mathbb{I})\right]$. These states will be referred to as $EG_{N} \ $ states and will also be denoted by the triple $\{d_{1},d_{2},d_{3}\}$. The characterisation of the $EG_{N} \ $ states is manifestly different between the odd and even $N$ cases. In the $\{d_{1},d_{2},d_{3}\}$-space, the set of $EG_{N} \ $ states with odd $N>1$ is represented by the tetrahedron ${\cal T}_{(-1)^{(N-1)/2}}$ with vertices $\{1,(-1)^{(N-1)/2},1\}$, $\{-1,-(-1)^{(N-1)/2},1\}$, $\{1,-(-1)^{(N-1)/2},-1\}$ and $\{-1,(-1)^{(N-1)/2},-1\}$, see Fig.~\ref{Fig:Robustness}. This tetrahedron is constructed simply by imposing the non-negativity of the four distinct eigenvalues of $\varpi$, see \ref{Appendix:CharacterisingVarpi}. Similarly, for even $N$ the set of $EG_{N} \ $ states is given in the $\{d_{1},d_{2},d_{3}\}$-space by the unit ball ${\cal B}_{1}$ centred at the origin. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{Rob.pdf} \caption{The geometry of $EG_{N} \ $ states for odd $N$ and $M > \left \lfloor{N/2}\right \rfloor +1$, together with example choices of optimal states $\sigma$, $\tau$ for the generalised robustness $E_{M}^{R}(\varpi)$. The tetrahedron ${\cal T}_{(-1)^{(N-1)/2}}$ can be seen to contain the $M$-separable octahedron $\mathcal{O}_{1}$ (shaded blue). For any choice of $\varpi$, the closest $M$-separable states $\sigma$ lie in the face of the octahedron closest to $\varpi$ (shaded yellow), and the optimal $\tau$ lie in the base of the tetrahedron (shaded red). The particular choice of the optimal state $\sigma$ is also the closest $M$-separable state with respect to any bona fide distance measure of entanglement. 
Instead, another optimal state $\sigma'$ can be seen to constitute a valid optimal choice for the standard robustness of $M$-inseparable entanglement since $\tau'$ is also $M$-separable.} \label{Fig:Robustness} \end{figure} \begin{description} \item[Step Two: ]{Characterise the set of free EGSs.} \end{description} We now discuss the set $\mathcal{S}_M^{EG_N}$ of $M$-separable $EG_{N} \ $ states for any $2\leq M \leq N$. This set can be characterised as the result of applying the fixed ENIP specified in Eqs.~(\ref{Eq:ConvexIteration}), (\ref{eq:localunitariesgforNgeq3}) and (\ref{eq:localunitariesgforNequal2}) onto the general set of $M$-separable states given in Eq.~(\ref{Eq:MSep}), see \ref{Appendix:SepEGN} for further details. We see that when $M > \left \lfloor{N/2}\right \rfloor +1$ the $M$-separable $EG_{N} \ $ states are such that $|d_1|+|d_2|+|d_3|\leq 1$ and thus fill the set represented in the $\{d_{1},d_{2},d_{3}\}$-space by the unit octahedron ${\cal O}_{1}$ with vertices $\{\pm 1, 0, 0\}$, $\{0, \pm 1, 0\}$ and $\{0, 0, \pm 1\}$, as illustrated in Fig.~\ref{Fig:Robustness}. On the other hand, when $M \leq \left \lfloor{N/2}\right \rfloor +1$ all the $EG_{N} \ $ states are $M$-separable. \begin{description} \item[Step Three: ]{Evaluate $E_{M}(\varpi)$ for all $\varpi \in \mbox{$EG_{N} \ $}$.} \end{description} It is easy to see that $E_{M}(\varpi) =0$ for all $\varpi \in \mbox{$EG_{N} \ $}$ whenever $M \leq \left \lfloor{N/2}\right \rfloor +1$. Therefore, one cannot use the $EG_{N} \ $ states to provide non-trivial lower bounds on multiqubit entanglement for such $M$. We instead focus on the cases $M > \left \lfloor{N/2}\right \rfloor +1$, where the $M$-separable states always form the unit octahedron as a strict subset of all $EG_{N} \ $ states. 
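Before evaluating the measures, it is instructive to compute the $EG_3$ projection of the three-qubit GHZ state directly (a numerical sketch of our own, using the $N=3$ generators of Eq.~(\ref{eq:localunitariesgforNgeq3})): the unrotated state gives the triple $\{0,0,1\}$, hence $h_\varpi = 0$ and a trivial bound, which is precisely why the unitary optimisation of step four below is essential:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

# the 2(N-1) = 4 generators for N = 3
gens = [kron(sz, sz, I2), kron(I2, sz, sz), kron(sy, sy, I2), kron(I2, I2, sy)]

ghz = np.zeros(8)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz)

# sequential coin-flip averaging onto the EG_3 form
out = rho.astype(complex)
for Pg in gens:
    out = 0.5 * (out + Pg @ out @ Pg)

# triple (d1, d2, d3) of the resulting EG_3 state
d1 = np.trace(out @ kron(sx, sx, sy)).real
d2 = np.trace(out @ kron(sy, sy, sy)).real
d3 = np.trace(out @ kron(sz, sz, I2)).real
h = 0.5 * (abs(d1) + abs(d2) + abs(d3) - 1)
# the unrotated GHZ state yields (d1, d2, d3) = (0, 0, 1), so h = 0
```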
Let us first consider the distance-based measures of $M$-inseparable multiqubit entanglement, where Eq.~(\ref{Eq:DistanceBasedRGS}) shows that we simply need to find the minimal distance from $\varpi$ to the set of $EG_{N} \ $ states inside the unit octahedron $\mathcal{O}_{1}$. In the odd $N>1$ case, since all the $EG_{N} \ $ states are diagonal in the same basis (see \ref{Appendix:CharacterisingVarpi}), distances between them reduce to the corresponding classical distance between the probability distributions formed by their eigenvalues. What is more, since the eigenvalues of the $EG_{N} \ $ states with odd $N>1$ are equivalent to the eigenvalues of the $\mathcal{M}_{N}^{3}$ states with even $N$, the ensuing optimisation problem of classical information geometry has already been solved in \cite{cianciaruso2016accessible}. The result is that, for any choice of distance, the closest $M$-separable $EG_{N} \ $ state to $\varpi$ is on the nearest surface of the unit octahedron, with the location specified by the intersection with the extended line connecting $\varpi$ to its corresponding nearest vertex, see Fig.~\ref{Fig:Robustness}. Any bona fide distance-based measure can then be calculated as a monotonically increasing function of the height $h_{\varpi} = \frac{1}{2}\left(\sum_{j=1}^{3} |d_{j}|-1\right) \in [-\frac{1}{2},1]$ above the separable plane (with a negative value indicating that $\varpi$ is in the unit octahedron and hence $M$-separable). On the other hand, for even $N$, the closest $M$-separable state to $\varpi$ depends on the choice of distance. 
Nevertheless, since $\sigma_{1}^{\otimes N-1}\otimes\sigma_2$, $\sigma_{2}^{\otimes N}$ and $\sigma_{3}^{\otimes N-1}\otimes\mathbb{I}$ form a triple of anticommuting matrices, one can easily see that the trace distance between any two $EG_{N} \ $ states with even $N$ reduces to (half) the Euclidean distance between their corresponding triples, as is also the case for the $\mathcal{M}_{N}^{3}$ states with odd $N$~\cite{cianciaruso2016accessible}. This means that the trace distance-based measure of $M$-inseparable multiqubit entanglement for $\varpi$ is simply the Euclidean distance from its triple to the unit octahedron. We now prove that the analytical expression of the robustness of $M$-inseparable multiqubit entanglement $E_{M}^{R}(\varpi)$ of an arbitrary $EG_{N} \ $ state $\varpi$ for any odd $N>1$ is the following: \begin{equation}\label{oddNrobustnessEGN} E^R_{M}(\varpi) = \left\{ \begin{array}{ll} 0 \,, & \hbox{$h_\varpi \leq 0$ or $M \leq \left \lfloor{N/2}\right \rfloor + 1$;} \\ h_\varpi\,, & \hbox{otherwise.} \end{array} \right. \end{equation} To do this, given an $EG_{N} \ $ state $\varpi$ and taking into account Eq.~(\ref{Eq:RobustnessRGS}), we need to solve the following simplified optimisation (which can easily be posed as a semidefinite program~\cite{napoli2016robustness}) \begin{equation}\label{Eq:RobustnessEGN} E_{M}^{R}(\varpi)= \inf_{\tau \in \mbox{$EG_{N} \ $}} \left\lbrace s \geq 0 \left| \frac{\varpi + s \tau}{1+s}=: \sigma \in \mathcal{S}_{M}^{EG_N} \right\rbrace \right. , \end{equation} i.e. we need to find the smallest positive $s$ such that $\sigma=\frac{\varpi + s \tau}{1+s}$ is an $M$-separable $EG_{N} \ $ state and $\tau$ is any $EG_{N} \ $ state. 
In other words, we need to prove that $s=h_\varpi$ is the smallest positive $s$ for which $\sigma=\frac{\varpi + s \tau}{1+s}$ is represented in the $\{d_1,d_2,d_3\}$-space by a point belonging to the unit octahedron $\mathcal{O}_1$, and $\tau$ by any point in the tetrahedron ${\cal T}_{(-1)^{(N-1)/2}}$, provided that $h_\varpi > 0$ and $M > \left \lfloor{N/2}\right \rfloor + 1$, which are the only nontrivial cases where $\varpi$ is not $M$-separable. In the following we will assume without loss of generality that $\varpi$ belongs to the corner containing the vertex $\{(-1)^{(N-1)/2},(-1)^{(N-1)/2},(-1)^{(N-1)/2}\}$, since all the $EG_{N} \ $ states belonging to the other three corners can be obtained from this by simply applying a single-qubit local unitary $\sigma_{i}~\otimes~\mathbb{I}^{\otimes N-1}$, $i \in \{1,2,3\}$, under which any sort of multiqubit entanglement is invariant. The optimisation in Eq.~(\ref{Eq:RobustnessEGN}) can be solved simply by using the fact that the optimal $\tau$ must be as far from $\varpi$ as possible and is hence represented by a point on the base of the tetrahedron ${\cal T}_{(-1)^{(N-1)/2}}$ opposite to $\varpi$, i.e. given by a triple $\{e_1,e_2,e_3\}$ satisfying \begin{equation}\label{eq:optimaltaucontraint} e_1+e_2+e_3=- (-1)^{(N-1)/2}, \end{equation} shown as the shaded red region in Fig.~\ref{Fig:Robustness}. For a given $\tau$ satisfying this condition, one can then easily see that the optimal $\sigma$ lies on the intersection of the line connecting $\tau$ and $\varpi$ (given by the convex combination $\sigma = \frac{\varpi + s \tau}{1+s}$ for $s \geq 0$) with the face of the unit octahedron $\mathcal{O}_{1}$ closest to $\varpi$, given by any triple $\{s_1,s_2,s_3\}$ satisfying \begin{equation}\label{eq:optimalsigmaconstraint} s_1+s_2+s_3=(-1)^{(N-1)/2}, \end{equation} see again Fig.~\ref{Fig:Robustness} for an illustration. 
One then finds that $s = h_{\varpi}$, which holds for \emph{any} choice of $\tau$ on the base of the tetrahedron. It is hence clear that there is not a unique pair of $\tau \in \mbox{$EG_{N} \ $}$ and $\sigma \in \mathcal{S}_{M}^{EG_N}$ attaining the infimum in Eq.~(\ref{Eq:RobustnessEGN}). We have shown that one can in fact attain the infimum with any $\tau$ on the base of the tetrahedron furthest from $\varpi$ and any $\sigma$ on the face of the octahedron closest to $\varpi$, provided that they are collinear with $\varpi$ itself. The optimal $s$ is then given by the plane height $h_{\varpi}$, as shown in Eq.~(\ref{oddNrobustnessEGN}). The non-uniqueness in the optimisation means that the infimum can even be attained by an $M$-separable $\tau$ sitting on the face of the octahedron $\mathcal{O}_{1}$ furthest from $\varpi$. A consequence of this is that the robustness of $M$-inseparable multiqubit entanglement $E_{M}^{R}(\varpi)$ of any $EG_{N} \ $ state $\varpi$ coincides with the \textit{standard} robustness, where the optimisation over $\tau$ in Eq. \eqref{Eq:RobustnessEGN} is additionally restricted to $M$-separable states~\cite{vidal1999robustness}. The standard robustness was previously calculated for two-qubit Bell-diagonal states in~\cite{akhtarshenas2003robustness}, which have an identical geometry to the odd $N$ $EG_{N} \ $ states. It is also relevant to note that the robustness of $M$-inseparable multiqubit entanglement coincides with (twice) the trace distance-based measure $E^{D_{\text{Tr}}}_{M}(\varpi) = h_{\varpi}/2$ for $EG_{N} \ $ states $\varpi$ with odd $N>1$~\cite{cianciaruso2016accessible}. Intriguingly, the closest $M$-separable state to $\varpi$ according to any contractive distance (such as the trace distance) is also a valid $M$-separable $EG_{N} \ $ state solving the optimisation for the robustness, see Fig.~\ref{Fig:Robustness}. 
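The geometric argument above can be checked by brute force in the triple representation (a numerical sketch of our own, using the $N \equiv 1 \pmod 4$ orientation and an arbitrarily chosen valid triple): sampling $\tau$ over the tetrahedron and scanning for the minimal mixing parameter reproduces $E^R_M(\varpi) = h_\varpi$:

```python
import itertools
import numpy as np

# an EG_N triple in the corner of the tetrahedron T_+ (orientation for, e.g., N = 5)
d = np.array([0.9, 0.7, 0.8])
h = 0.5 * (np.abs(d).sum() - 1)          # = 0.7 here

# vertices of T_+ : {1,1,1}, {-1,-1,1}, {1,-1,-1}, {-1,1,-1}
verts = np.array([[1, 1, 1], [-1, -1, 1], [1, -1, -1], [-1, 1, -1]], float)

def min_mixing(d, e, s_grid):
    # smallest s >= 0 with (d + s e)/(1 + s) inside the octahedron |x|_1 <= 1
    for s in s_grid:
        if np.abs(d + s * e).sum() <= 1 + s + 1e-9:
            return s
    return np.inf

s_grid = np.arange(0.0, 3.0, 5e-4)
best = np.inf
# sample tau over convex combinations of the tetrahedron vertices
for w in itertools.product(np.linspace(0, 1, 5), repeat=4):
    if abs(sum(w) - 1) > 1e-9:
        continue
    e = np.array(w) @ verts
    best = min(best, min_mixing(d, e, s_grid))

# the minimal mixing over all sampled tau reproduces the plane height h_varpi
assert abs(best - h) < 2e-3
```

The scan also confirms that no $\tau$ inside the tetrahedron achieves $s < h_\varpi$, in agreement with the non-uniqueness discussed above.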
Additionally, we note that the standard and generalised robustness provide, respectively, upper and lower bounds for a family of norms introduced in \cite{regula2017convex} which constitute measures of $M$-inseparable multiqubit entanglement generalising the greatest cross norm \cite{rudolph2001new}. The fact that the two robustness quantifiers coincide in this case then implies that the multiqubit norms are also equal to them for all $EG_{N} \ $ states. These simplifications for $EG_{N} \ $ states highlight the wide scope of the applicability of our framework to different resource measures when one chooses a suitably simple class of RGSs. \begin{description} \item[Step Four: ]{Optimise the lower bound $E_{M}(\Pi(U_{\otimes}\rho\, U_{\otimes}^{\dagger})) \leq E_{M}(\rho)$ over single-qubit local unitaries $U_{\otimes}$.} \end{description} We now consider optimising the lower bound provided through our framework to $E_{M}(\rho)$ for any state $\rho$ by varying over single-qubit unitaries $U_\otimes = \bigotimes_{\alpha=1}^{N} U^{(\alpha)}$ and considering the corresponding $EG_{N} \ $ state $\Pi(U_\otimes \rho U_\otimes^\dagger)$, resulting in the maximised lower bound \begin{equation}\label{lowopt} \sup_{U_\otimes} E_{M}(\Pi(U_\otimes \rho U_\otimes^\dagger)) \leq E_{M}(\rho)\,. \end{equation} Experimentally, the optimised bound can be accessed by measuring a triple of correlation functions $\{\widetilde{d}_j\}$ given by the expectation values of correspondingly rotated Pauli operators on each qubit, $\widetilde{d}_1 = \langle U_\otimes^\dagger (\sigma_1^{\otimes N-1}\otimes \sigma_2) U_\otimes \rangle$, $\widetilde{d}_2 = \langle U_\otimes^\dagger \sigma_2^{\otimes N} U_\otimes \rangle$, $\widetilde{d}_3 = \langle U_\otimes^\dagger (\sigma_3^{\otimes N-1} \otimes \mathbb{I}) U_\otimes \rangle$, and is non-zero whenever $M > \left \lfloor{N/2}\right \rfloor +1$ and $\sum_{j=1}^{3} |\widetilde{d}_j| > 1$. 
For odd $N>1$, optimality in Eq.~(\ref{lowopt}) for both the family of distance-based measures and the robustness can always be achieved by the choice of $U_\otimes$ such that the quantity $\widetilde{h}_\varpi = \frac12\left( \sum_{j=1}^3 |\widetilde{d}_j|-1\right)$ is maximum. For even $N$, while measures are not generally monotonic functions of $\widetilde{h}_\varpi$, one can take as an ansatz that optimising $\widetilde{h}_\varpi$ provides an improved lower bound. \begin{table*}[!t] \centering \begin{tabular}{ccccc} \hline \hline $N$ & \multicolumn{1}{c}{State} & \multicolumn{1}{c}{$\{\widetilde d_1,\widetilde d_2,\widetilde d_3\}$} & \multicolumn{1}{c}{$\sum_{j=1}^{3} |\widetilde{d}_j|$} & \multicolumn{1}{c}{$\{\theta,\psi,\phi\}$} \\ \hline $3$ & $|\text{GHZ}^{(3)}\rangle$ & $\left\lbrace 1, -1, 1 \right\rbrace$ &$3$ & $\left\lbrace 0,\frac{\pi}{12}, \frac{\pi}{12} \right\rbrace$ \\ \hline $4$ & $|\text{GHZ}^{(4)}\rangle$ & $\left\lbrace \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0\right\rbrace$ & $\sqrt{2}$ & $\left\lbrace 0, \frac{\pi}{32}, \frac{\pi}{32} \right\rbrace$ \\ \hline $5$ & $|\text{GHZ}^{(5)}\rangle$ & $\left\lbrace 1, 1, 1\right\rbrace$ & $3$ & $\left\lbrace 0, \frac{\pi}{20}, \frac{\pi}{20} \right\rbrace$ \\ \hline $6$ & $|\text{GHZ}^{(6)}\rangle$ & $\left\lbrace \frac{1}{\sqrt{2}}, - \frac{1}{\sqrt{2}}, 0\right\rbrace$ & $\sqrt{2}$ & $\left\lbrace 0, \frac{\pi}{48}, \frac{\pi}{48} \right\rbrace$ \\ \hline $7$ & $|\text{GHZ}^{(7)}\rangle$ & $\left\lbrace 1, -1, 1\right\rbrace$ & $3$ & $\left\lbrace 0, \frac{\pi}{28}, \frac{\pi}{28} \right\rbrace$ \\ \hline \hline \end{tabular} \caption{\label{TableExamples} Lower bounds to the $M$-inseparable multiqubit entanglement of $|\text{GHZ}^{(N)}\rangle$ when $M > \left \lfloor{N/2}\right \rfloor +1$ can be improved by maximising $\sum_{j=1}^{3} |\widetilde{d}_j| = 2 \widetilde{h}_{\varpi} + 1$, with identical single-qubit unitaries described by the angles $\{\theta,\psi,\phi\}$.} \end{table*} 
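The first row of the table can be verified directly (a consistency check of our own): for $\theta=0$ the parameterised unitary $U_2$ given in the following reduces to $\mathrm{diag}(e^{-i(\psi+\phi)/2}, e^{i(\psi+\phi)/2})$, and with $\psi=\phi=\pi/12$ on each of the three qubits the rotated GHZ state attains $\{\widetilde d_1,\widetilde d_2,\widetilde d_3\}=\{1,-1,1\}$:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

# theta = 0, psi = phi = pi/12: U_2 reduces to diag(e^{-i pi/12}, e^{+i pi/12})
a = np.pi / 12
U = np.diag([np.exp(-1j * a), np.exp(1j * a)])
U3 = kron(U, U, U)

ghz = np.zeros(8)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz)
rot = U3 @ rho @ U3.conj().T

d1 = np.trace(rot @ kron(sx, sx, sy)).real
d2 = np.trace(rot @ kron(sy, sy, sy)).real
d3 = np.trace(rot @ kron(sz, sz, I2)).real
# the rotated triple is (1, -1, 1), saturating |d1| + |d2| + |d3| = 3
```

The rotation maps $|\text{GHZ}^{(3)}\rangle$ to $(|000\rangle + i\,|111\rangle)/\sqrt{2}$ up to a global phase, for which the three correlators take their extremal values.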
\begin{comment} By using the well known correspondence between the special unitary group ${\sf SU}(2)$ and special orthogonal group ${\sf SO}(3)$, we have that to any one-qubit unitary $U^{(\alpha)}$ corresponds the orthogonal $3\times 3$ matrix $O^{(\alpha)}$ such that \begin{equation} U^{(\alpha)} \vec{n}\cdot\vec{\sigma}U^{(\alpha)\dagger} = (O^{(\alpha)} \vec{n})\cdot\vec{\sigma}, \end{equation} where $\vec{n}=\{n_1,n_2,n_3\}\in\mathbb{R}^3$ and $\vec{\sigma} = \{\sigma_1,\sigma_2,\sigma_3\}$ is the vector of Pauli matrices. We have then that \begin{equation} \sup_{\{U^{(\alpha)}\}} (|\widetilde{d}_1|+|\widetilde{d}_2|+|\widetilde{d}_3|)=\sup_{\{O^{(\alpha)}\}} (|\widetilde{T}_{11\cdots 2}|+|\widetilde{T}_{22\cdots 2}|+|\widetilde{T}_{33\cdots 0}|), \end{equation} where \begin{eqnarray} \widetilde{T}_{i_1i_2\cdots i_N} &=& \sum_{j_1j_2\cdots j_N} T_{j_1j_2\cdots j_N}O^{(1)}_{i_1j_1}O^{(2)}_{i_2j_2}\cdots O^{(N)}_{i_Nj_N} \\ \widetilde{T}_{i_1i_2\cdots i_{N-1} 0} &=& \sum_{j_1j_2\cdots j_{N-1}} T_{j_1j_2\cdots j_{N-1} 0} O^{(1)}_{i_1j_1}O^{(2)}_{i_2j_2}\cdots O^{(N-1)}_{i_{N-1}j_{N-1}} \end{eqnarray} and \begin{equation} {T}_{i_1i_2\cdots i_N}= \text{Tr}\left[\rho \left(\sigma_{i_1}\otimes\sigma_{i_2}\otimes\cdots\otimes\sigma_{i_N}\right) \right]. \end{equation} \end{comment} Table \ref{TableExamples} illustrates the usefulness of our results on the paradigmatic example of the $N$-qubit GHZ state~\cite{greenberger1990bell} \begin{equation} |\text{GHZ}^{(N)}\rangle = \frac{1}{\sqrt{2}} \left(|00\cdots 00 \rangle + |11\cdots 11 \rangle\right), \end{equation} with $N\geq 3$, which constitutes a primary resource for quantum computation and metrology~\cite{giovannetti2011advances}. See Ref.~\cite{cianciaruso2016accessible} for a comparison to results for $\mathcal{M}_{N}^{3}$ states.
Here, due to the qubit permutation invariance of $|\text{GHZ}^{(N)}\rangle$~\cite{cianciaruso2016accessible}, optimisation of $\widetilde{h}_{\varpi}$ can be achieved by setting all the single-qubit unitaries to be identical, i.e.\ $U^{(\alpha)} = U_{2}$ for all $\alpha$. The unitary $U_{2}$ can be parameterised by $3$ angles $\{\theta, \psi, \phi\}$ in the following way, \begin{equation} U_2 = \left( \begin{array}{cc} \cos\frac{\theta}{2} e^{-i\frac{\psi+\phi}{2}} & -i \sin\frac{\theta}{2} e^{-i\frac{\phi-\psi}{2}} \\ -i \sin\frac{\theta}{2} e^{i\frac{\phi-\psi}{2}} & \cos\frac{\theta}{2} e^{i\frac{\psi+\phi}{2}} \\ \end{array} \right). \end{equation} \section{Discussion}\label{Section:Discussion} Our general framework provides a clear-cut approach to finding lower bounds on resource measures evaluated on arbitrary states. This framework is founded upon the hereby introduced concepts of resource non-increasing projections and the corresponding resource guarantor states. Each step in the framework is feasible to carry out. The first step can be performed by systematically identifying an RNIP, as we have shown in Sec.~\ref{Sec:Systematic}, or by using intuition about the resource under consideration, as may be done for coherence. The second step can be realised by characterising the intersection between free states and RGSs, as we have shown in Appendix~\ref{Appendix:SepEGN} for multiqubit entanglement. The resultant optimisation in step three is necessarily simpler than for the corresponding arbitrary state, due to the simplified structure of the RGS. We have furthermore shown that the optimisation can be expressed as an SDP for the resource robustness and can hence be evaluated computationally with little overhead. Finally, varying over local unitaries in the fourth step can be a restricted optimisation over a constrained and/or discrete set of candidates.
Moreover, our framework is more experimentally friendly in the sense that it necessarily requires fewer measurements than full state tomography. We illustrated the relevance of this framework for multiqubit entanglement by constructing a general, accessible formalism to identify entanglement non-increasing projections and their resource guarantor states, giving a particular example of a projection that results in suitably defined $EG_{N} \ $ states. We then proceeded to complete the steps of our framework for this example, allowing us to find analytic lower bounds to the multiqubit entanglement of GHZ states that can be measured experimentally using only three local measurement settings. Our approach can be understood as a particular type of quantitative resource witness~\cite{eisert2007quantitative,guhne2007estimating}, providing an approximation of the resources present in a system based on the results of a limited selection of measurement settings. It will be of further interest to compare the efficiency of lower bounds arising from our framework to those arising from other approaches, as has been done specifically for entanglement in~\cite{cianciaruso2016accessible}. More broadly, our framework relies on the universal concept of resource monotonicity, and can hence be applied in principle to a vast range of possible resource measures. We have focused in this work on the provision of lower bounds to resource measures. These lower bounds are useful for certifying a minimum level of usefulness of a resource state. In practice, whenever a measure can be linked to the performance of an operational protocol, our lower bounds can be harnessed to guarantee a worst-case performance of using a given resource state. Nevertheless, evaluating \emph{upper} bounds on relevant resource measures is also important, allowing for better comparison between resource states and hence a finer-grained identification of the states most useful in an operational setting.
Our framework is geared towards providing lower bounds by contracting the state space using resource non-increasing projections. It will be of future interest to identify dual frameworks providing upper bounds for a given class of resource states. By applying our framework to $EG_{N} \ $ states, we have been able to provide new results for evaluating the robustness of entanglement in both $EG_{N} \ $ states with odd $N$ and $\mathcal{M}_{N}^{3}$ states with even $N$, complementing previous evaluations of the robustness of entanglement for two-qubit Bell diagonal states~\cite{akhtarshenas2003robustness}. Our results show that the robustness coincides, for Bell diagonal states, with the plane height $h_{\varpi}$, which in turn coincides with the two-qubit concurrence and with half the trace distance-based measure of entanglement. Our approach therefore allowed the evaluation of the robustness of entanglement, which is an NP-hard problem~\cite{brandao2005quantifying}, to be simplified to an intuitive geometric optimisation for relevant classes of states. It is hoped that our framework will provide further simplifications when using alternative resource non-increasing projections. Quantum resources embody the power behind presently developing quantum technologies. These technologies will require rigorous verification, through benchmarking, of the resources present in the employed devices. Our framework allows for a quantitative benchmark with a low overhead. Next steps of our work could be to provide a variety of new lower bounds to resource measures stemming from different choices of projections. From an analytical perspective, it will be of interest to formalise whether a link exists between the strength of the projection (i.e.~the amount of resource lost) and the simplicity of the corresponding family of resource guarantor states. Here it is expected that projecting onto a very simple family necessarily entails losing a large amount of the resource.
On the other hand, experimentally it will be interesting to harness our established lower bounds for concrete applications and proof-of-concept experiments to verify and quantify the resources present in complex quantum systems. \ack{ \vspace*{-.3cm} We thank Otfried G\"uhne and Nicolangelo Iannella for informative discussions, as well as anonymous referees for crucial feedback and comments. We acknowledge funding from the European Research Council (ERC) under the Starting Grant GQCOP (Grant No.~637352) and the Foundational Questions Institute (fqxi.org) under the Physics of the Observer Programme (Grant No.~FQXi-RFP-1601).}
\section{Introduction} \noindent The first ideas on syllogisms were produced by the Greek philosopher Aristotle in his study of proper reasoning. In his Prior Analytics he states: \textquotedblleft syllogism is discourse in which, certain things being stated, something other than what is stated follows of necessity from their being so. I mean by the last phrase that they produce the consequence, and by this, that no further term is required from without in order to make the consequence necessary\textquotedblright \cite{jon}. A syllogism is a formal logical pattern for obtaining a conclusion from a set of premises. A categorical syllogism can be defined as a logical argument made up of three categorical propositions: two premises, called the major premise and the minor premise, and a conclusion. Each of these propositions expresses a quantified relationship between two objects. The position of the objects in the premises generates a classification into so-called syllogistic figures; there are $4$ different figures. In turn, the ordering of the quantifiers yields $64$ different combinations in each figure. Therefore, the categorical syllogistic system consists of $256$ syllogistic moods, of which $15$ are valid unconditionally and $9$ conditionally; in total $24$ of them are valid. The syllogisms in the conditional group are also said to be \textit{strengthened}, or valid under \textit{existential import}, which is an explicit assumption of the existence of some \textit{S}, \textit{M} or \textit{P}. For these we add a rule to SLCD, \textit{Some X is X when $X$ exists}, and consequently obtain the formal system SLCD$^\dagger$. Throughout the centuries, the categorical syllogistic system was a paramount part of logic. With the innovations of mathematical logic in the 19th and the beginning of the 20th century, the situation changed. However, when J.
\L{}ukasiewicz introduced syllogistic as an axiomatic system built on classical propositional calculus \cite{Lukasiewicz}, the situation became reversed once again. Thereby, the categorical syllogistic system plays an important role in the mainstream of contemporary formal logic. Furthermore, \L{}ukasiewicz's axiomatization of syllogisms is still open, and new ideas arise from time to time. In recent years, the use of syllogisms has been studied extensively and investigated under different treatments, such as computer science \cite{Kumova, Hartmann}; engineering \cite{Kulik, jet}; artificial intelligence \cite{kryvyi}, \cite{zadeh}; etc. Computer-science-oriented logicians have also begun to take part \cite{Rocha}. The use of diagrams in formal logical reasoning has attracted interest for years, owing to the need to visualize complex logic problems that are difficult to understand. For example, at the end of the 1800s, Lewis Carroll used an original diagrammatic scheme to visualize categorical syllogisms in his book \cite{Lewis}. Instead of Venn diagrams, he used literal diagrams to solve categorical syllogistic problems containing $2$ terms, $3$ terms and so on. Moreover, the use of diagrams in computer systems is a significant topic today, because it has the potential to offer systems that are clearer and more flexible to work with. A common problem of various systems nowadays is that they are complicated, hard to understand and hard to use. So, we need diagrams or other graphical representations to develop more effective and efficient problem solving \cite{nakatsu}. While applications of diagrammatic reasoning in the cognitive sciences seek ways to support learners in complex tasks, typically with paper-based or more \textquotedblleft static\textquotedblright \ diagrams \cite{Mayer1, Mayer2}, applications in artificial intelligence more typically concern how to program a computer to carry out these tasks \cite{Glaskow}.
There are also related works on the use of diagrams for syllogisms in different areas, such as \cite{moktefi2013beyond, castro2017re, alternativetovennn, manzano}. In this paper, we show how categorical syllogistic statements are expressed using Carroll's literal diagrams. Then, we give a new algorithm deciding whether a syllogism and a strengthened syllogism are valid or not, with the help of the calculus systems SLCD and SLCD$^{\dagger}$, respectively. \section{Preliminaries} In this section, we sketch the notation and terminology used throughout this manuscript. A categorical syllogism can be defined as a deductive argument consisting of two logical propositions and a conclusion obtained from these propositions. It contains exactly three terms, each of which occurs in exactly two of the three constituent statements, where the propositions and the conclusion each express a quantified relationship between two terms. The objects in a categorical proposition are related in one of the following four distinct forms, as in Table \ref{tab-1}. \begin{table}[h!] \centering \caption{Categorical Syllogistic Propositions} \label{tab-1} \begin{tabular}{c c c } \hline Symbol & Statements & Generic Term \\ \hline $A$ & All $X$ are $Y$ & Universal Affirmative\\ $E$ & No $X$ are $Y$ & Universal Negative\\ $I$ & Some $X$ are $Y$ & Particular Affirmative\\ $O$ & Some $X$ are not $Y$ & Particular Negative\\ \hline \end{tabular} \end{table} For any syllogism, the categorical propositions are composed of three terms, a subject term, a predicate term, and a middle term: the subject term is the subject of the conclusion and is denoted by $S$; the predicate term modifies the subject in the conclusion and is denoted by $P$; and the middle term, which occurs in the two premises and links the subject and predicate terms, is denoted by $M$. The subject and predicate terms occur in different premises, while the middle term occurs once in each premise.
The premise which consists of the predicate term and the middle term is called the \textit{major premise}; the premise which consists of the subject term and the middle term is called the \textit{minor premise}. Categorical syllogisms are grouped into $4$ different ways, traditionally called figures, depending on the positions of the term-variables $S$, $P$ and $M$, as in Table \ref{tab-2}. \begin{table}[h!] \centering \caption{Categorical Syllogistic Figures} \label{tab-2} \begin{tabular} [c]{|c|c|c|c|}\hline Major & Minor & Conclusion & Figure\\\hline\hline $M-P$ & $S-M$ & $S-P$ & 1\\\hline $P-M$ & $S-M$ & $S-P$ & 2\\\hline $M-P$ & $M-S$ & $S-P$ & 3\\\hline $P-M$ & $M-S$ & $S-P$ & 4\\\hline \end{tabular} \end{table} Aristotle identified only the first three figures; the last one was discovered in the Middle Ages. He examined each mood and figure to determine whether it was valid or not, and from this he obtained some common properties of valid syllogisms, which are called rules of deduction. These rules are as follows: $\textbf{Step 1:}$ Relating to premises irrespective of conclusion or figure: \begin{itemize} \item[(a)]No inference can be made from two particular premises. \item[(b)]No inference can be made from two negative premises. \end{itemize} $\textbf{Step 2:}$ Relating to propositions irrespective of figure: \begin{itemize} \item[(a)]If one premise is particular, the conclusion must be particular. \item[(b)]If one premise is negative, the conclusion must be negative. \end{itemize} $\textbf{Step 3:}$ Relating to distribution of terms: \begin{itemize} \item[(a)]The middle term must be distributed at least once. \item[(b)]A predicate distributed in the conclusion must be distributed in the major premise. \item[(c)]A subject distributed in the conclusion must be distributed in the minor premise. \end{itemize} In the categorical syllogistic system, there are $64$ different syllogistic forms for each figure. These are called \textit{moods}.
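As an illustration, Steps 1 and 2 above are necessary conditions for validity and are easy to mechanise; the following sketch (with illustrative helper names, and deliberately omitting the figure-dependent distribution rules of Step 3) checks a mood given as three proposition letters:

```python
# Necessary-condition check based on Steps 1 and 2 above
# (Step 3, on the distribution of terms, is figure-dependent and omitted here).
PARTICULAR = {"I", "O"}  # particular propositions
NEGATIVE = {"E", "O"}    # negative propositions

def passes_steps_1_and_2(major, minor, conclusion):
    # Step 1(a): no inference from two particular premises
    if major in PARTICULAR and minor in PARTICULAR:
        return False
    # Step 1(b): no inference from two negative premises
    if major in NEGATIVE and minor in NEGATIVE:
        return False
    # Step 2(a): a particular premise forces a particular conclusion
    if (major in PARTICULAR or minor in PARTICULAR) and conclusion not in PARTICULAR:
        return False
    # Step 2(b): a negative premise forces a negative conclusion
    if (major in NEGATIVE or minor in NEGATIVE) and conclusion not in NEGATIVE:
        return False
    return True

# Every distinct unconditionally valid mood (cf. Table 3) passes these
# necessary conditions, while e.g. IOO and AIA are rejected.
valid_moods = ["AAA", "EAE", "AII", "EIO", "AEE", "AOO", "IAI", "OAO"]
assert all(passes_steps_1_and_2(*m) for m in valid_moods)
assert not passes_steps_1_and_2("I", "O", "O")  # two particular premises
assert not passes_steps_1_and_2("A", "I", "A")  # particular premise, universal conclusion
```

Note that these checks are only necessary, not sufficient: a mood passing them may still be invalid in a given figure because of Step 3.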
Therefore, the categorical syllogistic system is composed of $256$ possible syllogisms. Only $24$ of them are valid in this system, and they are divided into two groups, of $15$ and of $9$.\\ The syllogisms in the first group are valid \textit{unconditionally}; they are given in Table \ref{tab-3}. \begin{table}[h!] \centering \caption{Unconditionally Valid Forms} \label{tab-3} \begin{tabular}{ c c c c} \hline Figure I & Figure II & Figure III & Figure IV \\ \hline $AAA$ & $EAE$ & $IAI$ & $AEE$\\ $EAE$ & $AEE$ & $AII$ & $IAI$\\ $AII$ & $EIO$ & $OAO$ & $EIO$\\ $EIO$ & $AOO$ & $EIO$ & \\ \hline \end{tabular} \end{table} The syllogisms in the second group, called \textit{strengthened syllogisms}, are valid \textit{conditionally}, i.e.\ under \textit{existential import}, which is an explicit supposition of the existence of some term; they are shown in Table \ref{tab-4}. \pagebreak \begin{table}[h!] \centering \caption{Conditionally Valid Forms} \label{tab-4} \begin{tabular}{c c c c c} \hline Figure I & Figure II & Figure III & Figure IV & Necessary Condition\\ \hline $AAI$ & $AAO$ & & $AEO$ & \textit{S} exists\\ $EAO$ & $EAO$ & & & \textit{S} exists\\ & & $AAI$ & $EAO$ & \textit{M} exists\\ & & $EIO$ & & \textit{M} exists\\ & & & $AAI$ & \textit{P} exists\\ \hline \end{tabular} \end{table} \section{Representation of Categorical Syllogisms via Carroll's Diagrams and a Calculus System SLCD} Carroll's diagrams, thought up in 1884, are Venn-type diagrams where the universe is represented by a square. Nevertheless, it is not clear whether Carroll devised his diagrams independently or as a modification of John Venn's. Carroll's scheme is a productive method summing up several developments that have been introduced by researchers working in this area. For the categorical syllogistic system, we describe a homomorphic mapping between the categorical syllogistic propositions and Carroll's diagrams. Let $X$ and $Y$ be two terms and let $X'$ and $Y'$ be the complements of $X$ and $Y$, respectively.
For two terms, Carroll divides the square into four cells, and by this means he obtains the so-called bilateral diagram, as shown in Table \ref{tab-5}. \begin{table}[h!] \centering \caption{Relation of Two Terms} \label{tab-5} \begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & $X'Y'$ & $XY'$ \\ \hline $Y$ & $X'Y$ & $XY$ \\ \hline \end{tabular} \end{table} When describing the relations between two terms, each of these four cells can be in one of three states: $0$, $1$, or \textit{blank}. Here $0$ means that the intersection cell of the two terms has no element, $1$ means that it is not empty, and a \textit{blank} cell means that we have no information about its content; it could be $0$ or $1$. Similarly, let $X$, $Y$, and $M$ be three terms and $X'$, $Y'$, and $M'$ be the complements of $X$, $Y$, and $M$, respectively. To examine all relations between three terms, Carroll added one more square in the middle of the bilateral diagram, obtaining the so-called trilateral diagram, as in Figure \ref{fig-1}. \begin{figure}[h!] \centering {\scalebox{0.70}{ \includegraphics[ ]{trilateral.jpg}}}% \caption{Relations of three terms} \label{fig-1} \end{figure} Each cell in a trilateral diagram is marked with a $0$ if it has no element, and with an $\textbf{I}$ if it is not empty. An $\textbf{I}$ may also be placed on the line where two cells meet, meaning that at least one of these cells is not empty; so $\textbf{I}$ is different from $1$. In addition, if any cell is \textbf{blank}, it has two possibilities, $0$ or $\textbf{I}$. To obtain the conclusion of a syllogism, the information of the two premises is entered on a trilateral diagram. This presentation is more useful for the elimination method than the Venn diagram view: one can read off the conclusion of the premises more accurately and quickly from a trilateral diagram.
By means of this method, we reduce the data from a trilateral diagram to a bilateral diagram involving only the two terms that occur in the conclusion, thereby eliminating the middle term. This method is applied according to the rules below \cite{Lewis}: \noindent\textit{\textbf{First Rule:}} $0$ and $\textbf{I}$ are first placed on the trilateral diagram. \noindent\textit{\textbf{Second Rule:}} If a quarter of the trilateral diagram contains an \textquotedblleft$\textbf{I}$\textquotedblright \ in either cell, then it is certainly occupied, and one may mark the corresponding quarter of the bilateral diagram with a \textquotedblleft$1$\textquotedblright \ to indicate that it is occupied. \noindent\textit{\textbf{Third Rule:}} If a quarter of the trilateral diagram contains two \textquotedblleft$0$\textquotedblright s, one in each cell, then it is certainly empty, and one may mark the corresponding quarter of the bilateral diagram with a \textquotedblleft$0$\textquotedblright \ to indicate that it is empty. We obtain the required conclusion of a syllogism by using these rules. The significance of Carroll's method of transfer, unknown to Venn, should not be underestimated: it shows how to extract the conclusion from the premises of a syllogism \cite{moktefi2012history}. Now, we give the set-theoretical representation of syllogistic arguments by means of bilateral diagrams. To build such a model, we draw on Carroll's diagrammatic method. We define a map which assigns a set to each bilateral diagram. Eventually, our main purpose is to construct a complete bridge between sets and categorical syllogisms, as in Table \ref{tab-6}.
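The Second and Third Rules above can be sketched as a small function acting on one quarter of the trilateral diagram (cell values encoded as the strings '0' and 'I', with None for a blank cell; the function name is illustrative):

```python
def transfer_quarter(inner, outer):
    """Reduce one quarter of a trilateral diagram to the corresponding
    cell of the bilateral diagram, following the Second and Third Rules.
    Each cell is '0' (empty), 'I' (occupied) or None (blank)."""
    if inner == "I" or outer == "I":
        return 1            # Second Rule: an 'I' in either cell -> occupied
    if inner == "0" and outer == "0":
        return 0            # Third Rule: two '0's -> certainly empty
    return None             # otherwise the quarter remains undetermined

# A quarter with an 'I' in its inner cell is marked occupied; a quarter
# with one '0' and one blank cell stays blank in the bilateral diagram.
assert transfer_quarter("I", "0") == 1
assert transfer_quarter("0", "0") == 0
assert transfer_quarter("0", None) is None
```

Applying this function to the four quarters of a trilateral diagram yields the bilateral diagram of the conclusion, with the middle term eliminated.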
\begin{table}[h] \centering \caption{\textit{The Paradigm for the representation of syllogistic arguments by using sets}} \label{tab-6} \begin{tabular}{c|c|c|c} & LOGIC& DIAGRAMS & SETS \\ \hline PREMISES& Propositions & $\xrightarrow{Translate}$& Sets\\ & & & $\downarrow$\\ CONCLUSIONS & Propositions & $\xleftarrow{Translate}$ & Sets \\ \end{tabular} \end{table} Let $X$ and $Y$ be two terms whose complements are denoted by $X'$ and $Y'$, respectively. Assume that $p_i$ denotes a possible form of a bilateral diagram, with $1\leq i \leq k$, where $k$ is the number of possible forms of the bilateral diagram, as in Table \ref{tab-7}. \begin{table}[h] \centering \caption{Bilateral diagram for a quantity relation between $X$ and $Y$} \label{tab-7} \begin{tabular}{|c|c|c|} \hline $p_i$ & $X'$ & $X$ \\ \hline $Y'$ & $n_1$ & $n_2$ \\ \hline $Y$ & $n_3$ & $n_4$ \\ \hline \end{tabular} \end{table} \noindent where $n_1, n_2, n_3, n_4\in\{0,1\}$. Throughout this paper, $R_{(A)}$, $R_{(E)}$, $R_{(I)}$ and $R_{(O)}$ correspond to the ``$All$'', \textquotedblleft$No$\textquotedblright, \textquotedblleft$Some$\textquotedblright and \textquotedblleft$Some-not$\textquotedblright \ statements, respectively. \begin{example}\label{example1} We analyse the statement $\textit{``No X are Y''}$: it means that there is no element in the intersection cell of $X$ and $Y$. We show this in the bilateral diagram in Table \ref{tab-8}. From Table \ref{tab-8}, we obtain all possible bilateral diagrams which have $0$ in the intersection cell of $X$ and $Y$. So, Table \ref{tab-9} shows all possible forms of $\textit{``No X are Y"}$.\\ \begin{table}[h!] \centering \caption{\textit{Bilateral diagram for ``$No$ $X$ $are$ $Y$"}} \label{tab-8} $R_{(E)}=$ \begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & & \\ \hline $Y$ & & 0 \\ \hline \end{tabular} \end{table} \begin{table}[h!]
\centering \caption{\textit{All possible forms of ``$No$ $X$ $are$ $Y$"}} \label{tab-9} \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_1}$ & $X'$ & $X$ \\ \hline $Y'$ & 0 & 0 \\ \hline $Y$ & 0 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_2}$ & $X'$ & $X$ \\ \hline $Y'$ & 0 & 0 \\ \hline $Y$ & 1 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_3}$ & $X'$ & $X$ \\ \hline $Y'$ & 0 & 1 \\ \hline $Y$ & 0 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_4}$ & $X'$ & $X$ \\ \hline $Y'$ & 1 & 0 \\ \hline $Y$ & 0 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_5}$ & $X'$ & $X$ \\ \hline $Y'$ & 0 & 1 \\ \hline $Y$ & 1 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_6}$ & $X'$ & $X$ \\ \hline $Y'$ & 1 & 0 \\ \hline $Y$ & 1 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_7}$ & $X'$ & $X$ \\ \hline $Y'$ & 1 & 1 \\ \hline $Y$ & 0 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_8}$ & $X'$ & $X$ \\ \hline $Y'$ & 1 & 1 \\ \hline $Y$ & 1 & 0 \\ \hline \end{tabular} \end{table} \end{example} Now, in order to define a relation between bilateral diagrams and sets, let us form a set consisting of the numbers which correspond to the possible forms of each bilateral diagram. To this aim, we first define a value mapping in which each possible bilateral diagram corresponds to exactly one value. \begin{definition}\label{definition1}\cite{rus} Let $p_j$ be a possible bilateral diagram and $n_i$ be the value that its $i$-th cell possesses. The value $r^{\mathit{val}}_j$ of $p_j$ is calculated by the formula $$r^{\mathit{val}}_j=\sum_{i=1}^4 2^{(4-i)}n_i, \ \ \ 1\leq j\leq k,$$ where $k$ is the number of all possible forms.
\end{definition} \begin{definition} Let $R^{\mathit{set}}$ be the set of the values which correspond to all possible forms of a bilateral diagram; that is, $R^{\mathit{set}}=\{r^{\mathit{val}}_j: 1\leq j \leq k,\text{ $k$ is the number}$ $\text{of all possible forms}\}$. The set of all these $R^{\mathit{set}}$'s is denoted by $\mathcal{R}^{\mathit{Set}}$. \end{definition} \begin{corollary} We obtain the set representations of all categorical propositions as follows: \begin{itemize} \item[-] \textit{All X are Y:} This means that the intersection cell of $X$ with $Y'$ is empty. We can illustrate this statement as in Table \ref{tab-10}. \begin{table}[h!] \centering \caption{\textit{$X$ intersection with $Y'$ is empty set}} \label{tab-10} $R_{(A)}=$\begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & & 0 \\ \hline $Y$ & & \\ \hline \end{tabular} \end{table} \noindent From Table \ref{tab-10}, we obtain all possible forms by the same method as in Example \ref{example1}. With the help of Definition \ref{definition1}, the set representation of ``\textit{All X are Y}" corresponds to $R^{\mathit{set}}_{(A)}=\{0,1,2,3,8,9,10,11\}$. \item[-]\textit{No X are Y:} There is no element in the intersection cell of $X$ and $Y$, as in Table \ref{tab-11}. \begin{table}[h] \centering \caption{\textit{$X$ intersection with $Y$ is empty set}} \label{tab-11} $R_{(E)}=$\begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & & \\ \hline $Y$ & & 0 \\ \hline \end{tabular} \end{table} \noindent By Example \ref{example1}, we have all possible forms of ``\textit{No X are Y}". Then, we obtain $R^{\mathit{set}}_{(E)}=\{0,2,4,6,8,10,12,14\}$. \newpage \item[-]\textit{Some X are Y:} There is at least one element in the intersection of $X$ and $Y$, as in Table \ref{tab-12}. \begin{table}[h!]
\centering \caption{\textit{$X$ intersection $Y$ has at least one element}} \label{tab-12} $R_{(I)}=$\begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & & \\ \hline $Y$ & & 1 \\ \hline \end{tabular} \end{table} By using the possible bilateral diagrams of $R_{(I)}$, we have $R^{\mathit{set}}_{(I)}=\{1,3,5,7,9,11,13,15\}$. \item[-]\textit{Some X are not Y:} If some elements of $X$ are not $Y$, then they have to be in $Y'$. So, the intersection cell of $X$ and $Y'$ is not empty, as in Table \ref{tab-13}. \begin{table}[h!] \centering \caption{\textit{$X$ intersection $Y'$ has at least one element}} \label{tab-13} $R_{(O)}=$\begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & & $1$ \\ \hline $Y$ & & \\ \hline \end{tabular} \end{table} \noindent From the bilateral diagram of $R_{(O)}$, we get $R^{\mathit{set}}_{(O)}=\{4,5,6,7,12,13,14,15\}$. \end{itemize} \end{corollary} Before discussing the categorical syllogisms via Carroll's diagrams, let us consider the relationship between the possible bilateral diagrams of the premises. \begin{example} Let $p_i$ and $p_j$ be two possible forms of the bilateral diagrams of the major and minor premises, respectively. We take the possible forms of the bilateral diagrams as in Table \ref{tab-14}. \begin{table}[h] \centering \caption{The possible forms of bilateral diagrams} \label{tab-14} $p_i=$ \begin{tabular}{|c|c|c|} \hline & $P'$ & $P$ \\ \hline $M'$ & 1 & 0 \\ \hline $M$ & 0 & 0 \\ \hline \end{tabular} \ \ and \ \ \ $p_j=$ \begin{tabular}{|c|c|c|} \hline & $S'$ & $S$ \\ \hline $M'$ & 0 & 1 \\ \hline $M$ & 0 & 0 \\ \hline \end{tabular} \end{table} We enter the data on a trilateral diagram as in Figure \ref{fig-2}. \begin{figure}[h!] \centering \caption{The relation of two possible forms} \label{fig-2} {\scalebox{0.70}{ \includegraphics[]{trilateral1.jpg}}}% \end{figure} By using the elimination method, we obtain the relation between $S$ and $P$ as in Table \ref{tab-15}.
\begin{table}[h] \centering \caption{The relation between $S$ and $P$} \label{tab-15} $p_k=$ \begin{tabular}{|c|c|c|} \hline & $P'$ & $P$ \\ \hline $S'$ & 0 & 0 \\ \hline $S$ & 1 & 0 \\ \hline \end{tabular} \end{table} The value $r^{\mathit{val}}_i=8$ corresponds to the possible form $p_i$ and $r^{\mathit{val}}_j=4$ corresponds to the possible form $p_j$; we then obtain that $r^{\mathit{val}}_k=2$ corresponds to $p_k$, which is a possible conclusion. \end{example} Let $r^{\mathit{val}}_i$ and $r^{\mathit{val}}_j$ be the numbers corresponding to possible forms of bilateral diagrams which have a common term. Then we can get the relation between the two other terms by using this method. After these examples, we generalize them by a formula. \begin{definition} The syllogistic possible conclusion mapping, denoted by $\ast$, is a mapping which gives us the set of possible conclusions obtained from the possible forms of the major and minor premises. \end{definition} \begin{theorem} Let $r^{\mathit{val}}_i$ and $r^{\mathit{val}}_j$ correspond to the numbers of possible forms of the major and minor premises, respectively. Then, $r^{\mathit{val}}_i\ast r^{\mathit{val}}_j$ equals the value given by the intersection of the row and column corresponding to $r^{\mathit{val}}_i$ and $r^{\mathit{val}}_j$ in Table \ref{tab-16}.
\end{theorem} \begin{table}[h] \centering \caption{Operation table} \label{tab-16} {\small \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\ast$& 0 & 1 & 2 & 3 & 4 & 8 & 12 & 5 & 10 & 6 & 9 & 7 & 11 & 13 & 14 & 15 \\ \hline 0& 0 & & & & & & & & & & & & & & & \\ \hline 1& & 1 & 4 & 5 & & & & & & & & & & & & \\ \hline 2& & 2 & 8 & 10 & & & & & & & & & & & & \\ \hline 3& & 3 & 12 & $H$ & & & & & & & & & & & & \\ \hline 4& & & & & 1 & 4 & 5 & & & & & & & & & \\ \hline 8& & & & & 2 & 8 & 10 & & & & & & & & & \\ \hline 12& & & & & 3 & 12 & $H$ & & & & & & & & & \\ \hline 5& & & & & & & & 1 & 4 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\ \hline 10& & & & & & & & 2 & 8 & 10 & 10 & 10 & 10 & 10 & 10 & 10 \\ \hline 6& & & & & & & & 3 & 12 & 9 & 6 & 11 & 14 & 7 & 13 & 15\\ \hline 9& & & & & & & & 3 & 12 & 6 & 9 & 7 & 13 & 11 & 14 & 15 \\ \hline 7& & & & & & & & 3 & 12 & 13 & 7 & $H_4$ & $H'_3$ & 7 & 13 & $H'_1$ \\ \hline 11& & & & & & & & 3 & 12 & 14 & 11 & $H_3$ & $H'_4$ & 11 & 14 & $H'_2$ \\ \hline 13& & & & & & & & 3 & 12 & 7 & 13 & 7 & 13 & $H_4$ & $H'_3$ & $H'_1$ \\ \hline 14& & & & & & & & 3 & 12 & 11 & 14 & 11 & 14 & $H_3$ & $H'_4$ & $H'_2$ \\ \hline 15& & & & & & & & 3 & 12 & 15 & 15 & $H_1$ & $H_2$ & $H_1$ & $H_2$ & $H$ \\ \hline \end{tabular}} \end{table} In Table \ref{tab-16}, under the possible conclusion operation, some possible forms of premises have more than one possible conclusion, given below: \begin{gather*} H=\{6, 7, 9, 11, 13, 14, 15\},\ H_1=\{7, 11, 15\},\ H_1'=\{6, 7, 9, 11, 13, 15\},\\ H_2=\{13, 14, 15\},\ H_2'=\{11, 14, 15\},\ H_3=\{6, 7, 11, 14, 15\},\\ H_3'=\{6, 7, 13, 14, 15\},\ H_4=\{7, 9, 11, 13, 15\},\ H_4'=\{9, 11, 13, 14, 15\} \end{gather*} Therefore, we have scrutinised all possible cases between two terms and their conclusions. Note that Table \ref{tab-16} is used as the $Syllogistic\_Mapping()$ subalgorithm in Section \ref{sec4}.
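The value encoding $r^{\mathit{val}}=\sum_{i=1}^4 2^{(4-i)} n_i$ of Definition \ref{definition1} and the four proposition sets $R^{\mathit{set}}_{(A)}, R^{\mathit{set}}_{(E)}, R^{\mathit{set}}_{(I)}, R^{\mathit{set}}_{(O)}$ obtained in the corollary above can be reproduced by direct enumeration of the $16$ possible bilateral diagrams (a minimal sketch; the variable names are illustrative):

```python
from itertools import product

def r_val(n1, n2, n3, n4):
    # cell order (n1, n2, n3, n4) = (X'Y', XY', X'Y, XY), as in Table 7
    return 8 * n1 + 4 * n2 + 2 * n3 + n4

diagrams = list(product((0, 1), repeat=4))  # all 16 possible bilateral diagrams

# A: "All X are Y"      -> the XY' cell (n2) must be 0
# E: "No X are Y"       -> the XY  cell (n4) must be 0
# I: "Some X are Y"     -> the XY  cell (n4) must be 1
# O: "Some X are not Y" -> the XY' cell (n2) must be 1
R_A = sorted(r_val(*d) for d in diagrams if d[1] == 0)
R_E = sorted(r_val(*d) for d in diagrams if d[3] == 0)
R_I = sorted(r_val(*d) for d in diagrams if d[3] == 1)
R_O = sorted(r_val(*d) for d in diagrams if d[1] == 1)

assert R_A == [0, 1, 2, 3, 8, 9, 10, 11]
assert R_E == [0, 2, 4, 6, 8, 10, 12, 14]
assert R_I == [1, 3, 5, 7, 9, 11, 13, 15]
assert R_O == [4, 5, 6, 7, 12, 13, 14, 15]
```

Each proposition thus constrains exactly one cell and leaves the other three free, which is why every $R^{\mathit{set}}$ has exactly $2^3 = 8$ elements.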
\begin{definition} The universes of the value sets of major premises, minor premises, and conclusions are denoted by $\mathcal{R}^{\mathit{set}}_{\textit{Maj}}$, $\mathcal{R}^{\mathit{set}}_{\textit{Min}}$ and $\mathcal{R}^{\mathit{set}}_{\textit{Con}}$, respectively. \end{definition} Let $R^{\mathit{set}}_{(k)}$ be an element of $\mathcal{R}^{\mathit{set}}_{\textit{Maj}}$ and $R^{\mathit{set}}_{(l)}$ be an element of $\mathcal{R}^{\mathit{set}}_{\textit{Min}}$. The main problem is to determine the conclusion of these premises. In syllogistic, we have the patterns mentioned in Table \ref{tab-3} and Table \ref{tab-4} above. We now explain them by using bilateral diagrams with an algebraic approach. \begin{definition} The syllogistic mapping, denoted by $\circledast$, is a mapping which gives the conclusion of the major and the minor premises, as in Table \ref{tab-17}. \begin{table}[h!] \centering \caption{The conclusion of the major and the minor premises} \label{tab-17} \begin{tabular}{|c|c|c|} \hline & $P'$ & $P$ \\ \hline $M'$ & & \\ \hline $M$ & & \\ \hline \end{tabular} $\circledast$ \begin{tabular}{|c|c|c|} \hline & $S'$ & $S$ \\ \hline $M'$ & & \\ \hline $M$ & & \\ \hline \end{tabular} = \begin{tabular}{|c|c|c|} \hline & $P'$ & $P$ \\ \hline $S'$ & & \\ \hline $S$ & & \\ \hline \end{tabular} \end{table} \end{definition} \begin{theorem}\label{theorem4.12} Let $R^{\mathit{set}}_{(k)}=\{r^{\mathit{val}}_{k_1},\dots, r^{\mathit{val}}_{k_n}\}$ and $R^{\mathit{set}}_{(l)}=\{r^{\mathit{val}}_{l_1},\dots, r^{\mathit{val}}_{l_t}\}$ be two sets corresponding to the Major and the Minor premises.
Then $\circledast: \mathcal{R}^{\mathit{set}}_{\textit{Maj}}\times\mathcal{R}^{\mathit{set}}_{\textit{Min}}\rightarrow \mathcal{R}^{\mathit{set}}_{\textit{Con}}$, $$R^{\mathit{set}}_{(k)} \circledast R^{\mathit{set}}_{(l)}:= \bigcup^n_{j=1} \bigcup^t_{i=1} r^{\mathit{val}}_{k_j}\ast r^{\mathit{val}}_{l_i},$$ is the conclusion of the premises $R^{\mathit{set}}_{(k)}$ and $R^{\mathit{set}}_{(l)}$. \end{theorem} \begin{theorem}\cite{senturkoner} A syllogism is valid if and only if it is provable in \textit{SLCD}. \end{theorem} \begin{remark} For conditionally valid forms, we need an additional rule, namely \textit{\textquotedblleft Some $X$ are $X$"}. The above theorem can then be used by taking this rule into consideration. \end{remark} \begin{remark} If the rule \textit{\textquotedblleft Some X are X when X exists"} (i.e., $\vdash\boldsymbol{I}_{XX}$) is added to SLCD, then the resulting calculus system is denoted by $\textit{SLCD}^{\dagger}$. \end{remark} \begin{definition}\cite{senturkoner} Let $R_{(k)}$ be the bilateral diagram presentation of a premise. The \textit{transposition} of a premise is obtained by taking the symmetric positions with respect to the main diagonal. It is denoted by $Trans(R_{(k)})$.
\begin{eqnarray*} Trans:\mathcal{R}^{\mathit{set}}&\rightarrow& \mathcal{R}^{\mathit{set}},\\ {R}^{\mathit{set}}_{(k)}&\rightarrow& Trans({R}^{\mathit{set}}_{(k)}) =\{r^{\mathit{val}}_{k^T_1},\dots, r^{\mathit{val}}_{k^T_n}\}.\nonumber \end{eqnarray*} \end{definition} \newpage \begin{theorem}\label{theorem4.17}\cite{senturkoner} Let $R^{\mathit{set}}_{(k)}=\{r^{\mathit{val}}_{k_1},\dots, r^{\mathit{val}}_{k_n}\}$ and $R^{\mathit{set}}_{(l)}=\{r^{\mathit{val}}_{l_1},\dots, r^{\mathit{val}}_{l_t}\}$ be two sets corresponding to the value sets of the Major and the Minor premises, and let $R^{\mathit{set}}_{(s)}=\{r^{\mathit{val}}_{s_1},\dots, r^{\mathit{val}}_{s_m}\}$ be the set corresponding to the constant set values, namely \textquotedblleft Some S are S", \textquotedblleft Some M are M" and \textquotedblleft Some P are P". Then $\circledast^{\dagger}: \mathcal{R}^{\mathit{set}}_{\textit{Maj}}\times\mathcal{R}^{\mathit{set}}_{\textit{Min}}\rightarrow \mathcal{R}^{\mathit{set}}_{\textit{Con}}$, $$ R^{\mathit{set}}_{(k)} \circledast^{\dagger} R^{\mathit{set}}_{(l)}:= \begin{cases} \bigcup^n_{j=1} \bigcup^t_{i=1} \bigcup^m_{h=1} (r^{\mathit{val}}_{k_j}\ast (r^{\mathit{val}}_{s_h}\ast r^{\mathit{val}}_{l^T_i})), \; \; & \textit{if S exists}, \\ \bigcup^n_{j=1} \bigcup^t_{i=1} \bigcup^m_{h=1} (r^{\mathit{val}}_{k_j}\ast (r^{\mathit{val}}_{l_i}\ast r^{\mathit{val}}_{s_h} )), \; \; & \textit{if M exists}, \\ \bigcup^n_{j=1} \bigcup^t_{i=1} \bigcup^m_{h=1} ((r^{\mathit{val}}_{s_h} \ast r^{\mathit{val}}_{k^T_j})\ast r^{\mathit{val}}_{l_i}), \; \; & \textit{if P exists}, \end{cases}$$ is the conclusion of the premises $R^{\mathit{set}}_{(k)}$ and $R^{\mathit{set}}_{(l)}$ under the conditions \textit{S exists}, \textit{M exists} or \textit{P exists}. \end{theorem} \begin{theorem}\cite{senturkoner} A strengthened syllogism is valid if and only if it is provable in \textit{SLCD}$^{\dagger}$.
\end{theorem} \section{An Algorithmic Decision for Categorical Syllogisms in SLCD}\label{sec4} In this part of the manuscript, we give, for the first time in the literature, an algorithm to decide whether a categorical syllogism is valid in the calculus system SLCD or SLCD$^{\dagger}$. \\ The global variables used by all functions are as follows:\\ $Conc[\ ][\ ]:$ two-dimensional array, the set of all possible bilateral diagrams \\ $Const\_set:$ constant set for each of the conditions S exists, M exists and P exists $\bullet$ Algorithm Syllogism:\\ This is the main algorithm. In this algorithm, the $MPSM()$ and $Decision()$ subalgorithms are run for each state (Unconditional, S exists, M exists and P exists) and for each figure (Figure1, Figure2, Figure3 and Figure4). The algorithm sends the related figure as a parameter to the subalgorithm $MPSM()$, and it sends the related state and figure as parameters to the subalgorithm $Decision()$. \begin{algorithm}[H] \DontPrintSemicolon \caption{Syllogism\label{A1}} \KwData{All states for each Figure} \KwResult{Obtain the Conclusion set of syllogisms and make a decision for syllogisms whether \textquotedblleft Valid" or \textquotedblleft Invalid".} \BlankLine \emph{\textbf{Syllogism()}} \; \ForEach{$cond$ in $Conditions\{Unconditional, S\_exists, M\_exists, P\_exists\}$}{ \ForEach{$fig$ in $Figures\{Figure1, Figure2, Figure3, Figure4\}$}{ $MPSM(fig)$\; $Decision(cond, fig)$\; } } \end{algorithm} \vspace{0.5cm} $\bullet$ Subalgorithm MPSM:\\ This algorithm determines the positions of the subject term, middle term and predicate term with respect to the figure parameter given as input.\\ \begin{algorithm}[H] \DontPrintSemicolon \caption{MPSM \label{A2}} \KwData{The specified figure} \KwResult{Positions of the major and minor terms are determined} \BlankLine \emph{\textbf{MPSM(fig)}} \; \uIf{$fig= ``Figure 1"$}{ $mj_1 = ``M"; mj_2 = ``P"$\; $mn_1 = ``S"; mn_2 = ``M"$\;} \uElseIf{$fig=``Figure 2"$}{ $mj_1 = ``P"; mj_2 = ``M"$\; $mn_1 = ``S"; mn_2 = ``M"$\;}
\uElseIf{$fig=``Figure 3"$}{ $mj_1 = ``M"; mj_2 = ``P"$\; $mn_1 = ``M"; mn_2 = ``S"$\;} \uElseIf{$fig=``Figure 4"$}{ $mj_1 = ``P"; mj_2 = ``M"$\; $mn_1 = ``M"; mn_2 = ``S"$\;} \end{algorithm} \vspace{0.5cm} $\bullet$ Subalgorithm Decision:\\ This algorithm determines the major and minor sets for each proposition type (A, E, I and O) of the major and minor premises by using the $Set\_Interpretation()$ subalgorithm. We obtain the conclusion of the premises via $Syllogistic\_Mapping()$, using the major and minor set values with respect to Table \ref{tab-16}. Then, for the analysed figure, the conclusion set of the premises is compared to all conclusion sets under the corresponding state. If these are equal to each other, the algorithm prints the output ``valid" for the related syllogism.\\ \begin{algorithm}[H] \DontPrintSemicolon \caption{Decision\label{A3}} \KwData{The set interpretations of major and minor premises of syllogisms.} \KwResult{Obtain the Conclusion set of syllogisms and make a decision for syllogisms whether ``Valid" or ``Invalid".} \BlankLine \emph{\textbf{Decision(cond,fig)}} \ForEach{$mj\_prep$ in $Prepositions\{A, E, I, O\}$}{ $major\_set = Set\_Interpretation(mj\_prep,``major",cond)$\; \ForEach{$mn\_prep$ in $Prepositions\{A, E, I, O\}$}{ $minor\_set = Set\_Interpretation(mn\_prep,``minor",cond)$\; $premises\_conclusion=Syllogistic\_Mapping(major\_set,minor\_set)$\; \ForEach{$conc\_prep$ in $Prepositions\{A, E, I, O\}$}{ \If {$premises\_conclusion=Conc[cond][conc\_prep]$}{$Print \ mj\_prep\ \& \ mn\_prep\ \& \ conc\_prep\ \& \ ``-Valid"$} } } } \end{algorithm} \vspace{0.5cm} $\bullet$ Subalgorithm Set\_Interpretation:\\ In this algorithm, the temp set is first determined from the premise type and premise proposition for the unconditional state.
\\ - If the state is ``S exists" and the premise type is ``minor", then a new temp set is determined by taking the transpose of the temp set, and the subalgorithm returns the result of $Syllogistic\_Mapping()$, which gets as inputs the constant set and the new temp set, respectively.\\ - If the state is ``M exists" and the premise type is ``minor", then it returns the result of $Syllogistic\_Mapping()$, which gets as inputs the temp set and the constant set, respectively.\\ - If the state is ``P exists" and the premise type is ``major", then a new temp set is determined by taking the transpose of the temp set, and the subalgorithm returns the result of $Syllogistic\_Mapping()$, which gets as inputs the constant set and the new temp set, respectively.\\ - If the state is ``Unconditional", then it simply returns the temp set. \begin{algorithm}[H] \DontPrintSemicolon \caption{Set Interpretation\label{A4}} \KwData{The specified premise preposition, premise type and state} \KwResult{The conclusion set} \BlankLine \emph{\textbf{$Set\_Interpretation(premise\_prep, premise\_type, cond)$}} \; Determine $Temp\_set$ using premise\_type and premise\_prep for Unconditional state with respect to the Diagram\; \If{$premise\_type=``minor"$}{ \If{$cond=``S \ Exists"$} {$NTemp\_set=transpose\ the\ diagram\ of\ Temp\_set$\; $return \ Syllogistic\_Mapping(Const\_set, NTemp\_set)$} \If{$cond=``M \ Exists"$} {$return \ Syllogistic\_Mapping(Temp\_set, Const\_set)$} } \If{$premise\_type=``major"$}{ \If{$cond=``P \ Exists"$} {$NTemp\_set=transpose\ the\ diagram\ of\ Temp\_set$\; $return \ Syllogistic\_Mapping(Const\_set, NTemp\_set)$} } $return \ Temp\_set$ \end{algorithm} \section{Conclusion} In this paper, we present, for the first time in the literature, a new and effective algorithm for categorical syllogisms using the calculus system SLCD.
To this end, we explain categorical syllogisms with the help of Carroll's diagrams, and we identify both unconditionally valid and conditionally valid syllogisms via this algorithmic approach. Our aim is that this algorithm be of use to researchers in the many areas of science that employ categorical syllogisms, such as artificial intelligence, engineering, computer science, and mathematics. \section*{References}
\section{Introduction} Estimating the mean of a random vector under weak tail assumptions has attracted a lot of attention recently. Several properties have spurred interest in these new results, in which the empirical mean is replaced by a more robust estimator. One is that it is possible to obtain an estimator with a sub-Gaussian tail under much weaker assumptions on the data, down to assuming only the existence of a finite covariance matrix. Another appealing feature is that it is possible to obtain dimension-free non-asymptotic bounds that remain valid in a separable Hilbert space. Some important references are \citet{Cat10} in the one-dimensional case and \citet{Minsker} and \citet{LugoMen} in the multidimensional case. Building on the breakthrough of \citet{Minsker}, which uses a multidimensional generalization of the median of means estimator, \citet{JolyLugoOl} and \citet{LugoMen} propose successive improvements of the median of means approach to get an estimator with a genuine sub-Gaussian dimension-free tail bound, while still requiring only the existence of the covariance matrix. In the meantime, the M-estimator approach of \citet{Cat10} has also been generalized to multidimensional settings through the use of matrix inequalities in \citet{Minsker2} and \citet{MinskerWei}. Here we follow a different route, based on a multidimensional extension of \citet{Cat10} using PAC-Bayesian bounds. Our new estimator is a simple modification of the empirical mean, where a threshold is applied to the norm of the sample vectors. It is therefore straightforward to compute, which is a strong point of our approach compared to others. Note also that we make here some compromise on the sharpness of the estimation error bound, in order to simplify the definition and computation of the estimator.
This compromise consists in the presence of second order terms, while the first order terms can be made as close as desired to a true sub-Gaussian bound with exact constants, as stated in \citet[eq. (1.1)]{LugoMen}. With a more involved estimator, a true sub-Gaussian bound without second order terms is possible and will be described in a separate publication. \section{Thresholding the norm} Consider $X \in \B{R}^d$, a random vector, and $(X_1, \dots, X_n)$ a sample made of $n$ independent copies of $X$. The question is to estimate $\B{E}(X)$ from the sample, under the assumption that $\B{E} \bigl( \lVert X \rVert^p \bigr) < \infty$, for some $p \geq 2$. Consider the threshold function $\displaystyle \psi(t) = \min \{ t, 1 \}, \; t \in \B{R}_+$, and for some positive real parameter $\lambda$ to be chosen later, introduce the thresholded sample \[ Y_i = \frac{\psi \bigl( \lambda \lVert X_i \rVert \bigr)}{ \lambda \lVert X_i \rVert} X_i. \] Our estimator of $m = \B{E}(X)$ will simply be the thresholded empirical mean $\displaystyle \wh{m} = \frac{1}{n} \sum_{i=1}^n Y_i$. \begin{prop} \label{prop:2.1} Introduce the increasing functions \[ g_1(t) = \frac{1}{t} \Bigl(\exp(t) - 1 \Bigr) \text{ and } g_2(t) = \frac{2}{t^2} \Bigl( \exp(t) - 1 - t \Bigr), \qquad t \in \B{R}, \] that are defined by continuity at $t = 0$ and are such that $g_1(0) = g_2(0) = 1$. Assume that $\B{E} \bigl( \lVert X \rVert^2 \bigr) < \infty$ and that we know $v$ such that \[ \sup_{\theta \in \B{S}_d} \B{E} \bigl( \langle \theta, X - m \rangle^2 \bigr) \leq v < \infty, \] where $\displaystyle \B{S}_d = \bigl\{ \theta \in \B{R}^d, \lVert \theta \rVert = 1 \bigr\}$ is the unit sphere of $\B{R}^d$.
For some positive real parameter $\mu$, put \begin{align*} \lambda & = \mu^{-1} \sqrt{\frac{2 \log ( \delta^{-1})}{a v n}}, & T & = \max \bigl\{ \B{E} \bigl( \lVert X - m \rVert^2 \bigr), v \bigr\}, \\ a & = g_2(2 \mu) \geq 1, & b & = \exp(2 \mu) g_1 \Biggl( \mu^2 \sqrt{\frac{2 a v}{ T \log(\delta^{-1})}} \; \Biggr) \geq 1. \end{align*} With probability at least $ 1 - \delta$, \[ \lVert \wh{m} - m \rVert \leq \sqrt{ \frac{2 a v \log(\delta^{-1})}{n}} + \sqrt{\frac{b T}{n}} + \inf_{p \geq 1} \frac{C_p}{ n^{p/2}} + \inf_{p \geq 2} \frac{C'_p}{n^{p/2}}, \] where \begin{align*} C_p & = \frac{1}{p+1} \biggl( \frac{p}{(p+1) \mu } \biggr)^p \biggl( \frac{ 2 \log(\delta^{-1})}{a v} \biggr)^{p/2} \sup_{\theta \in \B{S}_d} \B{E} \bigl( \lVert X \rVert^p \langle \theta, X - m \rangle_- \bigr), \quad \text{ and } \\ C'_p & = \frac{1}{p+1} \biggl( \frac{p}{(p+1) \mu} \biggr)^p \biggl( \frac{ 2 \log(\delta^{-1})}{a v} \biggr)^{p/2} \B{E} \bigl( \lVert X \rVert^p \bigr) \lVert m \rVert \Biggl( 1 + \sqrt{\frac{a \log(\delta^{-1})}{2 v n}} \lVert m \rVert \Biggr). \end{align*} \end{prop} \paragraph{Remarks} Note that in case $\B{E} \bigl( \lVert X \rVert^2 \bigr) < \infty$ but $\B{E} \bigl( \lVert X \rVert^p \bigr) = \infty$ for $p > 2$, we can use the bound \begin{multline*} \frac{C_1}{\sqrt{n}} + \frac{C'_2}{n} \leq \frac{1}{2 \mu} \sqrt{\frac{\displaystyle \log(\delta^{-1})( T + \lVert m \rVert^2)}{ 2 a n }} + \frac{8 \log(\delta^{-1})}{27 \mu^2 a v n} \B{E} \bigl( \lVert X \rVert^2 \bigr) \lVert m \rVert \\ \times \Biggl( 1 + \sqrt{\frac{ a \log(\delta^{-1})}{2 v n}} \lVert m \rVert \Biggr) = \C{O} \Biggl( \frac{1}{2 \mu} \sqrt{\frac{\log(\delta^{-1}) ( T + \lVert m \rVert^2)}{2 a n}} \; \Biggr). \end{multline*} Note also that if we take $\mu = 1/4$ and assume that $\delta \leq \exp( - 1)$, then $a \leq 1.2$ and $b \leq 4$. 
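These numerical claims on $a$ and $b$ are easy to verify; a quick sketch (Python, using the definitions of $g_1$ and $g_2$ above and the worst case of the argument of $g_1$ when $T \geq v$ and $\log(\delta^{-1}) \geq 1$):

```python
import math

def g1(t):
    # (exp(t) - 1) / t, extended by continuity so that g1(0) = 1
    return 1.0 if t == 0 else math.expm1(t) / t

def g2(t):
    # 2 (exp(t) - 1 - t) / t^2, extended by continuity so that g2(0) = 1
    return 1.0 if t == 0 else 2.0 * (math.expm1(t) - t) / t**2

mu = 0.25
a = g2(2 * mu)                       # g2(1/2) ~ 1.1898 <= 1.2
# Largest possible argument of g1 when T >= v and log(1/delta) >= 1:
x_max = mu**2 * math.sqrt(2 * a)
b_max = math.exp(2 * mu) * g1(x_max)  # ~ 1.73, well below 4
print(a, b_max)
```

Since $g_1$ is increasing, $b_{\max}$ bounds $b$ for every admissible $T$, $v$ and $\delta \leq \exp(-1)$.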
If moreover $\B{E} \bigl( \lVert X \rVert^{p+1} \bigr) < \infty$, for some $p > 1$, we obtain with probability at least $1 - \delta$ that \[ \lVert \wh{m} - m \rVert \leq \sqrt{ \frac{2.4 \, v \log( \delta^{-1})}{n}} + \sqrt{\frac{4 T}{n}} + \frac{C_p}{n^{p/2}} + \frac{C'_{p+1}}{n^{(p+1)/2}}, \] meaning that the tail distribution of $\lVert \wh{m} - m \rVert$ has a sub-Gaussian behavior, up to second order terms. Remark that by taking $\mu$ small, we can make $a$ and $b$ as close as desired to $1$, at the expense of the values of $C_p$ and $C'_p$. \paragraph{Proof} The rest of the paper is devoted to the proof of Proposition \ref{prop:2.1}. An elementary computation shows that the threshold function $\psi$ satisfies \begin{equation} \label{eq:1.2} 0 \leq 1 - \frac{\psi(t)}{t} \leq \inf_{p \geq 1} \frac{t^{p}}{p + 1} \biggl( \frac{p}{p+1} \biggr)^p, \qquad t \in \B{R}_+, \end{equation} where non-integer values of the exponent $p$ are allowed. Let $\displaystyle Y = \frac{\psi \bigl( \lambda \lVert X \rVert \bigr)}{\lambda \lVert X \rVert} X$ and $\wt{m} = \B{E}(Y)$. We can decompose the estimation error in direction $\theta$ into \begin{equation} \label{eq:1} \langle \theta, \wh{m} - m \rangle = \langle \theta, \wt{m} - m \rangle + \frac{1}{n} \sum_{i=1}^n \langle \theta, Y_i - \wt{m} \rangle, \qquad \theta \in \B{R}^d. \end{equation} Introduce $\displaystyle \alpha = \frac{\psi \bigl( \lambda \lVert X \rVert \bigr) }{ \lambda \lVert X \rVert}$ and let us deal with the first term first.
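Inequality \eqref{eq:1.2} can be checked numerically; it is in fact an equality at $t = (p+1)/p$ for each $p \geq 1$. A quick grid check (Python, with an illustrative grid of $t$ and $p$ values):

```python
import math

def psi(t):
    # the threshold function psi(t) = min(t, 1)
    return min(t, 1.0)

def bound(t, p):
    # right-hand side of (1.2) for a given exponent p:
    # t^p / (p+1) * (p / (p+1))^p
    return t**p / (p + 1) * (p / (p + 1))**p

# Check 0 <= 1 - psi(t)/t <= inf_{p >= 1} bound(t, p) on a grid;
# the infimum is approximated over p in [1, 10].
for t in [x / 100 for x in range(1, 1001)]:
    lhs = 1 - psi(t) / t
    rhs = min(bound(t, p / 10) for p in range(10, 101))
    assert 0 <= lhs <= rhs + 1e-9
```

The grid includes the equality points $t = 2$ (for $p=1$), $t = 1.5$ (for $p=2$) and $t = 1.1$ (for $p=10$), where the bound is tight up to floating-point error.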
As $\displaystyle 0 \leq 1 - \alpha \leq \frac{\lambda^p \lVert X \rVert^p}{p+1} \biggl( \frac{p}{p+1} \biggr)^p$, \begin{multline*} \langle \theta, \wt{m} - m \rangle = \B{E} \bigl[ ( \alpha - 1 ) \langle \theta, X \rangle \bigr] = \B{E} \bigl[ ( \alpha - 1 ) \langle \theta, X - m \rangle \bigr] + \B{E} ( \alpha - 1 ) \langle \theta, m \rangle \\ \leq \inf_{p \geq 1} \frac{\lambda^p}{(p+1)} \biggl( \frac{p}{p+1} \biggr)^p \B{E} \Bigl( \lVert X \rVert^p \langle \theta, X - m \rangle_- \Bigr) + \inf_{p \geq 2} \frac{\lambda^p}{(p+1)} \biggl( \frac{p}{p+1} \biggr)^p \B{E} \bigl( \lVert X \rVert^p \bigr) \langle \theta, m \rangle_-, \end{multline*} where $r_- = \max \{ 0, -r \}$ is the negative part of the real number $r$. Let us now look at the second term of the decomposition \eqref{eq:1}. To gain uniformity in $\theta$, we will use a PAC-Bayesian inequality and the family of normal distributions $\rho_{\theta} = \C{N} \bigl(\theta, \beta^{-1} I_d \bigr)$, parametrized by $\theta \in \B{R}^d$, where $I_d \in \B{R}^{d \times d}$ is the identity matrix of size $d \times d$, and where $\beta$ is a positive parameter to be chosen later on. We will use the following PAC-Bayesian inequality without recalling its proof; it is a simple consequence of \citet[eq.
(5.2.1) page 159]{Cat01b}: \begin{lemma} \label{lem:2.2} For any bounded measurable function $f : \B{R}^d \times \B{R}^d \rightarrow \B{R}$, for any probability measure $\pi \in \C{M}_+^1 \bigl( \B{R}^d \bigr)$, for any $\delta \in ]0,1[$, with probability at least $1 - \delta$, for any probability measure $\rho \in \C{M}_+^1(\B{R}^d)$, \[ \frac{1}{n} \sum_{i=1}^n \int f \bigl( \theta, X_i \bigr) \, \mathrm{d} \rho(\theta) \leq \int \log \Bigl[ \B{E} \Bigl( \exp \bigl( f(\theta, X) \bigr) \Bigr) \Bigr] \, \mathrm{d} \rho(\theta) + \frac{\C{K}(\rho, \pi) + \log(\delta^{-1})}{n}, \] where $\C{K}$ is the Kullback-Leibler divergence $\displaystyle \C{K} (\rho, \pi) = \begin{cases} \int \log \bigl( \rho / \pi \bigr) \, \mathrm{d} \rho, & \text{ when } \rho \ll \pi, \\ + \infty, & \text{ otherwise.} \end{cases} $ \end{lemma} Remarking that $\displaystyle \frac{1}{n} \sum_{i=1}^n \langle \theta, Y_i - \wt{m} \rangle = \frac{1}{n} \sum_{i=1}^n \int \langle \theta', Y_i - \wt{m} \rangle \, \mathrm{d} \rho_{\theta}( \theta' ), $ using $\pi = \rho_0$, and taking into account the fact that $\C{K}(\rho_{\theta}, \rho_0 ) = \beta \lVert \theta \rVert^2 / 2$, we obtain as a consequence of the previous lemma that with probability at least $1 - \delta$, for any $\theta \in \B{S}_d$, \[ \frac{1}{n} \sum_{i=1}^n \langle \theta, Y_i - \wt{m} \rangle \leq \frac{1}{\mu \lambda} \int \log \biggl( \B{E} \exp \Bigl( \mu \lambda \langle \theta', Y - \wt{m} \rangle \Bigr) \biggr) \, \mathrm{d} \rho_{\theta}(\theta') + \frac{\beta}{2 n \mu \lambda} + \frac{\log ( \delta^{-1})}{ n \mu \lambda}. \] In our setting $f$ is not bounded in $\theta$, but the required extension is valid as explained in \citet{Cat01b}.
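The Gaussian divergence formula used here, $\C{K}(\rho_{\theta}, \rho_0) = \beta \lVert \theta \rVert^2 / 2$, is standard and easy to confirm by Monte Carlo; a quick sketch (Python, with arbitrary illustrative values of $\theta$, $\beta$ and $d$):

```python
import math
import random

random.seed(0)
d, beta = 3, 2.0
theta = [0.5, -1.0, 0.25]

# Monte Carlo estimate of K(rho_theta, rho_0) with rho_theta = N(theta, I/beta):
# average the log density ratio over samples theta' ~ rho_theta.
n = 100_000
acc = 0.0
for _ in range(n):
    tp = [t + random.gauss(0.0, 1.0 / math.sqrt(beta)) for t in theta]
    # log(d rho_theta / d rho_0)(theta') = beta/2 (||theta'||^2 - ||theta' - theta||^2)
    log_ratio = 0.5 * beta * (sum(x * x for x in tp)
                              - sum((x - t) ** 2 for x, t in zip(tp, theta)))
    acc += log_ratio
kl_mc = acc / n
kl_exact = 0.5 * beta * sum(t * t for t in theta)
print(kl_mc, kl_exact)  # the two values are close
```

The same computation, read in reverse, is the explicit Laplace transform identity used a few lines below.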
Since the logarithm is concave, \begin{multline*} \int \log \biggl( \B{E} \exp \Bigl( \mu \lambda \langle \theta', Y - \wt{m} \rangle \Bigr) \biggr) \, \mathrm{d} \rho_{\theta}(\theta') \leq \log \Biggl[ \B{E} \biggl( \int \exp \Bigl( \mu \lambda \langle \theta', Y - \wt{m} \rangle \Bigr) \, \mathrm{d} \rho_{\theta}(\theta') \biggr) \Biggr] \\ = \log \Biggl[ \B{E} \biggl( \exp \Bigl( \mu \lambda \langle \theta, Y - \wt{m} \rangle + \frac{\mu^2 \lambda^2}{2 \beta} \lVert Y - \wt{m} \rVert^2 \Bigr) \biggr) \Biggr], \end{multline*} where we have used the explicit expression of the Laplace transform of a Gaussian distribution. To go further, with the proof of Bennett's inequality in mind as a source of inspiration, let us introduce the increasing functions $g_1$ and $g_2$ defined in Proposition \ref{prop:2.1}. These functions will be used to bound the exponential function by polynomials. More precisely, we will exploit the fact that when $t \leq b$, $\exp(t) \leq 1 + t + g_2(b) t^2 / 2$ and $\exp(t) \leq 1 + g_1(b) t$. It follows that if $t \leq b$ and $0 \leq u \leq c$, \begin{multline*} \exp( t + u) \leq \exp(t) \bigl(1 + g_1(c) u \bigr) \leq \exp(t) + g_1(c) \exp(b) u \\ \leq 1 + t + g_2(b) t^2/2 + g_1(c) \exp(b) u. \end{multline*} Legitimate values for $b$ and $c$ will be deduced from the remark that $\lambda \lVert Y \rVert \leq 1$, implying $\lambda \lVert \wt{m} \rVert \leq 1$. Namely, in our context, we will use $b = 2 \mu$ and $c = 2 \mu^2 / \beta$. These arguments put together lead to the inequality \begin{multline*} \B{E} \Biggl( \exp \biggl( \mu \lambda \langle \theta, Y - \wt{m} \rangle + \frac{ \mu^2 \lambda^2}{2 \beta} \bigl\lVert Y - \wt{m} \bigr\rVert^2 \biggr) \Biggr) \\ \leq 1 + g_2(2 \mu) \frac{\mu^2 \lambda^2}{2} \B{E} \bigl( \langle \theta, Y - \wt{m} \rangle^2 \bigr) + \exp (2 \mu ) g_1 \biggl( \frac{2 \mu^2}{\beta} \biggr) \frac{\mu^2 \lambda^2}{2 \beta} \B{E} \Bigl( \bigl\lVert Y - \wt{m} \bigr\rVert^2 \Bigr).
\end{multline*} Substituting into the previous inequalities, we obtain \begin{lemma} With probability at least $1 - \delta$, for any $\theta \in \B{S}_d$, \begin{multline*} \langle \theta, \wh{m} - \wt{m} \rangle = \frac{1}{n} \sum_{i=1}^n \langle \theta, Y_i - \wt{m} \rangle \leq g_2(2 \mu) \frac{\mu \lambda}{2} \B{E} \bigl( \langle \theta, Y - \wt{m} \rangle^2 \bigr) \\ + \exp(2 \mu) g_1 \biggl( \frac{2 \mu^2}{\beta} \biggr) \frac{\mu \lambda}{2 \beta} \B{E} \bigl( \lVert Y - \wt{m} \rVert^2 \bigr) + \frac{ \beta + 2 \log(\delta^{-1})}{2 \mu \lambda n}. \end{multline*} \end{lemma} Remark that \begin{multline*} \langle \theta, Y - m \rangle^2 = \langle \theta, \alpha X - m \rangle^2 = \Bigl( \alpha \langle \theta, X - m \rangle - (1 - \alpha) \langle \theta, m \rangle \Bigr)^2 \\ \leq \alpha \langle \theta, X - m \rangle^2 + (1 - \alpha) \langle \theta, m \rangle^2 \leq \langle \theta, X - m \rangle^2 + (1 - \alpha) \langle \theta, m \rangle^2, \end{multline*} by convexity of the square function. Therefore, using inequality \eqref{eq:1.2} and the definition of $\alpha$, \[ \B{E} \bigl( \langle \theta, Y - \wt{m} \rangle^2 \bigr) \leq \B{E} \bigl( \langle \theta, Y - m \rangle^2 \bigr) \leq \B{E} \bigl( \langle \theta, X - m \rangle^2 \bigr) + \langle \theta, m \rangle^2 \inf_{p \geq 2} \frac{\lambda^p}{p+1} \biggl( \frac{p}{p+1} \biggr)^p \B{E} \bigl( \lVert X \rVert^p \bigr). \] Remark also that $Y = g(X)$, where $g$ is a contraction (being the projection on a ball). Consequently \[ \B{E} \bigl( \lVert Y - \wt{m} \rVert^2 \bigr) = \frac{1}{2} \B{E} \bigl( \lVert Y_1 - Y_2 \rVert^2 \bigr) \leq \frac{1}{2} \B{E} \bigl( \lVert X_1 - X_2 \rVert^2 \bigr) = \B{E} \bigl( \lVert X - m \rVert^2 \bigr).
\] In view of these remarks, the previous lemma translates to \begin{lemma} Let $\displaystyle a = g_2 \bigl( 2 \mu \bigr)$ and $\displaystyle b \geq \exp ( 2 \mu ) g_1 \biggl( \frac{2 \mu^2}{\beta} \biggr)$.\\ With probability at least $1 - \delta$, for any $\theta \in \B{S}_d$, \begin{multline*} \langle \theta, \wh{m} - m \rangle \leq \frac{a \mu \lambda}{2} \B{E} \bigl( \langle \theta, X - m \rangle^2 \bigr) + \frac{b \mu \lambda}{2 \beta} \B{E} \bigl( \lVert X - m \rVert^2 \bigr) + \frac{\beta + 2 \log(\delta^{-1})}{2 \mu \lambda n} \\ + \inf_{p \geq 1} \frac{\lambda^p}{p+1} \biggl( \frac{p}{p+1} \biggr)^p \B{E} \bigl( \lVert X \rVert^p \langle \theta, X - m\rangle_- \bigr) \\ + \inf_{p \geq 2} \frac{\lambda^p}{p+1} \biggl( \frac{p}{p+1} \biggr)^p \B{E} \bigl( \lVert X \rVert^p \bigr) \Biggl( \langle \theta, m \rangle_- + \frac{a \mu \lambda}{2} \langle \theta, m \rangle^2 \Biggr). \end{multline*} \end{lemma} Proposition \ref{prop:2.1} follows by taking $b$ as mentioned there, $\displaystyle\lambda = \frac{1}{\mu} \sqrt{\frac{2 \log(\delta^{-1})}{n a v}}$, and $\displaystyle \beta = \sqrt{\frac{2 b T \log(\delta^{-1})}{a v}} \geq \sqrt{ \frac{2 T \log(\delta^{-1})}{av}}$, so that the condition on $b$ is satisfied. \small
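As claimed in the introduction, the estimator of Proposition \ref{prop:2.1} is straightforward to compute; the following sketch (Python, with a synthetic Gaussian sample and the stated choice of $\lambda$ for $\mu = 1/4$, so that $a \leq 1.2$) illustrates it end to end:

```python
import math
import random

def threshold_mean(xs, lam):
    """Thresholded empirical mean: each sample X_i is replaced by
    psi(lam*||X_i||)/(lam*||X_i||) * X_i with psi(t) = min(t, 1),
    i.e. X_i is projected onto the ball of radius 1/lam."""
    d = len(xs[0])
    acc = [0.0] * d
    for x in xs:
        norm = math.sqrt(sum(v * v for v in x))
        scale = min(lam * norm, 1.0) / (lam * norm) if norm > 0 else 1.0
        for k in range(d):
            acc[k] += scale * x[k]
    return [a_k / len(xs) for a_k in acc]

# Illustration with lambda = mu^{-1} sqrt(2 log(1/delta) / (a v n)),
# taking mu = 1/4 and a known variance proxy v (here the exact variance).
random.seed(1)
n, delta, mu, a, v = 10_000, math.exp(-1), 0.25, 1.2, 1.0
xs = [[random.gauss(1.0, 1.0), random.gauss(-2.0, 1.0)] for _ in range(n)]
lam = math.sqrt(2 * math.log(1 / delta) / (a * v * n)) / mu
m_hat = threshold_mean(xs, lam)
print(m_hat)  # close to the true mean (1, -2)
```

On light-tailed data such as this Gaussian sample the threshold is essentially inactive (the ball radius $1/\lambda$ is large); its effect only shows on heavy-tailed samples, which is precisely the regime the proposition addresses.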
\section{Introduction} Robotics control and artificial intelligence (AI) in a broader perspective heavily rely on the availability of compact and expressive representations of the sensor data. Designing such representations has long been performed manually by the designer, but deep learning now provides a general framework to learn them from data. This is particularly interesting for robotics, where multiple sensors (such as cameras) can provide very high dimensional data, while the robot objective can often be expressed in a much lower dimensional space (such as the 3D position of an object in a manipulation task). This low dimensional representation, frequently called the \emph{state} of the system, has the crucial role of encoding essential information (for a given task) while discarding the many irrelevant aspects of the original data. By \textit{low dimensional}, we mean that the learned state dimension is significantly smaller than the dimensionality of the observation space. Such a state representation is at the basis of the classical reinforcement learning (RL) framework \cite{Sutton98}, in which an agent interacts with its environment by choosing an action as a function of the environment state in order to maximize an expected (discounted) reward. Following this framework, we call \textit{observation} the raw information provided by one or several of the robot sensors, and we call \textit{state} a compact depiction of this observation that retains the information necessary for the robot to choose its actions. While deep reinforcement learning algorithms have shown that it is possible to learn controllers directly from observations \cite{Mnih15}, reinforcement learning (or other control algorithms) can take advantage of low dimensional and informative representations, instead of raw data, to solve tasks more efficiently \cite{Munk16}. Such efficiency is critical in robotic applications, where executing an action is a costly operation.
In robotics, as well as in machine learning, finding and defining interesting states (or features) for control tasks usually requires a considerable amount of manual engineering. It is therefore interesting to learn these features with as little supervision as possible. The goal is thus to avoid direct supervision using a \textit{true} state, but instead to use information about the actions performed by the agent, their consequences in the observation space, and rewards (even if sparse, and when available). Along with this information, one can also set generic constraints on what a good state representation should be \cite{Jonschkowski15,Lesort17}. Feature learning in general is a wide domain which aims at decomposing data into different features that can faithfully characterize it. Automatically learning a large range of specific feature detectors in high dimensional problems has been a particular motivation for deep learning. State representation learning (SRL) is a particular case of feature learning in which the features to learn are low dimensional, evolve through time, and are influenced by actions or interactions. SRL is generally framed in a control setup and is constrained to favor low-dimensional representations characterizing an instance of an environment or an object, often with a semantic meaning that correlates with some physical feature. The physical feature can be, for instance, a position, distance, angle or orientation. The objective of SRL is to take advantage of time steps, actions, and optionally rewards, to transform observations into states: a vector of a reduced set of the most representative features that is sufficient for efficient policy learning. It is also worth distinguishing between feature learning on a process that is only observed and learning the state representation of a process in which the learning agent possesses embodiment and acts.
The former considers learning directly from observations, e.g., pixels, and leaves no room for the agent to act. The latter opens more possibilities for learning better representations, e.g., by balancing exploration and exploitation through active learning or artificial curiosity \cite{Pere18,Pathak17}. As stated above, learning in this context should be performed without explicit supervision. In this article we therefore focus on SRL where \textit{learning} does not have the \textit{pattern recognition} sense of regression or classification, but rather the sense of a model-building process \cite{Lake16}. Building such models can then exploit a large set of objectives or constraints, possibly taking inspiration from human learning. As an example, infants expect inertial objects to follow principles of persistence, continuity, cohesion and solidity before appearance-based elements such as color, texture and perceptual goodness. At the same time, these principles help guide the later learning of properties such as objects' rigidity, softness and the behavior of liquids. Later, adults will reconstruct perceptual scenes using internal representations of the objects and their physically relevant properties (mass, elasticity, friction, gravity, collision, etc.) \cite{Lake16}. In the same way, the SRL literature may make use of knowledge about the physics of the world, interactions and rewards whenever possible, as a semi-supervision or self-supervision that addresses the challenge of learning state representations without explicit supervision. Recently, several different approaches have been proposed to learn such state representations. In this review paper, our objective is to present and analyze those different approaches, highlight their commonalities and differences, and propose further research directions.
We extend a previously published review \cite{Bohmer15} with the most recent and rapidly evolving literature of the past years, and focus on approaches that learn low dimensional Markovian representations without direct supervision, i.e., exploiting sequences of observations, actions, rewards and generic learning objectives. The works selected in this survey mostly evaluate their algorithms in simulations where agents interact with an environment. More marginally, some SRL algorithms are tested in real settings such as robotic tasks, e.g., manipulation or exploration, as detailed in Section \ref{sec:EvaluationScenarios}. In the remainder of the paper, we first introduce the formal framework and notation, then present the objectives that can be used to learn state representations, and discuss the implementation aspects of these approaches before summarizing some current and future lines of research. \section{Formalism and definitions} \label{sec:formalism} \subsection{SRL Formalism} The nomenclature we use is very close to the one used in the reinforcement learning literature \cite{Sutton98} and is illustrated in Fig. \ref{fig:notation}. We define an environment $\mathcal{E}$ where an agent performs actions $a_t \in \mathcal{A}$ at time step $t$, where $\mathcal{A}$ is the action space (continuous or discrete). Each action makes the agent transition from a true state $\tilde{s}_t$ to $\tilde{s}_{t+1}$, which are unknown but assumed to exist. We call $\tilde{\mathcal{S}}$ the true state space. The agent obtains an observation of $\mathcal{E}$ from its sensors, denoted $o_t \in \mathcal{O}$, where $\mathcal{O}$ is the observation space. Optionally, the agent may receive a reward $r_t$. The reward is given at $\tilde{s}_t$ by a reward function designed to lead the agent to a certain behavior that solves a task.
The reward is optional since learning a state representation does not in itself aim to solve a task, but it is often present because one of the goals of SRL is to improve task learning performance. \begin{figure} \centering \begin{tikzpicture}[ roundnode/.style={circle, draw=black!60, fill=green!0, very thick, minimum size=10mm}, squarednode/.style={rectangle, draw=black!60, fill=black!0, very thick, minimum size=10mm}, ] \node[squarednode] (state) {$\tilde{s}_t$}; \node[roundnode] (obs) [below=of state] {$o_t$}; \node[roundnode] (reward) [right=of obs] {$r_t$}; \node[roundnode] (obs2) [right=of reward] {$o_{t+1}$}; \node[roundnode] (reward2) [right=of obs2] {$r_{t+1}$}; \node[squarednode] (state2) [above=of obs2] {$\tilde{s}_{t+1}$}; \node[roundnode] (action) [above=of state] {$a_t$}; \draw[->] (state2.south) -- (obs2.north); \draw[->] (state.east) -- (state2.west); \draw[->] (state.south) -- (obs.north); \draw[->] (action.east) .. controls +(right:7mm) .. (state2.north); \draw[dashed, ->] ($(state.east)-(state.north)$) -- (reward.north); \draw[dashed, ->] ($(state2.east)-(state.north)$) -- (reward2.north); \end{tikzpicture} \caption{General model: circles are observable variables and squares are latent state variables.} \label{fig:notation} \end{figure} The SRL task is to learn a representation $s_t \in \mathcal{S}$ of dimension $K$ with characteristics similar to those of $\tilde{s}_t$, without using $\tilde{s}_t$. More formally, SRL learns a mapping $\phi$ from the history of observations to the current state, $s_t = \phi(o_{1:t})$. Note that actions $a_{1:t}$ and rewards $r_{1:t}$ can also be given as inputs to $\phi$ \cite{Jonschkowski15}. In this paper, we are specifically interested in the particular setting in which this mapping is learned through proxy objectives without access to the true state $\tilde{s}_t$. This family of approaches is called unsupervised or self-supervised.
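The interaction loop behind this notation can be sketched as follows. The `ToyEnv` environment, its one-dimensional hidden state, and the linear `observe` projection are illustrative assumptions, not part of any reviewed method; the point is only the shape of the $(o_t, a_t, r_t, o_{t+1})$ data that SRL algorithms consume.

```python
import numpy as np

class ToyEnv:
    """Illustrative environment: the true state is a hidden 1-D position,
    the observation is a noisy 16-D projection of it."""
    def __init__(self, obs_dim=16, seed=0):
        self.rng = np.random.default_rng(seed)
        self.proj = self.rng.normal(size=obs_dim)  # fixed "sensor" matrix
        self.s = 0.0                               # hidden true state

    def observe(self):
        return self.s * self.proj + 0.01 * self.rng.normal(size=self.proj.shape)

    def step(self, a):
        self.s += a                          # transition to the next true state
        return self.observe(), -abs(self.s)  # observation and reward

def collect_rollout(env, policy, T):
    """Gather the (o_t, a_t, r_t, o_{t+1}) tuples that SRL methods train on."""
    o, data = env.observe(), []
    for _ in range(T):
        a = policy(o)
        o_next, r = env.step(a)
        data.append((o, a, r, o_next))
        o = o_next
    return data

rollout = collect_rollout(ToyEnv(), policy=lambda o: 0.1, T=5)
```

The mapping $\phi$ then has to recover something like the hidden position from the 16-dimensional observations alone.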
Finally, we denote by $\hat{o}_t$ the reconstruction of $o_t$ (and similarly $\hat{a}_t$ and $\hat{r}_t$), which will be used in various SRL approaches. \subsection{SRL approaches} Based on the previously defined notations, we can briefly summarize the common strategies used in state representation learning that are detailed in the next sections. In the following, $\theta$ represents the parameters optimized by minimizing the model's loss function. In most of the approaches we present, this model is implemented with a neural network. \begin{itemize} \item{\textbf{Reconstructing the observation}}: \input{graphes/AE.tex} learning the function $\phi$ (Eq. \ref{eq:enc}) so that it is possible to reconstruct the observation with a decoder $\phi^{-1}$ (Eq. \ref{eq:dec}) by minimizing the reconstruction error between the original observation and its predicted reconstruction. The reconstruction is learned under different constraints that give $s_t$ specific characteristics (e.g., dimensionality constraints, a local denoising criterion \cite{Vincent10}, sparse encoding constraints \cite{Vincent08}, etc.) (Fig. \ref{fig:AutoEncoder}). \begin{equation} s_{t} = \phi(o_t; \theta_{\phi}) \label{eq:enc} \end{equation} \begin{equation} \hat{o}_{t} = \phi^{-1}(s_t; \theta_{\phi^{-1}}) \label{eq:dec} \end{equation} where $\theta_{\phi}$ and $\theta_{\phi^{-1}}$ are the parameters learned for the encoder and decoder, respectively. \item{\textbf{Learning a forward model}}: \input{graphes/forward.tex} A forward model predicts $s_{t+1}$ from $o_t$ or $s_t$ and $a_t$ (Fig. \ref{fig:ForwardModel}). In this context, we want to learn the mapping $\phi$ from $o_t$ to $s_t$ using the model that predicts $s_{t+1}$ from $o_t$. The prediction hence works in two steps: first an encoding from $o_t$ to $s_t$, then a transition from $s_t$ to $\hat{s}_{t+1}$. We cannot compute any error on $s_t$ directly; however, at $t+1$ the model can learn from the error between $\hat{s}_{t+1}$ and $s_{t+1}$.
This error is back-propagated through both the transition model and the encoding model; the method consequently learns the mapping $\phi$. \begin{equation} \hat{s}_{t+1} = f(s_t, a_t; \theta_{fwd}) \label{eq:fwd} \end{equation} Learning such a model makes it possible to impose structural constraints for state representation learning. For example, the forward model can be constrained to be linear between $s_t$ and $s_{t+1}$, imposing that the system in the learned state space follows simple linear dynamics. \item{\textbf{Learning an inverse model}}: \input{graphes/Inverse.tex} An inverse model predicts the action $a_{t}$ given observations $o_t$ and $o_{t+1}$ or states $s_t$ and $s_{t+1}$. As for the forward model, the goal here is to learn the mapping $\phi$ from $o_t$ to $s_t$ in two steps: encoding of $o_t$ and $o_{t+1}$, and action prediction. The error between the predicted action $\hat{a}_t$ and $a_t$ is then back-propagated to learn the encoding: \begin{equation} \hat{a}_t = g(s_t, s_{t+1}; \theta_{inv}) \label{eq:inv} \end{equation} Learning such a model enforces that the state encodes enough information to recover the action that modified the state (Fig.~\ref{fig:InverseModel}). \item{\textbf{Using prior knowledge to constrain the state space}}: \input{graphes/prior.tex} A last approach handles SRL by using specific constraints or prior knowledge about the functioning, dynamics or physics of the world (besides the constraints of forward and inverse models), such as temporal continuity or causality principles that generally reflect the interaction of the agent with objects or with the environment \cite{Jonschkowski15}. \textit{Priors} are defined as objective or loss functions $\mathcal{L}$, applied to a set of states $s_{1:n}$ (Fig. \ref{fig:Priors}), to be minimized (or maximized) under a specific condition $c$. An example of condition is enforcing locality or time proximity within the set of states.
\begin{equation} Loss = \mathcal{L}_{prior}(s_{1:n}; \theta_{\phi} | c) \label{eq:prior} \end{equation} \end{itemize} All these approaches are detailed in Section \ref{sec:LearningObjectives}. \subsection{State representation characteristics} Besides the general idea that the state representation has the role of encoding essential information (for a given task) while discarding irrelevant aspects of the original data, let us detail what the characteristics of a good state representation are. In a reinforcement learning framework, the authors of \cite{Bohmer15} define a good state representation as a representation that is: \begin{itemize} \item Markovian, i.e., it summarizes all the necessary information to be able to choose an action within the policy by looking only at the current state. \item Able to represent the true value of the current state well enough for policy improvement. \item Able to generalize the learned value-function to unseen states with similar futures. \item Low dimensional for efficient estimation. \end{itemize} Note that these are characteristics expected of the state representation, but they cannot be used directly for learning it. Instead, they can later be verified by assessing the task performance of a controller based on the learned state. Note also that multiple state representations can satisfy these properties for a given problem; there is therefore no unique solution to the state representation learning problem. We detail this issue when discussing the evaluation of the learned state space in Section \ref{sec:evaluation}. State representation learning can also be linked with the idea of learning disentangled representations that clearly separate the different factors of variation with different semantics.
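The four strategies above can be sketched as loss functions sharing one encoder. This is a toy numpy sketch under strong assumptions (a linear encoder standing in for $\phi$, scalar actions, illustrative parameter matrices `W`, `D`, `F`, `B`, `G`); real implementations use neural networks trained by gradient descent.

```python
import numpy as np

def encoder(o, W):
    """Toy linear stand-in for phi: maps an observation to a state."""
    return W @ o

def reconstruction_loss(o, W, D):
    """Auto-encoder objective: decode s_t back into an o_hat_t."""
    o_hat = D @ encoder(o, W)
    return np.sum((o_hat - o) ** 2)

def forward_loss(o, a, o_next, W, F, B):
    """Forward-model objective: predict s_{t+1} from (s_t, a_t)."""
    s, s_next = encoder(o, W), encoder(o_next, W)
    s_next_hat = F @ s + B * a          # toy linear transition
    return np.sum((s_next_hat - s_next) ** 2)

def inverse_loss(o, o_next, a, W, G):
    """Inverse-model objective: predict a_t from (s_t, s_{t+1})."""
    a_hat = G @ np.concatenate([encoder(o, W), encoder(o_next, W)])
    return np.sum((a_hat - a) ** 2)

def prior_loss(states):
    """Example prior: temporal continuity (consecutive states stay close)."""
    return np.mean(np.sum(np.diff(states, axis=0) ** 2, axis=1))
```

Minimizing any of these with respect to the encoder parameters `W` shapes the state space; the sections below review how each objective is used in practice.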
Following \cite{Achille17}, a good representation must be sufficient, as efficient as possible (i.e., easy to work with, e.g., factorizing the data-generating factors), and minimal (from all possible representations, take the most efficient one). The \textit{minimal} assumption is comparable to the simplicity prior \cite{Jonschkowski15}. It assumes that only a small number of world properties are relevant, and that there exists a low-dimensional state representation of a higher-level input observation. Related to \textit{Occam's razor}, this prior favors state representations that exclude irrelevant information so as to encourage a lower dimensionality. The \textit{efficiency} aspect of the representation means that there should be no overlap between dimensions of the learned state features. Unfortunately, independence of features alone may not be enough to ensure good-quality representations and guarantee a disentanglement of the factors of variation \cite{Thomas17}. Higher-level abstractions can, however, improve this disentanglement and permit easier generalization and transfer. Cues to disentangle the underlying factors can include spatial and temporal scales, marginal independence of variables, and controllable factors \cite{Thomas17}. \subsection{State representation learning applications} \label{sec:srl-applications} The main interest of SRL is to produce a low-dimensional state space in which learning a control policy will be more efficient. Indeed, deep reinforcement learning in the observation space has shown spectacular results in control policy learning \cite{Mnih15,Lillicrap15,Mnih16} but is known to be computationally difficult and to require a large amount of data \cite{Rusu2016}. Separating representation learning from policy learning is a way to lighten the complete process.
As described in most of the reviewed papers \cite{Mattner12,Watter15,Hoof16,Munk16,Curran16,Wahlstrom15,Shelhamer17,Oh17}, this approach is used to make reinforcement learning faster and/or computationally lighter. SRL can be particularly relevant with multimodal observations produced by several complementary high-dimensional sensors, as is, for example, the case for autonomous vehicles. Low-dimensional representations are then key to making an algorithm able to take decisions based on hidden factors extracted from these complementary sensors. This is for instance the case of representation learning from different temporal signals in \cite{Duan17,Bohg17}. Audio and images are blended in \cite{Yang17}, while RGB and depth are combined in \cite{Duan17}. SRL can also be used in a transfer learning setting by taking advantage of a state space learned on a given task to rapidly learn a related task. This is for example the case in \cite{Jonschkowski15}, where a state space related to a robot position is learned in a given navigation task and reused to quickly learn another navigation task. SRL is also used as pretraining for subsequent transfer to other applications such as reinforcement learning \cite{Munk16,Oh17}. For concrete examples of SRL application scenarios, see Section \ref{sec:EvaluationScenarios}. Another case where SRL can be useful is in the application of Evolution Strategies (ES) for robot control learning \cite{Stulp13}. Evolution strategies are a family of black-box optimization algorithms that do not rely on gradient descent and can be an alternative to RL techniques (such as Q-learning and policy gradients), but are less adapted to high-dimensional problems. Indeed, the convergence time of ES algorithms depends on the dimension of the input: the larger the dimension, the larger the number of solutions ES has to explore \cite{Stulp13}.
ES optimization methods have been shown to be efficient for deep reinforcement learning \cite{Clune17}, and they could take clear advantage of a lower-dimensional input to explore the parameter space faster than with raw data \cite{Alvernaz17}. \section{Learning objectives} \label{sec:LearningObjectives} In this section, we review the objectives that can be used to learn a relevant state representation. A schema detailing the core elements involved in each model's loss function was introduced in Fig.~\ref{fig:AutoEncoder}~--~\ref{fig:Priors}, which highlights the main approaches described here. This section touches upon machine learning tools used in SRL such as auto-encoders or siamese networks; a more detailed description of these is given later in Section~\ref{sec:tools}. \subsection{Reconstructing the observation} \label{sub:recon} A first idea that can be exploited is the fact that a true state, along with some noise, was used to generate the observation. Under the hypothesis that the noise is not too large, compressing the observation should retain the important information contained in the true state. While this idea is very often exploited with dimensionality reduction algorithms \cite{fodor2002survey} such as Principal Component Analysis (PCA), we focus here on recent approaches specifically dealing with state representation. The PCA algorithm is a linear transformation able to compress and decompress observations with minimal reconstruction error. PCA has been exploited to reduce the dimensionality of the state space during learning \cite{Curran16}. By projecting images into a 3- or 4-dimensional space, it is possible to produce a state that is used by a reinforcement learning algorithm and that reduces the convergence time in Super Mario games \cite{Karakovskiy12} and in different simulations such as Swimmers or Mountain Car.
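Such a PCA projection can be sketched in a few lines of numpy via the singular value decomposition (the 3-dimensional target and the random "observations" below are illustrative, not the cited experimental setup):

```python
import numpy as np

def pca_states(observations, k=3):
    """Project observations onto their top-k principal components,
    giving a k-dimensional state for a downstream RL algorithm."""
    X = observations - observations.mean(axis=0)      # center the data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = components
    return X @ Vt[:k].T

# e.g. 100 flattened frames of 32 "pixels" each (illustrative data)
obs = np.random.default_rng(0).normal(size=(100, 32))
states = pca_states(obs, k=3)   # shape (100, 3)
```

The policy then consumes the 3-dimensional `states` instead of the raw observations.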
Auto-encoders are models that learn to reproduce their input under constraints on their internal representation, such as dimensionality constraints (Fig. \ref{fig:AutoEncoder}). Their architecture can therefore be used to learn a particular low-dimensional representation $s_t$ by reconstructing $o_t$. Simple auto-encoders can be used to learn a 2D representation of a real pole from raw images \cite{Mattner12} (see Section \ref{sec:EvaluationScenarios} on evaluation scenarios). After training, the encoding vector from the AE is used to learn a controller to balance the pole. An auto-encoder whose internal representation is constrained to represent a position that serves as input to a controller is also presented in \cite{Finn15} and \cite{Alvernaz17}. The proposed models learn a state representation from raw pixels of, respectively, a PR2 robot's hand and an agent in the VizDoom environment. These models, based on auto-encoders that reconstruct the observation at the same time step, can however learn only if the factors of variation are linked solely to the actual state, or if very prominent features exist \cite{Lesort17}. In order to relax this assumption, it is possible to reconstruct observations from other time steps, or to use constraints on the evolution of the state (as detailed further in Section \ref{sec:forward}) to focus the reconstruction on features relevant to the system dynamics. An example of an auto-encoder tuned to take the system dynamics into account is proposed in \cite{Goroshin15}, where an AE with a siamese encoder projects sequences of images into a state representation space $\mathcal{S}$ with the constraint that the transition between $s_t$ and $s_{t+1}$ be linear. They use observations at several time steps in order to take time into account in the representation, and predict future observations through a single decoder that reconstructs $\hat{o}_{t+1}$.
This makes the model able to learn representations that are related to several time steps and to filter out random features of the environment. The idea of using an auto-encoder to learn a projection into a state space where transitions are assumed to be linear has also been used by \cite{Watter15}. The model presented, ``Embed To Control'' (E2C), consists of a deep generative model that learns to generate image trajectories from a linear latent space. Extending \cite{Watter15}, the representation with dynamic constraints can be learned on policy at the same time as a reinforcement learning algorithm learns a task \cite{Hoof16}. The authors compared different types of auto-encoders to learn visual and tactile state representations and use these representations to learn manipulation task policies for a robot. Sharing the same idea of embedding dynamic constraints into auto-encoders, Deep Variational Bayes Filters (DVBF) are an extension of Kalman filters which learn to reconstruct the observation from a non-linear state space using variational inference \cite{Karl16}. Reconstruction from a non-linear state space, based on a model inspired by the Deep Dynamical Model (DDM) \cite{Wahlstrom15} and E2C \cite{Watter15}, is proposed in \cite{Assael15}. It is argued that the model is designed for better training efficiency and can learn tasks with complex non-linear dynamics \cite{Assael15}. Their results show improvements over the PILCO model \cite{Deisenroth11}, which learns a state representation by only minimizing the reconstruction error, without constraining the latent space. \subsection{Learning a forward model} \label{sec:forward} The previous subsection reviewed how reconstructing an observation helps learn state representations. We now review how the temporal dynamics of the system can serve the same purpose, and present approaches that rely on learning a \textit{forward} model to learn a state space.
The general idea is to force states to efficiently encode the information necessary to predict the next state (Fig.~\ref{fig:ForwardModel}). As described in Section \ref{sec:formalism}, in the case of the forward models we study here, the model is used as a proxy for learning $s_t$. The model first projects the observation space onto the state space to obtain $s_t$, then applies a transition to predict $\hat{s}_{t+1}$. The error is computed by comparing the estimated next state $\hat{s}_{t+1}$ with the value of $s_{t+1}$ derived from the next observation $o_{t+1}$. Note that forward models can benefit from the observation reconstruction objective presented in Section \ref{sub:recon}. As an example, the works previously presented in Section \ref{sub:recon} \cite{Goroshin15,Hoof16,Watter15,Assael15,Karl16} belong to the auto-encoder category of models; however, they all predict future observations to learn representations and therefore also belong to the family of forward models. The method these works use to combine forward models and auto-encoders consists in mapping $o_t$ to $s_t$, and then computing the transition, with the help of $a_t$, to obtain $\hat{s}_{t+1}$. $\hat{s}_{t+1}$ is then remapped onto the pixel space in the form of a vector $\hat{o}_{t+1}$, and the error is computed pixel-wise between $\hat{o}_{t+1}$ and $o_{t+1}$. One common assumption is that the forward model in the learned state space is linear \cite{Goroshin15, Hoof16}. The transition is then just a linear combination of $s_t$ and $a_t$, as in Eq. \ref{eq:linear}, where $W, U$ and $V$ are either fixed or learned parameters \cite{Hoof16}.
\begin{equation} \hat{s}_{t+1}= W * s_t + U * a_t + V \label{eq:linear} \end{equation} In a similar way, the \textit{Embed to Control} model (E2C), based on variational auto-encoders, uses Eq.~\ref{eq:linear} to compute the mean $\mu$ of a distribution and learns supplementary parameters for the variance $\sigma$ of the distribution~\cite{Watter15}. Then, $\hat{s}_{t+1}$ is computed with Eq. \ref{eq:linear2}: \begin{equation} \hat{s}_{t+1} \sim \mathcal{N} (\mu=W* s_t + U * a_t + V, \sigma) \label{eq:linear2} \end{equation} Using distributions to compute $\hat{s}_{t+1}$ makes it possible to use the KL-divergence to train the forward model. This method is also used in \cite{Karl16} and \cite{Krishnan15}. However, the transition model in \cite{Krishnan15} considers the KL-divergence between $P(s_{t+1})$ and $P(\hat{s}_{t+1})$ and does not use the reconstruction loss based on $o_{t+1}$ and $\hat{o}_{t+1}$. The use of $a_t$ is a common feature of most forward-model based approaches. In fact, as several future states $s_{t+1}$ are possible from a given state, $s_t$ alone does not contain enough information to predict $s_{t+1}$. The only approach that gets rid of the need for actions assumes that the transition from $s_{t-1}$ to $s_{t}$ makes it possible to deduce the transition from $s_t$ to $s_{t+1}$, and uses several past states to predict $s_{t+1}$ \cite{Goroshin15}; actions are therefore implicit in this approach. Another use of a forward model, connected to an Intrinsic Curiosity Module (ICM) which helps agents explore and discover the environment out of curiosity when extrinsic rewards are sparse or absent, is proposed in \cite{Pathak17}.
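Returning to the linear transition of Eq.~\ref{eq:linear}: when the states are given and $W$, $U$ and $V$ are learned rather than fixed, the fit reduces to a least-squares problem. A numpy sketch on synthetic, noiseless transitions (all matrices below are illustrative values, not taken from any cited work):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth linear dynamics, unknown to the learner (illustrative values).
W_true = np.array([[0.9, 0.1], [0.0, 0.8]])
U_true = np.array([[0.5], [1.0]])
V_true = np.array([0.1, -0.2])

S = rng.normal(size=(200, 2))                   # states s_t
A = rng.normal(size=(200, 1))                   # actions a_t
S_next = S @ W_true.T + A @ U_true.T + V_true   # states s_{t+1}

# Fit W, U, V jointly by least squares on the stacked regressors [s_t, a_t, 1].
X = np.hstack([S, A, np.ones((len(S), 1))])
Theta, *_ = np.linalg.lstsq(X, S_next, rcond=None)
W_hat, U_hat, V_hat = Theta[:2].T, Theta[2:3].T, Theta[3]
```

In the SRL setting the states are not given but produced by the encoder, so in practice the transition parameters and the encoder are optimized jointly by gradient descent on the same prediction error.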
In the ICM model, an intrinsic reward signal is computed from the forward model's loss function $\mathcal{L}_{fwd}$ ($\hat{f}$ is the forward function learned by the model, $\hat{\phi}$ is the encoding model): \begin{equation} \mathcal{L}_{fwd} (\hat{\phi}(o_{t+1}), \hat{f}(\hat{\phi}(o_{t}),a_{t})) = \frac{1}{2} \parallel \hat{f} (\hat{\phi}(o_{t}),a_{t}) - \hat{\phi}(o_{t+1}) \parallel_2 ^2 \end{equation} It is argued that in this model there is no incentive for $s_t$ to encode any environmental features that cannot influence, or are not influenced by, the agent's actions. The learned exploration strategy of the agent is therefore robust to uncontrollable aspects of the environment, such as the presence of distractor objects, changes in illumination, or other sources of noise \cite{Pathak17}. Forward models are therefore able to learn representations of controllable factors: in order to predict the next state, the model must understand the object being controlled. This kind of representation can also be learned through a controllability prior \cite{Jonschkowski17}. If a robot acts by applying forces, controllable things should be those whose accelerations correlate with the actions of the robot. Accordingly, a loss function can be defined whose minimization maximizes the covariance between an action dimension $i$ and the accelerations in state dimension $i$. The following formula from \cite{Jonschkowski17} makes it explicit: \begin{equation} Controllability_i = e^{-cov(a_{t,i},s_{t+1,i})}, \label{equation_Prior_controllability} \end{equation} where $cov(a_{t,i},s_{t+1,i})$ is the covariance between the state dimension $s_{t+1,i}$ and the action $a_{t,i}$ that led to that state. Note that here the learned state is assumed to represent an acceleration.
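Both quantities have short, direct implementations. The sketch below assumes states and actions are already available as arrays; `f` stands for any learned transition function:

```python
import numpy as np

def curiosity_reward(phi_o_next, phi_o, a, f):
    """Intrinsic reward = forward prediction error in state space (L_fwd)."""
    return 0.5 * np.sum((f(phi_o, a) - phi_o_next) ** 2)

def controllability(actions, next_states, i):
    """Controllability prior for dimension i: e^{-cov(a_{t,i}, s_{t+1,i})}.
    Minimizing it pushes action dim i to covary with state dim i."""
    cov = np.cov(actions[:, i], next_states[:, i])[0, 1]
    return np.exp(-cov)
```

A large curiosity reward flags transitions the forward model cannot yet predict, which is exactly where exploration is directed.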
Also related to the controllability prior is the notion of \textit{empowerment}~\cite{Klyubin05}, defined as an information-theoretic capacity of an agent's actuation channel to influence its own evolution. The concept of empowerment is related to \textit{accountability} or \textit{agency}, i.e., recognizing when an agent is responsible for originating the change of state in the environment. \subsection{Learning an inverse model} The forward model approach can be turned around: instead of learning to predict the next state (given the previous state and action), one uses the current and next states to predict the action between them. The inverse model framework is used in SRL by first projecting $o_t$ and $o_{t+1}$ onto learned states $s_t$ and $s_{t+1}$, and second, predicting the action $\hat{a}_t$ that would explain the transition from $s_t$ to $s_{t+1}$ (Fig. \ref{fig:InverseModel}). As before, learning this model can impose constraints on the state representation so that actions can be predicted efficiently. An example of using inverse models to learn state representations is the Intrinsic Curiosity Module (ICM) \cite{Pathak17}. It integrates both an inverse and a forward model, and the authors argue that using an inverse model is a way to bypass the hard problem of predicting original observations (e.g., pixels in images), since actions have a much lower dimension. A different kind of inverse model is used in \cite{Shelhamer17}, where the policy gradient algorithm used to learn a controller is augmented with auxiliary gradients from so-called \textit{self-supervised} tasks. In this case, in the absence of external supervision, the prediction error resulting from interactions with the environment acts as self-supervision. They learn an inverse dynamics model to retrieve from $o_t$ and $o_{t+1}$ the action $a_t$ performed between the two successive time steps.
Note that the connections between forward and inverse models are important: inverse models can provide supervision to learn representations that the forward model regularizes by learning to predict $s_{t+1}$~\cite{Agrawal16}. In practice, this is implemented by decomposing the joint loss function as the sum of the inverse model loss and the forward model loss \cite{Agrawal16}. Conversely, \cite{zhang18} shows in an ablation study that using an inverse model (along with a forward model and an auto-encoder) is the factor that contributes the most to learning a good state representation. Another approach including forward and inverse models, as well as a reconstruction of the observation from multimodal inputs, is \cite{Duan17}. \subsection{Using feature adversarial learning} \label{sec:adversarial} Adversarial networks \cite{Goodfellow14} can also be used for unsupervised learning of state representations. The use of the Generative Adversarial Network (GAN) framework to learn state representations is proposed in \cite{Chen16}. They present a model named InfoGAN that achieves the disentanglement of latent variables on 3D poses of objects. As described in \cite{Chen16}, the goal is to learn a generator distribution $P_G(o)$ that matches the real distribution $P_{data}(o)$. Instead of trying to explicitly assign a probability to every $o$ in the data distribution, GANs learn a generator network $G$ that samples from the generator distribution $P_G$ by transforming a noise variable $z \sim P_{noise}(z)$ into a sample $G(z)$. The noise variable has two components: a first one, $z_G$, randomly sampled from a Gaussian distribution, and a second one of smaller dimension, $z_U$, sampled from a uniform distribution. The latter is used during training so that $G(z)$ has high mutual information with $z_U$. The sample $z_U$ is then highly correlated with $G(z)$ and can thus be considered a state representation.
This generator is trained by playing against an adversarial discriminator network $D$ that aims at distinguishing between samples from the true distribution $P_{data}$ and the generator distribution $P_G$. The authors succeed in learning states corresponding to object orientations from sequences of images. Further examples of SRL with Generative Adversarial Networks are presented in \cite{Donahue16, Dumoulin16}: BiGAN and ALI are extensions of regular GANs that learn the double mapping from image space to latent space and from latent space to image space. They allow the learned feature representation to be useful for auxiliary supervised discrimination tasks, and to be competitive with unsupervised and self-supervised feature learning. The BiGAN has also been tested in \cite{Shelhamer17} to learn state representations for reinforcement learning, but led to lower performance than their own approach (Section \ref{sec:forward}). \subsection{Exploiting rewards} As opposed to RL, the use of a reward value in SRL is not compulsory. However, it can be used as supplementary information to help differentiate states and to learn task-related representations. Rewards are helpful to disentangle meaningful information from noisy or distracting information, and to tie the representation to a particular task. In a multi-task setting, however, this approach can also be used to learn a generic state representation that is relevant to different tasks. A \textit{predictable reward prior}, which estimates $\hat{r}_{t+1}$ given a state $s_t$ and an action $a_t$, is implemented in \cite{Munk16} (along with a forward model) to learn a state for reinforcement learning. Another approach that exploits rewards is presented in~\cite{Oh17}. Besides predicting the reward similarly to~\cite{Munk16}, they also learn to predict the value (discounted sum of future rewards) of the next state, and exploit this capacity to plan multiple steps ahead for policy learning.
The authors state that predicting rewards for multiple steps is much easier than predicting observations, while still providing the important information for learning a policy. A dimensionality reduction model called \textit{reward weighted principal component analysis} (rwPCA), another way of using rewards for state representation, was proposed in \cite{Parisi17}. \textit{rwPCA} uses data collected by an RL algorithm and operates a dimensionality reduction strategy that takes the reward into account to keep the information in a compressed form. The compressed data is afterwards used to learn a policy. With the same idea of constructing a task-related representation, \cite{Jonschkowski15} and \cite{Lesort17} use rewards as supplementary information to impose constraints on the state space topology. One of these constraints makes the space more suited to discriminating between states with different rewards; the state space is then particularly adapted to solving a given task. This constraint is called the \textit{causality prior} in \cite{Jonschkowski15} and \cite{Lesort17}. It assumes that if we obtain two different rewards after performing the same action at two different time steps, then the two corresponding states should be differentiated and far apart in the representation space (Equation \ref{equation_Prior_Caus}). \begin{equation} \mathcal{L}_{Caus}(D,\hat{\phi})=\mathbf{E}[ e^{-\parallel\hat{s}_{t_2}-\hat{s}_{t_1}\parallel^2} \mid a_{t_1}=a_{t_2},r_{t_1+1}\neq r_{t_2+1}] \label{equation_Prior_Caus} \end{equation} \subsection{Other objective functions} \label{sub:prior} In this section, we present other approaches assuming various specific constraints for state representation learning.
Following~\cite{Lake16}, the learning process can be constrained by prior knowledge (either initially provided by the designer or acquired via learning) to allow the agent to leverage existing common sense, intuitive physics, physical laws, mental states of others, as well as other abstract regularities such as compositionality and causality. This kind of a priori knowledge is called a \textit{prior} \cite{Bengio12,Jonschkowski15,Lesort17,Jonschkowski17} and is defined through cost functions. These loss functions are applied in the state space in order to impose the required constraints on the model projecting the observations into the state space. In the following, $\Delta s_t = s_{t+1}-s_t$ is the difference between states at times $t$ and $t+1$, and $D$ is a set of observations. \begin{itemize} \item \textbf{Slowness Principle}\\ The slowness principle assumes that interesting features fluctuate slowly and continuously through time, and that a radical change inside the environment has low probability \cite{Wiskott02,Kompella11}. \begin{equation} \mathcal{L}_{Slowness}(D,\phi)=\mathbb{E}[\parallel\Delta s_t\parallel^2] \label{equation_Prior_Temporel} \end{equation} This assumption goes by other names depending on the unit of $s_t$, e.g., the prior of time coherence (time) \cite{Jonschkowski15,Lesort17} or inertia (velocity) \cite{Jonschkowski17}. \item \textbf{Variability} \\ The assumption of this prior is that the positions of relevant objects vary, and learning state representations should then focus on moving objects \cite{Jonschkowski17}. \begin{equation} \mathcal{L}_{Variability}(D,\phi)=\mathbb{E}[ e^{- \parallel s_{t_1}- s_{t_2}\parallel}] \label{equation_Prior_Variation} \end{equation} Here $e^{-d}$ is used as a similarity measure that equals 1 if the distance $d$ between states is 0, and goes to 0 with increasing distance between states.
Note that this prior counter-balances the slowness prior introduced above, as slowness alone would lead to constant values. \item \textbf{Proportionality}\\ The proportionality prior introduced in \cite{Jonschkowski15} assumes that for the same action performed in different states, the reactions to this action will have proportional amplitude or effect. The representation should then vary by the same amount for two identical actions performed in different situations. \begin{equation} \mathcal{L}_{Prop}(D,\phi)=\mathbb{E}[(\parallel\Delta s_{t_2}\parallel-\parallel\Delta s_{t_1}\parallel)^2 | a_{t_1}=a_{t_2}] \label{equation_Prior_Prop} \end{equation} \item \textbf{Repeatability}\\ This prior states that two identical actions applied in similar states should produce similar state variations, not only in magnitude but also in direction \cite{Jonschkowski15}. \begin{equation} \mathcal{L}_{Rep}(D,\phi)=\mathbb{E}[e^{-\parallel s_{t_2}-s_{t_1}\parallel^2}\parallel\Delta s_{t_2}-\Delta s_{t_1}\parallel^2 \mid a_{t_1}=a_{t_2}] \label{equation_Prior_Rep} \end{equation} \item \textbf{Dynamic verification}\\ Dynamic verification \cite{Shelhamer17} consists in identifying the corrupted observation $o_{t_c}$ in a history of $K$ observations $o_t$, where $t \in \llbracket 0,K \rrbracket$. Observations are first encoded into states, and the sequence is classified by a learned function $f$ that outputs the corrupted time step. Negative samples are produced by incorporating observations from a wrong time step into the sequence of images. This discriminative approach forces SRL to encode the dynamics in the states. \item \textbf{Selectivity}\\ States can be learned by using the idea that factors such as objects correspond to `independently controllable' aspects of the world that can be discovered by interacting with the environment \cite{Thomas17}. Knowing the dimension $K$ of the state space, the aim is to train $K$ policies $\pi_k$ with $k \in \llbracket 1,K \rrbracket$.
The goal is that the policy $\pi_k$ causes a change in $s_t^{(k)}$ only, and not in any other feature. To quantify the change in $s_t^{(k)}$ when actions are taken according to $\pi_k$, the selectivity of a feature $k$ is: \begin{equation} \mathcal{L}_{sel}(D,\phi,k)=\mathbb{E} \bigg[ \frac{\parallel s_{t+1}^{(k)}-s_t^{(k)}\parallel}{\sum_{k'} \parallel s_{t+1}^{(k')}-s_{t}^{(k')}\parallel} | s_{t+1} \sim P_{s_t, s_{t+1}}^{a} \bigg] \label{equation_Prior_Sel} \end{equation} where $P_{s_t, s_{t+1}}^{a}$ is the environment transition distribution from $s_t$ to $s_{t+1}$ under action $a$. The selectivity of $s_{t}^{(k)}$ is maximal when only that single feature changes as a result of some action. Maximizing the selectivity improves the disentanglement of controllable factors in order to learn a good state representation. \end{itemize} \input{Tables/models_characteristics.tex} \subsection{Using hybrid objectives} Reconstruction of data in the observation space, forward models, inverse models, exploitation of rewards, and other objective functions presented in the previous sections are different approaches to tackle the state representation learning challenge. However, these approaches are not incompatible, and models often take advantage of several objective functions at the same time. For instance, interactively \textit{learning to poke by poking} \cite{Agrawal16} is an example of empirical learning of intuitive physics using $o_t$ and $o_g$ as current and goal images, respectively, in order to predict the poke action. The latter is composed of the location, angle and length of the action that sets the object in the state of goal image $o_g$. Simulations show that using the inverse model, or jointly the inverse and forward models, improves performance at pushing objects, and that when the available training data is reduced, the joint model outperforms the inverse model with a performance comparable to using a considerably larger amount of data.
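Several of the priors listed above reduce to a few lines of array arithmetic once a batch of encoded states is available. The following NumPy sketch is our own illustration (function names and batch conventions are ours, not from the cited works); each function corresponds to one of Eqs.~(\ref{equation_Prior_Temporel})--(\ref{equation_Prior_Rep}), with expectations approximated by batch means:

```python
import numpy as np

def slowness(ds):
    # E[||Δs_t||²]: penalize large state changes between consecutive steps
    return np.mean(np.sum(ds ** 2, axis=1))

def variability(s1, s2):
    # E[exp(-||s_t1 - s_t2||)]: push randomly paired states apart
    return np.mean(np.exp(-np.linalg.norm(s1 - s2, axis=1)))

def proportionality(ds1, ds2):
    # E[(||Δs_t2|| - ||Δs_t1||)²] over pairs that share the same action
    n1 = np.linalg.norm(ds1, axis=1)
    n2 = np.linalg.norm(ds2, axis=1)
    return np.mean((n2 - n1) ** 2)

def repeatability(s1, s2, ds1, ds2):
    # E[exp(-||s_t2 - s_t1||²) ||Δs_t2 - Δs_t1||²] over same-action pairs
    w = np.exp(-np.sum((s2 - s1) ** 2, axis=1))
    return np.mean(w * np.sum((ds2 - ds1) ** 2, axis=1))
```

In a hybrid objective these terms are simply summed, usually with hand-tuned weights, together with reconstruction or reward-based losses.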
\input{graphes/hybrid} The authors in \cite{Finn15,Goroshin15} use the reconstruction of the observation and the slowness principle in their SRL approach. \cite{Goroshin15,Hoof16,Watter15,Assael15,Karl16,Ha18} combine the reconstruction of observation and forward models. \cite{Jonschkowski15,Lesort17} take advantage of rewards with a causality prior (Eq. \ref{equation_Prior_Caus}) and several other objective functions such as the slowness principle, proportionality, and repeatability to learn state representations. We illustrate, as an example, the combination of objective functions from \cite{Watter15} in \figurename~\ref{fig:AllModels}. Table \ref{tab:models_characteristics} summarizes all the reviewed models by showing, for each one, which proxies or surrogate functions have been used for learning: reconstruction of observation, prediction of the future (forward model) and/or retrieving actions (inverse model), and what kind of information is used: action and/or rewards. \section{Building blocks of State Representation Learning} \label{sec:tools} In this section, we cover various implementation aspects relevant to state representation learning and its evaluation. We refer to specific surrogate models, loss function specification tools, and strategies that help constrain the information bottleneck and generalize when learning low-dimensional state representations. \subsection{Learning tools} We first detail a set of models that, through an auxiliary objective function, help learn a state representation. One or several of these learning tools can be integrated in broader SRL approaches, as was previously described. \subsubsection{Auto-encoders (AE)} Auto-encoders (AE) are a common tool for learning state representations and are widely used for dimensionality reduction \cite{Hinton06, Wang12,Wang16}. Their objective is to output a reproduction of the input. Their architecture is composed of an encoder and a decoder.
The encoder projects the input to a latent space representation (often of lower dimension than the input), which is then re-projected to the output by the decoder. In our problem setting, $o_t$ is the input, $s_t$ the latent representation, and $\hat{o}_{t}$ the output. The dimensionality of the latent representation can be set to the dimension of the state representation we want to learn, thereby enforcing it. The AE will then automatically learn a compact representation by minimizing the reconstruction error between input and output. The usual loss function $\mathcal{L}$ to measure the reconstruction error is the mean squared error (MSE) between input and output, computed pixel-wise, although it can be any norm. \begin{equation} Loss = \mathcal{L}( x, \hat{x}) \label{equation_Squared_error} \end{equation} Auto-encoders are used in different SRL settings \cite{Finn15,Mattner12}; PCA can also be considered as a particular case of auto-encoder \cite{Curran16}. \subsubsection{Denoising auto-encoders (DAE)} The main issue with auto-encoders is the risk of finding an easy but unsatisfying solution that minimizes the pixel reconstruction error. This occurs when the decoder reconstructs a kind of \textit{average}-looking dataset. To make the training more robust to this kind of mean-optimization solution, denoising auto-encoders (DAE) \cite{Vincent08,Vincent10} can be used. This architecture adds noise to the input, making the ``average'' image a poorer solution than it is for the original AE. The DAE architecture is used in \cite{Hoof16} to learn visual and tactile state representations. The authors compared state representations learned by a DAE and a variational auto-encoder (VAE) by using the learned states in a reinforcement learning setting. They found that, in most cases, DAE state representation models gather fewer rewards than those with VAE state representations.
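The difference between the AE and DAE objectives can be made concrete with a toy sketch. The linear encoder/decoder and function names below are our own simplification (the cited works use deep networks): with \texttt{noise\_std} set to zero the function computes the plain AE reconstruction loss, while a positive value gives the denoising variant, which must reconstruct the \emph{clean} input from a corrupted one.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(o, W_enc):   # observation -> state (toy linear encoder)
    return o @ W_enc

def decode(s, W_dec):   # state -> reconstructed observation
    return s @ W_dec

def reconstruction_loss(o, W_enc, W_dec, noise_std=0.0):
    # noise_std = 0 gives a plain AE; noise_std > 0 gives a DAE,
    # which is trained to reconstruct the clean input from a noisy one
    o_in = o + noise_std * rng.standard_normal(o.shape)
    o_hat = decode(encode(o_in, W_enc), W_dec)
    return np.mean((o - o_hat) ** 2)   # pixel-wise MSE
```

The identity mapping, for instance, achieves zero AE loss but a strictly positive DAE loss, which is precisely why the denoising objective discourages trivial solutions.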
\subsubsection{Variational auto-encoders (VAE)} The SRL literature has also benefited from the variational inference used in variational auto-encoders (VAE) \cite{Kingma13,Rezende14} to learn a mapping from observations to state representations. A VAE is an auto-encoder with probabilistic hidden cells: it interprets $\mathcal{S}$ as a set sampled from the distribution $P(s_t|o_t)$. It then approximates $P(s_t|o_t)$ with a model $q_\theta(s_t|o_t)$ called the approximate posterior or recognition model, where $\theta$ represents the parameters of the model, which, for instance, can be a neural network. The VAE also provides a generator which approximates $P(o_t|s_t)$ with a model $p_\phi$, where $\phi$ represents the parameters of the generator. Both models $p$ and $q$ are then trained by minimizing the error between $o_t$ and $\hat{o}_t$ and the KL divergence between $q_\theta(s_t|o_t)$ and the normal distribution $\mathcal{N}(\mu=0, \sigma=\mathds{I})$ (where $\mu$ is the mean of the distribution, $\sigma$ its covariance matrix and $\mathds{I}$ the identity matrix). VAE-related models that do not use exactly the original VAE, but variations of it, are \cite{Watter15,Assael15,Krishnan15,Hoof16,Karl16,Higgins16}. \subsubsection{Siamese networks} Siamese networks \cite{Chopra05} consist of two or more identical networks that share their parameters, i.e., have the exact same weights. The objective of the siamese architecture is not to classify input data, but to differentiate between the inputs (\textit{same} versus \textit{different} class or condition, for example). This kind of architecture is useful to impose constraints in the latent space of a neural network. For example, it can be used to learn similarity metrics or time dependencies, as is done in time-contrastive networks \cite{Sermanet17}. In the context of SRL, siamese networks can be employed to implement some of the priors previously presented in Section \ref{sub:prior}.
For example, two siamese networks can be used to compute a similarity loss and optimize the slowness principle (or temporality prior) between $s_t$ and $s_{t+1}$ as in \cite{Lesort17}. In \cite{Goroshin15}, three siamese networks are used to compute three consecutive states at the same time, which are fed into another model that predicts the next state. \input{graphes/siames.tex} \subsection{Observation/action spaces} \label{sec:obs_act_spaces} This section presents a summary of the dimensionality of the observation, state and action spaces, as well as the applications in which the reviewed papers are evaluated (Table~\ref{tab:dimension_table}). The continuity or discreteness of the action space is also shown. These are good proxies to assess the complexity of the problem tackled: the higher the dimensionality of observation and action, and the smaller we want the dimension of the state to be, the harder the task of learning a state representation becomes, because much more information needs to be processed and filtered in order to keep only the information that is essential. We note also that the reviewed literature often presents results with a presumably higher dimensionality of learned states than theoretically needed (e.g., using a state of dimension 6 for a 2-joint robotic arm). The dimensionality of the state may seem obvious when we are learning a state that should (according to the task) correlate with a clear dimension (position, distance, angle) in the environment. However, deciding the dimensionality of the state space is not always trivial when we are learning more abstract states with no clear dimension associated with them. For instance, the state associated with an Atari game scene in a complex situation is not as easy to interpret or assess as the dimensionality of states associated with the position of an arm, its angle or velocity.
Indeed, since the learning objectives we reviewed are just proxies to guide state representation learning, they can lead to something different from the ideal and minimal state representation. In particular, increasing the capacity of the model by augmenting the dimensionality of the state above the dimension of the true state can lead to a better optimization of the learning objectives. \input{Tables/dimension_table.tex} \subsection{Evaluating learned state representations} \label{sec:evaluation} This section provides a review of validation metrics and embedding quality evaluation techniques used across the literature. These are summarized in Table \ref{tab:metrics}. \input{Tables/metric_tab.tex} The most common way of evaluating the quality of the learned state space is by letting an agent use the states to learn a control task, thus assessing whether the representation is general enough to be transferable. This method is for example applied to evaluate the performance of an SRL algorithm using reinforcement learning \cite{Jonschkowski15,Jonschkowski17,Munk16,Hoof16,Finn15,Pathak17,Shelhamer17,Oh17,Parisi17,Assael15}. However, this approach is often very costly and inefficient in terms of time, computation and data. Moreover, various state-of-the-art RL algorithms may be applied to learn a policy and may result in very different performances for a given state representation. The uncertainty inherent to RL therefore makes RL-based evaluation sufficient, but neither practical nor appropriate as a necessary condition for validating a particular state representation. In consequence, it would be desirable to have an intermediate way to assess the representation that is independent of the algorithm applied to complete the task, and there are, indeed, several more direct ways to assess the learned state space. For example, visual assessment of the representation's quality can be done using a nearest-neighbors approach as in \cite{Sermanet17, Pinto16}.
The idea is to look at the nearest neighbors in the learned state space and, for each neighbor, retrieve its corresponding observation. Visual inspection can then reveal whether these observations indeed correspond to nearest neighbors in the ground truth state space $\tilde{s}$ we intend to learn. While nearest neighbor coherence can be assessed visually, KNN-MSE is a quantitative metric derived from this qualitative information \cite{Lesort17}. Using the ground truth state value for every observation, KNN-MSE measures the distance between the value of an observation and the values of the nearest neighbor observations retrieved in the learned state space. A low distance means that a neighbor in the ground truth is still a neighbor in the learned representation, and thus, local coherence is conserved. For an observation $o$, KNN-MSE is computed using its associated learned state $s=\phi(o)$ as follows: \begin{equation}\label{eq:knn_mse_crit} \textrm{KNN-MSE}(s)=\frac{1}{k}\sum_{s' \in KNN(s,k) } || \tilde{s} - \tilde{s}' ||^2 \end{equation} where $\textrm{KNN}(s,k)$ returns the $k$ nearest neighbors of $s$ (chosen with the Euclidean distance) in the learned state space $\mathcal{S}$, $\tilde{s}$ is the ground truth associated with $s$, and $\tilde{s}'$ is the ground truth associated with $s'$. One of the characteristics that a good representation should possess is a disentangled representation of the factors of variation. The evaluation of this characteristic can be done using the selectivity prior (see Section \ref{sub:prior} and Eq. \ref{equation_Prior_Sel}) from \cite{Thomas17}. This prior concerns the independence among variations of the representation under each action. However, it is applicable mainly if actions are known to be independent. Another way to quantitatively compare the degree of disentanglement reached by a model is using the disentanglement metric score \cite{Higgins16}.
It assumes that the data is generated by a process in which the generative factors are known and interpretable, and that some are conditionally independent. In order to measure the disentanglement, it uses the accuracy of a simple low-capacity, low-VC-dimension linear classifier (reported as the \textit{disentanglement metric score}). The classifier's goal is to predict which generative factor was kept fixed, given the difference between pairs of representations sharing that latent factor. Other metrics from the area of manifold learning can be used, such as distortion \cite{Indyk01} and NIEQA \cite{Zhang12}; both share the same principle as quantitative measures of the global quality of a representation: the representation space should, as much as possible, be an undistorted version of the original space. Distortion \cite{Indyk01} gives insight into the quality of a representation by measuring how the local and global geometry coherence of the representation changes with respect to the ground truth. It was designed in the \textit{embeddings} context as a natural and versatile paradigm for solving problems over metric spaces. NIEQA (Normalization Independent Embedding Quality Assessment) \cite{Zhang12} is a more complex evaluation than distortion that measures the local geometry quality and the global topology quality of a representation. The local part of NIEQA checks whether the representation is locally equivalent to a Euclidean subspace that preserves the structure of local neighborhoods. NIEQA's objectives are aligned with KNN-MSE \cite{Lesort17} as a measure to assess the quality of the representation, especially locally. The global NIEQA measure is also based on the idea of preserving the original structure in the representation space, but instead of looking at neighbors, it samples ``representative'' points in the whole state space. It then considers the preservation of the geodesic distance between those points in the state space.
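Among these measures, the KNN-MSE criterion of Eq.~(\ref{eq:knn_mse_crit}) is particularly simple to implement. The sketch below is our own (a brute-force version following \cite{Lesort17}; names and conventions are ours): neighbors are found in the \emph{learned} space, but the error is measured in the \emph{ground-truth} space, so that the score is low exactly when local neighborhoods are preserved.

```python
import numpy as np

def knn_mse(states, gt_states, k):
    """states: (N, d) learned states; gt_states: (N, p) ground truth.
    Returns the mean KNN-MSE over all N samples."""
    scores = []
    for i in range(len(states)):
        # k nearest neighbours of s_i in the learned space (excluding itself)
        dists = np.linalg.norm(states - states[i], axis=1)
        dists[i] = np.inf
        neighbours = np.argsort(dists)[:k]
        # compare the corresponding ground-truth states
        err = np.mean(np.sum((gt_states[i] - gt_states[neighbours]) ** 2, axis=1))
        scores.append(err)
    return float(np.mean(scores))
```

A representation whose nearest-neighbor structure matches the ground truth scores low; scrambling the learned states while keeping the ground truth fixed inflates the score.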
One last mechanism to assess SRL methods is to use supervised learning to learn a regression from the learned representation to its ground truth \cite{Jonschkowski17}. The data is split into training and test sets to evaluate whether the regression can generalize to unseen states. The assumption is that this regression measures how well meaningful features are encoded in the state: good generalization indicates a good encoding. \subsection{Evaluation scenarios} \label{sec:EvaluationScenarios} Datasets used to validate state representation learning include varied, but mainly simulated, environments because they are easier to reproduce and generate. Unlike in image recognition challenges, where MNIST digits or ImageNet datasets prevail, in state representation learning a varied set of regular video games or visuomotor tasks in robotics can be found as a test suite for robotics control. Examples of simulated environments include, among others: \begin{itemize} \item Pendulum (inverted or classical): The goal is to represent the state of the pendulum \cite{Watter15,Jonschkowski17,Hoof16,Mattner12}. The pendulum starts in a random position, and the objective is to swing it up so it stays upright (there is no specified reward threshold at which the task is considered solved). \item Cart-Pole: consists of an inverted pendulum attached to a cart which moves along a frictionless track; the system is controlled by applying a +1 or -1 force to the cart, and a reward of +1 is provided for every time step that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center\footnote{\url{https://github.com/openai/gym/wiki/Leaderboard\#pendulum-v0}} \cite{Watter15,Jonschkowski17}. \item Atari games \cite{Bellemare13}: mostly low-dimensional (2D) simulated environments with different agents and goals.
In these games, states can be represented through different variables (time to achieve a task, amount of bonus, staying alive, etc.) \cite{Shelhamer17,Oh17}. \item More advanced test games include \textit{VizDoom}, where the levels passed, reward accumulated and exploration levels are used as evaluation metrics \cite{Pathak17,Alvernaz17}. Likewise, the Mario Benchmark \cite{Karakovskiy12} is a platform designed for reinforcement learning based on the ``Super Mario Bros'' video game. This test suite is used, for example, in \cite{Curran16,Pathak17}. \item Other evaluation benchmarks tested in the works reviewed in this survey include simulated octopus arms \cite{Engel06,Munk16}, labyrinths \cite{Thomas17}, navigation grids \cite{Magrans18,Oh17}, driving cars \cite{Jonschkowski15}, or \textit{mountain car} scenarios \cite{Curran16}. Another example is the \textit{bouncing ball}, where the goal is to learn a representation of one bouncing ball position in 2D (x,y) \cite{Karl16}. \item In the robotics domain we can find benchmarks on robot manipulation skills \cite{Finn15,Hoof16} such as Baxter pushing a button \cite{Lesort17}, grasping \cite{Finn15}, stabilizing \cite{Hoof16}, poking objects \cite{Agrawal16,Duan17} or balancing a real pendulum \cite{Mattner12}. Moreover, some approaches manage to learn in real-world scenarios, for instance, with mobile robots that explore an arena \cite{Jonschkowski15}. \end{itemize} Many of the latter simulated scenarios are part of Universe and OpenAI Gym \cite{Brockman16} or DeepMind Lab \cite{Beattie16}. These benchmarking tasks used in the most prominent state representation learning literature are summarized in Table \ref{tab:dimension_table}. \section{Discussion and future trends} In this section, we first discuss the implications of SRL for autonomous agents and the assessment, comparison and reproducibility of the representations learned.
Finally, we explore the consequences of SRL for the interpretability of machine learning algorithms. \subsection{SRL models for autonomous agents} SRL methods provide unsupervised tools for autonomous agents to learn representations of the environment without extra annotations. They require, however, that the agent gather data to learn from. Therefore, the role of environment exploration is an important dimension to investigate in SRL. If the space is not sufficiently explored by the agent, the acquisition of varied observations and exposure to actions that lead to optimal performance can be hindered \cite{Pathak17}. One way to incorporate exploration in SRL is to integrate curiosity or intrinsic motivations \cite{Oudeyer07} in the algorithm that collects data. The overall idea of this approach is to complement the extrinsic reward with an intrinsic reward that favors states where SRL makes the most progress. This is done, for example, in the Intrinsic Curiosity Module (ICM) \cite{Pathak17} by defining an intrinsic reward linked to the forward model error, which encourages exploration. This approach is improved in \cite{Magrans18} by balancing this exploratory behavior with a homeostatic drive to also favor actions that lead to familiar state-action pairs. The reverse question of how the learned state space can influence the performance of intrinsic motivation approaches \cite{Pere18} is also relevant. Automatic exploration designed to maximize the quality of a learned state representation is a field to be further explored in order to build high quality representations. Another approach to gather enough relevant data could be to perform data augmentation by adding data from simulation; however, the problem is then to make the model benefit from simulation data in real-life applications, a problem known as the \textit{reality gap} \cite{Mouret13}. Nevertheless, using both kinds of data was shown to improve results in particular applications \cite{Bousmalis17}.
An interesting research direction is therefore to study how to exploit simulation data to improve SRL for real world applications. Another problem in ultimately performing SRL autonomously (i.e., without manual parameter tuning) is the choice of the state representation dimension, which is made empirically in most reviewed approaches. The challenge of deciding the dimensionality automatically can be related to a bias-variance trade-off, as the dimensionality of the representation constrains the capacity of the model. Indeed, increasing the state dimension augments the capacity of the model, which, as a result, will be better at reducing the training error, but also leads to overfitting. As discussed in Section \ref{sec:obs_act_spaces}, learning criteria can be better optimized by models with large capacity, and thus, an automatic process is prone to overestimate the dimension needed. To avoid choosing the state dimension manually, it is possible to automatically select a number of features from a larger set such that they have a certain variance and are orthogonal \cite{Parisi17}. This can be done by using PCA to produce a set of features, among which the most significant ones are selected with respect to the chosen variance. PCA can also be modified to select reward-related components \cite{Parisi17}. Although the variance has to be fixed a priori, the authors claim that this is usually easier than choosing the state dimension. Extending this technique to other state representation approaches could be an interesting research direction. \subsection{Assessment, comparison and reproducibility in SRL} The assessment challenge of SRL is two-sided. First, there is no easy nor certified way of validating a learned representation. Secondly, the lack of common evaluation frameworks makes a fair comparison between approaches difficult.
As mentioned in Section \ref{sec:evaluation}, the most objective method to evaluate the quality of representations is to check whether the learned state representation can be used by an RL algorithm to solve a task more efficiently. However, this assessment is uncertain and unstable given the stochasticity of reinforcement learning algorithms \cite{Henderson17}. Moreover, it is not obvious which RL algorithm is the best choice, and thus, several should be used in the comparison. In practice, a large number of policy evaluation runs is therefore required in order to provide a robust assessment, which is possible in simulation but seldom applicable on real robots, given the robots' fragility and the experimentation time involved. In this case, it is therefore interesting to use several of the other measures presented in Section \ref{sec:evaluation}, which give only partial information on the state representation quality but can be applied for a comparison with a ground truth state. Comparing approaches from published results is also particularly hard because of the high variability of the environments and data used in the different approaches (as illustrated in Table~\ref{tab:dimension_table}). This points to the need for an evaluation framework incorporating several tasks and several evaluation metrics, similar to the ones proposed for reinforcement learning such as the \textit{DeepMind Control Suite} \cite{Tassa18}. Reproducibility guidelines with proper experimental techniques and reporting procedures, as pointed out in \cite{Henderson17} for RL, should also be defined for SRL. In the meantime, as there is not yet an ideal method for state representation assessment, researchers should at least make their simulation environment public (with possible ground truth), and use simulation settings from other approaches to provide fairer comparisons and facilitate reproducibility.
Furthermore, we strongly encourage authors to describe their experiments fully, in particular, to report their data and models' characteristics. \subsection{Providing interpretable systems} In 2018, European Union regulations on algorithmic decision-making included a ``right to explanation'', a ``right to opt-out'' and ``non-discrimination'' of models \cite{Goodman16}. Artificial intelligence research is thus granted an opportunity to further provide meaningful explanations of why algorithms work the way they do. The interpretability of results in machine learning is, however, a challenging problem that needs proper definition \cite{Lipton16}. In any case, monitoring the degree to which AI systems show the same thinking flows as humans is invaluable and crucial; not only to explain how human cognition works, but also to help AI make better and fairer decisions \cite{Lake16}. We define interpretability in the SRL context as the capacity for a human to link a variation in the representation to a variation in the environment, and to know why the representation was sensitive to this variation. As SRL is designed to provide this level of interpretability, it could help improve our understanding of learning algorithms' outputs. Indeed, the higher the dimension, the less interpretable the result is for humans. Therefore, the dimensionality reduction induced by SRL, coupled with the link to control and the possible disentanglement of variation factors, could be highly beneficial to improve our capacity to understand the decisions made by algorithms using this state representation. \section{Conclusion} State Representation Learning algorithms are designed to find a way to compress high-dimensional observation data into a low-dimensional, meaningful space for controlled systems. These models only require the agent's observations, its performed actions and, optionally, the reward of the associated task.
This work aims at presenting an accessible guide to SRL approaches, the existing tools for evaluation, and the common simulation settings used as benchmarks. We presented the learning objectives of the state-of-the-art approaches to SRL and their similarities and differences. We then discussed the use of SRL for autonomous agents, the difficulties in comparing existing approaches, and the interpretability of results. A general piece of advice when building SRL models is to integrate as many learning objectives as possible, depending on the available data. As an example, one could use a reconstruction objective for linking the state space to the observations, combined with a predictive objective (forward model) to capture dynamics, and a reward-based objective to apprehend the effects of the actions performed. More general priors could also be added to force the state space to be coherent and understandable for humans. While many models integrate several of these objectives, no proposed model currently includes all of them together. As SRL is designed to automatically learn representations from a set of unlabeled observations, it could be used in future work to learn from evolving environments and could be a step towards continual or lifelong learning. Another area to explore in the future is the integration of exploration strategies for data collection specifically designed to improve the learned state representation. \section{Acknowledgements} This research is funded by the DREAM project under the European Union's Horizon 2020 research and innovation program under grant agreement No 640891. We acknowledge Olivier Sigaud, Antonin Raffin, Cynthia Liem and other colleagues for insightful and detailed feedback. \bibliographystyle{apalike}
\section{Introduction} \label{sec:intro} We are interested in the unconstrained \emph{stochastic} non-convex optimisation problem \begin{subequations} \label{eq:problem} \begin{align} \min_{x\in\mathbb{R}^d}{f(x)}, \end{align} when the cost function~$f(x)$ is of the form \begin{align} f(x) = \frac{1}{n}\sum_{i=1}^{n}f_i(x) + R(x), \end{align} \end{subequations} where~$d$ denotes the dimension of the unknown variable~$x$ and~$n$ denotes the number of available observations, i.e. the size of the dataset. Here, $f_i(x)$ denotes a loss function and $R(x)$ denotes a regularizer. The stochasticity of the problem is due to the fact that we only have access to \emph{noisy} evaluations of the cost function~$f(x)$ and its gradient~$\nabla f(x)$ according to \begin{align} \label{eq:NoisyCostGrad} f_k = f(x_k) + e_k,\qquad g_k = \nabla f(x)|_{x = x_k} + v_k. \end{align} Here $e_k$ and $v_k$ denote the noise on the function and gradient evaluations, respectively. We take a particular interest in situations where the number of data~$n$ and/or the number of unknowns~$d$ is very large. The stochastic optimisation problem~\eqref{eq:problem} is one of the most commonly encountered problems within supervised machine learning. The stochastic nature of the problem arises in different ways. First we mention large-scale problems where it is prohibitive to evaluate the cost function and its gradient on the entire dataset. Instead it is divided into several mini-batches via a subsampling procedure, which also explains where the noise arises. \begin{figure}[ht!] \centering \includegraphics[width=0.5\columnwidth]{mnist_plot_mcavg} \caption{Solving the optimisation problem used in training a state-of-the-art deep convolutional neural network (CNN) used for recognizing images of handwritten digits from the MNIST data. Alg1 refers to our new developments in this paper, SG refers to basic stochastic gradient and Adam refers to \cite{KingmaB:2015}.
For a full account of these experiments, see Section~\ref{sec:Exp}.} \label{fig:MNIST} \end{figure} As a second example we mention the use of numerical algorithms in approximately computing the cost function and its gradients, inevitably resulting in stochastic optimisation problems. We illustrate the result of our new developments on a problem of the first kind in Figure~\ref{fig:MNIST}, namely the optimisation problem arising in training a deep convolutional neural network. The first stochastic optimisation algorithm was introduced almost 70 years ago by \cite{RobbinsM:1951}. They made use of first-order information only, motivating the name stochastic gradient (SG), which is the contemporary machine learning term for these algorithms originally referred to as stochastic approximation. Interestingly, most SG algorithms are not descent methods, since the stochastic nature of the update can easily produce a new iterate corresponding to an increase in the cost function, which is illustrated in Figure~\ref{fig:MNIST}. Instead they are in fact Markov chain methods, due to the fact that their update rule actually defines a particular Markov chain. This was clearly acknowledged already in the seminal paper by \cite{RobbinsM:1951}. \textbf{Contributions and key properties:} We will build heavily upon the Markov chain nature of SG, and our key contribution is a new construction enabled via an auxiliary variable trick allowing us to define an \emph{extended} Markov chain. The key feature of this construction is that we can efficiently make use of second-order (curvature) information in computing the search direction. This curvature information stems from an estimate of the inverse Hessian that we compute using a bounded history of previous iterates and stochastic gradients. The computational cost and memory footprint of this computation scale linearly in the number of data.
Another important contribution is a stochastic line search capable of adapting the step length. Our numerical experiments indicate that this capability is beneficial, especially during the early iterations. A practical feature is that our method only requires the user to select three tuning parameters: the size of the mini-batch, the size of the memory, and the weight of the regulariser. We also develop a method for updating a Cholesky factor given a new measurement pair, making our approach computationally cheap and numerically robust. We illustrate these properties using extensive numerical experiments comparing against current state-of-the-art methods on challenging large-scale real-world problems. \section{Background and related work} \label{sec:RW} Many numerical optimisation algorithms can be interpreted as learning algorithms, where the first step is to build a local model of the cost function~$f(x)$. This local model is then used to compute the next iterate, a new model is learned around this new iterate, and the procedure is repeated. The so-called \emph{second-order} methods make use of quadratic Taylor series approximations $q_k(x)$ of $f(x)$ around the current iterate~$x_k$ \begin{align} \label{eq:2orderModel} q_k(x) &= f(x_k) + g_k^{\mathsf{T}}(x - x_k) + \frac{1}{2}(x - x_k)^{\mathsf{T}}H_k^{-1}(x - x_k), \end{align} where $g_k$ denotes an approximation of the gradient~$\nabla f(x_k)$ and $H_k$ denotes an approximation of the inverse Hessian $(\nabla^2f(x_k))^{-1}$. Direct minimisation of the quadratic model~\eqref{eq:2orderModel} suggests the following update of the iterates \begin{align} \label{eq:GeneralUpdate} x_{k+1} = x_k - \alpha_k H_k g_k, \end{align} where $\alpha_k$ denotes the step length. The matrix~$H_k$ will be referred to as the \emph{scaling matrix} since it scales the gradient approximation~$g_k$.
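To make the role of the scaling matrix concrete, the following sketch contrasts the update \eqref{eq:GeneralUpdate} with $H_k = I$ against $H_k = (\nabla^2 f(x_k))^{-1}$ on a deterministic, ill-conditioned quadratic; the toy cost function and the step lengths are our own illustrative choices, not taken from the experiments in this paper.

```python
import numpy as np

# Illustrative comparison of the update x_{k+1} = x_k - alpha_k * H_k * g_k
# for two choices of scaling matrix on the toy cost f(x) = 0.5 x^T A x
# (our own example). A has disparate eigenvalues, mimicking ill-conditioning.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
f = lambda x: 0.5 * x @ A @ x

x_gd = np.array([1.0, 1.0])   # H_k = I: basic first-order method
x_qn = np.array([1.0, 1.0])   # H_k = inverse Hessian: Newton's method
H = np.linalg.inv(A)
for k in range(20):
    x_gd = x_gd - 0.009 * grad(x_gd)    # step length limited by the stiff direction
    x_qn = x_qn - 1.0 * H @ grad(x_qn)  # a unit step suffices with curvature

print(f(x_gd), f(x_qn))
```

With exact curvature a unit step minimises the quadratic immediately, whereas the first-order update must crawl along the stiff direction with a small step length.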
Many algorithms (including our present developments) update the iterates according to~\eqref{eq:GeneralUpdate}, but they differ greatly in how the components are found. Choosing the scaling matrix to be the identity $H_k = I$ we are back at the basic first-order gradient methods, and with $H_k = (\nabla^2f(x_k))^{-1}$ we have Newton's method. The \emph{quasi-Newton} methods sit somewhere in between these two extremes, in that they employ a scaling matrix~$H_k$ that is a tractable approximation of the inverse Hessian. It is indeed this partial use of second-order information (curvature) that makes the quasi-Newton methods more robust and capable of reaching higher accuracy compared to pure gradient-based methods. The standard quasi-Newton method is the BFGS method, named after its inventors \citep{Broyden:1967,Fletcher:1970,Goldfarb:1970,Shanno:1970}. In its basic form this algorithm does not scale to the large-scale settings we are interested in. The idea of only making use of the most recent iterates and gradients in forming the inverse Hessian approximation was later suggested by \cite{Nocedal:1980} and \cite{LiuN:1989}. The result is a computationally cheaper method with a significantly reduced memory footprint, explaining the name L-BFGS, where the~L stands for limited memory. Due to its simplicity and good performance it has become one of the most commonly used second-order methods for large-scale problems. Our development makes use of the same trick underlying L-BFGS, but carefully tailors it to the stochastic setting. With this background, let us now turn our attention to the most relevant related work when it comes to solving the stochastic problems we are interested in.
The basic first-order SG algorithms have recently been significantly improved by the introduction of various noise reduction techniques, including the following methods: stochastic variance reduced gradient (SVRG) by \cite{JohnsonZ:2013}, stochastic average gradient (SAG) \citep{SchmidtLB:2013}, semi-stochastic gradient descent (S2GD) \citep{KonecnyR:2017}, and SAGA \citep{DefazioBLJ:2014saga}. They all compute the gradient approximation via subsampling. There have recently also been some developments for non-convex settings, see e.g. \cite{ReddiHSPS:2016} and \cite{AllenZhyH:2016}. A thorough and forward-looking overview of the SG algorithm and its use within a modern machine learning context is provided by \cite{BottouCN:2017}. It also includes interesting accounts of possible improvements along the lines of first-order noise reduction techniques and second-order methods. The well-known drawback of all first-order methods is that they do not make use of any curvature information. Analogously to the deterministic setting, we can assemble methods that are numerically more robust and in general achieve better performance by also extracting and using second-order information, i.e. the curvature that is maintained in the form of the Hessian matrix or an approximation thereof. Over the past decade we have witnessed increasing capabilities of these so-called \emph{stochastic quasi-Newton methods}. There is still scope for significant developments when it comes to methods in this class, and in this paper we aim to push the current boundaries. \cite{SchraudolphYG:2007} developed modifications of BFGS and its limited memory version applicable to online stochastic optimisation problems. There has also been a series of papers approximating the scaling matrix~$H_k$ with a diagonal matrix, see e.g. \cite{BordesBG:2009} and \cite{DuchiEY:2011}.
The idea of exploiting regularisation together with BFGS was successfully introduced by \cite{MokhtariR:2014}, where the scaling matrix~$H_k$ was modified using regularisation. The same authors later developed a stochastic L-BFGS algorithm without regularisation \citep{MokhtariR:2015}. The idea of replacing the stochastic gradient difference in the BFGS update with a subsampled Hessian-vector product was recently introduced by \cite{ByrdHNS:2016}, and \cite{WangMGL:2017} introduced a damped L-BFGS method. Over the past five years we have also seen quite a lot of fruitful activity in combining the stochastic quasi-Newton algorithms with various first-order noise reduction methods. \cite{MoritzNJ:2016} successfully showed that it is possible to combine the L-BFGS method by \cite{ByrdHNS:2016} with the SVRG noise reduction algorithm by \cite{JohnsonZ:2013} to reduce the problem with noisy gradients. Along this line of work we also find \cite{GowerGR:2016}, where the authors introduced a stochastic block BFGS update that they then combined with the SVRG method. Contrary to almost all of the existing work mentioned above, we make explicit use of and build upon the fact that the SG algorithm is a particular Markov chain designed specifically to solve the stochastic optimisation problem. Related to the Markov chain theme, the highly innovative work by \cite{WellingT:2011} has recently sparked a relevant parallel development within the Markov chain Monte Carlo (MCMC) literature for the case when $f(x)$ can be interpreted as a likelihood function. The aim is to exploit the geometry of the target distribution (the posterior) by using constructions from stochastic optimisation and Langevin diffusion dynamics. The use of a carefully designed local curvature estimate was enabled by \cite{SimsekliBCR:2016}, who incorporated ideas from L-BFGS within an MCMC setting.
The main focus of this MCMC work has been directed towards exploring the posterior distribution when the chain is initialised at a ``good'' initial point (e.g. \cite{teh2016consistency} assume a MAP estimate to start the chain). In contrast, here we are primarily interested in rapid convergence towards an area of minimum cost from any initial point and for a more general class of cost functions. \section{Algorithm summary} \label{sec:Alg} The key innovation in our solution lies in an auxiliary variable construction allowing for line search within a stochastic quasi-Newton setting. Hence, we are no longer forced to make use of decreasing step lengths in solving stochastic optimisation problems. As can be seen in Algorithm~\ref{alg:SQN}, the overall structure of our solution is similar to most existing solutions, but all details have been carefully tailored to the stochastic setting. We start by describing how the search direction is calculated (rows 4-5) in Section~\ref{sec:SearchDir}. Here, we take care to derive a numerically robust and fast update of the inverse Hessian approximation. The auxiliary variable construction (rows 7-9) described in Section~\ref{sec:Analysis} allows for the use of step lengths that adapt according to the local geometry, resulting in functionality very similar to standard deterministic second-order algorithms with line search. \begin{algorithm}[htb!] \caption{\textsf{Stochastic quasi-Newton with line search}} \begin{algorithmic}[1] \REQUIRE An initial estimate $x_1$, a maximum number of iterations $k_{\max}$ and maximum step-length $0 < \bar{\alpha}_{k} \leq 1$. Choose $\rho \in \{ 0,1\}$, where $\rho = 1$ provides an SG decay rate on the step length $\alpha_k$, and $\rho = 0$ guarantees that the step-length will not exceed $\bar{\alpha}_{k}$. Choose a step-length scaling factor $\kappa \in (0,1)$.% \STATE Set $k = 1$ and $\alpha_1 = \bar{\alpha}_{1}$ and perform the following.
\WHILE{$k < k_{\max}$} \STATE \textbf{Search direction calculation:} \STATE \label{step1} Obtain a measurement of the cost function and its gradient \vspace{-5mm} \begin{subequations} \label{eq:1} \begin{align} f_k &= f(x_k) + e_k,\\ g_k &= \nabla f(x_k) + v_k. \end{align} \end{subequations} \STATE Calculate a search direction $p_k$ such that \begin{align} \label{eq:2} \begin{cases} p_k^{\mathsf{T}}g_k < 0, & \|g_k\| > 0,\\ p_k = 0, & \text{otherwise}. \end{cases} \end{align} \STATE \textbf{New iterate calculation:} \STATE Compute the proposal $\xi_{k+1} = x_k+\alpha_k p_k$. \STATE Calculate the acceptance indicator variable \begin{align} \label{eq:12} c_k &= \begin{cases} 1, & \text{w.p.}\quad \max \{\rho\ ,\ a(\xi_{k+1} \mid x_k) \},\\ 0, & \text{otherwise}. \end{cases} \end{align} \STATE Update the variables \begin{subequations} \label{eq:13} \begin{align} x_{k+1} &= x_k + c_k \alpha_k p_k,\\ p_{k+1} &= p_k,\\ \alpha_{k+1} &= c_k \left (\frac{1}{k} \right)^\rho \bar{\alpha}_{k} + (1-c_k)\kappa \alpha_k. \end{align} \end{subequations} \IF{$c_k = 0$} \STATE Set $k \leftarrow k+1$ and return to step~7. \ELSE \STATE Set $k \leftarrow k+1$ and return to step~2. \ENDIF \ENDWHILE \end{algorithmic} \label{alg:SQN} \end{algorithm} \section{Search direction computation} \label{sec:SearchDir} In this section we address the problem of computing a search direction based on a limited memory available for storing previous gradients and associated iterates. The approach we adopt is similar to limited memory quasi-Newton methods, but here we employ a direct least-squares estimate of the inverse Hessian matrix rather than better-known methods such as damped L-BFGS and L-SR1. The main reason for considering the least-squares approach is that it appears to perform quite well against the alternative methods for the class of problems considered in this paper.
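The outer loop of Algorithm~\ref{alg:SQN}, i.e. the proposal, acceptance \eqref{eq:12} and variable update \eqref{eq:13} steps, can be sketched as follows. The callables \texttt{direction} and \texttt{accept\_prob} are placeholders for the constructions of Sections~\ref{sec:SearchDir} and~\ref{sec:Analysis}, and for brevity the sketch recomputes the search direction after a rejection rather than reusing it as the pseudo-code does.

```python
import numpy as np

def sqn_line_search(x1, grad, direction, accept_prob, k_max=100,
                    rho=0, alpha_bar=1.0, kappa=0.5, seed=0):
    """Sketch of the outer loop of Algorithm 1. `direction` and
    `accept_prob` are placeholder callables (assumptions of this sketch)."""
    rng = np.random.default_rng(seed)
    x, alpha = np.asarray(x1, dtype=float), alpha_bar
    for k in range(1, k_max + 1):
        g = grad(x)
        p = direction(x, g)              # must satisfy p^T g < 0 when g != 0
        xi = x + alpha * p               # proposal
        # acceptance indicator, cf. (12): rho = 1 forces acceptance
        c = 1 if rng.uniform() < max(rho, accept_prob(xi, x)) else 0
        # variable update, cf. (13): reset alpha on accept, shrink on reject
        alpha = c * (1.0 / k) ** rho * alpha_bar + (1 - c) * kappa * alpha
        if c:
            x = xi
    return x

# Usage on a noiseless quadratic with a plain negative-gradient direction:
f = lambda x: 0.5 * float(x @ x)
x_hat = sqn_line_search(np.array([5.0, -3.0]),
                        grad=lambda x: x,
                        direction=lambda x, g: -g,
                        accept_prob=lambda xi, x: 1.0 if f(xi) <= f(x) else 0.0)
```

In the noiseless case the accept/reject step reduces to an ordinary backtracking line search, which is the behaviour the stochastic construction generalises.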
We construct a limited-memory inverse Hessian approximation in Section \ref{sec:inverseHess} and show how to update this representation in Section~\ref{sec:fastNrobust}. Section~\ref{sec:descent} provides a means to ensure that a descent direction is calculated. \subsection{Inverse Hessian approximation} \label{sec:inverseHess} According to the secant condition (see e.g. \cite{Fletcher:1987}), the inverse Hessian matrix $H_k$ should satisfy \begin{align} H_k y_k = s_k, \end{align} where $y_k = g_k - g_{k-1}$ and $s_k = x_k - x_{k-1}$. Since there are generally more unknown values in $H_k$ than can be determined from $y_k$ and $s_k$ alone, quasi-Newton methods update $H_k$ from a previous estimate by solving problems of the type \begin{equation} \begin{aligned} H_k = \arg \min_{H} \quad & \| H - H_{k-1} \|^2_{F,W}\\ \text{s.t.} \quad & H=H^{\mathsf{T}}, \quad Hy_k = s_k, \end{aligned} \end{equation} where $\|X\|^2_{F,W} = \|XW\|^2_F = \text{trace}(W^\mathsf{T} X^\mathsf{T} X W)$ and the choice of weighting matrix $W$ results in different algorithms (see \cite{Hennig:2015} for an interesting perspective on this). Here we employ a similar approach and determine $H_k$ as the solution to the following regularised least-squares problem \begin{align} \label{eqn:Hupdate} H_k = \arg \min_{H} \|HY_k - S_k\|^2_F + \lambda \|H - \bar{H}_k\|^2_F, \end{align} where $Y_k$ and $S_k$ hold a limited number of past $y_k$'s and $s_k$'s according to \begin{subequations} \begin{align} Y_k &\triangleq \bmat{y_{k-m+1},\ldots,y_k}, \\ S_k &\triangleq \bmat{s_{k-m+1}, \ldots,s_k}, \end{align} \end{subequations} and $m \ll n$ is the memory limit. The regularisation matrix $\bar{H}_k$ acts as a prior on $H$ and can be modified at each iteration $k$. The parameter $\lambda > 0$ is used to control the relative cost of the two terms in \eqref{eqn:Hupdate}.
It can be verified that the solution to the least-squares problem (\ref{eqn:Hupdate}) is given by \begin{align} \label{eq:Hupdate} H_k = \left ( \lambda I + Y_kY_k^{\mathsf{T}} \right )^{-1} \left ( \lambda \bar{H}_k + Y_k S_k^{\mathsf{T}}\right ), \end{align} where $I$ denotes the identity matrix. The above inverse Hessian estimate can be used to generate a search direction in the standard manner by scaling the negative gradient, that is \begin{align} \label{eq:newAlgorithm:2} p_k = -H_k g_k. \end{align} However, for large-scale problems this is not practical since it involves the inverse of a large matrix. To ameliorate this difficulty, we adopt the standard approach of storing only a minimal (limited memory) representation of the inverse Hessian estimate $H_k$. To describe this, note that the dimensions of the matrices involved are \begin{align} H_k \in \mathbb{R}^{d \times d}, \qquad Y_k \in \mathbb{R}^{d \times m}, \qquad S_k \in \mathbb{R}^{d \times m}. \end{align} We can employ the Sherman--Morrison--Woodbury formula to arrive at the following equivalent expression for~$H_k$ \begin{align} H_k &= \left [ I - Y_k \left ( \lambda I + Y_k^{\mathsf{T}}Y_k \right )^{-1} Y_k^{\mathsf{T}} \right ] \left ( \bar{H}_k + \lambda^{-1} Y_k S_k^{\mathsf{T}}\right ). \end{align} Importantly, the matrix $\lambda I + Y_k^{\mathsf{T}}Y_k$ whose inverse is required is now by construction a positive definite matrix of size $m \times m$. Therefore, we will construct and maintain a Cholesky factor of $\lambda I + Y_k^{\mathsf{T}}Y_k$ since this leads to efficient solutions.
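As a sanity check of this limited-memory representation, the following sketch computes $p_k = -H_kg_k$ using only an $m \times m$ Cholesky factorisation and compares it against the dense expression for $H_k$ above; the scalar prior $\bar{H}_k = \gamma I$ and the random test data are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, lam, gamma = 50, 5, 0.1, 1.0   # gamma*I plays the role of the prior H_bar
Y, S = rng.standard_normal((d, m)), rng.standard_normal((d, m))
g = rng.standard_normal(d)

# Dense reference: H = (lam*I + Y Y^T)^{-1} (lam*H_bar + Y S^T)
H_bar = gamma * np.eye(d)
H = np.linalg.solve(lam * np.eye(d) + Y @ Y.T, lam * H_bar + Y @ S.T)
p_dense = -H @ g

# Limited-memory route via the Woodbury form: factorise the small m x m
# matrix lam*I + Y^T Y, then p = -z + Y w with z = H_bar g + lam^{-1} Y (S^T g)
# and w solving (lam*I + Y^T Y) w = Y^T z.
L = np.linalg.cholesky(lam * np.eye(m) + Y.T @ Y)   # L L^T = R^T R
z = gamma * g + (Y @ (S.T @ g)) / lam
w = np.linalg.solve(L.T, np.linalg.solve(L, Y.T @ z))
p_cheap = -z + Y @ w

print(np.max(np.abs(p_dense - p_cheap)))
```

The two routes agree to numerical precision, while the cheap route never forms a $d \times d$ matrix.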
In particular, if we express this matrix via a Cholesky decomposition \begin{align} R_k^{\mathsf{T}} R_k &= \lambda I + Y_k^{\mathsf{T}}Y_k, \end{align} where $R_k \in \mathbb{R}^{m \times m}$ is an upper triangular matrix, then the search direction $p_k = -H_kg_k$ can be computed via \begin{subequations} \begin{align} p_k &= -z_k + Y_k w_k,\\ z_k &= \bar{H}_k g_k + \lambda^{-1} Y_k(S_k^{\mathsf{T}}g_k),\\ w_k &= R_k^{-1} \left ( R_k^{-\mathsf{T}} \left ( Y_k^{\mathsf{T}} z_k\right ) \right ). \end{align} \end{subequations} Constructing $R_k$ can be achieved in several ways. The so-called normal-equation method forms the upper triangular part of $\lambda I + Y_k^{\mathsf{T}}Y_k$ and then employs a Cholesky routine, which produces $R_k$ in $O(d\frac{m(m+1)}{2} + m^3/3)$ operations. Alternatively, we can compute $R_k$ by applying Givens rotations or Householder reflections to the matrix \begin{align} M_k = \bmat{\sqrt{\lambda} I \\ Y_k}. \end{align} This costs $O(2m^2(d + m - m/3))$ operations, and is therefore more expensive, but typically offers better numerical accuracy \citep{GolubVL:2012}. \subsection{Fast and robust inclusion of new measurements} \label{sec:fastNrobust} In order to maximise the speed, we have developed a method for updating the Cholesky factor given a new measurement pair $(s_{k+1}, y_{k+1})$. Suppose we start with a Cholesky factor~$R_k$ at iteration~$k$ such that \begin{align} \label{eq:5} R_k^{\mathsf{T}} R_k &= \lambda I + Y_k^{\mathsf{T}}Y_k \end{align} and that we are given a new measurement pair $(s_{k+1},y_{k+1})$.
Assume, without loss of generality, that $Y_k$ and $S_k$ are ordered in the following manner \begin{subequations} \begin{align} \label{eq:6} Y_k &\triangleq \bmat{\mathcal{Y}_1, y_{k-m+1}, \mathcal{Y}_2},\\ S_k &\triangleq \bmat{\mathcal{S}_1, s_{k-m+1}, \mathcal{S}_2}, \end{align} \end{subequations} where $\mathcal{Y}_1$, $\mathcal{Y}_2$, $\mathcal{S}_1$ and $\mathcal{S}_2$ are defined as \begin{subequations} \begin{align} \label{eq:8} \mathcal{Y}_1 &\triangleq \bmat{y_{k-m+\ell+1}, \ldots, y_k},\\ \mathcal{Y}_2 &\triangleq \bmat{y_{k-m+2},\ldots,y_{k-m+\ell}},\\ \mathcal{S}_1 &\triangleq \bmat{s_{k-m+\ell+1}, \ldots, s_k},\\ \mathcal{S}_2 &\triangleq \bmat{s_{k-m+2},\ldots,s_{k-m+\ell}}, \end{align} \end{subequations} and $\ell$ is an appropriate integer so that $Y_k$ and $S_k$ have $m$ columns. The above ordering arises from ``wrapping-around'' the index when storing the measurements. We create the new $Y_{k+1}$ and $S_{k+1}$ by replacing the oldest column entries, $y_{k-m+1}$ and $s_{k-m+1}$, with the latest measurements $y_{k+1}$ and $s_{k+1}$, respectively, so that \begin{subequations} \begin{align} \label{eq:7} Y_{k+1} &\triangleq \bmat{\mathcal{Y}_1, y_{k+1}, \mathcal{Y}_2},\\ S_{k+1} &\triangleq \bmat{\mathcal{S}_1, s_{k+1}, \mathcal{S}_2}, \end{align} \end{subequations} The aim is to generate a new Cholesky factor $R_{k+1}$ such that \begin{align} \label{eq:9} R_{k+1}^{\mathsf{T}} R_{k+1} &= \lambda I + Y_{k+1}^{\mathsf{T}}Y_{k+1}. \end{align} To this end, let the upper triangular matrix $R_k$ be written conformally with the columns of $Y_k$ as \begin{align} \label{eq:10} R_k = \bmat{\mathcal{R}_1 & r_1 & \mathcal{R}_2 \\ & r_2 & r_3 \\ & & \mathcal{R}_4} \end{align} so that $\mathcal{R}_1$ and $\mathcal{R}_2$ have the same number of columns as $\mathcal{Y}_1$ and $\mathcal{Y}_2$, respectively. Furthermore, $r_1$ is a column vector, $r_2$ is a scalar and $r_3$ is a row vector. 
Therefore, \begin{align} \label{eq:11} R_k^\mathsf{T} &R_k = \bmat{\mathcal{R}_1^\mathsf{T} \mathcal{R}_1 & \mathcal{R}_1^\mathsf{T} r_1 & \mathcal{R}_1^\mathsf{T} \mathcal{R}_2 \\ \cdot & r_2^2 + r_1^\mathsf{T} r_1 & r_1^\mathsf{T} \mathcal{R}_2 + r_2r_3 \\ \cdot & \cdot & \mathcal{R}_4^\mathsf{T} \mathcal{R}_4 + \mathcal{R}_2^\mathsf{T} \mathcal{R}_2 + r_3^\mathsf{T} r_3}\nonumber\\ &= \bmat{\lambda I+\mathcal{Y}_1^\mathsf{T}\mathcal{Y}_1 & \mathcal{Y}_1^\mathsf{T} y_{k-m+1} & \mathcal{Y}_1^\mathsf{T} \mathcal{Y}_2\\ \cdot & \lambda + y_{k-m+1}^\mathsf{T} y_{k-m+1} & y_{k-m+1}^\mathsf{T} \mathcal{Y}_2 \\ \cdot & \cdot & \lambda I + \mathcal{Y}_2^\mathsf{T} \mathcal{Y}_2}. \end{align} By observing the common structure shared with the update $\lambda I + Y_{k+1}^\mathsf{T} Y_{k+1}$, it is possible to write \begin{align} \label{eq:14} &\lambda I + Y_{k+1}^\mathsf{T} Y_{k+1} \nonumber\\ &=\bmat{\lambda I+\mathcal{Y}_1^\mathsf{T}\mathcal{Y}_1 & \mathcal{Y}_1^\mathsf{T} y_{k+1} & \mathcal{Y}_1^\mathsf{T} \mathcal{Y}_2\\ \cdot & \lambda + y_{k+1}^\mathsf{T} y_{k+1} & y_{k+1}^\mathsf{T} \mathcal{Y}_2 \\ \cdot & \cdot & \lambda I + \mathcal{Y}_2^\mathsf{T} \mathcal{Y}_2}\nonumber\\ &= \bmat{\mathcal{R}_1^\mathsf{T} \mathcal{R}_1 & \mathcal{R}_1^\mathsf{T} r_4 & \mathcal{R}_1^\mathsf{T} \mathcal{R}_2 \\ \cdot & r_5^2 + r_4^\mathsf{T} r_4 & r_4^\mathsf{T} \mathcal{R}_2 + r_5r_6 \\ \cdot & \cdot & \mathcal{R}_6^\mathsf{T} \mathcal{R}_6 + \mathcal{R}_2^\mathsf{T} \mathcal{R}_2 + r_6^\mathsf{T} r_6}, \end{align} where $r_4$, $r_5$ and $r_6$ are determined by \begin{subequations} \begin{align} \label{eq:15} r_4 &= \mathcal{R}_1^{-\mathsf{T}} (\mathcal{Y}_1^\mathsf{T} y_{k+1}),\\ r_5 &= \left ( \lambda + y_{k+1}^\mathsf{T} y_{k+1} - r_4^\mathsf{T} r_4 \right )^{1/2},\\ r_6 &= \frac{1}{r_5} \left ( y_{k+1}^\mathsf{T} \mathcal{Y}_2 - r_4^\mathsf{T} \mathcal{R}_2 \right ).
\end{align} \end{subequations} The final term $\mathcal{R}_6$ can be obtained by noticing that \begin{align} \label{eq:16} \mathcal{R}_6^\mathsf{T} \mathcal{R}_6 + \mathcal{R}_2^\mathsf{T} \mathcal{R}_2 + r_6^\mathsf{T} r_6 &= \mathcal{R}_4^\mathsf{T} \mathcal{R}_4 + \mathcal{R}_2^\mathsf{T} \mathcal{R}_2 + r_3^\mathsf{T} r_3, \end{align} implies \begin{align} \label{eq:17} \mathcal{R}_6^\mathsf{T} \mathcal{R}_6 & = \mathcal{R}_4^\mathsf{T} \mathcal{R}_4 - r_6^\mathsf{T} r_6 + r_3^\mathsf{T} r_3. \end{align} Therefore $\mathcal{R}_6$ can be obtained in a computationally very efficient manner by down-dating and updating the Cholesky factor $\mathcal{R}_4$ with the rank-1 matrices $r_6^\mathsf{T} r_6$ and $r_3^\mathsf{T} r_3$, respectively (see e.g. Section 12.5.3 in \cite{GolubVL:2012}). \subsection{Ensuring a descent direction} \label{sec:descent} In Algorithm~\ref{alg:SQN} we stipulate that the search direction $p_k$ must be chosen to mimic a descent direction such that $p_k^\mathsf{T} g_k < 0$. Since the gradient is not exact, this condition does not strictly enforce descent on the true cost, but it is nonetheless useful to satisfy it in practice. The search direction~$p_k$ as determined by~\eqref{eq:newAlgorithm:2} will not be a descent direction in general since the approximation $H_k$ of the inverse Hessian is not necessarily positive definite. Nevertheless, by observing that \begin{align} \label{eq:newAlgorithm:1} g_k^\mathsf{T} (p_k - \beta g_k) = g_k^\mathsf{T} p_k - \beta g_k^\mathsf{T} g_k, \end{align} we can always choose a $\beta \geq 0$ such that $p_k - \beta g_k$ is a descent direction with respect to the inexact gradient~$g_k$. For example, we can choose \begin{align} \label{eq:newAlgorithm:3} \beta = 2 \max \left \{ 0, \frac{p_k^\mathsf{T} g_k}{g_k^\mathsf{T} g_k} \right \}.
\end{align} It is also worth pointing out that this situation occurred very infrequently during all of the experiments reported in Section~\ref{sec:Exp}. The above is by no means an optimal strategy, but it appears to perform very well in practice. \section{Auxiliary variable construction} \label{sec:Analysis} Algorithm~\ref{alg:SQN} offers two distinct variants. If the parameter $\rho=1$, then the algorithm mimics a classical SG approach in that we accept every proposal~$\xi_{k+1}$ according to~\eqref{eq:12}, and we are free to choose a constant \begin{align} \label{eq:newAlgorithm:6} \bar{\alpha}_{k} \triangleq \bar{\alpha}_0, \quad \text{for some fixed } \bar{\alpha}_0 > 0. \end{align} Therefore, from \eqref{eq:13}, $\alpha_{k+1} = \bar{\alpha}_0/k$, which is a typical decaying sequence for many SG algorithms. In this case we can employ all the analysis from SG methods, see \cite{BottouCN:2017}. The alternative, $\rho = 0$, offers a different approach, which is our main focus in this work as detailed in Section~\ref{sec:ASL}. Our algorithm produces a Markov chain, and Section~\ref{sec:EPE} briefly describes how we can use it to extract a competitive point estimate. \subsection{Adaptive step lengths} \label{sec:ASL} When we set $\rho = 0$, Algorithm~\ref{alg:SQN} generates an $m^{\text{th}}$-order Markov chain $\{x_{k-m+1:k}, \alpha_{k-m+1:k}, u_{k-m+1:k}\}_{k\geq 1}$, where the notation $x_{k-m+1:k} \triangleq \{x_{k-m+1}, \ldots, x_k\}$ is used to represent the past $m$ iterates. The first auxiliary variable~$\alpha_k$ is the step length from Algorithm~\ref{alg:SQN} and the second auxiliary variable $u_k$ represents the information required to evaluate the approximate (noisy) cost and gradient. For example, in the case of subsampling, $u_k$ represents the subset of integers from $\{1, \ldots,n\}$ used to form the subsampled cost and associated gradient.
In Sequential Monte Carlo (SMC) methods (used in Section~\ref{sec:SMC}), the auxiliary variable $u_k$ represents the selection of modes that propagate through the filter in order to estimate the likelihood and its gradient (see \cite{AndrieuDH:2010} for details). In what follows, we make the dependence on the auxiliary variable $u_k$ explicit by using the notation that $f(x_k,u_k)$ is the cost approximation and $g(x_k,u_k)$ is the gradient of $f(x_k,u_k)$ with respect to $x$. The Markov chain evolves according to \begin{subequations} \label{eq:chain} \begin{align} \label{eq:chain:x} x_{k+1} &= x_k + c_k \alpha_k p_k,\\ p_k &= -H_kg_k - 2 \max \left \{ 0,-\frac{g_k^\mathsf{T} H_k g_k}{g_k^\mathsf{T} g_k} \right \} g_k,\\ g_k &= g(x_k,u_k),\\ H_k &= H(x_{k-m+1:k},\alpha_{k-m+1:k}, u_{k-m+1:k}),\\ \alpha_{k+1} &= c_k \bar{\alpha}_{k} + (1-c_k)\kappa \alpha_k, \label{eq:chain:alpha} \end{align} \end{subequations} where $H(x_{k-m+1:k},\alpha_{k-m+1:k}, u_{k-m+1:k})$ is defined as $H_k$ in \eqref{eq:Hupdate}, but here we highlight that the inverse Hessian approximation is a function of the past $m$ iterates $x_{k-m+1:k}$ and of the auxiliary variables $u_{k-m+1:k}$ and $\alpha_{k-m+1:k}$ over this same window. The variable $c_k$ is determined by \begin{align} \label{eq:chain:c} c_k &= \begin{cases} 1, & \text{w.p.}\quad a(x_k+\alpha_k p_k \mid x_k),\\ 0, & \text{otherwise}, \end{cases} \end{align} where the acceptance probability is calculated as \begin{subequations} \label{eq:newAlgorithm:7} \begin{align} a(\xi_{k+1} \mid x_k) &= \begin{cases} 1 & \epsilon_k < 0,\\ \mathcal{C}(-\epsilon_k,\sigma^2) & \text{otherwise}. \end{cases}\\ \epsilon_k &\triangleq f(\xi_{k+1})-f(x_{k}), \end{align} \end{subequations} where $\mathcal{C}(-\epsilon_k,\sigma^2)$ denotes the cumulative distribution function for a Gaussian with mean~$-\epsilon_k$ and variance~$\sigma^2$.
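One concrete reading of this acceptance rule, interpreting $\mathcal{C}(-\epsilon_k,\sigma^2)$ as the probability that a Gaussian with mean $-\epsilon_k$ and variance $\sigma^2$ remains positive, so that cost increases that are large relative to $\sigma$ are unlikely to be accepted, can be sketched as follows; the interpretation is an assumption of this sketch rather than a statement from the text.

```python
import math

def acceptance_prob(f_prop, f_curr, sigma):
    """Sketch of the acceptance rule: a strict decrease in the (noisy)
    cost is always accepted; an increase eps > 0 is accepted with
    probability Phi(-eps/sigma), one concrete reading of the Gaussian
    CDF term C(-eps, sigma^2) (an assumption of this sketch)."""
    eps = f_prop - f_curr
    if eps < 0:
        return 1.0
    # P(N(-eps, sigma^2) > 0) = Phi(-eps/sigma), via the error function
    return 0.5 * (1.0 + math.erf(-eps / (sigma * math.sqrt(2.0))))
```

Under this reading the probability decays smoothly from one half at $\epsilon_k = 0$ towards zero as the observed increase grows relative to the cost-function noise level.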
The acceptance probability in~\eqref{eq:newAlgorithm:7} has the effect of strictly accepting proposals that decrease the cost, while accepting those that increase the cost with a probability $\mathcal{C}(-\epsilon_k,\sigma^2)$. Therefore, a proposed $\xi_{k+1}$ that causes a large increase in the cost, relative to the uncertainty of the cost, is very unlikely to be accepted. Note that it is possible to readily calculate an unbiased estimate of the cost function variance~$\sigma^2$, and this can be re-evaluated as the algorithm progresses. Should a proposal be rejected, the step length is reduced according to $\alpha_{k+1} = \kappa \alpha_k$ and the algorithm returns to Step~7, proposing a new~$\xi_{k+1}$ with reduced step length without calculating a new search direction (the intent is similar to stochastic line search algorithms \citep{MahsereciH:2017}). In the event that the proposal is accepted, $\alpha_{k+1} = \bar{\alpha}_{k}$, which for this variant of the algorithm was chosen as $\bar{\alpha}_{k} = 1$ for all~$k$. \textbf{Comments:} A natural question to ask is that of convergence of the proposed algorithm. Convergence of a Markov chain to an invariant distribution has been the subject of intense research within statistics and related communities, see e.g. \cite{MeynT:2009} for a solid textbook account. Essentially, if it can be shown that the Markov transition kernel admits an invariant distribution, and that the chain is irreducible and aperiodic, then it will converge to a stationary distribution. However, it is not immediately obvious (or indeed necessarily correct) to assert that the transition kernel devised in Algorithm~\ref{alg:SQN} admits such an invariant distribution. \subsection{Extracting estimates} \label{sec:EPE} As discussed above, Algorithm~\ref{alg:SQN} produces iterates~$\{x_k\}_{k\geq 1}$ that are distributed according to some underlying distribution $p(x)$ that, in accordance with the acceptance probability, favours reductions in the cost function.
As with standard Markov chain methods, we can then utilise these samples via a law of large numbers argument to form expectations of the type \begin{align} \label{eq:18} \bar{h} = \int h(x) p(x) dx = \lim_{M \to \infty} \frac{1}{M} \sum_{k=1}^M h(x_k), \end{align} where $h(\cdot)$ refers to a test function. The utility of this approach is that we can produce as many samples from the target distribution as required in order to compute a desired expectation. In the experiments presented in Section~\ref{sec:Exp}, we employed a very simple strategy of computing the expected value of $x$, so that $h(x) = x$, which results in the following estimate \begin{align} \label{eq:newAlgorithm:8} \widehat{x} = \frac{1}{M} \sum_{k=k_{\min}}^{M+k_{\min}-1} x_k, \end{align} where $k_{\min} > 0$ defines a minimum number of transient iterations to ignore in the calculation. The results summarised in Table~\ref{tab:table2} were calculated according to \eqref{eq:newAlgorithm:8} by using the final 20\% of the iterations. \section{Numerical experiments} \label{sec:Exp} Let us now put our new developments to the test on a suite of problems from four different categories, carefully chosen to exhibit different properties and challenges. In Section~\ref{sec:SE} we study a synthetic example to gauge the performance in a controlled setting. We then move on to more interesting and challenging problems involving large-scale and real-world data. In particular, in Section~\ref{sec:MNIST} we consider an optimisation problem arising from the use of deep learning to solve the classical machine learning benchmark MNIST\footnote{\url{yann.lecun.com/exdb/mnist/}}, where the task is to classify images of handwritten digits. Another commonly used benchmark is considered in Section~\ref{sec:LIBSVM}, namely the collection of logistic classification problems described by \cite{ChangL:2011} in the form of their library for support vector machines (LIBSVM).
Finally, we study a class of problems of much smaller scale, posing a different challenge in that for these problems it is inherently impossible to compute the cost function and the gradient exactly, despite their small-scale nature. In our experiments we compare against relevant state-of-the-art methods. All experiments were run on a MacBook Pro 2.8GHz laptop with 16GB of RAM using Matlab 2017b. More details about some of the experiments and their background are available in the supplemental material. \subsection{Synthetic example -- Rosenbrock's banana function} \label{sec:SE} Let us start by demonstrating our proposed algorithm on a simple and possibly familiar problem, namely that of minimising the Rosenbrock banana function (a contour plot of the Rosenbrock function is provided in Figure~\ref{fig:contour}). To emulate the stochastic nature of the problems considered in this paper, we have added artificial noise (standard deviation of $\sigma = 0.1$) to both the cost function and gradient calculations. The Rosenbrock function is well known to cause difficulty for first-order methods because the Hessian matrix has disparate eigenvalues along its banana-shaped valley. As a comparison, we also implemented the Adam algorithm from \cite{KingmaB:2015}. Figure~\ref{fig:contour} shows the first~$50$ iterates of both methods. Clearly the proposed algorithm is converging to a region around the optimal point while Adam is making slower progress along the valley. Figure~\ref{fig:bananaCost} shows the cost value as a function of iteration, and while both methods converge to a similar cost value, the proposed approach achieves this more quickly. While it is difficult and ill-advised to draw strong conclusions from this tiny experiment, it does provide some confidence that the second-order information is indeed captured and exploited by our proposed algorithm. \begin{figure}[hb!]
\centering \begin{subfigure}[h!]{0.45\columnwidth} \includegraphics[width=\columnwidth]{contour_overlay_plot} \caption{First $50$ iterates.} \label{fig:contour} \end{subfigure} \,\,\, \begin{subfigure}[h!]{0.45\columnwidth} \includegraphics[width=\columnwidth]{banana_Cost_plot} \caption{Cost per iteration.} \label{fig:bananaCost} \end{subfigure} \caption{Rosenbrock's banana function. Figure (a) shows the contour lines of the cost function together with 50 iterates from Algorithm~\ref{alg:SQN} and Adam, respectively. Figure (b) shows the cost per iteration for the same two algorithms.} \end{figure} \subsection{MNIST} \label{sec:MNIST} Deep convolutional neural networks (CNNs) with multiple layers of convolution, pooling and nonlinear activation functions are delivering state-of-the-art results on many tasks in computer vision. Here we borrow the stochastic optimisation problem arising when using such a deep CNN to solve the MNIST benchmark. The particular CNN structure used in this example employs $5\times 5$ convolution kernels, pooling layers and a fully connected layer at the end. We made use of the publicly available code provided by~\cite{Zhang:2016}, which contains all the implementation details. In Figure~\ref{fig:MNIST} we show the average cost versus time for $20$ Monte Carlo trials with Algorithm~\ref{alg:SQN} (with $b=300$, $m=30$ and $\lambda = 0.1$), Adam \citep{KingmaB:2015} and the basic SG algorithm. Note that the three algorithms all make use of the same gradients. \subsection{Logistic loss and a 2-norm regulariser} \label{sec:LIBSVM} The task here is to solve seven different empirical risk minimisation problems using a logistic loss function with an L2 regulariser. The data is taken from \cite{ChangL:2011}. These problems are commonly used for profiling optimisation algorithms of the kind introduced in this paper, facilitating comparison with existing state-of-the-art algorithms.
More specifically, we have used the same set-up as \cite{GowerGR:2016}, which inspired this study. A summary of the salient features of each problem is provided in Table~\ref{tab:table}. Recall that our algorithm only requires the user to select two tuning parameters, namely the mini-batch size ($b$) and the memory length ($m$). Our choices for these parameters are listed in Table~\ref{tab:table}. \begin{table}[h]\centering \small{ \ra{1.2} \begin{tabular}{lll|lll} \toprule \textbf{Problem} & $n$ & $d$ & $b$ & $m$ & $\lambda$\\ \midrule \texttt{gisette} & $6\thinspace000$ & $5\thinspace000$ & $500$ & $20$ & $1.0$\\ \texttt{covtype} & $581\thinspace012$ & $54$ & $763$ & $54$ & $0.04$\\ \texttt{HIGGS} & $11\thinspace000\thinspace000$ & $28$ & $3\thinspace317$ & $28$ & $0.04$\\ \texttt{SUSY} & $3\thinspace548\thinspace466$ & $18$ & $5\thinspace000$ & $18$ & $0.04$\\ \texttt{epsilon} & $400\thinspace000$ & $2\thinspace000$ & $1\thinspace000$ &$20$ & $0.2$\\ \texttt{rcv1} & $20\thinspace242$ & $47\thinspace236$ & $284$ & $2$& $0.2$\\ \texttt{URL} & $2\thinspace396\thinspace130$ & $3\thinspace231\thinspace961$ & $1\thinspace798$ & $50$& $0.04$\\ \bottomrule \end{tabular} \caption{List of seven problems (column 1), the number of data points~$n$ (column 2), the number of variables $d$ (column 3), the mini-batch size~$b$ (column 4), the memory size~$m$ (column 5), and the regulariser~$\lambda$ (column 6).} \label{tab:table} } \end{table} \begin{table}[h] \small{ \centering \ra{1.2} \begin{tabular}{lllll} \toprule \textbf{Problem} & Alg1 & MNJ & GGR & SVRG \\ \midrule \texttt{gisette} & \bf{0.005} & 0.244 & 0.0176 & 0.172 \\ \texttt{covtype} & \bf{0.514} & 0.684 & \bf{0.514} & 0.667\\ \texttt{HIGGS} & \bf{0.638} & \bf{0.638} & \bf{0.638} & \bf{0.638}\\ \texttt{SUSY} & \bf{0.458} & \bf{0.458} & \bf{0.458} & \bf{0.458}\\ \texttt{epsilon} & \bf{0.282} & \bf{0.282} & \bf{0.282} & 0.421\\ \texttt{rcv1} & \bf{0.202} & \bf{0.202} & \bf{0.202} & 0.280 \\ \texttt{URL} & 0.0196
& \bf{0.0193} & 0.0249 & 0.0639\\ \bottomrule \end{tabular} \caption{Cost function values for each problem (column 1), and each method Alg1 (column 2), MNJ (column 3), GGR (column 4) and SVRG (column 5). Minimum value in bold face.} \label{tab:table2} } \end{table} We compared Algorithm~\ref{alg:SQN} (denoted \texttt{Alg1}) against three existing methods from the literature, namely, the limited memory stochastic block BFGS method from \cite{GowerGR:2016} (denoted \texttt{GGR}), the limited memory stochastic BFGS method of \cite{MoritzNJ:2016} (denoted \texttt{MNJ}), and the stochastic variance reduced gradient method of \cite{JohnsonZ:2013} (denoted \texttt{SVRG}). For the \texttt{GGR}, \texttt{MNJ} and \texttt{SVRG} approaches we used the recommended tuning of each algorithm. In the case of \texttt{GGR} we used the \texttt{prev} variant as this performed best across all test problems\footnote{The implementation for \texttt{GGR} and \texttt{MNJ} was downloaded from \url{www.maths.ed.ac.uk/~prichtar/i_software.html}}. The results are presented in Table~\ref{tab:table2} and Figure~\ref{fig:Res}. \begin{figure}[ht!]
\centering \begin{subfigure}[h!]{0.35\columnwidth} \includegraphics[width=\columnwidth]{gisette_plot} \caption{\texttt{gisette}} \label{fig:gisette} \end{subfigure} \begin{subfigure}[h!]{0.35\columnwidth} \includegraphics[width=\columnwidth]{covtype_plot} \caption{\texttt{covtype}} \label{fig:covtype} \end{subfigure} \begin{subfigure}[h!]{0.35\columnwidth} \includegraphics[width=\columnwidth]{higgs_plot} \caption{\texttt{HIGGS}} \label{fig:higgs} \end{subfigure} \begin{subfigure}[h!]{0.35\columnwidth} \includegraphics[width=\columnwidth]{susy_plot} \caption{\texttt{SUSY}} \label{fig:susy} \end{subfigure} \begin{subfigure}[h!]{0.35\columnwidth} \includegraphics[width=\columnwidth]{epsilon_plot} \caption{\texttt{epsilon}} \label{fig:epsilon} \end{subfigure} \begin{subfigure}[h!]{0.35\columnwidth} \includegraphics[width=\columnwidth]{rcv1_plot} \caption{\texttt{RCV1}} \label{fig:rcv1} \end{subfigure} \begin{subfigure}[h!]{0.35\columnwidth} \includegraphics[width=\columnwidth]{url_plot} \caption{\texttt{URL}} \label{fig:url} \end{subfigure} \begin{subfigure}[h!]{0.35\columnwidth} \includegraphics[width=\columnwidth]{nlss_plot} \caption{Nonlinear SSM.} \label{fig:NLSSM} \end{subfigure} \caption{Performance on seven classification tasks using a logistic loss with a two-norm regulariser (Figures (a)--(g)). In Figure~(h) we show the result of learning the parameters of a challenging nonlinear dynamical system.} \label{fig:Res} \end{figure} \subsection{Nonlinear system identification}\label{sec:SMC} Another important application requiring stochastic optimisation problems to be solved is that of nonlinear system identification, where the task is to learn unknown parameters in nonlinear dynamical systems (see Appendix~\ref{sec:app} for further details). Here the stochasticity arises because it is impossible to exactly evaluate the cost function (provided by maximum likelihood) and its gradients.
Instead we have to resort to approximations resulting in noisy evaluations of the kind~\eqref{eq:NoisyCostGrad}. Consider the problem of learning the parameters~$b$ and~$q$ for the following nonlinear and time-varying state-space model, \begin{subequations} \label{eq:nlp} \begin{align} x_{t+1} &= 0.5x_t + b\frac{x_t}{1 + x_t^2} + 8\cos (1.2t) + q^{-1} w_t, \label{eq:54a}\\ y_t &= 0.05x_t^2 + e_t,\label{eq:49} \end{align} \end{subequations} where the true parameters are $b^\star = 25$ and $q^\star = 1/\sqrt{0.5}$. The noise terms are mutually independent and given by $w_t\sim\mathcal{N}(0, 1)$ and $e_t \sim \mathcal{N}(0, 0.1)$. This has been acknowledged as a challenging problem \citep{DoucetGA:2000,GodsillDW:2004} within the sequential Monte Carlo (SMC) community. The results using 100 measurements and 200 particles for 100 Monte Carlo simulations are provided in Figure~\ref{fig:NLSSM}. \section{Conclusion and future work} \label{sec:Conc} In this paper we have developed a new approach for solving large-scale stochastic optimisation problems by combining curvature information in computing the search direction with the use of an adaptive step length that is regulated by the cost function. The local curvature information is captured using a limited memory method whose computational cost scales linearly in the data size. We demonstrated our approach on a range of problems from different fields of research, including a suite of challenging large-scale problems. The proposed method performs well against state-of-the-art techniques and we believe that this provides some impetus for further research. As a final remark, an interesting situation occurs when we employ Algorithm~\ref{alg:SQN} with $\rho = 0$ together with a decaying maximum step length~$\bar{\alpha}_k$. In the limit, this mimics SG methods, but in early iterations it regulates the step length in order to reduce the cost. This circumvents the requirement of conservative initial step lengths.
\section*{Acknowledgements} We would like to thank the participants of the Sydney control conference 2017 for very useful discussion and feedback on a presentation leading up to this work. We would also like to thank Fredrik Lindsten, Johan Dahlin and Jack Umenberger for very useful comments on an early draft of this paper. This research was financially supported by the Swedish Foundation for Strategic Research (SSF) via the project \emph{ASSEMBLE} (contract number: RIT15-0012) and the Swedish Research Council via the projects \emph{Learning flexible models for nonlinear dynamics} (contract number: 2017-03807) and \emph{NewLEADS - New Directions in Learning Dynamical Systems} (contract number: 621-2016-06079). \section{Appendix -- Learning nonlinear dynamical systems} \label{sec:app} \subsection{Problem formulation} Consider the following general nonlinear state-space model \begin{subequations} \begin{align} x_{t} &= f(x_{t-1},\theta) + w_t,\\ y_t &= h(x_t, \theta) + e_t, \end{align} \end{subequations} where $x_t$ denotes the state, $y_t$ denotes the measurement and~$\theta$ denotes the unknown (static) parameters. The nonlinear functions $f(\cdot)$ and $h(\cdot)$ describe the dynamics and the measurements, respectively. The process noise is Gaussian with zero mean and covariance $Q$, i.e. $w_t\sim \mathcal{N}(0, Q)$, and the measurement noise is given by $e_t\sim\mathcal{N}(0, R)$. Finally, the initial state is distributed according to $x_0 \sim p(x_0\mid\theta)$. The problem we are interested in is to estimate the unknown parameters~$\theta$ by making use of the available measurements $y_{1:n} = \{y_1, y_2, \dots, y_{n}\}$ to maximize the likelihood function $p(y_{1:n} \mid \theta)$ \begin{align} \label{eq:MaxLik} \max_{\theta}{p(y_{1:n} \mid \theta)}.
\end{align} In the supplemental material we provide more background on how to compute approximations of the likelihood function~\eqref{eq:MaxLik} and its gradients using sequential Monte Carlo (SMC) methods \citep{Gordon:1993,Kitagawa:1993}. For a tutorial introduction to SMC methods we refer to \cite{DoucetJ:2011}; an overview of their use in solving system identification problems is offered by \cite{SchonLDWNSD:2015} and \cite{Kantas:2015}. \subsection{Computing the likelihood and its gradient} Via repeated use of conditional probabilities, the likelihood function can be rewritten as \begin{align} p(y_{1:n}\mid \theta) = \prod_{t=1}^{n} p(y_t\mid y_{1:t-1},\theta), \end{align} with the convention that $y_{1:0} = \emptyset$. The one-step-ahead predictors are available via marginalization \begin{align} p(y_t\mid y_{1:t-1}, \theta) = \int p(y_t, x_t \mid y_{1:t-1}, \theta) \myd x_t = \int p(y_t\mid x_t, \theta) p(x_t\mid y_{1:t-1}, \theta) \myd x_t. \end{align} One intuitive interpretation of the above integral is that it corresponds to averaging over all possible values for the state $x_{t}$. The challenge is of course how to actually compute this integral. By making use of a particle filter \citep{Gordon:1993,Kitagawa:1993} to approximate the likelihood we are guaranteed to obtain an unbiased estimate~\citep{DelMoral:2004}. The likelihood gradients can also be computed using particle filters, for example by making use of \emph{Fisher's identity} \citep{CappeMR:2005} \begin{align} \nabla_{\theta}\ell(\theta)\big|_{\theta = \theta_k} = \nabla_{\theta}\mathcal{Q}(\theta,\theta_k)\big|_{\theta = \theta_k} \end{align} where we have defined \begin{subequations} \begin{align} \ell(\theta) &= \ln p(y_{1:n}\mid \theta),\\ \mathcal{Q}(\theta,\theta_k) &= \int \ln p(x_{0:n}, y_{1:n} \mid \theta) p(x_{0:n}\mid y_{1:n}, \theta_k) \myd x_{0:n}.
\end{align} \end{subequations} The particle filter, which is one member of the family of sequential Monte Carlo (SMC) methods, has a fairly rich history when it comes to solving nonlinear system identification problems. For an introductory overview we refer to~\cite{SchonLDWNSD:2015,Kantas:2015}. The likelihood and its gradient cannot be calculated exactly in this case, and we therefore employed sequential Monte Carlo methods and Fisher's identity \citep{CappeMR:2005,NinnessWS:2010} to provide noisy estimates of both. The number of particles used to calculate these terms was 500 in all cases. Note that each simulation required no more than 8 seconds of computation time on a MacBook Pro 2.8GHz Intel i7. \bibliographystyle{apalike}
\section{Introduction} \label{sec:introduction} The reliable reduction of image noise poses a constantly recurring problem in today's imaging systems. In healthcare, noise may limit the reliability of medical image data for subsequent clinical workflows. For instance, in radiology using computed tomography (CT) or related morphological imaging modalities, noise affects the analysis of anatomical structures and thus impedes diagnostic applications. In optical coherence tomography (OCT) for retinal imaging, as another example use case, noise limits the measurement of structural features in the human eye, e.\,g. retinal layer properties. Apart from diagnostic applications, noise reduction is also a major theme for different interventional imaging modalities like fluoroscopically guided procedures. Low dose radiation exposure for patient safety leads to noisy and low-contrast fluoroscopic sequences \citep{Amiot2016}. To mitigate these limitations, \textit{denoising} can be implemented either by means of customized hardware or via postprocessing of captured image data. While hardware-based denoising often leads to increased system complexity, image-based postprocessing facilitates denoising in a cost-effective way using computational methods. Despite the great progress in developing general denoising schemes for natural images, adopting them for medical data poses several challenges. First and foremost, there is a narrow ridge between achieving sufficient noise reduction and avoiding unwanted distortions of meaningful medical structures. Moreover, noise distributions in medical data often deviate from the commonly employed models for natural images like additive, white Gaussian noise (AWGN). For example, noise can follow multiplicative models or structured patterns related to acquisition parameters like in CT. General denoising algorithms have been mainly developed for 2-D data, e.\,g.
color photographs, but denoising in medical imaging also needs to handle time-resolved and/or volumetric data. These requirements call for enhanced and robust denoising methods that are applicable within medical workflows. \begin{figure*}[!tp] \scriptsize \centering \includegraphics[width = 0.93\textwidth]{images/NeuerTeaser.png} \caption{We propose three modi of our spatio-temporal denoising algorithm. In the first modus (top), hereinafter called \textit{image denoising}, single images or a sequence of registered images are processed. The second modus (middle) processes volumes as well as a sequence of registered volumes and is called \textit{volumetric denoising}. The third modus (bottom) processes volumes as well as a sequence of registered volumes, outputs a sequence of volumes, and is called \textit{volumetric + temporal denoising}.} \label{fig:graphicalAbstract} \end{figure*} In this paper, we propose denoising for medical image data within a variational framework. As the key contribution, we introduce the class of \textit{quantile sparse image} (QuaSI) priors to model the appearance of noise-free medical data. Specifically, we propose a median-filter-based regularizer, which corresponds to the QuaSI prior with the 0.5 quantile. This follows the idea that noise-free data should be a fixed point of the median filter, and we show that this model facilitates structure-preserving denoising. To approach the resulting non-linear and non-convex optimization problem, we present an alternating direction method of multipliers (ADMM) scheme. Our algorithm can handle \textit{spatio-temporal} denoising by processing either single images or sequences of consecutive images. Furthermore, it enables denoising of volumetric data. Thus, it can be adjusted to the clinical needs within a target application.
This paper is an extension of our prior work in \cite{Schirrmacher2017} and makes the following additional contributions: \begin{itemize} \item The algorithm as well as the QuaSI prior are extended to process volumetric medical data. \item An investigation of the convergence and parameter sensitivity of our algorithm is conducted. \item An extension of our algorithm is presented to process volumetric data in C-arm CT imaging. \end{itemize} The remainder of this paper is organized as follows. In \sref{sec:relatedWork}, we review related work on spatial and temporal denoising. \sref{sec:background} presents the objective function of the energy minimization problem. In \sref{sec:quantileSparseImagePrior}, the QuaSI prior is introduced. The numerical optimization of our denoising framework is derived in \sref{sec:deployingQuaSIForOCTDenoising}. In \sref{sec:experimentsAndResults}, an experimental evaluation of our method on publicly available benchmark data, clinical OCT scans as well as CT data is reported. Finally, \sref{sec:conclusion} contains our conclusion. \begin{figure}[!tb] \begin{center} \includegraphics[width = 0.45\textwidth]{images/graphicalAbstract.png} \end{center} \caption{Method overview: The proposed spatio-temporal denoising algorithm is based on an energy minimization formulation with three terms.} \label{fig:methodOverview} \end{figure} \section{Related Work} \label{sec:relatedWork} Image-based denoising techniques can be divided into two groups: spatial and temporal methods. \subsection{Spatial Denoising Methods} \label{sec:spatialDenoisingMethods} \textit{Spatial} or \textit{single-image} denoising has been extensively studied in the image processing community and various approaches have emerged over the past decades. Local image filters perform smoothing of noisy images, possibly in an adaptive way, to preserve image structures \citep{Tomasi1998}. Non-local filtering also exploits the statistics of similar and repeating patches within images.
One representative from this class is the successful BM3D method by \cite{Dabov2007}. However, these methods have been mainly designed for natural images under simplified assumptions like additive white Gaussian noise, which is inappropriate to describe speckle noise that is multiplicative in nature. Learning-based denoising, e.\,g. based on multilayer neural networks \citep{Burger2012}, holds the potential to handle speckle noise by learning noise distributions from training data. However, the large-scale training data required for such methods is barely available for OCT. Spatial filters that have been adopted for OCT denoising include the hybrid median filter, the Lee filter, the Wiener filter, and wavelet thresholding, as investigated by \cite{Ozcan2007}. Global denoising methods for OCT have been introduced by \cite{Salinas2007} using non-linear diffusion and later by \cite{Duan2016} using second-order total generalized variation. \cite{Wong2010} have proposed structure-adaptive Bayesian estimation to handle speckle noise. One interesting approach has been proposed by \cite{Fang2012}, where dictionary learning based on B-scans with high signal-to-noise ratio (SNR) is used to denoise low SNR B-scans. Single-image denoising offers great flexibility in clinical applications of OCT as few assumptions on the scanning protocol are made. However, the noise reduction is limited as such methods can utilize single B-scans only. \subsection{Temporal Denoising Methods} \label{sec:temporalDenoisingMethods} \textit{Temporal} or \textit{multi-image} denoising methods consider coherence of consecutive images to improve noise reduction over single-image denoising. Such methods have been widely investigated for OCT and exploit sets of B-scans that are acquired sequentially from the same location or nearby positions. A popular approach in commercial systems is to register several of these B-scans and to average the registered scans to cancel out random noise.
Averaging is computationally efficient but requires many repetitive acquisitions to effectively reduce speckle noise. \cite{Mayer2012} enhance simple averaging based on wavelet decompositions of B-scans to estimate local image structures and noise. Denoising is conducted in the wavelet domain by weighted averaging of wavelet coefficients according to the local image structure. \cite{Cheng2014} formulate OCT denoising from multiple scans as a low-rank matrix completion problem. \cite{Thapa2015} follow a similar notion and exploit the low-rank property on a patch-based level of multiple B-scans using weighted nuclear norm minimization. \cite{Bian2015} have proposed inter-frame and intra-frame priors for denoising using convex optimization. BM4D is an extension of the popular BM3D method to process volumetric data \citep{maggioni2013nonlocal}. All of these multi-image methods have in common that they require multiple input scans. This increases the overall acquisition time and therefore might lead to higher patient discomfort. Also, they perform denoising on a B-scan level but ignore the coherence of nearby B-scans within volumetric OCT data. If denoising of entire volumes is desired, simple consecutive processing of individual B-scans can lead to suboptimal results. In this paper, we mitigate both limitations by proposing a unified approach to handle denoising on a B-scan or volumetric level based on single or multiple scans. \section{Background} \label{sec:background} This section presents the variational framework for denoising volumetric data. Figure~\ref{fig:graphicalAbstract} illustrates three modi of this framework, namely image denoising, volumetric denoising, and volumetric+temporal denoising. The pipelines differ in the number of outputs and are therefore divided into multiple-input single-output (MISO) denoising and multiple-input multiple-output (MIMO) denoising. Throughout this paper, we use the following nomenclature.
We denote a volume as a vector $\vec{g} \in \mathbb{R}^{N_z N_{xy}}$ composed of $N_z$ images $\vec{g}_z$, $z = 1, \ldots, N_z $ of size $N_{xy} = N_x N_y$ pixels. For the sake of convenience, 2-D images of size $N_x \times N_y$ are reshaped to vector notation using row-wise scanning. A sequence of volumes is denoted as vector $\vec{G} \in \mathbb{R}^{N_t N_z N_{xy}}$, where $N_t$ is the number of volumes in the sequence. The input to the proposed framework is a sequence of $T$ volumes, where $1 \leq T \leq N_t$. For volumetric as well as volumetric+temporal denoising, we employ $Z$ consecutive images per volume ($1 < Z \leq N_z$), while image denoising is based on a single image in each volume ($Z = 1$). \subsection{Noise Model} In this paper, we consider several denoising applications with two different underlying noise models. In an \textit{additive} noise model, a noise-free volume $\vec{f} = (\vec{f}_1, \ldots, \vec{f}_Z)^\top$ is related to a noisy volume $\vec{g} = (\vec{g}_1, \ldots, \vec{g}_Z)^\top$ according to: \begin{equation} \label{eqn:noiseAdd} \vec{g} = \vec{f} + \vec{n}, \end{equation} where $\vec{n} = (\vec{n}_1, \ldots, \vec{n}_Z)^\top$ denotes an additive noise term. Common instances of this model are AWGN with a stationary distribution of $\vec{n}$ or Poisson noise, where the variance of $\vec{n}$ depends on the measured image data. In a \textit{multiplicative} noise model, each captured volume $\vec{g}$ is related to a respective noise-free volume $\vec{f}$ according to: \begin{equation} \label{eqn:noiseMult} \vec{g} = \vec{f} \odot \vec{n}, \end{equation} where $\odot$ is the Hadamard (element-wise) product. We can turn the multiplicative model in \eqref{eqn:noiseMult} into the additive one in \eqref{eqn:noiseAdd} by transforming the data to a logarithmic measurement domain. One common instance of this model is speckle noise that appears in OCT imaging \citep{Wong2010,Duan2016}.
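As a brief sketch, the reduction of the multiplicative model \eqref{eqn:noiseMult} to the additive model \eqref{eqn:noiseAdd} amounts to an element-wise logarithm before denoising and an exponential afterwards (illustrative only; the offset guarding zero intensities is our own choice, not part of the formulation above):

```python
import numpy as np

def to_log_domain(g, eps=1e-8):
    """Map g = f * n (element-wise) to log g = log f + log n,
    so that additive-noise machinery applies.

    eps guards against zero intensities (our own assumption).
    """
    return np.log(g + eps)

def from_log_domain(g_log, eps=1e-8):
    """Invert the transform after denoising in the log domain."""
    return np.exp(g_log) - eps
```

Denoising is then carried out on `to_log_domain(g)` and the result is mapped back with `from_log_domain`.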
\subsection{Energy Minimization Formulation} Given a sequence of $T$ volumes $\vec{g}^{(t)}$ with $t = 1, \ldots, T$ that are either captured from the same position or from nearby positions and registered to each other, we propose MIMO and MISO denoising. In MISO denoising, we aim at estimating one noise-free volume $\hat{\vec{f}}$. We formulate denoising as the minimization of the objective function: \begin{equation} \begin{split} \hat{\vec{f}} &= \mathop{\mathrm{argmin}}_{\vec{f}} \sum_{t = 1}^{T} \rho \big( \vec{f} - \vec{g}^{(t)} \big) + \lambda R_{\mathrm{QuaSI}}(\vec{f}) + \mu \Vert\nabla \vec{f}\Vert_{1}. \end{split} \label{eqn:objective} \end{equation} The first term in \eqref{eqn:objective} denotes the data fidelity of $\vec{f}$ w.r.t.\,\, the input volumes $\vec{g}^{(t)}$. The second term is the proposed quantile sparse image (QuaSI) prior weighted by $\lambda \geq 0$. The third term denotes anisotropic total variation (TV) weighted by $\mu \geq 0$, which regularizes the spatial gradient $\nabla \vec{f} =~ (\nabla_{x}\vec{f}, \nabla_{y}\vec{f},\nabla_{z}\vec{f})^{\top}$. It is worth noting that the general denoising framework in \eqref{eqn:objective} can handle noise reduction both for entire volumes in 3-D and for individual images in 2-D by constraining the domain of both regularization terms. MIMO denoising follows a similar approach but aims at estimating a sequence of volumes $\hat{\vec{F}}$. We formulate MIMO denoising as the minimization of the objective function: \begin{equation} \begin{split} \hat{\vec{F}} = \mathop{\mathrm{argmin}}_{\vec{F}} &\rho \big( \vec{F} - \vec{G} \big) + \lambda R_{\mathrm{QuaSI}}(\vec{F}) + \mu \Vert\nabla \vec{F}\Vert_{1}\\ &+ \omega \Vert \nabla_t \vec{F} \Vert_1, \end{split} \label{eqn:objective3Dt} \end{equation} where $\nabla_{t} \vec{F}$ denotes the gradient of $\vec{F}$ in temporal direction and the associated TV regularization is weighted by $\omega \geq 0$.
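The anisotropic TV term $\Vert\nabla \vec{f}\Vert_{1}$ in \eqref{eqn:objective} can be sketched with forward differences (a minimal illustration; the Neumann-style boundary handling and the absence of per-axis weights are our own simplifications):

```python
import numpy as np

def anisotropic_tv(f):
    """Sum of absolute forward differences along every axis of f,
    i.e. the anisotropic TV of an image (2-D) or volume (3-D)."""
    tv = 0.0
    for axis in range(f.ndim):
        d = np.diff(f, axis=axis)  # forward differences along one axis
        tv += np.abs(d).sum()
    return tv
```

A constant volume has zero TV, while a single axis-aligned step contributes the jump height times the face area, which is why the term favours piecewise-constant reconstructions.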
In \eqref{eqn:objective} and \eqref{eqn:objective3Dt}, the data fidelity terms use the loss function $\rho: \mathbb{R}^N \rightarrow \mathbb{R}_0^+$ to model the image formation. In general, the image formation needs to consider a mixture of noise, potential misalignments between the input volumes, or motion artifacts. Following prior work on mixed noise models in image restoration \citep{Kohler2015c}, we propose to use the Huber loss \citep{Ochs2015}: \begin{equation} \rho(\vec{l}) = \sum_{i = 1}^N \phi(l_i), \end{equation} where: \begin{equation} \phi(l) = \begin{cases} \frac{1}{2} l^2 & \text{if}~ |l| \leq \epsilon\\ \epsilon \left( |l| - \frac{1}{2}\epsilon \right) & \text{otherwise}, \end{cases} \end{equation} and $\epsilon > 0$ denotes the threshold of the Huber loss. This leads to an outlier-insensitive model while the underlying data fidelity term remains convex. \begin{figure*}[!tb] \scriptsize \centering \setlength\figurewidth{0.37\textwidth} \setlength\figureheight{0.5\figurewidth} \subfloat[Noisy B-scan $\vec{f}_{noisy}$ ]{ \includegraphics[width = 0.37\textwidth]{images/noisy.png} \label{fig:visualizeNoisy} } \qquad \subfloat[Gold standard B-scan $\vec{f}_{gold}$ ]{ \includegraphics[width = 0.37\textwidth]{images/gold.png} \label{fig:visualizeGold} } \subfloat[$\vec{r}_{noisy}$]{ \includegraphics[width = 0.37\textwidth]{images/QuaSIInitialEnhanced.png} \label{fig:visualizeQuaSIInitial} } \qquad \subfloat[$\vec{r}_{gold}$]{ \includegraphics[width = 0.37\textwidth]{images/QuaSIGoldEnhanced.png} \label{fig:visualizeQuaSIGold} } \subfloat[Histogram of $\vec{r}_{noisy} $]{ \input{images/QuaSIInitialHisto.tikz} \label{fig:histogramQuaSIInitial} } \qquad \subfloat[Histogram of $\vec{r}_{gold} $]{ \input{images/QuaSIGoldHisto.tikz} \label{fig:histogramQuaSIGold} } \caption{Analysis of our proposed QuaSI prior using median filtering $Q(\cdot)$ to model the appearance of OCT B-scans.
(a) and (b) depict a noisy B-scan along with the respective gold standard taken from the pig eye dataset \cite{Mayer2012}. (c) and (d) show the residual $\vec{r} = \vec{f} - Q(\vec{f})$ of the QuaSI regularization term, where brighter pixels express higher residuals (contrast enhanced for visualization). (e) and (f) depict the corresponding histograms of both residuals, where the histogram for the gold standard is sparse. Our QuaSI prior exploits the sparsity of $\vec{r} = \vec{f} - Q(\vec{f})$ for regularization in our variational denoising framework.} \label{fig:histogram} \end{figure*} \begin{figure*}[!tb] \centering \includegraphics[width = 0.8\textwidth]{images/linearization.png} \caption{Construction of the binary matrix to approximate the quantile filter $Q(\vec{f}) = \vec{Q}\vec{f}$.} \label{fig:lookUp} \end{figure*} \section{Quantile Sparse Image (QuaSI) Prior} \label{sec:quantileSparseImagePrior} A robust and efficient regularization term is of importance to achieve results with a high signal-to-noise ratio (SNR). The better the regularization term is able to model natural or medical images, the better the result of the optimization. Structure preservation is a sensitive issue when dealing with medical data. The images might contain small morphological structures that need to be preserved for the purpose of diagnosis. In order to tackle the challenges referred to above, the so-called quantile sparse image (QuaSI) prior is introduced. \subsection{Definition of the Prior} The QuaSI prior is based on quantile filtering, where the quantile filter is denoted as $\tilde{\vec{f}} = Q(\vec{f})$. The $p$-quantile with $p \in [0, 1]$ is determined within a local neighborhood $\mathcal{N}(i)$. The local neighborhood consists of $d^3$ voxels, where $d$ denotes the width of the cubic filter kernel. For the $i$-th voxel in $\vec{f}$ we filter according to $\tilde{f}_i = \mathrm{quantile}_{\mathcal{N}(i)} (f_i, p)$.
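The $p$-quantile filter and the resulting residual can be sketched with SciPy's rank filter (the median corresponds to $p = 0.5$; this is an illustration with reflective boundary handling of our own choosing, not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import percentile_filter

def quasi_residual(f, p=0.5, d=3):
    """Residual r = f - Q(f), where Q is the p-quantile filter over a
    d x d x d neighbourhood; the L1 norm of r is the QuaSI prior value."""
    qf = percentile_filter(f, percentile=100 * p, size=d, mode="reflect")
    return f - qf
```

For piecewise-constant (noise-free) data the residual is mostly zero, which is exactly the sparsity the prior exploits.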
Inspired by the regularization-by-denoising priors of \cite{Romano2016a}, we require the denoised volume to be a fixed point of the quantile filter. Accordingly, we define: \begin{equation} R_{\mathrm{QuaSI}}(\vec{f}) = ~\left|\left| \vec{f} - Q(\vec{f}) \right|\right|_1. \label{eqn:quasiPrior} \end{equation} Specifically, regularization according to \eqref{eqn:quasiPrior} enforces sparsity of the residual $\vec{f} - Q(\vec{f})$. This offers a general model for regularization and, depending on the application, various types of statistics can be chosen for $Q(\vec{f})$. In this paper, we propose the median filter, where $\tilde{f}_i = \mathrm{median}_{\mathcal{N}(i)} (f_i)$. This follows the rationale that median filtering facilitates structure-preserving denoising under non-Gaussian noise. Further choices, including erosion and dilation, are not covered in this paper. In the literature, quantiles have been used, e.\,g., to obtain a reference image for estimating non-periodic motion \citep{Rohkohl2011}; such applications could also be handled by the QuaSI prior. To validate the QuaSI prior using median filter regularization for denoising, we study its behavior under real measurement noise. For this purpose, we use the publicly available pig eye dataset by \cite{Mayer2012}, which provides a gold standard OCT B-scan obtained from the average of 455 registered noisy OCT B-scans. We compare a noisy OCT B-scan $\vec{f}_{noisy}$ with the gold standard $\vec{f}_{gold}$ in Fig.~\ref{fig:visualizeNoisy} and Fig.~\ref{fig:visualizeGold}. The residuals $\vec{r} = \vec{f} - \textit{Q}(\vec{f})$ of the QuaSI regularization term are illustrated in Fig.~\ref{fig:visualizeQuaSIInitial} for the noisy B-scan and in Fig.~\ref{fig:visualizeQuaSIGold} for the gold standard. Compared to the gold standard, the noisy B-scan yields a less sparse signal, as shown in the histograms of both residuals in Fig.~\ref{fig:histogramQuaSIInitial} and Fig.~\ref{fig:histogramQuaSIGold}.
Notice that the QuaSI regularization does not penalize image discontinuities. The histogram for the noisy B-scan contains fewer zero elements, while the histogram for the gold standard is sparse. Our proposed QuaSI prior exploits these observations for structure-preserving regularization in our variational denoising framework. \subsection{Linearization} In order to deal with the non-linearity of the quantile operator $\textit{Q}(\vec{f})$, we perform the linearization $\textit{Q}(\vec{f}) = \vec{Q}\vec{f}$, similar to the work of \cite{Pan2016}. The binary matrix $\vec{Q}$ is assembled element-wise according to: \begin{equation} Q_{ij} =~ \begin{cases} 1 & \mathrm{if} \ j=q, \\ 0 & \mathrm{otherwise}, \end{cases} \label{eqn:linearizationQuasi} \end{equation} where $q = \arg\mathrm{quantile}_{r \in \mathcal{N}(i)} f_r$. This operation filters the $i$-th pixel according to the $p$-quantile in its local neighborhood $\mathcal{N}(i)$. For $\vec{f}^\prime = \vec{f}$, the linearization fulfills $\textit{Q}(\vec{f}^\prime) = \vec{Q} \vec{f}^\prime$ exactly, while otherwise $\vec{Q}$ serves as an approximation of the quantile filter. Figure~\ref{fig:lookUp} illustrates the construction of the binary matrix $\vec{Q}$ in 2-D. Each pixel is replaced by the quantile within its local neighborhood. The position of the quantile is stored in the binary matrix. In this example, the quantile is at position $j$. Thus, the $i$-th row of the matrix contains a one in the $j$-th column and zeros otherwise. The multiplication $\vec{Q}\vec{f}$ yields the quantile filtered result. \section{Deploying QuaSI for Denoising} \label{sec:deployingQuaSIForOCTDenoising} In this section, we show how the proposed QuaSI prior can be deployed for volumetric and temporal denoising. We derive two numerical optimization algorithms for denoising based on a MISO and a MIMO mode.
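The construction of the binary matrix $\vec{Q}$ in \eqref{eqn:linearizationQuasi} can be sketched for a 1-D signal and the median ($p = 0.5$) as follows (our own simplified illustration; the edge replication at the borders is an assumption, not the authors' boundary handling):

```python
import numpy as np
from scipy.sparse import csr_matrix

def build_Q(f, d=3):
    """Sparse binary matrix Q with Q @ f equal to the median-filtered f
    (1-D case, window width d). Row i carries a single 1 in the column
    of the neighbour that attains the median around position i."""
    n = len(f)
    half = d // 2
    rows, cols = [], []
    for i in range(n):
        # neighbourhood indices, replicating the edge samples
        idx = np.clip(np.arange(i - half, i + half + 1), 0, n - 1)
        # column of the value attaining the median in the window
        j = idx[np.argsort(f[idx], kind="stable")[len(idx) // 2]]
        rows.append(i)
        cols.append(j)
    return csr_matrix((np.ones(n), (rows, cols)), shape=(n, n))
```

Because each row contains exactly one nonzero entry, $\vec{Q}\vec{f}$ reproduces the quantile-filtered signal for the $\vec{f}$ it was built from, and serves only as an approximation for other inputs.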
\subsection{Multiple-Input Single-Output (MISO) Mode} \label{sec:MISO} MISO denoising in our framework is based on the energy minimization formulation in \eqref{eqn:objective}. In order to handle the non-smooth $L_1$ norm terms, we adopt ADMM optimization \citep{Goldstein2009}. To this end, \eqref{eqn:objective} is reformulated to the constrained optimization problem: \begin{equation} \begin{split} \hat{\vec{f}} =~ &\mathop{\mathrm{argmin}}_{\vec{f}} \sum_{t = 1}^{T} \rho \big( \vec{f} - \vec{g}^{(t)} \big) + \lambda \Vert \vec{u} \Vert_1 + \mu \Vert\vec{v}\Vert_{1} \\ & \mathrm{\ such\ that\ } \ \vec{u} = ~\vec{f} - \textit{Q}(\vec{f}),~\vec{v} = ~ \nabla \vec{f}, \end{split} \label{eqn:forCT} \end{equation} where $\vec{u}$ and $\vec{v}$ are auxiliary variables. Then, an unconstrained optimization problem is obtained from \eqref{eqn:forCT} using quadratic penalty functions according to: \begin{equation} \begin{split} \hat{\vec{f}} = ~&\mathop{\mathrm{argmin}}_{\vec{f}} \sum_{t = 1}^{T} \rho \big( \vec{f} - \vec{g}^{(t)} \big) + \mu\Vert \vec{v} \Vert_{1} + \lambda\Vert \vec{u} \Vert_{1} \\ & + \dfrac{\alpha}{2} \Vert \vec{u} - \vec{f} + \textit{Q}(\vec{f})\Vert_{2}^{2} + \dfrac{\beta}{2} \Vert \vec{v} - \nabla \vec{f} \Vert_{2}^{2}. \end{split} \label{eqn:forCT2} \end{equation} The penalty parameters $\alpha > 0$ and $\beta > 0$ enforce the constraints $\vec{u} = \vec{f} - \textit{Q}(\vec{f})$ and $\vec{v} = \nabla \vec{f}$. For $\alpha,~\beta \rightarrow \infty$, we recover the original problem \eqref{eqn:objective}. In order to strictly enforce the constraints, the Bregman variables $\vec{b}_u$ and $\vec{b}_v$ are introduced.
Then, we minimize the augmented Lagrangian: \begin{equation} \begin{split} &\mathcal{L}_{\mathrm{AL}}(\vec{f}, \vec{u}, \vec{v}, \vec{b}_u, \vec{b}_v) = ~\sum_{t = 1}^{T} \rho \big( \vec{f} - \vec{g}^{(t)} \big)\\ & + \dfrac{\alpha}{2} \Vert \vec{u} - \vec{f} + \textit{Q}(\vec{f}) - \vec{b}_{u} \Vert_{2}^{2} + \lambda\Vert \vec{u} \Vert_{1}\\ &+ \dfrac{\beta}{2} \Vert \vec{v} - \nabla \vec{f} - \vec{b}_v\Vert_{2}^{2} + \mu\Vert \vec{v} \Vert_{1}. \end{split} \label{eqn:augmentedLagrangian} \end{equation} We iteratively optimize \eqref{eqn:augmentedLagrangian} by alternating minimization w.r.t.\,\, the individual parameters. Hence, three subproblems emerge, in which the $L_1$-norm terms are decoupled from the $L_2$-norm terms. The minimization of the augmented Lagrangian \eqref{eqn:augmentedLagrangian} w.r.t.\,\, $\vec{f}$ can be solved in a least-squares sense. To this end, the binary matrix $\vec{Q}$ is constructed using the result $\vec{f}^k$ from the previous iteration, where $k$ denotes the iteration index. In order to cope with the Huber loss, iteratively re-weighted least squares (IRLS) is applied. Solving the resulting least-squares problem leads to the linear system: \begin{align} \vec{A}\vec{f}^{k+1} &= \vec{b} \label{eqn:cgEquationSystem}\\ \vec{A} &= \sum_{t=1}^{T} \vec{W}^{(t)} + \beta \nabla^{\top}\nabla + \alpha \vec{M}^\top \vec{M} \label{eqn:cgEquationSystem:matA}\\ \begin{split} \vec{b} &= \sum_{t=1}^{T} \vec{W}^{(t)} \vec{g}^{(t)}\\ &+ \beta \nabla^\top( \vec{v} - \vec{b}_{v}) + \alpha \vec{M}^{\top}( \vec{u}- \vec{b}_{u}), \end{split} \label{eqn:cgEquationSystem:vecB} \end{align} where $\vec{M} = \vec{I} - \vec{Q}$ with the identity matrix $\vec{I}$. In \eqref{eqn:cgEquationSystem} - \eqref{eqn:cgEquationSystem:vecB}, $\vec{W}^{(t)}$ are diagonal weight matrices constructed from $\vec{f}^k$.
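To make the $\vec{f}$-subproblem concrete, the following toy sketch (our own illustrative code, not the actual implementation) assembles $\vec{A}$ and $\vec{b}$ for a 1-D signal with $T = 1$, unit IRLS weights, zero-initialized auxiliary and Bregman variables, and a 3-tap median filter, and solves the system with CG:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

def median_linearization(f):
    # Binary Q for a 3-tap median filter (1-D; endpoints passed through).
    n = f.size
    Q = sparse.lil_matrix((n, n))
    Q[0, 0] = Q[n - 1, n - 1] = 1.0
    for i in range(1, n - 1):
        Q[i, i - 1 + np.argsort(f[i - 1:i + 2], kind="stable")[1]] = 1.0
    return Q.tocsr()

rng = np.random.default_rng(1)
n = 64
g = np.repeat([0.2, 0.8], n // 2) + 0.05 * rng.standard_normal(n)  # noisy step

alpha, beta = 1.0, 0.5
W = sparse.identity(n, format="csr")       # unit IRLS weights in this toy case
D = sparse.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
M = sparse.identity(n) - median_linearization(g)   # M = I - Q

# With u = v = b_u = b_v = 0, the right-hand side reduces to W g
A = W + beta * D.T @ D + alpha * M.T @ M
f_new, info = cg(A, W @ g)
print(info)   # 0 on successful convergence
```

Since $\vec{A}$ is a sum of a positive definite and two positive semi-definite terms, CG is applicable.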
Using the intermediate result $\vec{f}^{k}$, we can compute the weights for IRLS according to: \begin{equation} W_{ii}^{(t)} = ~ \dfrac{\phi^\prime \left(f^k_i - g_i^{(t)} \right)}{\left|f^k_i - g_i^{(t)}\right|}, \label{eqn:huberWeight} \end{equation} where $\phi^\prime(l)$ is the derivative of the Huber loss. The threshold of the Huber loss is set to $\epsilon = 1.345\sigma$ to achieve a 95-percent efficiency of the estimator under Gaussian noise \citep{Ochs2015}. We use the median absolute deviation (MAD) rule to obtain a consistent estimate of the standard deviation according to $\sigma = 1.4826 \cdot \mathrm{MAD}(f^k_i - g_i^{(t)})$ \citep{Rousseeuw1987}. To solve the linear system \eqref{eqn:cgEquationSystem}, conjugate gradient (CG) iterations are used. The minimization of the augmented Lagrangian \eqref{eqn:augmentedLagrangian} w.r.t.\,\, the auxiliary variables can be done by exploiting the separability of the problem. Given the estimate for the intermediate result $\vec{f}^{k+1}$, this leads to the element-wise updates: \begin{align} u_i^{k+1} = ~ \mathrm{shrink} (&[\vec{f}^{k+1} - \vec{Q} \vec{f}^{k+1} + \vec{b}^{k}_{u}]_i, \lambda/\alpha) \label{eqn:updateAuxU},\\ v_i^{k+1} = ~\mathrm{shrink} (&[\nabla \vec{f}^{k+1} + \vec{b}^{k}_v]_i, \mu / \beta ), \label{eqn:updateAuxV} \end{align} where $\mathrm{shrink}(z, \gamma) = \mathrm{sign}(z) \max(|z| - \gamma, 0)$ denotes the shrinkage operator \citep{Goldstein2009}. Given an estimate for the intermediate result $\vec{f}^{k+1}$ as well as the auxiliary variables $\vec{u}^{k+1}$ and $\vec{v}^{k+1}$, the Bregman variables are updated according to: \begin{align} \vec{b}^{k+1}_u &=~ \vec{b}^{k}_u + (\vec{f}^{k+1} - \vec{Q} \vec{f}^{k+1} - \vec{u}^{k+1}), \label{eqn:updateBregmanU}\\ \vec{b}^{k+1}_v &=~ \vec{b}_v^{k} + (\nabla\vec{f}^{k+1} - \vec{v}^{k+1}). \label{eqn:updateBregmanV} \end{align} Algorithm \ref{alg:denoisingAlgorithm2D} summarizes the proposed ADMM-based iteration scheme.
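These element-wise operations are straightforward to implement; a minimal sketch with our own naming (soft thresholding, Huber IRLS weights, and the MAD estimate):

```python
import numpy as np

def shrink(z, gamma):
    """Soft thresholding: shrink(z, gamma) = sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def huber_irls_weights(r, eps):
    """IRLS weights W_ii = phi'(r_i) / |r_i| for the Huber loss phi:
    1 in the quadratic region, eps/|r_i| in the linear region."""
    a = np.abs(r)
    return np.where(a <= eps, 1.0, eps / np.maximum(a, 1e-300))

def mad_sigma(r):
    """Consistent noise estimate sigma = 1.4826 * MAD(r)."""
    return 1.4826 * np.median(np.abs(r - np.median(r)))

r = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(shrink(r, 1.0))              # small residuals are thresholded to zero
print(huber_irls_weights(r, 1.0))  # the two outliers are down-weighted
```

Residuals inside the Huber region keep full weight, while outliers are attenuated, which makes the data term robust to speckle outliers.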
Overall, we use two nested optimization loops to solve \eqref{eqn:forCT}. We use the mean of the input images as an initial guess \smash{$\vec{f}^1$} as well as \smash{$\vec{u}^1 = \vec{v}^1 = \vec{0}$, $\vec{b}_u^1 = \vec{b}_v^1 = \vec{0}$}. The weight matrices for IRLS are updated at every iteration. The linearization $\vec{Q}$ of the quantile filter is updated every $K_{\text{inner}}$ iterations, assuming the position of the quantile does not change within the next $K_{\text{inner}}$ iterations. This assumption speeds up the algorithm, as the construction of the matrix is time-consuming. Note that $K_{\text{inner}}$ should not be chosen too large in order to avoid a bad approximation of the quantile filter. A proper evaluation of the convergence of the algorithm is presented in Sect.~\ref{sec:convergenceAndParameterSensitivity}. \begin{algorithm}[!t] \caption{MISO denoising with QuaSI prior} \label{alg:denoisingAlgorithm2D} \begin{algorithmic} \State Set $\vec{u}^1 = \vec{v}^1 = \vec{b}_u^1 = \vec{b}_v^1 = \vec{0}$, $\vec{f}^1 = \frac{1}{T} \sum_{t = 1}^T \vec{g}^{(t)}$ \For{$k = 1, \ldots, K_{\mathrm{outer}}$} \State Assemble $\vec{Q}$ from $\vec{f}^k$ according to \eqref{eqn:linearizationQuasi} \For{$i = 1, \ldots, K_{\mathrm{inner}}$} \State Update weights $\vec{W}^{(t)}$ using \eqref{eqn:huberWeight} \State Update $\vec{f}^{k+1}$ using CG for \eqref{eqn:cgEquationSystem} \State Update $\vec{u}^{k+1}$ and $\vec{v}^{k+1}$ using \eqref{eqn:updateAuxU} - \eqref{eqn:updateAuxV} \State Update $\vec{b}_u^{k+1}$ and $\vec{b}_v^{k+1}$ using \eqref{eqn:updateBregmanU} - \eqref{eqn:updateBregmanV} \EndFor \EndFor \end{algorithmic} \end{algorithm} \begin{algorithm}[!t] \caption{MIMO denoising with QuaSI prior} \label{alg:MIMODenoising} \begin{algorithmic} \State Set $\vec{F}^1 = \vec{G}$, $\vec{U}^1 = \vec{V}^1 = \vec{D}^1 = \vec{B}^1_U = \vec{B}^1_V = \vec{B}^1_D = \vec{0}$ \For{$k = 1, \ldots, K_{\mathrm{outer}}$} \State Assemble $\vec{Q}$ from $\vec{F}^{k}$ according 
to \eqref{eqn:linearizationQuasi} \For{$i = 1, \ldots, K_{\mathrm{inner}}$} \State Update weights $\vec{W}^{(t)}$ using \eqref{eqn:huberWeight} \State Update $\vec{F}^{k+1}$ using CG for \eqref{eqn:cgEquationSystemMIMO} \State Update $\vec{U}^{k+1}$, $\vec{V}^{k+1}$, $\vec{D}^{k+1}$ using \eqref{eqn:updateU} - \eqref{eqn:updateD} \State Update $\vec{B}_U^{k+1}$, $\vec{B}_V^{k+1}$, $\vec{B}_D^{k+1}$ using \eqref{eqn:updateBU} - \eqref{eqn:updateBD} \EndFor \EndFor \end{algorithmic} \end{algorithm} \subsection{Multiple-Input Multiple-Output (MIMO) Mode} \label{sec:MIMO} MIMO denoising follows a similar optimization approach and is based on the energy minimization formulation in \eqref{eqn:objective3Dt}. To this end, the augmented Lagrangian is given by: \begin{align} \begin{aligned} &\mathcal{L}_{\mathrm{AL}}(\vec{F}, \vec{U}, \vec{V},\vec{D}, \vec{B}_{U}, \vec{B}_{V}, \vec{B}_{D}) = \rho(\vec{F}-\vec{G})\\ &+ \frac{\alpha}{2} \left| \left| \vec{U} - \vec{F} + Q(\vec{F}) - \vec{B}_{U} \right|\right|_{2}^{2} + \lambda\|\vec{U}\|_{1} \\ &+ \frac{\beta}{2} \left| \left| \vec{V} - \nabla_{x,y,z}\vec{F} - \vec{B}_V \right|\right|_{2}^{2} + \mu \|\vec{V}\|_{1} \\ &+ \frac{\gamma}{2} \left| \left| \vec{D} - \nabla_{t}\vec{F} - \vec{B}_{D} \right|\right|_{2}^{2} + \omega \|\vec{D}\|_{1}, \end{aligned} \label{objective_2_reformat} \end{align} where $\vec{U}$, $\vec{V}$, and $\vec{D}$ denote auxiliary variables with the respective Bregman variables $\vec{B}_{U}$, $\vec{B}_{V}$, and $\vec{B}_{D}$ to enforce the constraints of QuaSI, spatial TV, and temporal TV regularization, respectively. Following MISO denoising as presented in \sref{sec:MISO}, we linearize the non-linear quantile operator $Q(\vec{F}) = (Q(\vec{f}_1), \ldots, Q(\vec{f}_T))^\top$ using \eqref{eqn:linearizationQuasi}.
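Since the quantile filter acts on each volume independently, the stacked linearization is block diagonal in the per-volume matrices. A 1-D sketch (a 3-tap median stands in for the actual neighborhood; names are illustrative):

```python
import numpy as np
from scipy import sparse

def median3_linearization(f):
    # Binary linearization of a 3-tap median filter (endpoints pass through).
    n = f.size
    Q = sparse.lil_matrix((n, n))
    Q[0, 0] = Q[n - 1, n - 1] = 1.0
    for i in range(1, n - 1):
        Q[i, i - 1 + np.argsort(f[i - 1:i + 2], kind="stable")[1]] = 1.0
    return Q.tocsr()

rng = np.random.default_rng(2)
T, n = 3, 16
volumes = [rng.standard_normal(n) for _ in range(T)]    # stand-ins for f_1..f_T

# Sequence-level operator: block diagonal in the per-volume linearizations
Q_seq = sparse.block_diag([median3_linearization(f) for f in volumes],
                          format="csr")
F = np.concatenate(volumes)

per_volume = np.concatenate([median3_linearization(f) @ f for f in volumes])
print(np.allclose(Q_seq @ F, per_volume))   # True
```

Applying the stacked operator to the concatenated sequence is thus equivalent to filtering each volume separately.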
Then, we have $Q(\vec{F}) = \vec{Q}\vec{F}$, where $\vec{Q} = (\vec{Q}_1, \ldots, \vec{Q}_T)^\top$ and for each volume $\vec{f}_t$ in the sequence $\vec{F}$ we have $Q(\vec{f}_t) = \vec{Q}_t \vec{f}_t$. Based on this linearization, we solve \eqref{objective_2_reformat} with an alternating scheme by minimizing w.r.t.\,\, the individual parameters. The minimization w.r.t.\,\, $\vec{F}$ leads to the linear system: \begin{align} \vec{A}\vec{F}^{k+1} &= \vec{b} \label{eqn:cgEquationSystemMIMO}\\ \vec{A} &= \vec{W} + \beta \nabla_{x,y,z}^\top\nabla_{x,y,z} + \gamma \nabla_t^\top\nabla_t + \alpha \vec{M}^\top\vec{M} \label{eqn:cgEquationSystem:MIMOA}\\ \begin{split} \vec{b} &= \vec{W}\vec{G} + \beta\nabla_{x,y,z}^\top(\vec{V}-\vec{B}_{V}) \\ & + \gamma\nabla_t^\top(\vec{D} - \vec{B}_{D}) + \alpha \vec{M}^\top(\vec{U} - \vec{B}_{U}), \end{split} \label{eqn:cgEquationSystem:MOMOb} \end{align} where $\vec{W}$ is a diagonal weight matrix associated with $\vec{F}^k$ and constructed from the Huber loss according to \eqref{eqn:huberWeight}. We then solve \eqref{eqn:cgEquationSystemMIMO} using CG iterations. The auxiliary variables $\vec{U}$, $\vec{V}$, and $\vec{D}$ are updated element-wise according to: \begin{align} U_i^{k+1} = ~ \mathrm{shrink} (&[\vec{F}^{k+1} - \vec{Q} \vec{F}^{k+1} + \vec{B}^{k}_{U}]_i, \lambda/\alpha), \label{eqn:updateU} \\ V_i^{k+1} = ~\mathrm{shrink} (&[\nabla_{x,y,z} \vec{F}^{k+1} + \vec{B}^{k}_V]_i, \mu / \beta ), \\ D_i^{k+1} = ~\mathrm{shrink} (&[\nabla_t \vec{F}^{k+1} + \vec{B}^{k}_D]_i, \omega / \gamma ).
\label{eqn:updateD} \end{align} Given the intermediate sequence $\vec{F}^{k+1}$ along with the auxiliary variables $\vec{U}^{k+1}$, $\vec{V}^{k+1}$, and $\vec{D}^{k+1}$, the Bregman variables are updated according to: \begin{align} \vec{B}^{k+1}_U &=~ \vec{B}^{k}_U + (\vec{F}^{k+1} - \vec{Q} \vec{F}^{k+1} - \vec{U}^{k+1}), \label{eqn:updateBU}\\ \vec{B}^{k+1}_V &=~ \vec{B}_V^{k} + (\nabla_{x,y,z} \vec{F}^{k+1} - \vec{V}^{k+1}), \\ \vec{B}^{k+1}_D &=~ \vec{B}_D^{k} + (\nabla_t \vec{F}^{k+1} - \vec{D}^{k+1}).\label{eqn:updateBD} \end{align} An illustration of the proposed optimization scheme is given in Algorithm \ref{alg:MIMODenoising}. \section{Applications and Evaluation} \label{sec:experimentsAndResults} In order to show the applicability of the proposed framework for image, volumetric, and volumetric+temporal denoising, we evaluate our framework in different diagnostic and interventional imaging workflows, namely OCT as well as C-arm CT. Specifically, we benchmark our method on different datasets, including comparisons to the state-of-the-art in the respective fields. \begin{figure*}[!tb] \scriptsize \centering \setlength \figurewidth{0.35\textwidth} \setlength \figureheight{0.75\figurewidth} \subfloat{ \includegraphics[width = 0.6\textwidth]{images/legendMSRCNR.png} } \subfloat{ \input{images/pigEyePSNR_color.tikz} } \qquad \subfloat{ \input{images/pigEyeSSIM_color.tikz} } \caption{Quantification of noise reduction in terms of mean PSNR and SSIM for different denoising methods on the pig eye dataset for different numbers of input images.
The points on the curves denote the average PSNR and SSIM, respectively, over the entire pig eye dataset using the number of input images denoted on the x-axis.} \label{fig:pigEyeQualityMeasures} \end{figure*} \begin{figure*}[!tb] \scriptsize \centering \subfloat[Noisy input image]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1cm, width = 1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/9_noisy_f5.png}}; \spy on (0.8,0.9) in node [left] at (2.55, -0.8); \end{tikzpicture}\label{fig:pigEyeDataImages:noisy} } \subfloat[AVG]{\begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1cm, width = 1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/9_avg_f5.png}}; \spy on (0.8,0.9) in node [left] at (2.55, -0.8); \end{tikzpicture}\label{fig:pigEyeDataImages:AVG} } \subfloat[BED \citep{Wong2010}]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1cm, width = 1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/9_bed_f5.png}}; \spy on (0.8,0.9) in node [left] at (2.55, -0.8); \end{tikzpicture} } \subfloat[BM3D \citep{Dabov2007}]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1cm, width = 1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/9_bm3d_f5.png}}; \spy on (0.8,0.9) in node [left] at (2.55, -0.8); \end{tikzpicture} } \subfloat[WMF \citep{Mayer2012}]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1cm, width = 1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/9_wmf_f5.png}}; \spy on (0.8,0.9) in node [left] at (2.55, -0.8); \end{tikzpicture} } \subfloat[BNLM2D \citep{Coupe2009}]{ \begin{tikzpicture}[spy using
outlines={rectangle,orange,magnification=2.5, height=1cm, width = 1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/9_bnlm2D_f5.png}}; \spy on (0.8,0.9) in node [left] at (2.55, -0.8); \end{tikzpicture} } \subfloat[DnCNN \citep{Zhang2017}]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1cm, width = 1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/9_dncnn_f5.png}}; \spy on (0.8,0.9) in node [left] at (2.55, -0.8); \end{tikzpicture}\label{fig:pigEyeDataImages:dncnn} } \subfloat[Ours (TV + QuaSI)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1cm, width = 1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/9_quasi_f5.png}}; \spy on (0.8,0.9) in node [left] at (2.55, -0.8); \end{tikzpicture}\label{fig:pigEyeDataImages:ours} } \caption{Denoising on position 9 from the pig eye dataset using $5$ B-scans. \protect\subref{fig:pigEyeDataImages:noisy} Noisy image, \protect\subref{fig:pigEyeDataImages:AVG} -- \protect\subref{fig:pigEyeDataImages:ours} AVG, BED \citep{Wong2010}, BM3D \citep{Dabov2007}, WMF \citep{Mayer2012}, BNLM2D \citep{Coupe2009}, DnCNN \citep{Zhang2017} and the proposed method.} \label{fig:pigEye} \end{figure*} \subsection{Optical Coherence Tomography Denoising} Throughout all experiments on the OCT data, we adapted our framework to image and volumetric denoising. For denoising on a B-scan level, the parameters were set to $\mu = 0.075 \cdot T$, $\lambda = 5.0 \cdot T$, $\alpha = 100.0 \cdot T$, $\beta = 1.5 \cdot T$, $K_{\mathrm{outer}} = 20$ and $K_{\mathrm{inner}} = 2$ for $T$ B-scans and $3 \times 3$ median filtering to set up the QuaSI prior. In order to find appropriate standard parameters for the proposed method, we proceeded as follows.
The parameter search was conducted on the pig eye dataset, using a clinically relevant image section of eye positions 11 and 12 with 5 noisy B-scans each. First, the parameters of the proposed algorithm with pure TV regularization were set using a grid search approach for $\mu$ and $\beta$. To quantify the image quality, peak-signal-to-noise ratio (PSNR) and structural similarity index (SSIM) were evaluated in addition to a qualitative investigation. Second, the parameters of the proposed algorithm with QuaSI + TV regularization were set, using the optimal TV weights from the previous investigation. For volumetric denoising based on $Z = 6$ adjacent B-scans, the parameters were set to $\mu = 0.0007 \cdot T$, $\lambda = 1.0 \cdot T$, $\alpha = 120.0 \cdot T$, $\beta = 0.05 \cdot T$, $K_{\mathrm{outer}} = 20$ and $K_{\mathrm{inner}} = 2$ for $T$ volumes and $3 \times 3 \times 3$ median filtering. The proposed algorithm for volumetric denoising was evaluated on clinical data only. The selection of standard parameters was performed in the same way as for denoising on a B-scan level. Using $Z = 6$ adjacent B-scans in $T = 5$ volumes from a single patient, the TV weights followed by the QuaSI weights were set. \begin{figure*}[!tb] \scriptsize \centering \setlength \figurewidth{0.35\textwidth} \setlength \figureheight{0.65\figurewidth} \subfloat{ \includegraphics[width = 0.6\textwidth]{images/legendMSRCNR.png} } \subfloat{ \input{images/clinicalMSR_color.tikz} } \qquad \subfloat{ \input{images/clinicalCNR_color.tikz} } \caption{Quantification of noise reduction in terms of mean MSR and CNR measures for denoising on a B-scan level on our clinical dataset for different numbers of input images. The plots illustrate the mean MSR and CNR of the whole clinical dataset and the 5 foreground regions.
Each point on the curves denotes the mean MSR and CNR using the number of input images specified on the x-axis as input to state-of-the-art denoising methods and the proposed algorithm with the QuaSI prior.} \label{fig:patientDataQualityMeasures} \end{figure*} \begin{figure*}[!p] \centering \scriptsize \subfloat[Noisy image (MSR: 2.68, CNR: 2.47)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/2605_OD__XFast_f5_noisy_regions.png}}; \spy on (1.22,0.22) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImages:noisy} } \subfloat[AVG (MSR: 3.17, CNR: 3.17)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/2605_XFast_avg_f5.png}}; \spy on (1.22,0.22) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImages:avg} } \subfloat[BM3D \cite{Dabov2007} (MSR: 4.61, CNR: 4.85)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/2605_XFast_bm3d_f5.png}}; \spy on (1.22,0.22) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImages:bm3d} } \subfloat[BED \cite{Wong2010} (MSR: 4.67, CNR: 4.85)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/2605_XFast_bed_f5.png}}; \spy on (1.22,0.22) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImages:bed} } \subfloat[WMF \cite{Mayer2012} (MSR: 3.67, CNR: 3.55)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, 
height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/2605_XFast_wmf_f5.png}}; \spy on (1.22,0.22) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImages:wmf} } \subfloat[DnCNN (MSR: 3.54, CNR: 3.70)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/2605_XFast_dncnn_f5.png}}; \spy on (1.22,0.22) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImages:dncnn} } \subfloat[BNLM2D (MSR: 5.04, CNR: 5.32)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/2605_XFast_bnlm2D_f5.png}}; \spy on (1.22,0.22) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImages:wnlm2D} } \subfloat[Ours (MSR: 5.02, CNR: 5.36)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/2605_XFast_quasi_f5.png}}; \spy on (1.22,0.22) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImages:ours} } \caption{Visual comparison of denoising results using our clinical dataset with the central B-scan of $T = 5$ volumes from a 46 years old male patient with diabetic retinopathy. \protect\subref{fig:patientDataImages:noisy} Noisy image with manually selected background (red) and foreground regions (green) to determine MSR and CNR. 
\protect\subref{fig:patientDataImages:avg} -- \protect\subref{fig:patientDataImages:ours} AVG, BM3D \citep{Dabov2007}, BED \citep{Wong2010}, WMF \citep{Mayer2012}, DnCNN \citep{Zhang2017}, BNLM2D \citep{Coupe2009}, and the proposed method.} \label{fig:patientDataImages} \end{figure*} \subsubsection{Datasets} \label{sec:datasets} To evaluate the performance of the proposed denoising algorithm, we conducted experiments on two different OCT datasets. This comprises ex-vivo benchmark data and real clinical data. For an evaluation of denoising on a B-scan level, we used the publicly available pig eye dataset provided by \cite{Mayer2012}. The dataset comprises 455 B-scans corresponding to 35 eye positions with 13 scans per position and was captured ex-vivo with a Spectralis HRA \& OCT. The published B-scans were registered to each other to compensate for geometric shifts. We apply denoising to sets of $T$ registered B-scans with $T \in [1,13]$ to demonstrate the influence of different numbers of input B-scans on the denoising result. The pig eye dataset provides a gold standard B-scan that was obtained by averaging all 455 registered scans. The quality of the denoising algorithm was evaluated by assessing the fidelity of a denoised B-scan w.r.t.\,\, the gold standard using the peak-signal-to-noise ratio (PSNR) as well as the structural similarity index (SSIM). In order to evaluate and compare B-scans with volumetric denoising, we use clinical data. A prototype ultrahigh-speed swept-source OCT system with 1050\,nm wavelength and a sampling rate of 400,000 A-scans per second \citep{Choi2013a} was used to acquire volumetric data of 14 human subjects. Subjects with proliferative and non-proliferative diabetic retinopathy and early age-related macular degeneration as well as one healthy subject were imaged. Two volumes were acquired per subject, where each B-scan was acquired five times in immediate succession. We use 500 A-scans by 500 B-scans for a field size of $3 \times 3$\,mm.
For denoising on a B-scan level, the central B-scan of each volume is used, while volumetric denoising is performed on adjacent B-scans including the central one. As the clinical data does not provide a gold standard, we follow prior work by \cite{Fang2012,Ozcan2007,Wong2010} and measure the noise reduction using the mean-to-standard-deviation ratio (MSR) and the contrast-to-noise ratio (CNR) according to: \begin{align} \mathrm{MSR} &= \frac{\mu_{f}}{\sigma_{f}}\\ \mathrm{CNR} &= \frac{| \mu_{f} - \mu_{b} |} { \frac{1}{2} \sqrt{(\sigma_{f}^{2} + \sigma_{b}^{2})}}, \end{align} where $\mu_{f}$ and $\mu_{b}$ as well as $\sigma_{f}$ and $\sigma_{b}$ are the means and standard deviations of the intensities in a foreground and a background region, respectively. The regions to determine MSR and CNR were manually selected for the central B-scan, see Fig.~\ref{fig:patientDataImages:noisy}. \subsubsection{Comparison to the State-of-the-Art} We compared our method against seven competing denoising approaches. As representatives of general-purpose methods, we evaluated BM3D \citep{Dabov2007} as well as a deep denoising CNN (DnCNN) \citep{Zhang2017}, which are state-of-the-art in the field of natural image denoising. We also evaluated non-local means-based speckle noise filtering (BNLM2D) that has been originally proposed for ultrasound image denoising \citep{Coupe2009}. In terms of spatial filters customized for OCT, we used Bayesian estimation denoising (BED) \citep{Wong2010}. In the field of temporal methods using multiple registered B-scans, we evaluate simple averaging (AVG) as a baseline as well as wavelet multi-frame denoising (WMF) \citep{Mayer2012}. To ensure fair comparisons between spatial and temporal methods, we provide the average of all B-scans as input for single-image denoising (BM3D, BNLM2D, DnCNN, and BED). In contrast, AVG and WMF are pure temporal approaches that process multiple registered B-scans. 
Notice that all of these methods can only operate on individual 2-D B-scans to denoise volumetric data and are therefore compared to our proposed method on a B-scan level. The parameters of the competing methods were set according to suggestions of the authors and adapted to the OCT data. \begin{figure*}[!tb] \centering \scriptsize \subfloat{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1.1cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/DataSet27_f1_avg.png}}; \spy on (0.2,1.2) in node [left] at (1.88, -0.59); \end{tikzpicture} } \subfloat{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1.1cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/DataSet27_f5_avg.png}}; \spy on (0.2,1.2) in node [left] at (1.88, -0.59); \end{tikzpicture} } \subfloat{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1.1cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/DataSet27_f13_avg.png}}; \spy on (0.2,1.2) in node [left] at (1.88, -0.59); \end{tikzpicture} } \setcounter{subfigure}{0} \subfloat{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1.1cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/DataSet27_f1_tv.png}}; \spy on (0.2,1.2) in node [left] at (1.88, -0.59); \end{tikzpicture} } \subfloat{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1.1cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/DataSet27_f5_tv.png}}; \spy on (0.2,1.2) in node [left] at (1.88, -0.59); \end{tikzpicture} } \subfloat{ \begin{tikzpicture}[spy using 
outlines={rectangle,orange,magnification=2.5, height=1.1cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/DataSet27_f13_tv.png}}; \spy on (0.2,1.2) in node [left] at (1.88, -0.59); \end{tikzpicture} } \setcounter{subfigure}{0} \subfloat[$T = 1$]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1.1cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/DataSet27_f1_quasi.png}}; \spy on (0.2,1.2) in node [left] at (1.88, -0.59); \end{tikzpicture} } \subfloat[$T = 5$]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1.1cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/DataSet27_f5_quasi.png}}; \spy on (0.2,1.2) in node [left] at (1.88, -0.59); \end{tikzpicture} } \subfloat[$T = 13$]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.5, height=1.1cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.31\textwidth]{images/DataSet27_f13_quasi.png}}; \spy on (0.2,1.2) in node [left] at (1.88, -0.59); \end{tikzpicture} } \caption{This comparison aims at demonstrating the improvement of the proposed spatio-temporal denoising with TV + QuaSI regularization (third row) compared to simple averaging of registered B-scans (top row) and the proposed spatio-temporal denoising with TV regularization only (second row) for different numbers of input images. 
For the comparison, dataset 27 from the pig eye dataset was used to evaluate the proposed algorithm with and without the QuaSI prior using the standard parameters.} \label{fig:pigEyeImages} \end{figure*} \begin{figure*} \centering \subfloat{ \includegraphics[width = 0.25\textwidth]{images/legendTVvsQuaSI.png}}\\ \subfloat{ \includegraphics[width=0.95\textwidth]{images/TVvsQuaSI.png}} \caption{Mean PSNR, SSIM, MSR and CNR measures to quantify noise reduction with and without the QuaSI prior for 1, 5 and 13 input images. The two bar graphs on the left-hand side illustrate the average PSNR and SSIM over the entire pig eye dataset using the proposed algorithm with and without QuaSI prior and the standard parameters. The average MSR and CNR over the entire clinical dataset is shown in the two bar graphs on the right-hand side.} \label{fig:TVvsQuaSImeasures} \end{figure*} First, we conducted experiments for denoising on a B-scan level on the pig eye dataset. Figure~\ref{fig:pigEyeQualityMeasures} depicts the mean PSNR and SSIM of the competing denoising methods w.r.t.\,\, the gold standard for different numbers of input B-scans. We observed quantitatively that our proposed method consistently outperforms the competing BM3D, BED, and WMF denoising methods regardless of the number of input frames. Moreover, using only $T = 2$ input B-scans, our spatio-temporal method achieved comparable results to averaging $T = 5$ B-scans. The proposed method performed better than BNLM2D for $T < 5$ input B-scans. This reveals that our method is more economical regarding the number of required input scans. This property is essential for clinical applications, where acquiring more scans might lead to unacceptably long acquisition times. Figure~\ref{fig:pigEye} depicts qualitative results for $T = 5$ B-scans. Here, the proposed algorithm using the QuaSI prior achieved superior performance in terms of noise reduction, while anatomical structures like retinal layers were preserved.
Comparable results were achieved by BNLM2D, but the latter suffered from small streak-like artifacts. DnCNN achieved comparable results to simple averaging both regarding quantitative measures and qualitative assessment. Second, denoising on a B-scan level was studied on our clinical datasets using the no-reference MSR and CNR measures for a quantitative evaluation. Figure~\ref{fig:patientDataQualityMeasures} depicts the averaged MSR and CNR measures for different numbers of input images. Overall, we observed that BNLM2D and our proposed method achieved the best noise reduction expressed by both measures. Figure~\ref{fig:patientDataImages} compares the denoising performance on one example dataset. We found that AVG, WMF, and BED facilitated structure-preserving denoising but were prone to noise breakthroughs in homogeneous areas, which lowered their MSR and CNR. In contrast, BM3D achieved superior noise reduction but suffered from streak artifacts. Similar observations were made in related work on OCT denoising \citep{Fang2012} and can be explained by the assumption of additive white Gaussian noise used for BM3D. The proposed method achieved a decent tradeoff between noise reduction and structure preservation. \subsubsection{Impact of the QuaSI Prior} We used the pig eye dataset as well as clinical data to evaluate the performance of our spatio-temporal denoising algorithm with and without the QuaSI prior. Figure~\ref{fig:pigEyeImages} illustrates the impact of the QuaSI prior on the denoising result for the pig eye data compared to simple averaging and pure TV regularization. In terms of noise reduction, the proposed variational framework outperformed simple averaging. Especially in the enlarged region, a noticeable difference between averaging and the proposed denoising algorithm is visible. In homogeneous areas, the algorithm considerably suppressed speckle noise, while preserving important structures.
The noise reduction was superior when using a combination of the QuaSI prior and the TV prior for regularization as shown for the retinal structures in the enlarged region. In addition, the QuaSI prior contributed to structure preservation and avoided staircasing artifacts that typically appear in TV denoising. Figure~\ref{fig:TVvsQuaSImeasures} illustrates the impact of the QuaSI prior using PSNR and SSIM (for the pig eye data) as well as MSR and CNR (for clinical data) for different numbers of input scans. Here, our denoising framework with QuaSI prior outperformed TV denoising in terms of all measures. \begin{figure*}[!tb] \centering \scriptsize \subfloat[BM4D $T=1$ (MSR: 5.14, CNR: 5.63)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/3035_BM4D_1.png}}; \spy on (-0.75,0.12) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImagesK1:BM4D:1} } \subfloat[BM4D $T=5$ (MSR: 6.10, CNR: 5.65)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/3035_BM4D.png}}; \spy on (-0.75,0.12) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImagesK5:BM4D:5} } \subfloat[B-scan denoising $T=1$ (MSR: 5.68, CNR: 5.25)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/3035_2D_1.png}}; \spy on (-0.75,0.12) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImagesK1:2D} } \subfloat[B-scan denoising $T=5$ (MSR: 7.78, CNR: 7.15)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on
node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/3035_2D.png}}; \spy on (-0.75,0.12) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImagesK5:2D} } \subfloat[Volumetric denoising $T=1$ (MSR: 6.43, CNR: 5.99)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/3035_3D_1.png}}; \spy on (-0.75,0.12) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImagesK1:3D} } \subfloat[Volumetric denoising $T=5$ (MSR: 6.85, CNR: 6.51)]{ \begin{tikzpicture}[spy using outlines={rectangle,orange,magnification=2.8, height= 3.55cm, width = 1.1cm, connect spies, every spy on node/.append style={thick}}] \node {\pgfimage[width = 0.35\textwidth]{images/3035_3D.png}}; \spy on (-0.75,0.12) in node [left] at (4, 0); \end{tikzpicture}\label{fig:patientDataImagesK5:3D} } \caption{Denoising on the clinical dataset using $T = 5$ registered volumes from a 67-year-old male patient with non-proliferative diabetic retinopathy. The left column illustrates the results of the proposed method on a B-scan level with $Z = 1$ scan \protect\subref{fig:patientDataImagesK1:2D} and on a volumetric level \protect\subref{fig:patientDataImagesK1:3D} as well as BM4D \protect\subref{fig:patientDataImagesK1:BM4D:1} with $Z = 6$ consecutive scans using $T = 1$ input volume. The right column illustrates the results of the proposed method on a B-scan level with $Z = 1$ scan \protect\subref{fig:patientDataImagesK5:2D} and on a volumetric level \protect\subref{fig:patientDataImagesK5:3D} as well as BM4D \protect\subref{fig:patientDataImagesK5:BM4D:5} with $Z = 6$ consecutive scans using $T = 5$ registered input volumes.} \label{fig:patientDataImages2Dvs3D} \end{figure*} \subsubsection{B-scan vs.
Volumetric Denoising} \begin{table*}[!tb] \centering \small \begin{tabular}{L{1cm}cccccc}\toprule & \multicolumn{3}{c}{$T = 1$ volume} & \multicolumn{3}{c}{$T = 5$ volumes} \\ & \multicolumn{1}{C{2cm}} {BM4D \citep{maggioni2013nonlocal}}& \multicolumn{1}{C{2cm}} {B-scan denoising}& \multicolumn{1}{C{2cm}} {Volumetric denoising}& \multicolumn{1}{C{2cm}} {BM4D \citep{maggioni2013nonlocal}}& \multicolumn{1}{C{2cm}} {B-scan denoising}& \multicolumn{1}{C{2cm}} {Volumetric denoising} \\ \midrule MSR &5.16&5.35&5.77 &5.38& 6.50 & 6.31 \\ \midrule CNR &5.00&5.27&5.60 & 5.23&6.38 & 6.18 \\ \bottomrule \end{tabular} \caption{Mean MSR and CNR measures for 1 and 5 registered input volumes on the clinical data. For B-scan denoising, the central B-scan is used and for volumetric denoising 6 adjacent B-scans including the central one are used. The B-scan-wise average of $T = 5$ input volumes served as input to BM4D \citep{maggioni2013nonlocal}.} \label{tab:2Dvs3D} \end{table*} So far, we evaluated denoising of volumetric OCT data by simply processing individual B-scans. In order to compare true volumetric denoising to simple B-scan-wise denoising in our proposed framework, we used our clinical dataset. Volumetric denoising processes 6 consecutive B-scans including the central one. That way, CNR and MSR measures from the previous experiments can be used for comparison. Table~\ref{tab:2Dvs3D} shows the mean MSR and CNR using $T = 1$ and $T = 5$ registered input volumes. The proposed method is compared to BM4D \citep{maggioni2013nonlocal} using $T = 1$ volume and the average of $T = 5$ volumes as an input. Here, we found that our volumetric denoising achieved better results in terms of noise reduction for $T = 1$ input volume, as adjacent B-scans affect denoising positively. For $T = 5$ input volumes, we found that our B-scan denoising achieved slightly better results in terms of noise reduction.
However, as opposed to noise reduction, volumetric denoising achieved superior performance in structure preservation by exploiting coherence between adjacent B-scans. This is depicted in Fig.~\ref{fig:patientDataImages2Dvs3D}, where the retinal layers in the magnified region can be better distinguished. \begin{figure*}[!tb] \scriptsize \centering \setlength\figurewidth{0.255\textwidth} \setlength\figureheight{0.74\figurewidth} \subfloat{ \includegraphics[width = 0.8\textwidth]{images/legendIter_color.png} }\\ \subfloat{ \input{images/conv_PSNR_iter.tikz} } ~ \subfloat{ \input{images/conv_SSIM_iter.tikz} } ~ \subfloat{ \input{images/conv_energy_iter.tikz} } \caption{Convergence analysis for our proposed optimization scheme in OCT B-scan denoising using different combinations of iteration numbers $K_{\mathrm{outer}}$, $K_{\mathrm{inner}}$ and $K_{\mathrm{cg}}$. For each combination, we depict the value of the energy function optimized by ADMM along with the PSNR of the intermediate denoised images over the iterations.} \label{fig:convergenceIterations} \end{figure*} \begin{figure*}[!tb] \scriptsize \centering \setlength \figurewidth{0.35\textwidth} \setlength \figureheight{0.64\figurewidth} \subfloat{ \includegraphics[width = 0.4\textwidth]{images/legendLambda_color.png} }\\ \subfloat{ \input{images/conv_PSNR_lambda.tikz} } \qquad \subfloat{ \input{images/conv_SSIM_lambda.tikz} } \caption{Convergence analysis for our proposed optimization scheme in OCT B-scan denoising using different QuaSI regularization weights $\lambda$.
For each parameter setting, we depict the influence of $\lambda$ using the PSNR and SSIM of the intermediate denoised image over the iterations.} \label{fig:convergenceLambda} \end{figure*} \begin{figure*}[!tb] \scriptsize \centering \setlength \figurewidth{0.35\textwidth} \setlength \figureheight{0.64\figurewidth} \subfloat{ \includegraphics[width = 0.4\textwidth]{images/legendAlpha_color.png} }\\ \subfloat{ \input{images/conv_PSNR_alpha.tikz} } \qquad \subfloat{ \input{images/conv_SSIM_alpha.tikz} } \caption{Convergence analysis for our proposed algorithm in OCT B-scan denoising using different Lagrangian multipliers $\alpha$ for the ADMM optimization. For each parameter setting, we depict the influence of $\alpha$ using the PSNR and SSIM of the intermediate denoised image over the iterations.} \label{fig:convergenceAlpha} \end{figure*} \subsubsection{Convergence and Parameter Sensitivity} \label{sec:convergenceAndParameterSensitivity} \begin{figure*}[!tb] \scriptsize \centering \setlength \figurewidth{0.35\textwidth} \setlength \figureheight{0.6\figurewidth} \subfloat{ \includegraphics[width = 0.35\textwidth]{images/legendSensitivity.png} } \subfloat{ \input{images/sensitivityPSNR.tikz} } \qquad \subfloat{ \input{images/sensitivitySSIM.tikz} } \caption{Parameter sensitivity analysis for the interplay of the QuaSI regularization weight $\lambda$ and the Lagrangian multiplier $\alpha$ used for ADMM in B-scan denoising. The PSNR and SSIM measures were evaluated for a clinically relevant region of position 11 from the pig eye dataset. Each measure was determined for different QuaSI parameters $\lambda$ and $\alpha$ while keeping the TV regularization weight $\mu = 0.075$ and the corresponding Lagrangian multiplier $\beta = 1.5$ fixed.} \label{fig:sensitivityQuaSI} \end{figure*} The convergence of the proposed algorithm is shown experimentally on a B-scan level.
By our definition, the algorithm converges if a stationary point of the objective function \eqref{eqn:objective} is reached. The value of the objective, hereinafter referred to as energy, is computed after every update of the intermediate image $\vec{f}^{k+1}$. In addition, PSNR and SSIM of the intermediate image are computed. Based on the optimal parameter setting $\mu = 0.075 \cdot T$, $\lambda = 5.0 \cdot T$, $\alpha = 100.0 \cdot T$, $\beta = 1.5 \cdot T$, $K_{\mathrm{outer}} = 30$, $K_{\mathrm{inner}} = 10$ and $K_{\mathrm{cg}} = 3$ for B-scan denoising, we denoised dataset 9 of the pig eye data with $T = 8$ B-scans. Figure~\ref{fig:convergenceIterations} shows the impact of $K_{\mathrm{outer}}$, $K_{\mathrm{inner}}$, and $K_{\mathrm{cg}}$ on the convergence using three different parameter settings, where $K_{\mathrm{outer}} \cdot K_{\mathrm{inner}}=300$ for a fair comparison. The approximation of the QuaSI prior is updated every $K_{\mathrm{inner}}$ iterations. We found that increasing numbers of inner iterations ($K_{\mathrm{inner}} = 10$) or CG iterations ($K_{\mathrm{cg}} = 30$) impair the convergence properties of the algorithm as shown by the peaks of the energy and the PSNR. This is mainly caused by the infrequent update of the linearization $\vec{Q}$. If the linearization is updated every iteration ($K_{\mathrm{inner}} = 1$), the convergence is improved as no approximation is necessary but the computational complexity is increased. The optimal setting ($K_{\mathrm{outer}} = 30$, $K_{\mathrm{inner}} = 10$, $K_{\mathrm{cg}} = 3$) provides an excellent tradeoff between stable convergence and low computational complexity. Figure~\ref{fig:convergenceLambda} shows the influence of the QuaSI regularization weight $\lambda$ on the convergence of our algorithm. We found that with decreasing $\lambda$, the PSNR and SSIM measures increase more slowly due to the low impact of the QuaSI prior.
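To make the role of the quantile operator $\vec{Q}$ concrete, the following minimal sketch evaluates the QuaSI term $\lambda\,\|\vec{f}-\vec{Q}(\vec{f})\|_1$ with a $3\times3$ median filter standing in for the quantile filter (the hand-rolled filter and the replicated border handling are our simplifications; $\lambda = 5.0$ follows the setting above):

```python
import numpy as np

def median_filter3(f):
    """3x3 median (50% quantile) filter Q with replicated borders."""
    p = np.pad(f, 1, mode='edge')
    h, w = f.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

def quasi_energy(f, lam=5.0):
    """QuaSI regularization term lambda * ||f - Q(f)||_1: zero for images
    invariant under median filtering, large for speckle-like noise."""
    return lam * np.abs(f - median_filter3(f)).sum()
```

A piecewise-constant image is a fixed point of the median filter, so the term vanishes there; isolated speckle pixels are charged in full, which is exactly the sparsity assumption behind the prior.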
For the optimal setting $\lambda = 5.0$, we observed a fast convergence of our iteration scheme. Notice that further increasing $\lambda$ does not affect the convergence, which underlines the effectiveness of the proposed QuaSI prior and the robustness of our iteration scheme. Figure~\ref{fig:convergenceAlpha} depicts the influence of the Lagrangian multiplier $\alpha$, which enforces the constraint $\vec{u} = \vec{f} - \vec{Q}(\vec{f})$ in our ADMM optimization. For $\alpha \rightarrow \infty$, the augmented Lagrangian \eqref{eqn:augmentedLagrangian} results in the objective function \eqref{eqn:objective}. Hence, decreasing $\alpha$ impairs the convergence as shown by the peaks in the PSNR and SSIM measures over the iterations. Choosing $\alpha$ too large resulted in slower convergence compared to the proposed parameter setting $\alpha = 100$. In order to show the interplay of the QuaSI regularization weight $\lambda$ and the corresponding Lagrangian multiplier $\alpha$ used for ADMM, Fig.~\ref{fig:sensitivityQuaSI} depicts the influence of different configurations on B-scan denoising using fixed TV parameters ($\mu = 0.075$, $\beta = 1.5$). We evaluated the denoising performance in terms of the PSNR and SSIM measures for a clinically relevant region showing retinal layers. Overall, we observed that increasing $\lambda$ and thus the impact of QuaSI consistently improved denoising, whereas the sensitivity to $\alpha$ is low over several orders of magnitude. Notice that our QuaSI prior was insensitive to oversmoothing as shown by the convergence of PSNR and SSIM for large $\lambda$. \subsection{C-Arm Computed Tomography Denoising} C-arm computed tomography (CT) denotes an imaging modality where an X-ray source and detector are mounted on opposing sides of a C-shaped gantry. That gantry is further able to rotate around a patient lying on a table, thus allowing the acquisition of CT-like projection images.
Using image reconstruction techniques \citep{zeng2010medical, strobel20093d}, these projection images can finally be transformed into a volumetric representation of the object under consideration. Clinically, C-arm CT is used both for acquiring single volumetric images and for acquiring sequences of volumes, for example in perfusion imaging for acute stroke diagnosis \citep{univis91367737}. While single volumes just provide static information about the morphology itself, the acquisition of volume sequences typically involves injection of contrast agent during the acquisition, so that the volume sequences provide additional temporal information. Similar to conventional CT, photon effects as well as patient movement and angular undersampling usually deteriorate the image quality by introducing both structured and unstructured noise, see Fig.~\ref{figure:ct_syn} b, Fig.~\ref{figure:ct_real} a. For our experiments, the noise $\vec{n}$ in reconstructed CT volumes is modeled as additive noise according to \eqref{eqn:noiseAdd}, and is further composed of both shot noise $\vec{p}$ and structured noise $\vec{s}$, i.\,e.\,\, \begin{equation} \vec{n} = \vec{p} + \vec{s}. \end{equation} While shot noise in the acquired projection data results from fluctuations measured by the sensor, various processing steps during the reconstruction process complicate an exact statistical description of the noise in the resulting volumetric data \citep{Fessler}. Structured noise comes in the form of high-frequency streak artifacts caused by angular undersampling. \subsubsection{Datasets} \label{sec:CTData} We applied the proposed denoising algorithm to simulated C-arm CT data as well as to acquired, real patient data.
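A toy version of this additive noise model can be generated as follows; the photon count and the sinusoidal streak pattern are purely illustrative assumptions, not the simulation framework used for the actual phantom data:

```python
import numpy as np

def simulate_ct_noise(clean, photons=1000.0, streak_amp=0.05, seed=0):
    """Toy additive model n = p + s: Poisson shot noise p plus a
    high-frequency streak-like pattern s (illustrative only)."""
    rng = np.random.default_rng(seed)
    # shot noise: Poisson counts at a given photon budget, re-centered
    p = rng.poisson(clean * photons) / photons - clean
    # structured noise: a rapidly oscillating pattern along one axis
    x = np.arange(clean.shape[1])
    s = streak_amp * np.sin(0.9 * np.pi * x)[None, :]
    return clean + p + s
```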
For our application, this results in two cases: single volumes can be denoised using \textit{volumetric denoising} (for the sake of convenience, we further refer to this method as SISO - single volume input, single volume output), while sequences of volumes are processed using \textit{volumetric + temporal denoising} (MIMO - multiple volume input, multiple volume output), cf. Fig.~\ref{fig:graphicalAbstract}. In order to evaluate the denoising, we particularly investigated simulated data since it provides a known ground truth. The simulated data is based on a digital brain CT phantom \citep{aichert2013realistic}, which was used in combination with a simulation framework mimicking the acquisition process of a C-arm CT system \citep{univis91387862}. We added Poisson noise and simulated minor patient movement during the generation of the simulated data by rotating the head up to a total of \ang{5} around the z-axis between the individual scans. After reconstructing the generated projection data, the individual volumes are co-registered again to ensure pixel correspondence between the volumes. Due to the slightly different positions of the head within individual volumes, the resulting streak artifacts slightly differ between the co-registered individual volumes. For a numerical comparison of different algorithms, we calculate the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) \citep{wang2004image} using the digital phantom data as ground truth. In addition to the simulated data, we also apply the proposed methods to real patient data which was clinically acquired during a perfusion imaging procedure. \subsubsection{Comparison to the State-of-the-Art} Current approaches towards noise reduction in CT imaging are, for example, based on anisotropic filtering or rely on a heuristic detection of streaks and vessel structures \citep{univis91578504, univis91131693, univis91420770}.
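The full-reference PSNR used against the phantom ground truth can be sketched in a few lines (the peak intensity is an assumption here, since the dynamic range of the reconstructed volumes is not specified):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB w.r.t. a known ground truth."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```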
\begin{table}[!tb] \begin{center} \small \begin{tabular}{lcccc} \toprule & \multirow{2}{*}{Input} & \multirow{2}{*}{BM4D} & QuaSI & QuaSI \\ & & & (SISO) & (MIMO) \\ \midrule PSNR & 32.105 & 32.485 & 32.462 & 34.788 \\ \midrule SSIM & 0.883 & 0.914 & 0.925 & 0.943 \\ \bottomrule \end{tabular} \caption{PSNR and SSIM for the input data, BM4D \cite{maggioni2013nonlocal} and the QuaSI methods.} \label{table:measures} \end{center} \end{table} \begin{figure}[!tb] \centering \subfloat[Ground truth]{ \includegraphics[width = 0.228\textwidth]{images/slice24/slice_24_GT.png} } \subfloat[Noisy input]{ \includegraphics[width = 0.228\textwidth]{images/slice24/slice_24_input.png} }\\ \subfloat[SISO with QuaSI]{ \includegraphics[width = 0.228\textwidth]{images/slice24/slice_24_wQ_3D_ssim.png} } \subfloat[SISO without QuaSI]{ \includegraphics[width = 0.228\textwidth]{images/slice24/slice_24_woQ_3D_ssim.png} }\\ \subfloat[MIMO with QuaSI]{ \includegraphics[width = 0.228\textwidth]{images/slice24/slice_24_wQ_3Dt_ssim.png} } \subfloat[MIMO without QuaSI]{ \includegraphics[width = 0.228\textwidth]{images/slice24/slice_24_woQ_3Dt_ssim.png} } \caption{Denoising on simulated C-arm CT data. (a) and (b) denote the ground truth data and the noisy input to the algorithm, respectively. (c) and (d) denote the denoised result with and without the QuaSI prior when using only a single volume (SISO) of the sequence. (e) and (f) denote the denoised result with and without the QuaSI prior when using the full volume sequence (MIMO).
Note that for MIMO, the input to the algorithm is not just the single volume as shown in the figure, but consists of a sequence of volumetric data.} \label{figure:ct_syn} \end{figure} \begin{figure*}[!tb] \subfloat[Noisy input]{ \includegraphics[width = 0.32\textwidth]{images/real/noisyinput.png} } \subfloat[SISO with QuaSI]{ \includegraphics[width = 0.32\textwidth]{images/real/q3d.png} } \subfloat[MIMO with QuaSI]{ \includegraphics[width = 0.32\textwidth]{images/real/q3dt.png} } \caption{Denoising on real clinical C-arm CT data. (a) denotes the noisy input, (b) denotes the denoised result when using only a single volume (SISO) of the sequence. (c) denotes the denoised result when using a volume sequence (MIMO). Note that for MIMO, the input to the algorithm is not just the single volume as shown in the figure, but consists of a sequence of volumetric data.} \label{figure:ct_real} \end{figure*} We compared the results from the proposed methods to the results from BM4D \citep{maggioni2013nonlocal}, which processes volumetric data and is an extension of the well-known BM3D \citep{Dabov2007}. We set the parameters of our method to $\alpha = 0.1 $, $ \lambda = 0.0005 $, $\beta = 0.1 $, $ \mu = 0.005$, $\gamma = 90 $ and $ \omega = 0.8$. These parameters have been optimized by a grid search on a small patch of the phantom data. The median filter regularization is computed on a $3\times3\times3$ kernel. The algorithms are applied to and evaluated on a subset of the brain volume consisting of 30 consecutive slices. The slices, see Fig.~\ref{figure:ct_syn} for synthetic and Fig.~\ref{figure:ct_real} for the real data, show the complete head and contain all structures of interest such as bones, white matter, gray matter and (contrast-enhanced) vessels. The results from the evaluation of the realistic brain phantom show that the proposed denoising algorithm outperforms BM4D with regard to PSNR and SSIM, see Table \ref{table:measures}.
Vessel structures are well-preserved within both volumes and boundaries between gray and white matter are perceptible. Further, a qualitative comparison between processed data with and without the use of the QuaSI prior (Fig.~\ref{figure:ct_syn} c,d and e,f) shows that the QuaSI prior is able to further lower the amount of noise in the volumetric image data. \section{Conclusion} \label{sec:conclusion} In this paper, we have presented the quantile sparse image (QuaSI) prior and a corresponding spatio-temporal denoising algorithm suitable for volumetric OCT or CT data. For OCT denoising, we proposed two pipelines to process either B-scans or volumetric OCT data. The numerical optimization is derived using a linearization of the quantile filter and an alternating direction method of multipliers scheme for efficient minimization. We showed that a combination of QuaSI and Total Variation regularization outperforms state-of-the-art methods in terms of quantitative measures. Interestingly, our method can be applied to both CT and OCT data through minor modifications of the denoising pipeline. This suggests that it may be worthwhile to evaluate the potential of the QuaSI prior for inverse problems of other imaging modalities in future work. \section*{References}
\section{Introduction} Competition between on-site spin-orbit coupling (SOC), Coulomb repulsion and crystal field interactions in iridates gives rise to a plethora of unusual features. For one of the most studied iridium-based compounds, Sr$_2$IrO$_4$, localized transport~\cite{ChengSun2016,Kim2008,Kim2009}, absence of metallization at high pressures~\cite{Haskel2012,Zocco2014} and emergence of an odd-parity hidden order in Rh-doped Sr$_2$IrO$_4$~\cite{ZhaoNature2016,JeongNatCom2017} were observed experimentally but are still debated from a theoretical standpoint. On the other hand, despite many experimental indications of possible superconductivity in doped Sr$_2$IrO$_4$ -- including observation of Fermi arcs and a $d$-wave gap in electron-doped Sr$_2$IrO$_4$~\cite{Wang2015b, Kim2014, KimNature2016} -- no direct signatures of the superconducting state, such as zero electrical resistance and/or Meissner effect, have been observed in these systems yet. The ground state of Sr$_2$IrO$_4$ is believed to be an antiferromagnet (AFM) of pseudospin $j_{\rm eff} =1/2$. The experimental low-energy magnon dispersion is described well by the Heisenberg model with up to third-neighbor couplings.\cite{JHKim2012} On the theoretical side, such a Heisenberg model is derived by projecting the superexchange Kugel-Khomskii model~\cite{Oles2005} onto the spin-orbit (SO) basis.~\cite{Jackeli2009} However, this is a valid approach only if the virtual intermediate doubly occupied states considered in the second-order perturbation theory can be well approximated by the $^3T_1$, $^1T_2$, $^1E$ and $^1A_1$ basis set. Such a basis set is an eigenbasis of the full Coulomb Hamiltonian which includes the 10\textit{Dq} crystal field as well as the Hund's coupling, but not SOC. In other words, this approach is, strictly speaking, valid only in the limit of crystal field and Hund's coupling much larger than SOC.
In that case, the multiplet structure of the $d^4$ configuration is well described by the \textit{LS} coupling scheme. This is indeed the assumption made in many of the earlier works,\cite{Meetei2015, Sato2015, Kush2018, QChen2017} for instance in \onlinecite{Paerschke2017} while deriving the \textit{t-J}-like model of Sr$_2$IrO$_4$ to calculate the PES spectra. The PES spectra thus obtained reproduce the low-energy features of the experimental spectra remarkably well, which is both interesting and intriguing. For materials with a large atomic number $Z$, such as Ir, SOC is expected to be large since it scales proportionally to $Z^4$. The SO splitting in the $5d$ shell of $5d$ transition metals is $\sim 0.5$ eV. In comparison, for transition metal (TM) atoms with partially filled $3d$ shells, such as Fe, Ni and Co, it is one order of magnitude smaller ($\sim 0.05$ eV). For such cases, the {\it LS} coupling scheme describes the multiplet structure well.\cite{Sobelman} For atoms with partially filled $4d$ shells, such as Ru, Rh and Pd, the SO splitting is $\sim 0.1$ eV and there are increasing deviations from the LS coupling scheme.\cite{Sobelman} For even heavier atoms, such as Bi and Pb, where SO splitting is $\sim 2$ eV, the LS coupling is expected to fail. In such cases, the \textit{jj} coupling scheme would be an appropriate choice to describe the multiplet structure. Quantitatively, the relative strength of SOC and electron correlation is measured in terms of the ratio,~\cite{Zvezdin} \begin{equation} \chi = \frac{\xi}{F_2}\,, \end{equation} where $\xi$ is the (single particle) on-site SOC strength and $F_2$ is a Slater integral connected to the Slater parameter $F^{(2)}$ as $F_{2} = F^{(2)}/49$ for the $d^2$ configuration.~\cite{Racah2} Using the Racah parameters $B=420$ cm$^{-1}$ and $C=2100$ cm$^{-1}$ for the Ir$^{4+}$ ion\cite{Andlauer} leads to $F_2 = 720\:\mathrm{cm}^{-1}$.
Substituting $\xi=0.4\:\mathrm{eV}\approx3226\:\mathrm{cm}^{-1}$, we get \begin{equation} \chi \approx 4.5. \end{equation} The \textit{LS} coupling scheme is known to be a good approximation for $\chi\lesssim 1$.~\cite{Zvezdin} Therefore, for the case of iridium the choice of the \textit{LS} coupling scheme is questionable. $4d$ and $5d$ TM oxides with a $J=0$ ground state have attracted a lot of attention as they can lead to interesting effects such as excitonic magnetism in Van Vleck-type Mott insulators \cite{Agrestini2018arxive} or even triplon condensation and triplet superconductivity.\cite{Horsdal2016, Chaloupka2016, Khaliullin2013, Akbari2014} Here, caution must be exercised in the choice of the coupling scheme. For example, the authors of Ref.~[\onlinecite{Khaliullin2013}] claim that $4d$ and $5d$ transition metal ions with the $t_\mathrm{2g}^4$ configuration such as Re$^{3+}$, Ru$^{4+}$, Os$^{4+}$ and Ir$^{5+}$ realize a low-spin $S = 1$ state because of relatively large Hund's coupling and, therefore, the multiplet structure should be calculated within the \textit{LS} coupling scheme. While this is likely to be true for Ru$^{4+}$ as a $4d$-element, which is, in fact, the only element discussed in detail in Refs.~[\onlinecite{Chaloupka2016, Khaliullin2013, Akbari2014}], the validity of the statement for heavier transition metal ions with a partially filled $5d$ shell is not \textit{a priori} known. In fact, recent analysis of resonant inelastic X-ray scattering data on double-perovskite iridium oxides with a formal valency of Ir$^{5+}$ yields SOC strength $\lambda = 0.42$ eV and Hund's coupling $J_H=0.25$ eV, suggesting the {\it jj} coupling scheme to be appropriate for Ir$^{5+}$.\cite{Yuan2017} One of the most prominent differences between the weak and strong SOC limits is the multiparticle multiplet structure which, in turn, affects the experimentally observed features such as the PES spectra.
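The estimate above can be verified numerically. The relations $B = F_2 - 5F_4$ and $C = 35F_4$ are the standard Racah-Slater relations for $d$ electrons, and $8065.5\:\mathrm{cm}^{-1}/\mathrm{eV}$ is the usual conversion factor:

```python
# Racah parameters for the Ir^{4+} ion quoted in the text (cm^-1)
B, C = 420.0, 2100.0

# Standard relations for d electrons: B = F2 - 5*F4, C = 35*F4
F4 = C / 35.0            # 60 cm^-1
F2 = B + 5.0 * F4        # 720 cm^-1, as quoted

# Single-particle SOC strength xi = 0.4 eV converted to cm^-1
xi = 0.4 * 8065.5        # ~3226 cm^-1

chi = xi / F2            # ~4.5, well outside the LS regime chi <~ 1
```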
A clear understanding of how the low-energy description of SOC-driven insulators is modified in the weak and strong SOC limits is fundamental in developing a satisfactory theoretical description for these systems. In this article, therefore, we investigate the implication of the two coupling schemes in the effective low-energy description of the ARPES spectra. We discuss the multiplet structures of $5d$ TM ions with the $t_{\rm 2g}^4$ configuration in the weak and strong SOC limits, defined by the $LS$ and $jj$ coupling scheme, respectively. We then construct an effective low-energy $t$-$J$ Hamiltonian used to describe the ARPES spectra. For brevity, we focus on ${\rm Sr_2IrO_4}$ to calculate the theoretical spectra within the Self-Consistent Born Approximation (SCBA) in the {\it jj} coupling scheme and make an explicit comparison with the corresponding results obtained earlier within the {\it LS} coupling scheme~\cite{Paerschke2017} as well as the experimental results. This is particularly relevant in view of the fact that, despite the apparent consensus, the validity of the {\it LS} coupling for ${\rm Sr_2IrO_4}$ has not been established. Also, a satisfactory theoretical description of ${\rm Sr_2IrO_4}$ is still being developed.~\cite{JHKim2012,Cao2017} The present work provides indirect evidence of the validity of the {\it LS} coupling scheme for ${\rm Sr_2IrO_4}$. More importantly, we explicitly show the particular manifestation of the coupling schemes on the kinetic part of a generalized \textit{t-J}-like Hamiltonian and discuss its ramifications. This further allows us to speculate and discuss other scenarios where such implications could be drastic. This article is organized as follows. First, in Section \ref{Section:j-jL-Sintro}, we discuss the \textit{LS} and the \textit{jj} coupling schemes within the perturbation theory calculation of the multiplet structure.
In particular, we consider the case of two holes in the t$_\mathrm{2g}$ shell, relevant for the theoretical modeling of the ARPES spectra of iridates. In Section \ref{Section:j-jL-S2sites}, we discuss how the choice of the coupling scheme manifests itself in the \textit{t-J} model. In Section \ref{Section:j-jL-Siridate}, the relevance of all these results to the calculation of ARPES spectra on Sr$_2$IrO$_4$ will be discussed. Finally, we discuss some of the subtle issues and conclude in Sections \ref{Sec:Discussions} \& \ref{section:jjLSconclusions}, respectively. \section{Coupling schemes} \label{Section:j-jL-Sintro} Calculating the ARPES spectral function for Sr$_2$IrO$_4$ amounts to calculating the Green's function for the hole introduced into the AF $j=1/2$ ground state in the photoemission process.~\cite{Paerschke2017} In the octahedral crystal field, the $d$ levels split into $t_\mathrm{2g}$ and $e_\mathrm{g}$ manifolds. There are five electrons per Ir, so effectively there is one hole residing on the lower $t_\mathrm{2g}$ manifold. While the $t_{\rm 2g}$ manifold is composed of $d_{xy}$, $d_{xz}$, and $d_{yz}$ orbitals, the hole carries an effective orbital momentum $l=1$ and a spin $s=1/2$ due to orbital moment quenching.~\cite{AbragamBleaney} Due to strong on-site SOC, the t$_\mathrm{2g}$ levels further split into a $j=1/2$ doublet and a $j=3/2$ quartet and the hole occupies the lower-energy doublet.\cite{Kim2008, Jackeli2009} Adding a hole to the Ir$^{4+}$ ion leads to the $5d^4$ configuration. Since each hole has an effective orbital momentum $l=1$,~\cite{AbragamBleaney} the $d^4$ configuration effectively mimics the $p^2$ configuration and we focus on the multiplet structure of the latter. The multiplet structure depends on the coupling scheme, as shown in Fig.~\ref{fig:j-jL-S} and discussed in the following.
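As an illustrative cross-check (our addition, not part of the derivation), the $p^2$ term content underlying the multiplet structure ($^1S$, $^3P$ and $^1D$, i.e. $1+9+5=15$ states) can be recovered by brute-force counting of the antisymmetric two-particle states:

```python
from itertools import combinations
from collections import Counter

# Single-particle p states labelled by (m_l, 2*m_s)
states = [(ml, ms2) for ml in (-1, 0, 1) for ms2 in (-1, +1)]

# Antisymmetric two-particle states: unordered pairs of distinct states
pairs = list(combinations(states, 2))  # 15 states in total

# Distribution of total (M_L, 2*M_S); peeling off maximal weights gives
# 1D (M_L up to 2, M_S = 0), 3P (M_L up to 1, S = 1) and a leftover 1S
dist = Counter((a[0] + b[0], a[1] + b[1]) for a, b in pairs)
```

The highest weight $(M_L, M_S) = (2, 0)$ seeds the $^1D$ term, $(1, 1)$ seeds $^3P$, and the single remaining $(0,0)$ state is $^1S$, matching the level scheme of Fig.~\ref{fig:j-jL-S}.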
It is important to note that the need for considering either the {\it LS} or the {\it jj} coupling scheme arises only for the cases when there is more than one fermion per site. In such cases, the multi-particle multiplet structure differs in the weak and strong SOC limits. For undoped Sr$_2$IrO$_4$, with only one hole per site, both SOC and correlation effects can be treated on equal footing.\cite{Zhong2013,Griffith} \begin{figure}[!hb] \begin{center} \includegraphics[width=1.01\columnwidth]{JJvsLScartoon6.pdf} \caption{Schematic representation of the multiplet structure for a $p^2$ configuration in the \textit{LS} coupling scheme (left) and the \textit{jj} coupling scheme (right). The singlet-triplet splitting $\lambda = \xi/2$, where $\xi$ is the (single particle) on-site SOC strength and $\Delta$ is the splitting between $J=1$ and $J=2$ states that depends on Coulomb interactions and Hund's coupling. The mixing between $^3P_0$ and $^1S_0$ multiplets is schematically shown by the dotted line. For comparison, the energy reference has been chosen to be equal in both coupling schemes. \label{fig:j-jL-S}} \end{center} \end{figure} We begin with the full Hamiltonian of the system \begin{equation} \label{Hamfull_j-jL-S} \mathcal{H}=\mathcal{H}_{\rm Cen}+\mathcal{H}_{\rm res}+\mathcal{H}_{\rm SOC}. \end{equation} Here, $\mathcal{H}_{\rm Cen}$ is the central field Hamiltonian and includes the kinetic energy of all electrons, the nucleus-electron Coulomb interaction and the centrally symmetric part $S(r_i)$ of the Coulomb electron-electron repulsion: \begin{equation} \label{Hamcen} \mathcal{H}_\mathrm{Cen}=\sum_{i=1}^{N}\left(-\frac{1}{2}\nabla_{r_i}^2-\frac{Z}{r_i}+S(r_i)\right), \end{equation} where $Z$ is the atomic number of the nucleus and $N$ is the total number of electrons in the system.
The residual Coulomb Hamiltonian describes the angular part of the Coulomb interaction between electrons: \begin{equation} \label{Hamres} \mathcal{H}_\mathrm{res}=\sum_{i>j}^{N}\frac{1}{r_{ij}}-\sum_{i=1}^{N}{S(r_i)}, \end{equation} and $\mathcal{H}_\mathrm{SOC}$ is the sum of all on-site spin-orbit interactions \begin{equation} \label{HamSOC_j-jL-S} \mathcal{H}_\mathrm{SOC}=\lambda \textbf{L} \cdot \textbf{S}=\sum_{i=1}^{N}{\xi_i \textbf{l}_i \cdot \textbf{s}_i}. \end{equation} Eq.~(\ref{Hamfull_j-jL-S}) can be solved perturbatively, taking $\mathcal{H}_\mathrm{Cen}$ to be the unperturbed part of the Hamiltonian. The eigenstates of this unperturbed system are denoted by $\psi_\mathrm{Cen}$: \begin{equation} \mathcal{H}_\mathrm{Cen}\left|\psi_\mathrm{Cen}\right\rangle=E_\mathrm{Cen}\left|\psi_\mathrm{Cen}\right\rangle, \end{equation} and define the electronic configuration $\psi_\mathrm{Cen} = \left|n_1\:l_1, n_2\:l_2, ...\,, n_\mathrm{N} l_\mathrm{N} \right\rangle$, where $n_i$ is the principal quantum number of the $i$-th particle. The relative strength of $\mathcal{H}_{\rm res}$ and $\mathcal{H}_{\rm SOC}$ dictates the order of the perturbations and leads to two different coupling schemes in the limiting cases. If $ \mathcal{H}_\mathrm{res} > \mathcal{H}_\mathrm{SOC}$, then the strongest perturbation to the eigenstates of $\mathcal{H}_\mathrm{Cen}$ can be calculated as $\left\langle\psi_\mathrm{Cen}\left| \mathcal{H}_\mathrm{res}\right|\psi_\mathrm{Cen}\right\rangle$. Electronic configurations then split into multiplet terms \begin{equation} \psi^{LS}=\left|S\:M_S\:L\:M_L \right\rangle, \end{equation} characterized by the total orbital $\textbf{L}$ and spin $\textbf{S}$ momenta. SOC further splits these levels, and each level is then described by the total momentum $\textbf{J}=\textbf{L}+\textbf{S}$, as can be seen in Fig.~\ref{fig:j-jL-S}.
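The first step of the \textit{LS} scheme, splitting the configuration into terms, amounts to simple bookkeeping. The short script below (a sanity check on the term content only, not on the energies) enumerates the 15 antisymmetric two-particle microstates of a $p^2$ configuration and peels off the terms by the standard $(M_L, M_S)$ counting, recovering $^1S$, $^3P$, and $^1D$.

```python
from itertools import combinations
from collections import Counter

# Six single-particle states of an effective l = 1 shell: (ml, ms)
spstates = [(ml, ms) for ml in (1, 0, -1) for ms in (0.5, -0.5)]

# 15 antisymmetric two-particle microstates, tallied by (M_L, M_S)
micro = Counter()
for (ml1, ms1), (ml2, ms2) in combinations(spstates, 2):
    micro[(ml1 + ml2, ms1 + ms2)] += 1

def extract_terms(micro):
    """Peel off LS terms: repeatedly find the largest remaining (M_L, M_S)
    and remove the corresponding (2L+1)(2S+1) block of microstates."""
    micro = Counter(micro)
    terms = []
    while sum(micro.values()) > 0:
        ML = max(k[0] for k, v in micro.items() if v > 0)
        MS = max(k[1] for k, v in micro.items() if v > 0 and k[0] == ML)
        L, S = ML, MS
        for mL in range(-L, L + 1):
            for n in range(int(2 * S) + 1):
                micro[(mL, S - n)] -= 1
        terms.append((L, S))
    return sorted(terms)

terms = extract_terms(micro)
# (L, S) = (0, 0) -> 1S, (1, 1) -> 3P, (2, 0) -> 1D
print(terms)
```

The degeneracies $1+9+5=15$ confirm that no microstate is lost, which is exactly the completeness issue discussed for the truncated basis later in the text.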
On the other hand, the \textit{jj} coupling scheme is applicable if $\mathcal{H}_\mathrm{SOC} > \mathcal{H}_\mathrm{res}$, implying that $\mathcal{H}_\mathrm{SOC}$ is the strongest perturbation to $\mathcal{H}_\mathrm{Cen}$. In practice, this means that \textit{L} and \textit{S} are no longer good quantum numbers (i.e., they do not even form a good first-order approximation to the (unknown) eigenbasis of the total Hamiltonian, Eq.~(\ref{Hamfull_j-jL-S})), and the total momentum \textbf{J} has to be calculated as the sum of the individual \textbf{j} momenta characterizing each particle. In order to obtain the multiplet structure in the \textit{LS} (\textit{jj}) coupling scheme, an unambiguous link between the product states $\left|\zeta \sigma \right\rangle \left|\zeta' \sigma' \right\rangle$ (for two holes) and the final multiplet set $\left|S, M_S, L, M_L \right\rangle$ ($\left|j, m_{j},j',m_{j'} \right\rangle$) should be established, where $\zeta, \zeta' = xy,yz,xz$ indicate the orbitals occupied by the holes, and $\sigma, \sigma' = \uparrow, \downarrow$. This is followed by another basis transformation to obtain the states of total momentum ${\bf J}$. In the end, the correspondence between the different $J$ states in the two coupling schemes can be obtained. This involves working with all possible configurations and can be tedious (for details, see Appendix \ref{AppA}). If, however, the multiplet structure in one of the coupling schemes is known, the multiplet structure in the other scheme can be obtained easily: the correspondence between the multiplets $\psi^{LS}_{S\:L\:J\:M_J}$ and $\psi^{jj}_{j\:j'\:J\:M_J}$ obtained within the \textit{LS} and \textit{jj} coupling schemes can, in general, be written as~\cite{Sobelman} \begin{equation} \label{transition9j} \psi^{jj}_{j\:j'\:J\:M_J}=\sum_{L,S}{\left(ss'[S]ll'[L]J|sl[j]s'l'[j']J\right)\psi^{LS}_{S\:L\:J\:M_J}}.
\end{equation} Since the transition between the \textit{LS} and the \textit{jj} coupling scheme is a change of the scheme of summation of four angular momenta, the transformation coefficients in~(\ref{transition9j}) can be expressed in terms of $9j$ symbols:~\cite{Sobelman} \begin{align} \label{9jsymbols} &\left(ss'[S]ll'[L]J|sl[j]s'l'[j']J\right) = \\ & \sqrt{\left(2S+1\right)\left(2L+1\right)\left(2j+1\right)\left(2j'+1\right)}\nonumber \left\{ \begin{array}{ccc} l & l' & L \\ j & j' & J \\ \frac{1}{2} & \frac{1}{2} & S \end{array} \right\}. \end{align} The values of the factor \begin{align} \left\{ \begin{array}{ccc} l & l' & L \\ j & j' & J \\ \frac{1}{2} & \frac{1}{2} & S \end{array} \right\}=A\left(SLJ;\:jj'J\right) \end{align} are given, for example, in Table (5.23) of Ref.~[\onlinecite{Sobelman}] or in Ref.~[\onlinecite{matsunobu1955tables}]. Let us explicitly calculate how $\psi^{jj}_\mathrm{\frac{1}{2}\:\frac{1}{2}\:0\:0}$ decomposes into the $\psi^{LS}_{S\:L\:0\:0}$ states: \begin{align} \label{renormJLScoef} \psi^{jj}_\mathrm{\frac{1}{2}\:\frac{1}{2}\:0\:0} &= \sum_{L,S}\sqrt{\left(2S+1\right)\left(2L+1\right)\left(2\cdot\frac{1}{2}+1\right)\left(2\cdot\frac{1}{2}+1\right)}\nonumber\\ &\times A\left(S\ L\ 0;\:\frac{1}{2}\ \frac{1}{2}\ 0\right)\psi^{LS}_{S\:L\:0\:0}. \end{align} Using Table (5.23) of Ref.~[\onlinecite{Sobelman}], we calculate the values of $A\left(S\ L\ 0;\:\frac{1}{2}\ \frac{1}{2}\ 0\right)$ and arrive at \begin{eqnarray} \label{renormJLScoef2} \psi^{jj}_\mathrm{\frac{1}{2}\:\frac{1}{2}\:0\:0} & = & \frac{1}{\sqrt{3}}\psi^{LS}_\mathrm{0\; 0 \;0 \;0}+\sqrt{\frac{2}{3}}\psi^{LS}_\mathrm{1\; 1 \;0 \;0} \,\,\,\\ & = & \frac{1}{\sqrt{3}}\psi\left(^1S_{0, M_J=0}\right)+\sqrt{\frac{2}{3}}\psi\left(^3P_{0, M_J=0}\right) \,\,.\nonumber \end{eqnarray} On the other hand, the composition of the $J=1$ state remains unchanged in the two coupling schemes.
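The decomposition in Eq.~(\ref{renormJLScoef2}) can also be verified symbolically instead of from tables. The snippet below is a check using SymPy's Wigner $9j$ routine; the rows are ordered $\{s\,s'\,S;\; l\,l'\,L;\; j\,j'\,J\}$, which equals the arrangement printed above under a cyclic row permutation (an even permutation, leaving the $9j$ symbol invariant). It reproduces the weights $1/3$ and $2/3$ of the $^1S_0$ and $^3P_0$ components.

```python
from sympy import Rational, sqrt
from sympy.physics.wigner import wigner_9j

half = Rational(1, 2)

def ls_amplitude(S, L, j=half, jp=half, J=0, l=1, lp=1):
    """Coefficient of psi^{LS}_{S L J M_J} in psi^{jj}_{j j' J M_J},
    Eq. (transition9j), with the 9j symbol in {s s' S; l l' L; j j' J} order."""
    return sqrt((2 * S + 1) * (2 * L + 1) * (2 * j + 1) * (2 * jp + 1)) \
        * wigner_9j(half, half, S, l, lp, L, j, jp, J)

c_1S0 = ls_amplitude(0, 0)  # overlap with ^1S_0
c_3P0 = ls_amplitude(1, 1)  # overlap with ^3P_0

print(c_1S0**2, c_3P0**2)  # weights of the two J = 0 components
```

The two weights sum to one, confirming that the $J=0$ sector is exhausted by the $^1S_0$ and $^3P_0$ multiplets.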
Similar to the $J=0$ states, there will also be a mixing between higher-energy states, such as the two $J=2$ states, $^1D_2$ and $^3P_2$. However, the mixing between the $J=2$ states is omitted from Fig.~\ref{fig:j-jL-S} for clarity. Using Eq.~(\ref{transition9j}), it is therefore possible to obtain the relative composition of the multiplets in the different coupling schemes. This has interesting consequences for the low-energy effective $t-J$ Hamiltonian and the ARPES spectra. More importantly, this already provides an estimate of the relative redistribution of the spectral weight in the ARPES spectra. \section{Manifestation of the coupling scheme in the $t-J$ model} \label{Section:j-jL-S2sites} The time evolution in the Green's function of the hole introduced into Sr$_2$IrO$_4$ in the photoemission process is determined by the Hamiltonian \begin{align} \label{Hamd4} {\mathcal{H}}={\mathcal{H}}_{\rm mag}+{\mathcal{H}}_{\rm SOC}+{\mathcal{H}}_{\rm{t}}, \end{align} where ${\mathcal{H}}_{\rm mag}$ is the Heisenberg Hamiltonian describing the ground state of the system, which depends on the first-, second-, and third-neighbor exchange parameters $J_1$, $J_2$ and $J_3$, ${\mathcal{H}}_{\rm SOC}$ describes the on-site energy of the triplet states, and ${\mathcal{H}}_{\rm{t}}$ represents the kinetic energy of the hole.\cite{Paerschke2017} As we are interested in the low-energy description, in the following we will consider only the low-energy sector of the multiplet structure, consisting of the $J=0$ and $J=1$ states. The $J=2$ states lie at much higher energies, approximately twice the singlet-triplet splitting \cite{Griffith, AbragamBleaney}, and are expected to make only a small contribution to the low-energy model. We note, however, that the resulting reduced Hilbert space is not complete.
As a result, a basis transformation between the product-state basis and the multiplet basis (see Appendix \ref{AppA}) in this reduced Hilbert space is not unitary and leads to normalization problems. Therefore, we consider the full set of 15 configurations (microstates) formed by two holes residing on the $t_{\rm 2g}$ orbitals when deriving the correspondence between the multiplet structures in the two coupling schemes. The (physical) cutoff is to be imposed only after arriving at the final basis set, which is a good approximation to the eigenstates of the full Hamiltonian. Detailed knowledge of the multiplet composition in terms of the product states is also required for deriving the $t-J$ Hamiltonian. Therefore, in the following, we have used the explicit transformations in the {\it jj} coupling scheme, discussed in Appendix \ref{jjbasistr} (Eqs.~(\ref{jjLSHamU3}) and (\ref{jjfinal})). Nevertheless, for completeness and for pedagogical reasons, we provide and discuss both schemes in detail in Appendix \ref{AppA}. We consider the kinetic-energy part of the effective $t-J$ model, ${\mathcal{H}}_{\rm{t}}$, in the two coupling schemes. The derivation within the {\it jj} coupling scheme closely follows that in the {\it LS} coupling scheme\cite{Paerschke2017} and consists of two main steps. We start with the application of the basis transformations Eqs.~(\ref{jjLSHamU3}) and (\ref{jjfinal}) to the hopping term of the \textit{t-J} model, $\langle 5d^4_\textbf{i} \,5d^5_\textbf{j} | \mathcal{H}_\mathrm{t} | 5d^5_\textbf{i}\, 5d^4_\textbf{j}\rangle$, where $\mathcal{H}_\mathrm{t}$ is a general one-particle tight-binding (TB) Hamiltonian adopted from Ref.~[\onlinecite{Paerschke2017}].
Subsequently, we apply the slave-fermion, Holstein-Primakoff, Fourier, and Bogoliubov transformations, leading to: \begin{align} \label{Hamd4partsJJ} &{\mathcal{H}}^{jj}_{\mathrm{t}}= \sum\limits_{\textbf{k}}\left(\textbf{h}_{\textbf{k}\mathrm{A}}^{\dagger}\hat{W}^{0}_{\textbf{k}}\textbf{h}^{\phantom{\dagger}}_{\textbf{k} \mathrm{A}}\! +\!\textbf{h}_{\textbf{k} \mathrm{B}}^{\dagger}\hat{W}^{0}_\textbf{k} \textbf{h}^{\phantom{\dagger}}_{\textbf{k} \mathrm{B}} \right)\! +\\ &\! \sum\limits_{\textbf{k}, \textbf{q}} \left( \textbf{h}_{\textbf{k-q} \mathrm{B}}^{\dagger} \hat{W}^{\mathrm{\alpha}}_{\textbf{k},\textbf{q}} \textbf{h}^{\phantom{\dagger}}_{\textbf{k} \mathrm{B}} \alpha_\textbf{q}^{\dagger} \!+\! \textbf{h}_{\textbf{k-q} \mathrm{A}}^{\dagger} \hat{W}^{\mathrm{\beta}}_{\textbf{k},\textbf{q}}\textbf{h}^{\phantom{\dagger}}_{\textbf{k}\mathrm{B}} \beta_\textbf{q}^{\dagger}\!+\!\mathrm{h.c.}\right),\nonumber \end{align} where $\textbf{h}^{\dagger}$ ($\textbf{h}$) represents the hole creation (annihilation) operator written in the low-energy multiplet basis comprising the singlet ($S_{A/B}$) and triplet states ($T_{m\,A/B}$) with $m = 0,\pm 1$ on spin sublattices A and B: \begin{equation} \label{totalJcutbasisAB} \hat{J} = \left\{S_\mathrm{A}, T_{1 \mathrm{A}}, T_{0 \mathrm{A}}, T_{-1 \mathrm{A}}, S_\mathrm{B}, T_{1 \mathrm{B}}, T_{0 \mathrm{B}}, T_{-1 \mathrm{B}}\right\}, \end{equation} $A$/$B$ denote the spin-sublattice indices accounting for the AF order, and $\alpha^\dag$($\alpha$)/$\beta^\dag$($\beta$) represent the magnon creation (annihilation) operators on the two sublattices. For a realistic description of the motion of the charge excitation in the AF background of $j =1/2$ pseudospins in Sr$_2$IrO$_4$, we consider tight-binding parameters obtained from density functional theory \cite{Paerschke2017} and exchange couplings up to third neighbor that fit the experimental magnon dispersion.
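For orientation, the magnon energies and Bogoliubov factors that enter such a polaron Hamiltonian can be sketched in a few lines of linear spin-wave theory. The snippet below is illustrative only: it keeps nearest-neighbor exchange $J_1$ (whereas the fits quoted in the text also include $J_2$ and $J_3$), and the value $J_1 = 60$ meV is a placeholder, not a fitted parameter.

```python
import numpy as np

def lswt_magnons(qx, qy, J1=0.06, S=0.5, z=4):
    """Linear spin-wave theory for a square-lattice antiferromagnet with
    nearest-neighbor exchange only. Returns the magnon energy and the
    Bogoliubov coefficients (u_q, v_q) that dress the hopping vertices."""
    gamma = 0.5 * (np.cos(qx) + np.cos(qy))
    nu = np.sqrt(1.0 - gamma**2)           # omega_q / (z S J1)
    omega = z * S * J1 * nu
    u = np.sqrt((1.0 + nu) / (2.0 * nu))   # diverges at the Goldstone points
    v = -np.sign(gamma) * np.sqrt((1.0 - nu) / (2.0 * nu))
    return omega, u, v

# Zone-boundary maximum at (pi, 0): omega = z * S * J1
w_X, u_X, v_X = lswt_magnons(np.pi, 0.0)
# Generic point, to check the Bogoliubov normalization u^2 - v^2 = 1
w_g, u_g, v_g = lswt_magnons(1.0, 2.0)
```

The coefficients $u_{\bf q}$ and $v_{\bf q}$ are what the Bogoliubov step folds into the ${\bf q}$-dependent vertices of Eq.~(\ref{Hamd4partsJJ}).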
The hopping parameters are described by $8 \times 8$ matrices, denoted by $\hat{W}$, due to the charge excitation's internal degree of freedom. The terms $\hat{W}^{0}_{\textbf{k}}$ describe the nearest-, next-nearest-, and third-neighbor free hopping of the polaron (i.e. not coupled to magnons), while the vertices $\hat{W}^{\mathrm{\alpha}}_{\textbf{k},\textbf{q}}$ and $\hat{W}^{\mathrm{\beta}}_{\textbf{k},\textbf{q}}$ describe the polaronic hopping. The free hopping matrix is given by \begin{align} \label{W0} \hat{W}^{0}_\textbf{k}= \left(\begin{smallmatrix} \frac{3}{2}F_1 & 0 & \textrm{-}\sqrt{\frac{3}{2}}F_2 & 0 & 0 & \sqrt{\frac{3}{2}}P_2 & 0 & \textrm{-}\sqrt{\frac{3}{2}}P_1\\ 0 & F_4 & 0 & 0 & \sqrt{\frac{3}{2}}P_1 & 0 & Q_1 & 0 \\ \textrm{-}\sqrt{\frac{3}{2}}F_2 & 0 & F_3 & 0 & 0 & Q_2 & 0 & Q_1 \\ 0 & 0 & 0 & 0 & \textrm{-}\sqrt{\frac{3}{2}}P_2 & 0 & Q_2 & 0 \\ 0 & \sqrt{\frac{3}{2}}P_1 & 0 & \textrm{-}\sqrt{\frac{3}{2}}P_2 & \frac{3}{2}F_1 & 0 & \sqrt{\frac{3}{2}}F_2 & 0\\ \sqrt{\frac{3}{2}}P_2 & 0 & Q_2 & 0 & 0 & 0 & 0 & 0 \\ 0 & Q_1 & 0 & Q_2 & \sqrt{\frac{3}{2}}F_2 & 0 & F_3 & 0 \\ \textrm{-}\sqrt{\frac{3}{2}}P_1 & 0 & Q_1 & 0 & 0 & 0 & 0 & F_4 \\ \end{smallmatrix}\right), \end{align} while the matrices containing the vertices are \begin{align} \label{Walpha} &\hat{W}^{\alpha}_{\textbf{k},\textbf{q}}= \left(\begin{smallmatrix} 0 & \sqrt{\frac{3}{2}}L_3 & 0 & \textrm{-}\sqrt{\frac{3}{2}}L_3 & \frac{3}{2}Y_1 & 0 & \textrm{-}\sqrt{\frac{3}{2}}W_2 & 0\\ \sqrt{\frac{3}{2}}L_3 & 0 & L_1 & 0 & 0 & Y_4 & 0 & W_1 \\ 0 & L_1 & 0 & L_1 & \textrm{-}\sqrt{\frac{3}{2}}W_2 & 0 & Y_2 & 0 \\ \textrm{-}\sqrt{\frac{3}{2}}L_3 & 0 & L_1 & 0 & 0 & W_1 & 0 & Y_3 \\ 0 & 0 & 0 & 0 & 0 & \sqrt{\frac{3}{2}}L_4 & 0 & \textrm{-}\sqrt{\frac{3}{2}}L_4 \\ 0 & 0 & 0 & 0 & \sqrt{\frac{3}{2}}L_4 & 0 & L_2 & 0 \\ 0 & 0 & 0 & 0 & 0 & L_2 & 0 & L_2 \\ 0 & 0 & 0 & 0 & \textrm{-}\sqrt{\frac{3}{2}}L_4 & 0 & L_2 & 0 \\ \end{smallmatrix}\right), \end{align} and \begin{align} \label{Wbeta}
&\hat{W}^{\beta}_{\textbf{k},\textbf{q}}= \left(\begin{smallmatrix} 0 & \sqrt{\frac{3}{2}}L_4 & 0 & \textrm{-}\sqrt{\frac{3}{2}}L_4 & 0 & 0 & 0 & 0 \\ \sqrt{\frac{3}{2}}L_4 & 0 & L_2 & 0 & 0 & 0 & 0 & 0 \\ 0 & L_2 & 0 & L_2 & 0 & 0 & 0 & 0 \\ \textrm{-}\sqrt{\frac{3}{2}}L_4 & 0 & L_2 & 0 & 0 & 0 & 0 & 0 \\ \frac{3}{2}Y_1 & 0 & \sqrt{\frac{3}{2}}W_2 & 0 & 0 & \sqrt{\frac{3}{2}}L_3 & 0 & \textrm{-}\sqrt{\frac{3}{2}}L_3 \\ 0 & Y_3 & 0 & W_1 & \sqrt{\frac{3}{2}}L_3 & 0 & L_1 & 0 \\ \sqrt{\frac{3}{2}}W_2 & 0 & Y_2 & 0 & 0 & L_1 & 0 & L_1 \\ 0 & W_1 & 0 & Y_4 & \textrm{-}\sqrt{\frac{3}{2}}L_3 & 0 & L_1 & 0 \\ \end{smallmatrix}\right), \end{align} where the ${\bf k}$-dependent hopping elements $P_i$, $Q_i$, $F_i$, and the ${\bf k}$- and ${\bf q}$-dependent vertices $Y_i$, $W_i$, and $L_i$ are given in Appendix~\ref{AppD}. Thus, by means of the Holstein-Primakoff transformation, we have effectively mapped the complicated many-body problem onto a simpler one, describing the motion of a polaronic quasiparticle composed of a charge excitation dressed by the $j=1/2$ magnons. This is achieved by neglecting the interaction of magnons with each other as well as their renormalization by the quasiparticle propagator. These approximations constitute the well-known self-consistent Born approximation.\cite{Martinez1991,Liu1992, Sushkov1994,Brink1998, Shibata1999, Wang2015} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.95\columnwidth]{JJLSscheme7.pdf} \caption{Charge excitation in Sr$_2$IrO$_4$: (a) Without on-site spin-orbit coupling, there is one hole on the degenerate $xy$, $yz$ and $xz$ orbitals in the ground state on sites $i-1$ and $i+1$, and a charge excitation, i.e. a many-body state consisting of two holes, on site $i$. (b) With on-site spin-orbit coupling, the ground state is described by antiferromagnetically ordered $j=1/2$ isospins, and the charge excitation with total momentum $J$ possesses an internal multiplet structure, calculated within the {\it LS} or {\it jj} coupling scheme.
(c) The same as (b), mapped onto the polaronic problem. The propagation of the charge excitation is described by a polaron dressed by $j =1/2$ magnons. Upon hopping, it creates a broken antiferromagnetic bond of misaligned spins, shown by the wavy line. Here, the $\tau$'s denote the first-, second-, and third-neighbor tight-binding parameters obtained from density functional theory \cite{Paerschke2017} and translated into the many-body language in an exact-diagonalization fashion. The $t$'s stand for the same hopping parameters in the presence of strong on-site spin-orbit coupling. The $W$'s are derived from the $t$'s upon downfolding the model onto the polaronic formalism and describe the hopping parameters of the charge excitation as well as its coupling to magnons. The $W$'s are $8 \times 8$ matrices due to the charge excitation's internal degree of freedom.} \label{cartoon2} \end{center} \end{figure} A schematic description of these steps and the qualitative origin of the $W$ terms is shown in Fig. \ref{cartoon2}. In the absence of SOC, the ground state consists of one hole per site, with spin up or down and occupying one of the three degenerate $t_{\rm 2g}$ orbitals, and a charge excitation composed of two holes (site $i$, see Fig.~\ref{cartoon2}(a)). The charge excitation is a many-body configuration $\left|a \sigma \right\rangle \left|b \sigma' \right\rangle$, described by the total spin $S$ and orbital moment $L$. The wavefunction overlap $\tau$ between neighboring holes is material-specific and can be obtained from density functional calculations.\cite{Paerschke2017} In the presence of SOC (Fig.~\ref{cartoon2}(b)), the ground state with one hole per site is an antiferromagnet of $j =1/2$ pseudospins. The excited state, previously described by $S$ and $L$, must now be described using the total momentum $J$, connected to $L$ and $S$ using either the \textit{LS} or the \textit{jj} coupling scheme.
The hopping parameters $t$ capture the motion of the charge excitations and their interaction with the $j=1/2$ magnons and are derived from the $\tau$'s using the basis transformations of the \textit{LS} and \textit{jj} coupling schemes, as discussed in Appendix \ref{AppA}. Within the SCBA (Fig.~\ref{cartoon2}(c)), only the non-crossing diagrams for the fermion-magnon interaction are retained, leading to a quasiparticle dressed with $j=1/2$ magnons (a polaron). The motion of the polaron is now described by the matrices $W$, which involve the coupling between the excitation and the magnons and are derived from the $t$'s by application of the slave-fermion, Holstein-Primakoff, Fourier, and Bogoliubov transformations (see App. \ref{AppC1}). The structural similarity between the resulting Hamiltonians in the two coupling schemes (see Eq.~(\ref{Hamd4partsJJ}) above and Eq.~(\ref{Hparts})) is evident. However, the $W$ terms describing the free and polaronic hoppings differ from the corresponding terms in the $LS$ coupling scheme. Comparing Eqs.~(\ref{W0}\,--\,\ref{Wbeta}) with Eqs.~(\ref{V0}\,--\,\ref{Vb}), one finds that changing the coupling scheme results in a renormalization of the free-polaron dispersion $ \hat{W}^{0}_{\textbf{k}}$ and of the vertices $ \hat{W}^{\mathrm{\alpha}}_{\textbf{k},\textbf{q}}$ and $ \hat{W}^{\mathrm{\beta}}_{\textbf{k},\textbf{q}}$, in particular for the matrix elements corresponding to the propagation of the polaron with singlet $S_{\mathrm{A,B}}$ character. Thus, in the \textit{t-J} model, the coupling scheme manifests itself in the following way: each term of the kinetic Hamiltonian~(\ref{Hamd4partsJJ}) containing one singlet creation (annihilation) operator $h^\dag_{\mathrm{S\,(A,\,B)}}$ ($h^{\phantom{\dag}}_{\mathrm{S\,(A,\,B)}}$) acquires a renormalization factor of $\sqrt{\frac{3}{2}}$, while terms containing two singlet creation (annihilation) operators acquire a factor of $\frac{3}{2}$.
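The structure of the SCBA iteration used here can be made explicit in a scalar toy version. The sketch below is a deliberately simplified caricature of the $8\times 8$ matrix problem: one polaron band on a small 1D momentum grid, with placeholder dispersion, magnon branch, and vertex; it only illustrates the self-consistent (rainbow-diagram) iteration for $\Sigma$ and $G$, not the actual Sr$_2$IrO$_4$ calculation.

```python
import numpy as np

def scba(eps, omega, M2, w, delta=0.02, n_iter=100):
    """Self-consistent Born approximation for one polaron band.
    eps[k]: free dispersion; omega[q]: magnon energies; M2[k, q]: squared
    fermion-magnon vertex; w: frequency grid. Keeps only the non-crossing
    (rainbow) diagrams:
        Sigma(k, w) = (1/N) sum_q M2[k, q] * G(k - q, w - omega[q]).
    Frequencies shifted off the grid are edge-clamped by np.interp."""
    N = len(eps)
    sigma = np.zeros((N, len(w)), dtype=complex)
    for _ in range(n_iter):
        G = 1.0 / (w[None, :] + 1j * delta - eps[:, None] - sigma)
        new = np.zeros_like(sigma)
        for k in range(N):
            for q in range(N):
                Gkq = G[(k - q) % N]
                shifted = (np.interp(w - omega[q], w, Gkq.real)
                           + 1j * np.interp(w - omega[q], w, Gkq.imag))
                new[k] += M2[k, q] * shifted / N
        if np.allclose(new, sigma, atol=1e-10):
            sigma = new
            break
        sigma = new
    return 1.0 / (w[None, :] + 1j * delta - eps[:, None] - sigma)

# Toy input: 4-site ring, weak coupling (all numbers are placeholders)
k = np.arange(4)
eps = -0.1 * np.cos(2 * np.pi * k / 4)      # free band
omega = np.full(4, 0.05)                    # flat magnon branch
M2 = np.full((4, 4), 0.01)                  # squared vertex
w = np.linspace(-1.0, 1.0, 401)
G = scba(eps, omega, M2, w)
A = -G.imag / np.pi                         # spectral function A(k, w)
```

In the full calculation $G$, $\Sigma$, and the vertices become $8\times 8$ matrices in the basis of Eq.~(\ref{totalJcutbasisAB}), but the self-consistency loop has exactly this shape.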
The above renormalization can be explained by the mixing of the two $J=0$ states, $^3P_0$ and $^1S_0$, as one goes from the \textit{LS} to the \textit{jj} limit. This mixing is shown schematically in Fig.~\ref{fig:j-jL-S} with dotted lines. Therefore, although the choice of the coupling scheme cannot change the number of multiplets or produce new ones, it can have interesting consequences for the low-energy effective model. As evident from Eq.~(\ref{renormJLScoef2}), part of the spectral weight of the $^3P_0$ configuration in the \textit{LS} coupling scheme is transferred to higher energies in the \textit{jj} coupling scheme, whereas some spectral weight from the higher $^1S_0$ state is transferred to lower energies. In other words, the singlet state in the \textit{jj} coupling scheme acquires some admixture of previously excited states and retains only a fraction $\sqrt{\frac{2}{3}}$ of the amplitude of the singlet derived in the \textit{LS} coupling scheme. This results in a renormalization of the hopping amplitudes and vertices by a factor of $\sqrt{\frac{3}{2}}$, seen in Eqs.~(\ref{W0}\,--\,\ref{Wbeta}). The physical consequences of this renormalization will be discussed in the next section, where the theoretical ARPES spectra for Sr$_2$IrO$_4$ in the two coupling schemes will be compared.
\section{Influence of the coupling scheme on the spectral function of ${\rm Sr_2IrO_4}$} \label{Section:j-jL-Siridate} Having obtained the vertices (Eqs.~(\ref{W0}\,--\,\ref{Wbeta})) describing the propagation of the polaron in Sr$_2$IrO$_4$, we calculate the Green's function of the polaron and plot its spectral function within the self-consistent Born approximation (SCBA).\cite{Paerschke2017} Since we do not know the exact value of the splitting $\Delta$ between $\psi^{jj}_\mathrm{1\:M\:\frac{1}{2}\:\frac{3}{2}}$ and $\psi^{jj}_\mathrm{2\:M'\:\frac{1}{2}\:\frac{3}{2}}$ (see Fig.~\ref{fig:j-jL-S}), which also depends on the Hund's coupling $J_\mathrm{H}$, we treat $\Delta$ as a free parameter and perform calculations for three values of $\Delta$ such that the singlet-triplet splitting\footnote{The factor of 5/8 originates from the fact that the (1/2, 3/2) state splits into a quintet ($J=2$) and a triplet ($J=1$).} $\lambda-5\Delta/8$ takes values between $\lambda/2$ and $\lambda/4$ (see Fig.~\ref{fig:ARPESjj}).
\begin{figure}[!t] \raggedright \begin{minipage}{0.45\linewidth} \subfigure{ \includegraphics[width=1.03\linewidth]{58delta_half.pdf} \llap{ \parbox[b]{2.75in}{\small{(a)}\\\rule{0ex}{1.2in} }}\label{fig:ARPESjj:a}% } \subfigure{ \includegraphics[width=1.03\linewidth]{JJLS_line_J1_0_splitting_Lambda_over_three.pdf} \llap{ \parbox[b]{2.75in}{\small{(b)}\\\rule{0ex}{1.2in} }}\label{fig:ARPESjj:b}% } \subfigure{ \includegraphics[width=1.03\linewidth]{58delta_0p75.pdf} \llap{ \parbox[b]{2.75in}{\small{(c)}\\\rule{0ex}{1.2in} }}\label{fig:ARPESjj:c}% } \end{minipage} \begin{minipage}{0.45\linewidth} \subfigure{ \includegraphics[width=0.85\linewidth]{ARPES2.pdf} \llap{ \parbox[b]{2.5in}{\small{(d)}\\\rule{0ex}{2.6in} }} \label{exp} } \subfigure{ \includegraphics[width=1.03\linewidth]{fig2a_everything.pdf} \llap{ \parbox[b]{2.75in}{\small{(e)}\\\rule{0ex}{1.2in} }} \label{fig:ARPESLS} } \end{minipage} \caption{PES spectral function of the low-energy (polaronic) model developed for the quasi-two-dimensional iridates within the \textit{jj} coupling scheme and solved using the self-consistent Born approximation. The value of the Coulomb splitting $\Delta$ is varied so that the singlet-triplet splitting $\lambda-5\Delta/8$ is (a) $\lambda/2$, (b) $\lambda/3$, (c) $\lambda/4$. ARPES experimental data (reproduced from Ref.~[\onlinecite{Nie2015}]) and the spectral function calculated within the \textit{LS} coupling scheme (reproduced from Ref.~\onlinecite{Paerschke2017}) are shown for comparison in panels (d) and (e), respectively. Here the spin-orbit coupling is $\lambda=\xi/2$, with the one-particle SOC $\xi=0.382$ eV following Ref.~\onlinecite{Naturecom2014Maria}; the hopping integrals are calculated as the best fit to the density functional theory (DFT) band structure, as discussed in Ref.~\onlinecite{Paerschke2017}: $t_1=-0.2239$ eV, $t_2=-0.373$ eV, $t'=-0.1154$ eV, $t_3=-0.0592$ eV, $t''=-0.0595$ eV; the spectra are offset by (a)\,--\,(c) $E=-0.97$ eV, (e) $E=-0.77$ eV; broadening $\delta = 0.01$ eV.
\label{fig:ARPESjj}} \end{figure} There are many recent ARPES experiments revealing the shape of the iridate spectral functions,~\cite{Kim2008, Wang2013, delaTorre2015, Liu2015, Brouet2015, Cao2016, KimNature2016, Yamasaki2016,Nie2015} one of which~\cite{Nie2015} is shown in Fig.~\ref{exp}. The salient features of the spectral function are (i) the lowest-energy quasiparticle peak at ($\pi$,$0$) or ($0$,$\pi$) (the $X$ point), followed by an energy gap of $\gtrsim 0.4$ eV, (ii) a well-defined peak at ($0$,$0$) (the $\Gamma$ point), and (iii) a plateau around ($\pi/2$,$\pi/2$) (the $M$ point). While the qualitative features in all the experiments are the same, there are some quantitative differences. For instance, the splitting between the peaks at the $X$ point and the $\Gamma$ point varies in the range $0.15 - 0.25$ eV --- a feature crucial for explicit comparison with the experimental data. Comparing Fig.~\ref{fig:ARPESjj:a} and Fig.~\ref{exp}, one can see that the low-energy peaks at the $M$ and $\Gamma$ points are present in the theoretical ARPES spectra obtained within both coupling schemes. However, as opposed to the {\it LS} coupling scheme, for the {\it jj} coupling scheme the peak at the $\Gamma$ point is significantly softened in the theoretical spectra. Furthermore, the energy gap between the peak positions at the $\Gamma$ point and the quasiparticle peak at $M$ is much larger for any value of the singlet-triplet splitting. \begin{figure}[!t] \centering \subfigure{ \includegraphics[width=0.46\linewidth]{splitting_lambda_free.pdf} \llap{ \parbox[b]{2.7in}{\small{(a)}\\\rule{0ex}{1.15in} }}\label{fig:ARPESjj:3a}% } \subfigure{ \includegraphics[width=0.46\linewidth]{jj_polaronic.pdf} \llap{ \parbox[b]{2.7in}{\small{(b)}\\\rule{0ex}{1.15in} }}\label{fig:ARPESjj:3b}% } \caption{ Free and polaronic contributions to the spectrum in Fig.~\ref{fig:ARPESjj:a}.
(a) Theoretical photoemission spectral function in which only the propagation of the hole not coupled to magnons is allowed, achieved by setting $ \hat{W}^\mathrm{{\alpha}}_{\textbf{k}}=\hat{W}^\mathrm{{\beta}}_{\textbf{k}}\equiv0$. (b) Theoretical photoemission spectral function in which only polaronic propagation via coupling to magnons is allowed (i.e. no free dispersion), achieved by setting $\hat{W}^{0}_{\textbf{k}} \equiv 0$. Parameters as in Fig.~\ref{fig:ARPESjj}. Note, however, the different energy scale. \label{fig:ARPESjj:3}} \end{figure} As the Coulomb splitting $\Delta$ is varied, the most prominent change in the spectral function calculated within the \textit{jj} coupling scheme is the change in the energy gap between the peak at the $\Gamma$ point and the quasiparticle peak. Although the size of this gap depends on the value of the singlet-triplet splitting, it is not fully determined by it. This shift of the quasiparticle peak is understood as an effect of the renormalization of the polaronic coupling discussed earlier. Relatively good qualitative and quantitative agreement with the experiment is obtained only with a small gap of $\lambda/4$ (Fig.~\ref{fig:ARPESjj:c}), which implies $\Delta \sim \lambda$. However, as $\Delta$ becomes comparable to $\lambda$, the \textit{LS} coupling scheme should be used, which indeed shows good qualitative and quantitative agreement with the experiments (Fig.~\ref{fig:ARPESLS}). It is interesting to note that in both the {\it LS} and {\it jj} coupling schemes, there is a reasonably sharp peak at ($\pi/2, \pi/2$), as compared to a plateau in the experimental data. Although the peak at ($\pi/2,\pi/2$) is suppressed in the theoretical spectra too, owing to the scattering of the charge excitation on magnons, this effect is clearly not pronounced enough.
This could arise from an overestimation of the quasiparticle spectral weight in the SCBA.~\cite{Martinez1991} Other possibilities include effects beyond the approximations made in the present study, such as hybridization of the TM $d$ orbitals with the O $2p$ orbitals. Such effects are known to be important in cuprates, where, depending on the photon energy, O $2p$ or Cu $3d$ weights are observed in the ARPES spectra. However, for quasi-2D iridium oxides, both {\it ab initio} quantum chemistry calculations and ARPES experiments suggest that the charge gap is of the order of $0.5$ eV, while the Ir-O charge-transfer gap is approximately 2-3 eV.~\cite{Katukuri2012, Uchida2014} Moreover, the charge gap in the iridates is believed to be a Mott gap~\cite{Carter2013} that is much smaller than the charge-transfer gap, putting the iridates in the Mott-Hubbard regime. Yet another possibility is the role of higher-lying states in the multiplet structure. However, since a realistic description of all the other low-energy features of the ARPES spectra is obtained for the singlet-triplet splitting $\lambda -5\Delta/8 = 0.25 \lambda$, or in the $LS$ coupling scheme, the relative energy difference between the $J=1$ and the $J=2$ states is $\gtrsim \lambda$. Therefore, these states are expected to make an insignificant contribution to the low-energy features. Nevertheless, such effects cannot be ruled out completely. Fig.~\ref{fig:ARPESjj:3} shows the relative contributions of the free and the polaronic parts of the spectra in the {\it jj} coupling scheme for a singlet-triplet gap equal to $\lambda$. Comparison with the corresponding results in the {\it LS} coupling scheme\cite{Paerschke2017} indicates a stronger influence on the polaronic part of the spectra (Fig.~\ref{fig:ARPESjj:3b}) than on the free part (Fig.~\ref{fig:ARPESjj:3a}).
Indeed, the hole of singlet character has the largest contribution to the low-energy band (see Fig.~\ref{fig:ARPESjj:4}), and when the strength of its coupling to magnons is increased by a factor of $\frac{3}{2}$, the band gets additionally renormalized, indicating the importance of the polaronic processes. \begin{figure}[!t] \centering \subfigure{ \includegraphics[width=0.45\linewidth]{J0.pdf} \llap{ \parbox[b]{1.6in}{\small{(a)}\\\rule{0ex}{1.25in} }}\label{fig:ARPESjj:4b}% } \subfigure{ \includegraphics[width=0.45\linewidth]{J1.pdf} \llap{ \parbox[b]{1.6in}{\small{(b)}\\\rule{0ex}{1.25in} }}\label{fig:ARPESjj:4c}% } \caption{$J$-resolved theoretical photoemission spectral function of Fig.~\ref{fig:ARPESjj:a}, with (a) showing the $J=0$ contribution (motion of a ``singlet hole'') and (b) the $J=1$ contribution (motion of a ``triplet hole''). \label{fig:ARPESjj:4}} \end{figure} \section{Discussions} \label{Sec:Discussions} Most SOC-driven strongly correlated materials lie in the intermediate spin-orbit-coupling regime rather than in the extreme limits well described by the \textit{LS} or \textit{jj} coupling schemes.\cite{Sobelman} In fact, knowledge of the composition of the low-energy states and of the relative energy splittings unambiguously dictates which coupling scheme is appropriate. In the absence of quantum chemistry results for the Ir $d^4$ configuration, one needs to resort to indirect verification of a suitable theoretical model. For ions with intermediate SOC, the ground-state multiplets are in general much better captured by the \textit{LS} coupling scheme than the excited states.\cite{Zvezdin} For example, even for some rare-earth compounds with $\xi\approx 1\,-\,10$, \textit{LS} coupling usually describes the experimentally measured lowest multiplet quite well, which is, however, not the case for the higher excited states.
For example, for the Er$^{3+}$ ion, which has a value of $\xi\approx 5.53$ close to that of Ir, the ground-state wave function is given by~\cite{Zvezdin} \begin{equation} \label{eqZvezdin1} |\psi_{GS}\rangle=0.982|^4I\rangle-0.186|^2K\rangle\approx|^4I_{15/2}\rangle\,, \end{equation} i.e. the ground state is indeed well described by the \textit{LS} coupling scheme. However, already for the highest excited multiplet of the same term we have \begin{align} \label{eqZvezdin2} &|\psi_{1}\rangle=0.627|^4I\rangle-0.416|^2K\rangle-0.342|^2G\rangle -\\ &-0.219|^2H\rangle+0.276|^2G'\rangle+0.438|^2H'\rangle\,.\nonumber \end{align} We see that the multiplet $^4I$, which according to the \textit{LS} coupling scheme should describe $ |\psi_{1}\rangle$, in fact contributes only $39$\% to the corresponding excited wave function.~\cite{Zvezdin} It is also important to note that, in the case of Ir, the first excited state $^3P_1$ is not affected by the choice of the coupling scheme, as there exists a unique $J=1$ state. However, this is not the case for, e.g., the $p^3$ and $p^4$ configurations. In the $p^3$ configuration, the two lowest multiplets, $^4S_{\frac{3}{2}}$ and $^2D_{\frac{3}{2}}$, can in general mix with each other as well as with the higher-lying $^2P_{\frac{3}{2}}$. In the $p^4$ configuration, where the order of some states is inverted as compared to the $p^2$ configuration, the first two excited multiplets $^3P_0$ and $^3P_2$ do change places upon going from one coupling scheme to the other,\cite{Sobelman} probably rendering more pronounced effects in the theoretical description. One can, in general, expect much bigger ramifications of the coupling-scheme choice in cases where the compositions of the excited states differ as well, since for the same values of SOC they usually get renormalized much more than the ground state, as exemplified by Eqs.~(\ref{eqZvezdin1})\,--\,(\ref{eqZvezdin2}).
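The numbers quoted in Eqs.~(\ref{eqZvezdin1}) and (\ref{eqZvezdin2}) can be checked directly: both expansions are normalized to within the quoted digits, and the $^4I$ weight in $|\psi_1\rangle$ is $0.627^2 \approx 0.39$, i.e. the $39\%$ stated above. A minimal check:

```python
import numpy as np

# Er^{3+} expansion coefficients quoted from Ref. [Zvezdin]
gs = np.array([0.982, -0.186])       # |4I>, |2K> in the ground state
psi1 = np.array([0.627, -0.416, -0.342, -0.219, 0.276, 0.438])

weight_4I = psi1[0]**2               # 4I admixture in the excited state
print(round(float(weight_4I), 2), float(np.sum(gs**2)), float(np.sum(psi1**2)))
```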
Naturally, the same renormalization effect discussed in the present work would also be observed for an electron in a material with a $t^1_\mathrm{2g}$ ground-state configuration and strong on-site SOC, for any geometry and choice of hopping parameters. For example, deriving a \textit{t-J} model for honeycomb iridates with one hole, which forms the many-body $d^4$ configurations as well, one would get the same renormalization of the kinetic Hamiltonian when going from the \textit{LS} to the \textit{jj} limit, even though the motion of a free charge on the honeycomb lattice is described by a completely different TB model: the hoppings between different orbitals are much larger than those between like orbitals, and they are moreover strongly bond-dependent.\cite{Foyevtsova2013} For the present case, employing the DFT-based TB parameters accounts for the crystal-field effects and distortions such as octahedra rotation. We note, however, that considerable differences from the present case are expected under strong distortions, {\it e.g.} under pressure, due to additional mixing of the states\cite{Bogdanov2015} and, even more importantly, the renormalization of the Clebsch-Gordan coefficients.\cite{Jackeli2009} Furthermore, the fact that the multiplet structure of Ir$^{5+}$ can be so well described by the \textit{LS} coupling scheme also suggests that the superexchange model for Sr$_2$IrO$_4$ can be derived by simply projecting the Kugel-Khomskii model\cite{Oles2005} onto the spin-orbit coupled basis, as done, e.g., in Ref.~[\onlinecite{Jackeli2009}]. \section{Conclusions} \label{section:jjLSconclusions} In conclusion, we have studied the ARPES spectra for quasi-2D square-lattice iridates in the weak and strong SOC regimes, where the multiplet structures are well defined by different coupling schemes.
Specifically, we have studied how the choice of the coupling scheme can influence the multiplet structure, and consequently the low-energy effective model, for ${\rm Sr_2IrO_4}$, effectively described by the $p^2$ configuration. We have shown that for a \textit{t-J}-like model for Sr$_2$IrO$_4$, the \textit{jj} coupling scheme induces a renormalization of the vertices in the kinetic part of the Hamiltonian and prominent changes in the spectral function calculated within SCBA. We have compared the spectra calculated in both coupling schemes to the experimental ARPES data. Interestingly, despite the large SOC, we find much better agreement with the experiment for the model derived within the \textit{LS} coupling scheme. We argue that, just as for many rare-earth compounds with comparable SOC strength, the spin-orbit coupling, albeit strong, is still weak enough to allow for a successful description of the ground state in the framework of the \textit{LS} coupling scheme. For other electronic configurations, such as $p^3$ or $p^4$, where all of the low-energy multiplets are renormalized as we go from the \textit{LS} to the \textit{jj} coupling scheme,\cite{Rubio1986} more dramatic consequences are expected in the theoretical ARPES spectra. Although the choice of the coupling scheme and of the effective low-energy model can be guided by knowledge of the composition and relative energy splittings of the multiplets, in the absence of such experimental and/or quantum chemistry studies their validity must be ascertained independently. \section{Acknowledgements} The authors thank Manuel Richter, Klaus Koepernik, Krzysztof Wohlfeld, Jeroen van den Brink, Flavio Nogueira, Dmytro Inosov and Robert Eder for helpful suggestions and discussions. RR acknowledges financial support from the European Union (ERDF) and the Free State of Saxony via the ESF project 100231947 (Young Investigators Group Computer Simulations for Materials Design - CoSiMa).
\section{Introduction} Since Stanley's 1975 proof of the upper bound conjecture for simplicial spheres via the Stanley-Reisner ring, the study of graded rings associated to combinatorial objects has yielded many deep insights into combinatorics (and vice versa). The \emph{Chow ring} of an atomic\xspace lattice, defined by Feichtner and Yuzvinsky in \cite{fy}, is the latest instance of this pattern. The power of Feichtner and Yuzvinsky's construction was demonstrated by Adiprasito, Huh, and Katz \cite{ahk}, who applied a slight variation of it to the lattice of flats of a matroid in order to resolve the long-standing Heron-Rota-Welsh conjecture. Along the way, they also showed that Chow rings arising from geometric lattices satisfy Poincar\'e duality and versions of the hard Lefschetz theorem and the Hodge-Riemann relations. Here, we explore some of the combinatorial structure of these Chow rings. \subsubsection*{Organization} In the remainder of this section, we summarize some of our main results; Section \ref{sec:background} contains the definitions of matroids and Chow rings. In Section \ref{sec:hilbert}, we derive an explicit form (in terms of permutation statistics) for the Hilbert series of the Chow ring of the matroid associated to a finite vector space. The Charney-Davis quantities of such matroids are computed in Section \ref{sec:charneydavis}. In Section \ref{sec:uniform} we state the specializations of our results to the case of uniform matroids. Finally, in Section \ref{sec:conjectures} we present conjectures and ideas for further work. \subsection{Summary of main results} Let $\ensuremath{\mathbb{F}}_q$ be the finite field of order $q$. Associated to the finite vector space $\ensuremath{\mathbb{F}}_q^n$ is the matroid $M_r(\ensuremath{\mathbb{F}}_q^n)$ whose independent sets are linearly independent subsets of $\ensuremath{\mathbb{F}}_q^n$ of size at most $r$.
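The subspace counts that govern these matroids are Gaussian binomial coefficients, and the smallest cases can be enumerated by machine. The following Python sketch (ours, purely illustrative and not part of the paper) lists all subspaces of $\ensuremath{\mathbb{F}}_2^3$ as XOR-closed sets of bitmask vectors and checks that the number of $k$-dimensional subspaces is the Gaussian binomial coefficient $\genfrac{[}{]}{0pt}{}{3}{k}_2$.

```python
from itertools import combinations
from collections import Counter

def span(generators):
    """Span of a set of F_2 vectors encoded as bitmasks (addition = XOR)."""
    s = {0}
    for v in generators:
        s |= {x ^ v for x in s}
    return frozenset(s)

def gaussian_binomial(n, k, q):
    """The q-binomial coefficient [n choose k]_q evaluated at an integer q."""
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

def subspaces(n):
    """All subspaces of F_2^n, each represented as a frozenset of bitmasks."""
    vectors = range(1, 2 ** n)
    found = set()
    for size in range(n + 1):        # every subspace has a basis of size <= n
        for gens in combinations(vectors, size):
            found.add(span(gens))
    return found

# A subspace S has dimension log2(|S|); tally the subspaces of F_2^3 by dimension.
by_dim = Counter(len(S).bit_length() - 1 for S in subspaces(3))
assert all(by_dim[k] == gaussian_binomial(3, k, 2) for k in range(4))
print({k: by_dim[k] for k in sorted(by_dim)})  # {0: 1, 1: 7, 2: 7, 3: 1}
```

The tallies $1, 7, 7, 1$ are exactly the Gaussian binomials $\genfrac{[}{]}{0pt}{}{3}{k}_2$ for $k = 0, \dots, 3$.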
The lattice of flats of $M_r(\ensuremath{\mathbb{F}}_q^n)$ is given by the collection of subspaces of $\ensuremath{\mathbb{F}}_q^n$ of dimension at most $r$ ordered by inclusion together with the maximal subspace $\ensuremath{\mathbb{F}}_q^n$. In addition, let $U_{n,r}$ denote the uniform matroid of rank $r$ on ground set $[n] := \{1,2,\ldots,n\}$. The lattice of flats of $U_{n,r}$ consists of all subsets of $[n]$ of size at most $r$, together with $[n]$, all ordered by inclusion. Finally, for any matroid $M$, let $A(M)$ be the Chow ring of $M$, and let $H(A(M_r(\ensuremath{\mathbb{F}}_q^n)),t)$ be the Hilbert series of $A(M_r(\ensuremath{\mathbb{F}}_q^n))$ (defined in Section \ref{sec:backgroundAlgebra}). \begin{thm} \label{cor:linearhilbert} For $r = 1,\dots, n$ the Hilbert series of $A\big( M_r(\ensuremath{\mathbb{F}}_q^n) \big)$ is given by \begin{equation} H\big( A(M_r(\ensuremath{\mathbb{F}}_q^n)),t \big) = \sum_{\sigma\in \mathfrak{S}_n}q^{\maj(\sigma) - \exc(\sigma)}t^{\exc(\sigma)} - \sum_{j=r}^{n-1} \sum_{\sigma\in F_{n,n-j}}q^{\maj(\sigma) - \exc(\sigma)}t^{r-\exc(\sigma)} \label{qRankHilb} \end{equation} where $F_{n,n-j}$ is the set of permutations in $\mathfrak{S}_n$ with at least $n-j$ fixed points. \end{thm} In particular, when $r = n$, the Hilbert series of $A\big( M_n(\ensuremath{\mathbb{F}}_q^n) \big)$ is \[ H\Big( A\big( M_n(\ensuremath{\mathbb{F}}_q^n) \big), t\Big) = \sum_{\sigma\in \mathcal \mathfrak{S}_n}q^{\maj(\sigma) - \exc(\sigma)}t^{\exc(\sigma)} = A_n(q,t), \] the $n$th $\maj$-$\exc$ $q$-Eulerian polynomial considered by Shareshian and Wachs in \cite{sw}. We also study the Charney-Davis quantity of $A(M_r(\ensuremath{\mathbb{F}}_q^n))$, defined as $(-1)^{\frac{r-1}{2}}H(A(M_r(\ensuremath{\mathbb{F}}_q^n)), -1)$ for odd $r$ (see Section \ref{sec:backgroundAlgebra}). When $r$ is even, the Charney-Davis quantity vanishes (see Remark \ref{rmk:evencd}). 
When $r$ is odd, the Charney-Davis quantity has an interpretation in terms of the signature of a quadratic form on the Chow ring (see Remark \ref{rmk:signature}), and in this case, we derive two formulas for the Charney-Davis quantity: one in terms of determinants and one in terms of the $q$-secant numbers. \begin{thm} \label{thm:linearcd} \begin{enumerate}[label=(\alph*)] \item For odd $r$, the Charney-Davis quantity of $A\big( M_r(\ensuremath{\mathbb{F}}_q^n) \big)$ is \[ (-1)^{\frac{r-1}{2}}\sum_{k=0}^{\frac{r-1}{2}}\genfrac{[}{]}{0pt}{}{n}{2k}_qE_{2k,q}\] where $E_{2k,q}$ is the $q$-analogue of the $k$-th secant number (see Definition \ref{defn:ts}). \item More explicitly, for odd $r$ the Charney-Davis quantity in part (a) is equal to \[ (-1)^{\frac{r-1}{2}}\left(1+[n]_q!\sum_{a = 1}^{\frac{r-1}{2}} \frac{(-1)^a}{[n-2a]_q!}\Delta_{a,q}\right) \] for $\Delta_{a,q}$ the determinant \[ \Delta_{a,q} = \det\left( \begin{array}{ccccc} \frac{1}{[2]_q!} & 1 & 0 & \cdots & 0\\ \frac{1}{[4]_q!} & \frac{1}{[2]_q!} & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{1}{[2a-2]_q!} & \frac{1}{[2a-4]_q!} & \frac{1}{[2a-6]_q!} & \cdots& 1\\ \frac{1}{[2a]_q!} & \frac{1}{[2a-2]_q!} & \frac{1}{[2a-4]_q!} & \cdots& \frac{1}{[2]_q!} \end{array} \right). \] \end{enumerate} \end{thm} All of these invariants are $q$-analogs of the corresponding invariants of the Chow ring of the uniform matroid. \section{Definitions and Background} \label{sec:background} In this section, we first define the Charney-Davis quantity. We then define Chow rings and state some salient results on them. Finally, we give a brief review of some permutation statistics, which we use to establish notation and introduce some of the $q$-analogs that will later appear. For an introduction and reference about matroid theory, we refer the reader to \cite{oxley}.
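Several of the classical identities that we will $q$-deform are easy to machine-check. For instance, at $q=1$ the determinant in Theorem \ref{thm:linearcd}(b) reduces to the classical secant-number determinant (restated as Proposition \ref{prop:sectandet} below). The following Python sketch (ours, illustrative only) generates the signed secant numbers from the recurrence $E_{2n} = -\sum_{k<n}\binom{2n}{2k}E_{2k}$ and verifies $E_{2n} = (-1)^n(2n)!\,\Delta_n$ for small $n$ with exact rational arithmetic.

```python
from fractions import Fraction
from math import comb, factorial

def secant_numbers(m):
    """Signed secant numbers E_0, E_2, ..., E_{2m}, from sech(x) * cosh(x) = 1."""
    E = [1]
    for n in range(1, m + 1):
        E.append(-sum(comb(2 * n, 2 * k) * E[k] for k in range(n)))
    return E

def det(M):
    """Exact determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def delta(n):
    """The n x n determinant Delta_n: entry (i, j) is 1/(2(i-j+1))! for j <= i+1."""
    M = [[Fraction(1, factorial(2 * (i - j + 1))) if j <= i + 1 else Fraction(0)
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    return det(M)

E = secant_numbers(5)
assert E[:4] == [1, -1, 5, -61]          # sech(x) = 1 - x^2/2! + 5 x^4/4! - ...
assert all(E[n] == (-1) ** n * factorial(2 * n) * delta(n) for n in range(1, 6))
```

The $q$-deformed statement replaces $(2a)!$ by $[2a]_q!$ throughout, exactly as in part (b) above.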
\subsection{Hilbert Series and the Charney-Davis Quantity} \label{sec:backgroundAlgebra} Let $R$ be an $\mathbb N$-graded $\ensuremath{\mathbb{Z}}$-algebra with the property that for all $d \in \mathbb N$, the degree-$d$ homogeneous component $R_d$ of $R$ is a torsion-free $\ensuremath{\mathbb{Z}}$-module. We can then define the \word{Hilbert function} of $R$ by $h(R, d) \coloneqq \dim_\ensuremath{\mathbb{Z}} R_d$ and the \word{Hilbert series} of $R$ by $H(R,t) \coloneqq \sum_{d \in \mathbb N} h(R,d) t^d$. The Hilbert series of some rings, including those that we will study, are \word{symmetrical}, meaning that there exists an $r \geq 0$ such that $h(R, d) = 0$ for $d > r$, $h(R, r) \neq 0$, and $h(R, d) = h(R, r-d)$ for all $0 \leq d \leq r$. When the Hilbert series of $R$ is a polynomial of degree $r$, we call the number \[ \CD(R) := \begin{cases} (-1)^{r/2} H(R,-1), & r \textrm{ even} \\ H(R,-1), & r \textrm{ odd} \end{cases} \] the \word{Charney-Davis quantity} of $R$. In particular, if $R$ has symmetric Hilbert series of odd degree, then $\CD(R) = 0$. The Charney-Davis quantity was introduced in \cite{cdquant} and is related to a conjecture of Charney and Davis for posets associated to flag simplicial complexes. See \cite{athansurvey} for a more recent framework towards approaching questions stemming from Charney and Davis' original conjecture. For an alternative interpretation of the Charney-Davis quantity in the context of the Chow ring of a matroid, see Remark \ref{rmk:signature}. \subsection{Chow Rings of Matroids} \label{sec:chow} Let $M$ be a finite matroid on ground set $E$; that is, a pair $(E, \mathcal{I})$ where $\emptyset \subsetneq \mathcal{I} \subset 2^E$ is the collection of \word{independent sets of $M$} and satisfies \begin{enumerate} \item $A \in \mathcal{I} \implies 2^A \subset \mathcal{I}$, and \item if $A,B \in \mathcal{I}$ with $\#A > \#B$ then there exists $x \in A \setminus B$ such that $B \cup \{x\} \in \mathcal{I}$. 
\end{enumerate} The \word{rank} of $S \subset E$ is the size of any maximal independent subset of $S$, and the \word{closure} of $S$ is $\cl(S) \coloneqq \setof{x \in E}{\rank(S \cup \{x\}) = \rank(S)}$. We will call $S$ a \word{flat} if $\cl(S) = S$. The flats of $M$, ordered by inclusion, form a geometric lattice $L = L(M)$ called the \word{lattice of flats} of $M$. We will write $\bot$ for the minimal flat of $M$, and $\top$ for the maximal flat of $M$. \begin{defn} The \word{Chow ring} of $M$ on ground set $E$ with lattice of flats $L$ is \[ A(L) \coloneqq A(M) \coloneqq \ensuremath{\mathbb{Z}}[x_F\,\colon F \in L(M) \setminus \{\bot\} ] / (I_1 + I_2) \] where $I_1$ and $I_2$ are the ideals with generators \begin{align*} I_1 &= \idealof{ x_F x_G}{\textrm{$F$ and $G$ are incomparable}} \\ I_2 &= \idealof{ \sum_{i \in F \in L(M)} x_F}{i \in E} \end{align*} \end{defn} Each homogeneous component of a Chow ring is a torsion-free $\ensuremath{\mathbb{Z}}$-module (see Cor. 1 in \cite{fy}), so we may speak of its Hilbert function and Hilbert series as a $\ensuremath{\mathbb{Z}}$-algebra, as defined in Section \ref{sec:backgroundAlgebra}. We now state some results on Chow rings of matroids that we will make use of later in the paper. \subsubsection{Gr\"obner Basis and Hilbert Series} Feichtner and Yuzvinsky found a Gr\"obner basis for this ring and proved the following theorem about its Hilbert series in \cite{fy}. \begin{thm}[\cite{fy} Corollary 2] \label{thm:fyhilbert} The Hilbert series of $A(L)$ is \[H(A(L), t) = 1 + \sum _{\bot = F_{0} < F_{1} < \dots < F_{m}} \prod _{i=1} ^{m} \frac{t(1 - t^{\rank F_{i} - \rank F_{i-1} - 1})}{1 - t}.\] where the sum is taken over all chains of flats $\bot = F_0 < F_1 < \cdots < F_m$ in $L$. In particular, the Hilbert function is given combinatorially as follows. 
\[ \dim A(L)_k = \#\setof{ x_{F_1}^{\alpha_1}\cdots x_{F_\ell}^{\alpha_\ell} }{ { 1\leq \alpha_i\leq {\rm rk}(F_i) - {\rm rk}(F_{i+1}) - 1 , \,\,\, \su \alpha_i = k} } \] where the set on the right ranges over all chains of flats $F_1>\cdots >F_\ell$ in $L(M)$, with the convention $F_{\ell+1} = \bot$. \end{thm} \subsubsection{Poincar\'e duality} Adiprasito, Huh, and Katz show Chow rings of matroids satisfy a form of Poincar\'e duality. \begin{thm}[Poincar\'e duality; cf. \cite{ahk} Theorem 6.19] Let $M$ be a matroid of rank $r$. For $q \leq r-1$, the multiplication map \[ A^q(M)\times A^{r-1-q}(M)\to A^{r-1}(M) \] defines an isomorphism \[ A^{r-1-q}(M)\iso {\rm Hom}_\ensuremath{\mathbb{Z}}(A^q(M),A^{r-1}(M)) \] \label{Poincare1} \end{thm} \begin{rmk} It is an immediate consequence of Corollary 6.11 of \cite{ahk} that $A^{r-1}(M) \iso \ensuremath{\mathbb{Z}}$. Hence, Theorem \ref{Poincare1} implies that $\dim_\ensuremath{\mathbb{Z}} A^{r-1-q}(M) = \dim_\ensuremath{\mathbb{Z}} A^q(M)$. This shows that $A(M)$ has a symmetrical Hilbert series. If we speak of the Hilbert series or Charney-Davis quantity of a matroid $M$, then we are referring to that of its Chow ring $A(M)$. \end{rmk} \begin{rmk} \label{rmk:signature} Since $A^{r-1}(M) \iso \ensuremath{\mathbb{Z}}$, when $r$ is odd, the squaring map $Q: A^{(r-1)/2}(M) \to A^{r-1}(M)$ with $Q(x) = x^2$ defines a quadratic form on $A^{(r-1)/2}(M)$. By Theorem 1.1 of \cite{leungreiner}, the fact that the Hodge-Riemann relations hold for $A(M)$ implies that the signature of this quadratic form is equal to the Charney-Davis quantity of $A(M)$. \end{rmk} \subsection{Permutation Statistics and Polynomials} In this section, we will establish notation for permutation statistics. We will also discuss Eulerian polynomials, which will appear when we examine the Hilbert series of Chow rings, and the tangent-secant numbers, which will appear when we examine the Charney-Davis quantities. Let $\mathfrak{S}_n$ denote the symmetric group on $n$ letters.
\begin{defn} Let $\sigma \in \mathfrak{S}_n$ be a permutation. Then, define the statistics \begin{align*} \inv(\sigma) &= \#\setof{(i,j)}{i<j,\; \sigma(i)>\sigma(j)} \\ \des(\sigma) &= \#\setof{i\in [n-1]}{\sigma(i+1)<\sigma(i)} \\ \exc(\sigma) &= \#\setof{i\in [n]}{\sigma(i)>i} \\ \maj(\sigma) &= \sum_{i,\; \sigma(i)>\sigma(i+1)} i \\ \end{align*} \end{defn} \subsubsection{Eulerian polynomials} The Eulerian polynomials and their $q$-analogs appear in the Hilbert series of the matroids that we study. To motivate the $q$-analogs, we first review the classical Eulerian polynomials. \begin{defn} The Eulerian polynomial $A_n(t)$ is the polynomial \[ A_n(t) = \sum_{\omega\in \mathfrak{S}_n} t^{\exc(\omega)} \] \end{defn} These polynomials have many interesting applications; see \cite{pk} for further exposition. The polynomials $A_n(t)$ satisfy the following identities. \begin{prop}[\cite{pk} Theorem 1.4] \[ \displaystyle A_n(t) = \sum_{k = 0}^{n-1}\binom{n}{k}A_k(t)(t-1)^{n-1-k} \] \end{prop} \begin{prop}[\cite{pk} Theorem 1.6] \label{prop:eulerexp} The exponential generating function of the polynomials $A_n(t)$ is \[ \sum_{n\geq 0}A_n(t)\frac{x^n}{n!} = \frac{t-1}{t-e^{x(t-1)}}. \] \end{prop} \noindent The coefficient of $t^k$ in $A_n(t)$ is an \word{Eulerian number} and is written \[ A(n,k) \coloneqq \genfrac{<}{>}{0pt}{}{n}{k} \coloneqq \#\setof{\sigma\in \mathfrak{S}_n}{\exc(\sigma) = k}.\] \noindent Now, we discuss the $\maj$-$\exc$ $q$-Eulerian polynomials of Shareshian and Wachs.
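As a quick sanity check, the statistics and identities above can be verified by brute force for small $n$. The Python sketch below (ours, illustrative only) computes $A_n(t)$ from the $\exc$ statistic and confirms the recurrence $A_n(t) = \sum_{k=0}^{n-1}\binom{n}{k}A_k(t)(t-1)^{n-1-k}$, which reappears in Section \ref{sec:hilbert}.

```python
from itertools import permutations
from math import comb

def eulerian_poly(n):
    """Coefficient list of A_n(t) = sum over S_n of t^exc(sigma)."""
    coeffs = [0] * max(n, 1)
    for sigma in permutations(range(1, n + 1)):
        exc = sum(1 for i, s in enumerate(sigma, start=1) if s > i)
        coeffs[exc] += 1
    return coeffs

# Minimal polynomial arithmetic on coefficient lists (index = power of t).
def padd(a, b):
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(max(len(a), len(b)))]

def pmul(a, b):
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def ppow(a, m):
    res = [1]
    for _ in range(m):
        res = pmul(res, a)
    return res

def trim(p):
    p = list(p)
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

assert eulerian_poly(4) == [1, 11, 11, 1]

# A_n(t) = sum_{k=0}^{n-1} C(n, k) A_k(t) (t - 1)^{n-1-k}
for n in range(1, 7):
    rhs = [0]
    for k in range(n):
        term = pmul([comb(n, k)], pmul(eulerian_poly(k), ppow([-1, 1], n - 1 - k)))
        rhs = padd(rhs, term)
    assert trim(rhs) == eulerian_poly(n)
```

The same brute-force scheme, weighted by $q^{\maj - \exc}$, computes the $q$-Eulerian polynomials defined next.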
\begin{defn} The $n$th $\maj$-$\exc$ \emph{$q$-Eulerian polynomial} (or merely $q$-Eulerian polynomial) $A_n(q,t)$ is the polynomial \[ A_n(q,t) \coloneqq A_n^{\maj,\exc}(q,tq^{-1}) = \sum_{\sigma\in \mathfrak{S}_n}q^{\maj(\sigma) - \exc(\sigma)}t^{\exc(\sigma)} \] As above, define the $q$-Eulerian number $\genfrac{<}{>}{0pt}{}{n}{j}_q$ to be the coefficient of $t^j$ \[ \genfrac{<}{>}{0pt}{}{n}{j}_q \coloneqq \sum_{\substack{\sigma\in \mathfrak{S}_n \\ \exc(\sigma) = j}} q^{\maj(\sigma) - \exc(\sigma)} = \sum_{\substack{\sigma\in \mathfrak{S}_n \\ \exc(\sigma) = j}} q^{\maj(\sigma) - j} \] \end{defn} \noindent The following theorem gives a $q$-analog of Proposition \ref{prop:eulerexp}. \begin{thm}[\cite{sw}, Thm 1.1] \label{thm:swexp} The $q$-Eulerian polynomials $A_n(q,t)$ are the unique polynomials with $q$-exponential generating function \[ \sum_{n\geq 0} A_n(q,t)\frac{x^n}{[n]_q!} =\frac{(t-1)e_q(x)}{te_q(x) - e_q(tx)} \] where $e_q(x) \coloneqq \sum_{n \geq 0} \frac{x^n}{[n]_q!}$ is the $q$-exponential function. \end{thm} \subsubsection{Tangent-Secant numbers} The tangent-secant numbers and a $q$-analog of them will appear in our investigation of Charney-Davis quantities. \begin{defn} The \emph{$n$-th tangent-secant number} $E_{n}$ is the coefficient of $\frac{x^n}{n!}$ in the exponential generating function \[ \tanh(x)+\sech(x) = \sum_{n\geq 0}E_n\frac{x^n}{n!} \] \end{defn} \begin{rmk} In the literature, the numbers $E_{2n}$ are often referred to as the Euler numbers. To avoid confusion with the Eulerian numbers, we will refrain from using this language. Instead, we call the numbers $E_{2n}$ the \word{secant numbers} and the numbers $E_{2n+1}$ the \word{tangent numbers}. The nomenclature that we use is justified by the observation that, since $\tanh(x)$ is odd and $\sech(x)$ even, \[ \tanh(x) = \sum_{n\geq 0} E_{2n+1}\frac{x^{2n+1}}{(2n+1)!}\text{\quad and\quad}\sech(x) = \sum_{n\geq 0}E_{2n}\frac{x^{2n}}{(2n)!}. 
\] Hence, \[ \tan(x) = \sum_{n\geq 0}(-1)^nE_{2n+1}\frac{x^{2n+1}}{(2n+1)!}\text{\quad and\quad}\sec(x) = \sum_{n\geq 0}(-1)^nE_{2n}\frac{x^{2n}}{(2n)!}. \] \end{rmk} In Section \ref{sec:charneydavis}, we will also prove $q$-analogues of the following. \begin{prop}[\cite{stanley:altPerms}, equation 1.8] \label{prop:sectandet} For all $n$, we have $E_{2n} = (-1)^n(2n)!\Delta_n$ for the following determinant \[ \Delta_n = \det\left( \begin{array}{ccccc} \frac{1}{2!} & 1 & 0 & \cdots & 0\\ \frac{1}{4!} & \frac{1}{2!} & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{1}{(2n-2)!} & \frac{1}{(2n-4)!} & \frac{1}{(2n-6)!} & \cdots& 1\\ \frac{1}{(2n)!} & \frac{1}{(2n-2)!} & \frac{1}{(2n-4)!} & \cdots& \frac{1}{2!} \end{array} \right) \] \end{prop} \begin{prop}[cf. \cite{eulernos}] For all $n$, $\displaystyle E_{2n} = -\sum_{k=0}^{n-1}\binom{2n}{2k}E_{2k}$. \end{prop} To define the \word{$q$-tangent-secant numbers}, let \begin{align*} \sinh_q(t) &\coloneqq \sum_{n\geq 0} \frac{t^{2n+1}}{(q;q)_{2n+1}} & \cosh_q(t) &\coloneqq \sum_{n\geq 0} \frac{t^{2n}}{(q;q)_{2n}} \\ \sech_q(t) &\coloneqq \frac{1}{\cosh_q(t)} & \tanh_q(t) &\coloneqq \frac{\sinh_q(t)}{\cosh_q(t)} \end{align*} where $(t;q)_n = (1-t)(1-tq)\cdots (1-tq^{n-1})$ is the \emph{Pochhammer symbol}. \begin{defn} \label{defn:ts} The \word{$n$-th $q$-tangent-secant number}, $E_{n,q}$, is the coefficient of $\frac{t^n}{(q;q)_n}$ in the generating function \[\sech_q(t) + \tanh_q(t) = \sum_{n\geq 0} E_{n,q}\frac{t^n}{(q;q)_n}.\] \end{defn} Up to signs, the $q$-tangent-secant numbers in Definition \ref{defn:ts} agree with those studied in the work of Foata and Han and of Josuat-Verg{\`e}s in \cite{foata} and \cite{jv}, respectively. \begin{rmk} In the case $q=1$, $E_{n,q} = E_{n}$ is the classical $n$th tangent/secant number.
\end{rmk} \section{Hilbert series of vector space matroids} \label{sec:hilbert} The main results of this section will be Theorem \ref{cor:linearhilbert}, the expression of the Hilbert series in terms of $q$-Eulerian polynomials, and the resulting specialization to the uniform matroid. \subsection{Method for calculating Hilbert series of Chow rings} \label{sec:methods} \let\olddim\dim \renewcommand{\dim}{\ensuremath{\operatorname{\olddim_{\ensuremath{\mathbb{Z}}}}}} { We begin by deriving a useful recurrence for the Hilbert series of the Chow ring of a matroid. The technique we present below makes use of Theorem \ref{thm:fyhilbert} to give a formula for the Hilbert series of any geometric lattice $L$ of rank $r+1$ with the property \begin{equation} \label{eq:UpperBdProp} [Z, \top] \iso [Z', \top]\text{ for all }Z, Z' \in L\text{ with }\rank(Z) = \rank(Z'). \tag{$*$} \end{equation} In the following, we assume that $L$ is such a lattice. \begin{prop} If $L$ is a geometric lattice such that property \eqref{eq:UpperBdProp} holds and $(Z_1, \ldots, Z_r)$ is a sequence of elements of $L$ with $\rank(Z_i) = i$ for all $i$, then \begin{equation} H(A(L),t) = [r+1]_t + t \sum_{i = 2}^r \#L_i\, [i-1]_t \, H( A([Z_i, \top]), t). \label{eq:recurrence} \end{equation} \end{prop} \begin{proof} From Theorem \ref{thm:fyhilbert}, we have \[ \dim A^q(L) = \#\setof{ x_{F_1}^{\alpha_1}\cdots x_{F_\ell}^{\alpha_\ell} }{ { 1\leq \alpha_i\leq {\rm rk}(F_i) - {\rm rk}(F_{i+1}) - 1 , \,\,\, \su \alpha_i = q} } \] where $F_1 > F_2 > \cdots > F_\ell$ ranges over all chains of elements of $L$. For each $2\leq j\leq r+1$, define \[ N_{q,j}\coloneqq \#\setof{ x_{F_1}^{\alpha_1}\cdots x_{F_\ell}^{\alpha_\ell} }{ { 1\leq \alpha_i\leq {\rm rank}(F_i) - {\rm rank}(F_{i+1}) - 1 , \,\,\, \su \alpha_i = q,\,\,\, {\rm rank}(F_1) = j} } \] Then $\dim A^q(L) = \sum_{j=2}^{r+1} N_{q,j}$.
Now for each $2\leq j\leq r$, property \eqref{eq:UpperBdProp} implies \begin{align*} N_{q,j} &= \# L_j\cdot\#\setof{ x_{Z_j}^{\alpha_1}x_{F_2}^{\alpha_2}\cdots x_{F_\ell}^{\alpha_\ell} }{\substack{Z_j = F_1>F_2>\cdots>F_\ell, \\ { 1\leq \alpha_i\leq {\rm rk}(F_i) - {\rm rk}(F_{i+1}) - 1 , \,\,\, \su \alpha_i = q} }}\\ & = \#L_j \cdot \sum_{p=1}^{j-1}\#\setof{ x_{Z_j}^{p}x_{F_2}^{\alpha_2}\cdots x_{F_\ell}^{\alpha_\ell} }{\substack{Z_j = F_1>F_2>\cdots>F_\ell \\ { 1\leq \alpha_i\leq {\rm rk}(F_i) - {\rm rk}(F_{i+1}) - 1 , \,\,\, \sum_{i=2}^\ell \alpha_i = q-p}} }\\ & = \#L_j \cdot \sum_{p=1}^{j-1}\dim A^{q-p}([Z_j,\top]) \end{align*} Meanwhile, $N_{q,r+1} = \#\{x_\top^q\} = 1$. Hence, we have \[\label{ahkrec} \dim A^q(L) = 1 + \sum_{i = 2}^r \#L_i \sum_{p=1}^{i-1} \dim A^{q-p}([Z_i, \top]) .\] This recurrence for the dimension of a homogeneous component can be lifted to a recurrence for the Hilbert series of $A(L)$ in the following manner. Let $(Z_1, \ldots, Z_r)$ be a sequence of elements of $L$ with $\rank(Z_i) = i$ for all $i$. Then \begin{align*} H(A(L),t) &= \sum_{q = 0}^r \dim A^q(L)\; t^q \\ &= \sum_{q = 0}^r \left( 1 + \sum_{i = 2}^r \#L_i \cdot \sum_{p=1}^{i-1} \dim A^{q-p}([Z_i, \top]) \right) t^q \\ &= [r+1]_t + \sum_{i = 2}^r \#L_i \cdot \sum_{p=1}^{i-1} \sum_{q = 0}^r \dim A^{q-p}([Z_i, \top]) \; t^q \end{align*} Since $\dim A^{q-p}([Z_i,\top]) = 0$ when $q-p < 0$ by convention, the innermost sum above really only runs from $q = p$ to $q = r$. Making this change and setting $k = q-p$, we can rewrite the above as \[ [r+1]_t + \sum_{i = 2}^r \#L_i \cdot \sum_{p = 1}^{i-1} t^p \sum_{k = 0}^{r-p} \dim A^k([Z_i, \top]) \; t^k. \] Now, observe that $\rank([Z_i, \top]) = r + 1 - i$ and that $p \leq i-1$, so $r-p \geq r - i + 1$. Hence, $\sum_{k = 0}^{r-p} \dim A^k([Z_i, \top]) t^k = H(A([Z_i, \top]), t)$ for every $p$ and $i$, so we obtain the proposition.
\end{proof} } We will now state the recurrence for the Hilbert series that one gets by applying the recurrence \eqref{eq:recurrence} to matroids of special interest. \subsubsection*{Uniform matroids} Each upper interval of $L(U_{n, r+1})$ is the lattice of flats of a uniform matroid on a smaller ground set and of lower rank. Hence \[ H( A(U_{n, r+1}), t) = [r+1]_t + t \sum_{i = 2}^r \binom{n}{i} [i-1]_t\, H( A(U_{n-i, r+1-i}),t). \] In particular, if we define $A(U_{0,0}) = \ensuremath{\mathbb{Z}}$, then for the case $r = n-1$ we have \begin{align*} H( A(U_{n, n}), t) &= [n]_t + t \sum_{i = 2}^{n-1} \binom{n}{i} [i-1]_t\, H( A(U_{n-i, n-i}),t) \\ &= 1 + t \sum_{i = 1}^{n} \binom{n}{i} [i-1]_t\, H( A(U_{n-i, n-i}),t). \end{align*} \subsubsection*{Subspaces of vector spaces over finite fields} The formula for vector spaces over finite fields is a $q$-analog of the one for the uniform matroid. \[ H\left( A\big(M_{r+1}(\ensuremath{\mathbb{F}}^n_q)\big), t\right) = [r+1]_t + t \sum_{i = 2}^r [i-1]_t \, \genfrac{[}{]}{0pt}{}{n}{i}_q H\left( A\big( M_{r+1-i}( \ensuremath{\mathbb{F}}_q^{n-i}) \big), t\right) \] In particular, if we write $M(\ensuremath{\mathbb{F}}_q^n) = M_n(\ensuremath{\mathbb{F}}_q^n)$ and set $A(M(\ensuremath{\mathbb{F}}_q^0)) = \ensuremath{\mathbb{Z}}$, then similar to the uniform case, for $r = n-1$, \begin{equation} H\left( A\big(M(\ensuremath{\mathbb{F}}^n_q)\big), t\right) = 1 + t \sum_{i = 1}^n [i-1]_t \, \genfrac{[}{]}{0pt}{}{n}{i}_q H\left( A\big( M( \ensuremath{\mathbb{F}}_q^{n-i}) \big), t\right) \label{FqRec} \end{equation} \subsection{Full-rank vector space matroid} \label{sec:subspace} Write $M(\ensuremath{\mathbb{F}}_q^n) = M_{n}(\ensuremath{\mathbb{F}}_q^n)$. The main result of this section is a proof that the Hilbert series of $A\big(M(\ensuremath{\mathbb{F}}_q^n)\big)$ is the $\maj$-$\exc$ $q$-Eulerian polynomial of \cite{sw}. We also find a new recurrence for the $q$-Eulerian polynomials.
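The uniform-matroid recurrence above is easy to iterate by machine, and, consistent with the $q$-analogy described in the introduction, for the Boolean matroid $U_{n,n}$ it reproduces the classical Eulerian polynomial $A_n(t)$. A Python sketch (ours, illustrative only):

```python
from functools import lru_cache
from itertools import permutations
from math import comb

def pmul(a, b):
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def padd(a, b):
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(max(len(a), len(b)))]

@lru_cache(maxsize=None)
def hilbert_uniform(n):
    """H(A(U_{n,n}), t) via h_n = 1 + t * sum_i C(n,i) [i-1]_t h_{n-i}, h_0 = 1."""
    if n == 0:
        return (1,)
    total = [0]
    for i in range(2, n + 1):          # the i = 1 term vanishes since [0]_t = 0
        geom = [1] * (i - 1)           # [i-1]_t = 1 + t + ... + t^{i-2}
        term = pmul([comb(n, i)], pmul(geom, list(hilbert_uniform(n - i))))
        total = padd(total, term)
    return tuple(padd([1], [0] + total))   # 1 + t * (the accumulated sum)

def eulerian_poly(n):
    coeffs = [0] * max(n, 1)
    for sigma in permutations(range(1, n + 1)):
        coeffs[sum(1 for i, s in enumerate(sigma, 1) if s > i)] += 1
    return coeffs

for n in range(1, 7):
    h = list(hilbert_uniform(n))
    while len(h) > 1 and h[-1] == 0:   # drop padding zeros before comparing
        h.pop()
    assert h == eulerian_poly(n)
```

For example, $H(A(U_{4,4}), t) = 1 + 11t + 11t^2 + t^3 = A_4(t)$.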
To characterize the Hilbert series of $A\big( M(\ensuremath{\mathbb{F}}_q^n) \big)$, we compute its $q$-exponential generating function. \begin{lem} Define $h_0 \coloneqq 1$. The $q$-exponential generating function of $h_n(t) \coloneqq H\Big(A\big(M(\ensuremath{\mathbb{F}}_q^n)\big),t\Big)$ is given by \[ F(t,x) \coloneqq \sum_{n\geq 0}h_n(t)\frac{x^n}{[n]_q!} = \frac{(t-1)e_q(x)}{te_q(x) - e_q(tx)} \] where $e_q$ denotes the $q$-exponential function $e_q(x) \coloneqq \sum_{n\geq 0}\frac{x^n}{[n]_q!}$. \label{qegf} \end{lem} \begin{proof} By equation \eqref{FqRec}, we have the relation \[ h_n = 1+t\sum_{i=1}^{n}[i-1]_t\genfrac{[}{]}{0pt}{}{n}{i}_q h_{n-i} \] Then, the generating function $F(t,x)$ satisfies \begin{align*} F(t,x) &= 1+\sum_{n\geq 1}\frac{x^n}{[n]_q!}+t\sum_{n\geq 1}\sum_{i = 1}^n\left( [i-1]_t\genfrac{[}{]}{0pt}{}{n}{i}_qh_{n-i} \right)\frac{x^n}{[n]_q!}\\ & = e_q(x) + t\sum_{n\geq 1}\sum_{i = 1}^n\left( [i-1]_t\frac{x^i}{[i]_q!} \right)\left( h_{n-i}\frac{x^{n-i}}{[n-i]_q!} \right)\\ & = e_q(x)+tF(t,x)G(t,x) \end{align*} for $G(t,x) = \sum_{i\geq 1}[i-1]_t\frac{x^i}{[i]_q!}$. We can rewrite $G(t,x)$ as \begin{align*} G(t,x) &= \frac{1}{t - 1}\sum_{i\geq 1}(t^{i-1} - 1)\frac{x^i}{[i]_q!} = \frac{1}{t - 1}\left(\frac{e_q(tx)-1}{t} - e_q(x) + 1\right) \\ &= \frac{1}{t^2 - t}\Big(e_q(tx) - te_q(x) + t - 1\Big) \end{align*} Substituting into the equation above and solving for $F$, we get \[ F(t,x) = \frac{e_q(x)}{\frac{1}{t-1}\Big( te_q(x) - e_q(tx)\Big)} = \frac{(t-1)e_q(x)}{te_q(x) - e_q(tx)}\qedhere \] \end{proof} \begin{cor} \label{cor:fullRankLinearHilbert} The Hilbert series of $A(M(\ensuremath{\mathbb{F}}_q^n))$ is equal to $A_n(q,t)$. \end{cor} \begin{proof} The $q$-exponential generating function of the Hilbert series $h_n(t) = H(A(M(\ensuremath{\mathbb{F}}_q^n)), t)$ is the same as the one for the $q$-Eulerian polynomials given in Theorem \ref{thm:swexp}. \end{proof} As a corollary, we find an interpretation of the $q$-Eulerian numbers.
\begin{cor} \[ \genfrac{<}{>}{0pt}{}{n}{k}_q = \#\setof{x_{V_{1}}^{\alpha_1} \dots x_{V_{\ell}}^{\alpha_\ell}}{\substack{{V_1\subsetneq \cdots \subsetneq V_\ell\text{ are subspaces of }\ensuremath{\mathbb{F}}_q^n} \\ {1\leq \alpha_i \leq \dim V_i - \dim V_{i-1} - 1,\; \sum_i\alpha_i = k}}}\] \label{qEulerianNos} \end{cor} \begin{proof} By Theorem \ref{thm:fyhilbert} and Corollary \ref{cor:fullRankLinearHilbert}, both quantities count $\dim A(M(\ensuremath{\mathbb{F}}_q^n))_k$. \end{proof} \begin{rmk} \label{rmk:combqhilb} In the notation of Subsection \ref{subsec:lowrank}, Corollary \ref{qEulerianNos} states that \[ \genfrac{<}{>}{0pt}{}{n}{k}_q = \#M_{n,n,k} \] \end{rmk} \begin{rmk} In the course of proving the results above, we discovered the following recurrence for the $q$-Eulerian polynomials. \end{rmk} \begin{prop} Let $h_n(t) = H(A(M(\mathbb F_q^n)),t)$ denote the Hilbert series of $A(M(\mathbb F_q^n))$, and let $(a;q)_n \coloneqq (1-a)(1-aq)\cdots (1-aq^{n-1})$ be the Pochhammer symbol. Then $h_n$ satisfies the recurrence \begin{align} h_n(t) &= \sum_{k=0}^{n-1} \genfrac{[}{]}{0pt}{}{n}{k}_{q} h_k(t) \prod_{i=1}^{n-1-k}(t-q^i) \label{rec} \\ &= \sum_{k=0}^{n-1} \genfrac{[}{]}{0pt}{}{n}{k}_{q} t^{n-1-k}\cdot h_k(t)\cdot (q/t;q)_{n-1-k}. \nonumber \end{align} \label{qeulerianrec} \end{prop} To the authors' knowledge, the recurrence in Proposition \ref{qeulerianrec} does not yet appear in the literature, and it provides a $q$-analogue for the following well-known recurrence for the Eulerian polynomials \[ A_n(t) = \sum_{k=0}^{n-1} \binom{n}{k}A_k(t)(t-1)^{n-1-k}. \] For a proof of Proposition \ref{qeulerianrec}, see our REU report \cite{report}. \subsection{Lower rank vector space matroids} \label{subsec:lowrank} Next, we find an explicit form for the Hilbert series of lower rank vector space matroids $M_r(\ensuremath{\mathbb{F}}_q^n)$ with $r < n$. The main result of this section is Theorem \ref{cor:linearhilbert}.
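Before turning to the lower-rank case, we note that the identities of the previous subsection can be machine-checked in small cases: iterating the recurrence \eqref{FqRec} with exact bivariate polynomial arithmetic reproduces the permutation sum defining $A_n(q,t)$, as Corollary \ref{cor:fullRankLinearHilbert} predicts. A Python sketch (ours, illustrative only):

```python
from itertools import permutations

# Bivariate polynomials in (q, t), stored as dicts {(deg_q, deg_t): coeff}.
def badd(p1, p2):
    out = dict(p1)
    for k, c in p2.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

def bmul(p1, p2):
    out = {}
    for (a1, b1), c1 in p1.items():
        for (a2, b2), c2 in p2.items():
            key = (a1 + a2, b1 + b2)
            out[key] = out.get(key, 0) + c1 * c2
    return out

def gauss(n, k):
    """Gaussian binomial [n choose k]_q via the q-Pascal rule, as a poly in q."""
    if k < 0 or k > n:
        return {}
    if k == 0 or k == n:
        return {(0, 0): 1}
    return badd(gauss(n - 1, k - 1), bmul({(k, 0): 1}, gauss(n - 1, k)))

def q_eulerian(n):
    """A_n(q, t) = sum over S_n of q^(maj - exc) t^exc, by brute force."""
    out = {}
    for s in permutations(range(1, n + 1)):
        maj = sum(i + 1 for i in range(n - 1) if s[i] > s[i + 1])
        exc = sum(1 for i in range(n) if s[i] > i + 1)
        key = (maj - exc, exc)
        out[key] = out.get(key, 0) + 1
    return out

def hilbert_full_rank(n):
    """h_n via the recurrence h_n = 1 + t * sum_i [i-1]_t [n choose i]_q h_{n-i}."""
    if n == 0:
        return {(0, 0): 1}
    total = {}
    for i in range(2, n + 1):  # the i = 1 term vanishes since [0]_t = 0
        geom_t = {(0, b): 1 for b in range(i - 1)}  # [i-1]_t
        term = bmul(gauss(n, i), bmul(geom_t, hilbert_full_rank(n - i)))
        total = badd(total, term)
    shifted = {(a, b + 1): c for (a, b), c in total.items()}  # multiply by t
    return badd({(0, 0): 1}, shifted)

for n in range(6):
    assert hilbert_full_rank(n) == q_eulerian(n)
```

For instance, both sides give $A_3(q,t) = 1 + (2 + q + q^2)t + t^2$.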
We will first give a brief overview of our methodology and set up some notation. We study the Hilbert series of $A\big(M_r(\ensuremath{\mathbb{F}}_q^n)\big)$ by descending induction on the rank $r$; in particular, we consider the differences $\Delta_{n,r,q}(t) \coloneqq H\Big( A\big(M_{r+1}(\ensuremath{\mathbb{F}}_q^n)\big), t \Big) - H\Big( A\big(M_{r}(\ensuremath{\mathbb{F}}_q^n)\big), t \Big)$ for $1\leq r\leq n-1$. Write \[ \Delta_{n,r,q}(t) = a_{n,r,q}^{(r)}t^r + a_{n,r,q}^{(r-1)}t^{r-1} + \cdots +a_{n,r,q}^{(0)} \] for $a_{n,r,q}^{(k)}\in \ensuremath{\mathbb{Z}}$. We will show that $a_{n,r,q}^{(k)}$ is a $q$-analogue of the number \[ \#\setof{\sigma\in F_{n,n-r}}{\exc(\sigma) = r-k}, \] where $F_{n,n-r} \coloneqq \setof{ \sigma \in \mathfrak{S}_n}{\#\fix(\sigma) \geq n-r}$. In particular, we will express \[ a_{n,r,q}^{(k)} = \sum_{i = 0}^r \genfrac{[}{]}{0pt}{}{n}{i}_{q} D_{i,k-r+i,q} = \sum_{i = 0}^r \genfrac{[}{]}{0pt}{}{n}{r-i}_{q} D_{r-i,k-i,q} \] where $\mathcal{D}_{n} \subseteq \mathfrak{S}_{n}$ is the set of derangements, and $D_{n,k,q}$ is a $q$-analogue of the number \[\#\setof{\sigma\in \mathcal D_n}{\exc(\sigma) = n-k}.\] Define \begin{align*} N_{n,r}&\coloneqq N_{n,r}(q) \coloneqq \setof{x_{\top}^{\alpha_0}x_{V_1}^{\alpha_1}\cdots x_{V_\ell}^{\alpha_\ell}}{\substack{{\ensuremath{\mathbb{F}}_q^n \supsetneq V_1\supsetneq \cdots\supsetneq V_\ell\text{ are subspaces of }\ensuremath{\mathbb{F}}_q^n\text{ of dimension }\leq r,}\\{\alpha_0\leq r-{\rm dim}(V_1)\text{ and } 1\leq \alpha_i\leq {\rm dim}(V_i) - {\rm dim}(V_{i+1}) - 1}}} \\ M_{n,r,k}&\coloneqq M_{n,r,k}(q) \coloneqq \setof{x_{\top}^{\alpha_0}x_{V_1}^{\alpha_1}\cdots x_{V_\ell}^{\alpha_\ell}\in N_{n,r}}{\deg x_{\top}^{\alpha_0}x_{V_1}^{\alpha_1}\cdots x_{V_\ell}^{\alpha_\ell} = k} \\ T_{n,k,q} &\coloneqq \setof{x_{\top}^{\alpha_0}x_{V_1}^{\alpha_1}\cdots x_{V_\ell}^{\alpha_\ell} \in M_{n,n,k}}{\alpha_0\geq 1} \\ D_{n,k,q}&\coloneqq \#T_{n,k,q}.
\end{align*} For notational convenience, we suppress the dependence on $q$ in $N_{n,r}(q)$ and $M_{n,r,k}(q)$. By Theorem \ref{thm:fyhilbert}, $\dim \big(A(M_r(\ensuremath{\mathbb{F}}_q^n)) \big)_k = \#M_{n,r,k}$. Note that we have inclusions $M_{n,r,k}\subseteq M_{n,r+1,k}$, and the complement of $M_{n,r,k}$ in $M_{n,r+1,k}$ is the set \[ M_{n,r+1,k} \setminus M_{n,r,k} = \setof{x_{\top}^i x_{V_1}^{\alpha_1}\cdots x_{V_\ell}^{\alpha_\ell} \in M_{n,r+1,k}}{0\leq i\leq r,\; \; {\rm dim}(V_1) = r - i}. \] Identifying $V_1 \cong \ensuremath{\mathbb{F}}_q^{r-i}$ we obtain, for each fixed $0\leq i\leq r$, a bijection \begin{align*} \setof{x_{\top}^i x_{V_1}^{\alpha_1}\cdots x_{V_\ell}^{\alpha_\ell} \in M_{n,r+1,k}\setminus M_{n,r,k}}{{\rm dim}(V_1) = r - i} &\to \setof{V_1\subsetneq \ensuremath{\mathbb{F}}_q^n}{{\rm dim}(V_1) = r-i}\times T_{r-i,k-i,q} \\ x_{\top}^i x_{V_1}^{\alpha_1}\cdots x_{V_\ell}^{\alpha_\ell} &\mapsto (V_1, x_{V_1}^{\alpha_1}\cdots x_{V_\ell}^{\alpha_\ell}). \end{align*} Hence, summing over possible values of the exponent $i$ of $x_\top$ gives \begin{equation} \#(M_{n,r+1,k} \setminus M_{n,r,k}) = \sum_{i = 0}^r \genfrac{[}{]}{0pt}{}{n}{r-i}_q D_{r-i,k-i,q}. \label{hilbseriesK} \end{equation} We will now give a combinatorial description of $D_{n,k,q}$ in terms of elementary statistics on $\mathfrak S_n$. To do so, we establish some notation. For $\sigma\in \mathfrak{S}_A$, where $A = \{a_1<\cdots <a_k\}$ is a finite ordered set, let the \emph{reduction} of $\sigma$ be the permutation $\overline{\sigma}$ in $\mathfrak{S}_k$ such that $\sigma(a_i) = a_{\overline{\sigma}(i)}$. For $\sigma\in \mathfrak{S}_n$, its \emph{derangement part} ${\rm dp}(\sigma)$ is the reduction of $\sigma$ along its nonfixed points. The following lemma of Wachs will be essential. 
\begin{lem}[\cite{wac} Corollary 3] For all $\gamma\in \mathcal D_k$ and $n\geq k$, \[ \sum_{\substack{{\rm dp}(\sigma) = \gamma \\ \sigma\in \mathfrak{S}_n}}q^{\maj(\sigma)} = q^{\maj(\gamma)}\genfrac{[}{]}{0pt}{}{n}{k}_q. \] \label{wachsLemma} \end{lem} From this lemma, another useful identity follows. \begin{cor} For all integers $n\geq i\geq 0$ and $k\geq 0$, \[ \sum_{\substack{\sigma\in \mathcal D_{n-i} \\ \exc(\sigma) = k}}q^{\maj(\sigma) - \exc(\sigma)}\genfrac{[}{]}{0pt}{}{n}{n-i}_q = \sum_{\substack{\sigma\in \mathfrak{S}_n\\ \exc(\sigma) = k \\ \#\fix(\sigma) = i}}q^{\maj(\sigma) - \exc(\sigma)}. \] \label{waccor} \end{cor} \begin{proof} From Lemma \ref{wachsLemma}, we have the identity \[ \sum_{\substack{\gamma\in \mathcal D_{n-i} \\ \exc(\gamma) = k}}q^{\maj(\gamma) - \exc(\gamma)}\genfrac{[}{]}{0pt}{}{n}{n-i}_q = \sum_{\substack{\gamma\in \mathcal D_{n-i} \\ \exc(\gamma) = k}}q^{-\exc(\gamma)}\sum_{\substack{\sigma\in \mathfrak{S}_n \\ {\rm dp}(\sigma) = \gamma}}q^{\maj(\sigma)} = \sum_{\substack{\sigma\in \mathfrak{S}_n \\ \exc(\sigma) = k \\ \#\fix(\sigma) = i}}q^{\maj(\sigma) - \exc(\sigma)}.\qedhere \] \end{proof} We now make use of this identity to give a combinatorial interpretation to both $D_{n,k,q}$ and $a_{n,r,q}^{(k)}$. \begin{lem} For $D_{n,k,q}$ as above, \[ D_{n,k,q} = \sum_{\substack{\sigma\in \mathcal D_n \\ \exc(\sigma) = n-k}}q^{\maj(\sigma) - \exc(\sigma)}. \] \label{qDerangements} \end{lem} \begin{proof} We proceed by induction on $k$. For $k = 0$, both sides vanish: the set $T_{n,0,q}$ is empty since $\alpha_0\geq 1$ forces positive degree, and no derangement of $[n]$ has $n$ excedances. For $k>0$, set \begin{align*} S_{\alpha_0} &\coloneqq \setof{x_\top^{\alpha_0}x_{V_1}^{\alpha_1}\cdots x_{V_\ell}^{\alpha_\ell} \in M_{n,n,k-1}}{{\rm dim}(V_1) = n-\alpha_0-1} \\ S &\coloneqq M_{n,n,k - 1}. \end{align*} Then, the map on monomials taking $x_{\top}^{\alpha_0}x_{V_1}^{\alpha_1}\cdots x_{V_\ell}^{\alpha_\ell}\mapsto x_{\top}^{\alpha_0-1}x_{V_1}^{\alpha_1}\cdots x_{V_\ell}^{\alpha_\ell}$ gives an injective map \[ \varphi\colon T_{n,k,q}\to S. 
\] Moreover, $S$ is the disjoint union $S = {\rm Im}(\varphi)\sqcup \coprod_{a\geq 0}S_a$. Considering the choice of the second largest subspace, \[ \#S_a = \genfrac{[}{]}{0pt}{}{n}{n-a-1}_q D_{n-a-1,k-a-1,q}, \] while from Remark \ref{rmk:combqhilb}, \[ \#S = \genfrac{<}{>}{0pt}{}{n}{k-1}_q = \genfrac{<}{>}{0pt}{}{n}{n-k}_q, \] where the latter equality follows from Poincar\'e duality for $A\big(M(\ensuremath{\mathbb{F}}_q^n)\big)$. Therefore, by induction, \begin{align} D_{n,k,q} &= \#T_{n,k,q} = \#S - \sum_{a\geq 0}\#S_a = \genfrac{<}{>}{0pt}{}{n}{n-k}_q - \sum_{b\geq 1} \genfrac{[}{]}{0pt}{}{n}{n-b}_{q} D_{n-b,k-b,q} \nonumber\\ & =\sum_{\substack{\sigma\in \mathfrak{S}_n \\ \exc(\sigma) = n-k}}q^{\maj(\sigma) - \exc(\sigma)} - \sum_{b\geq 1}\sum_{\substack{\gamma\in \mathcal D_{n-b} \\ \exc(\gamma) = n-k}}q^{\maj(\gamma) - \exc(\gamma)}\genfrac{[}{]}{0pt}{}{n}{n-b}_q. \label{inductioneqn} \end{align} Then applying Corollary \ref{waccor}, the right-hand side of equation \eqref{inductioneqn} can be expanded as \begin{align*} \sum_{\substack{\sigma\in \mathfrak{S}_n \\ \exc(\sigma) = n-k}}q^{\maj(\sigma) - \exc(\sigma)} - \sum_{b\geq 1}\sum_{\substack{\sigma\in \mathfrak{S}_n \\ \exc(\sigma) = n-k \\ \#\fix(\sigma) = b}}q^{\maj(\sigma) - \exc(\sigma)}=\sum_{\substack{\sigma\in \mathcal D_n \\ \exc(\sigma) = n-k}}q^{\maj(\sigma) - \exc(\sigma)}, \end{align*} completing the induction and the proof of the lemma. \end{proof} \begin{lem} Let $F_{n,k}$ denote the set $F_{n,k} = \setof{\sigma\in \mathfrak{S}_n}{\#\fix(\sigma)\geq k}$. 
The difference of Hilbert series $\Delta_{n,r,q}(t)$ is given by \[ \Delta_{n,r,q}(t) = H\Big( A\big(M_{r+1}(\ensuremath{\mathbb{F}}_q^n)\big), t \Big) - H\Big( A\big(M_{r}(\ensuremath{\mathbb{F}}_q^n)\big), t \Big) = \sum_{\sigma\in F_{n,n-r}}t^{r-\exc(\sigma)}q^{\maj(\sigma) - \exc(\sigma)}. \] In particular, the coefficients $a_{n,r,q}^{(k)}$ satisfy \begin{equation} a_{n,r,q}^{(k)} = \sum_{\substack{\sigma\in F_{n,n-r} \\ \exc(\sigma) = r-k}} q^{\maj(\sigma) - \exc(\sigma)}.\label{qRankHilbFxn} \end{equation} \label{qkernel} \end{lem} \begin{proof} Applying Lemma \ref{qDerangements} and Corollary \ref{waccor} to equation \eqref{hilbseriesK} gives \begin{align*} a_{n,r,q}^{(k)} &= \sum_{i=0}^r \genfrac{[}{]}{0pt}{}{n}{r-i}_qD_{r-i,k-i,q} = \sum_{i = 0}^r \genfrac{[}{]}{0pt}{}{n}{r-i}_q \sum_{\substack{\sigma\in \mathcal D_{r-i} \\ \exc(\sigma) = r-k}} q^{\maj(\sigma) - \exc(\sigma)} \\ &= \sum_{i=0}^r\sum_{\substack{\sigma\in \mathfrak{S}_n \\ \#\fix(\sigma) = n-r+i \\ \exc(\sigma) = r-k}} q^{\maj(\sigma) - \exc(\sigma)} \\ &= \sum_{\substack{\sigma\in F_{n,n-r} \\ \exc(\sigma) = r-k}}q^{\maj(\sigma) - \exc(\sigma)}.\qedhere \end{align*} \end{proof} These two lemmas yield the main result. \begin{proof}[Proof of Theorem \ref{cor:linearhilbert}] Equation \eqref{qRankHilb} follows from a direct substitution of \eqref{qRankHilbFxn} into the formula \begin{align*} H\big(A(M_r(\ensuremath{\mathbb{F}}_q^n)),t \big) &= H\big(A(M_{r+1}(\ensuremath{\mathbb{F}}_q^n)),t\big)-\Delta_{n,r,q}(t) \\ &= \cdots = H\big( A( M(\ensuremath{\mathbb{F}}_q^n)),t\big) - \sum_{j = r}^{n-1} \Delta_{n,j,q}(t). \qedhere \end{align*} \end{proof} When $r = n-1$, the Hilbert series assumes a more pleasing form. 
\begin{cor} If $r = n-1$, the Hilbert series of $A\big( M_{n-1}(\ensuremath{\mathbb{F}}_q^n) \big)$ is \[ H\Big( A\big( M_{n-1}(\ensuremath{\mathbb{F}}_q^n) \big), t\Big) = \sum_{\sigma\in \mathcal D_n}q^{\maj(\sigma) - \exc(\sigma)}t^{\exc(\sigma)-1}. \] \end{cor} \begin{proof} For the case $r = n-1$, the coefficient of $t^k$ in \eqref{qRankHilb} can be simplified as follows: \begin{align*} \sum_{\substack{\sigma\in \mathfrak{S}_n \\ \exc(\sigma) = k}}q^{\maj(\sigma) - \exc(\sigma)} &- \sum_{\substack{\sigma\in F_{n,1} \\ \exc(\sigma) = n-k-1}}q^{\maj(\sigma) - \exc(\sigma)} \\ = \sum_{\substack{\sigma\in \mathfrak{S}_n \\ \exc(\sigma) = n-k-1}}q^{\maj(\sigma) - \exc(\sigma)} &- \sum_{\substack{\sigma\in F_{n,1} \\ \exc(\sigma) = n-k-1}}q^{\maj(\sigma) - \exc(\sigma)} \\ = \sum_{\substack{\sigma\in \mathcal D_n \\ \exc(\sigma) = n-k-1}}q^{\maj(\sigma) - \exc(\sigma)}, & \end{align*} where the first equality uses Poincar\'e duality for $A\big(M(\ensuremath{\mathbb{F}}_q^n)\big)$. Then, \[ H\Big( A\big( M_{n-1}(\ensuremath{\mathbb{F}}_q^n) \big), t\Big) = \sum_{\sigma\in \mathcal D_n}q^{\maj(\sigma) - \exc(\sigma)}t^{n-1-\exc(\sigma)} = \sum_{\sigma\in \mathcal D_n}q^{\maj(\sigma) - \exc(\sigma)}t^{\exc(\sigma)-1}, \] where the last equality follows from Poincar\'e duality of $A(M_{n-1}(\ensuremath{\mathbb{F}}_q^{n}))$. \end{proof} \begin{rmk} The proof presented above can be reformulated in terms of strong maps of Chow rings. Namely, consider the graded, surjective ring homomorphisms \[ \pi_{n,r,q}\colon A\big(M_{r+1}(\ensuremath{\mathbb{F}}_q^n)\big)\to A\big(M_r(\ensuremath{\mathbb{F}}_q^n)\big) \] defined by taking variables $x_{V}\in A\big(M_{r+1}(\ensuremath{\mathbb{F}}_q^n)\big)$ to zero if $\dim(V) = r+1$ and to the corresponding variable $x_V\in A\big(M_r(\ensuremath{\mathbb{F}}_q^n)\big)$ otherwise. 
Then, if $K_{n,r,q} = \ker(\pi_{n,r,q})$, additivity of Hilbert series gives \[ H(K_{n,r,q},t) = H\Big( A\big(M_{r+1}(\ensuremath{\mathbb{F}}_q^n)\big), t \Big) - H\Big( A\big(M_{r}(\ensuremath{\mathbb{F}}_q^n)\big), t \Big) = \Delta_{n,r,q}(t). \] Therefore, Lemma \ref{qkernel} gives a formula for the Hilbert series of the kernels of the above so-called ``strong maps'' of Chow rings. \end{rmk} \begin{rmk} Note that the characterization of the Hilbert series of $A(M_r(\ensuremath{\mathbb{F}}_q^n))$ for $r = n-1,n$, together with the results of \cite{ahk}, gives an alternate proof of the unimodality and symmetry of the polynomials \[ \sum_{\sigma\in \mathfrak S_n}q^{\maj(\sigma) - \exc(\sigma)}t^{\exc(\sigma)}\text{ and }\sum_{\sigma\in \mathcal D_n}q^{\maj(\sigma) - \exc(\sigma)}t^{\exc(\sigma)-1}. \] However, it should be noted that in \cite{qgamma}, Shareshian and Wachs prove more general statements. Namely, they prove that the coefficients of the above polynomials are $q$-unimodal and, in fact, $q$-$\gamma$-nonnegative. That is, each difference of consecutive coefficients, taken towards the center of the polynomial, lies in $\ensuremath{\mathbb{N}}[q]$ as a polynomial in $q$, and moreover, the $\gamma$-vector has coordinates in $\ensuremath{\mathbb{N}}[q]$. See Theorems 4.4 and 6.1 of \cite{qgamma} for more explicit formulae and a proof. \end{rmk} \section{Charney-Davis quantities of vector space matroids} \label{sec:charneydavis} The main result of this section is a proof of Theorem \ref{thm:linearcd}, which gives two formulas for the Charney-Davis quantity of $A\big(M_r(\mathbb F_q^n)\big)$: one in terms of determinants and one in terms of $q$-tangent-secant numbers. We prove the determinantal formula immediately, and we will prove the formula in terms of $q$-tangent-secant numbers later in the section. \begin{proof}[Proof of Theorem \ref{thm:linearcd} (b)] If $r = 1$, then $H\Big(A\big( M_r(\ensuremath{\mathbb{F}}_q^n) \big),t\Big) = 1$, and the theorem holds trivially. 
Now suppose that $r>1$ is odd, and let $\CD(n,r) = H\Big(A\big(M_r(\ensuremath{\mathbb{F}}_q^n)\big),-1\Big)$ be the unsigned Charney-Davis quantity of $A\big(M_r(\ensuremath{\mathbb{F}}_q^n)\big)$. Substituting $t = -1$ into the formula for the Hilbert series from \cite{fy} (Theorem \ref{thm:fyhilbert}) gives \[ \CD(n,r) = 1+\sum_{\substack{{\ensuremath{\mathbf{r}},\, r_k<r}\\{\forall i, r_i - r_{i-1}\text{ is even}}}}(-1)^{|\ensuremath{\mathbf{r}}|}\prod_{i = 1}^{|\ensuremath{\mathbf{r}}|} \genfrac{[}{]}{0pt}{}{n-r_{i-1}}{r_i - r_{i-1}}_{q}, \] where $|\ensuremath{\mathbf{r}}|$ is the number of entries in the tuple $\ensuremath{\mathbf{r}}$; at $t=-1$ the factor $t(1-t^{r_i-r_{i-1}-1})/(1-t)$ vanishes when $r_i-r_{i-1}$ is odd and equals $-1$ when it is even. Breaking into cases based on whether $\ensuremath{\mathbf{r}} = (r_1<\cdots<r_k)$ has $r_k = r-1$, we get a decomposition of the above as \[ \left\{1+\sum_{\substack{{\ensuremath{\mathbf{r}},\, {r_k<r-2}}\\{\forall i, r_i - r_{i-1}\text{ is even}}}}(-1)^{|\ensuremath{\mathbf{r}}|}\prod_{i = 1}^{|\ensuremath{\mathbf{r}}|} \genfrac{[}{]}{0pt}{}{n-r_{i-1}}{r_i - r_{i-1}}_q\right\} + \left\{\sum_{\substack{{\ensuremath{\mathbf{r}},\, {r_k=r-1}}\\{\forall i, r_i - r_{i-1}\text{ is even}}}}(-1)^{|\ensuremath{\mathbf{r}}|}\prod_{i = 1}^{|\ensuremath{\mathbf{r}}|} \genfrac{[}{]}{0pt}{}{n-r_{i-1}}{r_i - r_{i-1}}_{q}\right\} \] where the former term is $\CD(n,r-2)$ and the latter we denote by $T_{n,q}(r-1)$. 
Then, considering the terms in the sum with $r_{k-1} = 2b$, one obtains the recurrence \[ T_{n,q}(2a) = -\sum_{b=0}^{a-1} \genfrac{[}{]}{0pt}{}{n-2b}{2a-2b}_qT_{n,q}(2b), \;\;\text{with initial condition }\;\;T_{n,q}(0) = 1. \] Solving this linear recurrence with Cramer's rule gives \begin{equation} \label{Tform} T_{n,q}(2a) = (-1)^a \det\left( \begin{array}{ccccc} \genfrac{[}{]}{0pt}{}{n}{2}_q & 1 & 0 & \cdots & 0\\ \genfrac{[}{]}{0pt}{}{n}{4}_q & \genfrac{[}{]}{0pt}{}{n-2}{2}_q & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \genfrac{[}{]}{0pt}{}{n}{2a-2}_q & \genfrac{[}{]}{0pt}{}{n-2}{2a-4}_q & \genfrac{[}{]}{0pt}{}{n-4}{2a-6}_q& \cdots& 1\\ \genfrac{[}{]}{0pt}{}{n}{2a}_q & \genfrac{[}{]}{0pt}{}{n-2}{2a-2}_q & \genfrac{[}{]}{0pt}{}{n-4}{2a-4}_q& \cdots& \genfrac{[}{]}{0pt}{}{n-2a+2}{2}_q \end{array} \right). \end{equation} Rewriting the determinant in \eqref{Tform} by pulling out the common factors in the numerators of each column and the denominators of each row gives \begin{align*} T_{n,q}(2a) &= (-1)^a\frac{[n]_q!}{[n-2a]_q!}\det\left( \begin{array}{ccccc} \frac{1}{[2]_q!} & 1 & 0 & \cdots & 0\\ \frac{1}{[4]_q!} & \frac{1}{[2]_q!} & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{1}{[2a-2]_q!} & \frac{1}{[2a-4]_q!} & \frac{1}{[2a-6]_q!} & \cdots& 1\\ \frac{1}{[2a]_q!} & \frac{1}{[2a-2]_q!} & \frac{1}{[2a-4]_q!} & \cdots& \frac{1}{[2]_q!} \end{array} \right) \\ &= (-1)^a\frac{[n]_q!}{[n-2a]_q!}\Delta_{a,q}. \end{align*} Then, the unsigned Charney-Davis quantity for odd $r$ is \begin{align*} \CD(n,r) &= \CD(n,r-2)+T_{n,q}(r-1) = \cdots = \CD(n,1)+\sum_{a = 1}^{\frac{r-1}{2}} T_{n,q}(2a)\\ & = 1+[n]_q!\sum_{a = 1}^{\frac{r-1}{2}} \frac{(-1)^a}{[n-2a]_q!}\Delta_{a,q}. \end{align*} The result then follows after multiplying by the appropriate sign. 
\end{proof} \begin{ex} For the case $n = r = 5$, Theorem \ref{thm:linearcd} becomes the following identity, \[ q^8+2q^7+3q^6+4q^5+3q^4+2q^3+q^2 = 1+[5]_q!\left[ -\frac{1}{[3]_q!}\det\left(\frac{1}{[2]_q!}\right) +\det\left(\begin{array}{cc} \frac{1}{[2]_q!} & 1 \\ \frac{1}{[4]_q!} & \frac{1}{[2]_q!} \end{array}\right) \right] \] which one can directly verify. \end{ex} \begin{rmk} \label{rmk:evencd} For even $r$, Theorem 6.19 of \cite{ahk} implies that the Hilbert series of $A(M_r(\ensuremath{\mathbb{F}}_q^n))$ is symmetric of odd degree $r-1$. Consequently, $H\big( A(M_r(\ensuremath{\mathbb{F}}_q^n)),-1 \big) = 0$ and the Charney-Davis quantity vanishes. \end{rmk} Having the determinantal formula above, we now work towards a more compact formula using the $q$-tangent/secant numbers. \begin{prop} Let $E_{n,q}$ denote the $n$-th $q$-tangent/secant number. The following identities hold: \[ E_{2n,q} = (-1)^n [2n]_q! \Delta_{n,q}, \] \[ E_{2n+1, q} = \CD(2n+1, 2n+1) = 1 + [2n+1]_q! \sum_{a = 1}^n \frac{(-1)^a}{[2n-2a + 1]_q!} \Delta_{a,q}. \] \label{prop:qSecTan} \end{prop} \begin{proof} Let \[ \mathcal E_{2n,q} \coloneqq (-1)^n [2n]_q! \Delta_{n,q}, \] \[ \mathcal E_{2n+1, q} \coloneqq \CD(2n+1, 2n+1) = 1 + [2n+1]_q! \sum_{a = 1}^n \frac{(-1)^a}{[2n-2a + 1]_q!} \Delta_{a,q}. \] Consider the generating functions \[ F(t) = \sum_{n\geq 0}\mathcal E_{2n,q}\frac{t^{2n}}{(q;q)_{2n}} \;\;\;\;\; \text{ and } \;\;\;\;\; G(t) = \sum_{n\geq 0}\mathcal E_{2n+1,q}\frac{t^{2n+1}}{(q;q)_{2n+1}}. \] It suffices to show $F(t) = \sech_q(t)$ and $G(t) = \tanh_q(t)$. 
Observe that by expanding by minors in the first column, $\Delta_{n,q}$ satisfies the recurrence \[ \Delta_{n,q} = \sum_{k = 1}^n\frac{(-1)^{k+1}}{[2k]_q!}\Delta_{n-k,q}. \] Then since $(q;q)_{2n} = (1-q)^{2n}[2n]_q!$, \begin{align*} F(t) &= \sum_{n\geq 0}(-1)^n\left( \frac{t}{1-q} \right)^{2n}\Delta_{n,q} = 1+\sum_{n\geq 1}(-1)^n\left( \frac{t}{1-q} \right)^{2n}\sum_{k=1}^n\frac{(-1)^{k+1}}{[2k]_q!}\Delta_{n-k,q}\\ & = 1+\sum_{r \geq 0} \sum_{k\geq 1}(-1)^{r+1}\Delta_{r,q}\frac{1}{[2k]_q!}\left( \frac{t}{1-q} \right)^{2(r+k)}\\ &= 1+\left(\sum_{k\geq 1}\frac{1}{[2k]_q!}\left( \frac{t}{1-q} \right)^{2k}\right)\left( \sum_{r\geq 0}(-1)^{r+1}\Delta_{r,q} \left( \frac{t}{1-q} \right)^{2r} \right) \\ &= 1-\left(\sum_{k\geq 1}\frac{t^{2k}}{(q;q)_{2k}}\right)F(t)= 1-(\cosh_q(t)-1)F(t). \end{align*} Therefore, solving for $F(t)$ gives \[ F(t) = 1/\cosh_q(t) = \sech_q(t). \] Since $F(t) = \sech_q(t)$ as power series in $\ensuremath{\mathbb{Q}}(q)[\![t]\!]$, it follows that $\mathcal E_{2n,q} = E_{2n,q}$. Now consider $G(t)$. Set $\Delta_{0,q} = 1$. We have \begin{align*} G(t) &= \sum_{n\geq 0}\left( [2n+1]_q! \sum_{a = 0}^n \frac{(-1)^a}{[2n-2a + 1]_q!} \Delta_{a,q} \right)\frac{t^{2n+1}}{(q;q)_{2n+1}} \\ &= \sum_{n\geq 0} \sum_{a = 0}^n \frac{(-1)^a\Delta_{a,q}}{[2n-2a + 1]_q!} \left(\frac{t}{1-q}\right)^{2n+1} \\ &= \sum_{k\geq 0}\sum_{a\geq 0} \frac{(-1)^a\Delta_{a,q}}{[2k + 1]_q!} \left(\frac{t}{1-q}\right)^{2(a+k)+1} \\ &= \left(\sum_{k\geq 0} \frac{t^{2k+1}}{(q;q)_{2k+1}}\right)\left( \sum_{a\geq 0} (-1)^a\Delta_{a,q}\left(\frac{t}{1-q}\right)^{2a} \right) \\ &= \sinh_q(t)\sech_q(t) = \tanh_q(t).\qedhere \end{align*} \end{proof} \begin{rmk} With notation as in the proof above, equation (2.6) of \cite{stanley:altPerms} immediately implies that $\mathcal E_{2n,q} = E_{2n,q}$. See equation (2.7) of the same article for a determinantal formula for $E_{2n+1,q}$ and other formulae. \end{rmk} \begin{rmk} Proposition \ref{prop:qSecTan} implies that the numbers $E_{n,q}$ are the $q$-secant and $q$-tangent numbers studied in \cite{foata} and \cite{jv}. 
In particular, we have \[ E_{n,q} = \sum_{\sigma\in \mathfrak I_n}q^{\maj(\sigma)-\exc(\sigma)}, \] where $\mathfrak{I}_n$ denotes the set of alternating permutations of size $n$. \end{rmk} Theorem \ref{thm:linearcd}(a) now follows from Theorem \ref{thm:linearcd}(b) and Proposition \ref{prop:qSecTan}. \section{Invariants of uniform matroids} \label{sec:uniform} Recall that the uniform matroid $U_{n,r}$ is the matroid whose independent sets consist of all subsets of $[n]$ of cardinality at most $r$. Theorem \ref{thm:fyhilbert} gives a formula for the Hilbert series of $A\big(M_r(\ensuremath{\mathbb{F}}_q^n)\big)$, \[ H\Big(A\big(M_r(\ensuremath{\mathbb{F}}_q^n)\big),t\Big) = 1+\sum_{\ensuremath{\mathbf{r}}} \prod _{i=1} ^{|\ensuremath{\mathbf{r}}|} \frac{t(1-t^{r_i-r_{i-1}-1})}{1-t}\genfrac{[}{]}{0pt}{}{n-r_{i-1}}{r_i - r_{i-1}}_q, \] where the sum is over all tuples of dimensions $\ensuremath{\mathbf{r}} = (0=r_0<r_1<\cdots<r_{|\ensuremath{\mathbf{r}}|}\leq r)$. In particular, when $q = 1$, the formula above specializes to what Theorem \ref{thm:fyhilbert} gives for $H\Big(A(U_{n,r}),t\Big)$. From this it follows that any invariant of $A(U_{n,r})$ that can be computed in terms of its Hilbert series can be computed by instead considering the corresponding invariant of $A(M_r(\ensuremath{\mathbb{F}}_q^n))$ and setting $q = 1$. We record a number of results obtained this way below. 
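The $q=1$ specialization can also be checked by machine: for the full-rank case $r=n$, the chain formula above evaluated at $q=1$ should produce the $n$-th Eulerian polynomial $\sum_{\sigma\in\mathfrak S_n}t^{\exc(\sigma)}$. The following Python sketch (function names are ours) sums the chain formula by brute force, using the observation that a jump $r_i-r_{i-1}=1$ makes the corresponding factor $t+\cdots+t^{r_i-r_{i-1}-1}$ empty.

```python
from itertools import permutations
from math import comb

def chain_sum(n):
    """q = 1 Hilbert series of A(M(F_q^n)) from the chain formula,
    returned as a coefficient list in t."""
    def factor(m):  # t(1 - t^{m-1})/(1 - t) = t + t^2 + ... + t^{m-1}
        return [0] + [1] * (m - 1)

    def mul(p, q):
        out = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                out[i + j] += a * b
        return out

    def add(p, q):
        m = max(len(p), len(q))
        return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                for i in range(m)]

    total = [1]            # the empty chain contributes 1
    stack = [(0, [1])]     # (last rank r_{i-1}, product of factors so far)
    while stack:
        last, poly = stack.pop()
        for nxt in range(last + 2, n + 1):  # jumps of size 1 contribute 0
            term = [comb(n - last, nxt - last) * c
                    for c in mul(poly, factor(nxt - last))]
            total = add(total, term)
            stack.append((nxt, term))

    while len(total) > 1 and total[-1] == 0:
        total.pop()
    return total

def eulerian(n):
    """Coefficients of sum over S_n of t^exc(sigma)."""
    coeffs = [0] * n
    for sigma in permutations(range(n)):
        coeffs[sum(sigma[i] > i for i in range(n))] += 1
    return coeffs

for n in range(2, 6):
    assert chain_sum(n) == eulerian(n)
```

For lower ranks $r<n$ the analogous check requires the precise chain conditions of Theorem \ref{thm:fyhilbert}, so we restrict the sketch to the full-rank case.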
\begin{thm}[see Theorem \ref{cor:linearhilbert}] \label{thm:linearhilbert} For $r = 0, 1, \ldots, n$ and $F_{n,k}:=\setof{\sigma\in \mathfrak{S}_n}{\#\fix(\sigma) \geq k}$, the Hilbert series of $A\big( U_{n,r} \big)$ is given by \[ H\big( A(U_{n,r}),t \big) = \sum_{\sigma\in \mathfrak{S}_n}t^{\exc(\sigma)} - \sum_{j=r}^{n-1} \sum_{\sigma\in F_{n,n-j}}t^{j-\exc(\sigma)}. \] In particular, if $r = n$, the Hilbert series of $A(U_{n,n})$ is the $n$-th Eulerian polynomial, and if $r = n-1$, the Hilbert series of $A\big( U_{n,n-1} \big)$ is \[ H\Big( A\big( U_{n,n-1} \big), t\Big) = \sum_{\sigma\in \mathcal D_n}t^{\exc(\sigma)-1}. \] \end{thm} \begin{thm}[see Theorem \ref{thm:linearcd}] \label{CDuniform} For odd $r$, the Charney-Davis quantity of $A(U_{n,r})$, for the uniform matroid $U_{n,r}$ of rank $r$ on $n$ elements, is \[ \sum_{k=0}^{\frac{r-1}{2}}\binom{n}{2k}E_{2k}, \] where $E_{2\ell}$ is the $\ell$-th secant number, i.e. \[ \sech(t) = \sum_{\ell\geq 0}E_{2\ell}\frac{t^{2\ell}}{(2\ell)!}. \] \end{thm} \begin{rmk} For $r = n$ odd, a standard recurrence shows \[ \sum_{k=0}^{\frac{n-1}{2}}\binom{n}{2k}E_{2k} = E_{n}. \] In particular, Theorem \ref{CDuniform} specializes to the formulas on page 275 of \cite{rcharney} and on page 52 of \cite{hshell}. \end{rmk} \begin{rmk} Those interested in the $\gamma$-polynomial of $A(U_{n,r})$ for $r = n, n-1$ should see Theorem 11.1 of \cite{pr} and Theorem 4.1 of \cite{athan}. The former gives the $\gamma$-vector of $A(U_{n,n})$ in the context of the $\gamma$-vector of the permutohedron. Since $H\big( A(U_{n,n-1}),t \big)$ is the local $h$-vector of the barycentric subdivision of the simplex, Athanasiadis' survey \cite{athan} gives the analogous interpretation of the $\gamma$-vector of $H\big(A(U_{n,n-1}),t\big)$. \end{rmk} \section{Conjectures and future work} \label{sec:conjectures} Our data points to a possible relationship between order complexes and Chow rings. 
Let $\Delta(P)$ be the order complex of a poset $P$, and for any simplicial complex $S$, denote the $h$-polynomial of $S$ by \[ h(S, t) \coloneqq \sum_{i = 0}^{\olddim(S)} f_{i-1}(t-1)^{\olddim(S) -i}, \] where $f_j$ is the number of $j$-dimensional faces of $S$ and $f_{-1} = 1$ by convention. \begin{prop}[\cite{pk} Theorem 9.1, \url{https://oeis.org/A008292}] For all $n\geq1$, \[h\big(\Delta(L(U_{n,n})), t\big) = H\big(A(U_{n,n}),t\big). \] \end{prop} The corresponding statement for the uniform matroids $U_{n,r}$ with $r < n$ has small counterexamples, but can be modified as follows. \begin{conj} For $r<n$, we have \[ h\big(\Delta(L(U_{n,r})), t\big) = t^{2} \sum_{i = 1}^{r} \binom{n-i-1}{r-i} H(A(U_{n,i}), t). \] \label{conj:order-complex-hpoly} \end{conj} Since it is relatively simple to compute the $f$-vector of $\Delta(L(U_{n,r}))$, this would also give a formula for $H(A(U_{n,r}),t)$. \begin{rmk} Conjecture \ref{conj:order-complex-hpoly} is equivalent to the equality $F_n(t,u) = H_n(t,u+1)$ for the polynomials \begin{align*} F_n(t,u) &= \sum_{r = 0}^{n-2} h\big( \Delta(\mathcal L(U_{n,r+1})\setminus\{\top,\bot\}),t\big)u^{n-2-r} \\ H_n(t,u) &= \sum_{r=0}^{n-2}H(A(U_{n,r+1}),t)u^{n-2-r}. \end{align*} \end{rmk} For more conjectures and some other results pertaining to Chow rings of general atomic lattices, see \cite{report}. \section*{Acknowledgments} This research was carried out as part of the 2017 summer REU program at the School of Mathematics, University of Minnesota, Twin Cities, and was supported by NSF RTG grant DMS-1148634 and by NSF grant DMS-1351590. The authors would like to thank Victor Reiner, Pavlo Pylyavskyy, and Benjamin Strasser for their mentorship and support. \nocite{*} \printbibliography \end{document}
\section{Introduction} \label{sec:introduction} Anyons~\cite{Wilczek1982,Leinaas1977}, particles with an exchange statistics interpolating between bosons and fermions, play a crucial role in fascinating concepts of modern condensed matter physics, such as topological quantum phases, in particular the fractional quantum Hall effect~\cite{Laughlin1983,Halperin1984,Camino2005,Kim2005} and topological quantum computing~\cite{Kitaev2003, Nayak2008}. The experimental search for and manipulation of anyons have attracted large interest in recent years, including spin and boson models~\cite{Kitaev2003, Shen2014,Pan2003, Zhang2008, Feng2013} and systems of ultracold atoms~\cite{Paredes2001,Duan2003,Micheli2006,Aguado2008,Jiang2008}. While theoretically settled, the unambiguous detection of anyonic (quasi-)particles, e.g. by interferometric measurements~\cite{Camino2005}, is still the object of active research~\cite{Jiang2008, Bonderson2008}. Although anyons were originally proposed for two-dimensional~(2D) systems, one-dimensional~(1D) anyons have been theoretically studied as well~\cite{Haldane1991, Ha1994, Murthy1994, Wu1995, Zhu1996, Amico1998,Kundu1999, Batchelor2006, Girardeau2006}. The exotic properties of 1D (Abelian) anyon models include asymmetric momentum distributions~\cite{Calabrese2007,Patu2007,Hao2008,Hao2009,Calabrese2009,Tang2015}, particle dynamics~\cite{DelCampo2008,Hao2012,Wang2014}, entanglement properties~\cite{Santachiara2007,Guo2009, Marmorini2016}, and statistically induced Mott-insulator-to-superfluid quantum phase transitions~\cite{Keilmann2011,Greschner2015,Forero2016,Zhang2017}. Despite this theoretical interest, an experimental realization of 1D anyons is still missing. A (pseudo) anyon Hubbard model (AHM) in 1D optical lattices may be engineered by means of Raman-assisted tunneling~\cite{Keilmann2011, Greschner2014AB,Greschner2015anyon}. Pseudo anyons exhibit anyonic commutation relations off-site, but behave on-site as bosons, i.e. 
there may be more than one particle per lattice site. A drastically simplified realization of the AHM may be achieved by means of periodically driven lattices~\cite{Straeter2016}. A proper three-color modulation of the lattice depth has been proposed for the realization of a two-component 1D anyon Hubbard model (2AHM)~\cite{Cardarelli2016}. As for 2D anyons, revealing the anyonic character of the engineered 1D quasi-particles remains an interesting open question. This paper proposes two feasible experiments in which the modified statistics may be revealed in ultracold lattice gases. On the one hand, we show that expansion experiments using the three-color modulation of Ref.~\cite{Cardarelli2016} may reveal the formation of anomalous bound-state pairs. These pairs, which result from the anyonic exchange statistics, anticipate the emergence of the exotic partially paired phase~(PP) predicted for the 2AHM~\cite{Cardarelli2016}. On the other hand, combining the three-color modulation with a spin-dependent tilting and Raman-assisted coupling at the system boundaries allows for the realization of an effective one-component hardcore (i.e. with at most one particle per site) AHM with periodic boundary conditions~(PBC). This effective synthetic ring set-up allows for interferometric measurements that reveal the statistical angle characterizing the anyonic nature of the particles. The paper is organized as follows. After an introduction to the three-color modulation scheme in Sec.~\ref{sec:3color} and a discussion of the mappings to anyon models in the low-density limit, Sec.~\ref{sec:interferometer} is devoted to the expansion of particles in the hardcore AHM with PBC, and its characteristic dependence on the statistical angle. Finally, in Sec.~\ref{sec:2AHMexpand} we discuss expansion experiments for the 2AHM model, and conclude in Sec.~\ref{sec:outlook} with a summary and a short outlook. 
\section{Three-color modulation of an interacting Fermi gas} \label{sec:3color} In the following we recapitulate and extend the multicolor modulation scheme introduced in Ref.~\cite{Cardarelli2016}. The main experimental idea is depicted in Figs.~\ref{fig:scheme1} and \ref{fig:scheme2}. A two-component ($\sigma=0,1$, corresponding to spin $\uparrow$ and $\downarrow$) Fermi (or hardcore Bose) gas is loaded into a 1D optical lattice. We assume a tilted lattice, with an energy shift $\Delta$ between neighboring sites~(Fig.~\ref{fig:scheme1}) due to a lattice acceleration or to gravity; hence, the tilting is spin independent. An interesting alternative is to employ a magnetic field gradient, which leads to a spin-dependent tilting~(Fig.~\ref{fig:scheme2}). The system is then described by the Fermi-Hubbard Hamiltonian: \begin{equation} \HHop(t)\!=\! -J(t)\sum_{j,\sigma} \! \left [ c_{j+1,\sigma}^\dag c_{j,\sigma}\! +\! \mathrm{H.c.} \right ]\! + \HHop_{\rm int} + \HHop_{\rm tilt}, \label{eq:H_full_J(t)} \end{equation} where $c_{j,\sigma}$ is the annihilation operator of a fermion with spin $\sigma$ at site $j$, and the tilting is given by \begin{align} \HHop_{\rm tilt} \!=\! \Delta \sum_{j,\sigma} \epsilon_\sigma j n_{j,\sigma}\,, \end{align} where $\epsilon_\sigma=1$ for the spin-independent tilting and $\epsilon_\sigma=(-1)^\sigma$ for the spin-dependent case. Importantly, we assume that the particles experience a repulsive on-site interaction, \begin{align} \HHop_{\rm int} \!=\! U \sum_j n_{j,\uparrow}n_{j,\downarrow} \,. \end{align} In both cases we assume that direct hopping of the particles between the lattice sites can be neglected, $J(t) \ll \Delta, |\Delta\pm U| $. The hopping is restored by a resonant modulation of the optical lattice. Following Ref.~\cite{Cardarelli2016}, we assume a fast periodic modulation of the optical lattice depth~\cite{Ma2011}, $V(t)=V_0+\delta V(t)$, with $\delta V\ll V_0$. 
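Before introducing the driving, it is instructive to tabulate the energy offsets that suppress direct hopping. The following minimal Python sketch (parameter values are ours, chosen only for illustration) evaluates the diagonal energies of $\HHop_{\rm tilt}+\HHop_{\rm int}$ on a two-site chain with spin-independent tilting, and verifies that nearest-neighbor hops cost $\Delta$, $\Delta+U$, or $U-\Delta$, depending on the occupations involved.

```python
# Diagonal part H_tilt + H_int for a two-site chain (sites j = 0, 1),
# spin-independent tilt (epsilon_sigma = 1). Illustrative values, DELTA != U.
DELTA, U = 7.0, 3.0

def energy(up, dn):
    """Diagonal energy of a Fock state; `up`/`dn` list site occupations (0 or 1)."""
    tilt = DELTA * sum(j * (up[j] + dn[j]) for j in range(len(up)))
    interaction = U * sum(up[j] * dn[j] for j in range(len(up)))
    return tilt + interaction

# a single atom hops one site to the right on an otherwise empty lattice:
assert energy([0, 1], [0, 0]) - energy([1, 0], [0, 0]) == DELTA
# a single atom hops to the right onto an occupied site:
assert energy([0, 1], [0, 1]) - energy([1, 0], [0, 1]) == DELTA + U
# a single atom hops to the left onto an occupied site:
assert energy([1, 0], [1, 0]) - energy([0, 1], [1, 0]) == U - DELTA
# doublon hopping: the up atom leaves a doublon and joins the down atom:
assert energy([0, 1], [1, 1]) - energy([1, 0], [1, 1]) == DELTA
```

These offsets are exactly the frequencies that the multichromatic drive must bridge resonantly.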
One could equivalently assume a fast lattice shaking, as discussed in Ref.~\cite{Straeter2016}. We may then integrate out the fast periodic driving and obtain the effective anyon models, as we will discuss in the following. \begin{figure}[tb] \begin{center} \includegraphics[width=1\linewidth]{tilt1_scheme.eps} \caption{Sketch of the lattice set-up with a spin-independent tilting $\Delta$ and the relevant hopping processes (i)-(iv). Blue (red) bullets correspond to spin $\uparrow$ ($\downarrow$) particles.} \label{fig:scheme1} \end{center} \end{figure} \subsection{The spin-independent tilting} From Fig.~\ref{fig:scheme1} we may identify four relevant hopping processes in the spin-independent tilting, which we would like to restore by means of resonant drivings, and the corresponding energy differences $\Delta E_i$: \begin{itemize} \item (i) a single atom hops to an empty site to its right: $\Delta E_1\!=\!\Delta$; \item (ii) a single atom tunnels to an occupied site at its right: $\Delta E_{2}\!=\!\Delta\!+\!U$; \item (iii) the same as (ii), but the hopping is to the left: $\Delta E_{3}\!=\! U\! -\!\Delta$; \item (iv) an atom in a doubly-occupied site tunnels into a singly-occupied site~(``doublon hopping''): $\Delta E_{4}\!=\!\Delta$. \end{itemize} As discussed in Ref.~\cite{Cardarelli2016}, a separate but simultaneous driving with three frequencies, $\omega_1=\Delta$, $\omega_2=\Delta+U-\tilde U$, and $\omega_3=-\Delta+U-\tilde U$, allows for (quasi-)resonantly restoring the four hopping processes. The detuning $|\tilde U|\ll U$ allows for the introduction of an effective two-body interaction. In the following we will set $\tilde{U}=0$. Hence, we choose a modulation of the laser intensity as \begin{align} \delta V(t)=\delta V \sum_{s=1,2,3} \cos(\omega_s t+\phi_s), \end{align} which corresponds to a modulation of the tunneling amplitude as \begin{align} \delta J(t)= \delta J \sum_s \cos(\omega_s t+\phi_s)\,. 
\end{align} An important aspect here is that we may choose the phases $\phi_s$ of the three drivings arbitrarily, which can be exploited to realize the fractional statistics. After integrating out the resonant driving, we recover a model without tilting, \begin{align} \!\HHop_{\text{eff}}\!=\! -\frac{\delta J}{2} \!\sum_{j,\sigma}\! c_{j+1,\sigma}^\dag \e^{i \phi |n_{j+1,\bar\sigma}\!-\! n_{j,\bar\sigma}|} c_{j,\sigma} + \dots. \label{eq:H_eff} \end{align} Interestingly, within the scheme we may also control the amplitudes of the three drivings separately, which opens further possibilities, e.g. it allows for the simulation of more general correlated-hopping Hubbard models with asymmetric hopping amplitudes for doublons and single particles. Here, however, we focus on the properties of the AHM with symmetric hoppings. As discussed in Ref.~\cite{Cardarelli2016}, higher order terms in this scenario (the ellipsis in Eq.~\eqref{eq:H_eff}) may induce effective interactions between nearest-neighbor~(NN) sites due to the virtual hopping of particles. Those NN interactions open additional interesting possibilities for the observation of interacting quantum gases. While this aspect was analyzed in detail in Ref.~\cite{Cardarelli2016}, in the following we will neglect these terms. At low lattice filling $\rho$, for which process (iv) may be neglected, a Jordan-Wigner-like transformation~\cite{Keilmann2011}, \begin{equation} f_{j,\sigma}=\e^{-\ii 2 \phi \sum_{1\leq l< j} n_l} \e^{-\ii \phi n_j} c_{j,\sigma}, \end{equation} maps model Eq.~\eqref{eq:H_eff} into a 2AHM: \begin{eqnarray} \!\!\!\HHop_{\text{2AHM}}\! \! &=&\! \! - \frac{\delta J}{2} \sum_{j,\sigma} (f_{j,\sigma}^{\dagger}f_{j+1,\sigma}^{\phantom \dagger} \!+\! \text{H.c.}) \!+\! \tilde{U}\HHop_{\text{int}}. 
\label{eq:2AHM} \end{eqnarray} The operators $f_{j,\sigma}$ and $f_{j,\sigma}^\dagger$ characterize anyon-like hardcore particles that fulfill a deformed exchange statistics: \begin{align} &f_{j,\sigma} f_{k,\sigma'}^\dagger + \mathcal{F}_{j,k} f_{k,\sigma'}^\dagger f_{j,\sigma} = \delta_{j,k} \delta_{\sigma,\sigma'} , \nonumber\\ &f_{j,\sigma} f_{k,\sigma'} + \mathcal{F}_{j,k} f_{k,\sigma'} f_{j,\sigma} = 0 . \end{align} The complex parameter $\mathcal{F}_{j,k}$ determines the statistics of the system: \begin{eqnarray} &{\cal F}_{j,k}:=\left \{ \begin{array}{ll} e^{-\ii 2 \phi},\ \quad & j>k, \\ 1, \quad & j=k, \\ e^{\ii 2 \phi}, \quad & j<k, \end{array} \right. \quad & \end{eqnarray} where the condition ${\cal F}_{j,j}=1$ sets the hard-core behavior of the particles. Note that for $\phi=0$ we retrieve the two-component Fermi-Hubbard model, while $\phi=\pi/2$ corresponds to the two-component hard-core Bose-Hubbard model. Non-trivial quantum effects may be observed even for $\tilde{U}=0$ and $\phi=\pi/2$~\cite{Cardarelli2016}. \begin{figure}[tb] \begin{center} \includegraphics[width=1\linewidth]{tilt2_scheme.eps} \caption{Lattice shaking scheme with a magnetic field gradient for the realization of the hardcore anyon Hubbard model. Microwave fields $\Omega$ couple the boundaries of the system.} \label{fig:scheme2} \end{center} \end{figure} \subsection{Magnetic field gradient} We now consider the case of a spin-dependent tilting of the optical lattice, such as realized by a magnetic field gradient (Fig.~\ref{fig:scheme2}). Again we may identify four hopping processes and three corresponding frequencies. \begin{itemize} \item (i) a single atom hops to an empty site to its right: $\Delta E_1\!=\!\Delta$; \item (ii) a single $\uparrow$ ($\downarrow$) atom tunnels to an occupied site to its right (left): $\Delta E_{2}\!=\!\Delta\!+\!U$; \item (iii) the same event as (ii) but the hopping is to the left (right): $\Delta E_{3}\!=\! U\!
-\!\Delta$; \item (iv) doublon hopping: $\Delta E_{4}\!=\!\Delta$. \end{itemize} Hence, now the same three-color modulation scheme allows for realizing opposite phases for the hopping of $\uparrow$ and $\downarrow$ particles of the AHM. \begin{align} H^{\rm SD}_{\rm eff} \!=\! -\frac{\delta J}{2} \sum_{ \substack{j=0\cdots L\\ \sigma=0,1}} c_{j,\sigma}^\dagger e^{i (-1)^\sigma \phi |n_{j+1,\bar\sigma}\!-\! n_{j,\bar\sigma}|} c_{j+1,\sigma} + {\rm H.c.} \,. \label{eq:H_eff_SD} \end{align} Interestingly, the phase $\phi$, contrary to Model~\eqref{eq:H_eff}, has no effect on the spectrum of the model and can be gauged out in an OBC system by a simple redefinition of the fermion operators. This, however, depends on the boundary conditions and is no longer possible if we couple the two components by means of a resonant laser or microwave field. If we assume the particles to be trapped in a steep box-shaped trap, such that the boundaries of the system are well defined, we may couple the boundaries~\cite{Boada2015} through spin-flip terms: \begin{align} H^{\rm SD}_{\rm eff} + \Omega \left ( c_{0,1}^\dagger c_{0,0} + c_{L,1}^\dagger c_{L,0} + {\rm H.c.} \right ) \,. \end{align} For low densities we may now interpret the system as an anyon model with single-component particles in $2L$ sites and with PBC. Most importantly, two effective single-component particles pick up a phase $\phi$ when exchanging their position, i.e.\ by traveling once around the ring. In the low-density limit (i.e., if we may again neglect process (iv)) we obtain a model of, now spin-less, hardcore anyons on a synthetic ring: \begin{align} H_{\rm AHM}=\sum_{i=0\cdots 2L} \alpha_i^\dagger \alpha_{i+1} + \alpha_L^\dagger \alpha_0 + {\rm H.c.}.
\label{eq:HAHM} \end{align} The anyons $\alpha_i$ obey the hardcore constraint $(\alpha_i^\dagger)^2\equiv 0$, and the deformed exchange relations, \begin{align} \alpha_j \alpha_k^\dagger + \e^{-\ii 2\phi\, {\rm sgn}(j-k)} \alpha_k^\dagger \alpha_j = \delta_{jk} \,, \\ \alpha_j \alpha_k + \e^{-\ii 2\phi\, {\rm sgn}(j-k)} \alpha_k \alpha_j = 0 \,. \end{align} It is important to note that, without further interactions, Model~\eqref{eq:HAHM} is integrable. For OBC, a Jordan-Wigner transformation maps it to the case of free fermions, and the spectrum, as well as those properties that depend on the density, is unaffected by the phase $\phi$~(see, e.g., Ref.~\cite{Hao2008} and references therein for a detailed discussion of the Jordan-Wigner transformation for OBC and PBC). The quasi-momentum distribution and the single-particle density matrix do exhibit a strong dependence on the statistics. However, an experiment will only measure the fermionic momentum distribution (since only local hoppings in the model are affected). This changes, however, for PBC. The model is still integrable, but a mapping to free fermions leads to a density-dependent boundary term \begin{align} H_{\rm AHM} = \sum_i c_i^{\dagger} c_{i+1} + \e^{\ii \phi \sum_{0<j<L-1} n_j} c_L^\dagger c_0 + {\rm H.c.} \end{align} We will show in the following how this effective boundary term affects the real-space density during the time evolution after a quantum quench. \begin{figure}[b] \begin{center} \includegraphics[width=1\linewidth]{expansion_scheme.eps} \caption{Scheme of the experimental protocols discussed in the paper. (a) Interferometer scheme for the PBC anyon model of Sec.~\ref{sec:interferometer}.
(b) Doublon expansion for the 2AHM discussed in Sec.~\ref{sec:2AHMexpand}.} \label{fig:expansion_scheme} \end{center} \end{figure} \section{Dynamical probing of the exchange statistics} \label{sec:interferometer} The experimental setup described above allows for the engineering of 1D anyons with an arbitrary statistical angle $0\leq \phi\leq \pi/2$. In the following, we propose an interferometer scheme that reveals the anyonic character by means of an expansion experiment in a small lattice system. The general idea is sketched in Fig.~\ref{fig:expansion_scheme}~(a). Initially, a spin-polarized cloud of two or more particles is prepared in the center of the lattice. For concreteness we first consider exactly two (spin $\uparrow$) particles tightly confined to the two adjacent central sites. After that, we discuss the case of a larger cloud with fixed average particle number. \begin{figure}[h] \begin{center} \includegraphics[width=1\columnwidth]{fig2d_free_N2_L24.eps} \caption{Time evolution for (a) and (b) fermions ($\phi=0$), (c) and (d) $\phi=\pi/4$ anyons, and (e) and (f) hardcore bosons ($\phi=\pi/2$). Panels (a), (c) and (e) show the $\uparrow$ component (i.e.\ sites $0\dots 5$ of the PBC ring system) and panels (b), (d) and (f) the $\downarrow$ component (sites $6 \dots 11$).} \label{fig:2dexpandfree} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=1\columnwidth]{fig_full_eff_N2_L12.eps} \includegraphics[width=1\columnwidth]{fig_full_eff_N2_L12_n1.eps} \caption{Time evolution of the central density of (a) the spin $\uparrow$ component $n_0$ and (b) the spin $\downarrow$ component $n_1$ in the effective hardcore anyon model, for fermions ($\phi=0$) and hardcore bosons ($\phi=\pi/2$). The dashed lines depict a comparison with the full three-color modulation model with $\Delta/J_0=40$, $U/J_0 = 20$~(real space length $L=12$ sites, 2 particles, $\delta J=0.5J$).
The inset of (a) shows the density $n_0(x=L/2)$ for $\tau/J=L+1$ of the effective model (solid line) and the full three-color modulation simulation~(symbols) as a function of the statistical angle $\phi$.} \label{fig:full_eff} \end{center} \end{figure} \subsection{Two particle interference} In Fig.~\ref{fig:2dexpandfree} we show the evolution of the density of two particles for different statistical angles $\phi=0$, $\pi/4$ and $\pi/2$. The upper panels of Fig.~\ref{fig:2dexpandfree} show the evolution of the density of the spin-$\uparrow$ component $n_0$, and the lower panels that of the spin-$\downarrow$ component $n_1$. Although the particle expansion is diffusive and we cannot monitor the position of single particles, one may observe the emergence of an interference pattern at the center of the system after the particles have on average traveled once through the whole lattice, at $\tau/J \sim L$. Fig.~\ref{fig:full_eff} depicts in detail the time evolution of the central density for both components for $\phi=0$ (effective fermions) and $\phi=\pi/2$ (effective hardcore bosons). For $\tau / J\gtrsim L/2$ the curves noticeably depend on the statistical angle. In particular, close to the classical return point $\tau/J \sim L$, the density $n_0(L/2)$ shows a strong dependence on the statistical angle. The inset of Fig.~\ref{fig:full_eff}~(a) depicts this central spin $\uparrow$ density $n_0$ for $\tau/J=L+1$, where we observe a distinct peak for fermions and a local minimum for the bosonic case (dashed line in Fig.~\ref{fig:full_eff}). Fig.~\ref{fig:full_eff} also compares the evolution of the full three-color modulation Model~\eqref{eq:H_full_J(t)} and the effective PBC AHM~\eqref{eq:HAHM}.
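As an aside, the effective PBC anyon model~\eqref{eq:HAHM} is small enough to be simulated directly. The following is a minimal exact-diagonalization sketch of the fermionized ring with its density-dependent boundary phase (a toy version, not the code used for the figures; the ring size, evolution time, and boundary-phase convention are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm

L, phi = 8, 0.5 * np.pi    # ring size and statistical angle (illustrative)

# Jordan-Wigner construction of spinless-fermion operators c_j
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilates an occupied site
I2 = np.eye(2)

def kron_list(ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

c = [kron_list([Z] * j + [sm] + [I2] * (L - 1 - j)) for j in range(L)]
n = [ci.conj().T @ ci for ci in c]

# free-fermion bulk plus a boundary hop whose phase depends on the
# interior particle number (the statistics-dependent boundary term)
H = sum(c[i].conj().T @ c[i + 1] for i in range(L - 1))
N_int = sum(np.diag(ni) for ni in n[1:L - 1])       # diagonal in Fock space
H = H + np.diag(np.exp(1j * phi * N_int)) @ c[L - 1].conj().T @ c[0]
H = H + H.conj().T

# two particles on the two central sites, evolve, record the density profile
vac = np.zeros(2 ** L); vac[0] = 1.0
psi0 = c[L // 2].conj().T @ c[L // 2 - 1].conj().T @ vac
psi = expm(-1j * 4.0 * H) @ psi0
dens = np.array([np.vdot(psi, ni @ psi).real for ni in n])
```

Sweeping `phi` in this sketch should change the interference pattern stored in `dens` while the total particle number stays fixed, mirroring the $\phi$-dependence of the central density discussed above.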
Due to higher-order terms the two corresponding curves separate during the time evolution; however, for the given parameters the time evolution of Model~\eqref{eq:H_full_J(t)} reproduces the hardcore anyon model very well over the full range of $\tau/J\lesssim 2L$ shown in Fig.~\ref{fig:full_eff}. While Model~\eqref{eq:HAHM} is integrable as discussed above, for the real-time evolution of the interacting two-component three-color modulated Fermi-Hubbard model we employ exact diagonalization techniques in combination with a higher-order Runge-Kutta method. \subsection{Fixed average particle density} Experiments with single-site resolution~\cite{bakr2009,sherson2010, endres2011, cheneau2012} may allow for the controlled initial preparation of a two-particle state and the subsequent observation of the time evolution of the (possibly spin-resolved) density~\cite{Tai2016} corresponding to Fig.~\ref{fig:full_eff}. In the following we relax these conditions and analyze the possibility of an interferometric measurement with a larger cloud with fixed average density. Initially we again assume a fully polarized sample with all particles prepared in a tight trap, and ensemble-average over several realizations of the setting with fluctuating total particle number with average density $n_{\rm avg}$ (for concreteness we choose an ensemble with $\rho(n)\sim \e^{-(n-n_{\rm avg})^2}$). A measurement of the total spin-polarization, \begin{align} \Delta n = \sum_x \langle n_{0}(x) - n_{1}(x)\rangle \,, \end{align} may be used as an indicator of the anyonic exchange statistics. As the particles travel to the other half of the chain, they start to interfere, and differences in the average populations of the two components may be measured. After a long enough waiting time this difference may be quite pronounced.
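The Gaussian ensemble average over the total particle number amounts to a simple weighted mean; a minimal sketch (the per-realization polarization values used here are a hypothetical placeholder for measured data):

```python
import numpy as np

n_avg = 3.0
ns = np.arange(0, 13)                      # particle numbers in the ensemble
w = np.exp(-(ns - n_avg) ** 2)             # rho(n) ~ exp(-(n - n_avg)^2)
w /= w.sum()                               # normalize the ensemble weights

delta_n = 0.1 * ns                         # hypothetical polarization per realization
delta_n_avg = float(np.sum(w * delta_n))   # ensemble-averaged observable
```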
In Fig.~\ref{fig:sf_time_evolv} we show the ensemble-averaged value of $\Delta n$ after a fixed time $\tau / J=2L$ as a function of the statistical angle $\phi$ for different values of $n_{\rm avg}$. The curves are not a monotonic function of the phase $\phi$ and depend on $n_{\rm avg}$; however, they exhibit a strong dependence on the statistics of the particles. \begin{figure}[h] \begin{center} \includegraphics[width=1\columnwidth]{fig_sf_avg_L12_t48.eps} \caption{Spin polarization as a function of the statistical angle $\phi$ for various average fillings $n_{\rm avg}$.} \label{fig:sf_time_evolv} \end{center} \end{figure} \section{Dynamical Probing of pairing in the 2AHM} \label{sec:2AHMexpand} We now return to the 2AHM~\eqref{eq:2AHM} introduced in Ref.~\cite{Cardarelli2016}. We discuss how an expansion experiment may reveal the unconventional pairing properties of the 2AHM in the (pseudo) boson limit. For the case of a pseudo-anyon Hubbard model (single-component, softcore anyons) similar ideas have been discussed in Ref.~\cite{Wang2014}. \subsection{Bound pairs in the 2AHM} Contrary to the hardcore AHM, the phase diagram of Model~\eqref{eq:2AHM} depends strongly on the statistical angle $\phi$. Indeed, as a function of $\phi$ and the filling, a plethora of ground-state phases may be found. This includes the emergence of the PP phase and a paired (singlet superconducting, SS) phase even for vanishing interactions $\tilde{U}=0$ (see Ref.~\cite{Cardarelli2016}). A detailed analysis of the full ground-state phase diagram of Model~\eqref{eq:2AHM} as a function of the phase $\phi$ will be published elsewhere. Both PP and SS phases can be understood from the unconventional emergence of paired states in the spectrum of the model.
Following the analysis of Ref.~\cite{Cardarelli2016} and similar calculations for the softcore AHM~\cite{Zhang2017}, one observes that for a finite $\phi>0$ bound states may form in the two-particle spectrum even for vanishing on-site interactions. For the 2AHM with vanishing on-site interaction term $\tilde{U}=0$, we find two-body bound states with a dispersion relation \begin{align} E_K = \pm 2 \sqrt{2} t \frac{\cos(K) \cos(2\phi)+1}{\sqrt{\cos(K) (2 \cos(2\phi)-1)+1}} \,. \end{align} Here $K$ is the total momentum of the two-particle solution and $\pi/2<K<3\pi/2$. For $\phi>\pi/3$ the bound-state spectrum $E_K$ has a local minimum at $K=\pi$. As discussed in Ref.~\cite{Cardarelli2016}, PP and SS phases can form due to a quasi-condensation of bound pairs at this point, as the fractional statistics also induces an effective interaction between the anyons. Several examples of the two-particle spectrum are shown in Fig.~\ref{fig:2AHM_dilute}. \begin{figure}[tb] \centering \includegraphics[width=1.\linewidth]{fig_dilute.eps} \caption{Unconventional bound states in the two-particle spectrum of Model~\eqref{eq:2AHM} (vanishing on-site interactions $\tilde{U}=0$) as a function of the total momentum of two particles $K$. Dashed lines depict the two-particle scattering continuum. Solid lines show the bound states for (from top to bottom) $\phi/\pi=0.1$, $0.2$, $0.3$, $0.4$ and $0.5$.} \label{fig:2AHM_dilute} \end{figure} \subsection{Expansion dynamics} The formation of unconventional bound states resulting from the anyonic exchange statistics may be revealed by the characteristic expansion of a cloud of particles (now with balanced spin and OBC) into an empty lattice (Fig.~\ref{fig:expansion_scheme}~(b)). We consider the particles initially with opposite spin on two adjacent sites in the center of an empty lattice.
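The quoted dispersion can be checked numerically. The sketch below (lower branch only, hopping scale $t=1$ assumed) confirms that $K=\pi$ turns from a local maximum into a local minimum of the bound-state band as $\phi$ crosses $\pi/3$:

```python
import numpy as np

def E_bound(K, phi, t=1.0):
    """Lower branch of the two-body bound-state dispersion at U~ = 0."""
    num = np.cos(K) * np.cos(2.0 * phi) + 1.0
    den = np.sqrt(np.cos(K) * (2.0 * np.cos(2.0 * phi) - 1.0) + 1.0)
    return -2.0 * np.sqrt(2.0) * t * num / den

K = np.linspace(np.pi / 2 + 0.01, 3 * np.pi / 2 - 0.01, 1001)
E_small = E_bound(K, 0.1 * np.pi)   # phi < pi/3: no local minimum at K = pi
E_large = E_bound(K, 0.4 * np.pi)   # phi > pi/3: local minimum at K = pi
```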
In Fig.~\ref{fig:2AHM_texpansion} we show the time evolution of the real-space density $n_0(x)+n_1(x)$, and in Fig.~\ref{fig:2AHM_texpansion_s} the spin density $n_0(x)-n_1(x)$, for several values of $\phi$. All examples show a light-cone-like ballistic expansion of the density with a constant velocity independent of the statistical angle $\phi$, corresponding to single unbound particles moving into the empty lattice. Contrary to the case of softcore anyons~\cite{Wang2014}, the light cone is symmetric for all $\phi$. As soon as bound states can be found in the two-particle spectrum for a finite $\phi>0$, we observe a second light cone, most evident in Fig.~\ref{fig:2AHM_texpansion}~(b). As this feature is absent in the spin-density picture~(see Fig.~\ref{fig:2AHM_texpansion_s}), we conclude that it corresponds to bound pairs of particles. The pairs exhibit a larger effective mass due to the flatness of the bound-state band~(see Fig.~\ref{fig:2AHM_dilute}) and hence the second, inner light cone is much steeper. Interestingly, for our choice of initial state, the expansion of the bound-state fraction almost stops for $\phi=\pi/2$~(Fig.~\ref{fig:2AHM_texpansion}~(c)). To further quantify this expansion dynamics we monitor the evolution of the average expansion of the cloud, \begin{align} \Delta j(\tau) = \sqrt{\langle n_j (j-L/2)^2 \rangle (\tau)} \,, \end{align} which after some initial time becomes of the form $\Delta j (\tau) \sim \gamma \tau$. This expansion rate $\gamma$ is shown in Fig.~\ref{fig:2AHM_texpansion}~(d) and depends monotonically on the statistical angle $\phi$. As expected, for free fermions ($\phi=0$, $U=0$) we find $\gamma=\sqrt{2}$. For finite statistical angles the expansion rate is reduced due to the enhanced tendency to form bound pairs.
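The free-fermion benchmark $\gamma=\sqrt{2}$ can be reproduced with a single-particle calculation (a sketch with assumed conventions: $\hbar=1$, hopping $J=1$, and the cloud width normalized per particle):

```python
import numpy as np
from scipy.linalg import expm

L, J, tau = 201, 1.0, 15.0
# open-chain tight-binding Hamiltonian -J sum_j (|j><j+1| + h.c.)
H1 = -J * (np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1))
psi0 = np.zeros(L); psi0[L // 2] = 1.0          # particle localized at the center
psi = expm(-1j * H1 * tau) @ psi0

j = np.arange(L) - L // 2
dj = np.sqrt(np.sum(np.abs(psi) ** 2 * j ** 2))  # per-particle cloud width
gamma = dj / tau                                  # expansion rate, -> sqrt(2) J
```

The lattice is chosen large enough that the ballistic front (maximal group velocity $2J$) has not reached the boundaries at time $\tau$.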
\begin{figure}[t] \centering \includegraphics[width=1.\linewidth]{fig2d_2AHM_N2_L60.eps} \caption{Expansion dynamics of the total density $n_0(x)+n_1(x)$ of the 2AHM with (a) $\phi=0$, (b) $\phi = 0.4\pi$ and (c) $\phi=\pi/2$ ($\tilde{U}=0$, $L=60$ sites), initially prepared as a fully localized state (2 particles) on the two adjacent central sites (compare Fig.~\ref{fig:expansion_scheme}~(b)). Panel (d) depicts the calculated expansion rate $\gamma/J$ as a function of the statistical angle $\phi$ (see text).} \label{fig:2AHM_texpansion} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.\linewidth]{fig2d_2AHM_N2_L60_s.eps} \caption{Expansion dynamics of the spin density $|n_0(x)-n_1(x)|$ of the 2AHM. The parameters are the same as in Fig.~\ref{fig:2AHM_texpansion}.} \label{fig:2AHM_texpansion_s} \end{figure} \section{Conclusion and Outlook} \label{sec:outlook} In summary, we have proposed a versatile experimental scheme to engineer different types of anyon Hubbard models, extending the work of Ref.~\cite{Cardarelli2016}. By means of fast periodic drivings (lattice shaking or lattice-depth modulation) of a two-component Fermi gas in a tilted lattice, a 2AHM may be engineered, whose spectrum exhibits a nontrivial dependence on the statistical phase. Expansion experiments employed for the 2AHM may reveal properties of the unconventional quantum phases of the model. In particular, a clear tendency toward forming bound pairs may be observed in the pseudo-boson limit, revealing the underlying mechanism of the formation of the PP phase. For a spin-dependent tilting the same scheme realizes a model in which, for OBC, the effect of the phase may be gauged out and, hence, has no influence on the dynamics or statics of the model if one focuses on observables such as local densities. The situation changes drastically if one allows for Raman-assisted spin flips at the system boundaries.
This scenario may be mapped to a single-component hardcore anyon model on a synthetic ring. We have shown how fractional quantum statistics may be monitored by means of a simple interferometer scheme. The density of a cloud of expanding particles and the total spin polarization may be used to clearly reveal the exchange statistics. The implementation of PBC in cold-atom scenarios has itself attracted considerable interest, since PBC allow, for example, for the observation of intriguing topological phenomena such as the Aharonov-Bohm effect or the study of persistent currents~\cite{Buttiker1983}. Experimentally, ring-shaped traps~\cite{Sauer2001,Gupta2005,Ryu2007,Lesanovsky2007} have been realized, and recently the implementation of PBC and further complex geometries using synthetic dimensions~\cite{Celi2014,Boada2015} or Laguerre-Gauss beams \cite{Lacki2016} has been proposed. In this context, further interesting experimental possibilities of our proposal for the realization of PBC could include an additional phase factor $\phi_1$ in the modulations, which would allow for creating a ring model penetrated by a finite flux. One may then employ this setting to analyze properties of persistent currents as a function of, for example, interactions, particle statistics and temperature, and to study variations of the Drude weight. It is important to note that our proposal is not limited to 1D lattices, although only in this case is the interpretation in terms of an anyon model valid. By adding an extended lattice in a second real-space direction one may create a 2D or cylinder-like system with unconventional correlated hoppings and fluxes.\\ \begin{acknowledgments} We thank Marco Roncaglia and Andr\'e Eckardt for useful discussions. We acknowledge support of the German Research Foundation DFG (projects RTG 1729 and no. SA 1031/10-1). SG also acknowledges support by the Swiss National Science Foundation under Division II.
Simulations were carried out on the cluster system at the Leibniz University of Hannover, Germany. \end{acknowledgments}
\subsection{Algorithm} \label{sub:active_impl} \begin{algorithm}[tb] \caption{Active $\epsilon$-greedy} \label{alg:egreedy_active} {\bfseries Input:} initial policy $\pi_1$; exploration probability $\epsilon$; parameter $C_0 > 0$. $\verb+explore+(x_t)$: \begin{algorithmic} \STATE $A_t = \{a : \verb+loss_diff+(\pi_t, x_t, a) \leq \Delta_{t,C_0}\}$; \STATE $p_t(a) = \frac{\epsilon}{K} \1\{a \in A_t\} + (1 - \frac{\epsilon |A_t|}{K}) \1\{\pi_t(x_t) = a\}$; \STATE {\bfseries return} $p_t$; \end{algorithmic} $\verb+learn+(x_t, a_t, \ell_t(a_t), p_t)$: \begin{algorithmic} \STATE $\hat{\ell}_t = \verb+estimator+(x_t, a_t, \ell_t(a_t), p_t(a_t))$; \STATE $\hat{c}_t(a) = \begin{cases} \hat{\ell}_t(a), &\text{ if }p_t(a) > 0\\ 1, &\text{ otherwise.} \end{cases}$ \STATE $\pi_{t+1} = \verb+csc_oracle+(\pi_t, x_t, \hat{c}_t)$; \end{algorithmic} \end{algorithm} The simplicity of the $\epsilon$-greedy method described in Section~\ref{sub:e_greedy} often makes it the method of choice for practitioners. However, the uniform exploration over randomly selected actions can be quite inefficient and costly in practice. A natural consideration is to restrict this randomization to actions that could plausibly be selected by the optimal policy $\pi^* = \arg\min_{\pi \in \Pi} L(\pi)$, where $L(\pi) = \E_{(x, \ell) \sim D}[\ell(\pi(x))]$ is the expected loss of a policy~$\pi$. To achieve this, we use techniques from disagreement-based active learning~\citep{hanneke2014theory,hsu2010algorithms}. After observing a context~$x_t$, for any action~$a$, if we can find a policy~$\pi$ that would choose this action ($\pi(x_t) = a$) instead of the empirically best action~$\pi_t(x_t)$, while achieving a small loss on past data, then there is disagreement about how good such an action is, and we allow exploring it. Otherwise, we are confident that the best policy would not choose this action, thus we avoid exploring it, and assign it a high cost. The resulting method is given in Algorithm~\ref{alg:egreedy_active}.
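The \verb+explore+ step above can be sketched as follows (a minimal illustration; note that the greedy action always belongs to $A_t$ since its loss difference is zero):

```python
import numpy as np

def explore_probs(K, a_greedy, A_t, eps):
    """Exploration distribution of active epsilon-greedy:
    eps/K on each action in the disagreement set A_t,
    with the remaining mass on the greedy action."""
    p = np.zeros(K)
    p[list(A_t)] = eps / K
    p[a_greedy] += 1.0 - eps * len(A_t) / K
    return p
```

For instance, with $K=4$, greedy action 2, $A_t=\{1,2\}$ and $\epsilon=0.2$, the distribution puts mass $0.05$ on action 1 and $0.95$ on action 2.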
Like RegCB, the method requires a known loss range $[c_{min}, c_{max}]$, and assigns a loss~$c_{max}$ to such unexplored actions (we consider the range~$[0, 1]$ in Algorithm~\ref{alg:egreedy_active} for simplicity). The disagreement test we use is based on empirical loss differences, similar to the Oracular CAL active learning method~\citep{hsu2010algorithms}, denoted \verb+loss_diff+, together with a~threshold: \[\Delta_{t, C_0} = \sqrt{C_0 \frac{K \log t}{\epsilon t}} + C_0 \frac{K \log t}{\epsilon t}.\] A practical implementation of \verb+loss_diff+ for an online setting is given below. We analyze a theoretical form of this algorithm in Section~\ref{sub:active_analysis}, showing a formal version of the following theorem: \begin{theorem} \label{thm:regret_informal} With high probability, and under favorable conditions on disagreement and on the problem noise, active $\epsilon$-greedy achieves expected regret $O(T^{1/3})$. \end{theorem} Note that this data-dependent guarantee improves on the worst-case guarantees achieved by the optimal algorithms~\citep{agarwal2014taming,dudik2011efficient}. In the extreme case where the loss of any suboptimal policy is bounded away from that of~$\pi^*$, we show that our algorithm can achieve constant regret. While active learning algorithms suggest that data-dependent thresholds~$\Delta_t$ can yield better guarantees~\citep[\emph{e.g.},][]{huang2015efficient}, this may require more work in our setting due to open problems related to data-dependent guarantees for contextual bandits~\citep{agarwal2017open}. In a worst-case scenario, active $\epsilon$-greedy behaves similarly to $\epsilon$-greedy~\citep{langford2008epoch}, achieving an $O(T^{2/3})$ expected regret with high probability.
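The threshold $\Delta_{t, C_0}$ is cheap to compute online; a direct transcription (sketch):

```python
import numpy as np

def delta_threshold(t, K, eps, C0):
    """Delta_{t,C0} = sqrt(C0 K log t / (eps t)) + C0 K log t / (eps t)."""
    u = C0 * K * np.log(t) / (eps * t)
    return np.sqrt(u) + u
```

As expected from the formula, the threshold shrinks as more data is collected, so the disagreement set $A_t$ tightens around the greedy action over time.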
\paragraph{Practical implementation of the disagreement test.} We now present a practical way to implement the disagreement tests in the active $\epsilon$-greedy method, in the context of online cost-sensitive classification oracles based on regression, as in Vowpal Wabbit. This corresponds to the $\verb+loss_diff+$ method in Algorithm~\ref{alg:egreedy_active}. Let~$\hat{L}_{t-1}(\pi)$ denote the empirical loss of policy~$\pi$ on the (biased) sample of cost-sensitive examples collected up to time $t-1$ (see Section~\ref{sub:active_analysis} for details). After observing a context $x_t$, we want to estimate \[ \verb+loss_diff+(\pi_t, x_t, \bar a) \approx \hat{L}_{t-1}(\pi_{t,\bar a}) - \hat{L}_{t-1}(\pi_t), \] for any action $\bar a$, where \begin{align*} \pi_t &= \arg\min_\pi \hat{L}_{t-1}(\pi) \\ \pi_{t,\bar a} &= \arg\min_{\pi:\pi(x_t)=\bar a} \hat{L}_{t-1}(\pi). \end{align*} In our online setup, we take $\pi_t$ to be the current online policy (as in Algorithm~\ref{alg:egreedy_active}), and we estimate the loss difference by looking at how many online CSC examples of the form $\bar c := (\1\{a \ne \bar{a}\})_{a=1..K}$ are needed (or the importance weight on such an example) in order to switch prediction from $\pi_t(x_t)$ to $\bar a$. If we denote this importance weight by~$\tau_{\bar a}$, then we can estimate $\hat L_{t-1}(\pi_{t,\bar{a}}) - \hat L_{t-1}(\pi_t) \approx \tau_{\bar a}/t$. \paragraph{Computing $\tau_{\bar a}$ for IPS/DR.} In the case of IPS/DR, we use an online CSC oracle, which is based on $K$ regressors $f(x, a)$ in VW, each predicting the cost for an action~$a$. Let $f_t$ be the current regressors for policy $\pi_t$, $y_t(a) := f_t(x_t, a)$, and denote by $s_t(a)$ the \emph{sensitivity} of regressor $f_t(\cdot, a)$ on example $(x_t, \bar{c}(a))$. 
This sensitivity is essentially defined to be the derivative with respect to an importance weight~$w$ of the prediction $y'(a)$ obtained from the regressor after an online update $(x_t, \bar{c}(a))$ with importance weight~$w$. A similar quantity has been used, \emph{e.g.}, by~\citet{huang2015efficient,karampatziakis2011online,krishnamurthy2017active}. Then, the predictions on actions $\bar{a}$ and $a$ cross when the importance weight~$w$ satisfies $y_t(\bar{a}) - s_t(\bar{a}) w = y_t(a) + s_t(a) w$. Thus, the importance weight required for action~$\bar{a}$ to be preferred (\emph{i.e.}, smaller predicted loss) to action~$a$ is given by: \[ w_{\bar{a}}^a = \frac{y_t(\bar{a}) - y_t(a)}{s_t(\bar{a}) + s_t(a)}. \] Action~$\bar{a}$ will thus be preferred to all other actions when using an importance weight $\tau_{\bar{a}} = \max_a w_{\bar{a}}^a$. \paragraph{Computing $\tau_{\bar a}$ for \MTR{}.} Although Algorithm~\ref{alg:egreedy_active} and the theoretical analysis require CSC in order to assign a loss of 1 to unexplored actions, and hence do not directly support \MTR{}, we can consider an approximation which leverages the benefits of \MTR{} by performing standard \MTR{} updates as in $\epsilon$-greedy, while exploring only on actions that pass a similar disagreement test. In this case, we estimate $\tau_{\bar a}$ as the importance weight on an online regression example $(x_t, 0)$ for the regressor $f_t(\cdot, \bar a)$, needed to switch prediction to $\bar a$. If $s_t(\bar a)$ is the sensitivity for such an example, we have $\tau_{\bar a} = (y_t(\bar a) - y_t^*) / s_t(\bar a)$, where $y_t^* = \min_a y_t(a)$. \subsection{Theoretical Analysis} \label{sub:active_analysis} This section presents a theoretical analysis of the active $\epsilon$-greedy method introduced in Section~\ref{sub:active_impl}. We begin by presenting the analyzed version of the algorithm together with definitions in Section~\ref{ssub:active_algo}.
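To summarize the practical implementation above in code: given predicted losses $y$ and sensitivities $s$ from the regressors, the crossing importance weight $\tau_{\bar a}$ reduces to a one-liner (a sketch; the clamp at zero when $\bar a$ is already preferred is an added convention, and the inputs are assumed given):

```python
def tau_switch(y, s, a_bar):
    """Importance weight needed for action a_bar to become preferred,
    given predicted losses y[a] and sensitivities s[a]:
    w_abar^a = (y[a_bar] - y[a]) / (s[a_bar] + s[a]), tau = max_a w.
    Returns 0 if a_bar is already the preferred action."""
    w = [(y[a_bar] - y[a]) / (s[a_bar] + s[a])
         for a in range(len(y)) if a != a_bar]
    return max(0.0, max(w))
```

The disagreement test then compares `tau_switch(...) / t` against the threshold $\Delta_{t, C_0}$.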
Section~\ref{ssub:active_correctness} then studies the correctness of the method, showing that with high probability, the actions chosen by the optimal policy are always explored, and that policies considered by the algorithm are always as good as those obtained under standard $\epsilon$-greedy exploration. This section also introduces a Massart-type low-noise condition similar to the one considered by~\citet{krishnamurthy2017active} for cost-sensitive classification. Finally, Section~\ref{ssub:active_regret} provides a regret analysis of the algorithm, both in the worst case and under disagreement conditions together with the Massart noise condition. In particular, a formal version of Theorem~\ref{thm:regret_informal} is given by Theorem~\ref{thm:regret_massart}, and a more extreme but informative situation is considered in Proposition~\ref{prop:policy_gap}, where our algorithm can achieve constant regret. \subsubsection{Algorithm and definitions} \label{ssub:active_algo} We consider a version of the active $\epsilon$-greedy strategy that is more suitable for theoretical analysis, given in Algorithm~\ref{alg:greedy_active}. This method considers exact CSC oracles, as well as a CSC oracle with one constraint on the policy ($\pi(x_t) = a$ in Eq.\eqref{eq:csc_constraint}). The threshold~$\Delta_t$ is defined later in Section~\ref{ssub:active_correctness}. Computing it would require some knowledge about the size of the policy class, which we avoid by introducing a parameter~$C_0$ in the practical variant. The disagreement strategy is based on the Oracular CAL active learning method of~\citet{hsu2010algorithms}, which tests for disagreement using empirical error differences, and considers biased samples when no label is queried. 
Here, similar tests are used to decide which actions should be explored, in the different context of cost-sensitive classification, and the unexplored actions are assigned a loss of 1, making the empirical sample biased ($\hat{Z}_T$ in Algorithm~\ref{alg:greedy_active}). \paragraph{Definitions.} Define $Z_T = \{(x_t, \ell_t)\}_{t=1..T} \subset \mathcal{X} \times {\mathbb R}^K$, $\tilde{Z}_T = \{(x_t, \tilde{\ell}_t)\}_{t=1..T}$ (biased sample) and $\hat{Z}_T = \{(x_t, \hat{\ell}_t)\}_{t=1..T}$ (IPS estimate of biased sample), where $\ell_t \in [0, 1]^K$ is the (unobserved) loss vector at time~$t$ and \begin{align} \tilde{\ell}_t(a) &= \begin{cases} \ell_t(a), &\text{ if }a \in A_t\\ 1, &\text{ o/w} \end{cases} \\ \hat{\ell}_t(a) &= \begin{cases} \frac{\1\{a = a_t\}}{p_t(a_t)} \ell_t(a_t), &\text{ if }a \in A_t\\ 1, &\text{ o/w.} \end{cases} \label{eq:ell_hat} \end{align} For any set $Z \subset \mathcal{X} \times {\mathbb R}^K$ defined as above, we denote, for $\pi \in \Pi$, \begin{align*} L(\pi, Z) = \frac{1}{|Z|} \sum_{(x, c) \in Z} c(\pi(x)). \end{align*} We then define the empirical losses $L_T(\pi) := L(\pi, Z_T)$, $\hat{L}_T(\pi) := L(\pi, \hat{Z}_T)$ and $\tilde{L}_T(\pi) := L(\pi, \tilde{Z}_T)$. Let $L(\pi) := \E_{(x, \ell) \sim D} [ \ell(\pi(x)) ]$ be the expected loss of policy $\pi$, and $\pi^* := \arg\min_{\pi \in \Pi} L(\pi)$. We also define $\rho(\pi, \pi') := P_x(\pi(x) \ne \pi'(x))$, the expected disagreement between policies $\pi$ and $\pi'$, where $P_x$ denotes the marginal distribution of $D$ on contexts. \begin{algorithm}[tb] \caption{active $\epsilon$-greedy: analyzed version} \label{alg:greedy_active} \begin{algorithmic} \STATE {\bfseries Input:} exploration probability $\epsilon$. \STATE Initialize: $\hat{Z}_0 := \emptyset$. \FOR{$t = 1, \ldots$} \STATE Observe context $x_t$. 
Let \begin{align} \pi_t &:= \arg\min_{\pi} L(\pi, \hat{Z}_{t-1}) \nonumber \\ \pi_{t,a} &:= \arg\min_{\pi : \pi(x_t) = a} L(\pi, \hat{Z}_{t-1}) \label{eq:csc_constraint} \\ A_t &:= \{a : L(\pi_{t,a}, \hat{Z}_{t-1}) - L(\pi_{t}, \hat{Z}_{t-1}) \leq \Delta_t \} \label{eq:at_def} \end{align} \STATE Let \[ p_t(a) = \begin{cases} 1 - (|A_t| - 1) \epsilon / K, &\text{ if }a = \pi_t(x_t)\\ \epsilon / K, &\text{ if }a \in A_t \setminus \{\pi_t(x_t)\} \\ 0, &\text{ otherwise.} \end{cases} \] \STATE Play action $a_t \sim p_t$, observe $\ell_t(a_t)$ and set $\hat{Z}_t = \hat{Z}_{t-1} \cup \{(x_t, \hat{\ell}_t)\}$, where $\hat{\ell}_t$ is defined in~\eqref{eq:ell_hat}. \ENDFOR \end{algorithmic} \end{algorithm} \subsubsection{Correctness} \label{ssub:active_correctness} We begin by stating a lemma that controls deviations of empirical loss differences, which relies on Freedman's inequality for martingales~\citep[see, \emph{e.g.},][Lemma 3]{kakade2009generalization}. \begin{lemma}[Deviation bounds] \label{lemma:deviations} With probability $1 - \delta$, the following event holds: for all $\pi \in \Pi$, for all $T \geq 1$, \begin{align} |(\hat{L}_T(\pi) - \hat{L}_T(\pi^*)) - (\tilde{L}_T(\pi) - \tilde{L}_T(\pi^*))| &\leq \sqrt{\frac{2K \rho(\pi, \pi^*) e_T}{\epsilon}} + \left(\frac{K}{\epsilon} + 1 \right) e_T \label{eq:tilde_bound} \\ |(L_T(\pi) - L_T(\pi^*)) - (L(\pi) - L(\pi^*))| &\leq \sqrt{\rho(\pi, \pi^*) e_T} + 2e_T, \label{eq:sample_bound} \end{align} where $e_T = \log (2|\Pi| / \delta_T) / T$ and $\delta_T = \delta / (T^2 + T)$. We denote this event by~$\mathcal{E}$ in what follows. \end{lemma} \begin{proof} We prove the result using Freedman's inequality~\citep[see, \emph{e.g.},][Lemma 3]{kakade2009generalization}, which controls deviations of a sum using the conditional variance of each term in the sum and an almost sure bound on their magnitude, along with a union bound. 
For~\eqref{eq:tilde_bound}, let $(\hat{L}_T(\pi) - \hat{L}_T(\pi^*)) - (\tilde{L}_T(\pi) - \tilde{L}_T(\pi^*)) = \frac{1}{T}\sum_{t=1}^T R_t$, with \begin{align*} R_t = \hat{\ell}_t(\pi(x_t)) - \hat{\ell}_t(\pi^*(x_t)) - (\tilde{\ell}_t(\pi(x_t)) - \tilde{\ell}_t(\pi^*(x_t))). \end{align*} We define the $\sigma$-fields $\mathcal{F}_t := \sigma(\{x_i, \ell_i, a_i\}_{i=1}^t)$. Note that $R_t$ is $\mathcal{F}_t$-measurable and \begin{align*} \E[\hat{\ell}_t(\pi(x_t)) - \hat{\ell}_t(\pi^*(x_t)) | x_t, \ell_t] = \tilde{\ell}_t(\pi(x_t)) - \tilde{\ell}_t(\pi^*(x_t)), \end{align*} so that $\E[R_t | \mathcal{F}_{t-1}] = \E[\E[R_t |x_t, \ell_t] | \mathcal{F}_{t-1}] = 0$. Thus, $(R_t)_{t\geq 1}$ is a martingale difference sequence adapted to the filtration $(\mathcal{F}_t)_{t\geq 1}$. We have \begin{align*} |R_t| \leq |\hat{\ell}_t(\pi(x_t)) - \hat{\ell}_t(\pi^*(x_t))| + |\tilde{\ell}_t(\pi(x_t)) - \tilde{\ell}_t(\pi^*(x_t))| \leq \frac{K}{\epsilon} + 1. \end{align*} Since $\tilde{\ell}_t(\pi(x_t)) - \tilde{\ell}_t(\pi^*(x_t))$ is the conditional expectation of $\hat{\ell}_t(\pi(x_t)) - \hat{\ell}_t(\pi^*(x_t))$ given $(x_t, \ell_t, A_t)$, the conditional variance of $R_t$ is bounded by the corresponding second moment, so that \begin{align*} \E[R_t^2 | \mathcal{F}_{t-1}] &= \E[\E[R_t^2 | x_t, \ell_t, A_t] | \mathcal{F}_{t-1}] \\ &\leq \E[ \E[(\hat{\ell}_t(\pi(x_t)) - \hat{\ell}_t(\pi^*(x_t)))^2| x_t, \ell_t, A_t] | \mathcal{F}_{t-1}] \\ &\leq \E \left[\E \left[\frac{(\1\{\pi(x_t) = a_t\} - \1\{\pi^*(x_t) = a_t\})^2}{p_t(a_t)^2} | x_t, \ell_t, A_t \right] | \mathcal{F}_{t-1} \right] \\ &\leq \E \left[\E \left[\frac{\1\{\pi(x_t) \ne \pi^*(x_t)\}(\1\{\pi(x_t) = a_t\} + \1\{\pi^*(x_t) = a_t\})}{p_t(a_t)^2} | x_t, \ell_t, A_t \right] | \mathcal{F}_{t-1} \right] \\ &\leq \E\left[\frac{2 K \1\{\pi(x_t) \ne \pi^*(x_t)\}}{\epsilon} |\mathcal{F}_{t-1} \right] = \frac{2K}{\epsilon} \rho(\pi, \pi^*), \end{align*} where the last step uses $p_t(a) \geq \epsilon/K$ for every explored action. Freedman's inequality then states that~\eqref{eq:tilde_bound} holds with probability $1 - \delta_T/2 |\Pi|$.
For~\eqref{eq:sample_bound}, we consider a similar setup with \begin{align*} R_t = \ell_t(\pi(x_t)) - \ell_t(\pi^*(x_t)) - (L(\pi) - L(\pi^*)). \end{align*} We have $\E[R_t|\mathcal{F}_{t-1}] = 0$, $|R_t| \leq 2$ and $\E[R_t^2|\mathcal{F}_{t-1}] \leq \rho(\pi, \pi^*)$, which yields that \eqref{eq:sample_bound} holds with probability $1 - \delta_T/2 |\Pi|$ using Freedman's inequality. A union bound on $\pi \in \Pi$ and $T \geq 1$ gives the desired result. \end{proof} \paragraph{Threshold.} We define the threshold $\Delta_T$ used in~\eqref{eq:at_def} in Algorithm~\ref{alg:greedy_active} as: \begin{equation} \label{eq:threshold} \Delta_T := \left(\sqrt{\frac{2K}{\epsilon}} + 1\right) \sqrt{e_{T-1}} + \left( \frac{K}{\epsilon} + 3\right) e_{T-1}. \end{equation} We also define the following more precise deviation quantity for a given policy, which follows directly from the deviation bounds in Lemma~\ref{lemma:deviations}: \begin{equation} \label{eq:deviation_tot} \Delta_T^*(\pi) := \left(\sqrt{\frac{2K}{\epsilon}} + 1\right) \sqrt{\rho(\pi, \pi^*) e_{T-1}} + \left( \frac{K}{\epsilon} + 3\right) e_{T-1}. \end{equation} Note that we have $\Delta_T^*(\pi) \leq \Delta_T$ for any policy $\pi$, since $\rho(\pi, \pi^*) \leq 1$. The next lemma shows that the bias introduced in the empirical sample by assigning a loss of 1 to unexplored actions is favorable, in the sense that it will not hurt us in identifying~$\pi^*$. \begin{lemma}[Favorable bias] \label{lemma:favorable_bias} Assume $\pi^*(x_t) \in A_t$ for all $t \leq T$. We have \begin{equation} \label{eq:favorable_bias} \tilde{L}_{T}(\pi) - \tilde{L}_{T}(\pi^*) \geq L_{T}(\pi) - L_{T}(\pi^*). \end{equation} \end{lemma} \begin{proof} For any $t \leq T$, we have $\tilde{\ell}_t(a) \geq \ell_t(a)$, so that $\tilde{L}_{T}(\pi) \geq L_{T}(\pi)$. Separately, we have $\tilde{\ell}_t(\pi^*(x_t)) = \ell_t(\pi^*(x_t))$ for all $t \leq T$ using the definition of $\tilde{\ell}_t$ and the assumption $\pi^*(x_t) \in A_t$, hence $\tilde{L}_{T}(\pi^*) = L_{T}(\pi^*)$. Combining the two relations yields~\eqref{eq:favorable_bias}.
\end{proof} We now show that with high probability, the optimal action is always explored by the algorithm. \begin{lemma} \label{lemma:pi_star} Assume that event~$\mathcal{E}$ holds. The actions given by the optimal policy are always explored for all $t \geq 1$, i.e., $\pi^*(x_t) \in A_t$ for all $t \geq 1$. \end{lemma} \begin{proof} We show by induction on $T \geq 1$ that $\pi^*(x_t) \in A_t$ for all $t = 1, \ldots, T$. For the base case, we have $A_1 = [K]$ since $\hat{Z}_0 = \emptyset$ and hence all empirical losses are equal to 0, so that $\pi^*(x_1) \in A_1$. Let us now assume as the inductive hypothesis that $\pi^*(x_t) \in A_t$ for all $t \leq T-1$. From the deviation bounds~(\ref{eq:tilde_bound}-\ref{eq:sample_bound}) applied to $\pi_T$, we have \begin{align*} \hat{L}_{T-1}(\pi_T) - \hat{L}_{T-1}(\pi^*) &\geq \tilde{L}_{T-1}(\pi_T) - \tilde{L}_{T-1}(\pi^*) - \left(\sqrt{\frac{2K \rho(\pi_T, \pi^*)e_{T-1}}{\epsilon}} + (K/\epsilon + 1) e_{T-1}\right) \\ L_{T-1}(\pi_T) - L_{T-1}(\pi^*) &\geq L(\pi_T) - L(\pi^*) - \left(\sqrt{\rho(\pi_T, \pi^*)e_{T-1}} + 2e_{T-1}\right). \end{align*} Using Lemma~\ref{lemma:favorable_bias} together with the inductive hypothesis, the above inequalities yield \begin{align*} \hat{L}_{T-1}(\pi_T) - \hat{L}_{T-1}(\pi^*) \geq L(\pi_T) - L(\pi^*) - \Delta_T^*(\pi_T) \geq - \Delta_T^*(\pi_T), \end{align*} where the last inequality uses $L(\pi_T) \geq L(\pi^*)$. Now consider an action $a \notin A_T$. Using the definition~\eqref{eq:at_def} of $A_T$, we have \begin{align*} \hat{L}_{T-1}(\pi_{T,a}) - \hat{L}_{T-1}(\pi^*) &= \hat{L}_{T-1}(\pi_{T,a}) - \hat{L}_{T-1}(\pi_{T}) + \hat{L}_{T-1}(\pi_{T}) - \hat{L}_{T-1}(\pi^*) \\ &> \Delta_T - \Delta_T^*(\pi_T) \geq 0, \end{align*} which implies $\pi^*(x_T) \ne a$, since $\hat{L}_{T-1}(\pi_{T,a})$ is the minimum of $\hat{L}_{T-1}$ over policies satisfying $\pi(x_T)=a$. This yields $\pi^*(x_T) \in A_T$, which concludes the proof. \end{proof} With the previous results, we can now prove that with high probability, discarding some of the actions from the exploration process does not hurt us in identifying good policies.
In particular, $\pi_{T+1}$ is about as good as it would have been with uniform $\epsilon$-exploration all along. \begin{theorem} \label{thm:correctness} Under the event $\mathcal{E}$, which holds with probability $1 - \delta$, \begin{equation*} L(\pi_{T+1}) - L(\pi^*) \leq \Delta_{T+1}^*(\pi_{T+1}). \end{equation*} In particular, $L(\pi_{T+1}) - L(\pi^*) \leq \Delta_{T+1}$. \end{theorem} \begin{proof} Assume event $\mathcal{E}$ holds. Using~(\ref{eq:tilde_bound}-\ref{eq:sample_bound}) combined with Lemma~\ref{lemma:favorable_bias} (which holds by Lemma~\ref{lemma:pi_star}), we have \begin{align*} L(\pi_{T+1}) - L(\pi^*) \leq \hat{L}_T(\pi_{T+1}) - \hat{L}_T(\pi^*) + \Delta_{T+1}^*(\pi_{T+1}) \leq \Delta_{T+1}^*(\pi_{T+1}), \end{align*} where the last inequality uses $\hat{L}_T(\pi_{T+1}) \leq \hat{L}_T(\pi^*)$, which holds because $\pi_{T+1}$ minimizes $\hat{L}_T$. \end{proof} \paragraph{Massart noise condition.} We introduce a low-noise condition that will help us obtain improved regret guarantees. Similar conditions have been frequently used in supervised learning~\citep{massart2006risk} and active learning~\citep{hsu2010algorithms,huang2015efficient,krishnamurthy2017active} for obtaining better data-dependent guarantees. We consider the following Massart noise condition with parameter $\tau > 0$: \begin{equation}\tag{M} \label{eq:massart} \rho(\pi, \pi^*) \leq \frac{1}{\tau} (L(\pi) - L(\pi^*)). \end{equation} This condition holds when $\E[\min_{a \ne \pi^*(x)} \ell(a) - \ell(\pi^*(x)) | x] \geq \tau$, $P_x$-almost surely, which is similar to the Massart condition considered in~\cite{krishnamurthy2017active} in the context of active learning for cost-sensitive classification.
Indeed, we have \begin{align*} L(\pi) - L(\pi^*) &= \E[\1\{\pi(x) \ne \pi^*(x)\} (\ell(\pi(x)) - \ell(\pi^*(x)))] \\ &\qquad + \E[\1\{\pi(x) = \pi^*(x)\} (\ell(\pi(x)) - \ell(\pi^*(x)))] \\ &\geq \E \left[\1\{\pi(x) \ne \pi^*(x)\} \left(\min_{a \ne \pi^*(x)} \ell(a) - \ell(\pi^*(x)) \right) \right] \\ &= \E[\1\{\pi(x) \ne \pi^*(x)\} \E[\min_{a \ne \pi^*(x)} \ell(a) - \ell(\pi^*(x))|x]] \\ &\geq \E[\1\{\pi(x) \ne \pi^*(x)\} \tau] = \tau \rho(\pi, \pi^*), \end{align*} which is precisely~\eqref{eq:massart}. The condition allows us to obtain a fast rate for the policies considered by our algorithm, as we now show. \begin{theorem} Assume the Massart condition~\eqref{eq:massart} holds with parameter~$\tau$. Under the event $\mathcal{E}$, which holds w.p. $1 - \delta$, \begin{equation*} L(\pi_{T+1}) - L(\pi^*) \leq C \frac{K}{\tau \epsilon} e_T, \end{equation*} for some numeric constant~$C$. \end{theorem} \begin{proof} Using Theorem~\ref{thm:correctness} and the Massart condition, we have \begin{align*} L(\pi_{T+1}) - L(\pi^*) &\leq \Delta_{T+1}^*(\pi_{T+1}) = \left(\sqrt{\frac{2K}{\epsilon}} + 1\right) \sqrt{\rho(\pi_{T+1}, \pi^*) e_{T}} + \left( \frac{K}{\epsilon} + 3\right) e_{T} \\ &\leq \left(\sqrt{\frac{2K}{\epsilon}} + 1\right) \sqrt{(L(\pi_{T+1}) - L(\pi^*)) e_{T} / \tau} + \left( \frac{K}{\epsilon} + 3\right) e_{T} \\ &\leq \sqrt{\frac{8K e_{T}}{\tau \epsilon}(L(\pi_{T+1}) - L(\pi^*))} + \frac{4 K e_{T}}{\epsilon}. \end{align*} Solving the quadratic inequality in $L(\pi_{T+1}) - L(\pi^*)$ yields the result: an inequality of the form $x \leq \sqrt{bx} + c$ with $b, c \geq 0$ implies $x \leq b + 4c$, which gives the claim with $C = 24$, using $\tau \leq 1$ (losses lie in $[0,1]$). \end{proof} \subsubsection{Regret Analysis} \label{ssub:active_regret} In a worst-case scenario, the following result shows that Algorithm~\ref{alg:greedy_active} enjoys a similar~$O(T^{2/3})$ regret guarantee to the vanilla $\epsilon$-greedy approach~\citep{langford2008epoch}.
\begin{theorem} Conditioned on the event~$\mathcal{E}$, which holds with probability $1- \delta$, the expected regret of the algorithm is \begin{equation*} \E[R_T | \mathcal{E}] \leq O \left(\sqrt{\frac{KT \log(T|\Pi|/\delta)}{\epsilon}} + T \epsilon \right). \end{equation*} Optimizing over the choice of $\epsilon$ yields a regret $O(T^{2/3} (K \log(T|\Pi|/\delta))^{1/3})$. \end{theorem} \begin{proof} We condition on the $1- \delta$ probability event~$\mathcal{E}$ that the deviation bounds of Lemma~\ref{lemma:deviations} hold. We have \begin{align*} \E[\ell_t(a_t) - \ell_t(\pi^*(x_t)) | \mathcal{F}_{t-1}] &= \E[\1\{a_t = \pi_t(x_t)\} (\ell_t(\pi_t(x_t)) - \ell_t(\pi^*(x_t)))| \mathcal{F}_{t-1}] \\ &\qquad + \E[\1\{a_t \ne \pi_t(x_t)\}(\ell_t(a_t) - \ell_t(\pi^*(x_t))) | \mathcal{F}_{t-1}] \\ &\leq \E[\ell_t(\pi_t(x_t)) - \ell_t(\pi^*(x_t)) | \mathcal{F}_{t-1}] + \E[\E[1 - p_t(\pi_t(x_t)) | x_t] | \mathcal{F}_{t-1}] \\ &\leq L(\pi_t) - L(\pi^*) + \epsilon. \end{align*} Summing over~$t$ and applying Theorem~\ref{thm:correctness} together with $\Delta_t^*(\pi) \leq \Delta_t$, we obtain \begin{align*} \E[R_T | \mathcal{E}] &= \E \left[ \sum_{t=1}^T \ell_t(a_t) - \ell_t(\pi^*(x_t)) | \mathcal{E} \right] \\ &\leq 1 + \sum_{t=2}^T \E[L(\pi_t) - L(\pi^*) + \epsilon | \mathcal{F}_{t-1}, \mathcal{E}] \\ &\leq 1 + T \epsilon + \sum_{t=2}^T \Delta_t. \end{align*} Using $\sum_{t=2}^T \sqrt{e_t} \leq O(\sqrt{T \log(8 T^2 |\Pi|/\delta)})$ and $\sum_{t=2}^T e_t \leq O(\log(8 T^2 |\Pi|/\delta) \log T )$, we obtain \begin{equation*} \E[R_T | \mathcal{E}] \leq O \left(1 + \sqrt{\frac{KT \log(T|\Pi|/\delta)}{\epsilon}} + \frac{K\log(T|\Pi|/\delta)}{\epsilon} \log T + T \epsilon \right), \end{equation*} which yields the result; the choice $\epsilon = (K \log(T|\Pi|/\delta)/T)^{1/3}$, which balances the terms $\sqrt{KT \log(T|\Pi|/\delta)/\epsilon}$ and $T\epsilon$, gives the stated $O(T^{2/3} (K \log(T|\Pi|/\delta))^{1/3})$ bound.
\end{proof} \paragraph{Disagreement definitions.} In order to obtain improvements in regret guarantees over the worst case, we consider notions of disagreement that extend standard definitions from the active learning literature~\cite[\emph{e.g.},][]{hanneke2014theory,hsu2010algorithms,huang2015efficient} to the multiclass case. Let $B(\pi^*, r) := \{\pi \in \Pi : \rho(\pi, \pi^*) \leq r\}$ be the ball centered at~$\pi^*$ under the (pseudo)-metric $\rho(\cdot, \cdot)$. We define the disagreement region~$DIS(r)$ and disagreement coefficient~$\theta$ as follows: \begin{align*} DIS(r) &:= \{x : \exists \pi \in B(\pi^*, r) \quad \pi(x) \ne \pi^*(x)\} \\ \theta &:= \sup_{r>0} \frac{P(x \in DIS(r))}{r}. \end{align*} The next result shows that under the Massart condition and with a finite disagreement coefficient~$\theta$, our algorithm achieves a regret that scales as $O(T^{1/3})$ (up to logarithmic factors), thus improving on worst-case guarantees obtained by optimal algorithms such as~\citet{agarwal2012contextual,agarwal2014taming,dudik2011efficient}. \begin{theorem} \label{thm:regret_massart} Assume the Massart condition~\eqref{eq:massart} holds with parameter~$\tau$. Conditioning on the event~$\mathcal{E}$ which holds w.p. $1 - \delta$, the algorithm has expected regret \begin{equation*} \E[R_T|\mathcal{E}] \leq O \left(\frac{K \log(T|\Pi|/\delta)}{\tau \epsilon} \log T + \frac{\theta}{\tau} \sqrt{\epsilon KT \log(T|\Pi|/\delta)} \right). \end{equation*} Optimizing over the choice of $\epsilon$ yields a regret \begin{equation*} \E[R_T|\mathcal{E}] \leq O \left( \frac{1}{\tau} (\theta K \log(T|\Pi|/\delta))^{2/3} (T \log T)^{1/3} \right). \end{equation*} \end{theorem} \begin{proof} Assume $\mathcal{E}$ holds. Let $t \geq 2$, and assume $a \in A_t \setminus \{\pi^*(x_t)\}$. 
Define \begin{equation*} \pi_a = \begin{cases} \pi_t, &\text{ if }\pi_t(x_t) = a\\ \pi_{t,a}, &\text{ if }\pi_t(x_t) \ne a, \end{cases} \end{equation*} so that we have $\pi_a(x_t) = a \ne \pi^*(x_t)$. \begin{itemize} \item If $\pi_a = \pi_t$, then $L(\pi_a) - L(\pi^*) \leq \Delta_t^*(\pi_a) \leq \Delta_t$ by Theorem~\ref{thm:correctness}. \item If $\pi_a = \pi_{t,a}$, using the deviation bounds together with Lemmas~\ref{lemma:pi_star} and~\ref{lemma:favorable_bias}, we have \begin{align*} L(\pi_a) - L(\pi^*) &= L(\pi_{t,a}) - L(\pi^*) \\ &\leq \hat{L}_{t-1}(\pi_{t,a}) - \hat{L}_{t-1}(\pi^*) + \Delta_t^*(\pi_{t,a}) \\ &= \underbrace{\hat{L}_{t-1}(\pi_{t,a}) - \hat{L}_{t-1}(\pi_t)}_{\leq \Delta_t} + \underbrace{\hat{L}_{t-1}(\pi_t) - \hat{L}_{t-1}(\pi^*)}_{\leq 0} + \Delta_t^*(\pi_{t,a}) \\ &\leq 2 \Delta_t, \end{align*} where the last inequality uses $a \in A_t$ and $\Delta_t^*(\pi_{t,a}) \leq \Delta_t$. \end{itemize} By the Massart assumption, we then have $\rho(\pi_a, \pi^*) \leq 2 \Delta_t / \tau$. Hence, we have $x_t \in DIS(2 \Delta_t / \tau)$. We have thus shown \begin{equation*} \E [\E[\1\{a \in A_t \setminus \{\pi^*(x_t)\}\} | x_t] | \mathcal{F}_{t-1}] \leq \E[P(x_t \in DIS(2 \Delta_t / \tau)) | \mathcal{F}_{t-1}] \leq 2 \theta \Delta_t / \tau.
\end{equation*} We then have \begin{align*} \E[\ell_t(a_t) - \ell_t(\pi^*(x_t)) | \mathcal{F}_{t-1}] &= \E [\1\{a_t = \pi_t(x_t)\} (\ell_t(\pi_t(x_t)) - \ell_t(\pi^*(x_t))) \\ &\quad + \1\{a_t = \pi^*(x_t) \wedge a_t \ne \pi_t(x_t)\} (\ell_t(\pi^*(x_t)) - \ell_t(\pi^*(x_t))) \\ &\quad + \sum_{a=1}^K \1\{a_t = a \wedge a \notin \{\pi_t(x_t), \pi^*(x_t)\}\} (\ell_t(a) - \ell_t(\pi^*(x_t))) | \mathcal{F}_{t-1}] \\ &\leq \E [ \ell_t(\pi_t(x_t)) - \ell_t(\pi^*(x_t)) | \mathcal{F}_{t-1}] \\ &\quad + \E \left[\sum_{a=1}^K \E[\1\{a_t = a \} \1\{a \notin \{\pi_t(x_t), \pi^*(x_t)\}\}|x_t] | \mathcal{F}_{t-1} \right] \\ &= L(\pi_t) - L(\pi^*) + \sum_{a=1}^K \E[ \E[ p_t(a) \1\{a \notin \{\pi_t(x_t), \pi^*(x_t)\}\} | x_t] | \mathcal{F}_{t-1}] \\ &\leq L(\pi_t) - L(\pi^*) + \frac{\epsilon}{K} \sum_{a=1}^K \E[\E[\1\{a \in A_t \setminus\{\pi^*(x_t)\}\} | x_t] | \mathcal{F}_{t-1}] \\ &\leq C \frac{K}{\tau \epsilon} e_{t-1} + 2 \epsilon \theta \Delta_t / \tau, \end{align*} where we used \begin{align*} p_t(a) \1\{a \notin \{\pi_t(x_t), \pi^*(x_t)\}\} &= \frac{\epsilon}{K} \1\{a \in A_t \setminus \{\pi_t(x_t), \pi^*(x_t)\}\} \\ &\leq \frac{\epsilon}{K} \1\{a \in A_t \setminus \{\pi^*(x_t)\}\}. \end{align*} Summing over $t$ and taking total expectations (conditioned on $\mathcal{E}$) yields \begin{equation*} \E[R_T|\mathcal{E}] \leq O \left(\frac{K \log(T|\Pi|/\delta)}{\tau \epsilon} \log T + \frac{\epsilon \theta}{\tau} \left(\sqrt{\frac{KT \log(T|\Pi|/\delta)}{\epsilon}} + \frac{K\log(T|\Pi|/\delta)}{\epsilon} \log(T) \right) \right), \end{equation*} and the result follows. \end{proof} Finally, we look at a simpler instructive example, which considers an extreme situation where the expected loss of any suboptimal policy is bounded away from that of the optimal policy. In this case, Algorithm~\ref{alg:greedy_active} can achieve constant regret when the disagreement coefficient is bounded, as shown by the following result. 
\begin{proposition} \label{prop:policy_gap} Assume that $L(\pi) - L(\pi^*) \geq \tau > 0$ for all $\pi \ne \pi^*$, and that $\theta < \infty$. Under the event~$\mathcal{E}$, the algorithm achieves constant expected regret. In particular, the algorithm stops incurring regret for $T > T_0 := \max \{t : 2 \Delta_t \geq \tau \}$. \end{proposition} \begin{proof} By Theorem~\ref{thm:correctness} and our assumption, we have $L(\pi_t) - L(\pi^*) \leq \1\{\Delta_t \geq \tau\} \Delta_t$. Similarly, the assumption implies that $\rho(\pi, \pi^*) \leq \1\{L(\pi) - L(\pi^*) \geq \tau\}$, so that using similar arguments to the proof of Theorem~\ref{thm:regret_massart}, we have \begin{equation*} \E [\E[\1\{a \in A_t \setminus \{\pi^*(x_t)\}\} | x_t] | \mathcal{F}_{t-1}] \leq \theta \1\{2 \Delta_t \geq \tau\}. \end{equation*} Following the proof of Theorem~\ref{thm:regret_massart}, this implies that when~$t$ is such that $2 \Delta_t < \tau$, then we have \begin{align*} \E[\ell_t(a_t) - \ell_t(\pi^*(x_t)) | \mathcal{F}_{t-1}] = 0. \end{align*} With $T_0 := \max \{t : 2 \Delta_t \geq \tau \}$ as in the statement, we thus have \begin{align*} \E[R_T | \mathcal{E}] \leq 1 + \sum_{t=2}^{T_0} (\Delta_t + \epsilon). \end{align*} \end{proof}
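To make the quantities used in this analysis concrete, here is a minimal NumPy sketch (illustrative only; the helper names are ours) of the loss estimates in~\eqref{eq:ell_hat} and the threshold~\eqref{eq:threshold}:

```python
import math

import numpy as np

def ips_losses(ell_obs, a_t, p_t, A_t, K):
    """Loss vector of eq. (ell_hat): unexplored actions are assigned loss 1;
    explored actions get an inverse-propensity-scored estimate built from
    the single observed loss ell_obs = ell_t(a_t)."""
    ell_hat = np.ones(K)
    for a in A_t:
        ell_hat[a] = float(a == a_t) / p_t[a_t] * ell_obs
    return ell_hat

def e(t, n_policies, delta):
    """Deviation quantity e_t = log(2|Pi|/delta_t)/t with delta_t = delta/(t^2 + t)."""
    delta_t = delta / (t ** 2 + t)
    return math.log(2 * n_policies / delta_t) / t

def threshold(T, K, eps, n_policies, delta):
    """Delta_T of eq. (threshold): an action a stays in A_T when the
    constrained empirical loss of pi_{T,a} is within Delta_T of the minimum."""
    e_prev = e(T - 1, n_policies, delta)
    return (math.sqrt(2 * K / eps) + 1) * math.sqrt(e_prev) + (K / eps + 3) * e_prev
```

On explored actions the estimate is unbiased: averaging `ips_losses` over $a_t \sim p_t$ recovers $\tilde{\ell}_t$, while unexplored actions keep the pessimistic loss of~1.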
\subsection{$\epsilon$-greedy and greedy} \label{sub:e_greedy} \begin{algorithm}[tb] \caption{$\epsilon$-greedy} \label{alg:egreedy} $\pi_1$; $\epsilon > 0$ (or $\epsilon = 0$ for Greedy). $\verb+explore+(x_t)$: \begin{algorithmic} \STATE {\bfseries return} $p_t(a) = \epsilon / K + (1 - \epsilon) \1\{\pi_{t}(x_t) = a\}$; \end{algorithmic} $\verb+learn+(x_t, a_t, \ell_t(a_t), p_t)$: \begin{algorithmic} \STATE $\pi_{t+1} = \verb+oracle+(\pi_t, x_t, a_t, \ell_t(a_t), p_t(a_t))$; \end{algorithmic} \end{algorithm} We consider an importance-weighted variant of the epoch-greedy approach of~\citet{langford2008epoch}, given in Algorithm~\ref{alg:egreedy}. The method acts greedily with probability~$1 - \epsilon$, and otherwise explores uniformly on all actions.
Learning is achieved by reduction to off-policy optimization, through any of the three reductions presented in Section~\ref{sub:reductions}. We also experimented with a variant we call active $\epsilon$-greedy, which uses notions from disagreement-based active learning~\citep{hanneke2014theory,hsu2010algorithms} to restrict uniform exploration to the actions that could plausibly be taken by the optimal policy. While this variant often improves on the basic~$\epsilon$-greedy method, we found it frequently outperformed empirically by other exploration algorithms, and thus defer its presentation and theoretical analysis to Appendix~\ref{sec:active_e_greedy_appx}. \paragraph{Greedy.} When taking $\epsilon = 0$ in the $\epsilon$-greedy approach, with the IWR reduction, we are left with a fully greedy approach that always selects the action given by the current policy. This gives us an online variant of the greedy algorithm of~\citet{bastani2017exploiting}, which regresses on observed losses and acts by selecting the action with minimum predicted loss. Although this greedy strategy does not have an explicit mechanism for exploration in its choice of actions, the inherent diversity in the distribution of contexts may provide sufficient exploration for good performance and provable regret guarantees~\citep{bastani2017exploiting,kannan2018smoothed}. In particular, under appropriate assumptions including a diversity assumption on the contexts, one can show that all actions have a non-zero probability of being selected at each step, providing a form of ``natural'' exploration from which one can establish regret guarantees. Empirically, we find that Greedy can perform very well in practice on many datasets (see Section~\ref{sec:experiments}). If multiple actions get the same score according to the current regressor, we break ties randomly.
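As a minimal illustration (the helper names are ours, not from the implementation), the $\epsilon$-greedy exploration distribution and the random tie-breaking rule can be sketched as:

```python
import random

def egreedy_distribution(greedy_a, K, eps):
    """p_t(a) = eps/K + (1 - eps) * 1{a = greedy_a}; eps = 0 recovers Greedy."""
    return [eps / K + (1.0 - eps) * (a == greedy_a) for a in range(K)]

def greedy_choice(scores):
    """Select an action with minimal predicted loss, breaking ties randomly."""
    best = min(scores)
    return random.choice([a for a, s in enumerate(scores) if s == best])
```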
\subsection{Bagging (online Bootstrap Thompson sampling)} \label{sub:bagging} \begin{algorithm}[th] \caption{Bag} \label{alg:bag} $\pi_1^1, \ldots, \pi_1^N$. $\verb+explore+(x_t)$: \begin{algorithmic} \STATE {\bfseries return} $p_t(a) \propto |\{i : \pi_t^i(x_t) = a\}|$;\footnotemark \end{algorithmic} $\verb+learn+(x_t, a_t, \ell_t(a_t), p_t)$: \begin{algorithmic} \FOR{$i = 1, \ldots, N$} \STATE $\tau^i \sim Poisson(1)$; \hfill \COMMENT{with $\tau^1 = 1$ for bag-greedy} \STATE $\pi_{t+1}^i = \verb+oracle+^{\tau^i}(\pi_t^i, x_t, a_t, \ell_t(a_t), p_t(a_t))$; \ENDFOR \end{algorithmic} \end{algorithm} \footnotetext{When policies are parametrized using regressors as in our implementation, we let $\pi_t^i(x)$ be uniform over all actions tied for the lowest cost, and the final distribution is uniform across all actions tied for best according to one of the policies in the bag. The added randomization gives useful variance reduction in our experiments.} We now consider a variant of Thompson sampling which is usable in practice with optimization oracles. Thompson sampling provides a generic approach to exploration problems, which maintains a belief on the data generating model in the form of a posterior distribution given the observed data, and explores by selecting actions according to a model sampled from this posterior~\citep[see, \emph{e.g.},][]{agrawal2013thompson,chapelle2011empirical,russo2017tutorial,thompson1933likelihood}. While the generality of this strategy makes it attractive, maintaining this posterior distribution can be intractable for complex policy classes, and may require strong modeling assumptions. In order to overcome such difficulties and to support the optimization oracles considered in this paper, we rely on an approximation of Thompson sampling known as the online Bootstrap Thompson sampling~\citep{eckles2014thompson,osband2015bootstrapped}, or bagging~\citep{agarwal2014taming}. 
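The Bootstrap step of Algorithm~\ref{alg:bag} admits a short sketch (illustrative only; the function name is ours) of the $\tau^i \sim Poisson(1)$ update counts:

```python
import numpy as np

def bootstrap_update_counts(n_policies, rng, greedy_first=False):
    """Draw tau^i ~ Poisson(1) for each policy in the bag: policy i is
    updated tau^i times on the current example, so after t steps it has
    been trained on t examples in expectation. greedy_first = True forces
    tau^1 = 1 (the bag-greedy variant)."""
    taus = rng.poisson(1.0, size=n_policies)
    if greedy_first:
        taus[0] = 1
    return taus
```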
This approach, shown in Algorithm~\ref{alg:bag}, maintains a collection of~$N$ policies $\pi_t^1, \ldots, \pi_t^N$ meant to approximate the posterior distribution over policies via the online Bootstrap~\citep{agarwal2014taming,eckles2014thompson,osband2015bootstrapped,oza2001online,qin2013efficient}, and explores in a Thompson sampling fashion, by averaging action decisions across all policies (hence the name \emph{bagging}). Each policy is trained on a different online Bootstrap sample of the observed data, in the form of interaction records. The online Bootstrap performs a random number $\tau$ of online updates to each policy instead of one (this is denoted by $\verb+oracle+^\tau$ in Algorithm~\ref{alg:bag}). We use a Poisson distribution with parameter~1 for~$\tau$, which ensures that in expectation, each policy is trained on~$t$ examples after~$t$ steps. In contrast to~\citet{eckles2014thompson,osband2015bootstrapped}, which play the arm given by one of the~$N$ policies chosen at random, we compute the full action distribution~$p_t$ resulting from such a sampling, and leverage this for loss estimation, allowing learning by reduction to off-policy optimization as in~\citet{agarwal2014taming}. As in the~$\epsilon$-greedy algorithm, Bagging directly relies on off-policy learning and thus all three reductions are admissible. \paragraph{Greedy bagging.} We also consider a simple optimization that we call \emph{greedy bagging}, for which the first policy~$\pi^1$ is trained on the true data sample (like Greedy), that is, with~$\tau$ always equal to one, instead of a bootstrap sample with random choices of~$\tau$. We found this approach to often improve on bagging, particularly when the number of policies~$N$ is~small. \subsection{Cover} \label{sub:cover} \begin{algorithm}[tb] \caption{Cover} \label{alg:cover} $\pi_1^1, \ldots, \pi_1^N$; $\epsilon_t = \min(1/K, 1/\sqrt{Kt})$; $\psi > 0$. 
$\verb+explore+(x_t)$: \begin{algorithmic} \STATE $p_t(a) \propto |\{i : \pi_t^i(x_t) = a\}|$; \STATE {\bfseries return} $\epsilon_t + (1 - \epsilon_t) p_t$; \hfill \COMMENT{for cover} \STATE {\bfseries return} $p_t$; \hfill \COMMENT{for cover-nu} \end{algorithmic} $\verb+learn+(x_t, a_t, \ell_t(a_t), p_t)$: \begin{algorithmic} \STATE $\pi_{t+1}^1 = \verb+oracle+(\pi_t^1, x_t, a_t, \ell_t(a_t), p_t(a_t))$; \STATE $\hat{\ell}_t = \verb+estimator+(x_t, a_t, \ell_t(a_t), p_t(a_t))$; \FOR{$i = 2, \ldots, N$} \STATE $q_i(a) \propto |\{j \leq i - 1 : \pi_{t+1}^j(x_t) = a\}|$; \STATE $\hat{c}(a) = \hat{\ell}_t(a) - \frac{\psi \epsilon_t}{\epsilon_t + (1 - \epsilon_t)q_i(a)}$; \STATE $\pi_{t+1}^i = \verb+csc_oracle+(\pi_t^i, x_t, \hat{c})$; \ENDFOR \end{algorithmic} \end{algorithm} This method, given in Algorithm~\ref{alg:cover}, is based on Online Cover, an online approximation of the ``ILOVETOCONBANDITS'' algorithm of~\citet{agarwal2014taming}. The approach maintains a collection of $N$ policies, $\pi_t^1, \ldots, \pi_t^N$, meant to approximate a covering distribution over policies that are good for both exploration and exploitation. The first policy $\pi_t^1$ is trained on observed data using the oracle as in previous algorithms, while subsequent policies are trained using modified cost-sensitive examples which encourage diversity in the predicted actions compared to the previous policies. Our implementation differs from the Online Cover algorithm of~\citet[Algorithm 5]{agarwal2014taming} in how the diversity term in the definition of~$\hat{c}(a)$ is handled (the second term). When creating cost-sensitive examples for a given policy~$\pi^i$, this term rewards an action~$a$ that is not well-covered by previous policies (\emph{i.e.}, small~$q_i(a)$), by subtracting from the cost a term that decreases with $q_i(a)$. 
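The construction of the modified costs $\hat{c}(a)$ can be sketched in a few lines of Python (names are ours; \verb+ell_hat+ is the estimated loss vector and \verb+prev_actions+ the actions chosen by $\pi^1, \ldots, \pi^{i-1}$ on $x_t$):

```python
import numpy as np

def cover_costs(ell_hat, prev_actions, K, eps_t, psi):
    """Diversity-adjusted costs for training policy pi^i in Cover:
    subtract from the estimated losses a reward that is large for
    actions not covered by previous policies and decays with q_i(a)."""
    q = np.bincount(np.asarray(prev_actions), minlength=K).astype(float)
    q /= q.sum()                                   # q_i(a)
    reward = psi * eps_t / (eps_t + (1.0 - eps_t) * q)
    return np.asarray(ell_hat, dtype=float) - reward
```

An uncovered action ($q_i(a) = 0$) receives the full reward $\psi$, making it cheap for $\pi^i$ to pick, whereas well-covered actions receive only $\approx \psi\epsilon_t$.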
While Online Cover considers a fixed $\epsilon_t = \epsilon$, we let~$\epsilon_t$ decay with~$t$, and introduce a parameter~$\psi$ to control the overall reward term, bringing our variant closer to the algorithm analyzed by~\citet{agarwal2014taming}. In particular, the magnitude of the reward is~$\psi$ whenever action~$a$ is not covered by previous policies (\emph{i.e.}, $q_i(a) = 0$), but decays with $\psi \epsilon_t$ whenever $q_i(a) > 0$, so that the level of induced diversity can decrease over time as we gain confidence that good policies are covered. \paragraph{Cover-NU.} While Cover requires some uniform exploration across all actions, our experiments suggest that this can make exploration highly inefficient; we therefore introduce a variant, \emph{Cover-NU}, with \emph{n}o \emph{u}niform exploration outside the set of actions selected by covering policies. \begin{algorithm}[tb] \caption{RegCB} \label{alg:regcb} $f_1$; $C_0 > 0$. $\verb+explore+(x_t)$: \begin{algorithmic} \STATE $l_t(a) = \verb+lcb+(f_t, x_t, a, \Delta_{t,C_0})$; \STATE $u_t(a) = \verb+ucb+(f_t, x_t, a, \Delta_{t,C_0})$; \STATE $p_t(a) \propto \1\{a \in \arg\min_{a'} l_t(a')\}$; \hfill \COMMENT{RegCB-opt variant} \STATE $p_t(a) \propto \1\{l_t(a) \leq \min_{a'} u_t(a')\}$; \hfill \COMMENT{RegCB-elim variant} \STATE {\bfseries return} $p_t$; \end{algorithmic} $\verb+learn+(x_t, a_t, \ell_t(a_t), p_t)$: \begin{algorithmic} \STATE $f_{t+1} = \verb+reg_oracle+(f_t, x_t, a_t, \ell_t(a_t))$; \end{algorithmic} \end{algorithm} \subsection{RegCB} \label{sub:regcb} We consider online approximations of the two algorithms introduced by~\citet{foster2018practical} based on regression oracles, shown in Algorithm~\ref{alg:regcb}. Both algorithms estimate confidence intervals of the loss for each action given the current context~$x_t$, denoted~$[l_t(a), u_t(a)]$ in Algorithm~\ref{alg:regcb}, by considering predictions from a subset of regressors with small squared loss.
The \emph{optimistic} variant then selects the action with smallest lower bound estimate, similar to LinUCB, while the \emph{elimination} variant explores uniformly on actions that may plausibly be the best. More formally, the RegCB algorithm theoretically analyzed by~\citet{foster2018practical} defines the confidence bounds as follows: \[ l_t(a) = \min_{f \in \mathcal F_t} f(x_t, a), \quad \text{and} \quad u_t(a) = \max_{f \in \mathcal F_t} f(x_t, a). \] Here,~$\mathcal F_t$ is a subset of regressors that is ``good'' for loss estimation, in the sense that it achieves a small regression loss on observed data, $\hat R_{t-1}(f) := \frac{1}{t-1} \sum_{s=1}^{t-1} (f(x_s, a_s) - \ell_s(a_s))^2$, compared to the best regressor in the full regressor class~$\mathcal F$: \[ \mathcal F_t := \{f \in \mathcal F : \hat R_{t-1}(f) - \min_{f \in \mathcal F} \hat R_{t-1}(f) \leq \Delta_t\}, \] where~$\Delta_t$ is a quantity decreasing with~$t$. Our online implementation computes approximations of these upper and lower bounds on the loss of each action, by using a sensitivity analysis of the current regressor based on importance weighting taken from~\citet{krishnamurthy2017active} in the context of active learning (the computations are denoted~$\verb+lcb+$ and~$\verb+ucb+$ in Algorithm~\ref{alg:regcb}). The algorithm maintains a regressor~$f_t : \mathcal{X} \times \{1, \ldots, K\} \to {\mathbb R}$ and, given a new context~$x_t$, computes lower and upper confidence bounds $l_t(a) \leq f_t(x_t, a) \leq u_t(a)$. These are computed by adding ``virtual'' importance-weighted regression examples with low and high costs, and finding the largest importance weight leading to an excess squared loss smaller than~$\Delta_{t, C_0}$, where \[\Delta_{t,C_0} = \frac{C_0 \log (Kt)}{t},\] and~$C_0$ is a parameter controlling the width of the confidence bounds. This importance weight can be found using regressor sensitivities and a binary search procedure as described in~\citep[Section 7.1]{krishnamurthy2017active}.
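The binary-search idea can be illustrated on a toy one-dimensional regressor, where the fit for one action is simply the mean of its observed losses. The names and the closed-form sensitivity computation below are ours, a simplified analogue of the procedure rather than the actual VW implementation:

```python
def largest_weight(excess, delta, w_max=1e6, iters=60):
    # largest importance weight w with excess(w) <= delta,
    # assuming excess is nondecreasing in w
    if excess(w_max) <= delta:
        return w_max
    lo, hi = 0.0, w_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if excess(mid) <= delta:
            lo = mid
        else:
            hi = mid
    return lo

def mean_confidence_bound(ys, y_virtual, delta):
    """Toy analogue of lcb/ucb: a virtual example with label y_virtual
    and weight w pulls the mean fit toward y_virtual; we keep the
    largest pull whose excess squared loss on the original data stays
    below delta, and return the resulting shifted prediction."""
    n, mu = len(ys), sum(ys) / len(ys)
    def excess(w):
        mu_w = (n * mu + w * y_virtual) / (n + w)  # refit with virtual example
        return n * (mu_w - mu) ** 2                # excess loss on original data
    w = largest_weight(excess, delta)
    return (n * mu + w * y_virtual) / (n + w)
```

With a low virtual label this yields a lower confidence bound below the empirical mean, and with a high virtual label an upper bound above it; the bounds tighten as $\delta$ (playing the role of $\Delta_{t,C_0}$) shrinks.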
Note that this requires knowledge of the loss range~$[c_{\min}, c_{\max}]$, unlike other methods. In contrast to~\citet{krishnamurthy2017active}, we set the labels of the ``virtual'' examples to $c_{\min} - 1$ for the lower bound and~$c_{\max} + 1$ for the upper bound, instead of $c_{\min}$ and $c_{\max}$. \subsection{Learning Setting} The stochastic~(i.i.d.) contextual bandit learning problem can be described as follows. At each time step~$t$, the environment produces a pair~$(x_t, \ell_t) \sim D$ independently from the past, where~$x_t \in \mathcal{X}$ is a context vector and~$\ell_t = (\ell_t(1), \ldots, \ell_t(K)) \in {\mathbb R}^K$ is a loss vector, with~$K$ the number of possible actions, and the data distribution is denoted~$D$. After observing the context~$x_t$, the learner chooses an action~$a_t$, and only observes the loss~$\ell_t(a_t)$ corresponding to the chosen action. The goal of the learner is to trade-off exploration and exploitation in order to incur a small cumulative regret \begin{equation*} R_T := \sum_{t=1}^T \ell_t(a_t) - \sum_{t=1}^T \ell_t(\pi^*(x_t)), \end{equation*} with respect to the optimal policy~$\pi^* \in \arg\min_{\pi \in \Pi} \E_{(x, \ell) \sim D}[\ell(\pi(x))]$, where~$\Pi$ denotes a (large, possibly infinite) set of policies~$\pi : \mathcal{X} \to \{1, \ldots, K\}$ which we would like to do well against. It is often important for the learner to use randomized strategies, for instance in order to later evaluate or optimize new policies, hence we let~$p_t(a) \in [0,1]$ denote the probability that the agent chooses action~$a \in \{1, \ldots, K\}$ at time~$t$, so that~$a_t\sim p_t$. \subsection{Optimization Oracles} \label{sub:oracles} In this paper, we focus on CB algorithms which rely on access to an \emph{optimization oracle} for solving optimization problems similar to those that arise in supervised learning, leading to methods that are suitable for general policy classes~$\Pi$. 
The main example is the \textbf{cost-sensitive classification (CSC) oracle}~\citep{agarwal2014taming,dudik2011efficient,langford2008epoch}, which given a collection $(x_1, c_1), \ldots, (x_T, c_T) \in \mathcal{X} \times {\mathbb R}^K$ computes \begin{equation} \label{eq:csc_oracle} \arg\min_{\pi\in \Pi} \sum_{t=1}^T c_t(\pi(x_t)). \end{equation} The cost vectors~$c_t = (c_t(1), \ldots, c_t(K)) \in {\mathbb R}^K$ are often constructed using counterfactual estimates of the true (unobserved) losses, as we describe in the next section. Another approach is to use \textbf{regression oracles}, which find~$f : \mathcal{X} \times \{1, \ldots, K\} \to {\mathbb R}$ from a class of regressor functions~$\mathcal{F}$ to predict a cost~$y_t$, given a context~$x_t$ and action~$a_t$~\citep[see, \emph{e.g.},][]{agarwal2012contextual,foster2018practical}. In this paper, we consider the following regression oracle with importance weights $\omega_t > 0$: \begin{equation} \label{eq:reg_oracle} \arg\min_{f\in \mathcal{F}} \sum_{t=1}^T \omega_t(f(x_t, a_t) - y_t)^2. \end{equation} While the theory typically requires exact solutions to~\eqref{eq:csc_oracle} or~\eqref{eq:reg_oracle}, this is often impractical due to the difficulty of the underlying optimization problem (especially for CSC, which yields a non-convex and non-smooth problem), and more importantly because the size of the problems to be solved keeps increasing after each iteration. In this work, we consider instead the use of \emph{online optimization oracles} for solving problems~\eqref{eq:csc_oracle} or~\eqref{eq:reg_oracle}, which incrementally update a given policy or regression function after each new observation, using for instance an online gradient method. Such an online learning approach is natural in the CB setting, and is common in interactive production systems~\citep[\emph{e.g.},][]{agarwal2016multiworld,he2014practical,mcmahan2013ad}. 
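As a concrete (hypothetical) instance, an online approximation of the importance-weighted regression oracle~\eqref{eq:reg_oracle} with a linear parameterization can be as simple as one gradient step per observation; VW's actual updates are adaptive, normalized and importance-weight aware, which this sketch omits:

```python
import numpy as np

class OnlineRegressionOracle:
    """One SGD step per observation on omega * (f(x, a) - y)^2,
    with f(x, a) = theta_a^T x (simplified linear parameterization)."""
    def __init__(self, dim, K, lr=0.1):
        self.theta = np.zeros((K, dim))
        self.lr = lr
    def predict(self, x, a):
        return float(self.theta[a] @ x)
    def update(self, x, a, y, omega=1.0):
        # half-gradient of omega * (theta_a^T x - y)^2 w.r.t. theta_a
        # (the factor 2 is absorbed into the learning rate)
        self.theta[a] -= self.lr * omega * (self.predict(x, a) - y) * x
```

An online CSC oracle can be approximated similarly, with one such regressor per action predicting the cost $c_t(a)$ and the induced policy choosing the action of smallest predicted cost.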
\subsection{Loss Estimates and Reductions} \label{sub:reductions} A common approach to solving problems with bandit (partial) feedback is to compute an estimate of the full feedback using the observed loss and then apply methods for the full-information setting to these estimated values. In the case of CBs, this allows an algorithm to find a good policy based on \emph{off-policy} exploration data collected by the algorithm. These loss estimates are commonly used to create CSC instances to be solved by the optimization oracle introduced above~\citep{agarwal2014taming,dudik2011efficient,langford2008epoch}, a process sometimes referred to as \emph{reduction} to cost-sensitive classification. Given such estimates~$\hat \ell_t(a)$ of~$\ell_t(a)$ for all actions~$a$ and for~$t = 1, \ldots, T$, such a reduction constructs cost vectors $c_t = (\hat \ell_t(1), \ldots, \hat \ell_t(K)) \in {\mathbb R}^K$ and feeds them along with the observed contexts~$x_t$ to the CSC oracle~\eqref{eq:csc_oracle} in order to obtain a policy. We now describe the three different estimation methods considered in this paper, and how each is typically used for reduction to policy learning with a CSC or regression oracle. In what follows, we consider observed interaction records~$(x_t, a_t, \ell_t(a_t), p_t(a_t))$. Perhaps the simplest approach is the \textbf{inverse propensity-scoring} (IPS) estimator: \begin{equation} \label{eq:ips} \hat{\ell}_t(a) := \frac{\ell_t(a_t)}{p_t(a_t)} \1\{a = a_t\}. \end{equation} For any action~$a$ with $p_t(a) > 0$, this estimator is unbiased, \emph{i.e.}~$\E_{a_t\sim p_t}[\hat{\ell}_t(a)] = \ell_t(a)$, but can have high variance when~$p_t(a_t)$ is small. The estimator leads to a straightforward CSC example~$(x_t, \hat{\ell}_t)$. Using such examples in~\eqref{eq:csc_oracle} provides a way to perform off-policy (or counterfactual) evaluation and optimization, which in turn allows a CB algorithm to identify good policies for exploration.
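In code, the IPS estimate of the full loss vector from a single interaction record is a one-liner (a minimal sketch; the function name is ours):

```python
def ips_estimate(K, a_t, loss_at, p_at):
    """IPS estimate of the K-dimensional loss vector: the observed
    loss, importance-weighted by 1/p_t(a_t), on the chosen action,
    and zero on all other actions."""
    return [loss_at / p_at if a == a_t else 0.0 for a in range(K)]
```

Averaging this estimate over $a_t \sim p_t$ recovers the true loss vector exactly whenever $p_t$ has full support, which is the unbiasedness property stated above.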
In order to obtain good unbiased estimates, one needs to control the variance of the estimates, \emph{e.g.}, by enforcing a minimum exploration probability~$p_t(a) \geq \epsilon > 0$ on all actions. In order to reduce the variance of IPS, the \textbf{doubly robust} (DR) estimator~\citep{dudik2011doubly} uses a separate, possibly biased, estimator of the loss $\hat{\ell}(x, a)$: \begin{equation} \label{eq:dr} \hat{\ell}_t(a) := \frac{\ell_t(a_t) - \hat{\ell}(x_t, a_t)}{p_t(a_t)} \1\{a = a_t\} + \hat{\ell}(x_t, a). \end{equation} When $\hat{\ell}(x_t, a_t)$ is a good estimate of~$\ell_t(a_t)$, the small numerator in the first term helps reduce the variance induced by a small denominator, while the second term ensures that the estimator is unbiased. Typically, $\hat{\ell}(x, a)$ is learned by regression on all past observed losses, \emph{e.g.}, \begin{equation} \label{eq:loss_estimator_def} \hat \ell := \arg\min_{f \in \mathcal F} \sum_{t=1}^T (f(x_t, a_t) - \ell_t(a_t))^2. \end{equation} The reduction to cost-sensitive classification is similar to IPS, by feeding cost vectors~$c_t = \hat \ell_t$ to the CSC oracle. We consider a third method that directly reduces to the importance-weighted regression oracle~\eqref{eq:reg_oracle}, which we refer to as IWR (for \textbf{importance-weighted regression}), and is suitable for algorithms which rely on off-policy learning.\footnote{Note that IWR is not directly applicable to methods that explicitly reduce to CSC oracles, such as~\citet{agarwal2014taming,dudik2011efficient}.} This approach finds a regressor \begin{equation} \label{eq:mtr} \hat{f} := \arg\min_{f \in \mathcal{F}} \sum_{t=1}^T \frac{1}{p_t(a_t)}(f(x_t, a_t) - \ell_t(a_t))^2, \end{equation} and considers the policy~$\hat{\pi}(x) = \arg\min_a \hat{f}(x, a)$. Such an estimator has been used, \emph{e.g.}, in the context of off-policy learning for recommendations~\citep{schnabel2016recommendations} and is available in the Vowpal Wabbit library. 
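For concreteness, the DR estimate~\eqref{eq:dr} from a single interaction record can be sketched as follows (names are ours; the baseline predictions $\hat{\ell}(x_t, a)$ are passed in as a precomputed vector):

```python
def dr_estimate(K, a_t, loss_at, p_at, ell_hat):
    """Doubly robust estimate: baseline predictions ell_hat plus an
    importance-weighted correction on the observed action. The
    estimator stays unbiased for any (possibly biased) baseline."""
    est = [float(ell_hat[a]) for a in range(K)]
    est[a_t] += (loss_at - ell_hat[a_t]) / p_at
    return est
```

When the baseline is accurate, the correction term is small even for small $p_t(a_t)$, which is precisely the variance reduction over IPS described above.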
Note that if~$p_t$ has full support, then the objective is an unbiased estimate of the full regression objective on all actions, \begin{align*} \sum_{t=1}^T \sum_{a=1}^K (f(x_t, a) - \ell_t(a))^2. \end{align*} In contrast, if the learner only explores a single action (so that $p_t(a_t) = 1$ for all~$t$), the obtained regressor~$\hat f$ is the same as the loss estimator~$\hat \ell$ in~\eqref{eq:loss_estimator_def}. In this case, if we consider a linear class of regressors of the form $f(x, a) = \theta_a^{\top} x$ with $x \in {\mathbb R}^d$, then the \MTR{} reduction computes least-squares estimates~$\hat{\theta}_a$ from the data observed when action~$a$ was chosen. When actions are selected according to the greedy policy $a_t = \arg\min_a \hat{\theta}_a^\top x_t$, this setup corresponds to the greedy algorithm considered, \emph{e.g.}, by~\citet{bastani2017exploiting}. Note that while CSC is typically intractable and requires approximations in order to work in practice, importance-weighted regression does not suffer from these issues. In addition, while the computational cost for an approximate CSC online update scales with the number of actions~$K$, \MTR{} only requires an update for a single action, making the approach more attractive computationally. Another benefit of \MTR{} in an online setting is that it can leverage importance-weight-aware online updates~\citep{karampatziakis2011online}, which makes it easier to handle large inverse propensity scores. \subsection{Experimental Setup} \label{sub:exp_setup} Our experiments are conducted by simulating the contextual bandit setting using multiclass or cost-sensitive classification datasets, and use the online learning system Vowpal Wabbit (VW). \paragraph{Simulated contextual bandit setting.} The experiments in this paper are based on leveraging supervised cost-sensitive classification datasets for simulating CB learning.
In particular, we treat a CSC example $(x_t, c_t) \in \mathcal{X} \times {\mathbb R}^K$ as a CB example, with $x_t$ given as the context to a CB algorithm, and we only reveal the loss for the chosen action~$a_t$. For a multiclass example with label~$y_t \in \{1, \ldots, K\}$, we set $c_t(a) := \1\{a \ne y_t\}$; for multilabel examples with label set~$Y_t \subseteq \{1, \ldots, K\}$, we set $c_t(a) := \1\{a \notin Y_t\}$; the cost-sensitive datasets we consider have $c_t \in [0,1]^K$. We consider more general \emph{loss encodings} defined with an additive offset on the cost by: \begin{equation} \label{eq:loss_enc_def} \ell_t^{c}(a) = c + c_t(a), \end{equation} for some~$c\in {\mathbb R}$. Although some techniques attempt to remove a dependence on such encoding choices through appropriately designed counterfactual loss estimators~\citep{dudik2011doubly,swaminathan2015self}, these may be imperfect in practice, and particularly in an online scenario. The behavior observed for different choices of~$c$ allows us to get a sense of the robustness of the algorithms to the scale of observed losses, which might be unknown. Separately, different values of~$c$ can lead to lower variance for loss estimation in different scenarios: $c = 0$ might be preferred if $c_t(a)$ is often 0, while $c = -1$ is preferred when $c_t(a)$ is often 1. In order to have a meaningful comparison between different algorithms, loss encodings, as well as supervised multiclass classification, our evaluation metrics consider the original costs~$c_t$. \paragraph{Online learning in VW.} Online learning is an important tool for having machine learning systems that quickly and efficiently adapt to observed data~\citep{agarwal2016multiworld,he2014practical,mcmahan2013ad}. 
We run our CB algorithms in an online fashion using Vowpal Wabbit: instead of exact solutions of the optimization oracles from Section~\ref{sub:oracles}, we consider online variants of the CSC and regression oracles, which incrementally update the policies or regressors with online gradient steps or variants thereof. Note that in VW, online CSC itself reduces to multiple online regression problems (one per action), so that we are left with only online regression steps. More specifically, we use adaptive~\citep{duchi2011adaptive}, normalized~\citep{ross2013normalized} and importance-weight-aware~\citep{karampatziakis2011online} gradient updates, with a single tunable step-size parameter. \paragraph{Parameterization.} We consider linearly parameterized policies taking the form $\pi(x) = \arg\min_a \theta_a^\top x$, or in the case of the IWR reduction, regressors $f(x, a) = \theta_a^\top x$. For the DR loss estimator, we use a similar linear parameterization $\hat{\ell}(x, a) = \phi_a^\top x$. We note that the algorithms we consider do not rely on this specific form, and easily extend to more complex, problem-dependent representations, such as action-dependent features. Some datasets in our evaluation have such an action-dependent structure, with different feature vectors~$x_a$ for different actions~$a$; in this case we use parameterizations of the form $f(x, a) = \theta^\top x_a$, and $\hat{\ell}(x, a) = \phi^\top x_a$, where the parameters~$\theta$ and~$\phi$ are shared across all actions. \subsection{Algorithms and Hyperparameters} \label{sub:hyperparams_appx} We ran each method on every dataset with the following hyperparameters: \begin{itemize} \item algorithm-specific \emph{hyperparameters}, shown in Table~\ref{table:algos}. \item 9 choices of \emph{learning rates}, on a logarithmic grid from 0.001 to 10 (see Section~\ref{sub:exp_setup}). \item 3 choices of \emph{reductions}: IPS, DR and IWR (see Section~\ref{sub:reductions}).
Note that these mainly apply to methods that reduce to off-policy optimization (\emph{i.e.}, ($\epsilon$-)greedy and bagging), and to some extent, methods that reduce to cost-sensitive classification (\emph{i.e.}, cover and active $\epsilon$-greedy, though the IWR reduction is heuristic in this case). Both RegCB variants directly reduce to~regression. \item 3 choices of loss \emph{encodings}: 0/1, -1/0 and 9/10 (see Eq.~\eqref{eq:loss_enc_def}). 0/1 and -1/0 encodings are typically a design choice, while the experiments with 9/10 are aimed at assessing some robustness to loss range. \end{itemize} \begin{table}[tb] \caption{Choices of hyperparameters and reduction for each method. Fixed choices of hyperparameters for -1/0 encodings are in \textbf{bold}. These were obtained for each method with an instant-runoff voting mechanism on 200 of the multiclass datasets with -1/0 encoding, where each dataset ranks hyperparameter choices according to the difference between significant wins and losses against all other choices (the vote of each dataset is divided by the number of tied choices ranked first). Table~\ref{table:hyperparams2} shows optimized choices of hyperparameters for different encoding settings used in our study. 
} \label{table:algos} \center \begin{tabular}{ | c | c | c | c | } \hline Name & Method & Hyperparameters & Reduction \\ \hline G & Greedy & - & \textbf{IWR} \\ \hline R/RO & RegCB-elim/RegCB-opt & $C_0 \in 10^{-\{1, 2, \textbf{3}\}}$ & - \\ \hline \multirow{2}{*}{C-nu} & \multirow{2}{*}{Cover-NU} & $N \in \{\textbf{4}, 8, 16\}$ & \multirow{2}{*}{IPS/\textbf{DR}}\\ & & $\psi \in \{0.01, \textbf{0.1}, 1\}$ & \\ \hline \multirow{2}{*}{C-u} & \multirow{2}{*}{Cover} & $N \in \{\textbf{4}, 8, 16\}$ & \multirow{2}{*}{\textbf{IPS}/DR}\\ & & $\psi \in \{0.01, \textbf{0.1}, 1\}$ & \\ \hline B/B-g & Bag/Bag-greedy & $N \in \{\textbf{4}, 8, 16\}$ & IPS/DR/\textbf{IWR} \\ \hline $\epsilon$G & $\epsilon$-greedy & $\epsilon \in \{\textbf{0.02}, 0.05, 0.1\}$ & IPS/DR/\textbf{IWR} \\ \hline \multirow{2}{*}{A} & \multirow{2}{*}{active $\epsilon$-greedy} & $\epsilon \in \{\textbf{0.02}, 1\}$ & \multirow{2}{*}{IPS/DR/\textbf{IWR}} \\ & & $C_0 \in 10^{-\{2, 4, \textbf{6}\}}$ & \\ \hline \end{tabular} \end{table} \begin{table} \caption{Optimized choices of hyperparameters for different encoding settings, obtained using the voting mechanism described in Table~\ref{table:algos}: -1/0 (same as bold choices in Table~\ref{table:algos}, used in Tables~\ref{table:win_loss_diff}(top left), \ref{table:cost_sensitive}ce, \ref{table:breakdown}, \ref{table:win_loss_multiclass}a, \ref{table:win_loss_multilab}a, \ref{table:win_loss_uci}a and in the figures); 0/1 (used in Tables~\ref{table:win_loss_diff}(bottom left), \ref{table:cost_sensitive}abd, \ref{table:win_loss_multiclass}b, \ref{table:win_loss_multilab}b, \ref{table:win_loss_uci}b, \ref{table:rcv1cs_all}). 
} \label{table:hyperparams2} \small \center \begin{tabular}{|c|c|c|} \hline Algorithm & \textbf{-1/0} & \textbf{0/1} \\ \hline G & - & - \\ \hline R/RO & $C_0 = 10^{-3}$ & $C_0 = 10^{-3}$ \\ \hline C-nu & $N = 4, \psi = 0.1$, DR & $N = 4, \psi = 0.01$, DR \\ \hline C-u & $N = 4, \psi = 0.1$, IPS & $N = 4, \psi = 0.1$, DR \\ \hline B & $N = 4$, IWR & $N = 16$, IWR \\ \hline B-g & $N = 4$, IWR & $N = 8$, IWR \\ \hline $\epsilon$G & $\epsilon = 0.02$, IWR & $\epsilon = 0.02$, IWR \\ \hline A & $\epsilon = 0.02, C_0 = 10^{-6}$, IWR & $\epsilon = 0.02, C_0 = 10^{-6}$, IWR \\ \hline \end{tabular} \end{table} \subsection{Additional Evaluation Results} \label{sub:results_appx} This section provides additional experimental results, and more detailed win/loss statistics for tables in the main paper, showing both significant wins and significant losses, rather than just their difference. \paragraph{Extended tables.} Tables~\ref{table:win_loss_multiclass} and~\ref{table:win_loss_multilab} are extended versions of Table~\ref{table:win_loss_diff}, showing both significant wins and losses, more methods, and separate statistics for multiclass and multilabel datasets. In particular, we can see that both variants of RegCB become even more competitive against all other methods when using 0/1 encodings. Table~\ref{table:rcv1cs_all} extends Table~\ref{table:cost_sensitive}(a) with additional methods. Table~\ref{table:reductions_win_loss} is a more detailed win/loss version of Table~\ref{table:reductions}, and additionally shows statistics for 0/1 encodings. We also show separate statistics in Table~\ref{table:win_loss_uci} for the 8 datasets from the UCI repository considered in~\citep{foster2018practical}, which highlight that Greedy can outperform RegCB on some of these datasets, and that the optimistic variant of RegCB is often superior to the elimination variant.
We note that our experimental setup is quite different from~\citet{foster2018practical}, who consider batch learning on a doubling epoch schedule, which might explain some of the differences in the results. \begin{table}[tb] \caption{\emph{Statistically significant} wins / losses of all methods on the 324 held-out multiclass classification datasets. Hyperparameters are fixed as given in Table~\ref{table:hyperparams2}. } \label{table:win_loss_multiclass} \tiny \center \begin{tabular}{ | l | c | c | c | c | c | c | c | c | c | } \hline $\downarrow$ vs $\rightarrow$ & G & R & RO & C-nu & B & B-g & $\epsilon$G & C-u & A \\ \hline G & - & 22 / 25 & 8 / 16 & 41 / 33 & 77 / 15 & 65 / 16 & 59 / 6 & 159 / 10 & 33 / 10 \\ \hline R & 25 / 22 & - & 16 / 23 & 49 / 31 & 74 / 19 & 61 / 20 & 70 / 12 & 159 / 11 & 48 / 15 \\ \hline RO & 16 / 8 & 23 / 16 & - & 47 / 21 & 76 / 13 & 60 / 13 & 71 / 4 & 166 / 6 & 46 / 7 \\ \hline C-nu & 33 / 41 & 31 / 49 & 21 / 47 & - & 66 / 26 & 51 / 31 & 75 / 19 & 159 / 10 & 50 / 29 \\ \hline B & 15 / 77 & 19 / 74 & 13 / 76 & 26 / 66 & - & 11 / 32 & 46 / 41 & 126 / 16 & 26 / 54 \\ \hline B-g & 16 / 65 & 20 / 61 & 13 / 60 & 31 / 51 & 32 / 11 & - & 49 / 31 & 130 / 11 & 27 / 42 \\ \hline $\epsilon$G & 6 / 59 & 12 / 70 & 4 / 71 & 19 / 75 & 41 / 46 & 31 / 49 & - & 125 / 14 & 2 / 37 \\ \hline C-u & 10 / 159 & 11 / 159 & 6 / 166 & 10 / 159 & 16 / 126 & 11 / 130 & 14 / 125 & - & 9 / 152 \\ \hline A & 10 / 33 & 15 / 48 & 7 / 46 & 29 / 50 & 54 / 26 & 42 / 27 & 37 / 2 & 152 / 9 & - \\ \hline \end{tabular} (a) -1/0 encoding \vspace{0.3cm} \begin{tabular}{ | l | c | c | c | c | c | c | c | c | c | } \hline $\downarrow$ vs $\rightarrow$ & G & R & RO & C-nu & B & B-g & $\epsilon$G & C-u & A \\ \hline G & - & 27 / 66 & 5 / 69 & 36 / 51 & 75 / 38 & 72 / 39 & 69 / 21 & 151 / 30 & 36 / 19 \\ \hline R & 66 / 27 & - & 15 / 38 & 42 / 24 & 87 / 12 & 83 / 16 & 108 / 19 & 172 / 4 & 77 / 22 \\ \hline RO & 69 / 5 & 38 / 15 & - & 59 / 13 & 109 / 7 & 105 / 9 & 121 / 3 & 175 / 3 &
89 / 4 \\ \hline C-nu & 51 / 36 & 24 / 42 & 13 / 59 & - & 83 / 27 & 72 / 29 & 96 / 25 & 170 / 6 & 63 / 31 \\ \hline B & 38 / 75 & 12 / 87 & 7 / 109 & 27 / 83 & - & 21 / 34 & 59 / 49 & 131 / 14 & 37 / 67 \\ \hline B-g & 39 / 72 & 16 / 83 & 9 / 105 & 29 / 72 & 34 / 21 & - & 65 / 46 & 129 / 18 & 42 / 54 \\ \hline $\epsilon$G & 21 / 69 & 19 / 108 & 3 / 121 & 25 / 96 & 49 / 59 & 46 / 65 & - & 122 / 28 & 3 / 43 \\ \hline C-u & 30 / 151 & 4 / 172 & 3 / 175 & 6 / 170 & 14 / 131 & 18 / 129 & 28 / 122 & - & 22 / 151 \\ \hline A & 19 / 36 & 22 / 77 & 4 / 89 & 31 / 63 & 67 / 37 & 54 / 42 & 43 / 3 & 151 / 22 & - \\ \hline \end{tabular} (b) 0/1 encoding \end{table} \begin{table}[tb] \small \center \caption{\emph{Statistically significant} wins / losses of all methods on the 5 multilabel classification datasets. Hyperparameters are fixed as given in Table~\ref{table:hyperparams2}. } \label{table:win_loss_multilab} \begin{tabular}{ | l | c | c | c | c | c | c | c | c | c | } \hline $\downarrow$ vs $\rightarrow$ & G & R & RO & C-nu & B & B-g & $\epsilon$G & C-u & A \\ \hline G & - & 2 / 2 & 1 / 0 & 3 / 1 & 2 / 2 & 3 / 2 & 2 / 1 & 4 / 1 & 2 / 1 \\ \hline R & 2 / 2 & - & 2 / 2 & 2 / 0 & 3 / 0 & 3 / 0 & 3 / 2 & 5 / 0 & 3 / 2 \\ \hline RO & 0 / 1 & 2 / 2 & - & 2 / 2 & 2 / 2 & 3 / 1 & 2 / 1 & 4 / 1 & 2 / 1 \\ \hline C-nu & 1 / 3 & 0 / 2 & 2 / 2 & - & 3 / 1 & 3 / 1 & 3 / 2 & 5 / 0 & 3 / 2 \\ \hline B & 2 / 2 & 0 / 3 & 2 / 2 & 1 / 3 & - & 1 / 1 & 3 / 2 & 5 / 0 & 3 / 2 \\ \hline B-g & 2 / 3 & 0 / 3 & 1 / 3 & 1 / 3 & 1 / 1 & - & 2 / 3 & 4 / 1 & 2 / 3 \\ \hline $\epsilon$G & 1 / 2 & 2 / 3 & 1 / 2 & 2 / 3 & 2 / 3 & 3 / 2 & - & 4 / 1 & 1 / 0 \\ \hline C-u & 1 / 4 & 0 / 5 & 1 / 4 & 0 / 5 & 0 / 5 & 1 / 4 & 1 / 4 & - & 1 / 4 \\ \hline A & 1 / 2 & 2 / 3 & 1 / 2 & 2 / 3 & 2 / 3 & 3 / 2 & 0 / 1 & 4 / 1 & - \\ \hline \end{tabular} (a) -1/0 encoding \vspace{0.3cm} \begin{tabular}{ | l | c | c | c | c | c | c | c | c | c | } \hline $\downarrow$ vs $\rightarrow$ & G & R & RO & C-nu & B & B-g & 
$\epsilon$G & C-u & A \\ \hline G & - & 3 / 1 & 2 / 2 & 1 / 3 & 4 / 0 & 4 / 1 & 4 / 0 & 5 / 0 & 3 / 0 \\ \hline R & 1 / 3 & - & 0 / 3 & 1 / 4 & 2 / 2 & 2 / 3 & 2 / 2 & 4 / 1 & 2 / 2 \\ \hline RO & 2 / 2 & 3 / 0 & - & 1 / 2 & 5 / 0 & 4 / 0 & 3 / 1 & 5 / 0 & 3 / 1 \\ \hline C-nu & 3 / 1 & 4 / 1 & 2 / 1 & - & 4 / 1 & 3 / 1 & 4 / 0 & 4 / 1 & 4 / 1 \\ \hline B & 0 / 4 & 2 / 2 & 0 / 5 & 1 / 4 & - & 0 / 2 & 1 / 2 & 2 / 1 & 1 / 3 \\ \hline B-g & 1 / 4 & 3 / 2 & 0 / 4 & 1 / 3 & 2 / 0 & - & 2 / 2 & 4 / 1 & 1 / 3 \\ \hline $\epsilon$G & 0 / 4 & 2 / 2 & 1 / 3 & 0 / 4 & 2 / 1 & 2 / 2 & - & 4 / 1 & 0 / 1 \\ \hline C-u & 0 / 5 & 1 / 4 & 0 / 5 & 1 / 4 & 1 / 2 & 1 / 4 & 1 / 4 & - & 0 / 5 \\ \hline A & 0 / 3 & 2 / 2 & 1 / 3 & 1 / 4 & 3 / 1 & 3 / 1 & 1 / 0 & 5 / 0 & - \\ \hline \end{tabular} (b) 0/1 encoding \end{table} \begin{table}[tb] \small \center \caption{\emph{Statistically significant} wins / losses of all methods on the 8 classification datasets from the UCI repository considered in~\citep{foster2018practical}. Hyperparameters are fixed as given in Table~\ref{table:hyperparams2}. 
} \label{table:win_loss_uci} \begin{tabular}{ | l | c | c | c | c | c | c | c | c | c | } \hline $\downarrow$ vs $\rightarrow$ & G & R & RO & C-nu & B & B-g & $\epsilon$G & C-u & A \\ \hline \textbf{G} & - & 4 / 0 & 2 / 1 & 5 / 2 & 5 / 0 & 4 / 1 & 6 / 0 & 6 / 0 & 5 / 0 \\ \hline R & 0 / 4 & - & 0 / 1 & 3 / 1 & 4 / 1 & 3 / 1 & 4 / 2 & 6 / 0 & 2 / 2 \\ \hline RO & 1 / 2 & 1 / 0 & - & 4 / 1 & 5 / 0 & 5 / 0 & 5 / 0 & 6 / 0 & 3 / 0 \\ \hline C-nu & 2 / 5 & 1 / 3 & 1 / 4 & - & 3 / 1 & 2 / 2 & 3 / 2 & 7 / 0 & 2 / 3 \\ \hline B & 0 / 5 & 1 / 4 & 0 / 5 & 1 / 3 & - & 0 / 2 & 3 / 2 & 6 / 0 & 2 / 2 \\ \hline B-g & 1 / 4 & 1 / 3 & 0 / 5 & 2 / 2 & 2 / 0 & - & 4 / 1 & 6 / 0 & 2 / 2 \\ \hline $\epsilon$G & 0 / 6 & 2 / 4 & 0 / 5 & 2 / 3 & 2 / 3 & 1 / 4 & - & 6 / 0 & 0 / 2 \\ \hline C-u & 0 / 6 & 0 / 6 & 0 / 6 & 0 / 7 & 0 / 6 & 0 / 6 & 0 / 6 & - & 0 / 6 \\ \hline A & 0 / 5 & 2 / 2 & 0 / 3 & 3 / 2 & 2 / 2 & 2 / 2 & 2 / 0 & 6 / 0 & - \\ \hline \end{tabular} (a) -1/0 encoding \vspace{0.3cm} \begin{tabular}{ | l | c | c | c | c | c | c | c | c | c | } \hline $\downarrow$ vs $\rightarrow$ & G & R & RO & C-nu & B & B-g & $\epsilon$G & C-u & A \\ \hline G & - & 1 / 5 & 0 / 5 & 2 / 2 & 3 / 2 & 3 / 1 & 3 / 1 & 7 / 0 & 2 / 1 \\ \hline R & 5 / 1 & - & 0 / 1 & 5 / 0 & 6 / 0 & 6 / 0 & 6 / 0 & 8 / 0 & 5 / 0 \\ \hline \textbf{RO} & 5 / 0 & 1 / 0 & - & 5 / 0 & 6 / 0 & 6 / 0 & 7 / 0 & 7 / 0 & 6 / 0 \\ \hline C-nu & 2 / 2 & 0 / 5 & 0 / 5 & - & 2 / 1 & 3 / 0 & 2 / 1 & 7 / 0 & 1 / 1 \\ \hline B & 2 / 3 & 0 / 6 & 0 / 6 & 1 / 2 & - & 3 / 0 & 1 / 3 & 7 / 0 & 0 / 3 \\ \hline B-g & 1 / 3 & 0 / 6 & 0 / 6 & 0 / 3 & 0 / 3 & - & 1 / 3 & 7 / 0 & 0 / 3 \\ \hline $\epsilon$G & 1 / 3 & 0 / 6 & 0 / 7 & 1 / 2 & 3 / 1 & 3 / 1 & - & 7 / 0 & 0 / 1 \\ \hline C-u & 0 / 7 & 0 / 8 & 0 / 7 & 0 / 7 & 0 / 7 & 0 / 7 & 0 / 7 & - & 0 / 7 \\ \hline A & 1 / 2 & 0 / 5 & 0 / 6 & 1 / 1 & 3 / 0 & 3 / 0 & 1 / 0 & 7 / 0 & - \\ \hline \end{tabular} (b) 0/1 encoding \end{table} \begin{table}[tb] \caption{Progressive validation loss for 
RCV1 with real-valued costs. Same as Table~\ref{table:cost_sensitive}(a), but with all methods. Hyperparameters are fixed as given in Table~\ref{table:hyperparams2}. The learning rate is optimized once on the original dataset, and we show mean and standard error based on 10 different random reshufflings of the dataset.} \label{table:rcv1cs_all} \small \center \begin{tabular}{ | c | c | c | c | c | } \hline G & R & RO & C-nu & C-u \\ \hline 0.215 $\pm$ 0.010 & 0.408 $\pm$ 0.003 & 0.225 $\pm$ 0.008 & 0.215 $\pm$ 0.006 & 0.570 $\pm$ 0.023 \\ \hline \end{tabular} \begin{tabular}{| c | c | c | c |} \hline B & B-g & $\epsilon$G & A \\ \hline 0.256 $\pm$ 0.006 & 0.251 $\pm$ 0.005 & 0.230 $\pm$ 0.009 & 0.230 $\pm$ 0.010 \\ \hline \end{tabular} \end{table} \begin{table}[tb] \caption{Impact of reductions for Bag (left) and $\epsilon$-greedy (right), with hyperparameters optimized and encoding fixed to -1/0 or 0/1. Extended version of Table~\ref{table:reductions}. Each (row, column) entry shows the \emph{statistically significant} wins and losses of row against column.} \label{table:reductions_win_loss} \small \center \begin{tabular}{ | l | c | c | c | } \hline $\downarrow$ vs $\rightarrow$ & ips & dr & iwr \\ \hline ips & - & 30 / 72 & 25 / 84 \\ \hline dr & 72 / 30 & - & 30 / 58 \\ \hline iwr & 84 / 25 & 58 / 30 & - \\ \hline \end{tabular} ~~\begin{tabular}{ | l | c | c | c | } \hline $\downarrow$ vs $\rightarrow$ & ips & dr & iwr \\ \hline ips & - & 85 / 22 & 16 / 149 \\ \hline dr & 22 / 85 & - & 9 / 164 \\ \hline iwr & 149 / 16 & 164 / 9 & - \\ \hline \end{tabular} (a) -1/0 encoding \vspace{0.3cm} \begin{tabular}{ | l | c | c | c | } \hline $\downarrow$ vs $\rightarrow$ & ips & dr & iwr \\ \hline ips & - & 46 / 240 & 17 / 245 \\ \hline dr & 240 / 46 & - & 33 / 97 \\ \hline iwr & 245 / 17 & 97 / 33 & - \\ \hline \end{tabular} ~~\begin{tabular}{ | l | c | c | c | } \hline $\downarrow$ vs $\rightarrow$ & ips & dr & iwr \\ \hline ips & - & 40 / 136 & 36 / 182 \\ \hline dr & 
136 / 40 & - & 23 / 148 \\ \hline iwr & 182 / 36 & 148 / 23 & - \\ \hline \end{tabular} (b) 0/1 encoding \end{table} \paragraph{Varying~$C_0$ in RegCB-opt and active $\epsilon$-greedy.} Figure~\ref{fig:ro} shows a comparison between RegCB-opt and Greedy or Cover-NU on our corpus, for different values of~$C_0$, which controls the level of exploration through the width of confidence bounds. Figure~\ref{fig:active} shows the improvements that the active $\epsilon$-greedy algorithm can achieve compared to $\epsilon$-greedy, under different settings. \begin{figure}[tb] \centering \includegraphics[width=0.30\columnwidth]{figures/scatter/regcbopt_1e-1.pdf} \includegraphics[width=0.30\columnwidth]{figures/scatter/regcbopt_1e-2.pdf} \includegraphics[width=0.30\columnwidth]{figures/scatter/regcbopt_1e-3.pdf} \includegraphics[width=0.30\columnwidth]{figures/scatter/regcbopt_cnu_1e-1.pdf} \includegraphics[width=0.30\columnwidth]{figures/scatter/regcbopt_cnu_1e-2.pdf} \includegraphics[width=0.30\columnwidth]{figures/scatter/regcbopt_cnu_1e-3.pdf} \caption{Comparison of RegCB-opt with Greedy (top) and Cover-NU (bottom) for different values of~$C_0$. Hyperparameters for Greedy and Cover-NU fixed as in Table~\ref{table:algos}. Encoding fixed to -1/0. The plots consider normalized loss on held-out datasets, with red points indicating significant wins. 
} \label{fig:ro} \end{figure} \begin{figure}[tb] \centering \small \includegraphics[width=0.30\columnwidth]{figures/scatter/active_drn10_1e-2.pdf} \includegraphics[width=0.30\columnwidth]{figures/scatter/active_drn10_1e-4.pdf} \includegraphics[width=0.30\columnwidth]{figures/scatter/active_drn10_1e-6.pdf}\\ (a) DR\\ \vspace{0.2cm} \includegraphics[width=0.30\columnwidth]{figures/scatter/active_mtrn10_1e-2.pdf} \includegraphics[width=0.30\columnwidth]{figures/scatter/active_mtrn10_1e-4.pdf} \includegraphics[width=0.30\columnwidth]{figures/scatter/active_mtrn10_1e-6.pdf}\\ (b) \MTR{} \caption{Improvements to $\epsilon$-greedy from our active learning strategy. Encoding fixed to -1/0. The \MTR{} implementation described in Section~\ref{sub:active_impl} still manages to often outperform $\epsilon$-greedy, despite only providing an approximation to Algorithm~\ref{alg:egreedy_active}.} \label{fig:active} \end{figure} \paragraph{Counterfactual evaluation.} Figure~\ref{fig:cfe_appx} extends Figure~\ref{fig:cfe} to include all algorithms, and additionally shows results of using IPS estimates directly on the losses~$\ell_t(a_t)$ instead of rewards $1 - \ell_t(a_t)$, which tend to be significantly worse. 
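To make the counterfactual estimator concrete, the following is a minimal sketch (not the evaluation code used for the figures; the function and argument names are ours) of the IPS value estimate of the uniform random policy from logged triples $(a_t, p_t, \ell_t)$, applied either to rewards $1 - \ell_t(a_t)$ or directly to the losses:

```python
import numpy as np

def ips_value(probs, losses, num_actions, on_rewards=True):
    """IPS estimate of the expected reward of the uniform random policy,
    from exploration logs: probs[t] is the logging probability p_t of the
    chosen action, losses[t] the observed loss. Illustrative sketch only."""
    pi = 1.0 / num_actions                 # uniform target policy probability
    w = pi / np.asarray(probs, dtype=float)  # importance weights
    losses = np.asarray(losses, dtype=float)
    if on_rewards:
        # IPS on rewards 1 - loss (as in the figure's top plots)
        return float(np.mean(w * (1.0 - losses)))
    # IPS on the loss, converted back to a reward scale (bottom plots)
    return 1.0 - float(np.mean(w * losses))
```

Both estimators are unbiased, but they differ on finite samples; the text above reports that the loss-based variant tends to be significantly worse in practice.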
\begin{figure}[tb] \centering \begin{subfigure}[c]{.45\textwidth} \includegraphics[width=\textwidth]{figures/explogs/rips_neg10_full.pdf} \caption{all multiclass datasets} \end{subfigure} \begin{subfigure}[c]{.45\textwidth} \includegraphics[width=\textwidth]{figures/explogs/rips_neg10_full10000.pdf} \caption{$n \geq 10\,000$ only} \end{subfigure} \begin{subfigure}[c]{.45\textwidth} \includegraphics[width=\textwidth]{figures/explogs/ips_neg10_full.pdf} \caption{all multiclass datasets} \end{subfigure} \begin{subfigure}[c]{.45\textwidth} \includegraphics[width=\textwidth]{figures/explogs/ips_neg10_full10000.pdf} \caption{$n \geq 10\,000$ only} \end{subfigure} \caption{Errors of IPS counterfactual estimates for the uniform random policy using exploration logs collected by various algorithms on multiclass datasets (extended version of Figure~\ref{fig:cfe}). The boxes show quartiles (with the median shown as a blue line) of the distribution of squared errors across all multiclass datasets or only those with at least 10\,000 examples. The logs are obtained by running each algorithm with -1/0 encodings, fixed hyperparameters from Table~\ref{table:algos}, and the best learning rate on each dataset according to progressive validation loss. The top plots consider IPS with \emph{reward} estimates (as in Figure~\ref{fig:cfe}), while the bottom plots consider IPS on the \emph{loss}.} \label{fig:cfe_appx} \end{figure} \subsection{Shared \emph{baseline} parameterization} \label{sub:baseline} We also experimented with the use of an \emph{action-independent additive baseline} term in our loss estimators, which can help learn better estimates with fewer samples in some situations. In this case the regressors take the form $f(x, a) = \theta_0 + \theta_a^\top x$ (\MTR{}) or $\hat{\ell}(x, a) = \phi_0 + \phi_a^\top x$ (DR). 
In order to learn the baseline term more quickly, we propose to use a separate online update for the parameters $\theta_0$ or $\phi_0$ to regress on observed losses, followed by an online update on the residual for the action-dependent part. We scale the step-size of these baseline updates by the largest observed magnitude of the loss, in order to adapt to the observed loss range for normalized updates~\citep{ross2013normalized}. Such an additive baseline can be helpful to quickly adapt to a constant loss estimate thanks to the separate online update. This appears particularly useful with the -1/0 encoding, for which the initialization at 0 may give pessimistic loss estimates; these can be particularly damaging for the greedy method, which often gets its initial exploration from an optimistic cost encoding. This can be seen in Figure~\ref{fig:baseline}(top). Table~\ref{table:baseline_opt} shows that optimizing over the use of \emph{baseline} on each dataset can improve the performance of Greedy and RegCB-opt when compared to other methods such as Cover-NU. In an online learning setting, baseline can also help to quickly reach an unknown target range of loss estimates. This is demonstrated in Figure~\ref{fig:baseline}(bottom), where the addition of baseline is shown to help various methods with 9/10 encodings on a large number of datasets. We do not evaluate RegCB for 9/10 encodings as it needs a priori known upper and lower bounds on costs.
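The two-stage update can be sketched as follows (an illustrative implementation under our own naming and step-size conventions, not the actual Vowpal Wabbit code): a shared baseline $\theta_0$ is first regressed on the raw loss with a step-size scaled by the largest observed loss magnitude, and the action-dependent weights are then updated on the residual.

```python
import numpy as np

class BaselineRegressor:
    """Loss regressor f(x, a) = theta0 + theta_a . x with a shared additive
    baseline, mimicking the two-stage online update described above.
    Names, step-sizes and the plain-SGD updates are illustrative."""
    def __init__(self, dim, num_actions, lr=0.1, lr_base=0.1):
        self.theta0 = 0.0
        self.theta = np.zeros((num_actions, dim))
        self.lr, self.lr_base = lr, lr_base
        self.max_abs_loss = 1e-8  # largest observed loss magnitude so far

    def predict(self, x, a):
        return self.theta0 + self.theta[a] @ x

    def update(self, x, a, loss):
        # 1) baseline update on the raw loss, step-size scaled by the
        #    observed loss range (cf. normalized updates)
        self.max_abs_loss = max(self.max_abs_loss, abs(loss))
        self.theta0 -= self.lr_base * self.max_abs_loss * (self.theta0 - loss)
        # 2) action-dependent part regresses on the residual loss - theta0
        residual = self.theta[a] @ x - (loss - self.theta0)
        self.theta[a] -= self.lr * residual * x
```

With a pessimistic initialization (e.g. -1/0 encoding), the baseline quickly absorbs the constant part of the loss, so the action-dependent estimates start from a less damaging point.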
\begin{figure}[tb] \centering \includegraphics[width=0.30\columnwidth]{figures/scatter/baseline_g_neg10.pdf} ~\includegraphics[width=0.30\columnwidth]{figures/scatter/baseline_ro_neg10.pdf} ~\includegraphics[width=0.30\columnwidth]{figures/scatter/baseline_c-nu_neg10.pdf} \\ \includegraphics[width=0.30\columnwidth]{figures/scatter/robust910_g.pdf} ~\includegraphics[width=0.30\columnwidth]{figures/scatter/robust910_c-nu.pdf} ~\includegraphics[width=0.30\columnwidth]{figures/scatter/robust910_b-g.pdf} \caption{(top) Impact of \emph{baseline} on different algorithms with encoding fixed to -1/0; for Greedy and RegCB-opt, it can significantly help against pessimistic initial costs in some datasets. Hyperparameters fixed as in Table~\ref{table:algos}. (bottom) Baseline improves robustness to the range of losses. The plots consider normalized loss on held-out datasets, with red points indicating significant wins.} \label{fig:baseline} \end{figure} \begin{table} \small \center \caption{\emph{Statistically significant} wins / losses of all methods on held-out datasets, with -1/0 encoding and fixed hyperparameters, except for \emph{baseline}, which is optimized on each dataset together with the learning rate. The fixed hyperparameters are shown in the table below, and were selected with the same voting approach described in Table~\ref{table:algos}. 
This optimization benefits Greedy and RegCB-opt in particular.} \label{table:baseline_opt} \begin{tabular}{ | l | c | c | c | c | c | c | c | c | c | } \hline $\downarrow$ vs $\rightarrow$ & G & R & RO & C-nu & B & B-g & $\epsilon$G & C-u & A \\ \hline G & - & 32 / 14 & 14 / 16 & 60 / 15 & 88 / 13 & 69 / 16 & 78 / 1 & 178 / 1 & 49 / 6 \\ \hline R & 14 / 32 & - & 9 / 36 & 43 / 25 & 69 / 19 & 52 / 24 & 66 / 14 & 167 / 9 & 42 / 19 \\ \hline RO & 16 / 14 & 36 / 9 & - & 64 / 13 & 90 / 10 & 66 / 10 & 84 / 3 & 187 / 1 & 61 / 5 \\ \hline C-nu & 15 / 60 & 25 / 43 & 13 / 64 & - & 58 / 30 & 34 / 41 & 60 / 24 & 163 / 6 & 42 / 36 \\ \hline B & 13 / 88 & 19 / 69 & 10 / 90 & 30 / 58 & - & 10 / 33 & 54 / 35 & 133 / 7 & 29 / 59 \\ \hline B-g & 16 / 69 & 24 / 52 & 10 / 66 & 41 / 34 & 33 / 10 & - & 57 / 18 & 147 / 2 & 37 / 38 \\ \hline $\epsilon$G & 1 / 78 & 14 / 66 & 3 / 84 & 24 / 60 & 35 / 54 & 18 / 57 & - & 129 / 10 & 3 / 44 \\ \hline C-u & 1 / 178 & 9 / 167 & 1 / 187 & 6 / 163 & 7 / 133 & 2 / 147 & 10 / 129 & - & 5 / 164 \\ \hline A & 6 / 49 & 19 / 42 & 5 / 61 & 36 / 42 & 59 / 29 & 38 / 37 & 44 / 3 & 164 / 5 & - \\ \hline \end{tabular} \vspace{0.3cm} \begin{tabular}{|c|c|} \hline Algorithm & Hyperparameters \\ \hline G & - \\ \hline R/RO & $C_0 = 10^{-3}$ \\ \hline C-nu & $N = 16, \psi = 0.1$, DR \\ \hline C-u & $N = 4, \psi = 0.1$, IPS \\ \hline B & $N = 4$, IWR \\ \hline B-g & $N = 4$, IWR \\ \hline $\epsilon$G & $\epsilon = 0.02$, IWR \\ \hline A & $\epsilon = 0.02, C_0 = 10^{-6}$, IWR \\ \hline \end{tabular} \end{table} \subsection{Organization of the paper} The paper is organized as follows: \begin{itemize}[noitemsep,topsep=1pt] \item Section~\ref{sec:setup} provides relevant background on i.i.d.~contextual bandits, optimization oracles, and mechanisms for reduction to off-policy learning, and introduces our experimental setup. 
\item Section~\ref{sec:algorithms} describes the main algorithms we consider in our evaluation, as well as the modifications that we found effective in practice. \item Section~\ref{sec:experiments} presents the results and insights from our experimental evaluation. \item Finally, we conclude in Section~\ref{sec:discussion} with a discussion of our findings and a collection of guidelines and recommendations for practitioners that come out of our empirical study, as well as open questions for theoreticians. \end{itemize} \section{Introduction} \label{sec:introduction} \input{introduction} \section{Contextual Bandit Setup} \label{sec:setup} \input{background} \section{Algorithms} \label{sec:algorithms} \input{algorithms} \section{Evaluation} \label{sec:experiments} \input{evaluation} \section{Discussion and Takeaways} \label{sec:discussion} \input{discussion}
\section{Introduction} Full-field electro-dynamical simulations are used in nano-optics to predict the optical response of small (often sub-wavelength) particles by solving Maxwell's equations~\cite{maxwell_dynamical_1865}. Examples are either the scattering or the confinement of an external electro-magnetic field by dielectric~\cite{kuznetsov_optically_2016} or metallic~\cite{bharadwaj_optical_2009} nano-structures, the appearance of localized surface plasmons~\cite{maier_plasmonics_2010} or the interaction of nano-structures with quantum emitters placed in their vicinity~\cite{girard_molecular_1995}. Nano-optics underlies manifold effects and applications. Examples are phase control or polarization conversion either at the single particle level~\cite{black_optimal_2014, wiecha_polarization_2017} or from metasurfaces~\cite{arbabi_dielectric_2015}, shaping of the directionality of scattering~\cite{curto_unidirectional_2010}, thermoplasmonic heat generation with sub-micrometer heat localization~\cite{baffou_thermo-plasmonics_2013} or nonlinear nano-optics~\cite{kauranen_nonlinear_2012, butet_optical_2015}. Being able to calculate optical effects occurring in sub-wavelength structures is of great importance for predicting and interpreting experimental findings. In this paper, we present the python toolkit ``pyGDM'' for full-field electro-dynamical simulations of nano-structures. Below we list the key features and aims of pyGDM, which we will explain in detail in the following. \begin{itemize} \item Easy to use. Easy to install: Fully relying on freely available open-source python libraries (numpy, scipy, matplotlib). % \item Fast: Performance-critical parts are implemented in fortran and are parallelized with openmp. Efficient and parallelized scipy libraries are used whenever possible. Spectra can be calculated very rapidly via an MPI-parallelized routine. % \item Electro-dynamical simulations including a substrate.
% \item Different illumination sources such as plane wave, focused beam or dipolar emitter. % \item Efficient calculation of large problems such as raster-scan simulations. % \item Provide tools to rapidly post-process the simulations and derive physical quantities such as % \begin{itemize} \item optical near-field inside and around nanostructures. \item extinction, absorption and scattering cross-sections. \item polarization- and spatially resolved far-field scattering. \item heat generation, temperature distribution around nano-objects. \item photonic local density of states (LDOS). \item modification of the decay-rate of dipolar emitters in the presence of a nanostructure. \end{itemize} \item Evolutionary optimization of the nano-particle geometry with regard to specific optical properties. % \item Easy-to-use visualization tools including animations of the electro-magnetic fields. \end{itemize} We will start with a brief introduction to the Green Dyadic Method (GDM), the numerical discretization scheme and the renormalization of the Green's dyad. We will also compare the GDM to other frequently used numerical techniques. In the second part, we will explain in more detail the main features and tools provided by pyGDM. We start by explaining the general structure of pyGDM\ and the ingredients to set up a simulation. Then we describe how the main simulation routines work. This part is followed by descriptions of the pyGDM-tools for simulating different optical effects, post-processing, data-analysis and visualization. Subsequently, we will illustrate the capabilities of pyGDM\ by some example simulations and benchmarks. In particular, we will compare pyGDM-simulations to Mie theory. Finally, we will give an overview of the evolutionary optimization submodule of pyGDM, accompanied by several examples. In the appendix we provide details concerning more technical tools and aspects of pyGDM\ as well as instructions for the compilation, installation and use of pyGDM.
\section{The Green dyadic method}\label{sec:green_method} In the following we will give a brief introduction to the basic concepts of the Green dyadic method, implemented in pyGDM. Before we begin with this short overview, we want to note that the GDM is a frequency domain technique, solving Maxwell's equations for mono\-chroma\-tic fields (oscillating at fixed frequency~\(\omega\)). \paragraph*{Note:} We use cgs (centimeter, gram, second) units in pyGDM, which results in simpler terms for most of the equations. This is first of all helpful for the derivation of the main equations and has no impact on the simulation results. The post-processing routines return values conforming to SI units such as cross-sections (units of nm\(^2\)), powers in Watt or unit-less values (e.g. relative field intensities such as \(|\mathbf{E}|^2 / |\mathbf{E}_0|^2\)). \subsection{From Maxwell's equations to the Lippmann-Schwinger equation} All electromagnetic phenomena can be entirely described by the four Maxwell's equations, which in the frequency domain read as follows (cgs units): \begin{subequations}\label{eq:Maxwell} \begin{align} \nabla\cdot \mathbf{E}(\mathbf{r}, \omega) &= \frac{4\pi}{\epsilon_{\text{env}}} \rho(\mathbf{r}, \omega) \label{eq:MaxwellFourierdivD}\\ % \nabla \times \mathbf{E}(\mathbf{r}, \omega) &= \mathrm{i} k_0 \mathbf{B}(\mathbf{r}, \omega) \label{eq:MaxwellFourierrotE}\\ % \nabla\cdot \mathbf{B}(\mathbf{r}, \omega) &= 0 \label{eq:MaxwellFourierdivB}\\ % \nabla\times \mathbf{B}(\mathbf{r}, \omega) &= -\mathrm{i} k_0 \epsilon_{\text{env}} \mathbf{E}(\mathbf{r}, \omega) + \frac{4\pi}{c} \mathbf{j}(\mathbf{r}, \omega) \label{eq:MaxwellFourierrotH} \end{align} \end{subequations} where the charge density \(\rho\) and the current density \(\mathbf{j}\) are associated with an arbitrary nanostructure, placed in an environment of permittivity \(\epsilon_{\text{env}}\) (\textit{c.f.} Fig.~\ref{fig:Theory_sketch_object}).
\(k_0 = \omega / c\) is the wavenumber of light in vacuum, \(c\) the speed of light, and \(\nabla\times\) denotes the curl. \(\epsilon_r\) and \(\mu_r\) are the relative dielectric permittivity and magnetic permeability of the nanostructure, respectively. For dispersive media, \(\epsilon_r\) and \(\mu_r\) are functions of the frequency \(\omega\). They are defined as the ratios of the material's permittivity and permeability relative to the vacuum values \(\epsilon_0\) and \(\mu_0\). They can be related to the electric and magnetic susceptibilities as \(\chi_{\text{e}} = (\epsilon_r - \epsilon_{\text{env}})/4\pi\) and \(\chi_{\text{m}} = (\mu_r - \mu_{\text{env}})/4\pi\), respectively. In general, \(\chi_{\text{e}} (\mathbf{r}, \omega)\) and \(\chi_{\text{m}} (\mathbf{r}, \omega)\) are functions of frequency and space. In pyGDM\ we assume non-magnetic media, hence \(\mu_r = \mu_{\text{env}} = 1\). \begin{figure}[t] \centering \includegraphics[width=.65\columnwidth]{sketch_object} \caption{ Electromagnetic wave impinging on a nano\-structure of arbitrary shape, placed in a homogeneous environment.}\label{fig:Theory_sketch_object} \end{figure} It is possible to derive a wave-equation for the electric field from Maxwell's equations (see e.g. Ref.~\cite{griffiths_introduction_1989}, chapter~9 or Ref.~\cite{girard_near_2005}): \begin{equation} (\Delta + k^2) \mathbf{E}(\mathbf{r}, \omega) = - \frac{4\pi}{\epsilon_{\text{env}}} \left( k^2 + \nabla \nabla \right) \mathbf{P}(\mathbf{r}, \omega). \label{eq:waveequationEfieldSI} \end{equation} Here \(\nabla\) and \(\Delta\) are the nabla- and Laplace operator, respectively, \(\mathbf{P} = \boldsymbol\chi_{\text{e}} \cdot \mathbf{E}\) is the electric polarization and \(k\) the wavenumber in the environment medium with \(k = \sqrt{\epsilon_{\text{env}}}\,k_0\). \paragraph*{Note:} The dielectric function is in general a tensor of rank 2.
In pyGDM, an isotropic susceptibility \(\chi_{\text{e,iso}}\) is assumed, hence the susceptibility tensor \(\boldsymbol\chi_{\text{e}}\) is defined as \begin{equation} \boldsymbol\chi_{\text{e}}(\mathbf{r}, \omega) = \left[ \begin{matrix} \chi_{\text{e,iso}}(\mathbf{r}, \omega) & 0 & 0 \\ 0 & \chi_{\text{e,iso}}(\mathbf{r}, \omega) & 0 \\ 0 & 0 & \chi_{\text{e,iso}}(\mathbf{r}, \omega) \end{matrix} \right]\, . \end{equation} In future versions of pyGDM\ anisotropic polarizabilities might be supported. From the wave-equation Eq.~\eqref{eq:waveequationEfieldSI} one can derive a vectorial Lippmann-Schwinger equation for the electric field (see e.g. Ref.~\cite{girard_near_2005}): \begin{equation} \mathbf{E}(\mathbf{r}, \omega) = \mathbf{E}_0(\mathbf{r}, \omega) + \int \mathbf{G}_{\text{tot}}^{\text{EE}}(\mathbf{r}, \mathbf{r'}, \omega) \cdot \boldsymbol\chi_{\text{e}} \cdot \mathbf{E}(\mathbf{r'}, \omega) \text{d} \mathbf{r'} \label{eq:LippmannSchwingerG0} \end{equation} which relates in a self-consistent manner the incident (or ``zero order'', ``fundamental'') electric field \(\mathbf{E}_0\) to the total field \(\mathbf{E}\) inside the structure of susceptibility \(\boldsymbol\chi_{\text{e}}\). The integral in Eq.~\eqref{eq:LippmannSchwingerG0} runs over the volume of the structure. \(\mathbf{G}_{\text{tot}}^{\text{EE}}\) is the Green's dyad, describing the environment in which the structure is placed (see also section~\ref{sec:technical_detail}). The Green's dyadic tensors \(\mathbf{G}\) are also called field susceptibilities and were originally introduced by G. S. Agarwal~\cite{agarwal_quantum_1975}.
For an object in vacuum \(\mathbf{G}_{\text{tot}}^{\text{EE}}=\mathbf{G}_0^{\text{EE}}\), which reads~\cite{girard_near_2005, girard_shaping_2008} \begin{multline}\label{eq:vacuumGreenDyadicFunction} \mathbf{G}_0^{\text{EE}}(\mathbf{r}, \mathbf{r'}, \omega) = \frac{1}{\epsilon_{\text{env}}} \Big( k^2 \, \mathbf{I} + \nabla\nabla \Big) G_0(\mathbf{r}, \mathbf{r'}, \omega) \\ = \frac{ \mathrm{e}^{\mathrm{i} k R} }{ \epsilon_{\text{env}} } \, \Big( - k^2 \mathbf{T}_1(\mathbf{R}) - ik \mathbf{T}_2(\mathbf{R}) + \mathbf{T}_3(\mathbf{R}) \Big). \end{multline} \(\mathbf{I}\) is the Cartesian unitary tensor, \(\nabla\) the nabla operator acting along \(\mathbf{r}\) and \(G_0\) the scalar Green's function (see equation~\eqref{eq:scalarGreenFunctionVac}). The superscript ``\(^{\text{EE}}\)'' indicates that the Green's function accounts for an electric-electric interaction. Furthermore, we used the abbreviations \(\mathbf{R} = \mathbf{r} - \mathbf{r'}\) and \begin{align} \mathbf{T}_1(\mathbf{R}) & = \frac{\mathbf{R}\mathbf{R} - \mathbf{I}R^2}{R^3} \label{eq:vacuumGreenDyadicFunctionT1}\\ \mathbf{T}_2(\mathbf{R}) & = \frac{3\mathbf{R}\mathbf{R} - \mathbf{I}R^2}{R^4} \label{eq:vacuumGreenDyadicFunctionT2} \\ \mathbf{T}_3(\mathbf{R}) & = \frac{3\mathbf{R}\mathbf{R} - \mathbf{I}R^2}{R^5}. \label{eq:vacuumGreenDyadicFunctionT3} \end{align} \(\mathbf{R}\mathbf{R}\) is the tensorial product of \(\mathbf{R}\) with itself and \(R\) represents its modulus. \(\mathbf{T}_1\) describes far-field effects while \(\mathbf{T}_2\) and \(\mathbf{T}_3\) account for the near-field. In pyGDM\ an additional non-retarded Green's dyad is used, which allows a substrate and a cladding layer to be included (see Fig.~\ref{fig:reference_system}): \begin{equation} \mathbf{G}_{\text{tot}}^{\text{EE}} = \mathbf{G}_0^{\text{EE}} + \mathbf{G}_{\text{3-layer}}\, .
\end{equation} Such a dyadic function \(\mathbf{G}_{\text{3-layer}}\) for a layered reference system can be derived in an asymptotic form using the image charges method (see also section~\ref{sec:technical_detail}). The derivation of a retarded Green's dyad for multi-layered systems is explained in detail e.g. in Refs.~\cite{colas_des_francs_enhanced_2005, paulus_accurate_2000}. For a derivation of the Lippmann-Schwinger equation in SI units, see e.g. Ref.~\cite{wiecha_linear_2016}. \begin{figure}[tp] % \centering \includegraphics[width=\linewidth]{sketch_reference_system} % \caption{ Geometry of the reference system described by the Green's dyad used in pyGDM: The discretized nano-structure is placed in the environment layer with (complex) refractive index \(n_2\) and of thickness \textit{spacing}. It is sandwiched between a substrate (\(n_1\)) and a cladding layer (\(n_3\)). }\label{fig:reference_system} \end{figure} % \subsection{Volume discretization} For arbitrarily shaped objects, the integral in the Lippmann-Schwinger equation~\eqref{eq:LippmannSchwingerG0} can generally not be solved analytically. In the following we describe a numerical approach which requires the discretization of the integral into a sum over finite size volume elements (see also Ref.~\cite{girard_near_2005}). For reasons of clarity, the dependency on the frequency \(\omega\) will be omitted in the following. We discretize the nano-object using \(N\) cubic volume elements centered at positions \(\mathbf{r}_i\), as illustrated in figure~\ref{fig:Theory_Volume_Discretization}. The cube side lengths \(d\) and thus \(V_{\text{cell}} = d^3\) are constant on the mesh. \begin{multline} \mathbf{E}(\mathbf{r}_i, \omega) = \mathbf{E}_0(\mathbf{r}_i, \omega) + \\ \sum\limits_{j=1}^{N} \mathbf{G}_{\text{tot}}^{\text{EE}}(\mathbf{r}_i, \mathbf{r}_j, \omega) \cdot \boldsymbol\chi_{\text{e}}(\mathbf{r}_j,\omega)\cdot \mathbf{E}(\mathbf{r}_j, \omega) V_{\text{cell}}.
\label{eq:LippmannSchwingerVolumeDiscretization} \end{multline} \begin{figure*}[t] \centering \includegraphics[width=.4\textwidth]{3D_volume_discretization_objects} \hspace{1cm} \includegraphics[width=.4\textwidth]{3D_volume_discretization_discretized} \caption{ Arbitrary nanostructure composed of multiple elements lying on a substrate (left) and its volume discretization on a cubic lattice (right).}\label{fig:Theory_Volume_Discretization} \end{figure*} We can rewrite eq.~\eqref{eq:LippmannSchwingerVolumeDiscretization} as follows \begin{multline} \mathbf{E}_0(\mathbf{r}_i) % = \mathbf{E}(\mathbf{r}_i) - \sum\limits_{j=1}^{N} \mathbf{G}_{\text{tot}}^{\text{EE}}(\mathbf{r}_i, \mathbf{r}_j) \cdot \boldsymbol\chi_{\text{e}}(\mathbf{r}_j)\cdot \mathbf{E}(\mathbf{r}_j) V_{\text{cell}} \\ = \sum\limits_{j=1}^{N} \Big( \delta_{ij} \mathbf{I} - \boldsymbol\chi_{\text{e}}(\mathbf{r}_j) \cdot V_{\text{cell}} \, \mathbf{G}_{\text{tot}}^{\text{EE}}(\mathbf{r}_i, \mathbf{r}_j) \Big) \cdot \mathbf{E}(\mathbf{r}_j) \end{multline} where \(\delta_{ij}\) is the Kronecker symbol. Let us now define two \(3N\)-dimensional vectors containing the ensemble of all electric field vectors in the discretized nano-object \begin{multline*} \mathbf{E}_{0, \text{obj.}} = \Big( E_{0,x}(\mathbf{r}_1),E_{0,y}(\mathbf{r}_1),E_{0,z}(\mathbf{r}_1), \\ E_{0,x}(\mathbf{r}_2),\; \ldots,\quad \ldots,\; E_{0,z}(\mathbf{r}_N) \Big) \end{multline*} \begin{multline*} \mathbf{E}_{\text{obj.}} = \Big( E_{x}(\mathbf{r}_1),E_{y}(\mathbf{r}_1),E_{z}(\mathbf{r}_1), \\ E_{x}(\mathbf{r}_2),\; \ldots,\quad \ldots,\; E_{z}(\mathbf{r}_N) \Big). 
\end{multline*} Together with the \(3N \times 3N\) matrix \(\mathbf{M}\) composed of \(3 \times 3\) sub-matrices \begin{equation}\label{eq:definitionMforInversion} \mathbf{M}_{ij} = \delta_{ij} \mathbf{I} - \boldsymbol\chi_{\text{e}} (\mathbf{r}_j) \cdot V_{\text{cell}} \mathbf{G}_{\text{tot}}^{\text{EE}}(\mathbf{r}_i, \mathbf{r}_j) \end{equation} we obtain a coupled system of \(3N\) linear equations \begin{equation}\label{eq:definitionMainEquationGDM} \mathbf{E}_{0, \text{obj.}} = \mathbf{M} \cdot \mathbf{E}_{\text{obj.}}\,. \end{equation} If we invert the matrix~\(\mathbf{M}\) defined by eq.~\eqref{eq:definitionMforInversion}, we can calculate the field \(\mathbf{E}_{\text{obj.}}\) inside the structure for all possible incident fields \(\mathbf{E}_{0, \text{obj.}}\) (at frequency \(\omega\)) by means of a simple matrix-vector multiplication: \begin{equation}\label{eq:definitionGeneralizedPropoagator} \mathbf{E}_{\text{obj.}} = {\boldsymbol{\cal K}} \cdot \mathbf{E}_{0, \text{obj.}}\,, \end{equation} where we used the symbol \({\boldsymbol{\cal K}}\) for the inverse matrix \begin{equation} {\boldsymbol{\cal K}}(\omega) = \mathbf{M}^{-1}(\omega)\, . \end{equation} \({\boldsymbol{\cal K}}\) is called the \emph{generalized field propagator}, as introduced by Martin \textit{et al.} \cite{martin_generalized_1995}. \paragraph*{Note:} In our notation, \({\boldsymbol{\cal K}}\) represents the full \(3N \times 3N\) matrix, describing the response of the entire nanostructure. This matrix is composed of \(3\times 3\) sub-tensors \(\mathbf{K}(\mathbf{r}_i,\mathbf{r}_j)\) for each pair of meshpoints \(i\) and \(j\). \paragraph*{Note:} Following equation~\eqref{eq:LippmannSchwingerVolumeDiscretization}, we can use the Green's dyad of the reference system with the field inside the particle in order to calculate the total electric field at any point \(\mathbf{r}_i\) outside the nanostructure.
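To illustrate how these pieces fit together, the following is a minimal numpy sketch (not the actual pyGDM implementation; all names are ours) that assembles \(\mathbf{M}\) for a small cubic mesh in vacuum using the free-space dyad of Eq.~\eqref{eq:vacuumGreenDyadicFunction}, replaces the singular \(i=j\) term by the cubic-mesh self-term of Eq.~\eqref{eq:renormalization_cube}, and inverts the matrix to obtain the generalized propagator:

```python
import numpy as np

def G0_EE(r1, r2, k, eps_env):
    """Free-space Green's dyad (cgs units), following the paper's
    T1/T2/T3 decomposition. r1, r2 in nm, k in 1/nm."""
    R = np.asarray(r1, float) - np.asarray(r2, float)
    Rn = np.linalg.norm(R)
    I = np.eye(3)
    RR = np.outer(R, R)
    T1 = (RR - I * Rn**2) / Rn**3
    T2 = (3 * RR - I * Rn**2) / Rn**4
    T3 = (3 * RR - I * Rn**2) / Rn**5
    return np.exp(1j * k * Rn) / eps_env * (-k**2 * T1 - 1j * k * T2 + T3)

def generalized_propagator(positions, chi, d, k, eps_env):
    """Assemble M (block form of the coupled 3N equations) on a cubic
    mesh with isotropic scalar susceptibility chi, and return K = M^-1."""
    N = len(positions)
    Vcell = d**3
    # renormalized self-term for the cubic mesh
    self_term = -4 * np.pi / (3 * eps_env * d**3) * np.eye(3)
    M = np.zeros((3 * N, 3 * N), dtype=complex)
    for i in range(N):
        for j in range(N):
            G = self_term if i == j else G0_EE(positions[i], positions[j], k, eps_env)
            M[3*i:3*i+3, 3*j:3*j+3] = (i == j) * np.eye(3) - chi * Vcell * G
    return np.linalg.inv(M)

# usage: E_obj = K @ E0_obj for ANY incident field at this frequency,
# e.g. re-used for every position of a raster-scanned light source.
```

The dense inversion is only meant to mirror the equations; for \(\chi_{\text{e}} = 0\) the matrix \(\mathbf{M}\) reduces to the identity, which is a convenient sanity check.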
\subsection{Renormalization of the Green's dyad}\label{sec:GDMRenormalizationScheme} When integrating the polarization distribution in equation~\eqref{eq:LippmannSchwingerG0} over the volume of the nanostructure, we integrate scalar Green's functions of the form \begin{equation}\label{eq:scalarGreenFunctionVac} G_0(\mathbf{r}, \mathbf{r'}) = \frac{\mathrm{e}^{\mathrm{i} k\, | \mathbf{r} - \mathbf{r'} |}}{ | \mathbf{r} - \mathbf{r'} | }. \end{equation} Obviously, \(G_0\) diverges if \(\mathbf{r} = \mathbf{r'}\), which occurs when the field of a point dipole \(\mathbf{p}\delta(\mathbf{r} - \mathbf{r'})\) is being evaluated at the dipole's position \(\mathbf{r'}\) itself. As a consequence, in order to remove this singularity, we need to apply a regularization scheme~\cite{yaghjian_electric_1980}. For a three dimensional cubic mesh, a simple renormalization rule for the free-space Green's dyad has been proposed (see Ref.~\cite{girard_near-field_1996}, section~4.3): \begin{equation}\label{eq:renormalization_cube} \mathbf{G}_{0,\text{cube}}^{\text{EE}}(\mathbf{r}_i, \mathbf{r}_i) = - \frac{4\pi}{3 \epsilon_{\text{env}} d^3} \, \mathbf{I} \end{equation} with \(d\) the stepsize of the volume discretization. The choice of an appropriate mesh can be crucial for the convergence of the method. While structures with flat surfaces and right angles (e.g. cuboids) can be accurately discretized using a cubic mesh, particles with 3-fold symmetry (e.g. prisms) or curved structures like wires of circular section or spherical particles are better described using a hexagonal mesh. A \(3\)D hexagonal compact mesh can be regularized with (see Ref.~\cite{girard_shaping_2008}, section~3.1) \begin{equation}\label{eq:renormalization_hex} \mathbf{G}_{0,\text{hex}}^{\text{EE}}(\mathbf{r}_i, \mathbf{r}_i) = -\frac{4\pi \sqrt{2}}{3 \epsilon_{\text{env}} d^3} \, \mathbf{I} \, . 
\end{equation} While a cubic mesh cell has a volume of \(V_{\text{cell}} = d^3\), in the hexagonal compact case the volume of a cell equals \(V_{\text{cell}} = d^3 / \sqrt{2}\), which must be adapted accordingly in Eq.~\eqref{eq:definitionMforInversion}. Other geometries like cuboids~\cite{ould_agha_near-field_2014} or tetrahedrons~\cite{kottmann_accurate_2000} can be used for the mesh as well, but are not implemented in pyGDM\ so far. Because it accounts for the field of a point dipole at the location of the dipole itself, the sub-matrix \(\mathbf{M}_{ii}\) is also called ``self-term''. \subsection{Multiple monochromatic simulations on the same nanostructure}\label{sec:GDMRasterScanSimulations} Once the generalized propagator \({\boldsymbol{\cal K}}\) is known, we can calculate the response of the system to arbitrary monochromatic incident fields (e.g. plane waves, focused beams or even fast electrons) by means of a simple matrix-vector multiplication. This can be used for instance to do raster-scan simulations at low numerical cost, by raster-scanning a light source such as a focused incident beam or a dipolar emitter step-by-step over the nano-object, while calculating and, if needed, post-processing the field at each position~\cite{teulle_scanning_2012}. \section{Comparison to other electro-dynamical simulation techniques} Before proceeding with a detailed introduction to the pyGDM\ toolkit, we want to give a non-exhaustive overview of other methods commonly used for solving electro-dynamical problems in nano-optics. A widely used frequency domain solver is the open source software DDSCAT \cite{draine_discrete-dipole_1994}, which implements a frequency domain technique analogous to the GDM. It is usually called the ``Coupled'' or ``Discrete Dipole Approximation'' (CDA or DDA, respectively). However, there are two main differences compared to the GDM as used in this work.
First, the renormalization problem is circumvented by setting the self-terms to zero and including the corresponding contributions using a physical polarizability for each dipole. Using such physical polarizabilities (usually of spherical entities) for each mesh-cell, however, generally leads to worse convergence for larger step-sizes. The second difference is more technical. In the DDSCAT implementation of DDA, the matrix \(\mathbf{M}_{\text{DDSCAT}}\) is not stored in memory (c.f. Eq.~\eqref{eq:definitionMforInversion}). The inverse problem is solved by the conjugate gradient method, where the elements \(M_{\text{DDSCAT}, ij}\) are computed \textit{on-demand} during the calculation of the vector-matrix products \(\mathbf{M}_{\text{DDSCAT}} \cdot \mathbf{x} = \mathbf{E}\). To speed up these matrix-vector multiplications, a scheme involving fast Fourier transformations (FFT) is used \cite{goodman_application_1991}. A drawback is that without storing \(\mathbf{M}\), efficient preconditioning is very difficult (see also appendix~\ref{sec:conjugate_gradients}). Convergence of the DDSCAT conjugate gradient iterative scheme is therefore relatively slow and only obtained for very fine discretization meshes, further slowing down the computation due to the large size of the coupled dipole matrix \(\mathbf{M}_{\text{DDSCAT}}\). An obvious advantage of DDSCAT is that large problems with huge numbers of mesh points can be treated, since the matrix coupling all dipoles is not stored in memory. However, the advantage of the generalized propagator is lost. The calculation of different incident fields at a fixed wavelength (such as raster-scan simulations) requires re-running the time-consuming conjugate gradient solver for each configuration. Another free implementation of the DDA with particular focus on electron energy loss spectroscopy (EELS) simulations is the DDEELS package~\cite{geuquet_eels_2010}.
Maxwell's equations can be reformulated as a set of surface-integral equations. It is therefore possible to develop a formalism similar to the volume integral method explained above, in which only the surfaces of a nanostructure are discretized instead of the volume \cite{garcia_de_abajo_retarded_2002}. A great advantage of this so-called Boundary Element Method (BEM) is the smaller number of discretization cells, which however comes at the cost of a more complex mathematical framework and numerical implementation. With MNPBEM, an open-source BEM implementation for MATLAB exists, which also allows the consideration of layered environments~\cite{hohenester_mnpbem_2012, waxenegger_plasmonics_2015}. \begin{figure*}[t] \centering \includegraphics{pyGDM_structure} \caption{ Structure of the pyGDM\ package and workflow of a typical simulation: %
(1) Setup of the geometry, environment and incident electric field. This is bundled in an instance of the \object{simulation} object. (2) Main GDM simulation. (3) Possible post-processing (e.g. calculation of extinction cross-sections). (4) Visualization of the results. }\label{fig:structure_pygdm} \end{figure*} Another very popular and flexible technique for electrodynamical simulations is the Finite-Difference Time-Domain (FDTD) method \cite{inan_numerical_2011, baida_finite_2013, cao_electron_2015}. As the name suggests, the calculation is performed in the time domain, which means that Maxwell's equations are iteratively evolved in small time increments. The problem is discretized in both space and time. An incoming wave travels time-step by time-step across the region of interest, and the actual numerical measurement is performed once the wave-packet has passed or turn-on effects have fully decayed (e.g. for plane wave illumination). With respect to computational time, a disadvantage is the additional dimension (time) that needs to be discretized.
Furthermore, a fraction of the environment around the object of interest has to be included in the discretization space, which is why FDTD is called a ``domain discretization technique''. Particularly in \(3\)D problems, this can lead to very high computational costs. Another drawback of FDTD can be the low accuracy of near-field intensities if very strong field enhancements occur (\textit{e.g.} in plasmonics) \cite{hoffmann_comparison_2009}. However, the simplicity and robustness of the method are great advantages of FDTD. Furthermore, using temporally short and therefore spectrally broad illumination pulses, a large frequency spectrum can be obtained in a single simulation run. Frequency domain techniques, on the other hand, require each wavelength to be calculated separately. Provided an accurate analytical model for the material dispersion exists, this advantage can compensate for the larger discretization domain in spectral simulations, compared to frequency domain methods like the GDM. A powerful open source implementation that comes with a rich toolbox is the software ``MEEP'' \cite{oskooi_meep_2010}. For a general introduction to finite difference methods, see for example Ref.~\cite{press_numerical_2007}, chapter~17. Finally, a very popular domain discretization technique in the frequency domain is the Finite Element Method (FEM, e.g. implemented in the commercial software ``COMSOL Multiphysics''). Due to its adjustable mesh-size it is particularly apt for plasmonic problems, where extremely localized fields can occur at sharp extremities. However, it suffers from the same drawback as FDTD, since a certain volume around the nano-object needs to be discretized and included in the calculation, often leading to high memory and CPU-time requirements. A review including benchmarks for different numerical techniques in nano-optics can be found in Ref.~\cite{smajic_comparison_2009}.
An extensive discussion of different DDA variants including a detailed review of their accuracies is given in Ref.~\cite{yurkin_discrete_2007}. \section{Setting up a pyGDM\ simulation}\label{sec:setup_simulation} The structure of the pyGDM\ package and the main steps to set up and run a simulation are schematically depicted in figure~\ref{fig:structure_pygdm}. The heart of pyGDM\ is the \object{simulation} object which contains the information about the structure, its environment and the incident electro-magnetic field(s) used in the simulation: \functiondescription{o}{core.simulation} { {\textit{struct:} instance of \object{structures.struct}} {\textit{efield:} instance of \object{fields.efield}} } {} A minimal example of a pyGDM\ python script is provided in section~\ref{sec:technical_detail}. \subsection{Geometry and material dispersion} The geometry of the nanostructure and the dielectric constant of both its constituent material and the environment are stored in an instance of \functiondescription{o}{structures.struct} { {\textit{step:} the stepsize (nm)} {\textit{geometry:} the particle geometry as \textbf{list} of meshpoints (\(x,y,z\))} {\textit{material:} an instance of a material object, having a routine \function{epsilon(wavelength)} which returns the permittivity at \textit{wavelength} (\textit{wavelength} in nm).} {\textit{n1,n2,n3:} the (complex) refractive indices of (1) the substrate, (2) the particle environment and (3) a top layer} {\textit{normalization:} the normalization factor for the Green's dyad (use \function{structures.\allowbreak get\_\allowbreak normalization})} } {} which contains the geometry as a list of mesh-point coordinates and the material dispersion via an instance of some \object{materials.dispersion\_\allowbreak class}. \subsection{Excitation fields} The second key ingredient of a pyGDM\ simulation is the incident (illuminating) electro-magnetic field. The fields in the GDM are time-harmonic, oscillating at frequency \(\omega\).
We describe these fields using the phasor description with complex amplitudes that include the phase information: \begin{equation} \mathbf{\tilde{E}}(\mathbf{r}, \omega, t) = \mathbf{\hat{E}}(\mathbf{r}, \omega)\, \mathrm{e}^{- \mathrm{i} \omega t}\,\mathrm{e}^{\mathrm{i} \varphi} = \mathbf{E}(\mathbf{r}, \omega)\, \mathrm{e}^{-\mathrm{i} \omega t}. \label{eq:harmonicComplexWave} \end{equation} \(\mathbf{\tilde{E}}\) is the electric field including the time-dependence. We assume time-harmonicity, so the time-dependence is expressed by the term \(\mathrm{e}^{-\mathrm{i} \omega t}\). \(\mathbf{\hat{E}}\) is the real-valued amplitude and \(\mathbf{E}\) the complex amplitude (the ``phasor''), which absorbs the phase factor \(\mathrm{e}^{\mathrm{i} \varphi}\). The information about the incident field is provided to pyGDM\ via \functiondescription{o}{fields.efield} { {\textit{field\_generator:} field generator (for available functions, see below)} {\textit{wavelengths:} \textbf{list} of wavelengths to calculate (nm)} {\textit{kwargs:} \textbf{dict} with kwargs passed to the field generator. Can contain lists of multiple values for each parameter. All possible permutations will be calculated.} } {} Illustrations of the incident fields available in pyGDM\ (listed below) are shown in figure~\ref{fig:sim_fundamental_fields}. \subsubsection{Plane wave} \functiondescription{f}{fields.planewave} { {\textit{theta:} linear polarization angle (degrees)} } {} Probably the most common fundamental field is the plane wave, which in many cases is a sufficient approximation. Its complex amplitude can be expressed as \begin{equation} \mathbf{E}_0(\mathbf{r}, \omega) = \mathbf{E}_0\, \mathrm{e}^{\mathrm{i} \mathbf{k}\cdot\mathbf{r}}.
\label{eq:planeWaveField} \end{equation} \subsubsection{Focused plane wave} \functiondescription{f}{fields.focused\_planewave} { {\textit{theta:} linear polarization angle (degrees)} {\textit{xSpot:} \(x\)-coordinate of focal spot (nm)} {\textit{ySpot:} \(y\)-coordinate of focal spot (nm)} {\textit{NA:} numerical aperture to calculate spotsize for each wavelength} {\textit{spotsize:} Gaussian spotsize \(w\) in nm (required if \textit{NA}\,\(=-1\))} } {} The simplest approximation for a focused beam is a plane wave with a Gaussian intensity profile. For incidence along \(Z\) (\(\mathbf{k} \parallel \mathbf{e}_z\)) this reads: \begin{equation} \mathbf{E}_0(\mathbf{r}, \omega) = %
\mathbf{E}_0\, \mathrm{e}^{\mathrm{i} \mathbf{k}\cdot \mathbf{r}} \exp \left( - \dfrac{(x - x_0)^2 + (y - y_0)^2}{ 2 w_{\text{spot}}^2 } \right) \label{eq:focusedPlaneWaveField} \end{equation} The beam propagates along \((x_0,y_0,z)\). The full width at half maximum (FWHM) of the field amplitude can be obtained via \begin{equation} w_{\text{FWHM}} = w_{\text{spot}} \cdot 2\sqrt{2\ln 2} \, . \end{equation} A focused plane wave is often a sufficient approximation (see e.g. Ref.~\cite{viarbitskaya_tailoring_2013}) and can be particularly useful if the divergence of the radius of curvature at the origin of the paraxial Gaussian becomes problematic.
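A minimal pure-Python sketch of the Gaussian envelope and the FWHM relation, assuming the decaying form \(\exp(-((x-x_0)^2+(y-y_0)^2)/(2 w_{\text{spot}}^2))\) of the amplitude profile (the function names and numerical values are illustrative, not pyGDM API):

```python
import math

# Gaussian envelope of the "focused plane wave" model: the field amplitude
# decays away from the focal spot (x0, y0) with spotsize w_spot.

def envelope(x, y, x0=0.0, y0=0.0, w_spot=100.0):
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return math.exp(-r2 / (2.0 * w_spot ** 2))

def fwhm(w_spot):
    """FWHM of the field amplitude: w_FWHM = w_spot * 2 sqrt(2 ln 2)."""
    return w_spot * 2.0 * math.sqrt(2.0 * math.log(2.0))

w = 100.0                                    # spotsize in nm
assert envelope(0.0, 0.0, w_spot=w) == 1.0   # maximum at the focal spot
# The amplitude falls to one half at a distance FWHM/2 from the beam axis:
assert abs(envelope(fwhm(w) / 2.0, 0.0, w_spot=w) - 0.5) < 1e-12
```

The second assertion verifies the FWHM formula directly: at \(r = w_{\text{FWHM}}/2\) the exponent equals \(-\ln 2\), so the amplitude is exactly one half of its peak value.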
\begin{figure*}[t] \centering \includegraphics[width=.195\textwidth]{efield_models_planewave} \includegraphics[width=.195\textwidth]{efield_models_focusedplanewave} \includegraphics[width=.195\textwidth]{efield_models_paraxGaussian} \includegraphics[width=.195\textwidth]{efield_models_paraxGaussian_corrected_Ereal} \includegraphics[width=.195\textwidth]{efield_models_edipole_Ereal} \caption{ Real part of \(E_x\) for (from left to right): A plane wave, a ``focused plane wave'', a paraxial Gaussian beam, a tight-focus corrected paraxial Gaussian beam (all \(X\)-polarized, \(\mathbf{k} \parallel -\mathbf{e}_z\)) and a dipole emitter along \(X\) (indicated by a white arrow).}\label{fig:sim_fundamental_fields} \end{figure*} \subsubsection{Paraxial Gaussian beam} \functiondescription{f}{fields.gaussian} { {\textit{theta:} linear polarization angle (degrees)} {\textit{xSpot:} \(x\)-coordinate of focal spot (nm)} {\textit{ySpot:} \(y\)-coordinate of focal spot (nm)} {\textit{NA:} numerical aperture to calculate spotsize for each wavelength} {\textit{spotsize:} Gaussian spotsize \(w\) in nm (required if \textit{NA}\,\(=-1\))} } {{\textit{paraxial:} \textbf{True} (default: \textbf{False})}} Often, lasers are used as sources of monochromatic, coherent light with high intensity. Light emitted from a laser cavity does, however, not propagate like a plane wave, but as a Gaussian beam. The intensity profile differs significantly from the focused plane wave, so the use of a model for Gaussian beams may become necessary -- particularly for larger objects, where the ``curved'' intensity profile of such a beam induces important field gradients along the propagation direction and across the particle. A popular approximation to a real Gaussian beam is the so-called \emph{paraxial approximation}, where all \(\mathbf{k}\)-vectors are parallel to one single propagation direction.
It can be calculated using the following formula (propagation along the \(Z\)-axis) \begin{multline} \mathbf{E}_0(\mathbf{r}, \omega) = %
\mathbf{E}_0\, \frac{w_0}{w(z)} \exp \left( \frac{-r^2}{w(z)^2} \right) \\ \times \exp \left( -\mathrm{i} \left( k\left(z + \frac{r^2}{2 R(z)}\right) - \zeta(z) \right) \right) %
\label{eq:paraxialGaussianField} \end{multline} with the beam width or ``waist'' \(w_0\) and the squared distance to the beam axis \(r^2 = \Delta x^2 + \Delta y^2\), where \(\Delta x\) and \(\Delta y\) are the distances to the beam axis in the \(X\) and \(Y\) directions, respectively. In equation~\eqref{eq:paraxialGaussianField} we furthermore introduced the \(z\)-dependent beam waist \begin{equation} w(z) = w_0\sqrt{1 + \left( \frac{z \lambda}{\pi w_0^2} \right)^2} \end{equation} the radius of curvature \begin{equation} R(z) = z \left(1 + \left( \frac{\pi w_0^2}{z \lambda} \right)^2 \right) \end{equation} and the \emph{Gouy phase}~\cite{boyd_intuitive_1980} \begin{equation} \zeta(z) = \arctan \left( \frac{z \lambda}{\pi w_0^2} \right). \end{equation} \subsubsection{Tightly focused Gaussian beam} \functiondescription{f}{fields.gaussian} { {same as paraxial Gaussian.} } { {\textit{paraxial:} \textbf{False} (=default)} } Under tight focusing conditions, an additional component \(E_{0,z}\) parallel to the wave-vector (again assuming \(\mathbf{k} \parallel \mathbf{e}_z\)) can gain a substantial magnitude, which can be explained by Maxwell's divergence equation for \(\mathbf{E}\). This can be accounted for by adding the following correction term to the paraxial Gaussian (again assuming propagation along~\(Z\))~\cite{novotny_principles_2006} \begin{equation} E_{0,z}(x,y,z) = \frac{- 2 \mathrm{i}}{k w(z)^2} \left( \Delta x\, E_{0,x} + \Delta y\, E_{0,y} \right).
\label{eq:gaussianCorrectField} \end{equation} \subsubsection{Dipolar emitter} \functiondescription{f}{fields.dipole\_electric} { {\textit{x0, y0, z0:} position of dipole (in nm)} {\textit{mx, my, mz:} amplitude and direction of dipole-vector} } {} An electric dipole \(\mathbf{p}\) placed in a homogeneous environment at \(\mathbf{r}_0\) and oscillating at frequency \(\omega\) creates an electric field at \(\mathbf{r}\) which reads~\cite{agarwal_quantum_1975} \begin{equation}\label{eq:field_electric_dipolar_emitter} \mathbf{E}_{\text{p}} (\mathbf{r}, \mathbf{r}_0, \omega) = \frac{1}{\epsilon_{\text{env}}} \Big( \mathbf{I}\, k^2 + \mathbf{\nabla\nabla} \Big) G_{0} (\mathbf{r}, \mathbf{r}_0, \omega) \cdot \mathbf{p}(\omega) \end{equation} where \(\nabla\) acts on \(\mathbf{r}\), \(\mathbf{I}\) is the identity tensor, \(k\) the wavenumber and \(G_{0}\) the scalar vacuum Green's function (see Eq.~\eqref{eq:scalarGreenFunctionVac}). \subsubsection{Magnetic dipole emitter} \functiondescription{f}{fields.dipole\_magnetic} { {\textit{x0, y0, z0:} position of dipole (in nm)} {\textit{mx, my, mz:} amplitude and direction of dipole-vector} } {} Analogously, a magnetic dipole emitter \(\mathbf{m}\) at \(\mathbf{r}_0\) is the source of an electric field~\cite{agarwal_quantum_1975, wiecha_decay_2018} \begin{equation}\label{eq:field_magnetic_dipolar_emitter} \mathbf{E}_{\text{m}} (\mathbf{r}, \mathbf{r}_0, \omega) = \mathrm{i} k_0 \nabla \times G_{0} (\mathbf{r}, \mathbf{r}_0, \omega) \cdot \mathbf{m}(\omega). \end{equation} \section{Solver}\label{sec:solver} \subsection{Internal fields} Solving the primary scattering problem is usually the starting point of a GDM simulation.
The self-consistent calculation of the fully retarded (complex) electric field inside the nano-structure is done by inversion of equation~\eqref{eq:definitionMainEquationGDM} via \functiondescription{f}{core.scatter} { {\textit{sim}: instance of \object{core.simulation}} } {} Usually, the underlying \textit{scipy} libraries used for inversion are multi-thread parallelized, making use of all cores of multi-core CPUs. For distributed systems like most modern computing clusters, a multi-processing parallel version of \function{core.scatter} is also implemented in pyGDM, which uses MPI to simultaneously calculate several wavelengths of a spectrum in parallel processes: \functiondescription{f}{core.scatter\_mpi} { {\textit{sim}: instance of \object{core.simulation}} } {} \paragraph*{Note:} Each MPI process calculates a single wavelength using the parallelized \textit{scipy}-routines. In this doubly parallelized way, spectral simulations can be carried out very rapidly on multi-node computing clusters. \function{core.scatter\_mpi} requires the ``mpi4py'' package. \begin{figure*}[t] \centering \includegraphics{inversion_timing_methods} \caption{ %
(a) Timings of a pyGDM-simulation of a spherical dielectric particle as a function of the number of meshpoints for the different available solvers. Solid lines are power-law fits, confirming \(p=3\) for full inversion methods and \(p=2\) for CG (the fitted power \(p\) is given in the legend). (b) Memory requirement (in megabytes) as a function of the number of meshpoints for the different solvers. %
All benchmarks were performed on a single core of an AMD FX-8350 CPU. }\label{fig:Theory_inversion_in_GDM} \end{figure*} \begin{figure}[t] \centering \includegraphics{inversion_timing_methods_speedup_parallel} \caption{ Speedup of the GDM-calculation using the multi-threaded parallelization capability of the available solvers.
Benchmark performed on an Intel E5-2680 10-core CPU.}\label{fig:parallelization_speedup} \end{figure} \subsubsection{Direct inversion} \begin{itemize} \item argument \textit{method}: ``lu'' (default), ``scipyinv'', ``superlu'', ``pinv2'' (all require \textit{scipy}), ``numpyinv'' or ``dyson'' (only \textit{numpy}) \end{itemize} In pyGDM\ the inversion of \(\mathbf{M}\) (eq.~\eqref{eq:definitionMforInversion}) is by default performed with LU-decomposition (using the implementation in \textit{scipy}). This should be the fastest solver for full inversion (see Fig.~\ref{fig:Theory_inversion_in_GDM}a) and furthermore shows excellent multi-threaded parallelization scaling, as can be seen in figure~\ref{fig:parallelization_speedup}. An extensive explanation of LU-decomposition and details on its implementation can be found for example in Ref.~\cite{press_numerical_2007} (chapter~2.3). Other \textit{scipy} solvers can be used in pyGDM\ and, if for any reason \textit{scipy} is not available, the ``numpyinv'' and ``dyson'' methods are alternatives which do not require \textit{scipy}. The solver ``dyson'' uses a sequence of Dyson's equations~\cite{martin_generalized_1995} and comes with pyGDM. Since it does not depend on any external libraries it should work in every case; however, it will usually be significantly slower than the third-party solvers. An advantage of ``dyson'' is its relatively low memory requirement, because the Dyson sequence allows an in-place inversion of the matrix (see figure~\ref{fig:Theory_inversion_in_GDM}b). A detailed description of the latter algorithm can be found in Ref.~\cite{colas_des_francs_optique_2002} (chapter~2.4). We note that LU inversion (or in some cases conjugate gradients, e.g. for dense spectra on single-core systems, see below and appendix) is the preferred technique in pyGDM\ due to its high efficiency (see Fig.~\ref{fig:Theory_inversion_in_GDM}a).
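The ``factor once, solve many'' pattern behind the LU solver can be illustrated with a minimal pure-Python Doolittle decomposition (no pivoting, toy matrix; for illustration only, not the \textit{scipy} implementation used by pyGDM):

```python
# Doolittle LU decomposition: the O(N^3) factorization A = L U is done a
# single time; each right-hand side then costs only O(N^2) via forward and
# back substitution.  No pivoting -- a toy for well-conditioned matrices.

def lu_decompose(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                       # forward substitution: L y = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):             # back substitution: U x = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]]
L, U = lu_decompose(A)                       # expensive step, done once
for b in ([1.0, 0.0, 0.0], [0.0, 3.0, -1.0]):    # cheap per-RHS solves
    x = lu_solve(L, U, b)
    residual = [sum(A[i][j] * x[j] for j in range(3)) - b[i] for i in range(3)]
    assert all(abs(r) < 1e-12 for r in residual)
```

This is also why the factorization amortizes well over spectra with many incident-field configurations at a fixed wavelength.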
\subsubsection{Conjugate gradients} \begin{itemize} \item argument \textit{method}: ``cg'' or ``pycg'' \end{itemize} Sometimes it is not necessary to calculate the full inverse of the matrix \(\mathbf{M}\). Often it is sufficient to know only the \textit{result} of the matrix-vector product \(\mathbf{M}^{-1} \mathbf{E}_0\). It turns out that under certain circumstances, iterative approaches such as the ``conjugate gradients'' method lead to very accurate approximations of this matrix-vector product in significantly less time than the inversion of \(\mathbf{M}\). For a detailed description and further information on the conjugate gradients solver, see appendix~\ref{sec:conjugate_gradients}. \subsection{Decay-rate of dipolar emitters}\label{sec:e_m_decay} \functiondescription{f}{core.decay\_rate}{}{} The Green's Dyadic formalism can be used not only to obtain scattered electro-magnetic fields. It also gives direct access to the modification of the decay rate of electric or magnetic dipolar transitions due to the presence of polarizable materials in their vicinity. \paragraph*{Note: } The decay rates are proportional to the photonic LDOS~\cite{carminati_electromagnetic_2015}, hence the values obtained from the calculation of the relative decay rates \(\Gamma / \Gamma_0\) are identical to the relative LDOS (specifically the \textit{partial} LDOS, i.e. its electric or magnetic component, and/or partial with respect to specific dipole orientations). \subsubsection{Electric dipole}\label{sec:e_decay} The effect is intuitively understandable for an electric dipole transition \(\mathbf{p}\): it is a consequence of the enhancement (or weakening) of the electric near-field due to the dielectric contrast and the resulting back-action of the radiated field on the dipole itself.
It is possible to derive an integral equation describing the decay rate \(\Gamma_{e}\) of the dipole transition~\cite{carminati_electromagnetic_2015, wiecha_decay_2018}: \begin{multline}\label{eq:gamma_electric} \Gamma_{e}({\bf r}_{0},\omega) = \Gamma_{e}^{0}(\omega) \\ \times \left(1+ \frac{3}{2k_{0}^{3}}{\bf u}\cdot \text{Im}\big( {\boldsymbol{\cal G}}_{p}^\text{EE}({\bf r}_{0},{\bf r}_{0},\omega)\big)\cdot{\bf u} \right) \; , \end{multline} where \begin{multline}\label{eq:gamma_magnetic_SpEE} {\boldsymbol{\cal G}}_{p}^\text{EE}({\bf r},{\bf r}_{0},\omega) = \int\limits_{V}\text{d}\mathbf{r}'\int\limits_{V}\text{d}\mathbf{r}''\mathbf{G}_{0}^{\text{EE}}({\bf r},{\bf r}',\omega) \\ \cdot \boldsymbol\chi_{\text{e}}({\bf r}',\omega) \cdot \mathbf{K}({\bf r}',{\bf r}'',\omega)\cdot\mathbf{G}_{0}^\text{EE}({\bf r}'',{\bf r}_{0},\omega) \; \end{multline} and \(\Gamma_{e}^{0}(\omega) = 4k_{0}^{3} p^{2}/3\hbar\) is the decay rate of the electric dipole transition in vacuum. \(\mathbf{r}_0\) is the location of the dipolar transition (outside the nanostructure). \(\mathbf{u}\) denotes the dipole orientation, \(p\) its amplitude. \(\mathbf{K}\) is the generalized propagator (see Eq.~\eqref{eq:definitionGeneralizedPropoagator} or Ref.~\cite{martin_generalized_1995}). For the numerical implementation, the integrals in Eq.~\eqref{eq:gamma_magnetic_SpEE} become sums over the mesh-points of the discretized nano-object(s).
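The discretization of the two volume integrals can be sketched with scalar stand-ins for the \(3\times 3\) dyadic blocks (in pyGDM every factor below is a tensor; all numerical values here are hypothetical toys):

```python
# Discretized double integral of Eq. (SpEE) as a toy: the two volume
# integrals become double sums over the N mesh points, each volume element
# d r' contributing one cell volume V_cell.  SCALAR stand-ins replace the
# 3x3 dyadic blocks G0, chi and K.

N      = 4
V_cell = 2.0
chi    = 0.5 + 0.1j                                          # toy susceptibility
g0     = [0.3 - 0.2j, 0.1 + 0.4j, -0.2 + 0.1j, 0.05 - 0.3j]  # toy G0(r0, r_i)
K      = [[1.0 if i == j else 0.05 for j in range(N)] for i in range(N)]

Gp = sum(g0[i] * chi * K[i][j] * g0[j] * V_cell ** 2
         for i in range(N) for j in range(N))

# With K held fixed, the double sum is linear in chi:
Gp2 = sum(g0[i] * (2 * chi) * K[i][j] * g0[j] * V_cell ** 2
          for i in range(N) for j in range(N))
assert abs(Gp2 - 2 * Gp) < 1e-12
```

The double loop makes the quadratic cost in the number of mesh points explicit; in practice the sum is evaluated as matrix products over the stored generalized propagator.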
The propagator \(\mathbf{G}_{0}^\text{EE}\) can be found by identification using Eq.~\eqref{eq:field_electric_dipolar_emitter} and the following equation for the field of an electric dipole \(\mathbf{p}\) at \(\mathbf{r}_0\)~\cite{agarwal_quantum_1975} \begin{equation}\label{eq:gamma_magnetic_SEE} {\bf E}_{0}({\bf r},\omega) = \mathbf{G}_{0}^\text{EE}({\bf r},{\bf r}_{0},\omega)\cdot{\bf p}(\omega) \; . \end{equation} \paragraph*{Note:} \(\mathbf{G}_{0}^\text{EE}\) is given for particles in a homogeneous environment by the Green's Dyad of equation~\eqref{eq:vacuumGreenDyadicFunction}. In analogy to the scattering simulations, it can easily be extended to more complex environments, such as an infinite substrate (see section~\ref{sec:green_method}). \subsubsection{Magnetic dipole}\label{sec:m_decay} The decay rate of a \textit{magnetic} dipole transition close to \textit{non-magnetic} materials is also influenced by the presence of the structure. Such a magnetic-magnetic response function, associated with a structure that has no direct magnetic response, arises from the \textit{electric} field emitted by the magnetic dipole, which interacts with the material and finally induces again a magnetic field via the curl of the electric field. In metallic nanostructures, circular plasmonic currents can also lead to significant magnetic near-field enhancements~\cite{huidobro_magnetic_2014}.
In complete analogy to equation~\eqref{eq:gamma_electric}, the magnetic decay rate \(\Gamma_{m}\) reads \begin{multline}\label{eq:gamma_magnetic} \Gamma_{m}({\bf r}_{0},\omega)=\Gamma_{m}^{0}(\omega) \\ \times \left(1+ \frac{3}{2k_{0}^{3}}{\bf u}\cdot \text{Im}\big( {\boldsymbol{\cal G}}_{p}^\text{HH}({\bf r}_{0},{\bf r}_{0},\omega)\big)\cdot{\bf u} \right) \; , \end{multline} where \begin{multline}\label{eq:gamma_magnetic_SpHH} {\boldsymbol{\cal G}}_{p}^\text{HH}(\mathbf{r},\mathbf{r}_{0},\omega) = \int\limits_{V}\text{d}\mathbf{r}'\int\limits_{V}\text{d}\mathbf{r}'' \mathbf{G}_{0}^\text{HE}({\bf r},{\bf r}',\omega) \\ \cdot \boldsymbol\chi_{\text{e}}({\bf r}',\omega) \cdot\, \mathbf{K}({\bf r}',{\bf r}'',\omega)\cdot\mathbf{G}_{0}^\text{EH}({\bf r}'',{\bf r}_{0},\omega) \; \end{multline} and \(\Gamma_{m}^{0}(\omega) = 4k_{0}^{3} m^{2}/3\hbar\) is the decay rate of the magnetic transition in vacuum. \(\mathbf{u}\) and \(m\) are the magnetic dipole orientation and amplitude, respectively, and \(\mathbf{K}\) is again the generalized propagator. In the same way as for the electric dipole, \(\mathbf{G}_{0}^\text{HE}\) and \(\mathbf{G}_{0}^\text{EH}\) can be found using Eq.~\eqref{eq:field_magnetic_dipolar_emitter} together with the electric field of a magnetic dipole \(\mathbf{m}\) at \(\mathbf{r}_0\) \begin{equation}\label{eq:gamma_magnetic_SEH} {\bf E}_{0}({\bf r},\omega) = \mathbf{G}_{0}^\text{EH}({\bf r},{\bf r}_{0},\omega)\cdot{\bf m}(\omega) \end{equation} and \begin{equation}\label{eq:gamma_magnetic_Efield_m} \mathbf{G}_{0}^\text{HE}({\bf r},{\bf r}',\omega) = \mathbf{G}_{0}^\text{EH}({\bf r}',{\bf r},\omega) \; . \end{equation} For a detailed derivation of the formalism see reference~\cite{wiecha_decay_2018}. For a comparison of our code with experimental results, see reference~\cite{wiecha_simultaneous_2018}.
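Once the field-susceptibility tensor at the emitter position is known, evaluating the relative decay rate is a one-line quadratic form. A minimal pure-Python sketch, with a toy purely imaginary isotropic tensor as input (all numbers are illustrative, not pyGDM output):

```python
import math

# Evaluate Gamma/Gamma_0 = 1 + (3 / (2 k0^3)) * u . Im(G_p) . u for a
# 3x3 field-susceptibility tensor G_p and a unit orientation vector u.

def relative_decay_rate(Gp, u, k0):
    imGu = [sum(Gp[i][j].imag * u[j] for j in range(3)) for i in range(3)]
    return 1.0 + 3.0 / (2.0 * k0 ** 3) * sum(u[i] * imGu[i] for i in range(3))

k0 = 2.0 * math.pi / 500.0                   # wavenumber for lambda = 500 nm
c  = 1e-7                                    # toy isotropic LDOS enhancement
Gp = [[1j * c if i == j else 0.0 for j in range(3)] for i in range(3)]

# For an isotropic Im(G_p), the result is orientation-independent:
for u in ([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]):
    assert abs(relative_decay_rate(Gp, u, k0)
               - (1.0 + 3.0 * c / (2.0 * k0 ** 3))) < 1e-12
```

For an anisotropic tensor the same quadratic form yields different rates for different dipole orientations, which is exactly the partial-LDOS dependence mentioned above.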
\subsubsection{LDOS inside a nanostructure}\label{sec:decay_inside} The decay rate (and hence the LDOS) at a position \({\bf r}_{0,s}\) \textit{inside} the structure can also be obtained via equation~\eqref{eq:gamma_electric} (for the electric case), using the field susceptibility \({\boldsymbol{\cal G}}_{p,s}^\text{EE}\) inside the structure. It is related to the generalized propagator (assuming an isotropic medium with \(\chi_{\text{e}} = \text{Tr}\,(\boldsymbol\chi_{\text{e}})/3\)) by \begin{equation} {\boldsymbol{\cal G}}_{p,s}^\text{EE}({\bf r}_i,{\bf r}_j,\omega) = \frac{\mathbf{K}({\bf r}_i,{\bf r}_j,\omega) - \mathbf{I}}{\chi_{\text{e}}\, V_{\text{cell}}} \, , \end{equation} where \(\mathbf{r}_i\) and \(\mathbf{r}_j\) are positions of nano-particle meshpoints. For the magnetic LDOS inside the structure, an ``electric-magnetic'' mixed generalized propagator needs to be calculated, which relates any incident \textit{electric} field to the scattered \textit{magnetic} field inside the structure. This is not implemented in pyGDM\ so far, but could easily be made available by using the mixed tensor \(\mathbf{G}_{0}^{\text{EH}}\) instead of the electric-electric Green's Dyadic function in the inversion problem defined by equation~\eqref{eq:LippmannSchwingerVolumeDiscretization}. \paragraph*{Note:} The frequency shift of the emitter due to the presence of a nano-structure (``Lamb shift'') can be obtained in analogy to the decay rate, via the real part of the field susceptibility \cite{lassalle_lamb_2017}. This is however not (yet) implemented in pyGDM. \section{Post-Processing}\label{sec:postprocessing} \subsection{Linear effects} After the main simulation (calculation of the fields inside the structure, or of the decay rate), the information can be further processed to obtain experimentally accessible physical quantities.
\subsubsection{Near-field outside the nanostructure}\label{sec:Theory_NF_outside_structure} \functiondescription{f}{linear.\allowbreak nearfield} { {\textit{sim}: instance of \object{core.simulation}} {\textit{field\_index}: index of field-configuration (see section~\ref{sec:tools_get_field_index}: \function{tools.\allowbreak get\_\allowbreak closest\_\allowbreak field\_\allowbreak index})} {\textit{MAP}: \textbf{list} of \(X\), \(Y\) and \(Z\) coordinates at which to evaluate the near-field} } {} \paragraph*{Electric field:} Via Eq.~\eqref{eq:LippmannSchwingerVolumeDiscretization} the electric field induced at any point \(\mathbf{r}\) outside the particle can be calculated from the electric polarization inside the structure. \paragraph*{Magnetic field:} The propagator \(\mathbf{G}_{0}^{\text{HE}}\) (see also equation~\eqref{eq:gamma_magnetic_SEH}) can be used to obtain the magnetic field outside the source region~\cite{girard_optical_1997}. \function{linear.\allowbreak nearfield} returns both the electric and the magnetic field amplitudes, for the scattered as well as for the total near-field (\(\mathbf{E}_{\text{tot}} = \mathbf{E}_{\text{scat}} + \mathbf{E}_0\)). \paragraph*{Note:} Alternatively, the \(\mathbf{B}\)-field may be calculated via finite differentiation: following Faraday's law of induction from Maxwell's equations (Eq.~\eqref{eq:MaxwellFourierrotE}), the magnetic field reads (for time-harmonic fields) \begin{equation} \mathbf{B}(\mathbf{r}, \omega) = \frac{\nabla \times \mathbf{E}(\mathbf{r}, \omega) }{\mathrm{i} k_0}. \end{equation} \subsubsection{Extinction, absorption and scattering cross-sections}\label{sec:Theory_Spectra_from_NF} \functiondescription{f}{linear.\allowbreak extinct} { {\textit{sim}: instance of \object{core.simulation}} } {} The linear response in the farfield can be characterized by the scattered and absorbed light intensity, the sum of which is called the ``extinction''.
Usually these values are given as cross sections \(\sigma_{\text{scat.}}, \sigma_{\text{abs.}}\) and \(\sigma_{\text{ext.}}\), which have units of area. The extinction and absorption cross sections can be calculated from the near-field in the discretized structure~\cite{draine_discrete-dipole_1988} \begin{equation}\label{eq:sigma_ext_from_nearfield} \sigma_{\text{ext}} = \frac{4 \pi k}{|E_0|^2} \sum\limits_{i=1}^{N_\text{cells}}\ \text{Im} \left( \mathbf{E}_{0,i}^* \cdot \mathbf{P}_i \right) \end{equation} and \begin{equation}\label{eq:sigma_abs_from_nearfield} \sigma_{\text{abs}} = \frac{4 \pi k}{|E_0|^2} \sum\limits_{i=1}^{N_\text{cells}}\ \left( \text{Im} \left( \mathbf{P}_i \cdot \mathbf{E}_i^* \right) - \frac{2}{3} k^3 |\mathbf{P}_i|^2 \right). \end{equation} \(\mathbf{E}_i\) and \(\mathbf{P}_i\) are the electric field and polarization at meshpoint \(i\), respectively, induced by an excitation field \(\mathbf{E}_{0,i}\). \(k\) is the wavenumber in the particle's environment. Complex conjugation is indicated with a superscript asterisk~(\(^*\)). Finally, the scattering cross section is the difference between extinction and absorption \begin{equation}\label{eq:sigma_scat_from_nearfield} \sigma_{\text{scat}} = \sigma_{\text{ext}} - \sigma_{\text{abs}}.
\end{equation} \subsubsection{Far-field pattern of the scattered light}\label{sec:linear_farfield} \functiondescription{f}{linear.\allowbreak farfield} { {\textit{sim}: instance of \object{core.simulation}} {\textit{field\_index}: index of field-configuration (see section~\ref{sec:tools_get_field_index}: \function{tools.\allowbreak get\_\allowbreak closest\_\allowbreak field\_\allowbreak index})} } {} The complex electric field in the far-field, radiated from an arbitrary polarization distribution, can be calculated using a corresponding Green's Dyad \( \mathbf{G}_{\text{ff}}\) (assuming dipolar emission from each of the \(N\) meshpoints): \begin{equation}\label{eq:FarfieldFromNearfield} \mathbf{E}_{\text{ff}}(\mathbf{r}) = \sum\limits_i^{N_{\text{cells}}} \mathbf{G}_{\text{ff}}(\mathbf{r}_i, \mathbf{r}) \cdot \mathbf{P}(\mathbf{r}_i). \end{equation} In vacuum, using equation~\eqref{eq:vacuumGreenDyadicFunction} with only the far-field term \(\mathbf{T}_1\), we can calculate the electric field at any point \(\mathbf{r}\) sufficiently far from the scatterer. A substrate can be included in the asymptotic tensor by means of an appropriate dyadic Green's function; an analytic approximation of a farfield-propagator for a layered system has been derived e.g. by Novotny~\cite{novotny_allowed_1997}. Making use of the superposition principle, the radiation of single dipoles via the propagator \(\mathbf{G}_{\text{ff}}\) can be generalized to the total far-field radiation of an ensemble of \(N\) dipole-emitters by simple summation of all meshpoints' contributions (see Eq.~\eqref{eq:FarfieldFromNearfield}). Note that the presence of the illuminated nano-structure is also fully taken into account in this scattering formalism, thanks to the self-consistent nature of the Green's method.
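The superposition sum of Eq.~\eqref{eq:FarfieldFromNearfield} can be sketched with a scalar stand-in for the far-field Green's dyad (the real \(\mathbf{G}_{\text{ff}}\) is a \(3\times 3\) tensor and may include a substrate contribution; positions, polarizations and the \(k^2\mathrm{e}^{\mathrm{i}kR}/R\) stand-in below are illustrative toys):

```python
import cmath
import math

# Scalar toy of the far-field superposition: each meshpoint radiates like
# a point dipole, G_ff ~ k^2 exp(i k R) / R, and all contributions are
# summed coherently at the observation point r.

def G_ff(r_i, r, k):
    R = math.dist(r_i, r)
    return k ** 2 * cmath.exp(1j * k * R) / R

def farfield(positions, P, r, k):
    """Coherent superposition of all meshpoint (dipole) contributions."""
    return sum(G_ff(r_i, r, k) * p_i for r_i, p_i in zip(positions, P))

k = 2.0 * math.pi / 500.0                    # wavenumber for lambda = 500 nm
r_obs = (0.0, 0.0, 1e6)                      # observation point, "far" away (nm)
pos = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0)]   # two meshpoint positions
P = [1.0 + 0.2j, 0.5 - 0.1j]                 # toy polarizations

# Superposition principle: the field of both dipoles equals the sum of the
# fields each dipole radiates alone.
total = farfield(pos, P, r_obs, k)
separate = farfield(pos[:1], P[:1], r_obs, k) + farfield(pos[1:], P[1:], r_obs, k)
assert abs(total - separate) < 1e-12
```

The phase factor \(\mathrm{e}^{\mathrm{i}kR}\) is what encodes the directionality of the scattered radiation: varying the observation direction changes the relative phases of the meshpoint contributions.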
Particularly in nano-structures with high absorption, equation~\eqref{eq:sigma_scat_from_nearfield} requires a high accuracy of the extinction and absorption cross-sections, and hence small discretization steps, which can be practically infeasible~\cite{draine_discrete-dipole_1988}. In such cases, equation~\eqref{eq:FarfieldFromNearfield} offers a more precise alternative to determine the scattering cross-section. A further drawback of the calculation of the scattering spectra from the near-field via Eq.~\eqref{eq:sigma_scat_from_nearfield} is obvious: these spectra do not contain any information about the directionality of the scattering. Using Eq.~\eqref{eq:FarfieldFromNearfield}, on the other hand, the spatial distribution and polarization of scattered light in the far-field can be obtained. In pyGDM, \function{linear.farfield} implements a Green's dyad including the contribution of an optional dielectric substrate (in a non-retarded approximation~\cite{novotny_allowed_1997}). \subsubsection{Heat generation} Having calculated the electric fields inside a nano-object, it is possible to compute the heat deposited inside the nanoparticle by an optical excitation, as well as the temperature rise in the vicinity of the structure~\cite{baffou_heat_2009}. \functiondescription{f}{linear.\allowbreak heat} { {\textit{sim}: instance of \object{core.simulation}} } {} The total heat generated inside the nanoparticle is obtained by integrating the product of the imaginary part of the material's permittivity and the electric field intensity over the particle volume: \begin{equation}\label{eq:heat_deposited} \begin{aligned} Q(\omega) =\ & \int\limits_V q(\mathbf{r}, \omega)\, \text{d}\mathbf{r} \\ %
=\ & \frac{\omega}{8\pi} \int\limits_V \mathrm{Im}\big(\epsilon (\mathbf{r})\big) \left| \mathbf{E}(\mathbf{r}, \omega) \right|^2 \text{d}\mathbf{r}.
\end{aligned} \end{equation} \functiondescription{f}{linear.\allowbreak temperature} { {\textit{sim}: instance of \object{core.simulation}} } {} The temperature rise at a position \(\mathbf{r}_{\text{probe}}\) outside the nanoparticle can be approximated from the heat \(q(\mathbf{r}, \omega)\) generated at each meshpoint (located at \(\mathbf{r}\)) via the thermal Poisson equation~\cite{baffou_thermoplasmonics_2010, teulle_scanning_2012}
\begin{equation}\label{eq:temp_rise_vicinity}
\begin{aligned}
\Delta T (\mathbf{r}_{\text{probe}}, \omega) = & \frac{1}{4\pi \kappa_{\text{env}}} \int\limits_V \Bigg( \frac{ q(\mathbf{r}, \omega) }{\left| \mathbf{r}_{\text{probe}} - \mathbf{r} \right|} \\ & + \bigg(\frac{\kappa_{\text{sub}} - \kappa_{\text{env}}}{\kappa_{\text{sub}} + \kappa_{\text{env}}}\bigg) \frac{ q(\mathbf{r}, \omega) }{\left| \mathbf{r}_{\text{probe}} - \mathbf{r}' \right|} \Bigg)\text{d}\mathbf{r}
\end{aligned}
\end{equation}
where \(\kappa_{\text{env}}\) and \(\kappa_{\text{sub}}\) are the heat conductivities of the environment and substrate, respectively, and \(\mathbf{r}'\) denotes the mirror image of \(\mathbf{r}\) with respect to the substrate interface. The second term in the integrand can be derived through a formalism similar to image charges in electro-dynamics and accounts for heat reflection at the interface of the substrate~\cite{baffou_thermoplasmonics_2010}. Eqs.~\eqref{eq:heat_deposited} and~\eqref{eq:temp_rise_vicinity} can for example be used to calculate raster-scan mappings of the deposited heat or the temperature increase as a function of a focused beam's focal spot position. Eq.~\eqref{eq:temp_rise_vicinity} can also be used to compute maps of the temperature increase above a nanostructure, by raster-scanning \(\mathbf{r}_{\text{probe}}\) under constant illumination conditions. \paragraph*{Note:} Equation~\eqref{eq:temp_rise_vicinity} assumes that the heat \(q\) generated by the optical excitation at each meshpoint induces a static heat distribution inside the nanoparticle.
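Numerically, both Eq.~\eqref{eq:heat_deposited} and Eq.~\eqref{eq:temp_rise_vicinity} reduce to plain sums over the meshpoints. A minimal sketch (illustrative names, prefactors taken literally, image source mirrored at the interface following the image-charge analogy; this is not pyGDM's implementation):

```python
import math

def heat_and_temperature(positions, efields, eps_im, omega, v_cell,
                         r_probe, kappa_env, kappa_sub, z_interface=0.0):
    """Discrete sums for Q (Eq. heat_deposited) and delta-T
    (Eq. temp_rise_vicinity), schematic."""
    # heat deposited per cell: omega/(8 pi) * Im(eps) * |E|^2 * cell volume
    q = [omega / (8 * math.pi) * e_im * sum(abs(c) ** 2 for c in E) * v_cell
         for e_im, E in zip(eps_im, efields)]
    Q = sum(q)                                      # total deposited heat
    refl = (kappa_sub - kappa_env) / (kappa_sub + kappa_env)
    dT = 0.0
    for q_i, r in zip(q, positions):
        # image source mirrored at the substrate interface (image-charge analogy)
        r_img = (r[0], r[1], 2 * z_interface - r[2])
        dT += q_i / math.dist(r_probe, r) + refl * q_i / math.dist(r_probe, r_img)
    return Q, dT / (4 * math.pi * kappa_env)
```

A substrate with higher heat conductivity than the environment gives a positive reflection factor and hence a larger temperature rise at the probe position, while the deposited heat itself is independent of the thermal environment.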
This approximation might become inaccurate in large nanoparticles made of materials with high heat conductivity (e.g. metals), in which the heat is rapidly redistributed inside the nanostructure~\cite{baffou_heat_2009}. If the temperature increase is evaluated at sufficiently large distances from the nano-object, Eq.~\eqref{eq:temp_rise_vicinity} is usually a good approximation also for larger metallic nano-objects~\cite{wiecha_local_2017}. ``Sufficiently large'' here means comparable to, or larger than, the size of the nanoparticle. \subsubsection{Dipolar emitter decay rate} \functiondescription{f}{linear.decay\_eval}{}{} The decay rate of magnetic or electric dipole emitters can be calculated within the GDM as described in section~\ref{sec:e_m_decay}. Via equation~\eqref{eq:gamma_electric}, the tensor \({\cal S}_{p}^\text{EE}\) (or \({\cal S}_{p}^\text{HH}\) using Eq.~\eqref{eq:gamma_magnetic}) can be used to calculate the decay rate of the transition for arbitrary orientations of the dipole. \function{core.decay\_\allowbreak rate} calculates the tensor \({\cal S}_{p}^\text{EE}\) or \({\cal S}_{p}^\text{HH}\) (for an electric, respectively magnetic, dipole emitter) at each user-defined dipole position and wavelength. The final evaluation of the decay rate is then done using \function{linear.decay\_eval} for a given dipole orientation and amplitude. The advantage of this two-step approach is that the generalized propagator needs to be computed only once, and the results of this expensive part of the simulation can be re-used for multiple dipole orientations and/or amplitudes.
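The cheap second step of this two-step procedure amounts to a contraction of the precomputed tensor with the dipole orientation. A schematic sketch, with the prefactors of Eq.~\eqref{eq:gamma_electric} set to one and made-up tensor values (not pyGDM's actual \function{linear.decay\_eval}):

```python
def relative_decay_rate(S_imag, u):
    """Contract a precomputed (imaginary part of a) field-susceptibility
    tensor with the dipole orientation u: Gamma/Gamma_0 = 1 + u . S . u
    (schematic prefactors)."""
    n = sum(x * x for x in u) ** 0.5
    u = [x / n for x in u]          # normalize the orientation vector
    return 1.0 + sum(u[a] * S_imag[a][b] * u[b]
                     for a in range(3) for b in range(3))

# the expensive step (computing S via the generalized propagator) is done
# once; the cheap contraction is then re-used for several orientations:
S_imag = [[0.2, 0.0, 0.0], [0.0, 0.05, 0.0], [0.0, 0.0, 0.4]]
rates = [relative_decay_rate(S_imag, u)
         for u in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
```

In vacuum the tensor vanishes and the relative decay rate reduces to one, as it must.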
\subsection{Non-linear effects}\label{sec:nonlinear_effects} \subsubsection{Two-photon photoluminescence / surface LDOS} \functiondescription{f}{nonlinear.tpl\_ldos}{}{} Having calculated the electric field distribution inside a nanoparticle, a simple model allows us to calculate the two-photon photoluminescence (TPL) signal generated by the excitation: we assume that the TPL is proportional to the square of the electric field intensity. We furthermore consider each meshpoint (at position \(\mathbf{r}\)) as an incoherent source of TPL, contributing to the total TPL with an intensity proportional to \(|\mathbf{E}(\mathbf{r}, \omega)|^4\). Integration over the nano-particle volume \(V\) yields the total TPL intensity~\cite{teulle_scanning_2012}: \begin{equation}\label{eq:Intensity_TPL} I_{\text{TPL}}(\mathbf{r}_{\text{focus}}, \omega) \propto \int\limits_{V} \left| \mathbf{E}(\mathbf{r}, \mathbf{r}_{\text{focus}}, \omega) \right|^4 \text{d}\mathbf{r}. \end{equation} Here we have added a further parameter, the focal spot position \(\mathbf{r}_{\text{focus}}\) of a focused illumination. By raster-scanning the focal position across the nano-structure, we can calculate \(2\)D scanning TPL-maps. This approach also allows us to approximate the photonic local density of states at the surface of the nanostructure \(\rho_{\text{sf}}(\mathbf{r}, \omega)\) on which the focused spot impinges (surface LDOS), using an unphysically tightly focused beam.
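Numerically, the integral in Eq.~\eqref{eq:Intensity_TPL} is again a plain sum over the meshpoints. A minimal sketch with the proportionality constant set to one (illustrative names, not the pyGDM implementation):

```python
def tpl_intensity(efields, v_cell):
    """Eq. (Intensity_TPL): incoherent sum of |E|^4 over all meshpoints,
    i.e. each cell contributes the square of its field intensity."""
    return v_cell * sum(sum(abs(c) ** 2 for c in E) ** 2 for E in efields)

# the quartic field dependence: doubling the internal field increases
# the TPL signal by a factor 2^4 = 16
I_1 = tpl_intensity([(1.0 + 0j, 1j, 0j)], 1.0)
I_2 = tpl_intensity([(2.0 + 0j, 2j, 0j)], 1.0)
```

This quartic scaling is what makes TPL maps so sensitive to local field hot-spots.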
In the case of a circularly polarized excitation, it is possible to rewrite the TPL intensity of Eq.~\eqref{eq:Intensity_TPL}~\cite{teulle_scanning_2012, viarbitskaya_tailoring_2013, viarbitskaya_plasmonic_2015}: \begin{multline}\label{eq:Intensity_TPL_LDOS} I_{\text{TPL}}(\mathbf{r}_{\text{focus}}, \omega) \propto \\ \int\limits_{V} \left| \mathbf{E}_0^{\rcirclearrow}(\mathbf{r}, \mathbf{r}_{\text{focus}}, \omega) \right|^4 \rho_{\text{sf}, \parallel}^2(\mathbf{r}, \omega) \text{d}\mathbf{r} \end{multline} where \(\mathbf{E}_0^{\rcirclearrow}\) is the incident electric field and \(\rho_{\text{sf}, \parallel}\) is the component of the LDOS in the plane parallel to the incident electric field vector. Let us now decrease the waist of the focused beam: in the limit of a spatial profile of \(\mathbf{E}_0^{\rcirclearrow}\) corresponding to a Dirac delta function, the square root of the TPL intensity Eq.~\eqref{eq:Intensity_TPL_LDOS} becomes proportional to the LDOS at the position of the focal spot \begin{equation}\label{eq:Intensity_TPL_propto_LDOS} I_{\text{TPL}}(\mathbf{r}_{\text{focus}}, \omega) \propto \rho_{\text{sf}, \parallel}^2(\mathbf{r}_{\text{focus}}, \omega). \end{equation} Consequently, a \(2\)D map of the LDOS can be calculated via a raster-scan simulation, which can be done very efficiently in pyGDM\ thanks to the generalized propagator. Using a linearly polarized incident field, it is furthermore possible to extract partial contributions to the LDOS for the corresponding polarization. \paragraph*{Note:} The ``surface''-LDOS is reproduced by Eq.~\eqref{eq:Intensity_TPL_propto_LDOS} in the limit of a contraction of \(\mathbf{E}_0^{\rcirclearrow}\) towards a Dirac delta function. However, due to the finite stepsize in the GDM, the beam waist cannot be reduced to an infinitely small value, hence this method remains approximate. Practical values for the waist must be at least a few times the discretization stepsize.
To obtain the exact LDOS, the calculation of the decay-rate is the method of choice (see also section~\ref{sec:e_m_decay}). \section{Visualization}\label{sec:visu} pyGDM\ includes several visualization tools for simple and rapid plotting of the simulation results. They are divided into functions for the visualization of \(2\)D representations and functions for \(3\)D plots. \subsection{2D visualization tools} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{visu_tools} \caption{ Visualization tools available in pyGDM, demonstrated on the example of a \(450\times 90\times 45\,\)nm\(^3\) (\(L\times W\times H\)) gold rod placed in vacuum. Plane wave illumination incident along \(-Z\), linear polarization along \(X\), \(\lambda=600\,\)nm. All plots show projections on the \(XY\)-plane. (a) the geometry (gold) and its surface contour (dashed blue), (b) the real part of the internal electric field and (c) the internal electric field intensity at the bottom of the rod. (d-g) show external fields, calculated on an \(800\times 800\,\)nm\(^2\) large area, \(30\,\)nm below the structure using \function{linear.nearfield}: (d) \textbf{E}-field real part, (e) isolines of the \textbf{E}-field, (f) electric field intensity and (g) magnetic field intensity. }\label{fig:tools_visu} \end{figure*} The available visualization functions are explained in the following; examples are given in Fig.~\ref{fig:tools_visu} using a simulation of a \(450\,\)nm\(\,\times\,90\,\)nm gold rod with stepsize \(d=15\,\)nm, excited with a plane wave at \(\lambda_0 = 600\,\)nm, linearly polarized along \(X\) and incident from the reader towards the paper (\(\mathbf{k} = -\mathbf{e}_z \, k\)). The plots show projections on the \(XY\) plane. \subsubsection{Structure geometry} \functiondescription{f}{visu.structure} { {\textit{sim}: instance of \object{core.simulation}} } {} Plot a \(2\)D projection of the simulated nano-particle geometry (see figure~\ref{fig:tools_visu}a, meshpoints in golden color).
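The kind of projection drawn by \function{visu.structure} can be sketched as follows. This is a toy re-implementation, not pyGDM's code: the meshpoint coordinates are simply collapsed along the viewing axis and duplicates are removed:

```python
def project_2d(geometry, projection="XY"):
    """Unique meshpoint positions of a 2D projection of a structure
    (schematic; geometry is a list of (x, y, z) meshpoint coordinates)."""
    keep = {"XY": (0, 1), "XZ": (0, 2), "YZ": (1, 2)}[projection]
    return sorted({(pt[keep[0]], pt[keep[1]]) for pt in geometry})

# two meshpoint layers stacked along Z project onto the same XY footprint:
geo = [(x, y, z) for x in (0, 15) for y in (0, 15) for z in (0, 15)]
footprint = project_2d(geo)
```

For a cubic mesh of stepsize \(s\), multiplying the number of footprint points by \(s^2\) also gives an estimate of the geometric cross-section (the ``footprint'' area discussed later in the tools section); this scaling is an assumption of the sketch, not pyGDM's exact algorithm.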
\functiondescription{f}{visu.structure\_\allowbreak contour} { {\textit{sim}: instance of \object{core.simulation}} } {} Plot a contour around a \(2\)D projection of the nano-particle, in other words the outline of the structure's outer surface (see figure~\ref{fig:tools_visu}, dashed blue line in (a), dashed white lines in (f-g)). \subsubsection{Plot field vectors} \(2\)D projections of the real or imaginary part of vector-fields (see figure~\ref{fig:tools_visu}d) can be plotted using \functiondescription{f}{visu.vectorfield} { {\textit{NF}: list containing the complex field (list of 6-tuples \([x_i, y_i, z_i, E_{x,i}, E_{y,i}, E_{z,i}]\), see section~\ref{sec:tools_get_field_as_list}: \function{tools.\allowbreak get\_\allowbreak field\_\allowbreak as\_\allowbreak list})} } {} Alternatively, the function can be called using: \functiondescription{f}{visu.vectorfield\_\allowbreak by\_\allowbreak fieldindex} { {\textit{sim}: instance of \object{core.simulation}} {\textit{field\_index}: index of field-configuration (see section~\ref{sec:tools_get_field_index}: \function{tools.\allowbreak get\_\allowbreak closest\_\allowbreak field\_\allowbreak index})} } {} The latter is intended for directly plotting fields inside the simulated particle via the \object{core.simulation} object (see figure~\ref{fig:tools_visu}b). \subsubsection{Field lines (``stream-plot'')} Isolines of the field amplitude can be plotted using \functiondescription{f}{visu.vectorfield\_\allowbreak fieldlines} { {\textit{NF}: list containing the complex field} } {} For an example, see figure~\ref{fig:tools_visu}e. \subsubsection{Scalar field representation (color-plot)} Color-plots are well suited to illustrate a scalar representation of the electric or magnetic field. This can be used to represent either the real/imaginary part of an individual field component (such as \(E_x\)), or the field intensity (\(|\mathbf{E}|^2\), \(|\mathbf{B}|^2\)).
In pyGDM, such a plot can be drawn using \functiondescription{f}{visu.vectorfield\_\allowbreak color}{}{} By default, the electric field intensity is plotted, as shown in figure~\ref{fig:tools_visu}f-g. Alternatively, to easily plot the field inside the nanoparticle (see figure~\ref{fig:tools_visu}c), the same type of color-plots can be generated by calling \functiondescription{f}{visu.\allowbreak vectorfield\_\allowbreak color\_\allowbreak by\_\allowbreak fieldindex}{}{} The above functions actually plot scalar fields; the names ``vectorfield\dots'' refer to the fact that vectorial data is taken as input. If the data is available as a scalar field (i.e. in tuples \((x,y,z,S)\) with \(S\) being a scalar value), one can use the following wrapper to \function{visu.vectorfield\_\allowbreak color} \functiondescription{f}{visu.scalarfield}{}{} \begin{figure}[tp] \centering \includegraphics{visu3d_tools} \caption{ Examples illustrating the pyGDM\ 3D visualization tools on the same data as shown in figure~\ref{fig:tools_visu}a-c. (a) structure geometry, (b) electric field (real part) and (c) intensity of the electric field \(|\mathbf{E}|^2\) inside the gold nanorod. }\label{fig:visu3d_tools} \end{figure} \begin{figure*}[tp] \centering \includegraphics{comparison_mie_dielectric_sphere_structs_cube} \hspace{.75cm} \includegraphics{comparison_mie_dielectric_sphere_structs_hex} \includegraphics{comparison_mie_dielectric_sphere_cube} \hspace{.75cm} \includegraphics{comparison_mie_dielectric_sphere_hex} \caption{ Comparison of the extinction cross-section of a dielectric sphere (\(n_{\text{sphere}}=2.0\)) of diameter \(D=300\,\)nm, placed in vacuum and illuminated by a linearly polarized plane wave. Calculated either using pyGDM\ with different numbers of meshpoints (blue lines) or Mie theory (dashed red line). (a) cubic mesh, (b) hexagonal compact mesh.
At the top, the number of meshpoints \(N\), the nominal stepsize \(s\) and an illustration of the discretization are given; the latter shows \(XY\)-slices through the sphere's center. }\label{fig:examples_mie_dielectric} \end{figure*} \subsubsection{Farfield backfocal plane image} Plot the ``backfocal plane'' image scattered to the farfield from the results obtained by \function{linear.\allowbreak farfield} (see section~\ref{sec:linear_farfield}) \functiondescription{f}{visu.farfield\_\allowbreak pattern\_2D} { {\textit{theta}: list containing the polar angles \(\vartheta\)} {\textit{phi}: list containing the azimuthal angles \(\varphi\)} {\textit{I}: list containing the field intensity at \((\vartheta, \varphi)\)}} {} An example illustrating the output of the farfield plotting function is shown in figure~\ref{fig:gold_splitring_farfield}. \subsubsection{Animate fields} The time-dependence of time-harmonic fields is expressed by harmonic oscillations at the fixed frequency \(\omega\). Following equation~\eqref{eq:harmonicComplexWave}, we can directly calculate the time-dependent field \(\mathbf{\tilde{E}} (\mathbf{r}, \omega, t)\) at time \(t\) from the complex fields \(\mathbf{E} (\mathbf{r}, \omega)\) obtained by the GDM. pyGDM\ provides a function for simple animations of the electromagnetic fields, which allows visualizing the time-dependent optical response of nanostructures. \functiondescription{f}{visu.animate\_\allowbreak vectorfield} { {\textit{NF}: list containing the complex field} } {} Quiver-plots of the field vectors, the real/imaginary part of individual field components or the field intensity (as color-plots for the latter two) may be animated. \subsection{3D visualization tools} Similar tools as for two-dimensional data visualization are available in the \function{visu3d} module for generating 3D~figures.
The convention for the function names is the same as in the 2D visualization module in order to make switching between 2D and 3D representations as easy as possible. Available plotting functions are \function{structure}, \function{vectorfield}, \function{vectorfield\_\allowbreak by\_\allowbreak fieldindex}, \function{vectorfield\_\allowbreak color}, \function{vectorfield\_\allowbreak color\_\allowbreak by\_\allowbreak fieldindex} and \function{scalarfield}. For a short explanation, see the equivalent 2D-plotting functions, described above. Examples demonstrating the visual output of the 3D-plotting functions are shown in figure~\ref{fig:visu3d_tools} (on the same data as in figure~\ref{fig:tools_visu}a-c). Finally, 3D-animations of the time-harmonic fields can also be generated, using \function{visu3d.\allowbreak animate\_\allowbreak vectorfield}. \section{Tools} Apart from visualization, pyGDM\ also includes several tools that make post-processing as simple as possible. \subsection{2D-projections of nano-structures} In order to calculate a two-dimensional projection of a nano-structure, use \functiondescription{f}{tools.get\_geometry\_\allowbreak 2d\_projection}{}{} \subsection{Geometric cross-section} The geometric cross-section of a nano-structure is the area occupied by its projection onto a specific plane (i.e. its ``footprint''). It is often used as a reference value, for example for the scattering efficiency. It can be calculated using (in units of nm\(^2\)) \functiondescription{f}{tools.get\_geometric\_\allowbreak cross\_section}{}{} By default, the projection on the \(XY\) plane is used; this can be changed via the parameter ``projection''. \subsection{Surface of a nano-structure} For surface effects like surface second-harmonic generation (surface SHG), the meshpoints on the surface of a nanostructure are of particular interest.
They can be obtained using \functiondescription{f}{tools.get\_surface\_\allowbreak meshpoints}{}{} The function also returns the surface-normal unit vectors for each surface-meshpoint. \begin{figure}[tp] \centering \includegraphics{comparison_mie_au_si_sphere} \caption{ Comparison of the extinction, absorption and scattering cross-sections of (a) a gold sphere of diameter \(D=50\,\)nm and (b) a silicon sphere of diameter \(D=150\,\)nm. Both spheres are placed in vacuum and illuminated by a linearly polarized plane wave. Calculated either using pyGDM\ (solid lines) or by Mie theory (dotted lines). In both cases, a hexagonal compact mesh is used. }\label{fig:examples_mie_gold_silicon} \end{figure} \subsection{Calculating spectra}\label{sec:tools_spectra} Calculating spectra of different physical quantities is a very common task in nano-optics. pyGDM\ therefore provides tools to make this task very simple. The field configurations in a \object{simulation} object that are available for several wavelengths can be obtained via \functiondescription{f}{tools.\allowbreak get\_\allowbreak possible\_\allowbreak field\_\allowbreak params\_\allowbreak spectra}{}{} These configurations can then be used together with post-processing routines (such as \function{linear.extinct} for the extinction cross-section) to calculate a spectrum of some physical quantity. This can be done using \functiondescription{f}{tools.\allowbreak calculate\_\allowbreak spectrum}{}{} \subsection{Calculating raster-scans}\label{sec:tools_rasterscan} Because pyGDM\ uses the concept of a generalized propagator, it is particularly suited for monochromatic problems with varying illumination conditions, such as raster-scan simulations (varying beam position). If a simulation with a large number of focused beam-positions has been performed, the available incident field configurations (e.g.
wavelengths or polarizations) corresponding to full raster-scan maps can be obtained using \functiondescription{f}{tools.\allowbreak get\_\allowbreak possible\_\allowbreak field\_params\_\allowbreak rasterscan}{}{} As in the case of a spectrum, a scalar mapping can be computed from a raster-scan simulation, in which each raster-scan position is attributed a value according to an evaluation function (like \function{linear.extinct}, \function{linear.heat}, \dots). Such maps can be obtained using \functiondescription{f}{tools.calculate\_\allowbreak rasterscan}{}{} \begin{figure}[tp] \centering \includegraphics{si_sphere_fw_bw_scattering} \caption{ Forward (FW, red) and backward (BW, blue) scattering spectra and FW/BW ratio (green dotted) for a silicon sphere of diameter \(D=150\,\)nm in vacuum. A hexagonal compact mesh is used. }\label{fig:examples_silicon_sphere_fw_bw} \end{figure} \begin{figure}[tp] \centering \includegraphics{farfield_pattern_example} \caption{ (a) sketch of the simulation geometry: A dipolar emitter, radiating at \(\lambda = 1\,\)\textmu m is placed in the center of a gold split-ring resonator (in vacuum). (b-c) qualitative far-field patterns (backfocal plane images) of the scattering of the quantum emitter coupled to the plasmonic structure for dipole orientations along \(X\) and \(Y\) in (b), respectively (c). }\label{fig:gold_splitring_farfield} \end{figure} \section{Examples} In the following section, we show several examples of pyGDM\ simulations. In these examples, we reproduce analytical Mie theory and results from selected publications, or simply demonstrate pyGDM\ features. \subsection{Comparison to Mie theory} Curved surfaces are generally demanding when it comes to discretization. A popular benchmark problem for electro-dynamical numerical methods is therefore the sphere, for which an analytical solution is given by Mie theory. In the first examples we thus compare pyGDM\ simulations to Mie theory.
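The effect of the meshing can be made quantitative with a toy cubic-mesh discretization of a sphere (schematic; pyGDM's actual structure generators are not reproduced here):

```python
def sphere_cubic_mesh(diameter, step):
    """Cubic-mesh discretization of a sphere (schematic): keep all lattice
    points whose distance from the center does not exceed the radius."""
    r = diameter / 2.0
    n = int(r / step) + 1
    return [(i * step, j * step, k * step)
            for i in range(-n, n + 1)
            for j in range(-n, n + 1)
            for k in range(-n, n + 1)
            if (i * i + j * j + k * k) ** 0.5 * step <= r]

# halving the stepsize: N grows roughly like (D/s)^3, and N * s^3
# approaches the sphere volume pi/6 * D^3
coarse = sphere_cubic_mesh(300.0, 30.0)
fine = sphere_cubic_mesh(300.0, 15.0)
```

Halving the stepsize thus increases the number of meshpoints by roughly a factor of eight, which is the trade-off between meshing accuracy and computational cost visible in figure~\ref{fig:examples_mie_dielectric}.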
\subsubsection{Dielectric nano-sphere} Using \function{linear.extinct}, we calculate the extinction cross-section \(\sigma_{\text{ext}}\) (equal to the scattering cross-section for this lossless particle) of a dielectric sphere of diameter \(D=300\,\)nm in vacuum with fixed, purely real refractive index \(n=2\). Results are shown in figure~\ref{fig:examples_mie_dielectric} for different stepsizes and for (a) a cubic mesh as well as (b) a hexagonal compact lattice. In comparison with Mie theory, we find that the GDM offers a very good approximation already with rather coarse meshing. Furthermore, we note that the case of a spherical particle seems to be better described using a hexagonal mesh: the agreement with the analytical solution is slightly better for comparable numbers of meshpoints. \subsubsection{Dispersive nano-spheres (Au, Si)} In figure~\ref{fig:examples_mie_gold_silicon} we compare spherical particles of dispersive materials. Fig.~\ref{fig:examples_mie_gold_silicon}a shows spectra corresponding to a \(D=50\,\)nm gold nano-sphere in vacuum, figure~\ref{fig:examples_mie_gold_silicon}b gives spectra for a \(D=150\,\)nm silicon sphere. Simulated spectra are calculated using \function{linear.extinct} and compared to Mie theory. The resonance positions from Mie theory are reproduced with excellent agreement. \begin{figure}[t] \centering \includegraphics{heat_generation_example} \caption{ Spectrally resolved heat generation within a gold prism of side length \(L=115\,\)nm and height \(H=12\,\)nm. Incident polarization along one edge of the prism. }\label{fig:examples_heat_spectrum} \end{figure} \begin{figure*}[t] \centering \includegraphics{decayrate_example} \caption{ Decay rate of an electric (a-c) and a magnetic (d-f) dipole transition close to a small dielectric nano-cube (\(n=2\), side length~\(21\,\)nm, in vacuum) relative to their respective vacuum decay rate \(\Gamma_0\). The dipoles emit at \(\lambda_0 = 500\,\)nm and are scanned in a \(500 \times 500\,\)nm\(^2\) large plane \(15\,\)nm above the particle.
Dipole orientations along \(0X\) (a,d), \(0Y\) (b,e) and \(0Z\) (c,f). Scale bar is \(100\)\,nm. }\label{fig:examples_decay_rate} \end{figure*} \subsection{Other examples} \subsubsection{Forward / backward scattering spectra} The far-field propagation routine \function{linear.farfield} can be used to calculate directionality-resolved scattering spectra. This can be done by integrating the intensity in the far-field over limited solid angles. In figure~\ref{fig:examples_silicon_sphere_fw_bw}, the example of a Si sphere with diameter \(D=150\,\)nm is used again. This time, we calculate the scattering via the \function{linear.farfield} routine (instead of using \function{linear.extinct}). The forward (FW) and backward (BW) scattering spectra, as well as the FW/BW ratio, are in excellent agreement with the results of reference \cite{fu_directional_2013}. \subsubsection{Far-field radiation pattern} The function \function{linear.farfield} can also be used to obtain the far-field intensity distribution, comparable to experimental backfocal plane images. To reproduce results published in reference \cite{hancu_multipolar_2014}, we place a dipolar emitter (\(\lambda_0 = 1\,\)\textmu m) in the center of a gold split-ring resonator and calculate the scattering to the far-field of the coupled system (for simplicity we consider vacuum as environment). The geometry of the considered arrangement is depicted in Fig.~\ref{fig:gold_splitring_farfield}a. The dipole is oriented either along \(X\) (red) or along \(Y\) (blue); the corresponding radiation patterns are shown in figures~\ref{fig:gold_splitring_farfield}b and~c, respectively. We can indeed reproduce the dipole-orientation dependent directionality of the scattering from the coupled system. \begin{figure*}[t] \centering \includegraphics{rasterscan_example} \caption{ Thermoplasmonic rasterscan simulations.
From left to right: TPL; total deposited heat; temperature rise \(150\,\)nm above the center of a gold rhombus (side length \(500\,\)nm, top angle \(60^{\circ}\)) as function of the focal spot position of the incident beam. For the first three columns, the rhombus lies in homogeneous water. The right column shows the temperature rise for the structure in water, but lying on a glass substrate (used heat conductivities are \(\kappa_{\text{water}}=0.6\,\)W/mK and \(\kappa_{\text{glass}}=0.8\,\)W/mK). The incident wavelength is \(\lambda_0 = 750\,\)nm, the linear polarization angle is (a) \(\vartheta=0^{\circ}\), (b) \(\vartheta=30^{\circ}\), (c) \(\vartheta=60^{\circ}\) and (d) \(\vartheta=90^{\circ}\). Scalebar in (a) is \(200\,\)nm, the position of the gold rhombus is indicated by a white dashed contour line (plotted using \function{visu.\allowbreak structure\_\allowbreak contour}). }\label{fig:examples_rasterscan} \end{figure*} \begin{figure*}[t] \centering \includegraphics{LDOS_example} \caption{ LDOS rasterscan simulations: Partial LDOS above a planar U-shaped dielectric structure (\(n=2.0\), \(H=60\,\)nm), calculated using the imaginary part of the field susceptibility (via the decay-rate of a dipolar emitter) or a raster-scanned focused beam (``TPL-method''). From left to right: At \(\Delta Z = 60\,\)nm and \(\Delta Z = 30\,\)nm above the structure surface, at the height of the top-most mesh-point layer inside the structure and using the focused-beam approximation technique (see section ``TPL''). The wavelength is \(\lambda_0 = 600\,\)nm, the partial LDOS is shown for (a) \(\vartheta=0^{\circ}\) (\(X\)-direction), (b) \(\vartheta=90^{\circ}\) (\(Y\)-direction) and (c) the total LDOS in the structure plane. The scale bar is \(200\,\)nm, the position of the structure is indicated by white dashed contour lines (plotted using \function{visu.\allowbreak structure\_\allowbreak contour}).
}\label{fig:LDOS_example} \end{figure*} \begin{figure*}[t] \centering \includegraphics{sketch_models_eo} \caption{ (a) Illustration of the evolutionary optimization scheme. (b-c) sketches of the gold antenna geometry models used for the evolutionary optimization examples. Free parameters for the rectangular geometry (b) are the length \(L\) and width \(W\) of the rectangle and an offset \((\Delta x, \Delta y)\) for the structure position with respect to the origin (indicated by a small red cross). Free parameters for the cross-like geometry (c) are the lengths \(L_1, L_2\) and widths \(W_1, W_2\) of the two rectangular components, forming the cross. }\label{fig:eo_cycle_models} \end{figure*} \subsubsection{Polarization conversion} The polarization of the scattered light can also be analyzed using \function{linear.farfield}. In figure~\ref{fig:examples_polarconversion} we demonstrate polarization conversion by an L-shaped gold antenna with perpendicular arms of equal dimensions (cf. Refs.~\cite{black_optimal_2014, wiecha_polarization_2017}). An L-shaped plasmonic antenna (in vacuum) with arm dimensions \(L=210\,\)nm, \(W=H=45\,\)nm (see inset in figure~\ref{fig:examples_polarconversion}) is illuminated by a plane wave of linear polarization along one antenna arm (here along \(X\)). The scattered intensity is shown for two different output polarizations in blue (\(\mathbf{E}_{\text{scat}}\parallel X\)) and red (\(\mathbf{E}_{\text{scat}}\parallel Y\)), the latter corresponding to a polarization-converted scattered field, which is strongest if the incident wavelength lies spectrally in between the pure modes (the pure modes correspond to polarization angles of \(\pm 45^{\circ}\), see e.g. Ref.~\cite{black_optimal_2014}). \subsubsection{Heat generation} To demonstrate the capabilities of pyGDM\ to model nano-optical thermal effects, we reproduce results published in Ref.~\cite{baffou_heat_2009}.
A gold prism of side length \(L=115\,\)nm and height \(H=12\,\)nm is illuminated by a plane wave, linearly polarized along a side of the prism. The prism is placed on a glass substrate (\(n_{\text{subst}}=1.45\)) and is surrounded by water (\(n_{\text{env}}=1.33\)). The total deposited heat \(Q\) from an incident power density of \(1\,\)mW\(/\)\textmu m\(^2\) is shown in figure~\ref{fig:examples_heat_spectrum} as a function of the wavelength. \subsubsection{Decay rate of dipole transition} The modification of the decay rate of an electric and a magnetic dipolar transition close to a very small dielectric nano-particle is demonstrated in figure~\ref{fig:examples_decay_rate} (compare also with Ref.~\cite{wiecha_decay_2018}). The dipolar emitter (\(\lambda_0=500\,\)nm) is raster-scanned in the \(XY\) plane at \(\Delta z=15\,\)nm above a dielectric nano-cube (\(n=2\)) of side-length \(D = 21\,\)nm, placed in vacuum. At each position in the raster-scan, the relative decay rate modification with respect to the vacuum value \(\Gamma_0\) is calculated. A noteworthy observation is the much narrower confinement of the features in the case of an electric dipole compared to the magnetic transition. Furthermore, the magnitude of the decay rate variation is also much stronger for the electric dipole. Both phenomena can be attributed to the more ``direct'' interaction of an electric dipole with the nano-structure, compared to the ``indirect'' magnetic response of the nano-particle, which is itself \textit{non-magnetic} (see also section~\ref{sec:e_m_decay}). \subsubsection{Rasterscan simulation: TPL / heat / temperature} To demonstrate a rasterscan simulation, we calculate, as a function of a focused beam's focal spot position and for several incident linear polarizations: the two-photon photoluminescence (TPL) signal, the total deposited heat \(Q\) and the temperature rise at \(150\,\)nm above the center of a flat gold rhombus.
The object is assumed to lie in water with \(n_{\text{water}}=1.33\) and a thermal conductivity of \(\kappa_{\text{water}} = 0.6\,\)W/mK. The temperature rise is calculated either for the rhombus in a homogeneous water environment, or in water lying on a glass substrate (using \(n_{\text{glass}}=1.5\), \(\kappa_{\text{glass}}=0.8\,\)W/mK). The rhombus dimensions are defined by a side length of \(L=500\,\)nm, a height of \(H=20\,\)nm and a top (and bottom) angle of \(60^{\circ}\). A linearly polarized focused plane wave (\(\lambda_0=750\,\)nm) of spotsize \(w=200\,\)nm is used, with the power density set to \(1\,\)mW\(/\)\textmu m\(^2\). The rasterscans, consisting of \(50\times 50\) focal spot positions, are shown in figure~\ref{fig:examples_rasterscan} for different angles of the linear polarization of the fundamental field. We can clearly observe the correlation between the TPL and the heat and temperature mappings. We also see that the temperature rise is slightly stronger if a glass substrate is present. This is a result of heat reflection at the glass surface. \subsubsection{Rasterscan simulation: LDOS} In figure~\ref{fig:LDOS_example} we show rasterscan simulations of the photonic LDOS above a U-shaped dielectric planar structure. The length is \(800\)\,nm in the \(X\)-direction, \(400\,\)nm in the \(Y\)-direction, its height is \(60\,\)nm and the bar width is \(180\,\)nm. Fig.~\ref{fig:LDOS_example}a shows the partial LDOS for \(X\)-oriented dipole emitters, (b) the case of \(Y\) orientation and (c) the total LDOS in the structure plane. From left to right, the LDOS is shown at decreasing distance to the structure (\(60\,\)nm, \(30\,\)nm and \(0\,\)nm to the top surface). In the rightmost column, the surface LDOS is calculated using the ``TPL''-method (using a spotsize of \(w=100\,\)nm, see also section~\ref{sec:nonlinear_effects}). Comparing the LDOS at the top surface layer with the TPL-method, the general trends are reproduced.
The differences are not very surprising, since the calculated quantities are not exactly the same. The TPL method gives a measure of the energy that can be coupled into the structure at the respective surface position using a focused beam. It is therefore only non-zero if the focused beam intersects with the structure. The LDOS corresponds to the efficiency of the radiative coupling between a dipolar emitter and the structure and is non-zero also outside the structure. \begin{figure*}[t] \centering \includegraphics{eo_examples_farfield} \caption{ Evolutionary optimization of the dimensions of a rectangular plasmonic gold antenna for different optimization targets. The optimum geometry is shown in the top panels (each panel shows a \(400\times 400\,\)nm\(^2\) area). The corresponding scattering spectra are shown in the bottom panels for plane wave illumination with \(X\) and \(Y\) linear polarization (blue and red lines, respectively). The left ticks in each spectrum denote the scattering cross section \(\sigma_{\text{scat}}\), the ticks on the right-hand side give the corresponding scattering efficiency (\(Q_{\text{scat}} = \sigma_{\text{scat}} / \sigma_{\text{geo}}\)). (a) maximization of \(Q_{\text{scat}}\) for \(X\) polarized illumination at \(\lambda=800\,\)nm. (b) maximization of \(Q_{\text{scat}}\) for \(X\) polarized illumination at \(\lambda=1000\,\)nm. (c) maximization of \(Q_{\text{scat}}\) for \(Y\) polarized illumination at \(\lambda=1000\,\)nm. (d) maximization of \(\sigma_{\text{scat}}\) for \(Y\) polarized illumination at \(\lambda=1000\,\)nm. }\label{fig:examples_eo_farfield} \end{figure*} \begin{figure}[tp] \centering \includegraphics{eo_examples_nearfield} \caption{ Evolutionary optimization of the dimensions and the position of a rectangular plasmonic gold antenna for maximum \(\mathbf{E}\)-field intensity enhancement at the location (a) \(\mathbf{r}_{\text{target}} = (0,0)\ [\text{nm}]\) and (b) \(\mathbf{r}_{\text{target}} = (-150,100)\ [\text{nm}]\).
The \(z\)-coordinate of \(\mathbf{r}_{\text{target}}\) is fixed to \(30\,\)nm above the upper surface of the structure. Shown areas are \(600\times 600\,\)nm\(^2\). }\label{fig:examples_eo_nearfield} \end{figure} \section{Evolutionary optimization of nanostructure geometries} \subsection{Evolutionary optimization} A distinctive feature of pyGDM\ is the \function{EO} module, provided together with the main pyGDM\ toolkit. The purpose of the \function{EO} module is to find nanostructure geometries that perform a certain optical functionality in the best possible way. This is also known as the ``inverse problem'' \cite{macias_application_2004, odom_multiscale_2012}. We try to achieve this goal by formulating the optical property as an optimization problem which takes the geometry of the particle as input. Such a problem usually results in a complex, non-analytical objective function and hence cannot be solved by classical optimization methods like variants of the ``Newton-Raphson method''. We therefore tackle the problem using evolutionary optimization (EO) algorithms, which mimic natural selection to find ideal solutions to complex (often non-analytical) problems. The initial step is to define a ``population'' of random parameter-sets for the problem. These ``individuals'' are then evolved through a cycle of ``reproduction'' (mixing parameters between the individuals and application of random changes) and ``selection'' (problem evaluation and discarding of weak solutions). After a sufficient number of iterations, an optimum parameter-set for the problem has hopefully been found. The evolutionary optimization cycle is depicted in figure~\ref{fig:eo_cycle_models}a. Unfortunately, convergence can in principle never be guaranteed in EO; it is therefore probably the most critical point in evolutionary optimization strategies.
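The reproduction--selection cycle described above can be illustrated with a minimal, self-contained sketch: a toy elitist loop in plain Python with a made-up quadratic ``fitness'' standing in for an optical target. This is only a conceptual illustration; pyGDM's actual \function{EO} module relies on PyGMO instead.

```python
import random

def evolve(fitness, bounds, pop_size=20, n_gen=200, sigma=10.0, seed=1):
    """Toy (mu+lambda)-style evolutionary loop: mutate, evaluate, keep the fittest."""
    rng = random.Random(seed)
    # initial "population" of random parameter-sets within the bounds
    pop = [[rng.uniform(lo, hi) for (lo, hi) in bounds] for _ in range(pop_size)]
    for _ in range(n_gen):
        # "reproduction": mutate every individual with Gaussian noise (clamped to bounds)
        children = [[min(max(x + rng.gauss(0.0, sigma), lo), hi)
                     for x, (lo, hi) in zip(ind, bounds)]
                    for ind in pop]
        # "selection": evaluate parents and children, keep the best pop_size individuals
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

# made-up fitness: best "antenna" when length L and width W hit target dimensions
fitness = lambda p: -((p[0] - 110.0) ** 2 + (p[1] - 45.0) ** 2)
best = evolve(fitness, bounds=[(10.0, 500.0), (10.0, 500.0)])
# best should land close to (110, 45)
```

A real pyGDM problem replaces the toy fitness by a full GDM simulation of, e.g., the scattering cross-section, which is why each generation is much more expensive than in this sketch.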
To ensure the credibility of the optimization results, a good stop-criterion and/or careful testing of the convergence and reproducibility of the solution for different initializations are crucial. For details on EO, we refer to the related literature, e.g. Ref.~\cite{schwefel_evolution_1993}. \subsection{EO in pyGDM} pyGDM\ can be used to optimize particle geometries for nano-optical problems via evolutionary optimization. Our approach consists of three main ingredients: \begin{enumerate} \item[1.] The \textbf{structure-model}: Constructs a particle geometry as a function of a set of input-parameters which will be the free parameters for the optimization algorithm. It furthermore contains the simulation setup (via an instance of \object{core.simulation}). It is defined by a class inherited from \end{enumerate} \functiondescription{o}{EO.models.BaseModel} { {\textit{sim:} instance of \object{core.simulation} } } {} \begin{enumerate} \item[2.] The \textbf{problem}: Defines the optimization target. This will usually be an optical property of the particle such as the scattering cross-section or its near-field enhancement. It is defined by a class inherited from \end{enumerate} \functiondescription{o}{EO.problems.BaseProblem} { {\textit{sim:} instance of \object{core.simulation} } } {} \begin{enumerate} \item[3.] The \textbf{EO algorithm}: Finally, the algorithm used to solve the optimization problem. \end{enumerate} For (3.) we use the PyGMO / pagmo toolkit \cite{biscani_global_2010}. PyGMO not only offers a large spectrum of EO algorithms, it can furthermore distribute the population of solutions on several ``islands'' within the so-called ``Generalized island model''. This allows very easy scaling of the evolutionary optimization on multi-processor architectures \cite{izzo_generalized_2012}. \textit{Note:} In this context, the use of the generalized propagator in pyGDM\ is a clear asset.
Optimization problems with the incident field shape and/or polarization as variable and a fixed structure geometry can be solved very efficiently. This includes problems like near-field shaping in adaptive optics \cite{brixner_ultrafast_2006}. \subsection{Multi-objective optimization} pyGDM's \function{EO}-module is also capable of treating multi-objective optimization problems, by internally using the corresponding PyGMO API. In other words, it is possible to search for nano-structure geometries optimizing \textit{multiple} target properties simultaneously. Such evolutionary multi-objective optimization (EMO) can in principle be done in two ways: The first approach consists in combining the multiple target values into one single fitness function, hence recasting the problem as a single-objective optimization. In this case, the critical part is the construction of an appropriate fitness-value, which is usually not trivial at all. In the second approach, one searches for the set of ``non-dominated'' or ``Pareto optimum'' solutions, which is often called the ``Pareto front''. It consists of solutions that are all optimal in the sense that an improvement in one of the target functions necessarily leads to a decrease in at least one of the other optimization targets. The obvious advantage is that the individual objectives can be used \textit{as-is}, without the need to merge them into a single fitness-function. On the other hand, the latter approach additionally requires the selection of a single optimum solution from the set of Pareto optimum solutions. For a detailed introduction to EMO, see e.g. Ref.~\cite{deb_multi-objective_2001}. \subsection{EO-Examples} \begin{figure}[tp] \centering \includegraphics{eo_examples_multi_objective} \caption{ Multi-objective optimization of double-resonant plasmonic antennas made from gold.
Large plot: Pareto front found from a concurrent maximization of \(Q_{\text{scat}}\) at \(\lambda_1=800\,\)nm and \(\lambda_2=1200\,\)nm, for polarizations along~\(X\) and~\(Y\), respectively (\(Q_{\text{scat}} = \sigma_{\text{scat}} / \sigma_{\text{geo}}\)). Top: Spectra for~\(X\) (blue) and \(Y\)-polarized (red) illumination of three selected structures on the Pareto front, shown as insets (labeled by numbers 1-3). }\label{fig:examples_emo} \end{figure} We demonstrate the evolutionary optimization toolkit ``\function{EO}'' on some simple but illustrative problems. \subsubsection{Maximize scattering cross-section or scattering efficiency} For a first demonstration, we want to optimize the shape of a rectangular gold-antenna in order to obtain maximum scattering at a certain wavelength and for a fixed angle of the linear incident polarization. The free parameters are the length \(L\) and width \(W\) of the plasmonic rectangle (see Fig.~\ref{fig:eo_cycle_models}b). The position of the rectangle (\(\Delta x = \Delta y = 0\)), the stepsize (\(s=15\,\)nm) of the cubic mesh and the height of the antenna (\(H=45\,\)nm) are fixed. We run the EO of the rectangular shape with different optimization targets, the respective final best solutions are shown in figure~\ref{fig:examples_eo_farfield}: In (a) the scattering efficiency \(Q_{\text{scat}}\) (i.e. the scattering cross-section \(\sigma_{\text{scat}}\) divided by the geometrical cross-section \(\sigma_{\text{geo}}\)) is maximized for an incident plane wave with wavelength \(\lambda_0=800\,\)nm and linear polarization of \(\mathbf{E}_0\) along \(X\). In (b) maximum \(Q_{\text{scat}}\) is searched for \(\lambda_0=1000\,\)nm and \(\mathbf{E}_0 \parallel X\). In (c) \(Q_{\text{scat}}\) is again maximized for \(\lambda_0=1000\,\)nm but a perpendicular polarization angle, hence \(\mathbf{E}_0 \parallel Y\). 
Finally, (d) shows an optimization of the scattering cross-section \(\sigma_{\text{scat}}\) (instead of \(Q_{\text{scat}}\)), with an otherwise identical configuration as in (c). The first observation is that the optimization is indeed capable of adjusting the size of the antenna such that the surface plasmon resonance occurs at the target wavelength. We also observe that while the optimization of the scattering efficiency (\(Q_{\text{scat}}\), given at the right of each plot) leads to thin rectangles with low geometric cross section, the optimization of the scattering cross section (\(\sigma_{\text{scat}}\), given at the left of each plot) leads to a structure of maximum allowed dimensions. This leads to a cross-section \(\sigma_{\text{scat}}\) about twice as large as for the other antennas. The scattering efficiency \(Q_{\text{scat}}\), on the other hand, is significantly lower (by about a factor of \(5\)) compared to the optimizations shown in figure~\ref{fig:examples_eo_farfield}a-c. \subsubsection{Maximize electric field intensity} In a second example we want to find a structure that maximizes the electric field intensity at a specific point \(\mathbf{r}_{\text{target}}\), \(30\,\)nm above the structure surface. As the structure to be optimized, we again use the rectangular gold-antenna of variable length and width with the same configuration as in the first example. Additionally, we introduce the offsets \(\Delta x\) and \(\Delta y\) as free parameters, shifting the rectangle with respect to the origin of the coordinate system (see Fig.~\ref{fig:eo_cycle_models}b). The structure is illuminated by a plane wave (\(\lambda_0=800\,\)nm), linearly polarized along \(X\). We run the optimization for two different \(\mathbf{r}_{\text{target}}\). The results are shown in figure~\ref{fig:examples_eo_nearfield}.
In both runs, the optimization found a plasmonic dipole antenna resonant at the incident wavelength, and shifted its position such that the hot-spot of maximum field enhancement lies at \(\mathbf{r}_{\text{target}}\). \subsubsection{EMO: Double resonant plasmonic antenna} In a final example, we show how multiple objectives can be optimized concurrently in a single optimization, by calculating the Pareto-front. For the demonstration, we try to obtain structures that scatter light at two different wavelengths for perpendicular polarization angles of the incident plane wave. We choose a simple cross-like geometry model (structure placed in vacuum), consisting of the four free parameters \(L_1, L_2\) and \(W_1, W_2\) (see Fig.~\ref{fig:eo_cycle_models}c). The optimization goal is to simultaneously maximize \(Q_{\text{scat}}\) for (1) a wavelength \(\lambda_0 = 800\,\)nm and an incident polarization along \(X\) and (2) \(\lambda_0 = 1200\,\)nm and an incident polarization along \(Y\). The Pareto-front obtained by the evolutionary optimization is shown in figure~\ref{fig:examples_emo}. Scattering spectra of selected structures are shown at the top; their geometries are illustrated in the insets, labeled (1)-(3). The optimization indeed found structures which maximize either one of the scattering-targets ((1) and (3)), or scatter light similarly strongly for both target configurations (structure (2)). Note that the Pareto-front is not very smooth. In addition, the model seems not to be sufficiently flexible to obtain structures with equal \(Q_{\text{scat}}\) for both target conditions. A more general structure model could probably provide better solutions for the problem. \paragraph*{Note:} Further nano-photonic EO problems tackled using the pyGDM\ toolkit can be found in Refs.~\cite{wiecha_decay_2018, wiecha_linear_2016, wiecha_evolutionary_2017, girard_designing_2018, wiecha_multi-resonant_2018}.
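The Pareto-dominance criterion underlying this last example can be made concrete with a short, generic sketch (our own illustration in plain Python, independent of pyGDM and PyGMO; the candidate values are made up):

```python
def pareto_front(points):
    """Return the non-dominated subset of 2-objective points (maximization).

    A point p is dominated if some other point q is at least as good in
    both objectives and strictly better in at least one of them.
    """
    def dominated(p, q):
        return (q[0] >= p[0] and q[1] >= p[1]) and (q[0] > p[0] or q[1] > p[1])
    return [p for p in points if not any(dominated(p, q) for q in points)]

# hypothetical (Q_scat @ 800nm, Q_scat @ 1200nm) pairs for five candidate structures
candidates = [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0), (2.0, 2.0), (4.0, 1.0)]
front = pareto_front(candidates)  # (2.0, 2.0) is dominated by (3.0, 3.0)
```

Every point remaining on the front trades one objective against the other, which is exactly the property of the solutions plotted in figure~\ref{fig:examples_emo}.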
\section{Conclusion}\label{sec:conclusion} In conclusion, we presented a Python toolkit for electro-dynamical simulations in nano-optics, based on a volume discretization approach, the Green dyadic method. While other techniques like FEM may offer better accuracy, the main strength of pyGDM\ is the efficient treatment of large monochromatic problems with many illumination configurations, like raster-scan simulations. Such calculations can be performed very efficiently in pyGDM\ thanks to the concept of the generalized propagator. Furthermore, its simplicity is a great advantage of pyGDM. The high-level Python API as well as many tools for rapid data analysis and visualization render standard nano-optical simulations very easy. Finally, the evolutionary optimization submodule is a unique feature, which allows optimizing nanostructure geometries for specific target optical properties. Scripts to reproduce all of the examples shown above can be found online, together with further, extensive documentation. \section{Appendix -- Accuracy, possible system size and limitations}\label{sec:appendix_accuracy} \paragraph{Limitations} The limit for the number \(N\) of discretization meshpoints depends mainly on the amount of RAM available in the machine. The memory requirement rises with \((3N)^2\) (figure~6b). The computation time even rises proportionally to \(N^3\) (see also figure~6a), so at some point, the speed will be limiting as well. This effectively limits the number of meshpoints to \(\approx 10000-15000\). \paragraph{Accuracy, large systems} To yield a reasonable accuracy, the discretization stepsize has to be sufficiently small (on the order of \(10\,\)nm for plasmonics and dielectrics of refractive index \(n \lesssim 3 \)). For dielectrics with higher refractive index, the discretization should be further refined to yield accurate results.
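The meshpoint limit quoted in the ``Limitations'' paragraph follows from a back-of-the-envelope estimate (our own sketch, assuming a dense \(3N\times 3N\) matrix of double-precision complex entries; the true footprint depends on the chosen solver):

```python
def gdm_memory_gb(n_meshpoints):
    """Rough RAM estimate (in GB) for the dense coupling matrix of N meshpoints."""
    n = 3 * n_meshpoints          # three field components per meshpoint
    return n * n * 16 / 1e9       # one complex128 entry occupies 16 bytes

# the ~10000-15000 meshpoint limit corresponds to roughly 14-32 GB of RAM
mem_10k = gdm_memory_gb(10_000)   # ~14.4 GB
mem_15k = gdm_memory_gb(15_000)   # ~32.4 GB
```

Direct inversion additionally needs workspace of the same order, so in practice the usable limit is reached somewhat earlier than the raw matrix size suggests.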
However, if the user is aware of the fact that the agreement will be only qualitative, approximate simulations are possible with a larger discretization. In consequence, the memory requirement limits the amount of material that can be simulated with good accuracy to (as a very rough estimate) \(\approx 10000 \times 10^3\,\)nm\(^3\). \paragraph{Small systems} When the size of the system is reduced, the discretization can be made finer within the limit of the feasible \(10\)-\(15\)k meshpoints. Hence, the accuracy improves. One must only be aware that pyGDM is a purely classical Maxwell solver. Therefore, the size of the system should not be reduced down to scales where quantum effects would occur (usually \(\lesssim 1-2\)\,nm). \section{Appendix -- Technical details}\label{sec:technical_detail} \subsection{Reference system in pyGDM\ simulations} In pyGDM\ an asymptotic Green's dyad is used which describes not only a substrate (``layer 1'', refractive index \(n_1\)), but also an additional cladding layer at a variable height above the substrate (``layer 3'', \(n_3\)). The nanoparticle is placed in the sandwich layer (``layer 2'', \(n_2\)), i.e. in-between layers~1 and~3. The distance between the substrate and the cladding layer can be specified by a \textit{spacing} parameter. By default, \(n_3 = n_2\), so if the index of the cladding layer ``3'' is not specified in the constructor of \object{structures.struct}, a reference system composed of a homogeneous environment above a dielectric substrate is assumed. To run a simulation without a substrate it is sufficient to simply set \(n_1 = n_2\). The geometry of the pyGDM\ reference system is illustrated in figure~\ref{fig:reference_system}. The non-retarded dyad used in pyGDM\ can be derived using the image-charge method and gives good approximations for dielectric interfaces of low refractive index.
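For orientation (our own addition, following the standard electrostatic image-charge construction), the interface is effectively replaced by a mirror source whose strength is given by the static reflection factor
\begin{equation}
\Delta = \frac{\epsilon_1 - \epsilon_2}{\epsilon_1 + \epsilon_2}\,,
\end{equation}
where \(\epsilon_1\) and \(\epsilon_2\) denote the permittivities of substrate and environment. The surface contribution to the non-retarded dyad scales with \(\Delta\), which stays small for weak index contrast; this is consistent with the approximation working best at low-index dielectric interfaces.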
A fully retarded dyad for the 3-layer environment can also be calculated, which becomes necessary for instance at metallic interfaces \cite{colas_des_francs_enhanced_2005, marty_near-field_2012}. This might be implemented in future versions of pyGDM. \subsection{Structure geometry} The structure geometry in pyGDM\ is defined as a list of \((x,y,z)\) tuples, defining the positions of the meshpoints on a regular grid, either cubic or hexagonal compact. pyGDM\ comes with some generators for common structures in nano-photonics. These geometries are available in the \function{structures} submodule. An overview of the available structures is shown in figure~\ref{fig:available_structures}. Additional structures can easily be implemented following the example of the available generator functions. We suggest using the mesher-routines \function{structures.\_\allowbreak meshCubic} and \function{structures.\_\allowbreak meshHexagonalCompact}. Additionally, planar structures can be generated from the brightness contrast of an image file, using \functiondescription{f}{structures.image\_to\_struct}{}{} This may be used to create structures from a lithography-mask layout or also from scanning electron- or atomic force-microscopy images, to simulate ``real'' geometries from an experimental sample. Finally, structure-geometries can be manipulated (rotated, ``center of gravity'' shifted to the origin) using \functiondescription{f}{structures.rotate\_XY}{}{} \functiondescription{f}{structures.center\_struct}{}{} \begin{figure*}[tp] \centering \includegraphics[width=\linewidth]{structures} \caption{ Top view of some geometries available in the \function{structures} submodule. The corresponding generator function names are given on top of the example plots.
}\label{fig:available_structures} \end{figure*} \subsection{Material dispersion} pyGDM\ provides some basic dispersion models in its \function{materials} submodule: \object{materials.dummy} generates a material object which returns a constant dielectric function. ``\object{silicon}'', ``\object{gold}'' and ``\object{alu}'' provide the commonly used dispersion data for the respective materials. However, usually one would use tabulated data for the dispersion. This can be done in pyGDM\ via \functiondescription{o}{materials.fromFile}{}{} All dispersion containers ``\object{materials.class}'' provide an \function{epsilon(wavelength)} attribute, which is a function that returns the (complex) permittivity at \textit{wavelength} (in nm). By default, the tabulated data is interpolated linearly using \textit{numpy}'s ``\textit{interp}''. Optionally, higher order spline interpolation is supported (based on \textit{scipy.interpolate.interp1d}). Note that the latter may cause problems with python's ``pickling'' technique, particularly in combination with the \function{EO} module. \subsection{Minimum working example script} \begin{lstlisting}[language=Python, caption={Minimum example script. The plots generated by the script are shown in Fig.~\ref{fig:example_script_output}.}, label={lst:simple_example_script}] from pyGDM2 import structures from pyGDM2 import materials from pyGDM2 import fields from pyGDM2 import core from pyGDM2 import visu ## --- simulation setup --- ## structure: sphere of 120nm radius, ## constant dielectric function (n=2), ## placed in vacuum step = 20 # nm geometry = structures.sphere(step, R=6, mesh='cube') material = materials.dummy(2.0) norm = structures.get_normalization(mesh='cube') n1 = n2 = 1.0 struct = structures.struct(step, geometry, material, n1,n2, norm) ## incident field: plane wave, 500nm, lin. pol. 
|| x field_generator = fields.planewave wavelengths = [500] # nm kwargs = dict(theta=[0.0], kSign=[-1]) efield = fields.efield(field_generator, wavelengths=wavelengths, kwargs=kwargs) ## create simulation object sim = core.simulation(struct, efield) ## --- run the simulation --- core.scatter(sim) ## --- plot the near-field inside the sphere --- ## using first (of one) field-config (=index 0) visu.vectorfield_by_fieldindex(sim, 0, projection='XY') visu.vectorfield_by_fieldindex(sim, 0, projection='XZ') visu.vectorfield_by_fieldindex(sim, 0, projection='YZ') \end{lstlisting} \subsection{Further tools available in pyGDM} \subsubsection{Save and load simulations}\label{sec:tools_save_load} To save and reload pyGDM simulations, the following functions are available. Saving and loading relies on python's ``pickle'' technique: \functiondescription{f}{tools.save\_simulation}{}{} \functiondescription{f}{tools.load\_simulation}{}{} \subsubsection{Show information about simulations}\label{sec:tools_print_sim_info} To print detailed information about a pyGDM\ simulation, the following function can be used \functiondescription{f}{tools.print\_sim\_info}{}{} alternatively, simply use ``\textbf{print} \object{sim\_object}''. \subsubsection{Generate coordinate list for \(2\)D map}\label{sec:tools_generate_nf_map} To calculate \(2\)D data in pyGDM\ (e.g. near-field maps, c.f. figure~\ref{fig:tools_visu}d-g), we provide a tool to easily generate the \(2\)D grid (in cartesian \(3\)D space) for such data: \functiondescription{f}{tools.\allowbreak generate\_\allowbreak NF\_map}{}{} \subsubsection{Get index of specific field configuration}\label{sec:tools_get_field_index} pyGDM\ uses keyword dictionaries to store multiple configurations of the incident field (such as several wavelengths, polarizations, focused beam positions). 
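Conceptually, this bookkeeping amounts to enumerating all permutations of the field keywords and searching them for the closest match. The following generic sketch illustrates the idea; the helper names here are made up for illustration and are not pyGDM's internal code:

```python
import itertools

def enumerate_configs(wavelengths, kwargs):
    """All permutations of the field parameters, in a fixed, reproducible order."""
    keys = sorted(kwargs)
    combos = itertools.product(wavelengths, *(kwargs[k] for k in keys))
    return [dict(wavelength=w, **dict(zip(keys, vals)))
            for w, *vals in combos]

def closest_index(configs, **search):
    """Index of the stored configuration closest to the requested values."""
    cost = lambda c: sum(abs(c[k] - v) for k, v in search.items())
    return min(range(len(configs)), key=lambda i: cost(configs[i]))

# two wavelengths x two incident angles -> four field configurations
configs = enumerate_configs([500, 600], {'theta': [0.0, 90.0]})
idx = closest_index(configs, wavelength=590, theta=80.0)  # -> index 3
```

The index returned by such a lookup plays the role of the \textit{field-index} used throughout the post-processing routines.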
All possible permutations of the given keywords are stored in the \object{core.simulation} object and are attributed an index, by which they can be unambiguously identified. In order to get the index of the field parameters that most closely match specific search values (like a wavelength), one can use: \functiondescription{f}{tools.\allowbreak get\_\allowbreak closest\_\allowbreak field\_\allowbreak index}{}{} All field-configurations available in a simulation, sorted by their \textit{field-index}, can be obtained by \functiondescription{f}{tools.\allowbreak get\_\allowbreak field\_\allowbreak indices}{}{} \subsubsection{Cubic stepsize from discretized structure} If the particle discretization is generated with a program other than the pyGDM\ meshing functions (available in the \function{structures} submodule), it might be helpful to determine the stepsize of the structure. We provide a function that computes the stepsize of a cubic mesh by calculating the closest distance between any two meshpoints (using \textit{scipy.spatial.distance.pdist}). \functiondescription{f}{tools.get\_step\_\allowbreak from\_geometry}{}{} \begin{figure*}[tp] \centering \includegraphics[width=\linewidth]{simple_example_script} \caption{ Plots generated by the demonstration script shown in listing~\ref{lst:simple_example_script}. From left to right: \(XY\), \(XZ\) and \(YZ\) projections of the real part of the electric field inside a dielectric nanosphere (\(n=2\)) with radius \(R=120\,\)nm placed in vacuum. Linearly polarized (along \(X\)) plane wave illumination with \(\lambda=500\,\)nm, incident from positive \(Z\) (\(\mathbf{k} = -\hat{\mathbf{e}}_z k\)).
}\label{fig:example_script_output} \end{figure*} \subsubsection{Get complex field as list of coordinate/field tuples}\label{sec:tools_get_field_as_list} After running \function{core.scatter}, pyGDM\ stores the fields inside the particle in the \object{core.simulation} object as lists of the complex field components (\(E_{x,i}, E_{y,i}, E_{z,i}\)). The (\(x_i, y_i, z_i\)) geometry coordinates are stored separately in the \object{structures.struct} object within the simulation description object. To generate complete field-lists of tuples \((x_i, y_i, z_i, E_{x,i}, E_{y,i}, E_{z,i})\), pyGDM\ provides the following functions: \functiondescription{f}{tools.\allowbreak get\_\allowbreak field\_\allowbreak as\_\allowbreak list}{}{} \functiondescription{f}{tools.\allowbreak get\_\allowbreak field\_\allowbreak as\_\allowbreak list\_\allowbreak by\_\allowbreak fieldindex}{}{} Both return the complex field for a selected illumination configuration as a list of coordinate/field tuples (\(x_i, y_i, z_i, E_{x,i}, E_{y,i}, E_{z,i}\)), either from the raw field-object or from the \object{simulation} object and a field-index, respectively. \subsubsection{Generate \(2\)D map from coordinate list}\label{sec:tools_list_to_grid} To map spatial data available as a list of coordinate tuples onto a plottable 2D grid (e.g.
for plotting a mapping with \textit{matplotlib.imshow}), pyGDM\ provides \functiondescription{f}{tools.\allowbreak map\_to\_grid\_XY}{}{} \subsubsection{Raster-scan field configurations} If a simulation with a large number of focused beam-positions has been performed, the available incident field configurations corresponding to full raster-scan maps can be obtained using \functiondescription{f}{tools.\allowbreak get\_\allowbreak possible\_\allowbreak field\_params\_\allowbreak rasterscan}{}{} Analogously, the set of indices referring to the fields in the \object{simulation.E} which correspond to a particularly configured raster-scan can be obtained via \functiondescription{f}{tools.get\_\allowbreak rasterscan\_\allowbreak field\_\allowbreak indices}{}{} Alternatively, the full set of fields inside the particle for a raster-scan with a particular illumination-configuration can be obtained via \functiondescription{f}{tools.get\_\allowbreak rasterscan\_\allowbreak fields}{}{} \subsection{Dependencies} The core functionalities of pyGDM\ depend only on \textbf{numpy}. The compilation of the \textit{fortran} parts requires a \textit{fortran} compiler such as \textit{gcc}'s \textbf{gfortran}. \subsubsection{Dependencies: \function{visu}} All 2D visualization tools require \textbf{matplotlib}. \subsubsection{Dependencies: \function{visu3D}} All 3D visualization tools require \textbf{mayavi}. \subsubsection{Dependencies: \function{tools}} Several tools require \textbf{scipy}. \subsubsection{Dependencies: \function{structures}} Several structure-tools require \textbf{scipy}. \function{image\_\allowbreak to\_\allowbreak struct} requires \textbf{PIL}. \subsubsection{Dependencies: \function{core.scatter}: Solver (parameter ``method'')} pyGDM\ includes wrappers to several \textbf{scipy} solvers, but other methods are supported as well. Below is an exhaustive list of the available solvers and their dependencies. For benchmarks, see figure~\ref{fig:Theory_inversion_in_GDM}.
\begin{itemize} \item ``lu'' (default): \textbf{scipy.linalg.lu\_factor} (LU decomposition) \item ``numpyinv'': \textbf{numpy.linalg.inv} (if \textbf{numpy} is compiled with LAPACK: LAPACK's ``dgesv'', else a slower fallback routine) \item ``dyson'': own implementation, no requirements (sequence of Dyson's equations \cite{martin_generalized_1995}) \item ``scipyinv'': \textbf{scipy.linalg.inv} (LAPACK's ``dgesv'') \item ``pinv2'': \textbf{scipy.linalg.pinv2} (singular value decomposition, SVD) \item ``superlu'': \textbf{scipy.sparse.linalg.splu} (superLU \cite{li_overview_2005}) \item ``cg'': \textbf{scipy} conjugate gradient iterative solver (\textbf{scipy.sparse.linalg.bicgstab}), by default preconditioned with \textbf{scipy}'s incomplete LU decomposition (superLU \cite{li_overview_2005} via \textbf{scipy.sparse.linalg.spilu}) \item ``pycg'': \textbf{pyamg}'s implementation of the \textit{bicgstab} algorithm, optionally preconditioned with \textbf{scipy}'s incomplete LU decomposition. Recommended if multi-threading problems are encountered with scipy's \textit{bicgstab} implementation, which is not thread-safe. \end{itemize} \subsection{Compiling, installation} We provide a script for easy compilation and installation via python's ``distutils'' functionality. For this, simply run in the pyGDM\ root directory \vspace{.5\baselineskip} \indent\hspace{1cm}\texttt{python setup.py install} \vspace{.5\baselineskip} Alternatively, pyGDM\ can be compiled locally without installation via \vspace{.5\baselineskip} \indent\hspace{1cm}\texttt{python setup.py build} \vspace{.5\baselineskip} Or it may be installed to a user-defined location using the ``\texttt{{-}{-}prefix=...}'' option. \paragraph*{Note:} The ``setup.py'' script requires \textbf{numpy} as well as a \textit{fortran} compiler (tested with \textbf{gfortran}). \subsection{Possible future capabilities} The GDM can be used for many further types of calculations, which are to be included in future versions of pyGDM.
A non-exhaustive list of possible future features includes \begin{itemize} \item 2D structures (assuming infinite length along one coordinate)\cite{paulus_greens_2001} \item Coherent nonlinear effects like (surface-) second or third harmonic generation\cite{wiecha_linear_2016, wiecha_origin_2016} \item Electron energy loss / gain spectroscopy (EELS, EEGS) or cathodoluminescence (CL) simulations\cite{geuquet_eels_2010, arbouet_electron_2014} \item more environment choices (surface propagator including retardation effects\cite{girard_generation_1995,girard_optical_1997}, multi-layer stratified environments\cite{paulus_accurate_2000, colas_des_francs_enhanced_2005} or magnetic decay rate calculation including a substrate\cite{girard_optical_1997, kwadrin_probing_2013}) \item cuboidal\cite{ould_agha_near-field_2014} or non-regular meshes\cite{kottmann_accurate_2000} \item materials with anisotropic susceptibility, e.g. birefringent media \item periodic structures\cite{gallinet_electromagnetic_2009, chaumet_simulation_2009} \item quantum corrected model for plasmonic tunneling currents via junctions of inhomogeneous permittivity \cite{esteban_bridging_2012} \item SNOM image calculation/interpretation\cite{greffet_image_1997, porto_theory_2000} \item memory-efficient conjugate gradients solver including FFT-accelerated matrix-vector multiplications for large problems \cite{goodman_application_1991} \end{itemize} \section{Appendix -- Keyword arguments of the most important classes and functions}\label{sec:kwargs} \section*{Most important classes} For a detailed explanation of the physical information contained by the below classes, see section~\ref{sec:setup_simulation}. 
\functiondescription{o}{core.simulation}{}{ {\textit{struct:} instance of \object{structures.struct}} {\textit{efield:} instance of \object{fields.efield}} } \functiondescription{o}{structures.struct}{}{ {\textit{step:} discretization stepsize (in nm)} {\textit{geometry:} list of meshpoint coordinates \((x,y,z)\) (in nm)} {\textit{material:} structure material dispersion, instance of \object{materials.CLASS}} {\textit{n1, n2:} ref. index of substrate (n1) and environment (n2)} {\textit{normalization (optional):} mesh-type dependent factor, default: ``1'' (cubic mesh)} {\textit{n3 (optional):} ref. index of cladding} {\textit{spacing (optional):} distance between substrate and cladding. default: ``5000'' (nm)} } \functiondescription{o}{fields.efield}{}{ {\textit{field\_generator:} field generator function (e.g. from \function{fields} module)} {\textit{wavelengths:} list of wavelengths at which to do the simulation (in nm)} {\textit{kwargs (optional):} dict (or list of dict) with further kwargs for the field generator} } \section*{Most important functions} For a detailed explanation of the calculations performed by the below functions, see sections~\ref{sec:solver}-\ref{sec:visu}. 
\subsection*{pyGDM\ core} \functiondescription{f}{core.scatter / core.scatter\_mpi}{}{ {\textit{sim:} instance of \object{core.simulation}} {\textit{method (optional):} inversion method, default: ``lu''} {\textit{multithreaded (optional):} default: ``True''} } \functiondescription{f}{core.decay\_rate}{}{ {\textit{sim:} instance of \object{core.simulation}} {\textit{method (optional):} inversion method, default: ``lu''} } \subsection*{Post-processing} \functiondescription{f}{linear.\allowbreak extinct}{}{ {\textit{sim}: instance of \object{core.simulation}} {\textit{field\_index}: index of field-configuration} } \functiondescription{f}{linear.\allowbreak nearfield}{}{ {\textit{sim}: instance of \object{core.simulation}} {\textit{field\_index}: index of field-configuration} {\textit{r\_probe}: list of \((x,y,z)\) coordinates at which to evaluate the near-field} } \subsection*{Visualization} \functiondescription{f}{visu.structure}{}{ {\textit{sim}: instance of \object{core.simulation}} {\textit{projection (optional)}: default: ``XY''} {\textit{color (optional)}: matplotlib-compatible color, default: ``auto''} {\textit{scale (optional)}: scaling, default: ``0.5''} } \functiondescription{f}{visu.vectorfield}{}{ {\textit{NF}: list containing the complex field (list of 6-tuples \((x_i, y_i, z_i, E_{x,i}, E_{y,i}, E_{z,i})\). See also section~\ref{sec:tools_get_field_as_list}: \function{tools.\allowbreak get\_\allowbreak field\_\allowbreak as\_\allowbreak list})} {\textit{projection (optional)}: default: ``XY''} {\textit{slice\_level (optional)}: use only fields at a specific height.
default: ``none'' \(\rightarrow\) superpose all vectors} } \functiondescription{f}{visu.scalarfield}{}{ {\textit{NF}: list of 4-tuples containing the coordinates and scalar-field values (\((x_i, y_i, z_i, S_{i})\))} } \section{Appendix -- GDM in the SI unit system}\label{sec:SIunits} \noindent In order to facilitate the conversion between the SI and cgs unit systems, in this section we introduce the main GDM equations in SI units. The Fourier-transformed Maxwell equations are then: \begin{subequations}\label{eq:Maxwell_SI} \begin{align} \nabla\cdot \mathbf{D}(\mathbf{r}, \omega) &= \rho(\mathbf{r}, \omega) \label{eq:MaxwellFourierdivD_SI}\\ \nabla \times \mathbf{E}(\mathbf{r}, \omega) &= \mathrm{i} \omega \mathbf{B}(\mathbf{r}, \omega) \label{eq:MaxwellFourierrotE_SI}\\ \nabla\cdot \mathbf{B}(\mathbf{r}, \omega) &= 0 \label{eq:MaxwellFourierdivB_SI}\\ \nabla \times \mathbf{H}(\mathbf{r}, \omega) &= -\mathrm{i} \omega \mathbf{D}(\mathbf{r}, \omega) + \mathbf{j}(\mathbf{r}, \omega) \label{eq:MaxwellFourierrotH_SI} \end{align} \end{subequations} From these, the following wave equation can be derived: \begin{equation} (\Delta + k^2) \mathbf{E} = - \frac{1}{\epsilon_0 \epsilon_{\text{env}}} \left(k^2 + \nabla \nabla \right) \mathbf{P}.
\label{eq:waveequationEfield_SI} \end{equation} This leads to the vectorial Lippmann-Schwinger equation in SI units (here for a vacuum reference system): \begin{equation} \mathbf{E}(\mathbf{r}, \omega) = \mathbf{E}_0(\mathbf{r}, \omega) + \int \mathbf{G}_0^{\text{EE}}(\mathbf{r}, \mathbf{r'}, \omega) \cdot \boldsymbol{\chi}_e \cdot \mathbf{E}(\mathbf{r'}, \omega) \text{d} \mathbf{r'} \label{eq:LippmannSchwingerG0_SI} \end{equation} with the Green's dyad \begin{multline}\label{eq:vacuumGreenDyadicFunction_SI} \mathbf{G}_0^{\text{EE}}(\mathbf{r}, \mathbf{r'}, \omega) = \frac{\mathrm{e}^{\mathrm{i} k R}}{4\pi\epsilon_0\epsilon_{\text{env}}} \, \Big( -k^2 \mathbf{T}_1(\mathbf{R}) \\ - \mathrm{i} k \mathbf{T}_2(\mathbf{R}) + \mathbf{T}_3(\mathbf{R}) \Big) \, , \end{multline} where the definitions of \(\mathbf{T}_1\), \(\mathbf{T}_2\) and \(\mathbf{T}_3\) given in equations~\eqref{eq:vacuumGreenDyadicFunctionT1}-\eqref{eq:vacuumGreenDyadicFunctionT3} are still valid. In Eq.~\eqref{eq:LippmannSchwingerG0_SI} and its volume discretization \begin{multline} \mathbf{E}(\mathbf{r}_i, \omega) = \mathbf{E}_0(\mathbf{r}_i, \omega) + \\ \chi_e \sum\limits_{j=1}^{N} \mathbf{G}_0^{\text{EE}}(\mathbf{r}_i, \mathbf{r}_j, \omega) \cdot \mathbf{E}(\mathbf{r}_j, \omega) V_{\text{cell}} \label{eq:LippmannSchwingerVolumeDiscretization_SI} \end{multline} one has to use the susceptibility \(\chi_{\text{e}} = (\epsilon_r - \epsilon_{\text{env}})\). For simplicity, we assumed here a scalar \(\chi_e\). Finally, the renormalization tensors Eqs.~\eqref{eq:renormalization_cube} and~\eqref{eq:renormalization_hex} have to be divided by a factor \(4\pi\). \subsection*{Post-processing routines} Concerning the post-processing routines, some of the pre-factors need to be adapted.
For instance, the factor in front of the sums in equations~\eqref{eq:sigma_ext_from_nearfield} and~\eqref{eq:sigma_abs_from_nearfield} reads in SI units: \begin{equation} \frac{2\pi n}{\lambda_0 |\mathbf{E}_0|^2} \end{equation} with the refractive index \(n\). Equations \eqref{eq:heat_deposited} and \eqref{eq:temp_rise_vicinity} for the heat generation and the local temperature increase, respectively, have to be multiplied by a factor \(4\pi\). \paragraph*{Note:} In pyGDM\ the post-processing routines internally convert the results to SI-compatible units. The ``\function{extinct}'' function for instance returns the cross sections in units of nm\(^2\), ``\function{heat}'' returns nanowatts and ``\function{temperature}'' returns Kelvin (K). For the respective returned units, see the technical documentation of the routines in the online documentation of the API, e.g. at~\href{https://wiechapeter.gitlab.io/pyGDM2-doc/apidoc.html}{https://\allowbreak wiechapeter.\allowbreak gitlab.io/\allowbreak pyGDM2-doc/\allowbreak apidoc.\allowbreak html}. \begin{figure*}[t] \centering \includegraphics[rotate=270, scale=1.6]{inversion_sparsity_examples_GDM} \caption{ Population patterns of matrices \(\textbf{M}\) at \(\lambda=1\,\)\textmu m for a selection of structures (stepsize \(10\,\)nm, same scale for all sketches). Cubic meshes for the first three structures, hexagonal compact mesh for the structure on the right. For illustrative purposes the structures are only one layer of mesh-points high (small matrix size). White corresponds to an absolute value of \(0\), black to \(\geq 10\)\,\% of the matrix's largest element. }\label{fig:Theory_sparsity_examples} \end{figure*} \section{Appendix -- Conjugate gradients}\label{sec:conjugate_gradients} The following applies to all pyGDM\ functions which solve the main GDM inversion problem.
The conjugate gradients solver provides an alternative to complete inversion of the coupled dipole problem, which can -- under certain circumstances -- be preferable. \begin{itemize} \item argument \textit{method}: ``cg'' (requires \textit{scipy}) or ``pycg'' (requires \textit{pyamg}) \end{itemize} If we have a closer look at the matrix \(\textbf{M}\) (see Eq.~\eqref{eq:definitionMforInversion}), we can make an interesting observation: While \(\textbf{M}\) is not exactly sparse, most of its entries have significantly smaller absolute values than the very few large matrix elements. In Fig.~\ref{fig:Theory_sparsity_examples} we show plots of the population of the matrix \(\mathbf{M}\) for some selected nano-structures. These population plots work as illustrated in the following examples: \begin{equation*} \left[ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix} \right] = \centering\includegraphics[raise=-0.4\height]{inversion_sparsity_example1} \end{equation*} \begin{equation*} \left[ \begin{matrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{matrix} \right] = \centering\includegraphics[raise=-0.4\height]{inversion_sparsity_example2} \end{equation*} \begin{equation*} \left[ \begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{matrix} \right] = \centering\includegraphics[raise=-0.4\height]{inversion_sparsity_example3} \end{equation*} \(\mathbf{M}\) also contains phase information and is therefore complex; hence we use the absolute values of the matrix elements for the population patterns. In addition, the maximum of the color-code in Fig.~\ref{fig:Theory_sparsity_examples} is clipped to 10\,\% of the maximum absolute value in the matrix to increase the contrast. Clearly, the matrices contain very few entries with values of more than some \% of the overall maximum, and yet~\(>60\,\)\% of all elements are generally non-zero. It turns out that such matrices are good candidates for iterative solving using so-called ``Krylov-subspace methods''.
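Such a population pattern can be generated along the following lines (an illustrative numpy sketch, not part of the pyGDM API: the function name \function{population\_pattern} is hypothetical):

```python
import numpy as np

def population_pattern(M, clip_frac=0.1):
    """Return gray values in [0, 1] for plotting the population of a
    complex matrix M: 0 (white) for vanishing entries, 1 (black) for
    entries >= clip_frac * max(|M|); clipping increases the contrast."""
    A = np.abs(M)  # phase information is dropped, only magnitudes matter
    return np.clip(A / (clip_frac * A.max()), 0.0, 1.0)

# tridiagonal example from the text: strong diagonal, weaker off-diagonals
M = np.array([[2, 1, 0],
              [1, 2, 1],
              [0, 1, 2]], dtype=complex)
P = population_pattern(M)
```

Plotting `P` with a gray colormap (e.g. `matplotlib.pyplot.imshow`) reproduces the kind of sketch shown in Fig.~\ref{fig:Theory_sparsity_examples}.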
The most popular algorithm of this class is the conjugate gradients (CG) method and its variants such as biconjugate gradients (for non-symmetric problems) or the complex CG \cite{joly_complex_1993}. A detailed description of the method can be found in Ref.~\cite{press_numerical_2007} (chapter~2.7). The main idea of these iterative methods is that the inverse of the matrix is in many cases not actually required. For simulations that make massive use of the generalized propagator (like raster-scan simulations), the CG technique is therefore not the method of choice. It may on the other hand be an advantageous approach if we search a solution for \(\mathbf{E}(\omega)\) that satisfies \begin{equation} \mathbf{M}(\omega) \cdot \mathbf{E}(\omega) = \mathbf{E}_0(\omega) \end{equation} for a single or only a few incident fields \(\mathbf{E}_0(\omega)\). During the CG-iterations, matrix-vector multiplications \(\textbf{M}\cdot\textbf{x}\) are performed following a minimization scheme in which \(\textbf{M}\cdot\textbf{x}\) eventually converges to \(\mathbf{E}_0\). Theoretically, for an \(N\times N\) matrix, CG converges to the exact solution after \(N\) iterations, and each iteration itself has a computational cost \(\propto N^2\). In reality, the convergence is often very rapid in the beginning, and a solution with sufficient precision can be obtained after very few iterations, yielding a total computational cost \(\propto N^2\) instead of the \(N^3\) scaling of exact inversion, for example with LU-decomposition. Indeed, we find an \(N^3\)-scaling for complete inversion by LU or Dyson's sequence and an \(N^2\) dependence when using conjugate gradients (Fig.~\ref{fig:Theory_inversion_in_GDM}a). Particularly for larger numbers of meshpoints, this allows a reduction of the simulation time, as shown in Fig.~\ref{fig:Theory_inversion_in_GDM}a.
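As a minimal illustration of the idea (a sketch only: for brevity we use a real, symmetric positive definite test matrix instead of the complex, non-symmetric GDM matrix, for which pyGDM delegates to the \textit{scipy}/\textit{pyamg} solvers named above), plain CG needs nothing but matrix-vector products:

```python
import numpy as np

def cg_solve(M, b, tol=1e-10, max_iter=None):
    """Minimal conjugate gradients for a symmetric positive definite M.
    Only matrix-vector products M @ p are needed; M is never inverted."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - M @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD test system standing in for M . E = E_0
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
M = A @ A.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = cg_solve(M, b)
```

In practice the iteration is stopped as soon as the residual \(\|\mathbf{M}\cdot\mathbf{x}-\mathbf{E}_0\|\) falls below the requested precision, usually long before the theoretical \(N\)-th step.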
\begin{figure}[t] \centering \includegraphics{inversion_preconditioner_reuse_speedup} \caption{ Speedup of the GDM-calculation of a spectrum (2000 meshpoints Si nanowire, step of \(10\,\)nm, \(\lambda\) from \(500\,\)nm to \(1500\,\)nm) as a function of the number of wavelengths, when recycling of the preconditioner is enabled. The more closely spaced the wavelengths in the spectrum, the higher the possible gain of PC-recycling.}\label{fig:PC_recycling_speedup} \end{figure} \subsection{Preconditioning} \begin{itemize} \item argument \textit{pc\_method}: ``ilu'', ``lu'' (both require \textit{scipy}), ``amg'' (requires \textit{pyamg}) or ``none'' (no preconditioning) \end{itemize} The convergence speed of conjugate gradients depends crucially on the condition of the matrix \(\mathbf{M}\) and can generally be improved massively by a \emph{preconditioning} step before starting the actual iterative scheme. Suppose the matrix \(\mathbf{A}\) of the equation system \begin{equation} \mathbf{A} \cdot \mathbf{x} = \mathbf{b} \end{equation} were the identity matrix \(\mathbf{I}\). Then CG would converge within the first iteration. A possible approach for preconditioning is therefore to reshape the problem using a matrix \(\mathbf{P}\) \begin{equation}\label{eq:rightHand_Preconditioning} \mathbf{A} \cdot \left(\mathbf{P} \cdot \mathbf{\hat{x}}\right) = \mathbf{b}. \end{equation} If \(\mathbf{P}\) is a close approximation to \(\mathbf{A}^{-1}\), \(\mathbf{A} \cdot \mathbf{P}\) will be close to the identity \(\mathbf{I}\) and the system converges very quickly under conjugate gradient iterations. Eq.~\eqref{eq:rightHand_Preconditioning} is called a right-preconditioned system. Consequently, a good preconditioner for our problem is a close approximation to the inverse of \(\mathbf{M}\). Several algorithms exist to find such approximate inverse matrices for preconditioning.
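The effect can be demonstrated with a hedged numpy sketch (not pyGDM code): here a simple Jacobi (diagonal) preconditioner stands in for the ``ilu''/``amg'' options listed above, applied to an ill-conditioned SPD test matrix:

```python
import numpy as np

def pcg(M, b, precond=None, tol=1e-8, max_iter=5000):
    """Preconditioned CG. `precond` applies an approximation P ~ M^-1
    to a residual vector. Returns (solution, number of iterations)."""
    if precond is None:
        precond = lambda r: r          # no preconditioning
    x = np.zeros_like(b)
    r = b - M @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Mp = M @ p
        alpha = rz / (p @ Mp)
        x = x + alpha * p
        r = r - alpha * Mp
        if np.linalg.norm(r) < tol:
            return x, k
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# ill-conditioned SPD test matrix: widely spread diagonal values
n = 200
M = np.diag(np.linspace(1.0, 1e4, n)) + 0.1 * np.ones((n, n))
b = np.ones(n)
d = np.diag(M)

x0, it_plain = pcg(M, b)                            # no preconditioner
x1, it_jac = pcg(M, b, precond=lambda r: r / d)     # Jacobi preconditioner
```

For this test case the Jacobi-preconditioned run converges in a small fraction of the iterations of the plain run; the ILU preconditioner used in pyGDM plays the same role for the GDM matrix.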
A very popular one is the \emph{incomplete} LU-decomposition (ILU) \cite{li_supernodal_2011}, which scales with \(N^2\) and is the default method in pyGDM. \subsection{Preconditioner recycling} \begin{itemize} \item argument \textit{cg\_recycle\_pc}: ``True'' (=default) \end{itemize} When calculating spectra using the GDM, the electric field in a particle is usually calculated for a large number of closely spaced wavelengths, at each of which the matrix \(\mathbf{M}\) is (incompletely) inverted. Most often, the electric field distribution changes only marginally for slightly different wavelengths, and so does the matrix \(\mathbf{M}\). A \emph{very similar} matrix is of little use for exact calculations, but we have seen in the preceding section that an \emph{approximation} to the exact inverse \(\mathbf{M}^{-1}\) can be a good preconditioner \(\mathbf{P}\) for CG. When calculating dense spectra (i.e. many points on the wavelength axis), we can exploit this fact and significantly accelerate the calculation with conjugate gradients by recycling the preconditioner matrix until a certain lower limit for the speedup factor is reached. In other words, we use the same \(\mathbf{P}\) repeatedly for several consecutive wavelengths, and only when the acceleration drops below a speed-up limit is a new preconditioner calculated and subsequently re-used for the following wavelengths. As shown in Fig.~\ref{fig:PC_recycling_speedup}, this technique can easily reduce the total calculation time by more than a factor of~\(2\). Another application where preconditioner recycling may be beneficial is a series of simulations with many very similar or slowly transformed nano-structures, such as antennas of gradually increasing size. \paragraph*{Note:} The conjugate gradients solver is not very efficient for the moment and will be improved in future versions of pyGDM.
In particular, in the specific case of the coupled dipole approximation it is possible to do very efficient matrix-vector multiplications by applying a fast Fourier transformation (FFT) scheme \cite{goodman_application_1991}. This is not yet implemented in pyGDM. The currently used third-party sparse-matrix solvers are not ideally suited for the dense-matrix problem in pyGDM. \section*{Acknowledgments} I gratefully thank Christian Girard and Arnaud Arbouet for their advice, their help with the theory, careful proof-reading and the Fortran routines. I also want to thank Gérard Colas des Francs for helpful discussions and his contributions to the Fortran code, to which Renaud Marty also contributed. I finally thank Vincent Paillard and Aurélien Cuche for many inspiring discussions and proof-reading of the manuscript. This work was supported by Programme Investissements d'Avenir under the program ANR-11-IDEX-0002-02, reference ANR-10-LABX-0037-NEXT, and by the computing facility center CALMIP of the University of Toulouse under grant P12167. \section*{Conflicts of interest} The author declares no competing financial interest. \input{05_technical} \input{06_appendix} \input{2017_doc_pygdm.bbl} \end{document}
\section{Introduction} \label{sec:intro} A hydrodynamic viscous fingering (VF) instability can deform the interface between two different fluids when a high-mobility fluid of lower viscosity displaces a more viscous and hence less mobile one in a porous medium \cite{ST1958,S1986,TH1986,Homsy1987,TH1988}. In numerous industrial and environmental problems such as enhanced oil recovery, ${\rm CO}_2$ sequestration, combustion, hydrology, soil remediation, etc.~\cite{OT1984,LK1999,FCJ2013,Farajzadeh2015,RDNS1998}, this fingering instability can interplay with chemical reactions. In the past few decades, viscous fingering has been analyzed in reactive systems at both miscible and immiscible interfaces~\citep{HB1995,JH2000,NU2001,NU2003,NOKT2009,WH1999, WH1999a, FH2003,PSZB2007,NMKT2007, GW2009, NKKT2009,HTAW2010, HA2010a,HA2010b,NW2011,NKKT2011,RNIMTWit2012,riolfo,alh13,Nagatsu2010}. If the reaction does not modify the viscosity {\it in-situ}, the chemical species are passively advected by the flow and the fingering properties of the interface remain similar to those of the nonreactive system~\cite{HB1995,JH2000, NU2001,NU2003,NOKT2009}. The flow in the fingering patterns can, on the other hand, change the spatio-temporal distribution of the reactants and influence the yield of the reaction. An active influence of chemistry on fingering is obtained as soon as the chemical reaction taking place around the interface between the two fluids modifies their physical properties and, in particular, their viscosity~\cite{dew16}. The reaction then influences the stability as well as the spatio-temporal dynamics of the flow. In turn, the hydrodynamic flow affects mixing and thus the amount and spatial distribution of chemical species, and a highly nonlinear feedback is established between chemistry and hydrodynamics.
For cases where reactions actively change the viscosity {\it in-situ}, numerical simulations have first shown, on the basis of a bistable chemical reaction scheme, that the properties of miscible VF are modified when the reaction changes the viscosity across the reactive miscible interface~\cite{WH1999,WH1999a}. The bistable nature of the chemical kinetics is then responsible for a new phenomenon of droplet formation, isolating regions of high or low viscosity within connected domains of the other steady state. In other studies, the active influence of $A+B \rightarrow C$-type chemical reactions on miscible viscous fingering has been studied both experimentally~\cite{PSZB2007,NMKT2007,NKKT2009,NKKT2011,RNIMTWit2012,riolfo} and theoretically~\cite{GW2009,HTAW2010,HA2010a,HA2010b,NW2011,RNIMTWit2012, alh13}. \citet{PSZB2007} have in particular studied experimentally chemically-driven fingering at the miscible reactive interface between two aqueous solutions of the same viscosity when a reaction between a cationic surfactant and an organic salt produces an elastic, more viscous worm-like micellar fluid. Various fingering regimes have been identified depending on the concentrations, fluid characteristics and injection flow rate (or equivalently the P{\' e}clet number, defined as the ratio of the convective to diffusive transport rates). In some experiments by Nagatsu et al., a less-viscous acidic or basic aqueous solution was injected into a more-viscous polymeric solution, the viscosity of which depends on pH~\cite{NMKT2007,NKKT2009,NKKT2011}. It is observed that, when the viscosity is increased (decreased) by the reaction, fingers are widened (narrowed), which is mainly due to suppressed (enhanced) shielding effects. Interestingly, opposite results have been observed at moderate reaction rates for systems with a viscosity decrease~\cite{NKKT2009} and increase~\cite{NKKT2011}.
In the case where the non-reactive displacement is stable (a more viscous solution displacing a less viscous one), it has even been shown experimentally that the reaction is able to trigger VF~\cite{RNIMTWit2012}. Depending on whether the reaction increases or decreases the viscosity, a different fingering pattern is then obtained. The experimental study of Nagatsu et al.~\cite{NMKT2007} showed that at `large' injection rate, or equivalently high P{\' e}clet number (${\rm Pe}$), an instantaneous chemical reaction can have opposite effects on miscible VF when a less viscous (acidic or basic) solution is injected radially into a more viscous one (e.g.~a polymeric solution) in a Hele-Shaw cell, depending on whether the reaction locally increases or decreases the viscosity. In the viscosity increase case, the VF pattern is ``denser" in the sense that it covers a more compact area in the Hele-Shaw cell than the non-reactive pattern. On the contrary, a VF pattern covering a smaller area (also qualified as ``less dense pattern") was reported in the viscosity decrease reactive case. Recently, new experiments have been carried out focusing on the influence of the injection rate on viscosity increasing and decreasing reactive systems~\cite{riolfo,NMW2015}. Interestingly, it was found that, at lower ${\rm Pe}$, the trends are opposite to those at high ${\rm Pe}$, i.e. for viscosity-decreasing reactions the system can be stabilized at low injection flow rates. These experiments~\cite{NMKT2007,riolfo,NMW2015} thus clearly show that, in the presence of a viscosity-decreasing reaction, the reactive VF patterns can be controlled by varying the P{\' e}clet number. Moreover, when the reaction-induced viscosity decrease is large enough, a suppression of the VF instability can be obtained at small ${\rm Pe}$. In numerical studies, however, the influence of the injection rate on reactive VF has not yet been addressed explicitly.
In this context, our objective here is to analyze numerically the influence on the VF instability of changes in the injection flow rate, i.e. changes in the ${\rm Pe}$ number of the problem, when a simple $A+B \rightarrow C$ chemical reaction decreases the viscosity {\it in situ}. To this end, we integrate numerically the reaction-diffusion-convection (RDC) equations of reactive VF in porous media and analyze the properties of the fingering patterns for different values of ${\rm Pe}$. We show that a viscosity-decreasing reaction enhances the stabilization or destabilization of the interface at low and high ${\rm Pe}$, respectively, with respect to the non-reactive system. This is related to the possibility at low ${\rm Pe}$ for chemistry to build up a minimum in the viscosity profile that blocks the further progression of fingering and stabilizes the system. On the contrary, at high ${\rm Pe}$, chemistry does not have time to act to decrease the viscosity, and the classical enhanced destabilization upon increasing the flow rate is then observed. These results highlight the optimal flow conditions for obtaining a stabilization of VF by reactions. This is of practical importance as it paves the way to a possible chemical control of fingering instabilities appearing in many situations ranging from geophysical to environmental problems. This paper is organized as follows. The problem description and the related RDC model are given in Sec.~\ref{sec:probdes_eqns}. In Sec.~\ref{subsec:numerical-method}, the numerical method used to integrate the model is discussed. The characteristics of VF patterns and in particular the influence of the P{\' e}clet number are studied in Sec.~\ref{sec:results}. The non-reactive and reactive cases are treated in Secs.~\ref{subsec:conc_NR} and \ref{subsec:VF_reactive}, respectively. A quantitative analysis and a parametric study are carried out in Secs.~\ref{subsec:quantitative} and~\ref{sec:parametric_analysis}.
Finally, conclusions and an outlook are given in Sec.~\ref{sec:Conclusion and Outlook}. \section{Problem description and governing equations} \label{sec:probdes_eqns} Consider a homogeneous two-dimensional porous medium or horizontal thin Hele-Shaw cell of length $L_x$ and width $L_y$ with constant permeability $\kappa$, in which a miscible solution of reactant $A$ with viscosity $\mu_A$ is injected from left to right into a solution of reactant $B$ with viscosity $\mu_B$ at a constant speed $U$ along the $x$-direction (Fig.~\ref{fig:sketch}). We assume that the initial concentrations of $A$ and $B$ are both equal to $a_0$. The initial position of the miscible interface is $x_0$. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.5]{PLOTS/Sketch}\\ \caption{\small{Sketch of a two-dimensional porous medium of length $L_x$ and width $L_y$ with permeability $\kappa$ in which a solution of reactant $A$ with viscosity $\mu_A$ is displacing a solution of reactant $B$ of viscosity $\mu_B$ from left to right at a constant speed $U$. Here $x_0$ and $a_0$ are the initial contact position and initial concentration of reactants, respectively. }} \label{fig:sketch} \end{center} \end{figure} Upon contact between the two solutions, a simple $A+B \rightarrow C$ chemical reaction takes place in the miscible interface zone where $A$ and $B$ meet by diffusion, react, and yield the product $C$ of viscosity $\mu_C$. The objective is to analyze numerically how the dynamic decrease of viscosity driven by the reaction can influence the VF instability and, in particular, what the influence of the injection speed $U$ is on this effect. To analyze the problem, the system is considered incompressible and neutrally buoyant.
The dynamics is modeled using Darcy's law for the velocity field along with three reaction-diffusion-convection (RDC) equations for the concentrations: \begin{align} \boldsymbol{\nabla}\cdot\boldsymbol{u}&=0, \label{eqn:Mass} \\ \boldsymbol{\nabla}p&=-\frac{\mu(a,b,c)}{\kappa}\boldsymbol{u}, \label{eqn:Momentum} \\ \frac{\partial a}{\partial t}+\boldsymbol{u}\cdot\boldsymbol{\nabla}a &= D_{\!A}\,\nabla^2 a-k\,a\,b, \label{eqn:concA} \\ \frac{\partial b}{\partial t}+\boldsymbol{u}\cdot\boldsymbol{\nabla}b &= D_{\!B}\,\nabla^2 b-k\,a\,b, \label{eqn:concB} \\ \frac{\partial c}{\partial t}+\boldsymbol{u}\cdot\boldsymbol{\nabla}c &= D_{\!C}\,\nabla^2 c+k\,a\,b, \label{eqn:concC} \end{align} where $a$, $b$, and $c$ denote the concentrations of the reactants $A$ and $B$ and of the product $C$, respectively, $k$ is the kinetic constant, $p$ is the pressure, $D_{A}$, $D_B$ and $D_C$ are the diffusivities of the reactants $A$ and $B$ and the product $C$, respectively, $\boldsymbol{u}=(u,v)$ is the two-dimensional flow velocity and $\kappa$ is the constant permeability. The viscosities of the solution when only one species is present at concentration $a_0$ are defined as $\mu_A$, $\mu_B$ and $\mu_C$ in the presence of reactant $A$, reactant $B$ or product $C$, respectively. Following previous theoretical work on viscous fingering~\citep{TH1986,WH1999a,WH1999,MMW2007,GW2009,HTAW2010, HA2010a,HA2010b,NW2011,riolfo,alh13}, we assume that the viscosity is an exponential function of the concentrations of $A$, $B$ and $C$: \begin{equation} \mu(a,b,c) = \mu_A \,e^{[R_b b + R_c c]/a_0}, \label{eqn:viscosity} \end{equation} where $R_b$ and $R_c$ are the log-mobility ratios defined as \begin{equation} R_b= \mbox{ln} \left( \frac{\mu_B}{\mu_A}\right)\qquad \mbox{and}\qquad R_c= \mbox{ln} \left( \frac{\mu_C}{\mu_A}\right).
\end{equation} For the non-reactive VF case or the equivalent specific reactive case when the product $C$ has the same viscosity as one of the reactants (i.e.~$R_b =R_c$), the system is unstable when the lower-viscosity solution of $A$ displaces the more viscous solution of $B$, i.e. when $\mu_A<\mu_B$ or $R_b>0$. Let us analyze how this stability is changed when both $\mu_C$ and the injection speed $U$ are varied. \subsection{Non-dimensional Equations} To let the injection speed appear explicitly in the dimensionless problem in the form of a P\'eclet number, the reference scales for length, velocity, time, concentration, viscosity, diffusivity and pressure are taken as $L_y$, $U$, $L_y/U$, $a_0$, $\mu_A$, $D_C$ and $\mu_A U L_y/\kappa$, respectively. For simplicity, the equations are written in a reference frame moving with speed $U$ by transforming variables as $\boldsymbol{x} \rightarrow \boldsymbol{x} - U t \boldsymbol{e_x}$ and $\boldsymbol{u} \rightarrow \boldsymbol{u} - U \boldsymbol{e_x}$, with $\boldsymbol{e_x}$ being the unit vector along the $x$-direction. The dimensionless form of (\ref{eqn:Mass})--(\ref{eqn:viscosity}) can then be written as \begin{align} \boldsymbol{\nabla}\cdot \boldsymbol{u} &=0, \label{eqn:mass_nd}\\ \boldsymbol{\nabla} p &= -\mu(a,b,c) (\boldsymbol{u}+\boldsymbol{e_x}), \label{eqn:momentum_nd}\\ \frac{\partial a}{\partial t}+ \boldsymbol{u} \cdot \boldsymbol{\nabla} a &= \delta_a {\rm Pe}^{-1} \nabla^2 a -D_a\,a\,b, \label{eqn:a_nd}\\ \frac{\partial b}{\partial t}+ \boldsymbol{u} \cdot \boldsymbol{\nabla} b &= \delta_b{\rm Pe}^{-1} \nabla^2 b -D_a\,a\,b, \label{eqn:b_nd}\\ \frac{\partial c}{\partial t}+ \boldsymbol{u} \cdot \boldsymbol{\nabla} c &= {\rm Pe}^{-1} \nabla^2 c +D_a\,a\,b, \label{eqn:c_nd}\\ \mu(a,b,c) &=e^{(R_b b + R_c c)}, \label{eqn:kappa_nd} \end{align} where $D_a\!=\!k a_0 L_y/U \!=\!
\tau_h/\tau_c$ is the dimensionless Damk{\"o}hler number defined as the ratio of the hydrodynamic time scale $\tau_h\!=\!L_y/U$ to the chemical time scale $\tau_c\!=\!1/(k a_0)$. The P\'eclet number ${\rm Pe}\!=\!U L_y/D_C=\tau_h/\tau_D$ is the ratio of the convective time $\tau_h$ to the diffusive time $\tau_D=D_C/U^2$, while $\delta_a\!=\!D_A/D_C$ and $\delta_b\!=\!D_B/D_C$ are the diffusion coefficient ratios. Taking the curl of the momentum equation and defining the stream function $\psi(x,y)$ as $u \!=\! \partial \psi/\partial y$ and $v\!=\!- \partial \psi/\partial x$, we get \begin{eqnarray} \nabla^2 \psi&=& R_b ( \psi_x b_x + \psi_y b_y + b_y) +R_c ( \psi_x c_x + \psi_y c_y + c_y), \label{eqn:mass}\\ a_t+ a_x \psi_y - a_y \psi_x &=& \delta_a{\rm Pe}^{-1} \nabla^2 a -D_a\,a\,b, \label{eqn:a1}\\ b_t+ b_x \psi_y - b_y \psi_x &=& \delta_b{\rm Pe}^{-1} \nabla^2 b -D_a\,a\,b, \label{eqn:b1}\\ c_t+ c_x \psi_y - c_y \psi_x &=& {\rm Pe}^{-1} \nabla^2 c +D_a\,a\,b, \label{eqn:c1} \end{eqnarray} where the subscripts $x$, $y$ and $t$ denote the corresponding partial derivatives. The last term in \eqref{eqn:a1}--\eqref{eqn:c1} corresponds to the reaction rate $\mathcal{R}$: \begin{equation} \mathcal{R}(x,y,t) = D_a\,a(x,y,t)\,b(x,y,t).
\label{eqn:reactionrate} \end{equation} Comparing the present RDC model \eqref{eqn:mass}--\eqref{eqn:c1} with those previously studied in the literature~\cite{TH1988,GW2009,HTAW2010,NW2011,PM2015}, we note that: (i) when $D_a\!=\!0$ we recover the classical model for non-reactive viscous fingering similar to the one studied by Tan and Homsy \cite{TH1988,PM2015}; (ii) when $D_a\!\neq\!0$, ${\rm Pe}\!=\!1$ and $R_b\!=\!0$ we obtain the model of reactive VF for solutions of $A$ and $B$ of the same viscosity, as analyzed numerically by~\citet{GW2009}; (iii) when $D_a\!\neq\!0$, ${\rm Pe}=\delta_a\!=\!\delta_b\!=\!1$ we get back to the reactive VF model with $A$, $B$ and $C$ of different viscosities but all species diffusing at the same rate, as studied by~\citet{HTAW2010} and~\citet{NW2011}. As the dynamics of the reactive zone is independent of the boundary conditions as long as the unstable fingered front does not confront its periodic extension~\cite{TH1988}, we use periodic boundary conditions in both directions. The initial conditions for the stream function and product concentration $c$ are taken as $\psi(x,y)\!=\!0$ and $c(x,y)\!=\!0$ for all $(x,y)$, respectively. For the initial concentrations of the reactant $A$ and $B$ solutions, we use a step front between $A=1, B=0$ on the left and $B=1, A=0$ on the right of $x\!=\!x_0$, with a random noise of amplitude of order $10^{-2}$ added at the front to trigger the instability. The dimensionless system size is $\mathcal{A} \times 1$, where $\mathcal{A}=L_x/L_y$ is the aspect ratio. Equations~\eqref{eqn:mass}--\eqref{eqn:c1} together with the initial and boundary conditions form an initial-boundary value problem with six dimensionless control parameters---namely, $R_b$, $R_c$, $D_a$, $\delta_a$, $\delta_b$ and ${\rm Pe}$.
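As a quick numerical sanity check of the dimensionless viscosity law (an illustrative sketch, not part of any published code), the pure-fluid limits are recovered:

```python
import numpy as np

# dimensionless viscosity law: mu(a, b, c) = exp(R_b * b + R_c * c)
def viscosity(b, c, R_b, R_c):
    return np.exp(R_b * b + R_c * c)

R_b, R_c = 2.0, -2.0                   # values used in the reactive runs below
mu_A = viscosity(0.0, 0.0, R_b, R_c)   # pure reactant A: mu = 1
mu_B = viscosity(1.0, 0.0, R_b, R_c)   # pure reactant B: mu = e^{R_b}
mu_C = viscosity(0.0, 1.0, R_b, R_c)   # pure product C:  mu = e^{R_c}
```

With $R_c<0$ the product is less viscous than reactant $A$ ($\mu_C<\mu_A$), which is the viscosity-decreasing situation studied here.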
To narrow down the wide range of possibilities, we fix here $\delta_a=\delta_b=1$ to focus on the effects of the reaction (variable $D_a$ and $R_c$ for a given $R_b$) and of the flow speed (variable ${\rm Pe}$) on the fingering instability. \subsection{Numerical Method} \label{subsec:numerical-method} To solve \eqref{eqn:mass}--\eqref{eqn:c1}, we use a pseudo-spectral numerical scheme based on the discrete Fourier transform library FFTW 3.3.4~\citep{TH1988,Fornberg1998,WH1999,WH1999a,GW2009}. In order to avoid any interaction between the unstable fingered front and its periodic extension, we choose a domain with a large aspect ratio. The physical and computational domain sizes ($L_x\times L_y$) are $32 \times 1$ and $4098\times 128$, respectively. The time step of the numerical integration is chosen as ${\rm dt} = 10^{-4}$. To validate our code, we have successfully reproduced previous nonlinear simulation results of non-reactive~\citep{TH1988,PM2015} and reactive~\citep{GW2009,HTAW2010,NW2011} systems. \section{Results} \label{sec:results} \subsection{Non-reactive system} \label{subsec:conc_NR} It is already known that, in the absence of any reaction effect ($D_a=0$ or $R_b=R_c$ \cite{HTAW2010,NW2011}), increasing the injection speed (i.e. increasing ${\rm Pe}$ in our dimensionless formulation of the problem) increases the destabilization of the interface by VF when $R_b > 0$ \citep{TH1988,Homsy1987,PM2015}. As a reference case, this observation is shown in Fig.~\ref{fig:NR_pe100_1000}, which illustrates the concentrations of the reactants ($A$ and $B$) for ${\rm Pe}=100$ and ${\rm Pe}=1000$, respectively, at four different times. As ${\rm Pe}$ increases, fingering becomes more intense and the wavelength of the pattern decreases as the interface becomes more unstable. It is also observed that, at low ${\rm Pe}$, the deformed interface tends to flatten as time evolves thanks to transverse diffusion.
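The central building block of such a pseudo-spectral scheme, solving the Poisson-type equation \eqref{eqn:mass} for the stream function with periodic boundary conditions, can be sketched in a few lines (plain numpy here instead of FFTW, as an illustration only, verified against a manufactured solution):

```python
import numpy as np

def poisson_periodic(f, Lx=1.0, Ly=1.0):
    """Solve nabla^2 psi = f with periodic BCs by FFT. The zero mode is
    set to 0, since psi is only defined up to an additive constant."""
    nx, ny = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    k2 = kx[:, None]**2 + ky[None, :]**2
    k2[0, 0] = 1.0                      # avoid division by zero
    psi_hat = -np.fft.fft2(f) / k2      # psi_hat = -f_hat / |k|^2
    psi_hat[0, 0] = 0.0                 # fix the mean of psi to zero
    return np.real(np.fft.ifft2(psi_hat))

# manufactured solution: psi = sin(2 pi x) sin(2 pi y)
n = 64
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
psi_exact = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
f = -2 * (2 * np.pi)**2 * psi_exact     # nabla^2 of psi_exact
psi = poisson_periodic(f)
```

In the full scheme, the right-hand side of \eqref{eqn:mass} itself depends on $\psi$ through $\psi_x$ and $\psi_y$, so this solve is embedded in an iteration, with the concentration equations advanced in time between updates.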
\begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5]{PLOTS/nonreactive_Pe100Pe1000.eps} \caption{\small{Equivalent non-reactive ($R_b=R_c=2$) system: concentrations of $A$ and $B$ for ${\rm Pe}=100$ [left] and $1000$ [right] at four different times (from top to bottom). Concentration fields are scaled between zero (blue) and one (red). The viscosity (not shown) varies in a similar way as the concentration of $B$.}} \label{fig:NR_pe100_1000} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.35]{PLOTS/routV_30n_jet1_withWhiteLine}\qquad\quad \includegraphics[scale=0.35]{PLOTS/routV_05n_jet1_withWhiteLine1.eps} \includegraphics[scale=0.35]{PLOTS/fig_conc_Fixed_yPosition_t30_Da0Pe100Rb2_NR.eps} \includegraphics[scale=0.35]{PLOTS/fig_conc_Fixed_yPosition_t5_Da0Pe1000Rb2_NR.eps} \caption{\small{Spatial profiles of the concentrations of $A$ (dashed blue line) and $B$ (dash-dotted red line), and of $\mbox{ln}(\mu)$ (solid magenta line), along the injection direction at $y=L_y/2$ for $R_b=R_c=2$ and (a) ${\rm Pe}=100$ at $t=30$ or (b) ${\rm Pe}=1000$ at $t=5$, see Fig.~\ref{fig:NR_pe100_1000}. The top figures represent the corresponding two-dimensional map of $\mbox{ln}(\mu)$ through which the one-dimensional sections are taken.}} \label{fig:Fixed_positionConc_NR} \end{center} \end{figure} Figure~\ref{fig:Fixed_positionConc_NR} compares the one-dimensional profiles of the concentrations of $A$ and $B$, and of the logarithm of the viscosity $\mbox{ln}(\mu)$, at a fixed transverse location $y=L_y/2$ for ${\rm Pe}=100$ at time $t=30$ and ${\rm Pe}=1000$ at $t=5$, respectively. As a reference, the white line $y=L_y/2$ is shown in the corresponding two-dimensional map of $\mbox{ln}(\mu)$ at the top of the panels.
While the concentration and viscosity profiles at large ${\rm Pe}$ show bumps characteristic of the fingering instability, these profiles are quasi-linear between the end-point values at small ${\rm Pe}$, indicating a more stable interface. This stabilizing effect at low ${\rm Pe}$ is in agreement with previous results \cite{TH1986,TH1988,PM2015}. \subsection{Reactive system} \label{subsec:VF_reactive} Let us now analyze the effect of ${\rm Pe}$ on reactive VF when an $A+B \rightarrow C$ reaction produces a product $C$ of lower viscosity (negative value of $R_c$), such that the viscosity of the system develops a minimum in time around the reactive front. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.46]{PLOTS/reactive_Pe100Pe1000Da1Rc_min2.eps} \caption{\small{Reactive VF at $R_b=2$, $R_c=-2$ and $D_a=1$. The first and second columns represent the concentrations of $A$, $B$, $C$, the viscosity in log-scale (${\rm ln}(\mu)$) and $\mathcal{R}$ for ${\rm Pe}=100$ and $1000$ at various times, respectively. $A$ and $B$ are scaled between zero (blue) and one (red), $C$ is scaled between zero (blue) and 0.5 (red), and ${\rm ln} (\mu)$ and $\mathcal{R}$ are shown in their absolute values.}} \label{fig:reactive_case_Da1_Pe100_1000} \end{center} \end{figure} Figure~\ref{fig:reactive_case_Da1_Pe100_1000} shows the concentrations, ${\rm ln}(\mu)$, and the reaction rate $\mathcal{R}$ at $D_a=1$, $R_b=2$ and $R_c=-2$ for two values of the P{\'e}clet number, ${\rm Pe}=100$ (first column) and $1000$ (second column). Two opposite behaviours are obtained at low and high ${\rm Pe}$: at ${\rm Pe}=1000$, fingering is more intense than in the non-reactive case, with more coarsening and more repeated shielding and tip splitting \cite{NW2011}. The fingered zone extends over a larger spatial region than in the non-reactive case (Fig.~\ref{fig:NR_pe100_1000}), suggesting that the reaction has here a destabilizing effect.
A comparison of the transversely averaged viscosity profiles in the non-reactive [Fig.~\ref{fig:Fixed_positionConc_NR}(b)] and reactive [Fig.~\ref{fig:Fixed_positionConc}(b)] cases shows that, at ${\rm Pe}=1000$, the decrease in viscosity induced by the reaction leads to a sharper viscosity jump, which can explain the increased destabilisation. As a consequence, fingering extends both in the $A$- and $B$-rich regions, with the reaction rate being localised at the fingered frontier between the two reactants. On the contrary, at ${\rm Pe}=100$ (Fig.~\ref{fig:reactive_case_Da1_Pe100_1000}, first column), a minimum in viscosity develops in the course of time where the less viscous $C$ separates the two reactants $A$ and $B$ [Fig.~\ref{fig:Fixed_positionConc}(a)]. The reaction rate correspondingly decreases in time and remains strongly localised at a given location. The time scales are also longer, as more time is needed to cover the same distance. Interestingly, fingering is weak and persists longer in the boundary zone where the less viscous $C$ displaces the more viscous $B$ than in the stable part of the non-monotonic profile where $A$ pushes the less viscous $C$. This means that, in experiments where a dye is often used to visualize the fingering pattern, the instability would quickly become unnoticeable if the dye is diluted in the injected reactant $A$~\cite{riolfo}. A comparison of the spatio-temporal distribution of $A$ in Figs.~\ref{fig:NR_pe100_1000} (non-reactive) and~\ref{fig:reactive_case_Da1_Pe100_1000} (reactive) thus leads to the conclusion that, at high ${\rm Pe}$, reactive fingering is more intense, with more ramified fingers that cover a larger area in the presence of the reaction. On the contrary, at low ${\rm Pe}$, fingering is stabilized by the reaction. The reaction-induced decrease in viscosity thus has opposite effects on the flow at high and low ${\rm Pe}$, as observed experimentally~\cite{riolfo,NMKT2007}.
\begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.44]{PLOTS/routV_30n_jet_withWhiteLine.eps} \includegraphics[scale=0.44]{PLOTS/routV_05n_jet1_Whiteline1.eps} \includegraphics[scale=0.35]{PLOTS/fig_conc_Fixed_yPosition_t30_Da1Pe100Rcmin2Rb2.eps} \includegraphics[scale=0.35]{PLOTS/fig_conc_Fixed_yPosition_t5_Da1Pe1000Rcmin2Rb2.eps} \caption{\small{Same as Fig.~\ref{fig:Fixed_positionConc_NR} but for the reactive case at $D_a=1$, see Fig.~\ref{fig:reactive_case_Da1_Pe100_1000}.}} \label{fig:Fixed_positionConc} \end{center} \end{figure} \subsection{Quantitative analysis} \label{subsec:quantitative} In order to understand the opposite dynamics at low and high ${\rm Pe}$, and to quantify the influence of varying ${\rm Pe}$ on reactive VF, we compute the one-dimensional transversely averaged profile of a given quantity $\zeta(x,y,t)$ as \begin{equation} \langle \zeta(x,t) \rangle= \frac{1}{L_y} \int_{0}^{L_y} \zeta(x,y,t)\,\,{\rm d}y, \label{eqn:profile} \end{equation} where $\zeta$ can be, for instance, a concentration, the viscosity, etc. In the absence of fingering ($R_b=R_c=0$), these profiles are equivalent to the one-dimensional reaction-diffusion profiles. For the simulations of Fig.~\ref{fig:reactive_case_Da1_Pe100_1000}, the temporal evolution of some of these transversely averaged profiles is shown in Fig.~\ref{fig:profile_Pe100Pe1000_Da1}.
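On the uniform grid of the simulations, Eq.~\eqref{eqn:profile} reduces to a row-wise mean over the transverse direction. A minimal sketch of this diagnostic (array shapes are illustrative):

```python
import numpy as np

def transverse_average(zeta, ly=1.0):
    """Transversely averaged profile <zeta>(x, t) of Eq. (profile):
    (1/L_y) times the integral of zeta over y, evaluated with the
    rectangle rule on a uniform periodic y-grid, i.e. the row-wise mean."""
    ny = zeta.shape[1]
    dy = ly / ny                    # uniform grid spacing in y
    return zeta.sum(axis=1) * dy / ly
```

On a periodic grid the rectangle rule is spectrally accurate, so this is exactly the averaging used to produce one-dimensional profiles from the two-dimensional fields.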
\begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.36]{PLOTS/fig_profil_t010203040_Pe100Da1Rc_min2_scaled.eps} \includegraphics[scale=0.35]{PLOTS/fig_profil_t0_1_3_5_10_Pe1000Da1Rc_min2_scaled.eps} \includegraphics[scale=0.35]{PLOTS/fig_Vprofil_t010203040_Pe100Da1Rc_min2_scaled.eps}\quad \includegraphics[scale=0.35]{PLOTS/fig_Vprofil_t0_1_3_5_10_Pe1000Da1Rc_min2_scaled.eps} \includegraphics[scale=0.35]{PLOTS/fig_Rprofil_t010203040_Pe100Da1Rc_min2_scaled.eps}\quad \includegraphics[scale=0.35]{PLOTS/fig_Rprofil_t0_1_3_5_10_Pe1000Da1Rc_min2_scaled.eps} \caption{\small{Transversely averaged concentration (top row), viscosity (middle row), and reaction rate (bottom row) profiles corresponding to the simulations of Fig.~\ref{fig:reactive_case_Da1_Pe100_1000} for ${\rm Pe}=100$ (left column) and ${\rm Pe}=1000$ (right column). The dashed, dash-dotted and solid lines in panels (a,b) depict the concentrations of $A$, $B$ and $C$, respectively. The black, red, green, blue and magenta colors in panels~(a,c,e) correspond to $t=0$, $10$, $20$, $30$ and $40$, respectively, while those in panels (b,d,f) correspond to $t=0$, $1$, $3$, $5$ and $10$, respectively.}} \label{fig:profile_Pe100Pe1000_Da1} \end{center} \end{figure} In the convective flow regime, the fingering pattern starts to develop around the reactive interface as soon as the solutions $A$ and $B$ react and produce the less-viscous product $C$, see Figs.~\ref{fig:profile_Pe100Pe1000_Da1}(a) and \ref{fig:profile_Pe100Pe1000_Da1}(b). As the system evolves in time, we see that increasing amounts of $A$ and $B$ are consumed and that the total amount of product $\langle c(x,t) \rangle$ increases. The corresponding reaction rate $\langle \mathcal{R}(x,t) \rangle$, shown in Fig.~\ref{fig:profile_Pe100Pe1000_Da1}(e), decreases in time as $A$ and $B$ are consumed and progressively separated by $C$. Fig.~\ref{fig:profile_Pe100Pe1000_Da1}(c) shows the temporal development of the viscosity profile.
At low ${\rm Pe}$, a viscosity minimum develops in time at the back of the reaction front, where the product concentration is maximum, as can also be seen in Figs.~\ref{fig:reactive_case_Da1_Pe100_1000}(c) and~\ref{fig:reactive_case_Da1_Pe100_1000}(d). Owing to the viscosity minimum, the interface between $A$ and $C$ is stabilized, which can clearly be observed in Fig.~\ref{fig:reactive_case_Da1_Pe100_1000}(a) as the interface tends to flatten. On the contrary, the interface between $B$ and $C$, where the less viscous $C$ pushes the more viscous $B$, indicates the presence of VF. Nevertheless, transverse diffusion finally dominates over VF, and the interface between $B$ and $C$ eventually stabilizes again [see Fig.~\ref{fig:reactive_case_Da1_Pe100_1000}(a--e)]. Let us now quantitatively analyze the fingering patterns at larger ${\rm Pe}$. We have seen in Fig.~\ref{fig:reactive_case_Da1_Pe100_1000}(f--j) that the reaction is destabilizing at high ${\rm Pe}$, in contrast to its stabilizing effect at low ${\rm Pe}$. Figures~\ref{fig:profile_Pe100Pe1000_Da1}(b,d,f) show that, at high ${\rm Pe}$, when VF is present, the transversely averaged concentration profiles feature bumps indicating the presence of forward and backward fingering. In contrast to fingering at the back, forward fingering shows merging and tip-splitting, see Fig.~\ref{fig:reactive_case_Da1_Pe100_1000}(f--j). The log-viscosity [Fig.~\ref{fig:profile_Pe100Pe1000_Da1}(d)] and reaction rate [Fig.~\ref{fig:profile_Pe100Pe1000_Da1}(f)] profiles show similar features. The center of mass of these profiles is shifted towards the right of the reaction front, indicating the presence of more elongated fingering in the $B$-rich region. While, at low ${\rm Pe}$, the viscosity minimum formed at the back (or left) of the reaction front gives rise to stabilization, it is completely absent at high ${\rm Pe}$, causing VF to expand significantly around the reaction zone.
\section{Parametric Study} \label{sec:parametric_analysis} We have seen that fingering is stabilized at lower ${\rm Pe}$ when the viscosity decreases thanks to a chemical reaction. To gain more insight into this stabilization effect, a parametric study is next carried out at several low ${\rm Pe}$ values to understand the effect of varying the Damk{\" o}hler number $D_a$ and the viscosity of the product by changing the log-mobility ratio $R_c$. \subsection{Effect of mobility ratio $R_c$ at $R_b>0$} \label{subsec:effRc} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.33]{PLOTS/reactive_Rcmin2_0_2_Pe100Da1.eps} \caption{\small{From top to bottom in each column: concentrations of $A$, $B$ and $C$, $\mbox{ln}(\mu)$ and $\mathcal{R}$ at times $t=10$, $20$, $30$ and $40$ for ${\rm Pe}=100$ with $R_c=-2$ (first column), $R_c=0$ (second column) and $R_c=2$ (equivalent to the non-reactive case, third column). Other parameter values are as in Fig.~\ref{fig:reactive_case_Da1_Pe100_1000}.}} \label{fig:EffR} \end{center} \end{figure} The effect of changing the log-mobility ratio $R_c$ is shown in Fig.~\ref{fig:EffR}. We consider the three values $R_c=-2$, $0$, $2$. We recall that, when $R_b=R_c$ ($=2$ here), the consumption of $B$ is balanced by the production of $C$, hence the dynamics of the reactive case is equivalent to that of the non-reactive system. When $0<R_c<R_b$, the viscosity is decreased by the reaction but the viscosity profile remains monotonic in space. On the contrary, if $R_c<0$, a minimum in viscosity develops in time. For $R_b=2$, the cases $R_c=2$, $R_c=0$ and $R_c=-2$ thus represent (i) non-reactive VF, (ii) reactive VF with a monotonic viscosity profile, and (iii) reactive VF with a viscosity minimum, respectively.
By comparing the concentrations of $A$, $B$ and $C$ at successive times in these three cases, we see that, when $R_c<0$, the viscosity minimum has the following effects: (i) the interface between $A$ and $C$ stabilizes rapidly and the mixing of reactant $A$ decreases as compared to the other two cases; (ii) as time evolves, the mixing region between $C$ and $B$ widens and fingering eventually stops, with more $B$ being displaced by the product $C$. Reactive VF is thus stabilized at low ${\rm Pe}$ by the viscosity minimum, compared to reactive VF with a monotonic viscosity profile or non-reactive VF. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5]{PLOTS/figRD_VRc0_2_min2_Da1Pe100.eps}\qquad \includegraphics[scale=0.5]{PLOTS/figEffRc_ML_Pe100Da1.eps} \caption{\small{(a) RD profiles for $\mbox{ln}(\mu)$, and (b) temporal variation of the mixing length for $A$ (main panel), $B$ (inset), and $C$ (inset) for $R_c=-2$ (solid line), $0$ (dashed line), and $2$ (dash-dotted line). Other parameter values are as in Fig.~\ref{fig:EffR}.}} \label{fig:effR_2_RD_L} \end{center} \end{figure} The origin of this stabilization can be explained through the long-time asymptotic one-dimensional reaction-diffusion (RD) profiles of ${\rm ln}(\mu)$, as shown in Fig.~\ref{fig:effR_2_RD_L}(a). If $R_c=-2$, the reaction-diffusion viscosity front moves in time from the higher-viscosity region of $B$ to the lower-viscosity region of $A$, see Fig.~\ref{fig:profile_Pe100Pe1000_Da1}(a). Due to the presence of the lower-viscosity region containing $C$, the profile of ${\rm ln}(\mu)$ develops a minimum in the $A$-rich region. While the gradients ${\rm d}({\rm ln}\mu)/{\rm d}x$ on the left of the reaction front [$x-x_0<0$] decrease with $R_c$, those on the right [$x-x_0>0$] increase. Owing to this, when $R_c=-2$, the miscible interface between $A$ and $C$ is more stable, as is the case when a higher-viscosity fluid displaces a lower-viscosity one.
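The mixing lengths plotted in Fig.~\ref{fig:effR_2_RD_L}(b) can be extracted from the transversely averaged concentration profiles. The threshold-based definition sketched below (the length of the zone where the averaged concentration lies between 1\% and 99\%) is a common convention and an assumption of this illustration, since the excerpt does not spell out the authors' exact definition:

```python
import numpy as np

def mixing_length(avg_conc, dx, lo=0.01, hi=0.99):
    """Mixing length from a transversely averaged concentration profile:
    total length of the region where lo < <conc> < hi.  The 1% and 99%
    thresholds are a common convention, assumed here."""
    mask = (avg_conc > lo) & (avg_conc < hi)
    return float(mask.sum()) * dx
```

With this definition a sharper front yields a smaller mixing length, consistent with a flatter, more stable interface giving a smaller $L_a$.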
As a consequence, the mixing length $L_a$ decreases rapidly as time evolves and finally reaches a steady value, which is the lowest among all cases, as shown in the main panel of Fig.~\ref{fig:effR_2_RD_L}(b). In contrast to the interface between $A$ and $C$, the interface between $C$ and $B$ is more unstable when $R_c=-2$ because ${\rm d}({\rm ln}\mu)/{\rm d}x$ is then the steepest, see Fig.~\ref{fig:effR_2_RD_L}(a). This can also be noticed in the evolution of the mixing lengths of $B$ and $C$ in Fig.~\ref{fig:effR_2_RD_L}(b). The instability at the interface between $B$ and $C$ starts earlier, and the mixing lengths $L_b$ and $L_c$ increase further in time as $R_c$ decreases. As time evolves (far from the onset), due to transverse diffusion, $L_b$ and $L_c$ reach a steady value which increases with decreasing $R_c$. The displacement of $B$ is thus larger when $R_c=-2$ in comparison to $R_c=0$ and $2$ (non-reactive). From Figs.~\ref{fig:reactive_case_Da1_Pe100_1000}--\ref{fig:effR_2_RD_L}, we can thus conclude that, when $R_c=-2$, the front between $A$ and $C$ stabilizes, the mixing between $B$ and $C$ is increased and the displacement of $B$ is larger. \subsection{Effect of the P{\'e}clet number ${\rm Pe}$} \label{subsec:effPe} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.3]{PLOTS/reactive_Pe100_150_300Da1Rc_min2} \caption{\small{Same as Fig.~\ref{fig:EffR} but for $R_c=-2$ and different values of ${\rm Pe}$: $100$ (first column), $150$ (second column) and $300$ (third column).}} \label{fig:effPe} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.45]{PLOTS/figRD_VPe100_150_300_Da1RcMin_21.eps}\quad \includegraphics[scale=0.45]{PLOTS/Fig_ML_Da1_Pe100_150_300_1.eps} \caption{\small{(a) RD profiles of $\mbox{ln}(\mu)$ for ${\rm Pe}=100$ (blue),~$150$ (red),~$300$ (black) and $1000$ (magenta), where the solid and dashed lines represent the reactive and non-reactive systems, respectively.
(b) Temporal evolution of the mixing length for $A$ (main panel), $B$ (upper inset) and $C$ (lower inset) at ${\rm Pe}=100$ (blue solid line), $150$ (red dashed line), $300$ (black dash-dotted line) and $1000$ (magenta dotted line). Other parameter values are as in Fig.~\ref{fig:effPe}.}} \label{fig:effPe_2} \end{center} \end{figure} In the previous section, we have seen that the onset time decreases, i.e.\ the system is initially more unstable, as $R_c$ decreases. We now fix $R_c=-2$ and analyze the effect on fingering of changing ${\rm Pe}$, keeping it nevertheless at small values. Specifically, the concentrations, $\mbox{ln}(\mu)$ and the reaction rate are shown for ${\rm Pe}=100$ (first column), $150$ (second column) and $300$ (third column) in Fig.~\ref{fig:effPe}. We see that the system becomes more unstable when ${\rm Pe}$ increases. This can be understood by inspecting the one-dimensional RD profiles shown in Fig.~\ref{fig:effPe_2}(a). The non-reactive displacement (dashed lines) is more unstable at higher ${\rm Pe}$ because the gradient of viscosity ${\rm d}({\rm ln}\mu)/{\rm d}x$ is correspondingly sharper. Similarly, the viscosity gradients in the reactive RD systems (solid lines) are larger when ${\rm Pe}$ increases, as diffusion is then less efficient at smoothing the viscosity profile. Consequently, the RDC system also becomes more unstable with increasing ${\rm Pe}$, as shown in Fig.~\ref{fig:effPe} and in the evolution of the mixing lengths [Fig.~\ref{fig:effPe_2}(b)], where we see that the onset time of the fingering instability decreases with increasing ${\rm Pe}$. The smaller ${\rm Pe}$, the more quickly the mixing lengths tend to a steady-state value, whereas at large ${\rm Pe}$ the mixing lengths increase from the start, irrespective of the viscosity minimum at the interface.
\subsection{Effect of the Damk{\"o}hler number $D_a$} \label{subsec:effDa} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.33]{PLOTS/reactive_Dap5_1_5_Pe100Rc_min2.eps} \caption{\small{Same as Fig.~\ref{fig:EffR} but for ${\rm Pe}=100$ and different values of $D_a$: $0.5$ (first column), $1$ (second column) and $5$ (third column).}} \label{fig:effDa} \end{center} \end{figure} To study the effect of varying the Damk{\"o}hler number on the stabilization of the fingering instability by reactions decreasing the viscosity, Fig.~\ref{fig:effDa} depicts the concentrations, ${\rm ln}(\mu)$ and the reaction rate $\mathcal{R}$ at successive times for three values of $D_a$. We see that, when $D_a$ increases (i.e.\ the reaction is faster), the viscosity minimum develops more quickly [see also Fig.~\ref{fig:effDa_2}(a)], the amount of product $C$ formed at a given time increases, and the reaction rate $\mathcal{R}$ decays faster because the reactants $A$ and $B$ are increasingly separated by the product $C$. As a consequence, when $D_a$ increases, the miscible interface between $A$ and $C$ stabilizes faster and the steady value of $L_a$ decreases. In parallel, the interface between $B$ and $C$ becomes uniform in time, and the corresponding values of $L_b$ and $L_c$ saturate [see Fig.~\ref{fig:effDa_2}(b)]. The system is thus globally more stable when $D_a$ is larger. We thus conclude from this parametric study that the displacement tends to be stabilized at lower ${\rm Pe}$, for $R_c<0$ and larger $D_a$, and destabilized at higher ${\rm Pe}$, for $R_c \geq 0$ and smaller $D_a$. The optimal conditions to avoid fingering are thus achieved when the viscosity is decreased by a fast chemical reaction, provided the injection rate of the displacing fluid is kept as low as possible to allow the viscosity minimum to build up.
\begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.45]{PLOTS/FIG_RD_Da1_5_p5_Pe100Rcmin2.eps}\qquad \includegraphics[scale=0.45]{PLOTS/figEffDa_ML_Pe100Rc_min2.eps} \caption{\small{Same as Fig.~\ref{fig:effPe_2} for variable $D_a$: $0.5$ (thick dashed line), $1.0$ (thick solid line) and $5.0$ (thick dot-dashed line). The thin dashed line in panel (a) represents the non-reactive case.}} \label{fig:effDa_2} \end{center} \end{figure} \section{Conclusion and Outlook} \label{sec:Conclusion and Outlook} We have analysed here the influence of the injection flow rate on reactive VF driven by a simple $A+B \rightarrow C$ chemical reaction decreasing the viscosity {\it in situ}. To do so, we have numerically integrated Darcy's law for the evolution of the flow velocity and RDC equations for the concentrations, coupled through a viscosity profile depending dynamically on the concentrations of the chemical species. The injection flow rate has been varied by changing the value of the dimensionless parameter ${\rm Pe}$. Nonlinear simulations have been performed to characterise the properties of reactive VF when a solution of a reactant $A$ displaces a solution of $B$ to produce the less viscous product $C$ at the miscible reactive interface. At lower ${\rm Pe}$, the VF instability is less intense in both the reactive and non-reactive cases because the viscosity gradients are smoothed out by diffusion. The reactive VF pattern nevertheless covers a larger area, i.e.\ it is spatially denser than the non-reactive pattern. These observations are in good agreement with experiments \cite{riolfo,NMKT2007}. Similarly to the non-reactive case, at higher ${\rm Pe}$, VF is enhanced in reactive systems when the viscosity minimum does not have time to build up. Less-dense fingering patterns and more mixing are then observed. In other words, the fingering patterns at high ${\rm Pe}$ cover a smaller area than at low ${\rm Pe}$.
In terms of displacement efficiency, the presence of a viscosity minimum at lower ${\rm Pe}$ is found to favor a homogeneous and regular displacement with less convective mixing. Our study provides a mathematical framework to control VF in many geophysical processes, e.g.~reactive pollutant displacement, ${\rm CO}_2$ sequestration and enhanced oil recovery (EOR). Recently, it has been shown that fingering instabilities in EOR applications can be controlled by introducing a viscosity minimum in the zone of contact between the two fluids via the formation of foam between the injected gas and the displaced oil \cite{Farajzadeh2015}. In this context, the present study (i) provides a connection between the viscosity minimum and stabilization, (ii) introduces a way to control VF by tuning the injection rate, and (iii) shows that, at low injection rates, reactive VF improves the sweep efficiency in comparison to non-reactive conditions. \section*{Acknowledgment} We thank Y. Nagatsu, F. Brau, F. Haudin and M. Mishra for fruitful discussions. P.S. acknowledges financial support from IIT Madras for a New Faculty Initiation Grant, and a New Faculty Seed Grant. A.D. acknowledges PRODEX for financial support.
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\bfseries}}
\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\itshape}}
\makeatother
\def\pplogo{\vbox{\kern-\headheight\kern -29pt
\halign{##&##\hfil\cr&{\ppnumber}\cr\rule{0pt}{2.5ex}&\ppdate\cr}}}
\makeatletter
\def\ps@firstpage{\ps@empty \def\@oddhead{\hss\pplogo}%
\let\@evenhead\@oddhead}
\thispagestyle{plain}
\def\maketitle{\par \begingroup
\def\thefootnote{\fnsymbol{footnote}}
\def\@makefnmark{\hbox{$^{\@thefnmark}$\hss}}
\if@twocolumn \twocolumn[\@maketitle] \else \newpage \global\@topnum\z@ \@maketitle \fi
\thispagestyle{firstpage}\@thanks
\endgroup
\setcounter{footnote}{0}
\let\maketitle\relax \let\@maketitle\relax
\gdef\@thanks{}\gdef\@author{}\gdef\@title{}\let\thanks\relax}
\makeatother
\numberwithin{equation}{section}
\newcommand\nn{\nonumber}
\newcommand\eea{\end{eqnarray}}
\newcommand\bea{\begin{eqnarray}}
\newcommand{\sfrac}[2]{{\textstyle\frac{#1}{#2}}}
\newcommand\di{\partial}
\newcommand\mpl{M_{\rm Pl}}
\newcommand\spacelike{\parbox{.7cm}{\Huge$\times$}}
\renewcommand{\tanh}{\mathop{\rm th}\nolimits}
\renewcommand{\ln}{\mathop{\rm ln}\nolimits}
\newcommand{\sm}[1]{{\scriptscriptstyle \rm #1}}
\renewcommand{\Im}{\mathop{\rm Im}\nolimits}
\renewcommand{\Re}{\mathop{\rm Re}\nolimits}
\renewcommand{\t}{\tilde}
\textwidth = 6.5 in
\textheight = 8.5 in
\oddsidemargin = 0.0 in
\begin{document}
\setcounter{page}0
\def\ppnumber{\vbox{\baselineskip14pt }}
\def\ppdate{ }
\date{}
\author{Horacio Casini, Eduardo Test\'e, Gonzalo Torroba\\ [7mm] \\ {\normalsize \it Centro At\'omico Bariloche and CONICET}\\ {\normalsize \it S.C. de Bariloche, R\'io Negro, R8402AGP, Argentina} }
\bigskip
\title{\bf All the entropies on the light-cone \vskip 0.5cm}
\maketitle
\begin{abstract}
We determine the explicit universal form of the entanglement and Renyi entropies, for regions with arbitrary boundary on a null plane or the light-cone. All the entropies are shown to saturate the strong subadditive inequality. This Renyi Markov property implies that the vacuum behaves like a product state. For the null plane, our analysis applies to general quantum field theories, and we show that the entropies do not depend on the region. For the light-cone, our approach is restricted to conformal field theories. In this case, the construction of the entropies is related to dilaton effective actions in two less dimensions.
In particular, the universal logarithmic term in the entanglement entropy arises from a Wess-Zumino anomaly action. We also consider these properties in theories with holographic duals, for which we construct the minimal area surfaces for arbitrary shapes on the light-cone. We recover the Markov property and the universal form of the entropy, and argue that these properties continue to hold upon including stringy and quantum corrections. We end with some remarks on the recently proved entropic $a$-theorem in four spacetime dimensions. \end{abstract} \bigskip \newpage \tableofcontents \vskip 1cm \section{Introduction}\label{sec:intro} Quantum information theory provides powerful techniques to understand nonperturbative aspects of quantum field theory (QFT). One useful way in which this has worked out is by applying information-theoretic inequalities, such as strong subadditivity or monotonicity of the relative entropy, to QFT. These inequalities give insights into causality and unitarity constraints in relativistic theories, which are often hard to recognize from local observables. Some examples include energy conditions in QFT~\cite{Blanco:2013lea, Faulkner:2016mzt, Balakrishnan:2017bjg, Bousso:2015mna, Bousso:2015wca, Koeller:2015qmn, Leichenauer:2018obf}, and proofs of the irreversibility of renormalization group (RG) flows in various dimensions~\cite{Casini:2004bw, Casini:2012ei, Casini:2016fgb, Casini:2016udt, Casini:2017vbe, Lashkari:2017rcl}. \begin{figure}[h!] \begin{center} \includegraphics[width=0.6\textwidth]{nullplane.jpg} \captionsetup{width=0.9\textwidth} \caption{Region with boundary $x^+=\gamma(y)$ (green curve) on the null plane $x^-=0$ and parallel to $k=(1, 1, 0, \ldots)$. Here $y$ are the $d-2$ transverse coordinates. 
} \label{fig:plane} \end{center} \end{figure} Recently, it has become clear that these results can be extended and generalized by taking the null limit.\footnote{This was motivated by the entropic proof of the $g$-theorem in~\cite{Casini:2016fgb}, which recognized that working with Cauchy surfaces that approach the null cone allows to derive nontrivial constraints for the irreversibility of the RG. See also~\cite{Casini:2016udt}.} Here one considers the reduced density matrix $\rho_X$ for a region $X$ whose boundary $\gamma$ lies on a null plane or on the light-cone. See Figs.~\ref{fig:plane} and \ref{fig:cone}. For these regions, Ref.~\cite{Casini:2016udt} obtained the modular Hamiltonian, which turns out to be local and given by the Rindler result, ray by ray. See also \cite{Koeller:2017njr,Wall:2011hj}. This surprising result is a consequence of the special geometry and symmetries on the null plane. As a consequence, the entanglement entropy (EE) for general QFTs saturates the strong subadditive (SSA) inequality on the null plane, \begin{equation}\label{eq:Markov} S_A+S_B-S_{A\cap B}-S_{A\cup B}=0\,. \end{equation} This is called the Markov property, in analogy with the classical case. For a conformal field theory (CFT), the null plane can be mapped to the light-cone, and then (\ref{eq:Markov}) holds on the null cone as well. With this result for CFTs, we showed in~\cite{Casini:2017vbe} that for RG flows between UV and IR fixed points, the change $\Delta S(r) = S(r) - S_{CFT_{UV}}(r)$ in the EE for a sphere obeys \begin{equation}\label{eq:beautiful} r\, \Delta S''(r) -(d-3) \Delta S'(r)\le 0\,. \end{equation} This leads to a new proof of the $a$-theorem in four spacetime dimensions, and it also reproduces the proof of~\cite{Casini:2012ei} for the $c$-theorem in two dimensions and the $F$-theorem in three dimensions. In this way, a single formula unifies all known results for the irreversibility of the RG in Lorentz invariant QFTs in $d \le 4$. 
See also~\cite{Lashkari:2017rcl} for related work. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{nullcone.jpg} \caption{A region with boundary on the light-cone. This setup applies for CFTs.} \label{fig:cone} \end{center} \end{figure} In the present work, we will analyze in detail the explicit form of the entanglement and Renyi entropies for regions with arbitrary boundaries $\gamma$ on the null plane (for general QFTs) and on the light-cone (for CFTs). In Sec.~\ref{sec:markov} we will provide simple geometric arguments that will prove that the EE and all Renyi entropies are in fact \textit{independent} of $\gamma$ on the null plane. This is a very strong result, and it implies that all Renyi entropies also satisfy the Markov property (\ref{eq:Markov}). This infinite set of equations for the reduced density matrix basically says that the vacuum state behaves like a product state over the null plane. In this sense, the result is opposite in spirit to the Reeh-Schlieder theorem, that forbids such products over spatial regions. The situation is much richer for regions with boundary on the light-cone, and we study this in Sec.~\ref{sec:qft}. Using Lorentz invariance and the Markov property, we determine the universal explicit form for all the entropies as a function of $\gamma$. This generalizes the result for the EE of a sphere to arbitrary boundaries. We obtain a local functional that is an integral over the angular coordinates of the light-cone. We interpret this as an effective action for a dilaton $\log \gamma(y)$ in $d-2$ dimensions.\footnote{For earlier work connecting the EE to a dilaton field theory in two less dimensions see~\cite{Solodukhin:2013yha}.} In particular, we argue that the universal logarithmic term for the sphere EE generalizes to the Wess-Zumino anomaly action for the dilaton. 
In the second part of the paper (Sec.~\ref{sec:holo}) we study these questions from the point of view of AdS/CFT.\footnote{The results of this section were presented by the authors during 2017 at various seminars and conferences.} The EE for the boundary theory becomes the area of the extremal Ryu-Takayanagi surface in the gravitational theory. We construct the extremal surfaces corresponding to regions of the boundary QFT on the null plane and the light-cone. This geometric problem turns out to have various special features: the surfaces are described by linear differential equations (bulk Laplacians), and they themselves lie on the bulk null plane or cone. We verify that the Markov property holds holographically. For the null cone, we evaluate the holographic EE explicitly, and check that it agrees with a special case of the general form predicted for CFTs in Sec.~\ref{sec:qft}. These results are extended to include $1/N$ and 't Hooft coupling corrections. Armed with these additional insights, in Sec.~\ref{sec:athm} we revisit the proof of the $a$-theorem of~\cite{Casini:2012ei}, checking and expanding on the arguments in that work. In the process, we uncover a new positivity constraint for a nonlocal term in the EE. Lastly, in Sec.~\ref{sec:concl} we discuss implications of our results and various future directions. \textit{Note added:} while we were preparing the manuscript for submission, the work~\cite{Neuenfeld:2018dim} appeared, which also studies extremal surfaces with boundaries on the null plane and cone in holographic theories. Some of the results in Sec.~\ref{sec:holo} -- specifically, our formulas (\ref{con}) and (\ref{eq:solrt}) -- overlap with that reference.
\section{Markov property for Renyi entropies} \label{sec:markov} In~\cite{Casini:2017roe} we showed that modular Hamiltonians $H_X$ for regions $X$ with boundary on a null plane $x^-=x^1-x^0=0$ are given by \begin{equation} H_\gamma=2\pi \int d^{d-2} y\, \int_{\gamma(y)}^\infty dx^+\, (x^+-\gamma(y)) T_{++}(x^+, y)\,, \label{eq:DeltaH2} \end{equation} up to an additive constant. Here $y$ denotes the transverse coordinates $(x^2, \ldots, x^{d-1})$, and $x^+= \gamma(y)$ parametrizes the boundary of $X$ on the null plane. This is simply the Rindler result, ray by ray. It leads to the operator equation \begin{equation} H_A+H_B-H_{A\cap B}-H_{A\cup B}=0\,, \end{equation} which in turn implies the Markov property for the entanglement entropies (EE) \begin{equation} S_A+S_B-S_{A\cap B}-S_{A\cup B}=0\,. \label{mcom} \end{equation} In this section we will prove a much stronger statement, namely that all vacuum Renyi entropies of regions with boundary on the null plane also satisfy the Markov property. Our analysis on the null plane will be valid for any QFT. Hence, for conformal field theories (CFTs), after a conformal transformation, the Markov property also holds for Renyi entropies of regions with boundary on the null cone. This gives an infinite set of equations for the vacuum reduced density matrix, placing strong constraints on quantum entanglement in QFTs. We will argue that these properties of the entropies arise simply from geometrical considerations. In fact, our arguments also extend to other quantities such as free energies with insertions of $(d-2)$ dimensional surface operators. In the future, it would be interesting to understand the implications of our formulas for surface operators in gauge theories. \subsection{Proof of the Markov property}\label{subsec:proof} Let us first describe the setup in more detail. We work in $d$-dimensional Minkowski space with signature $(-,+,\ldots, +)$, and introduce null coordinates \begin{equation} x^\pm = x^1 \pm x^0\,.
\end{equation} Consider a null plane $x^-=0$ with orthogonal coordinates $x^+$ and $y^a=(x^2,\ldots,x^{d-1}) \in \mathbb R^{d-2}$. The metric on the plane is \begin{equation}\label{eq:null-metric} ds^2=(dy^a)^2+0 \, dx^+ dx^-\,. \end{equation} We take a $(d-2)$-dimensional surface $x^+=\gamma(y)$ on the null plane, crossing all null rays --see Fig.~\ref{fig:plane}. We wish to compute the vacuum entanglement Renyi entropy $S_n$ of a QFT for a region with boundary at $x^+=\gamma(y)$. Since the entanglement entropy depends on the whole causal region rather than on the particular Cauchy surface, it can equivalently be regarded as a functional of the boundary $\gamma(y)$. We assume a Lorentz invariant regularization of the entropies, with short distance cutoff $\epsilon$. A Lorentz invariant cutoff can be produced using the mutual information, or mutual Renyi entropies; see Appendix \ref{Appendix:mutual}. In a theory with mass scales, $S_n$ can also depend on other dimensionful parameters. Since we are working with the vacuum state, we can only use the geometry of $\gamma$, $\epsilon$, and some constants of the theory to construct $S_n(\gamma)$. In particular, we can expand in terms of functionals of the form \begin{equation}\label{eq:functional} S_n(\gamma)=\int d^{d-2}\sigma_{y_1}\,\ldots \int d^{d-2}\sigma_{y_n} \,f(\gamma(y_1),\ldots,\gamma(y_n); \nabla \gamma(y_1), \ldots)\,, \end{equation} where $d\sigma$ is a volume element along $\gamma$ and $f$ is a function of the distances between points and the dimensionful parameters. The simplest argument is as follows. These functionals should be Lorentz invariant. In particular, a boost rescales the coordinate $x^+ \to \lambda x^+$, so we have \begin{equation} S_n(\gamma)=S_n(\lambda \gamma)\,, \end{equation} for any $\lambda>0$.
Taking the limit $\lambda\rightarrow 0$, and focusing on bounded curves, the entropy of $\gamma$ must then be the same as that of a surface arbitrarily near the plane $x^+=0$.\footnote{We are implicitly neglecting some ``pathological" Lorentz invariant functionals which still distinguish smooth surfaces arbitrarily close (along with all the derivatives) to $\gamma=0$, such as one counting the number of maxima of $\gamma$. We expect the regularized entropies to be continuous as functions of the shape in this sense.} Therefore, $S_n$ must be independent of $\gamma$. Another way to establish this is to realize that the degenerate metric (\ref{eq:null-metric}) gives an infinite set of isometries for the null plane \bea y &=& y'\,, \nonumber \\ x^+ &=& h(y', {x^+}')\,. \label{defo} \eea That is, we can deform the $x^+$ coordinate in a way dependent on $y$, and get the same metric. These are of course not isometries of the full Minkowski space. Any two surfaces $\gamma$ can be deformed into one another by these isometries. Hence any two such surfaces have identical (flat) intrinsic geometry and are identically embedded in the null plane. These isometries imply that the functional (\ref{eq:functional}) will be the same for all $\gamma$. Nothing changes if we also use derivatives of $\gamma$ of arbitrary order to form the functional. More explicitly, multiple gradients of $\gamma$ are tensors that can be expanded with the orthogonal vectors $k=(1,1,0,\ldots, 0)$ and $\hat{y}^a$, and the same holds for the distance vectors between any two points along $\gamma$. Once these tensors are contracted, the components proportional to $k$ do not contribute because $k^2=0$, $k \cdot \hat{y}^a=0$. Hence the remaining contribution is the same as the one of a planar $\gamma$, and hence independent of the shape of $\gamma$.
Another aspect of this impossibility of distinguishing different $\gamma$ with a geometric functional is that we cannot form nontrivial invariants from the extrinsic curvatures of $\gamma$. There are two null vectors normal to $\gamma$, $k=(1,1,0,\ldots,0)$ and $q$, $q^2=0$, normalized with $k\cdot q=1$. Since $k$ is constant along $\gamma$, the corresponding extrinsic curvature vanishes. There is an ambiguity $k\rightarrow \lambda k$, $q\rightarrow q/\lambda$ in the representation of the surface in terms of the orthogonal null vectors. Then, in order to produce an invariant we would have to use products of the extrinsic curvatures along both $q$ and $k$, and these also vanish. We conclude that all functionals we can construct should give the same value of $S_n$ for any $\gamma$.\footnote{For the entropy, this statement might be related, in an admittedly obscure way, to a similar statement in \cite{Casini:2017roe} for infinite dimensional systems where the Markov property holds for the full modular Hamiltonians.} The Markov property for $S_n$ then follows trivially: the combination \begin{equation}\label{eq:Renyi-Markov} S_n(A)+S_n(B)-S_n(A\cap B)-S_n(A\cup B)=0\,, \end{equation} holds because all the entropies are equal. This result on the independence of $S_n$ from $\gamma$ did not assume any unitary symmetry of the vacuum corresponding to the deformations (\ref{defo}) of the null plane. However, in addition to Lorentz boosts, such unitary symmetries deforming the null plane along the null rays and keeping the vacuum invariant do indeed exist for the special case $x^+=x^{+ \, \prime}+ \gamma^\prime(y^\prime)$. These are given by the modular translations corresponding to other arbitrary regions $\gamma^\prime$ with boundary in the null plane \cite{Casini:2017roe}. They act as isometries on the plane but do not have local action on field operators outside the plane.
Therefore, the transformations between different surfaces $\gamma$ can indeed be implemented by unitaries keeping the vacuum invariant. This geometric argument implies that the equality of the entropies for all $\gamma$ extends to other quantities such as partition functions with insertions of $(d-2)$-dimensional surface operators. But this does not apply to lower-dimensional operators, which are not equivalent under the isometries of the null plane. The argument above needed a Lorentz invariant cutoff. Once this requirement is dropped, the equality of all entropies for different $\gamma$ no longer holds -- we could for example change the cutoff around $\gamma$ and $\gamma^\prime$ independently. However, the Markov property is a regularization independent statement. The reason is that the divergences in the entropies are local and extensive on the boundary of the region; hence in any other regularization they must also cancel locally in the combination (\ref{eq:Renyi-Markov}). In conclusion, a Lorentz invariant geometric functional of $(d-2)$-dimensional surfaces with minimal continuity properties must be constant on regions with boundary on a null plane. If this functional is either finite or has local extensive divergences along $\gamma$, it must be Markovian on the null plane, and this is a cutoff independent statement. This property then persists on the null cone for a conformally invariant functional (that is, a functional that is conformally invariant for any cutoff independent combination). We will next illustrate this with a model having extensive mutual information. We will also see this structure directly for the holographic entanglement entropy in Sec.~\ref{sec:holo}. \subsection{An example: extensive mutual information model}\label{subsec:EMI} A simple example is given by the EMI (extensive mutual information) model for the entropy~\cite{Casini:2008wt}.
For a spatial surface $A$ with complement $\bar{A}$ in a given Cauchy surface, this model gives the functional \begin{equation} S(A)=\int_A d\sigma_x\, \int_{\bar{A}} d\sigma_y \, \eta_x^\mu \,\eta_y^\nu \,(\partial_\mu \partial_\nu-g_{\mu\nu} \partial^2) \,|x-y|^{-(2d-4)}\,, \label{aa} \end{equation} where $\eta$ is the normalized vector orthogonal to the Cauchy surface. A small distance cutoff is assumed between $A$ and $\bar{A}$. The interest of this expression is that it gives a simple example of a conformally invariant, positive, and strongly subadditive functional on causal regions. It can also be thought of as the free energy in the presence of surface operators which are exponentials of free fields~\cite{Swingle:2010jz}. The integrand is a conserved current in both indices, which guarantees that $S$ is independent of the Cauchy surface. In fact, this expression is equivalent to one that depends only on the boundary of $A$, \begin{equation} S(A)=\int_{\partial A} d\sigma_x^{\alpha\beta}\, \int_{\partial A} d\sigma_y^{\alpha\beta}\, \frac{1}{|x-y|^{2(d-2)}}\,,\label{sis} \end{equation} where again a small cutoff is assumed at coincidence points. With a distance cutoff in (\ref{sis}), a quick look at the argument above confirms that $S$ is independent of the region on the null plane. Markovianity on the cone can be seen directly from (\ref{aa}), choosing the null cone as a Cauchy surface. Then the Markov combination (\ref{mcom}) reduces to the (finite) double integral of the integrand in (\ref{aa}) over non-overlapping regions $A\cap \bar{B}$ and $B\cap \bar{A}$ of the null cone. It is easy to check explicitly that the double integral over patches of the same null cone vanishes identically, while it is always positive for other null patches or spatial regions. This vanishing gives the Markovian property for this functional.
\section{Universal form of CFT entropies on the light-cone} \label{sec:qft} In this section we study the vacuum reduced density matrix for regions whose boundary lies on the light-cone. We will determine the universal form of the entanglement and Renyi entropies for general CFTs. The conformal transformation between the plane and the cone, working in the metric with signature $(-+...+)$, is given by \begin{equation}\label{maa} x^\mu=2\frac{X^\mu+(X\cdot X)C^\mu}{1+2(X\cdot C)+(X\cdot X)(C\cdot C)}-D^\mu\,,\ \ \ \ C^\mu\equiv(0,1/R,\vec{0})\,,\ \ \ \ D^\mu=(R,R,\vec{0}) . \end{equation} This maps the past light-cone of the origin $x^\mu=0$ into (part of) the null plane $X^-=X^1- X^0= 0$. The origin $X^\mu=0$ is mapped into the point $(-R,-R,\vec{0})$, and the surface $X^+=X^-=0$ is mapped to the sphere $x^0=-R$, $r=R$. The points on the null cone that lie on the null ray through $x^1=-x^0=R$ correspond to infinity in the coordinates $X$. We will then consider a surface\footnote{To simplify notation, the boundaries on the null plane and cone are denoted as $\gamma$.} \begin{equation} r^- = 2 \gamma(y) \end{equation} on the past light-cone $r^+=0$, with \begin{equation} r^\pm = r \pm x^0\,. \end{equation} This surface parametrizes the boundary of the Cauchy surface. The restriction of the Minkowski metric to $r^+=0, r^-=2 \gamma(y)$ gives a $(d-2)$-dimensional sphere with radius that depends on the angular position along the curve: \begin{equation}\label{eq:ds0} ds^2= 0\,dr^+ dr^- + \gamma(y)^2\,g_{ab}(y) dy^a dy^b\,.
\end{equation} Here \begin{equation}\label{eq:sphere-metric} g_{ab}(y) dy^a dy^b= \frac{4}{(1+y^2)^2}\,(dy^a)^2 \end{equation} describes a sphere $S^{d-2}$ of unit radius in conformally flat coordinates.\footnote{To see this, change variables to $y^a= \tan(\theta/2)\, \hat n^a$, with $\hat n^a$ unit vectors.} We argued in the previous section that the entropies for a Cauchy surface with boundary on the null plane and Lorentz invariant regularization are independent of the boundary shape. After a conformal transformation to the light-cone, this means that all the dependence on $\gamma$ has to arise from the short-distance cutoff $\epsilon$ on the light-cone. (We will see explicit examples of this in holographic theories in Sec.~\ref{sec:holo}). Up to an overall constant, this is local and extensive, and hence the entanglement and Renyi entropies should be given by local functionals of $\gamma/\epsilon$, its derivatives, and geometric quantities built from $g_{ab}$ \begin{equation}\label{eq:localS} S_n= \int d^{d-2}y\,\sqrt{g}\,L_n(\gamma/\epsilon, g_{ab}, \partial \ldots)+F_n\,. \end{equation} Equivalently, the Markov property on the null plane is regularization invariant and hence preserved by the conformal transformations for a CFT. The Markov property on the null cone implies that the entropy is a local functional plus possibly a constant $F_n$ independent of $\gamma$. Our goal is to determine the general form of $L_n$ allowed by Lorentz invariance. We will find that this is related to a dilaton effective action on $S^{d-2}$. Our analysis will reveal how the EE for spheres \begin{equation}\label{eq:EEsphere} S(\gamma)=\alpha_{d-2}\,\frac{\gamma^{d-2}}{\epsilon^{d-2}}+\alpha_{d-4}\, \frac{\gamma^{d-4}}{\epsilon^{d-4}}+\ldots+ \left\lbrace \begin{array}{ll} (-)^{\frac{d}{2}-1} 4\,A\, \log(\gamma/\epsilon)\,& d\; \textrm{even}\,.\\ (-)^{\frac{d-1}{2}} F\,& d\,\,\textrm{odd} \,. \end{array}\right. 
\end{equation} generalizes to an arbitrary boundary $\gamma(y)$ on the light-cone. The main results are given in (\ref{eq:Sodd}) and (\ref{eq:Seven}). The divergent terms are automatically Markovian, and we will find the form of the universal finite contributions. \subsection{Lorentz transformations on the light-cone}\label{subsec:lorentz} In order to impose Lorentz invariance, we need to determine how Lorentz transformations act on the subspace $r^+=0,\,r^-= 2\gamma(y)$. The pull-back metric is (\ref{eq:ds0}), which describes an $S^{d-2}$ with varying radius $\gamma(y)$. It is known that Lorentz transformations reduce to conformal transformations on $S^{d-2}$; this becomes clear in the embedding space formalism, where conformal transformations are represented as linear transformations on a null-cone of a projective space in two more dimensions. We will now review how this comes about; see e.g.~\cite{Weinberg:2010fx, Penedones:2016voo}. It is useful to parametrize the null cone $\mathcal C$ as \begin{equation} x^\mu(\lambda, y^a)= \lambda\, \omega(y)\,\hat x^{\mu}(y)\;,\;\hat x^\mu(y) = \left(\frac{1+ y^2}{2}, y^a,\frac{1- y^2}{2} \right)\,, \end{equation} where $\lambda \in \mathbb R$, $y^a \in \mathbb R^{d-2}$. The coordinate $\hat x^\mu$ gives the Poincar\'e section $\hat x^0+ \hat x^d=1$ of the null cone $\eta_{\mu\nu} \hat x^\mu \hat x^\nu=0$; $\lambda$ describes `radial' motion on the cone. See also~\cite{Kapec:2017gsg}. The conformal factor $\omega(y)$ can be arbitrary but here we will fix it to \begin{equation} \omega(y)=\frac{2}{1+y^2}\,. \end{equation} The pull-back of the Minkowski metric to $\mathcal C$ then reads \begin{equation}\label{eq:dsy} ds^2_{\mathcal C}= \lambda^2 \frac{4}{(1+y^2)^2} (dy^a)^2\,, \end{equation} which, recalling (\ref{eq:sphere-metric}), describes a sphere in conformally flat coordinates. 
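As a check of (\ref{eq:dsy}) (our own sketch, for $d=4$, i.e. two transverse coordinates), one can pull back the Minkowski metric through $x^\mu(\lambda, y)$ with sympy and verify that the $d\lambda$ components vanish, because $\hat x$ is null, and that the $y$-block is $\lambda^2\,\omega(y)^2\,\delta_{ab}$:

```python
# Pull back eta_{mu nu} dx^mu dx^nu through x^mu = lam * omega(y) * xhat^mu(y),
# with omega = 2/(1+y^2), and compare with lam^2 * 4/(1+y^2)^2 * (dy^a)^2.
import sympy as sp

lam, ya, yb = sp.symbols('lam ya yb', real=True)
y2 = ya**2 + yb**2
omega = 2/(1 + y2)
xhat = sp.Matrix([(1 + y2)/2, ya, yb, (1 - y2)/2])  # null: eta(xhat, xhat) = 0
eta = sp.diag(-1, 1, 1, 1)
x = lam*omega*xhat

coords = [lam, ya, yb]
J = x.jacobian(coords)                  # dx^mu / d(lam, y^a)
g_induced = sp.simplify(J.T*eta*J)      # induced metric on the cone

conf = lam**2*4/(1 + y2)**2
expected = sp.diag(0, conf, conf)       # no d(lam) components, round S^2 block
assert sp.simplify(g_induced - expected) == sp.zeros(3, 3)
```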
In particular, we are interested in a sphere of varying radius $\gamma(y)$, and this is obtained for \begin{equation}\label{eq:sphere-coords} \lambda = \gamma(y)\,. \end{equation} The main advantage of these coordinates is that there is a simple relation between Lorentz transformations on $x^\mu$ and conformal transformations on $(\lambda, y^a)$. In more detail, the Lorentz generators $J_{\mu\nu}$ induce $SO(d-2)$ rotations, translations, special conformal transformations and dilatations on $\mathcal C$: \begin{equation} J_{ab}\;,\;T_a= J_{0,a}- J_{d-1,a}\;,\;K_a=J_{0,a}+ J_{d-1,a}\;,\;D= J_{d-1,0}\,. \end{equation} In this way, the Lorentz algebra $SO(d-1,1)$ gives rise to the conformal algebra for euclidean $\mathbb R^{d-2}$. The coordinates transform as $(\lambda, y) \to (\lambda', y')$ with \begin{equation}\label{eq:conf} \frac{\partial y'^a}{\partial y^c}\frac{\partial y'^b}{\partial y^d} \delta_{ab} = e^{2A(y)}\delta_{cd}\;,\;\lambda'= e^{-A(y)} \lambda\,. \end{equation} Note that while the embedding space $\mathbb R^{d-1,1}$ for CFTs is usually just a formal device, in our setup it is the physical space where the QFT lives. \subsection{Entropies on the null cone}\label{subsec:univ} Our goal now is to determine the general form of (\ref{eq:localS}) consistent with Lorentz invariance. We can think of $S_n$ as an ``action'' for a euclidean theory that lives on $S^{d-2}$, with a scalar degree of freedom $\gamma(y)$. As reviewed in Sec.~\ref{subsec:lorentz}, Lorentz transformations act as conformal transformations on $S^{d-2}$, so we will keep the metric $g_{ab}$ explicit to account for conformal rescalings, which act as $g_{ab} \to e^{2A(y)} g_{ab}$. Furthermore, from (\ref{eq:conf}), $\phi(y)=\log (\gamma(y)/\epsilon)$ transforms additively as a dilaton field. In this way, the problem of finding the entropies $S_n$ is equivalent to that of constructing a conformally-invariant local action in $d-2$ dimensions with a dilaton field $\phi(y)=\log (\gamma(y)/\epsilon)$.
It is interesting to note that dilaton techniques have appeared in the recent proof of the $a$-theorem in~\cite{Komargodski:2011vj}; see also~\cite{Schwimmer:2010za, Komargodski:2011xv, Elvang:2012st, Luty:2012ww}. There, the dilaton is introduced by hand in order to match Weyl anomalies; in our context $\phi(y)$ is physical, as it arises from the varying radius of $S^{d-2}$ on the light-cone. These results on the dilaton effective action will be useful for our goal, especially the $d$-dimensional analysis in~\cite{Elvang:2012yc}.\footnote{Dilaton methods have also been used in EE calculations in~\cite{Banerjee:2011mg, Solodukhin:2013yha, Banerjee:2014daa, Herzog:2015ioa}.} \subsubsection{Odd $d$} Let us begin with the simpler case of odd space-time dimension $d$. The `action' functional for the entropy $S_n(\gamma)$ can be constructed simply as a derivative expansion in terms of local geometric invariants built from the metric \begin{equation}\label{eq:hatg} \hat g_{a b} \equiv \frac{\gamma(y)^2}{\epsilon^2}\,g_{ab}(y)\,, \end{equation} with $g_{ab}$ the metric of the unit radius $S^{d-2}$. Since this is the metric induced by the Minkowski metric on $\gamma$ it is clear that these geometric terms are Lorentz invariant. We note that the Riemann tensor can be written in terms of $\hat R_{ab}$ and $\hat R$ because $\hat g_{ab}$ is conformally flat (the Weyl tensor vanishes). In addition we could construct invariants using the extrinsic curvatures of $\gamma$. We show in Appendix \ref{Appendix:uno} that the extrinsic curvatures on the null cone give again combinations of the intrinsic metric and the Ricci tensors. Thus the most general effective action is constructed in terms of powers of $\hat g_{ab}$, the Ricci tensor, the Ricci scalar and covariant derivatives. 
The first few terms are \begin{equation}\label{eq:Snodd} S_n(\gamma) = \int d^{d_\perp} y\,\sqrt{\hat g}\left(\beta_0 + \beta_2 \hat R+ \beta_4 \hat R^2 + \beta_4'\, (\hat R_{ab})^2+ \ldots \right)+F_n\,, \end{equation} with $d_\perp \equiv d-2$. The constant coefficients $\beta_j$ depend on the specific theory and on $n$. In this expression, conformal invariance for the dilaton --namely Lorentz invariance for the $d$-dimensional QFT-- is manifest. To gain intuition, let us write explicitly the terms with zero and two derivatives: \bea \int d^{d_\perp}y\,\sqrt{\hat g} &=& \int d^{d_\perp}y\,\sqrt{g}\,\frac{\gamma(y)^{d_\perp}}{\epsilon^{d_\perp}} \,,\\ \int d^{d_\perp}y\,\sqrt{\hat g} \,\hat R &=& \int d^{d_\perp}y\,\sqrt{g}\,\frac{\gamma^{d_\perp-2}}{\epsilon^{d_\perp-2}} \left((d_\perp-1)(d_\perp-2) \left( \frac{\nabla \gamma}{\gamma}\right)^2+ d_\perp (d_\perp-1) \right)\,. \eea The first term is the familiar area term. Performing a field redefinition \begin{equation} \varphi(y) = 2 \sqrt{\frac{d_\perp-1}{d_\perp-2}}\,\left(\frac{\gamma(y)}{\epsilon} \right)^{(d_\perp-2)/2}\,, \end{equation} the second term becomes, for $d \ge 5$, the action for a conformally coupled scalar, \begin{equation}\label{eq:conformalkin} \int d^{d_\perp}y\,\sqrt{\hat g} \,\hat R=\int d^{d_\perp}y\,\sqrt{g}\, \left((\nabla \varphi)^2+ \xi R \varphi^2 \right)\,, \end{equation} where $\xi = \frac{d_\perp-2}{4(d_\perp-1)}$ and the Ricci scalar $R=d_\perp(d_\perp-1)$ for the unit-radius sphere.\footnote{On the other hand, this term vanishes for $d=2,3$ and is proportional to the volume of $S^{d-2}$ in $d=4$.} The area term proportional to $\gamma^{d_\perp}$ is then simply a conformal potential $V(\varphi) \sim \varphi^{2d_\perp/(d_\perp-2)}$. The next terms in the `effective action' for the entanglement entropy $S$ are higher derivative generalizations of this conformal Laplacian --we will return to this point below.
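The field redefinition leading to (\ref{eq:conformalkin}) is a purely algebraic identity, and can be checked with sympy (our own sketch, not part of the original text; gradients are treated algebraically by writing $(\nabla\gamma)^2 = G$, so that $(\nabla\varphi)^2 = (d\varphi/d\gamma)^2\, G$):

```python
# With phi = 2 sqrt((n-1)/(n-2)) (gamma/eps)^{(n-2)/2}, n = d_perp,
# check (grad phi)^2 + xi R phi^2
#   = (gamma/eps)^{n-2} [ (n-1)(n-2)(grad gamma/gamma)^2 + n(n-1) ],
# using xi = (n-2)/(4(n-1)) and R = n(n-1) on the unit sphere S^n.
import sympy as sp

n, G = sp.symbols('n G', positive=True)          # n = d_perp, G = (grad gamma)^2
gam, eps = sp.symbols('gamma epsilon', positive=True)

phi = 2*sp.sqrt((n - 1)/(n - 2))*(gam/eps)**((n - 2)/2)
xi = (n - 2)/(4*(n - 1))
R = n*(n - 1)

lhs = sp.diff(phi, gam)**2*G + xi*R*phi**2
rhs = (gam/eps)**(n - 2)*((n - 1)*(n - 2)*G/gam**2 + n*(n - 1))
assert sp.simplify(lhs - rhs) == 0
```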
Note that the overall constant $F_n$ is trivially consistent with the Markov property (\ref{eq:Renyi-Markov}). However, it is not possible to write it as a local geometric invariant. In this sense it is analogous to the anomaly contributions for even $d$ to be discussed below. For entanglement over spheres, this is the familiar constant term $F$ that measures the free energy of the theory over the euclidean sphere. Putting these results together, and replacing $d_\perp \to d-2$, the universal form of the EE for regions with boundary on the null cone and in odd space-time dimensions becomes \bea\label{eq:Sodd} S_n(\gamma)&=& \int\,d^{d-2}y\,\sqrt{g}\, \Bigg \lbrace \beta_0 \frac{\gamma(y)^{d-2}}{\epsilon^{d-2}} + \beta_2\frac{\gamma^{d-4}}{\epsilon^{d-4}} \left((d-2) (d-3)+(d-3)(d-4) \left( \frac{\nabla \gamma}{\gamma}\right)^2 \right) \nonumber \\ &+& \ldots \Bigg \rbrace+ F_n\,. \eea Let us compare this with the EE for a CFT on a sphere, Eq.~(\ref{eq:EEsphere}). We recognize in (\ref{eq:Sodd}) the area term and all the subleading contributions, generalized to an arbitrary varying curve $\gamma(y)$. Some of the $\beta_k$ are fixed in terms of the entropy of the sphere. For instance, $\beta_0$ and $\beta_2$ are fixed, up to numerical factors involving the volume of the unit $S^{d-2}$, by $\alpha_{d-2}$ and $\alpha_{d-4}$. This means that the coefficient of $(\nabla \log \gamma)^2$ in the first subleading term $(\gamma/\epsilon)^{d-4}$ is uniquely fixed by the corresponding term in the sphere EE. This is a consequence of Lorentz invariance. At higher orders, there are more geometric invariants allowed, such as the terms with $\beta_4, \beta_4'$ in (\ref{eq:Snodd}). In this case, the sphere coefficient $\alpha_{d-2k}$ fixes only an overall combination of the $\beta_i$, and the entropy for the boundary $\gamma(y)$ contains more information about the specific theory. The term of order $\gamma^{d-2 - 2k}$ is essentially a higher-derivative version of the conformal Laplacian on the sphere containing $2k$ derivatives.
We will discuss below a compact expression for such operators. \subsubsection{Even $d$} For $d$ even this is not the full story: there must be an additional contribution that comes from the Euler $a$-anomaly. Indeed, recall that for a sphere of constant radius $\gamma$ at fixed time, we should recover the universal logarithmic contribution \begin{equation}\label{eq:Sanom} S_\text{anom} = (-1)^{d/2-1} 4 A \,\log \frac{\gamma}{\epsilon}\,. \end{equation} We want to find a Lorentz invariant local functional that reduces to (\ref{eq:Sanom}) for constant $\gamma(y)$. At first, this appears to be challenging in our approach because, as we saw in (\ref{eq:Snodd}), there are no local invariants we can form with geometric quantities from $\hat g_{ab}$ that give rise to such a term. We propose that the generalization of (\ref{eq:Sanom}) to arbitrary $\gamma(y)$ is a Wess-Zumino term for the Weyl anomaly on $S^{d-2}$. To explain how this comes about, let us first review the simplest case of the Weyl anomaly in 2d CFTs. The stress-tensor on a manifold with metric $g_{ab}$ has a trace-anomaly \begin{equation} \langle T^a_a \rangle =\frac{c}{24\pi} R \end{equation} where $R$ is the scalar curvature of $g_{ab}$. This implies that, under a Weyl rescaling $\delta g_{ab} =2 \delta \sigma g_{ab}$, the effective action $W = - \log Z$ changes as \begin{equation}\label{eq:deltaW} \frac{\delta W}{\delta \sigma} = - \frac{c}{24\pi} R\,. \end{equation} A local functional whose variation gives (\ref{eq:deltaW}) can be obtained by introducing a dilaton field $\tau$, which transforms as $\tau \to \tau+ \sigma(y)$ under $g_{ab} \to e^{2 \sigma(y)} g_{ab}$. The result is the Wess-Zumino action~\cite{Wess:1971yu} \begin{equation}\label{eq:SWZ} S_\text{WZ}= \frac{c}{24\pi} \int d^2 y\,\sqrt{g}\left( \tau R -(\nabla \tau)^2\right)\,. \end{equation} Here the dilaton derivative term cancels the Weyl transformation of the Ricci scalar, $R[e^{2\sigma} g]=e^{-2\sigma}(R[g]-2 \nabla^2 \sigma)$. 
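The quoted Weyl transformation can be verified from scratch with sympy (our own sketch, not part of the original text; we take a flat reference metric, so that $R[g]=0$ and $\nabla^2$ reduces to the flat Laplacian up to the conformal factor, and a concrete test profile $\sigma$):

```python
# Check R[e^{2 sigma} delta] = -2 e^{-2 sigma} Lap_flat(sigma) in 2d,
# computing the Ricci scalar from scratch via Christoffel symbols.
import sympy as sp

x, y = sp.symbols('x y', real=True)
coords = [x, y]
sigma = x**2*y                     # arbitrary (non-harmonic) test profile

g = sp.exp(2*sigma)*sp.eye(2)
ginv = g.inv()

def christoffel(g, ginv, coords):
    n = len(coords)
    return [[[sum(ginv[l, m]*(sp.diff(g[m, i], coords[j])
                              + sp.diff(g[m, j], coords[i])
                              - sp.diff(g[i, j], coords[m]))/2
                  for m in range(n))
              for j in range(n)] for i in range(n)] for l in range(n)]

Gam = christoffel(g, ginv, coords)

def ricci_scalar(ginv, Gam, coords):
    n = len(coords)
    ric = sp.zeros(n, n)
    for i in range(n):
        for j in range(n):
            ric[i, j] = sum(sp.diff(Gam[l][i][j], coords[l])
                            - sp.diff(Gam[l][i][l], coords[j])
                            + sum(Gam[l][l][m]*Gam[m][i][j]
                                  - Gam[l][j][m]*Gam[m][i][l]
                                  for m in range(n))
                            for l in range(n))
    return sp.simplify(sum(ginv[i, j]*ric[i, j]
                           for i in range(n) for j in range(n)))

R = ricci_scalar(ginv, Gam, coords)
flat_lap = sp.diff(sigma, x, 2) + sp.diff(sigma, y, 2)
assert sp.simplify(R + 2*sp.exp(-2*sigma)*flat_lap) == 0
```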
We note that, while this is a local functional of $g_{ab}$ and $\tau$, it is not a local functional constructed from the Weyl-invariant metric $\hat g_{ab}= e^{-2 \tau} g_{ab}$. Let us return now to the EE calculation for $d=4$.\footnote{We thank J. Maldacena for suggesting that the $d=4$ result can be mapped to a Liouville action.} We seek a local Lorentz-invariant functional that reduces to (\ref{eq:Sanom}) for constant $\gamma$. We found that Lorentz transformations act as conformal transformations on the $S^2$ null-cone sphere, and that $\log (\gamma/\epsilon)$ transforms as a dilaton field. We then recognize (\ref{eq:Sanom}) as the first term of the WZ action (\ref{eq:SWZ}) evaluated on $S^2$. In order to preserve Lorentz invariance, we expect that the contribution to the EE for a curve $\gamma(y)$ should then generalize to \begin{equation}\label{eq:SWZ4} S_{WZ}=- \frac{A}{2\pi}\,\int d^2y \,\sqrt{g}\,\left(R\,\log\frac{\gamma(y)}{\epsilon}+ \left(\frac{\nabla \gamma}{\gamma}\right)^2 \right)\,, \end{equation} with the overall normalization fixed by (\ref{eq:Sanom}) and the Euler characteristic $\frac{1}{4\pi}\int d^2 y \sqrt{g} R = 2$. Note that the coefficient of $\log(\epsilon)$ is topological and hence is the same for all $\gamma$. In particular, this means there is no type $B$ anomaly contribution to this logarithmic coefficient. This can be seen as a consequence of the particular geometry of the cone in Solodukhin's formula~\cite{Solodukhin:2008dh} for the coefficient of $\log(\epsilon)$ in generic regions in $d=4$. See Appendix \ref{Appendix:uno}. This is a local functional and hence satisfies the Markov property. But, as in the discussion of the Weyl anomaly, it is not a local functional of the metric $\hat g_{ab}= \frac{\gamma(y)^2}{\epsilon^2} g_{ab}$ introduced in (\ref{eq:hatg}).
It is Lorentz invariant, as can be seen by writing it as a \textit{bilocal} functional~\cite{Deser:1993yx, Polchinski:1998rq} \begin{equation}\label{eq:Polch} S_{WZ}\propto \int d^2 y \sqrt{\hat g}\,\int d^2 y' \sqrt{\hat g}\,\hat R(y) \hat{G}(y, y') \hat R(y')\,, \end{equation} with $\nabla_y^2 \hat{G}(y,y') = \frac{1}{\sqrt{\hat g}}\,\delta^2(y,y')$ the Green's function for $\hat g_{ab}$, and $\hat R$ its curvature scalar. Using \begin{equation} \sqrt{\hat g}\, \hat R = \sqrt{g} \left(R- 2 \nabla^2\log \frac{\gamma}{\epsilon} \right) \end{equation} and integrating by parts, (\ref{eq:Polch}) reduces to (\ref{eq:SWZ4}), up to a term quadratic in $R$ that is independent of $\gamma$. This discussion extends to arbitrary dimensions $d_\perp$, where the Weyl anomaly is proportional to the Euler density $E_{d_\perp}$ (plus conformally invariant terms that vanish in our case). The Wess-Zumino action can be computed systematically by integrating the Euler density~\cite{Wess:1971yu, Schwimmer:2010za}, \begin{equation}\label{eq:SWZd} S_{WZ}=(-1)^{d_\perp/2} \frac{4 A}{\chi_{d_\perp}}\,\int d^{d_\perp}y\,\sqrt{g}\,\int_0^1\,dt\,\log\frac{\gamma(y)}{\epsilon}\,E_{d_\perp}\left(\left(\frac{\gamma(y)}{\epsilon}\right)^{2t} g_{ab}\right)\,, \end{equation} and $\chi_{d_\perp}= \int d^{d_\perp}y\,\sqrt{g}\,E_{d_\perp}(g)$ is proportional to the Euler character of the sphere. The contribution from $t=0$ reproduces (\ref{eq:Sanom}), and this is how the overall normalization is fixed. The full integral gives a conformally invariant action with derivatives of the schematic form $\int_y \log \frac{\gamma}{\epsilon} (\nabla^2)^{d_\perp/2} \log \frac{\gamma}{\epsilon}$. Explicit expressions in various even dimensions may be found in~\cite{Komargodski:2011vj, Komargodski:2011xv, Elvang:2012st, Elvang:2012yc, Herzog:2015ioa}. 
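As a small algebraic check (ours, not part of the original text) of the maximally symmetric values used in the $d=6$ evaluation below: for the unit $S^4$ one has $R=12$, $R_{ab}R^{ab}=36$ and $E_4=24$, working in an orthonormal frame where $g_{ab}=\delta_{ab}$ and the Riemann tensor is built from the metric alone:

```python
# For a maximally symmetric space: R_{abcd} = (R/(n(n-1)))(g_{ac}g_{bd} - g_{ad}g_{bc}).
# Unit S^4: n = 4, R = n(n-1) = 12. Check Riem^2, Ric^2 and E_4 by brute-force sums.
import itertools
import sympy as sp

n = 4
R = n*(n - 1)                 # Ricci scalar of the unit sphere S^n
g = sp.eye(n)                 # orthonormal frame

def riem(a, b, c, d):
    return sp.Rational(R, n*(n - 1))*(g[a, c]*g[b, d] - g[a, d]*g[b, c])

riem2 = sum(riem(a, b, c, d)**2
            for a, b, c, d in itertools.product(range(n), repeat=4))
ric = [[sum(riem(l, a, l, b) for l in range(n)) for b in range(n)]
       for a in range(n)]
ric2 = sum(ric[a][b]**2 for a in range(n) for b in range(n))

E4 = riem2 - 4*ric2 + R**2    # Euler density E_4 = Riem^2 - 4 Ric^2 + R^2
assert (riem2, ric2, E4) == (24, 36, 24)
```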
In summary, the entanglement and Renyi entropies for an arbitrary curve $\gamma(y)$ in a CFT in even $d$ dimensions are given by \bea\label{eq:Seven} S_n(\gamma)&=& \int\,d^{d-2}y\,\sqrt{g}\, \Bigg \lbrace \beta_0 \frac{\gamma(y)^{d-2}}{\epsilon^{d-2}} + \beta_2\frac{\gamma^{d-4}}{\epsilon^{d-4}} \left((d-2) (d-3)+(d-3)(d-4) \left( \frac{\nabla \gamma}{\gamma}\right)^2 \right) \nonumber \\ &+& \ldots + (-1)^{d/2-1} \frac{4 A_n}{\chi_{d-2}}\,\int_0^1\,dt\,\log\frac{\gamma(y)}{\epsilon}\,E_{d-2}\left(\left(\frac{\gamma(y)}{\epsilon}\right)^{2t} g_{ab}\right) \Bigg \rbrace\nonumber \\ &+& F_n\,. \eea The last term is the WZ action on $S^{d-2}$ with a dilaton $\log (\gamma/\epsilon)$, and it generalizes the universal logarithmic term of the EE on a sphere. For the entanglement entropy ($n=1$), $A_n = A$ is just the Euler anomaly. For comparison with holographic results below, let us give some explicit examples. For $d=4$, using the curvature of $S^2$, $R=2$, we get, from (\ref{eq:SWZ4}), \begin{equation}\label{eq:SWZ4new} S_{WZ}=- \frac{A}{2\pi}\,\int d^2\Omega\,\left(2\log\frac{\gamma(y)}{\epsilon}+ \left(\frac{\nabla \gamma}{\gamma}\right)^2 \right)\,. \end{equation} Next, for $d=6$, we use that the WZ action (\ref{eq:SWZd}) becomes~\cite{Komargodski:2011vj} \begin{equation} S_{WZ}=\frac{4 A}{\chi_{4}}\,\int d^{4}y\,\sqrt{g}\, \left(\phi E_4- 4(R_{ab}-\frac{1}{2} g_{ab} R)\partial^a \phi\, \partial^b \phi-4(\nabla \phi)^2 \nabla^2 \phi - 2 (\nabla \phi)^4\right)\,, \end{equation} where $\phi = \log (\gamma/\epsilon)$. Performing the calculation for a sphere yields\footnote{Recall that for a maximally symmetric space in $n$ dimensions, $R_{\mu\nu}= \frac{R}{n} g_{\mu\nu}$, and $R_{\mu \nu\rho \sigma}= \frac{R}{n(n-1)}(g_{\mu\rho} g_{\nu \sigma}-g_{\mu \sigma} g_{\nu \rho})$~\cite{Weinberg:1972kfs}. Furthermore, for a unit-radius sphere $S^n$, $R= n(n-1)$.
The Euler density in four dimensions reads $E_4= R_{\alpha \beta \mu\nu} R^{\alpha \beta \mu\nu}-4 R_{\mu\nu} R^{\mu\nu}+ R^2$, and for the unit $S^4$, $E_4=24$. } \begin{equation}\label{eq:SWZ6} S_{WZ}=\frac{3}{2\pi^2} A\,\int d^4 \Omega\,\left \lbrace \log \frac{\gamma}{\epsilon}+ \frac{1}{2}\left(\frac{\nabla \gamma}{\gamma} \right)^2 +\frac{1}{6}\left(\frac{\nabla \gamma}{\gamma} \right)^2 \left(\left(\frac{\nabla \gamma}{\gamma} \right)^2- \frac{\nabla^2 \gamma}{\gamma}\right)- \frac{1}{12}\left(\frac{\nabla \gamma}{\gamma} \right)^4 \right \rbrace\,. \end{equation} \subsection{An alternative approach}\label{subsec:alternative} We now present an alternative construction of the effective action. This approach is somewhat simpler, and makes it clear how Lorentz invariance of the $d$-dimensional theory is used. First, we write the metric on the $S^{d-2}$ of varying radius as a dilaton factor times the flat-space metric, \begin{equation} \frac{\gamma(y)^2}{\epsilon^2} d\Omega_{d-2}^2= e^{-2 \tau(y)} \,\delta_{ab} dy^a dy^b\;,\;e^{- \tau(y)} \equiv \frac{\gamma(y)}{\epsilon}\,\frac{2}{1+(y^a)^2}\,. \end{equation} See discussion around (\ref{eq:sphere-coords}). We then require a local effective action, invariant under rotations and translations on $\mathbb R^{d-2}$, and under scale transformations $y \to e^{\sigma} y, \tau \to \tau+ \sigma$. Following the construction of the dilaton effective action in~\cite{Elvang:2012yc}, this can be organized in terms of differential operators \begin{equation}\label{eq:Wk} W_k = \left(\frac{2}{d_\perp-2k} \right)^2\,e^{- \frac{d_\perp-2k}{2} \tau} (\nabla^2)^k e^{- \frac{d_\perp-2k}{2} \tau}\,, \end{equation} which contain $2k$ derivatives and transform covariantly under scale transformations, \begin{equation} W_k \to e^{-d_\perp \sigma} W_k\,.
\end{equation} Hence, the basic scale-invariant objects are $d^{d_\perp} y \,W_k$ and $e^{d_\perp \tau} W_r$, and the most general local effective action is \begin{equation}\label{eq:SeffW} S_\gamma =\sum_{k, \bar{r}, \bar{n}} \int d^{d_\perp}y\,\alpha_{k \bar{r}}^{\bar{n}} \,W_k \,\prod_i \, (e^{d_\perp \tau} W_{r_i})^{n_i}\,, \end{equation} with $\alpha_{k \bar r }^{\bar n}$ some arbitrary coefficients. The term proportional to $\alpha_{k \bar r}^{\bar n}$ contains $2k + 2 \sum_i n_i r_i$ derivatives.\footnote{We are including here all the terms allowed by scale invariance, while formula (2.43) in~\cite{Elvang:2012yc} contains only a subset of these terms. This is presumably because the effective action in that reference is evaluated on-shell for the dilaton, something which does not make sense in our context.} An explicit evaluation of the first few contributions in (\ref{eq:SeffW}) recovers the terms analyzed in Sec.~\ref{subsec:univ}. This approach has the advantage of unifying odd and even $d$; in particular, the Wess-Zumino term arises from the limit $k \to d_\perp/2$, \begin{equation} \int\,d^{d_\perp} y \,W_{k=d_\perp/2} =\int\,d^{d_\perp} y \, \tau\,(\nabla^2)^{d_\perp/2} \tau\,. \end{equation} This is the reason for the normalization in (\ref{eq:Wk}). For instance, after integration by parts, \begin{equation} \int d^2y\, \tau \,\nabla^2 \tau = \text{const}- \int d^2\Omega\,\left(2\log \frac{\gamma}{\epsilon}+\left(\frac{\nabla \gamma}{\gamma} \right)^2 \right)\,, \end{equation} which agrees with (\ref{eq:SWZ4}). \section{Holographic analysis} \label{sec:holo} In this section we analyze the entanglement entropy for regions with arbitrary boundaries on the null plane and, for CFTs, with arbitrary boundaries on the null cone, in theories with holographic duals. Via the HRT formula~\cite{Ryu:2006bv, Hubeny:2007xt}, this translates into finding extremal surfaces anchored at boundary curves $\gamma(y)$ in the null surfaces in asymptotically AdS space. 
This geometric problem turns out to have many special and interesting features, which are not present in the case of generic space-like boundary curves. In particular, we will find that the extremal surface is determined by a \textit{linear} second order differential equation. We will check that the Markov property holds, and recover the general expressions of the previous section for EE in a null cone for CFTs. We will also show that these results hold when adding corrections for finite $N$ or finite 't Hooft coupling $\lambda$. \subsection{Regions with boundary on a null plane}\label{subsec:plane} The metric for an asymptotically AdS space with Lorentz symmetry corresponding to the vacuum state in a holographic theory is \begin{equation}\label{eq:bulk-metric} ds^2=\frac{L^2}{z^2}\left( f^2(z) dz^2 + dx^+ dx^- +d\vec{y}^2 \right)\,, \end{equation} with $x^\pm=x^1\pm x^0$, $\vec{y}=(x^2,\ldots,x^{d-1})$, and $\lim_{z\rightarrow 0} f(z)=1$. Here $z\in (0,\infty)$ and $y^i\in (-\infty,\infty)$. We want to find an extremal surface in the bulk with boundary on a $(d-2)$-dimensional surface at the AdS boundary, given by \begin{equation} x^-=0\;,\;x^+ =\gamma(\vec{y})\,. \end{equation} The minimal surface has $d-1$ dimensions and we parametrize it with the coordinates $\alpha^i\equiv( z, \vec{y})$. The induced metric on this surface is \begin{equation} h_{ij}=g_{\mu\nu}\frac{\partial x^\mu}{\partial \alpha^i}\frac{\partial x^\nu}{\partial \alpha^j} =\frac{L^2}{z^2} \left(\delta_i^1 \delta_j^1 (f^2(z)-1)+\delta_{ij}+\frac{1}{2} \left(\frac{\partial x^+}{\partial \alpha^i}\frac{\partial x^-}{\partial \alpha^j}+\frac{\partial x^-}{\partial \alpha^i}\frac{\partial x^+}{\partial \alpha^j}\right)\right)\,. \end{equation} We have to minimize the area \begin{equation}\label{eq:Aplane} \mathcal A=\int dz\, d^{d-2}y\, \sqrt{h}\,. \end{equation} We have two equations of motion, one for $x^+$ and one for $x^-$, and the Lagrangian depends only on the derivatives of these fields.
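This structure can be checked with a short symbolic computation (our own sketch, fixing $d=4$ and $L=1$): expanding $\sqrt{h}$ to first order in $p_i=\partial_i x^-$ around $x^-=0$ and assembling the Euler-Lagrange expression obtained by varying $x^-$ reproduces the linear equation given below.

```python
# Sketch (fixing d = 4, L = 1): expand sqrt(h) to first order in p_i = d_i x^-
# around x^- = 0 and assemble the Euler-Lagrange expression from varying x^-.
import sympy as sp

z, y1, y2 = sp.symbols('z y1 y2', positive=True)
f = sp.Function('f', positive=True)(z)         # warp factor f(z)
xp = sp.Function('xp')(z, y1, y2)              # the embedding x^+(z, y)
coords = (z, y1, y2)
p = sp.symbols('p0 p1 p2')                     # stand-ins for d_i x^-
q = [sp.diff(xp, c) for c in coords]           # d_i x^+

D = sp.diag(f**2, 1, 1)
h = D + sp.Matrix(3, 3, lambda i, j: (q[i]*p[j] + p[i]*q[j])/2)
lag = sp.sqrt(h.det())/z**3                    # overall L^2/z^2 factor -> z^{-3}

# coefficient of p_i in the Lagrangian, evaluated at p = 0
lin = [sp.diff(lag, pi).subs([(pj, 0) for pj in p]) for pi in p]
eom = sum(sp.diff(lin[i], coords[i]) for i in range(3))

target = (sp.diff(xp, y1, 2) + sp.diff(xp, y2, 2)
          + (sp.diff(xp, z, 2) - (sp.diff(f, z)/f + 3/z)*sp.diff(xp, z))/f**2)
print(sp.simplify(2*z**3/f*eom - target))      # -> 0
```

The vanishing difference confirms that the variation of $x^-$ yields a linear equation for $x^+$, as discussed next.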
The equation of motion for $x^+$ contains only terms proportional to derivatives of $x^-$, and hence can be solved taking \begin{equation} x^-=0\,, \end{equation} consistently with the boundary condition. This simplifies the equation of motion coming from the variation of $x^-$, since we only need to keep the terms linear in $\partial_i x^-$ in (\ref{eq:Aplane}). The result is \begin{equation} \nabla_y^2 \, x^+ + \frac{1}{f^2} \left(\frac{\partial^2 x^+}{\partial z^2}-\left(\frac{f'}{f}+\frac{d-1}{z}\right) \frac{\partial x^+}{\partial z} \right)=0\,.\label{sss} \end{equation} This equation determines the minimal surface. Surprisingly, it is a linear equation for the shape $x^+$. A reason for this is that if $x^+$ is a solution, the rescaled $\lambda x^+$ must also be a solution, since it is obtained from the original one by a boost. It is the same as the equation for a massless scalar in the bulk metric (\ref{eq:bulk-metric}). Since we have obtained a minimal surface that lies completely on the $x^-=0$ plane in the bulk, the area on this surface has to be computed with the induced metric \begin{equation} ds^2|_{\cal M}=\frac{L^2}{z^2}\left( f^2(z) dz^2 +d\vec{y}^2 \right)\,, \end{equation} which is completely independent of the shape of $x^+(z,\vec{y})$. Hence, once we fix a cutoff $z=\epsilon$ and integrate the volume of this $z,\vec{y}$ plane for all $\vec{y}$ and $z>\epsilon$, the area is independent of $\gamma(\vec{y})$. This works for general $f(z)$, i.e., it captures fixed points ($f=1$) and also holographic RG flows. This verifies our arguments in Sec.~\ref{sec:markov}, and leads to the Markov property of the vacuum state in holographic theories. In fact, the area is the same for any surface on the $x^-=0$ plane, but only the solution of (\ref{sss}) is extremal. For pure AdS, we can give an explicit solution for the extremal surface.
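The explicit pure-AdS solution given below can be cross-checked numerically; the following sketch (our own, fixing $d=4$ and a single Fourier mode of momentum $k$) verifies that the Bessel-$K$ profile solves the Fourier transform of the $f=1$ equation and is normalized to approach $1$ at the boundary.

```python
# Check (assuming d = 4, mode momentum k): the profile
#   g(z) = 2^{1-d/2}/Gamma(d/2) * (k z)^{d/2} * K_{d/2}(k z)
# solves g'' - (d-1)/z g' - k^2 g = 0 (the Fourier transform of the f = 1
# equation, with nabla_y^2 -> -k^2) and satisfies g(0+) = 1.
import mpmath as mp
mp.mp.dps = 30

d, k = 4, mp.mpf(2)

def g(z):
    return 2**(1 - d/2)/mp.gamma(d/2) * (k*z)**(d/2) * mp.besselk(d/2, k*z)

z0 = mp.mpf('0.7')
residual = mp.diff(g, z0, 2) - (d - 1)/z0*mp.diff(g, z0) - k**2*g(z0)
print(abs(residual) < mp.mpf('1e-15'))               # ODE satisfied
print(abs(g(mp.mpf('1e-9')) - 1) < mp.mpf('1e-10'))  # boundary normalization
```

Regularity at large $z$ is automatic for $K_{d/2}$, which decays exponentially; this is the choice of solution made below.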
When $f=1$, (\ref{sss}) reduces to \begin{equation} \left(\nabla_y^2 + \partial_z^2-\frac{d-1}{z} \partial_z\right)x^+=0\,.\label{sss1} \end{equation} By Fourier transforming in $\vec{y}$ and choosing the solution regular at infinity, we get the complete solution for the problem \bea\label{con} x^+(z,y)&=&\frac{2^{1-d/2}}{\Gamma[d/2]}\int d^{d-2}k\,\, a_{\vec{k}}\,\, e^{i \vec{k}\cdot \vec{y}}\, \,(|\vec{k}| z)^{d/2}\,\,K_{d/2}(|\vec{k}|z)\,, \nonumber\\ a_{\vec{k}}&=&\int \frac{d^{d-2}y}{(2\pi)^{d-2}}\,\, e^{-i \vec{k}\cdot \vec{y}}\, \, \gamma(\vec{y})\,. \eea See also~\cite{Neuenfeld:2018dim}. Eq.~(\ref{sss1}) was also derived in a different context in~\cite{Faulkner:2016mzt}. \subsection{Regions with boundary on a null cone}\label{subsec:cone} Next, we consider the entropy of CFTs for regions with boundary on the null cone. One idea would be to obtain the extremal surface and areas by mapping the null plane to the null cone, and then compute the entropy using the metric and a cutoff of fixed $z$ on the cone. We will more simply redo the calculation on the cone directly. We focus here on smooth curves $\gamma(\Omega)$, and later in Sec.~\ref{subsec:cusps} comment on the effects of cusps. For pure AdS there is a conformal transformation from the null plane to the null cone at the boundary that extends as an isometry in the bulk, preserving minimal surfaces and their areas. Hence, the only differences in the computation of the areas between the planar case and the cone can come from the position of the cutoff. The isometry of AdS corresponding to (\ref{maa}) is given by extending this conformal transformation to one in a Minkowski space with one more spatial coordinate, $z$ and $Z$ respectively. These are just the two bulk holographic coordinates. We have exactly the same formula (\ref{maa}) but where the vectors now have $d+1$ coordinates, and $x^{d+1}=z$, $X^{d+1}=Z$. The AdS metric is invariant under this transformation.
The surface $X^0=0$, $X^1=0$, which corresponds to the minimal surface of Rindler space, is mapped to the spherical cap \begin{equation} |\vec{x}|^2=r^2+z^2=R^2\,,\quad t=-R\,, \end{equation} which is the minimal surface corresponding to the sphere. The surface $t+|\vec{x}|=0$, which is the past light-cone in the bulk of the upper tip of the cone, is mapped into the plane $X^-=0$. Then, the minimal surfaces we are interested in will lie on this null cone in the bulk. To follow the geometric ideas for the Markov property on the original AdS space, we will use the following coordinates \begin{equation} \tilde{r}=|\vec{x}|=\sqrt{r^2+z^2}\,, \hspace{1cm} \tilde{r}^\pm=\tilde{r}\pm t, \hspace{1cm}\tilde{\Omega}\,, \end{equation} where $\tilde{\Omega}$ are angular coordinates on the half-sphere $t=\textrm{const}$, $\tilde{r}=\textrm{const}$. For the surface $\tilde{r}^+=0$, each constant-$\tilde{\Omega}$ curve describes a null line in the bulk having the origin as the future end-point. We will write \begin{equation} z= \tilde{r} \sin(\theta)\,,\, \theta \in (0,\pi/2)\,, \end{equation} with $\theta=\pi/2$ corresponding to the point of the sphere farthest from the AdS boundary, and $\theta=0$ to the boundary. The AdS metric reads \begin{equation} ds^2= L^2\frac{d\tilde{r}^+ d\tilde{r}^- +\tilde{r}^2 d\tilde{\Omega}^2}{\tilde{r}^2 \sin^2 \theta}\,, \end{equation} where \begin{equation} d\tilde{\Omega}^2= d\theta^2+ \cos^2 \theta \;d\Omega_{d-2}^2\,, \end{equation} and $\Omega$ are angular coordinates on a $d-2$ dimensional sphere describing usual polar coordinates in the boundary of AdS. On the surface $\tilde{r}^+=0$, the induced metric \begin{equation} ds^2=L^2\, \frac{d\tilde{\Omega}^2}{\sin^2 \theta}=L^2\,\frac{d\theta^2+ \cos^2 \theta\; d\Omega_{d-2}^2}{\sin^2{\theta}}\,, \end{equation} is independent of the remaining coordinate $\tilde{r}^-=2 \tilde{r}=-2 t$.
This shows that, if we naively forget about the cutoff, all possible minimal surfaces have the same induced metric and (divergent) area. If we impose a cutoff at small $\theta$, independently of $\Omega$, we again get the same result for all minimal surfaces, reproducing the previous result for the plane. However, we want to impose a covariant cutoff at fixed $z$ instead. All the dependence on the shape of $\gamma$ will come from this cutoff. \subsubsection{Extremal surface and covariant cutoff} Let us compute the equations for the minimal surface, and check that it lies on $\tilde{r}^+=0$. Writing the $d-1$ coordinates for the sphere described by $\tilde{\Omega}$ as $\alpha^i$ and the sphere metric as $\tilde{g}_{ij}$, we have to extremize the action \bea \mathcal A&=&\int d^{d-1}\alpha\,\frac{\det^{1/2}(\tilde{g})}{\sin^{d-1}(\theta)} \,\, \det (\delta^j_l+\tilde{g}^{jk}\partial_k \tilde{r}^+ \partial_l \tilde{r}^-/\tilde{r}^2)^{1/2}\nonumber\\ &=&\int d^{d-2}\Omega\, d\theta\,\frac{(\cos\theta)^{d-2}}{(\sin \theta)^{d-1}}\det (\delta^j_l+\tilde{g}^{jk}\partial_k \tilde{r}^+ \partial_l \tilde{r}^-/\tilde{r}^2)^{1/2}\,, \eea with respect to variations of $\tilde{r}^\pm(\tilde{\Omega})$. The equation of motion for $\t r^-$ is satisfied, along with the boundary conditions, by setting $\tilde{r}^+=0$. The equation of motion of $\tilde{r}^+$ gives \begin{equation}\label{eq:EOMr} \left(\frac{\partial^2}{\partial \theta^2}-\left((d-2) \tan \theta+(d-1) \cot \theta \right)\frac{\partial}{\partial \theta}+\frac{1}{\cos^2\theta}\nabla^2_\Omega \right)(\tilde{r}^-)^{-1}=0\,. \end{equation} The same equation holds for $\tilde{r}$ since it is just $\tilde{r}^-/2$. Notice that the equation for $(\tilde{r}^-)^{-1}$ is linear, as was the case for $x^+$ for boundaries on the null plane. This is because these two variables are linearly related by the conformal transformation that carries the null plane into the null cone.
The boundary curve is now of the form $r=\gamma(\Omega)$, where $r=\sqrt{(x^1)^2+\ldots+(x^{d-1})^2}$. The minimal surface takes the form $\tilde{r}^+=0$, $\tilde{r}(\theta,\Omega)$, with $\tilde{r}(0,\Omega)=r(\Omega)=\gamma(\Omega)$. It lies on the bulk light-cone, as illustrated in Fig.~\ref{fig:bulknull}. \begin{figure}[h] \begin{center} \includegraphics[width=0.4\textwidth]{bulknullcone.jpg} \caption{The extremal HRT surface anchored to the locus $r=\gamma(\Omega)$ on a boundary null-cone lies on a bulk null-cone.} \label{fig:bulknull} \end{center} \end{figure} The solution to (\ref{eq:EOMr}) that is regular in the interior $\theta \to \pi/2$ is\footnote{This solution was also obtained in~\cite{Neuenfeld:2018dim}.} \begin{equation}\label{eq:solrt} (\t r(\theta, \Omega))^{-1} = \sum_{n=0}^\infty \sum_I\,\frac{\sqrt{\pi} \Gamma(d-1+n)}{2^{d+n-2}\Gamma(\frac{d}{2})\Gamma(\frac{d-1+2n}{2})}\,a_{nI}\,Y_n^I(\Omega)\,(\cos \theta)^n\,{}_2F_1(\frac{n-1}{2}, \frac{n}{2}, \frac{d-1}{2}+n,\cos^2\theta)\,, \end{equation} where $Y_n^I(\Omega)$ are the orthonormal spherical harmonics of degree $n$ on the sphere $S^{d-2}$, \begin{equation}\label{eq:laplY} \nabla_{\Omega}^2 Y_n^I(\Omega)=-(n+d-3)n\,Y_n^I(\Omega)\;,\;n>0\,, \end{equation} and $I$ is some multi-index for the eigenfunctions of fixed degree $n$. The prefactor in (\ref{eq:solrt}) is chosen to cancel the value of the hypergeometric function at $\theta=0$, and $a_{nI}$ are the coefficients of the expansion of $\gamma^{-1}$ in spherical harmonics, \begin{equation} \gamma(\Omega)^{-1}= \sum_{n,I}\,a_{nI}\, Y_n^I(\Omega)\,. \end{equation} We want to impose a standard Lorentz invariant cutoff at \begin{equation}\label{eq:cutoff} z=\t r(\theta,\Omega) \sin(\theta)=\epsilon\,. \end{equation} Let us denote the solution to this equation by $\theta= \beta(\Omega)$; it will depend on the cutoff $\epsilon$ and on the curve $\gamma(\Omega)$.
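Two quick numerical cross-checks of (\ref{eq:solrt}) (our own, via mpmath): each degree-$n$ mode solves the radial equation obtained by inserting (\ref{eq:laplY}) into (\ref{eq:EOMr}), and the prefactor cancels the value of the hypergeometric function at $\theta=0$, as follows from Gauss's summation theorem.

```python
# Checks on (eq:solrt): each degree-n mode solves the theta equation from
# (eq:EOMr), and the prefactor cancels the hypergeometric function at theta = 0.
import mpmath as mp
mp.mp.dps = 30

def u(theta, d, n):
    c = mp.cos(theta)
    return c**n * mp.hyp2f1((n - 1)/2, n/2, (d - 1)/2 + n, c**2)

def prefactor(d, n):
    return (mp.sqrt(mp.pi)*mp.gamma(d - 1 + n)
            / (2**(d + n - 2)*mp.gamma(d/2)*mp.gamma((d - 1 + 2*n)/2)))

# mode equation: u'' - ((d-2) tan t + (d-1) cot t) u' - n(n+d-3)/cos^2 t u = 0
d, n, t0 = 4, 2, mp.mpf('0.8')
res = (mp.diff(lambda t: u(t, d, n), t0, 2)
       - ((d - 2)*mp.tan(t0) + (d - 1)*mp.cot(t0))*mp.diff(lambda t: u(t, d, n), t0)
       - n*(n + d - 3)/mp.cos(t0)**2*u(t0, d, n))
print(abs(res) < mp.mpf('1e-15'))

# prefactor * u(theta = 0) = 1, for a range of d and n
print(all(abs(prefactor(dd, nn)*u(0, dd, nn) - 1) < mp.mpf('1e-20')
          for dd in (3, 4, 5, 6) for nn in (0, 1, 2, 3)))
```

Both conditions together guarantee that (\ref{eq:solrt}) obeys the boundary condition $\t r(0,\Omega)^{-1}=\gamma(\Omega)^{-1}$ mode by mode.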
The minimal area then becomes \bea \mathcal A&=&L^{d-1} \int d^{d-2 }\Omega\,\int_{\beta(\Omega)}^{\pi/2}\,d\theta\,\frac{(\cos \theta)^{d-2}}{(\sin \theta)^{d-1}} \\ &=&L^{d-1}\int d^{d-2 }\Omega\,\frac{1}{d-1}\,(\cos \beta)^{d-1}\,{}_2F_1 (\frac{d-1}{2}, \frac{d}{2},\frac{d+1}{2}, \cos^2 \beta)\,. \nonumber \eea This has the form of a local action for the entropy, as in the QFT calculation. Also, as anticipated, all the dependence on $\gamma(\Omega)$ arises through the cutoff $\beta$. Since $\beta \sim \mathcal O(\epsilon)$, we expand in small $\beta$, obtaining \begin{equation}\label{eq:As} \mathcal A= L^{d-1} \int d^{d-2 }\Omega\, \left \lbrace \frac{1}{d-2} \frac{1}{\beta^{d-2}}- \frac{2d-5}{6(d-4)} \frac{1}{\beta^{d-4}}+ \left(\frac{3}{8(d-6)}+\frac{d}{18}- \frac{1}{45} \right)\frac{1}{\beta^{d-6}}+ \ldots \right \rbrace + A_0\,. \end{equation} Here \begin{equation}\label{eq:A0} A_0 =L^{d-1} \int d^{d-2 }\Omega\, \frac{\sqrt \pi}{2 \sin \frac{\pi d}{2}} \frac{\Gamma(\frac{d-1}{2})}{\Gamma(\frac{d}{2})}\,= L^{d-1}\,\frac{ \pi^{d/2}}{ \sin \frac{\pi d}{2} \Gamma(\frac{d}{2})}\,. \end{equation} In order to evaluate this expression, we need to solve for $\beta$ in powers of $\epsilon$. Besides the constant term, (\ref{eq:solrt}) contains a series that starts at order $\theta^2$ and one that starts at $\theta^d$. Explicitly, \bea (\t r(\theta, \Omega))^{-1}& =& \gamma(\Omega)^{-1}+ \sum_{n\ge 1,\,I}\,a_{nI} Y_n^I(\Omega)\frac{n(n+d-3)}{2(d-2)} \theta^2 \left\lbrace - 1+ \frac{3 n(n+d-3)-2(d-1)}{12(d-4)} \theta^2 + \ldots \right \rbrace \nonumber \\ &-& \sum_{n\ge 1,\,I}\,a_{nI} Y_n^I(\Omega)\,\theta^d \left \lbrace - \frac{\pi}{2^d \sin \frac{\pi d}{2}} \frac{\Gamma(d+n-1)}{\Gamma(n-1) \Gamma(\frac{d}{2})\Gamma(\frac{d+2}{2})} + \mathcal O(\theta^2)\right \rbrace\,. 
\eea The series in $\theta^2$ can be rewritten in terms of derivatives of $\gamma(\Omega)^{-1}$ by use of (\ref{eq:laplY}), \bea\label{eq:rtlocal} (\t r(\theta, \Omega))^{-1}&=&\gamma(\Omega)^{-1}+ \frac{1}{2(d-2)} \nabla_\Omega^2 (\gamma^{-1}) \theta^2 \nonumber\\ &+& \frac{1}{24(d-2)(d-4)} \left(2(d-1) \nabla_\Omega^2(\gamma^{-1}) + 3 \nabla_\Omega^2\nabla_\Omega^2(\gamma^{-1}) \right) \theta^4 + \ldots \eea This can also be verified by solving (\ref{eq:EOMr}) in powers of $\theta^2$. In contrast, the series that starts at order $\theta^d$ does not appear to have a local expansion in derivatives of $\gamma^{-1}$. This series is fixed by requiring regularity at the interior $\theta \to \pi/2$, which is the condition that fixed (\ref{eq:solrt}). Such terms end up modifying the EE at order $\epsilon^2$, and hence vanish in the limit in which the UV regulator is taken to zero. We will neglect them in what follows. Plugging (\ref{eq:rtlocal}) into (\ref{eq:cutoff}) leads to the power-series solution \begin{equation}\label{eq:beta} \beta(\Omega) =\epsilon \gamma(\Omega)^{-1} +\frac{1}{6} \epsilon^3\,\gamma(\Omega)^{-3} \left(1+\frac{3}{d-2} \gamma\,\nabla_\Omega^2\,(\gamma^{-1})\right)+\ldots \end{equation} We now use (\ref{eq:As}) and (\ref{eq:beta}) to study the extremal surface area in a derivative expansion. For general $d$, we have \bea\label{eq:Ad} \mathcal A&=&L^{d-1} \int d^{d-2}\Omega\,\Bigg \lbrace \frac{1}{d-2} \frac{\gamma^{d-2}}{\epsilon^{d-2}}- \frac{d-3}{2(d-2)(d-4)}\frac{\gamma^{d-4}}{\epsilon^{d-4}} \left((d-2) + \frac{d-4}{d-3}\gamma \nabla_\Omega^2(\gamma^{-1}) \right) \nonumber\\ &+&\frac{(d-3)(d-5)}{8(d-2)(d-4)(d-6)}\frac{\gamma^{d-6}}{\epsilon^{d-6}}\Big [ (d-2)(d-4)+\frac{(d-4)(d-6)}{(d-2)(d-3)} (\gamma \nabla_\Omega^2(\gamma^{-1}))^2 \nonumber \\ &-& \frac{d-6}{(d-3)(d-5)} \left( \gamma \nabla_\Omega^4(\gamma^{-1})-2(d-3)(d-5) \gamma \nabla_\Omega^2(\gamma^{-1})\right) \Big]+ \ldots \Bigg \rbrace\,. 
\eea \subsubsection{Odd $d$} For odd $d$, we recognize in (\ref{eq:Ad}) the derivative expansion in terms of the conformal Laplacians presented in (\ref{eq:Sodd}) and (\ref{eq:Wk}). Furthermore, (\ref{eq:A0}) gives the universal constant term for the EE in holographic theories dual to Einstein gravity. It has the right $(-1)^{\frac{d-1}{2}}$ sign structure. Comparing with (\ref{eq:Sodd}) allows us to identify \begin{equation} F=(-1)^{\frac{d-1}{2}} \frac{L^{d-1}}{4 G_N}\,\frac{ \pi^{d/2}}{ \Gamma(\frac{d}{2})}\,. \end{equation} This is the same for any curve $\gamma(\Omega)$ on the cone, and agrees (as it should) with the holographic result for the sphere~\cite{Ryu:2006ef}.\footnote{By a slight abuse of notation, we keep the sign $(-1)^{\frac{d-1}{2}}$ as part of $F$, in agreement with our convention in (\ref{eq:Sodd}). However, the standard notation for $F$ does not include the sign, as in (\ref{eq:EEsphere}).} In particular, for $d=3$ (\ref{eq:Ad}) becomes \begin{equation} \mathcal A=L^2 \int d\Omega\,\left ( \frac{\gamma}{\epsilon} -1 + \mathcal O(\epsilon^3) \right)\,.\label{tresd} \end{equation} Note from (\ref{eq:Ad}) that the term of order $\epsilon$ is a total derivative $\nabla_\Omega^2(\gamma^{-1})$ in $d=3$. For $d=5$, after integration by parts, \begin{equation} \mathcal A=L^4 \int d^3 \Omega\, \left \lbrace \frac{1}{3} \frac{\gamma^3}{\epsilon^3}-\frac{1}{3} \frac{\gamma}{\epsilon}\left(3+ \left(\frac{\nabla_\Omega \gamma}{\gamma} \right)^2 \right) + \frac{2}{3}+ \mathcal O(\epsilon) \right\rbrace\,.\label{cincod} \end{equation} As in (\ref{eq:conformalkin}), the last two terms give the kinetic term for a conformally coupled scalar field, and the first term is a classically conformally invariant potential.
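The closed hypergeometric form of the area integral, and the constant term in (\ref{cincod}), can be checked numerically (our own sketch via mpmath): the hypergeometric expression agrees with direct quadrature, and for $d=5$ subtracting the divergent terms of the small-$\beta$ expansion leaves the constant $2/3$.

```python
# Check the closed form of the area integral and, for d = 5, the constant 2/3
# left after subtracting the divergent terms of the small-beta expansion.
import mpmath as mp
mp.mp.dps = 30

def closed_form(beta, d):
    c = mp.cos(beta)
    return c**(d - 1)/(d - 1)*mp.hyp2f1((d - 1)/2, d/2, (d + 1)/2, c**2)

# closed form vs direct quadrature of int_beta^{pi/2} cos^{d-2}/sin^{d-1}
print(all(abs(mp.quad(lambda t: mp.cos(t)**(d - 2)/mp.sin(t)**(d - 1),
                      [mp.mpf('0.3'), mp.pi/2]) - closed_form(mp.mpf('0.3'), d))
          < mp.mpf('1e-20') for d in (3, 4, 5, 6)))

# d = 5: closed_form(beta) - 1/(3 beta^3) + 5/(6 beta) -> 2/3 as beta -> 0
beta = mp.mpf('1e-3')
print(abs(closed_form(beta, 5) - 1/(3*beta**3) + 5/(6*beta) - mp.mpf(2)/3) < mp.mpf('1e-3'))
```

The same subtraction, applied with the cutoff $\beta\simeq\epsilon\gamma^{-1}$, is what produces the local terms and the shape-independent constant in the entropy.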
\subsubsection{Even $d$} For even $d$, the expression (\ref{eq:Ad}) explains the origin of the universal logarithmic terms, \begin{equation} \frac{1}{d-2n} \frac{\gamma^{d-2n}}{\epsilon^{d-2n}}\,\to\, \log \frac{\gamma}{\epsilon} \end{equation} for $d=2n$. It also gives rise to the correct WZ terms, although it is not obvious how to rewrite the previous expressions with hypergeometric functions as (\ref{eq:SWZd}). Let us check this for $d=4, 6$. For $d=4$, \begin{equation} \mathcal A =L^3 \int d^2 \Omega \left \lbrace \frac{1}{2} \frac{\gamma^2}{\epsilon^2}- \frac{1}{2} \log \frac{\gamma}{\epsilon} - \frac{1}{4}\left(\frac{\nabla_\Omega \gamma}{\gamma} \right)^2+ \mathcal O(\epsilon^0) \right \rbrace\,.\label{cuatrod} \end{equation} The second and third terms combine to give the two-dimensional WZ action (\ref{eq:SWZ4}). For $d=6$, \bea\label{eq:holo6} \mathcal A &=&L^5 \int d^4 \Omega \Bigg \lbrace \frac{1}{4} \frac{\gamma^4}{\epsilon^4}- \frac{1}{2} \frac{\gamma^2}{\epsilon^2}\left(\frac{3}{2}+\frac{1}{4}\gamma \nabla_\Omega^2 (\gamma^{-1}) \right) \nonumber\\ &+& \frac{1}{8} \left(3 \log \frac{\gamma}{\epsilon}+ \frac{1}{16}(\gamma \nabla_\Omega^2 (\gamma^{-1}) )^2 - \frac{1}{8} \gamma \nabla_\Omega^4(\gamma^{-1})+\frac{3}{4}\gamma \nabla_\Omega^2 (\gamma^{-1}) \right)+ \mathcal O(\epsilon^0) \Bigg \rbrace\,. \eea It is not hard to verify that this result is a linear combination of the WZ action (\ref{eq:SWZ6}) and the two invariant terms that are obtained from $\hat R^2$ and $\hat R_{ab}^2$ in (\ref{eq:Snodd}). This is a nontrivial check, given that the four terms in the last line of (\ref{eq:holo6}) are reproduced by the QFT formula, which has three independent contributions at this order. \subsection{Comments on cusps}\label{subsec:cusps} The holographic formula for the entropy contains terms depending on derivatives of $\gamma$. Here we want to comment on the interpretation of these terms when $\gamma$ is not smooth.
We will only treat the case of a cusp, that is, the case of a jump in derivatives, and for simplicity will keep the discussion centered on the low dimensions $d=3,4$. For a smooth surface, $\nabla_\Omega^2 (\t r^{-1})$ is finite as $\theta \to 0$; then we found in (\ref{eq:rtlocal}) that $\partial_\theta (\t r(0,\Omega)^{-1})=0$ and our previous results apply. However, this need not be true near a cusp. Before getting to the cusps, let us assume that there is some power-law singularity as we approach the boundary, \begin{equation} \nabla_\Omega^2 (\t r^{-1}) = C_0 \,\theta^{-\nu}\;,\;\theta \to 0\,. \end{equation} Solving the equation of motion for small $\theta$ then gives \begin{equation} \t r^{-1} \approx \frac{C_0}{(2-\nu)(d+\nu-2)}\,\theta^{2-\nu} \,. \end{equation} Therefore, negative powers of $\theta$ from $\nabla_\Omega^2 \t r^{-1}$ will indeed modify the expansion (\ref{eq:rtlocal}). We will now see that $\nu=1$ at codimension one cusps. For simplicity, let us focus on $d=3$, and consider a cusp at $\phi=\phi_0$ with local angle $\alpha$. Then, close to the cusp, $\gamma''(\phi) \sim \delta(\phi-\phi_0) \tan \alpha$. At finite $\theta$, this delta function is smoothed; we should recover an approximant of the delta function as $\theta \to 0$. By dimensional analysis, \begin{equation}\label{eq:rcusp} \partial_\phi^2(\t r(\theta,\phi)^{-1}) \approx \frac{\tan\alpha}{\pi} \,\frac{\theta}{\theta^2+(\phi-\phi_0)^2}\,, \end{equation} valid for small $\theta$ and near the cusp. Indeed, it is not hard to check that \begin{equation} \lim_{\theta \to 0} \,\frac{1}{\pi}\,\frac{\theta}{\theta^2+(\phi-\phi_0)^2}= \delta(\phi-\phi_0)\,. \end{equation} Plugging (\ref{eq:rcusp}) into the minimal area equation and expanding for small $\theta$, we find \begin{equation}\label{eq:drcusp} \partial_\theta(\t r(0,\phi)^{-1})= \left \lbrace \begin{matrix}\frac{1}{2\pi}\tan \alpha\,, & \phi = \phi_0 \\ 0 \,,& \phi \neq \phi_0 \end{matrix} \right.
\end{equation} This can also be checked by computing the Fourier coefficients and performing the full sum (\ref{eq:solrt}). For instance, the calculation can be done explicitly for a cusp of the form $\sin |\phi|$. The same will happen for $d\ge 4$ as long as the cusp has codimension one, with $\phi$ above playing the role of the local normal coordinate. Indeed, for a cusp at $\phi_0$ that locally looks like $\gamma^{-1}\sim |\phi-\phi_0|$, we have $\nabla^2_\Omega \gamma^{-1} \sim \delta(\phi-\phi_0)$; this is just the familiar fact that $|\phi-\phi_0|$ is the one-dimensional Green's function. This also says that contributions from cusps of higher codimension will be smaller. Indeed, to get a delta function from $\nabla^2_\Omega \gamma^{-1}$ at codimension $n$, we need $\gamma^{-1} \sim 1/|\vec x-\vec x_0|^{n-2}$. However, we are considering curves without such divergences, and so all the cusp contributions will have $\nu<1$, with $\nu=1$ for codimension one cusps only. We conclude that the area integral is not affected by null cusps, since (\ref{eq:drcusp}) modifies the expansion of $\beta(\Omega)$ on a measure zero set of points (the cusps). Therefore the formula (\ref{eq:Ad}) for the entropy has to be integrated on each side of the cusp where the regular expansion in $\theta$ works, without any further cusp contribution. In consequence, the Markov property continues to hold when there are cusps. However, we cannot eliminate boundary terms in the integration by parts when there is a cusp. For example, the finite term with a Laplacian in $d=4$ can be treated in the following way when there are cusps. 
We integrate in the smooth patches $P_i$ to get \begin{equation} \int_{P_i} d\Omega\, r \nabla^2_\Omega r^{-1}=\int_{P_i} d\Omega\, \frac{\nabla_\Omega r \cdot \nabla_\Omega r}{r^2}-\int_{\partial P_i} dl\, \eta\cdot\frac{ \nabla_\Omega r}{r}\,,\label{boundarys} \end{equation} where the scalar products are with the sphere metric, and $\eta$ in the last term is the outward pointing unit normal to the boundary $\partial P_i$ on the sphere. The first term has a discontinuous but bounded integrand on the boundary (the position of the cusp). It is interesting to see that, written in this way, the contributions of the local integrand cancel locally in the SSA relation, while the boundary term also cancels in the SSA relation because it has opposite contributions for the intersection and the union. This is because these have locally the same $(\nabla_\Omega r)/r$ at the points of the boundary of the patch, but opposite $\eta$. \subsection{Higher derivative gravity theories} In the remainder of this section, we will extend the previous results to include stringy and quantum effects. Higher derivative gravity theories in the bulk around an AdS solution represent different CFTs incorporating $1/\lambda$ corrections, with $\lambda$ the 't Hooft coupling. A general form of the EE functional corresponding to higher derivative Lagrangians was discussed in \cite{Dong:2013qoa,Camps:2013zua}. The result is a geometric functional computed on the generalized Ryu-Takayanagi surface $\Sigma$, including curvature and extrinsic curvature corrections. Here we want to briefly discuss how the main results of the preceding sections are expected to remain unchanged for these models. For a gravity action that is a function of the curvature tensor, the generalized entropy functional has two types of terms.
The first is Wald's entropy formula \begin{equation} -2 \pi \int d^{d-1}y\, \sqrt{g}\, \frac{\partial L}{\partial R_{\mu\rho\nu\sigma}} \varepsilon_{\mu\rho}\varepsilon_{\nu\sigma}\,, \label{wald} \end{equation} where \begin{equation} \varepsilon_{\mu\nu}= n_\mu^{(a)} n^{(b)}_\nu \varepsilon_{ab}\,, \end{equation} the vectors $n^{(a)}$, $a=1,2$, are two normalized vectors normal to the codimension two surface, and $\varepsilon_{ab}$ is the usual two-dimensional Levi-Civita tensor. In what follows we find it convenient to choose $n^{(a)}$ as two null vectors orthogonal to the surface, normalized by $n^{(1)}\cdot n^{(2)}=1$. The second type of terms involves the extrinsic curvatures of the surface and is proportional to \bea &&\int d^{d-1}y\, \sqrt{g}\,\frac{\partial^2 L}{\partial R_{\mu_1\rho_1\nu_1 \sigma_1}\partial R_{\mu_2\rho_2\nu_2 \sigma_2}} \,K_{\lambda_1\rho_1\sigma_1} \,K_{\lambda_2\rho_2\sigma_2}\label{otrooo}\\ &&\nonumber\hspace{2cm} \times \left((\eta_{\mu_1\mu_2} \eta_{\nu_1\nu_2}-\varepsilon_{\mu_1\mu_2}\varepsilon_{\nu_1\nu_2})\eta^{\lambda_1\lambda_2}+(\eta_{\mu_1\mu_2}\varepsilon_{\nu_1\nu_2}+\varepsilon_{\mu_1\mu_2}\eta_{\nu_1\nu_2})\varepsilon^{\lambda_1\lambda_2}\right)\,. \eea Here $\eta$ is the projector onto the vector space normal to the surface \begin{equation} \eta_{\mu\nu}=n^{(1)}_\mu n^{(2)}_\nu+n^{(2)}_\mu n^{(1)}_\nu\,. \end{equation} The extrinsic curvature is given by \begin{equation} K_{\lambda\mu\nu}= n^{(2)}_\lambda P^\alpha_\mu P^\beta_\nu \nabla_\alpha n^{(1)}_\beta + n^{(1)}_\lambda P^\alpha_\mu P^\beta_\nu \nabla_\alpha n^{(2)}_\beta\,,\label{ex} \end{equation} where $P$ is the projector onto the tangent space of the surface \begin{equation} P^\alpha_\mu= g^\alpha_\mu-\eta^\alpha_\mu\,. \end{equation} The bulk metric is pure AdS, corresponding to the vacuum of the CFT. In AdS the curvature tensor is proportional to combinations of products of the metric tensor. In consequence, Wald's term (\ref{wald}) is proportional to the area functional.
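To make this collapse explicit (a sketch, with overall signs depending on conventions): for the null normalization above one has $\varepsilon_{\mu\nu}=n^{(1)}_\mu n^{(2)}_\nu-n^{(2)}_\mu n^{(1)}_\nu$, so that \begin{equation} \varepsilon_{\mu\nu}\varepsilon^{\mu\nu}=2\left[(n^{(1)}\!\cdot n^{(1)})(n^{(2)}\!\cdot n^{(2)})-(n^{(1)}\!\cdot n^{(2)})^2\right]=-2\,. \end{equation} For the Einstein-Hilbert Lagrangian $L=(R-2\Lambda)/(16\pi G_N)$ one has $\partial L/\partial R_{\mu\rho\nu\sigma}=(g^{\mu\nu}g^{\rho\sigma}-g^{\mu\sigma}g^{\rho\nu})/(32\pi G_N)$, and the contraction in (\ref{wald}) gives $2\varepsilon_{\mu\nu}\varepsilon^{\mu\nu}/(32\pi G_N)=-1/(8\pi G_N)$, so that (\ref{wald}) reduces to the area term $\mathcal A/(4G_N)$. For a general higher derivative Lagrangian evaluated on pure AdS, $\partial L/\partial R_{\mu\rho\nu\sigma}$ is again built from antisymmetrized products of the metric, and the same contraction yields a (different) constant times the area.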
Let us consider a surface $\Sigma$ that lies on the bulk null cone $\tilde{r}^+=0$. In that case we can choose $n^{(1)}$ to be the Killing null vector parallel to the cone. Then we have \begin{equation} (\nabla_\alpha n^{(1)}_\beta+\nabla_\beta n^{(1)}_\alpha) =0\,. \end{equation} As the extrinsic curvature tensor (\ref{ex}) is symmetric in $\mu,\nu$, the contribution of the derivative of $n^{(1)}$ vanishes. In consequence, only one term remains in the extrinsic curvature (\ref{ex}), and the integrand in (\ref{otrooo}) vanishes as well. In addition, we have here a situation analogous to the one of surfaces $\gamma$ in a null plane discussed in Sec.~\ref{sec:markov}. The areas of any two surfaces lying on this null cone in AdS are equal, since only the projection of the surface orthogonal to $n^{(1)}$ contributes, and there is an isometry that shows that these projections are equal along the direction of the null ray. Then, on the null cone in the bulk, all surfaces give the same value of the functional. The equations that fix the position of $\Sigma$ in the general case follow by extremizing the entropy functional \cite{Dong:2017xht}. For surfaces on the null cone, the variation of the entropy functional under displacements contained within the null cone vanishes. Hence, analogously to the case of Einstein gravity treated above, one of the equations of motion is solved precisely by placing $\Sigma$ on the null cone, and this is compatible with the boundary conditions. The other equation of motion will fix the shape of the surface on the cone itself. On the cone, the functional is just proportional to the area, but this need not be the case for deformations that take the surface outside the cone. Hence, we expect the differential equation for $\tilde{r}^-$ to get modified by the higher derivative terms in the Lagrangian. However, this equation should still be linear.
This is because, as we have explained in section \ref{subsec:plane}, boost invariance will lead to a linear equation for regions on the null plane on the boundary, and a conformal transformation will give a linear equation for $(\tilde{r}^-)^{-1}$. In any case, once the surface is determined, the Markov property follows from the fact that the functional on the cone reduces to a term proportional to the area, and the area on the cone is independent of shape. Then, the result can only be affected by the position of the cutoff. Again, we will have a local expression for the entropy as a function of $\gamma$, with the same types of terms found in Sec.~\ref{sec:qft}. The only change can be in the coefficients of the independent terms, in particular the value of the anomaly. This can be calibrated by computing the entropy of the sphere. See for example \cite{Hung:2011xb}. \subsection{$1/N$ corrections} According to \cite{Faulkner:2013ana}, $1/N$ corrections to the entanglement entropy in the large $N$ limit come from quantum corrections in the bulk. One has to add to the holographic entropy the entanglement entropy of quantum fields living in the bulk across the Ryu-Takayanagi surface. For the regions on the light-cone we are considering, the entangling surfaces all lie on the bulk light-cone $\tilde{r}^+=0$ in AdS. Then, we can apply an argument analogous to the one in Sec.~\ref{sec:markov} for the null plane in Minkowski space. The bulk EE has to be a functional of surfaces on the light-cone, and this light-cone is mapped into itself by isometries of AdS which correspond to conformal symmetries of the boundary theory. For example, we can take a surface $\gamma$ on the boundary, and a sphere $\gamma'$ on the light-cone which does not cut $\gamma$. The modular flow corresponding to $\gamma'$ will move $\gamma$ towards $\gamma'$ as much as we want.
In the bulk, this corresponds to an isometry that will squeeze the entangling surface of $\gamma$ as much as we want towards the entangling surface of the sphere $\gamma'$ (which is a sphere in the bulk). This symmetry keeps the vacuum invariant and respects a covariant cutoff in the bulk. Hence it will keep the bulk EE invariant. We conclude that quantum corrections in the bulk, except for terms coming from the UV cutoff of the boundary theory, will be the same for all regions on the light-cone, and will not spoil the Markov property. We expect the same structure of the entropy as in Sec.~\ref{sec:qft}, with some corrections in the different coefficients for the independent possible terms. \section{Revisiting the entropic proof of the $a$-theorem} \label{sec:athm} In the previous sections we obtained the explicit form of the CFT entropy on the null cone and worked out the holographic case. In this section we will use this information to check the arguments leading to a proof of the $a$-theorem in $d=4$ in~\cite{Casini:2017vbe}. These followed the lower dimensional cases ($d=2,3$) treated in~\cite{Casini:2004bw,Casini:2012ei}, where the strong subadditivity of the entropy was used for spheres (intervals or circles in $d=2$ and $d=3$ respectively) on the light-cone to show the monotonicity of the $c$ and $F$ quantities. In particular, the result (\ref{eq:Seven}) for the entropy for arbitrary regions on the null cone will allow us to see explicitly why the Markov property has to be invoked as a key ingredient in $d=4$, as opposed to the $d=2$ and $d=3$ cases. However, from the outset we can say that the Markov property plays an important hidden role even in dimensions lower than $d=4$. This is because if the strong subadditivity inequality is to teach us something non-trivial about the RG running, it must be the case that this inequality saturates for a CFT, where no relevant RG running is taking place.
This shows the precise reason for the geometric setup of these theorems involving regions on the null cone. This is basically the only case where the Markov property holds for a CFT.\footnote{For regions $A$ and $B$ where $A-B$ and $B-A$ contain non-trivial spatial slices the Markov property cannot hold since there is quantum entanglement between them, as can be seen from the failure of Bell's inequalities for the correlators \cite{Verch:2004vj}.} \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{boosted.jpg} \caption{Boosted circles lying on the null cone in $d=3$. The vertical axis of the cone gives the time direction.} \label{fig:boosted} \end{center} \end{figure} Let us first review the arguments in~\cite{Casini:2012ei}. We start with a boosted sphere of radius $\sqrt{r R}$ lying on the null cone between the time slices at time $|t|=r$ and $|t|=R>r$. We then take a large number $N$ of rotated copies of this sphere, as equally distributed on the unit sphere of directions as possible.\footnote{It is not possible to distribute them in a regular fashion for $d>3$. The details of this distribution on the unit sphere of directions turn out to be irrelevant as long as a uniform distribution is approached for large $N$.} From strong subadditivity, in the limit of large $N$, we get the inequality \begin{equation} S(\sqrt{r R})\ge \int_r^R dl\ \beta(l) \tilde{S}(l)\,. \label{41} \end{equation} In this expression $ \tilde{S}(l)$ are the entropies of ``wiggly'' spheres that come about in the process of intersecting and joining boosted spheres in the SSA inequality -- see Fig.~\ref{fig:boosted}. The wiggly spheres have an approximate radius $l\in (r,R)$, and lie around the surface of equal time $|t|=l$; the deviations from the perfect sphere of radius $l$ at $|t|=l$ form the wiggles, which lie on the null cone and have a typical width $\sim l/N^{1/(d-2)}$ that tends to zero for large $N$.
$\beta(l)$ is the density of wiggly spheres as the number of boosted spheres $N\rightarrow \infty$, divided by $N$.\footnote{Strictly speaking the integral in (\ref{41}) is a sum over $N$ wiggly sphere entropies divided by $N$. The notation with an integral and a density of wiggly spheres of the same radius is a convenience here, that will make sense for later expressions when we take the limit $N\rightarrow \infty$, and more information about the entropies of the wiggly spheres is introduced.} It is given by \begin{equation} \beta(l)=\frac{\text{Vol}(S_{d-3})}{\text{Vol}(S_{d-2})}\, \frac{2^{d-3} (r R)^{\frac{d-2}{2}} \left( (l-r)(R-l) \right)^{\frac{d-4}{2}} }{ l^{d-2} (R-r)^{d-3}}\,, \end{equation} normalized to have unit integral, \begin{equation} \int_r^R dl\, \beta(l)=1\,.\label{normali} \end{equation} In a sense these wiggly regions tend to spheres of radius $l$ for large $N$, but we have to work out how exactly the entropies behave in this limit. Note that even if the amplitude of the wiggles decreases with $N$, this is not the case for their slope, which remains a fixed function of $l$ in the limit $N\rightarrow \infty$. At this point three different questions arise which have to be understood in order to extract useful information for the monotonicity theorems from (\ref{41}). The first question is whether this inequality contains cutoff independent information, that is, whether the divergent terms cancel between the two sides of the inequality. Since divergences are local on the boundary of the regions, this can be rephrased as whether the new features on the wiggly spheres, coming for example from the locus of intersection of two or more spheres, give rise to new unbalanced divergent terms or not. The second question is whether, in case the inequality contains information about finite quantities, this can be extracted in a useful way. In other words, whether the wiggly sphere entropies can be related to sphere entropies.
The third and last question is whether the inequality will teach us something about the central charges at the fixed points of the RG. We will discuss these three questions in turn. \subsection{The inequality is UV finite} Unbalanced divergences in the inequality in principle could appear due to the cusps formed at the intersection and union of smooth spheres. We want to present a slightly different geometrical setup which bypasses this issue about divergent terms in any dimension. The idea is to slightly deform the spheres of radius $\sqrt{r R}$ on the left hand side of the inequality along the null cone and around the points of intersection with other rotated spheres, such that all intersections and unions are now smooth (the deformations can be chosen to have infinitely many continuous derivatives). See Fig.~\ref{fig:smooth}. In this case there are no cusps and it is clear that the divergent terms cancel in any regularization. The price we pay is that now we do not have perfect spheres on the left hand side of the inequality, and they are replaced by wiggly spheres of approximate radius $\sqrt{r R}$. The inequality now reads \begin{equation} \frac{1}{N}\sum_i \tilde{S}_i(\sqrt{r R})\ge \int_r^R dl\ \beta(l) \tilde{S}(l)\,, \label{441} \end{equation} where $\tilde{S}(l)$ is the entropy of a wiggly sphere of approximate radius $l$ and again the integral on the right hand side is a shortcut for a sum over $N$ terms. In the present case this is not a big price to pay since we already have to deal with the wiggly spheres on the right hand side. The size of the new wiggles used to smooth out the cusps can be made arbitrarily small.
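For $d=4$ the density $\beta(l)$ of the previous discussion reduces to $\beta(l)=\frac{rR}{l^2(R-r)}$, and its unit normalization (\ref{normali}), together with the moment identity $(\sqrt{rR})^{d-2}=\int_r^R dl\,\beta(l)\,l^{d-2}$ responsible below for the cancellation of the area terms, can be checked by direct integration. A minimal numerical sketch (Python standard library only; the sample values of $r$ and $R$ are ours):

```python
import math

def beta(l, r, R):
    # d = 4 density of wiggly spheres: beta(l) = r R / (l^2 (R - r))
    return r * R / (l**2 * (R - r))

def simpson(f, a, b, n=10000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

r, R = 1.0, 3.0
norm = simpson(lambda l: beta(l, r, R), r, R)            # equals 1 analytically
moment = simpson(lambda l: beta(l, r, R) * l**2, r, R)   # equals r R = (sqrt(rR))^2
print(norm, moment)
```

Both integrals can also be done in closed form for $d=4$, since $\beta(l)\propto 1/l^2$.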
\begin{figure}[t] \begin{center} \includegraphics[width=0.8\textwidth]{smooth.pdf} \captionsetup{width=0.9\textwidth} \caption{Deformations of spheres to smooth out intersections and unions on the light-cone.} \label{fig:smooth} \end{center} \end{figure} While this approach sidesteps the issue of divergences arising at the cusps, in \cite{Casini:2017vbe} we argued that the divergences cancel out from (\ref{41}) even in the presence of cusps. We argued in two steps, assuming a covariant cutoff.\footnote{A general definition of a covariant cutoff for an arbitrary QFT can be provided using mutual information along the same lines as has been done for $d=3$ in \cite{Casini:2015woa}. This is reviewed in Appendix \ref{Appendix:mutual}.} For completeness, in the rest of this section we will review and discuss these arguments. \bigskip 1) First, since (\ref{41}) was obtained by a series of SSA inequalities, the Markov property requires that the divergences cancel for a CFT. Let us see how this comes about. The new divergences on the new local features of the intersections and unions are given by integrals of local geometric terms on the defects of the surface. An essential point is that these defects live on a null cone. The leading divergence scales with the size of the defect, and we also have new terms for all subleading integer powers corresponding to integration of the defect curvatures along the defect. For a CFT the dimensions of these terms are compensated by negative integer powers of the cutoff $\epsilon$ (or a logarithm if the power is zero). Let us focus on $d=4$. We have linear terms growing as $L/\epsilon$ from the intersection of two spheres in a curve of size $L$, and from the same defect, a term proportional to $\log (L/\epsilon)$ due to the integral of the curvature of the intersection curve along the defect. From the vertex of the intersection of three spheres we should also get a logarithmic term.
Now, the argument is that the coefficients of these contributions are either zero or have opposite sign for the contributions of the defect to the union and the intersection that gave rise to it in the SSA inequality. Let us first consider the leading divergences, where no curvature terms are present. Hence the contribution is the same as for the same type of defect on a null plane rather than a cone. The defect will not contribute because there is no geometric quantity depending on the defect ``angles'' on which the entropy can depend, making the defect contribution different from the plane without defect. This is just a manifestation of the argument in Sec.~\ref{sec:markov} about functionals on a null plane being independent of $\gamma$. In other terms, boosting these geometries while keeping the null plane and the location of the defect invariant, one can squash the planes and make them as similar to a single plane without defect as we want. To be more explicit, take for example the case of the vertex in $d=4$. The vertex defines three spatial lines with unit tangents $t_1$, $t_2$ and $t_3$. However, these tangents live in a three-dimensional null plane. Therefore they all can be written as linear combinations of a spatial vector living in a two dimensional plane orthogonal to the null vector $k$ and $k$ itself, $t_i=v_i+\alpha_i k$, with $v_i^2=1, v_i\cdot k=0$. In any invariant formed by the three vectors all contributions from the component along $k$ will vanish, and then the invariant will be the same as the one formed by three lines in a single two dimensional plane, which of course does not define a real vertex. Hence we conclude that these terms have zero coefficient and do not appear in the entropy. The holographic examples in Sec.~\ref{sec:holo} also illustrate this. For $d=3$ and $d=4$ we showed there is no $\log(\epsilon)$ (resp. no $1/ \epsilon$) contribution from the cusps.
In $d=4$ we also have the possibility of a curvature term on the intersection of two spheres. This can sense the form of the null cone and in this way bypass the arguments in Sec.~\ref{sec:markov}. In writing the contribution of the curvature term we are allowed to use the gradient operator $\nabla_\mu$ on the vector $k$, for example, to produce local invariants. However, these gradients are defined on the defect only, and then the indices of the derivatives have to be contracted with one of the defect directions. This defect is locally formed by the intersection of two spatial planes inside the same null hyperplane with null vector $k$. Each spatial plane has another null vector $q_i$ that defines it, such that $q_i^2=0$, $q_i\cdot k=1$. There is an ambiguity in this representation of the planes in the scale of $k$, as we can freely rescale $k\rightarrow \lambda k$, $q_i\rightarrow (1/\lambda) q_i$. Then, in order to produce the integrand of the contribution we have to write an invariant using the same number of vectors $q_i$ as of $k$. The only non-trivial invariant with the right dimensions is \begin{equation} \int dx^\mu \, (\nabla_\mu k^\alpha) \,k^\beta \,q_1^\gamma \,q_2^\delta\, \varepsilon_{\alpha\beta\gamma\delta}\,. \end{equation} This requires a choice of ordering of the two vectors $q_1$, $q_2$, which can be assigned for example by choosing first the one to the right of the direction of integration along the intersection. This orientation changes sign when we compute the contributions of this defect to the intersection and the union of the two spheres, and hence the full $\log \epsilon$ contribution of these defects to the SSA inequality vanishes. In our general analysis in Sec.~\ref{sec:qft}, and the holographic case in Sec.~\ref{sec:holo}, we have in fact learned a bit more. We have shown that the total coefficient of the $\log \epsilon$ term is a topological invariant and is always the same for any shape on the null cone.
This is given by an integral of the intrinsic curvature of the surface, giving the Euler number (the only non-vanishing term in Solodukhin's formula~\cite{Solodukhin:2008dh} in this case). Hence, the $\log(\epsilon)$ contribution clearly cancels from SSA. To see how this fits with the previous argument, suppose we have a normalized contribution $\log(\epsilon)$ for any shape and we are doing the SSA of two spheres of radius $\sqrt{r R}$. The logarithmic coefficient for the intersection and union should be of the form \bea 1=\frac{\textrm{area}_\cap}{4\pi r R} + \textrm{cusp}_\cap \,,\\ 1=\frac{\textrm{area}_\cup}{4\pi r R} + \textrm{cusp}_\cup \,, \eea where the first term on the right hand side comes from integration of the constant intrinsic curvature of the spheres and is proportional to the total solid angle. Summing these two equations and using $\frac{\textrm{area}_\cap+\textrm{area}_\cup}{4\pi r R} =2$ we get $\textrm{cusp}_\cap=-\textrm{cusp}_\cup$, which coincides with the previous argument. \bigskip 2) The previous argument shows that the inequality is free from divergences for a CFT. If we add a relevant deformation, other divergent terms can appear with different powers of $\epsilon$, and where some cutoff powers are replaced by powers of the coupling constant. However, the important point is that these terms are again local on the boundary and have to have the same geometric structure as for a CFT, being integrals of local geometric tensors on the boundary. That is, the only change is the replacement of cutoff powers by coupling constants. Then, the previous argument still gives an inequality free of divergences. \subsection{Converting wiggly spheres into spheres} We would like to convert wiggly spheres into spheres in (\ref{41}) or (\ref{441}). It turns out that this is correct for $d=2$ (since there are no wiggly intervals) and for $d=3$, where terms produced by the wiggles go to zero for large $N$.
This is not the case for $d=4$, and the naive replacement of wiggly spheres by spheres just violates the Markov property. Let us see this in more detail. For a CFT in $d=4$ the entropy for a sphere has the form \begin{equation} S(l)=c\, \frac{l^2}{\epsilon^2}-4 a \log(l/\epsilon)\,. \end{equation} If we attempt to plug this formula into the Markov equation, assuming wiggly spheres can be replaced by spheres, \begin{equation} S(\sqrt{r R})= \int_r^R dl\ \beta(l) S(l)\,, \label{4441} \end{equation} we find this is not correct. The area term does indeed cancel since \begin{equation} (\sqrt{r R})^{d-2}= \int_r^R dl\ \beta(l) l^{d-2}\,,\label{acan} \end{equation} and the constant $\log(\epsilon)$ term cancels as well due to (\ref{normali}). However, this is not the case for the $-a \log(l)$ term. The issue here is that there is a nontrivial contribution to the wiggly sphere entropy from the finite term in (\ref{cuatrod}) that comes together with the logarithmic term; this contribution, however, cancels for spheres at constant $t$ on the right hand side of (\ref{4441}). This invalidates the replacement of wiggly spheres by spheres. We will now see that taking this difference into account correctly restores the Markov equality. With $l=\sqrt{x^2+y^2+z^2}$, and $\theta$ the usual polar angle, the equation for the boosted sphere of radius $\sqrt{r R}$ is \begin{equation} |t|=l=\frac{2 r R}{r+R-(R-r)\cos(\theta)}\,. \end{equation} We have \begin{equation} \frac{1}{2}\frac{(\nabla_\Omega \gamma)^2}{\gamma^2}= \frac{1}{2}\left(\frac{1}{l}\partial_\theta l\right)^2=\frac{(R-l)(l-r)}{2 r R}\,. 
\end{equation} We get a constant integrand (except for higher order terms in $1/N$) on the surface of the wiggly sphere of approximate radius $l$.\footnote{The boundary terms in (\ref{boundarys}) cancel automatically in the sum over wiggly spheres.} Taking into account this term, the Markov equation for the finite terms \begin{equation} \log(\sqrt{rR})=\int_r^R dl\, \beta(l)\,\left(\log(l)+\frac{(R-l)(l-r)}{2 r R}\right)\,, \end{equation} is now satisfied, once we replace $\beta=\frac{r R}{l^2 (R-r)}$ corresponding to $d=4$. Note that the cancellation happens in each SSA equality but in terms of the wiggly spheres it happens ``non locally'', and takes all the range $l\in (r,R)$. Therefore, a finite term coming from the wiggles obstructs replacing the wiggly spheres by spheres. The idea of~\cite{Casini:2017vbe} was to take advantage of the Markov property of a CFT to subtract from the inequality for the entropies $S$ of the deformed theory the equation corresponding to the entropies $S_0$ of the UV CFT. This can be done at no cost since the SSA of $S_0$ vanishes exactly. We have shown that, in addition, the divergent terms coming from massive deformations are also Markovian and cancel in the SSA inequality; we can subtract them as well, without spoiling the inequality. Then, in any dimensions, we safely replace \begin{equation} S(l)\rightarrow \Delta S(l)=S(l)-S_0(l)-\textrm{massive divergent terms}\,, \end{equation} in (\ref{441}). Now the finite terms of the wiggles coming from the UV fixed point disappear in the subtraction, and we are free to replace subtracted wiggly spheres by subtracted spheres, taking the limit $N\rightarrow \infty$, and getting the inequality \begin{equation} \Delta S(\sqrt{r R}) \ge \int_r^R dl\ \beta(l) \Delta S(l)\,. \label{111} \end{equation} We still have to check that there are no finite terms induced by a mass parameter that give a contribution for the wiggles that survive in the limit of small wiggles for the deformed theory. 
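Before turning to that, note that the finite-term Markov equality above can also be verified by direct numerical integration; a minimal Python sketch for $d=4$ (standard library only; the values of $r$ and $R$ are illustrative):

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

r, R = 1.0, 4.0
beta = lambda l: r * R / (l**2 * (R - r))   # d = 4 density of wiggly spheres
# log(l) from the sphere entropy plus the constant wiggle contribution
integrand = lambda l: beta(l) * (math.log(l) + (R - l) * (l - r) / (2 * r * R))
rhs = simpson(integrand, r, R)
lhs = math.log(math.sqrt(r * R))            # log of the boosted sphere radius
print(lhs, rhs)                             # the two sides agree
```

The agreement can also be established in closed form, since both integrals are elementary for $\beta(l)\propto 1/l^2$.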
In fact, the difference in the EE between a wiggly and a non wiggly sphere is controlled by the UV. These terms should be proportional to the squared coupling constant $g^2$ of the UV deformation of the theory, whose mass dimension must be compensated by powers of $r$ and positive powers of the distance scale set by the wiggle size. In consequence, they do not contribute in the large $N$ limit. In more detail, a local term should be of the same form as the ones encountered for CFTs but where a power of the cutoff has been replaced by one of a mass parameter. These contributions are divergent except for some non-generic perturbation dimensions. In any case a local term is always Markovian and can be subtracted as well. If the term induced by the deformation is non local,\footnote{See for example eq. (\ref{oyo1}) in the next subsection.} then the change from the wiggly sphere to the sphere is suppressed by powers of the wiggle size, and does not contribute in the limit. We have computed these wiggly massive corrections holographically in Appendix \ref{app:RG}. The result agrees with these expectations. Note that for $d=3$ the formula (\ref{tresd}) gives no contribution for the wiggles, and we can safely replace wiggly circles by circles without subtracting the CFT entropies. But this is not the case in higher dimensions. \subsection{Irreversibility theorems} We then have (\ref{111}) for spheres in any dimension, where the UV CFT entropy along with other possible divergent contributions have been subtracted. These inequalities are equivalent to the differential ones obtained by taking the limit $r\rightarrow R$: \begin{equation} r\, \Delta S''(r) -(d-3) \Delta S'(r)\le 0\,. \label{tera} \end{equation} Writing the entropy as a function of the area $a$ rather than the radius, we get the compact expression \begin{equation} \Delta S''(a)\le 0 \label{sisi} \end{equation} valid in any dimension.
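The equivalence of (\ref{tera}) and (\ref{sisi}) follows from the chain rule: with $a\propto r^{d-2}$ one finds $\Delta S''(a)\propto\left(r\,\Delta S''(r)-(d-3)\,\Delta S'(r)\right)/r^{2d-5}$ with a positive proportionality factor, so the two conditions have the same sign. This can be checked on any trial profile; a small Python sketch for $d=4$, taking $a=r^2$ and a purely illustrative profile (not an actual entropy):

```python
import math

d = 4
S = lambda r: r**3 + 2.0 * math.log(r)      # illustrative trial profile

def S_of_a(a):
    # the same profile viewed as a function of the area a = r^2
    return S(math.sqrt(a))

def second_derivative(f, x, h=1e-4):
    # central finite difference for f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

r = 1.3
Sp  = 3 * r**2 + 2.0 / r                    # S'(r)
Spp = 6 * r - 2.0 / r**2                    # S''(r)
lhs = r * Spp - (d - 3) * Sp                # left hand side of (tera)
num = second_derivative(S_of_a, r**2)       # numerical S''(a)
exact = lhs / ((d - 2)**2 * r**(2 * d - 5)) # chain-rule prediction
print(num, exact)
```

The two numbers coincide up to finite-difference error, and in particular have the same sign.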
Thus the constraint for $\Delta S$ is that it must be concave as a function of the area. For completeness, let us briefly review here the results of~\cite{Casini:2017vbe}. With our definition of $\Delta S$, that has the entropy with the UV CFT terms and other possible divergent terms subtracted, in the UV limit of small $r$ all local geometric terms vanish and we get the leading ``nonlocal'' term (see e.g.~\cite{Metlitski:2011pr, Liu:2012eea, Liu:2013una} for the structure of the entropy of spheres at fixed points) \begin{equation} \Delta S_{UV}(r) \sim c_0 \,g^2 r^{2(d-\Delta)}+\ldots =c_0\, g^2 a^{\frac{2(d-\Delta)}{d-2}}+\ldots\,,\label{oyo1} \end{equation} where the ellipsis stands for higher powers of $r$. At the IR fixed point all contributions (except the universal term) are local (proportional to integrals of curvatures on the surface) and we have \bea \Delta S_{IR}(r)&=&\Delta \mu_{d-2}\,r^{d-2}+\Delta \mu_{d-4}\, r^{d-4}+\ldots + \left\lbrace \begin{array}{l} (-)^{\frac{d-2}{2}} 4\,\Delta A\, \log(m\, r)\,\, d \,\, \textrm{even}\\ (-)^{\frac{d-1}{2}} \Delta F \hspace{1.9cm}d\,\,\textrm{odd } \end{array}\right.\,\\ \nonumber &=&\Delta \mu_{d-2}\, a +\Delta \mu_{d-4}\, a^{\frac{d-4}{d-2}}+\ldots + \left\lbrace \begin{array}{l} \frac{(-)^{\frac{d-2}{2}} 4}{(d-2)}\Delta A\, \log(m^{d-2}a)\,\, d \,\, \textrm{even}\\ (-)^{\frac{d-1}{2}} \Delta F \hspace{1.9cm}d\,\,\textrm{odd } \end{array}\right.\,, \label{even1} \eea with $m$ a characteristic energy scale of the RG flow. The coefficients $\Delta \mu_{d-k}$ have dimension $d-k$ and have the interpretation of a finite renormalization of the coefficient of $r^{d-k}$ between the UV and IR fixed points. The last term gives the change in the universal part of the EE: $\Delta A=A_{IR}-A_{UV}$, with $A$ the Euler trace anomaly coefficient for even dimensions, and $\Delta F=F_{IR}-F_{UV}$, with $F$ the constant term of the free energy of a $d$-dimensional Euclidean sphere.
Concavity, Eq.~(\ref{sisi}), implies two relations between the short and long distance expansions for $\Delta S(a)$: 1) The slope of the $\Delta S(a)$ curve is bigger at the UV than at the IR; 2) Given that $\Delta S(0)=0$, the height at the origin of the tangent line at the IR has to be positive. The first requirement, comparing (\ref{oyo1}) and (\ref{even1}), and provided $\Delta < (d+2)/2$, gives rise to the ``area theorem'', that is, the decrease along the RG of the coefficient of the area term,\footnote{If $\Delta > (d+2)/2$ the area term at the UV can be considered infinite because the slope of (\ref{oyo1}) diverges as $r\rightarrow 0$.} \begin{equation} \Delta \mu_{d-2}\le 0\,.\label{dfghj} \end{equation} In $d=2$ the area coefficient is dimensionless and (\ref{dfghj}) coincides with the $c$-theorem. The area theorem was obtained in~\cite{Casini:2016udt} using monotonicity of the relative entropy. The second requirement gives for $d=3$ the $F$-theorem, \begin{equation} \Delta F\le 0\,, \end{equation} and for $d=4$ the $a$-theorem, \begin{equation} \Delta A\le 0\,.\label{mia} \end{equation} For higher dimensions $d>4$ it gives \begin{equation} \Delta \mu_{d-4}\ge 0\,.\label{tuya} \end{equation} The inequality does not constrain the sign of the subleading terms, in particular the universal terms, for $d>4$. In addition to these constraints that come from comparison of the UV and IR expansions, we have to check (\ref{sisi}) at the UV and IR expansions themselves. At the IR we get again (\ref{mia}) and (\ref{tuya}) for $d\ge 4$. For $d=3$ we get information on the sign of the first subleading correction to the constant \begin{equation} \Delta S^{d=3}_{IR}= \Delta \mu_1 r -\Delta F - \frac{k}{r^{\alpha}}+\ldots\,, \end{equation} where the last term is purely infrared in origin and $\alpha$ is related to the leading irrelevant dimension of the operator driving the theory to the IR \cite{Liu:2012eea}. We get $k>0$ from (\ref{sisi}).
This coincides with holographic calculations \cite{Liu:2013una}, and free field theory calculations \cite{Huerta:2011qi}. At the UV we get that the sign of the coefficient $c_0$ in (\ref{oyo1}) is the same as the one of $\Delta-(d+2)/2$. This also agrees with holographic calculations \cite{Liu:2012eea}. Notice that while the inequality (\ref{sisi}) saturates at the UV, it does not saturate at the IR for $d\ge 4$. The SSA inequality always saturates at the IR for regions smooth enough (with IR size curvatures), but this does not allow us to derive (\ref{sisi}) precisely because we are not allowed to convert wiggly spheres into spheres for these large wiggles. \section{Final remarks} \label{sec:concl} We have found that the Markov property for EE on the plane, and on the light-cone for CFTs, has an origin that is essentially geometric. Because of that, this property extends to other quantities, e.g. the Renyi entropies; it does not depend on other specific properties that the EE has -- and the Renyi entropies generally do not have -- such as the SSA inequality. The Markov property together with Lorentz invariance determine the general form of the entropies on the light-cone for a CFT, which turns out to be related to dilaton effective actions in two fewer dimensions. The universal part is completely fixed by the coefficient $A$ of the conformal anomaly in even dimensions and is given by the Wess-Zumino anomaly action. For odd dimensions the universal part is just a constant $F$ for any region in the light-cone. Beyond cases that are conformal transformations of the null plane in Minkowski space for CFTs, we expect that the Markov property also holds for any QFT on a space-time having a bifurcate Killing horizon, and where the state is invariant under the Killing symmetry. This is because the Killing symmetry will squash all regions towards the bifurcation surface and keep a covariant cutoff invariant, leading to constant entropies on the horizon.
This includes, for example, an arbitrary QFT in de Sitter space for the de Sitter invariant state and regions on the cosmological horizon, and the Hartle-Hawking state for regions on the horizon of stationary black holes. The Markov property for the Renyi entropies extends the constraints on the density matrix beyond Markovianity. For finite systems, the Markov property for all Renyi entropies in subsystems $A$, $B$, $C$, \begin{equation} S_n(AB)+S_n(BC)=S_n(B)+S_n(ABC)\,, \end{equation} can only be possible if the global state is of the form $\rho_{ABC}=\rho_{AB_1}\otimes \rho_{B_2C}$, with $B_1$ and $B_2$ two subsystems partitioning $B$. Hence, $\rho_{AC}=\rho_A\otimes \rho_C$ is a product. This suggests that the vacuum state is roughly a product over different null pencils in vacuum QFT, though this is not quite correct mathematically for a theory in $d>2$ and an interacting UV fixed point. In this case, the algebras corresponding to finite regions on the light-cone (that do not generate a domain of dependence containing spacetime volume) actually have no degrees of freedom. Anyway, in the cases where this identification makes sense, free theories and CFTs in $d=2$, one can check that the structure of the vacuum is in fact a product state, rather than a more general Markovian state where classical correlations are allowed between $A$ and $C$. For free theories this is described in \cite{Wall:2011hj}, while for a CFT in $d=2$ the vacuum is a product across the two null directions. The present investigation started in the course of attempting to generalize the entropic proofs of the $c$ and $F$ theorems to $d=4$. In this sense it is intriguing that we have found that the entropies on the null cone are classified by dilaton effective actions, which are fundamental in the proof by Komargodski and Schwimmer of the $a$-theorem \cite{Komargodski:2011vj}. However, in the present case, the dilaton lives in $d-2$ dimensions rather than $d$ dimensions.
This connection was also noticed by Solodukhin in \cite{Solodukhin:2013yha}. Another difference is that our non-dynamical dilaton does not necessarily obey unitarity constraints. It would be interesting to investigate whether this connection could be the basis for extending the irreversibility theorems to dimensions higher than $d=4$. We have checked that the general expressions for the entropy on the cone hold holographically. It is surprising that exact holographic expressions can be found for the entropy of such a large class of regions, though we can understand the origin of this simplification from more general principles. We have discussed how this simplification also extends to $\lambda^{-1}$ and $N^{-1}$ corrections. Holographically, the origin of all the simplifications is the fact that the entangling surface lies on a maximally symmetric null cone in the bulk. It would be interesting to obtain the expected form of the Renyi entropies on the cone from a direct calculation of the holographic Renyi entropies. In this case we would have to deal with an (in principle) complicated Schwinger-Keldysh representation with Lorentzian conical defects in the bulk \cite{Dong:2016hjy}, because we cannot use the Euclidean representation \cite{Dong:2016fnf,Lewkowycz:2013nqa} for generic regions living on the null cone. Our best guess is that the bulk manifold should still be locally AdS, in such a way as to allow locating the defects on a fixed bulk null cone. If this is the case, the Markov property and the expected expansion of the Renyi entropies would hold for the same reasons discussed in this paper for the entropy. \section*{Acknowledgments} We thank Xi Dong, Aitor Lewkowycz, Juan Maldacena, and Mark Van Raamsdonk for discussions. We would like to dedicate this work to the memory of Joe Polchinski. This work was partially supported by CONICET (PIP grant 11220150100299), CNEA, and Universidad Nacional de Cuyo, Argentina. H.C.
acknowledges an ``It From Qubit'' grant of the Simons Foundation. G.T. is also supported by ANPCYT PICT grant 2015-1224.
\section{Introduction} \label{sec:1} Though the study of topological properties of dendrites from the viewpoint of general topology has been under way for more than three quarters of a century \cite{Char,Kur,Nad}, the attempts to study the geometrical properties of self-similar dendrites remain rather fragmentary. In 1985, M.~Hata \cite{Hata} studied the connectedness properties of self-similar sets and proved that if a dendrite is an attractor of a system of weak contractions in a complete metric space, then the set of its endpoints is infinite. In 1990 Ch.~Bandt showed in his unpublished paper \cite{BS} that the Jordan arcs connecting pairs of points of a post-critically finite self-similar dendrite are self-similar, and the set of possible values for dimensions of such arcs is finite. Jun Kigami in his work \cite{Kig95} applied the methods of harmonic calculus on fractals to dendrites; on the way to this he developed effective approaches to the study of the structure of self-similar dendrites. D.~Croydon in his thesis \cite{C} obtained heat kernel estimates for the continuum random tree and for a certain family of p.c.f. random dendrites in the plane. In our recent works \cite{STV1,STV2,STV3} we considered systems ${\EuScript S}$ of contraction similarities in $\rd$ defined by some polyhedron $P{\subset}\rd$, which we called contractible $P$-polyhedral systems. We proved that the attractor of such a system ${\EuScript S}$ is a dendrite $K$ in $\rd$; we showed that the orders of points $x\in K$ have an upper bound, depending only on $P$; and that the Hausdorff dimension of the set $CP(K)$ of the cut points of $K$ is strictly smaller than the dimension of the set $EP(K)$ of its end points unless $K$ is a Jordan arc.
Now we extend our approach to the case of symmetric $P$-polygonal systems ${\EuScript S}$ and show that the symmetric dendrites $K$ which are the attractors of these systems have a clear and transparent structure: their main tree is a symmetric $n$-pod (Proposition \ref{npod}), all the vertices of the polygon $P$ are end points of $K$, and for $n>5$ each ramification point of $K$ has order $n$ (Proposition \ref{comp1}). We show that the augmented system $\widetilde{\EuScript S}$ contains subsystems ${\EuScript Z}$ which are zippers whose attractors are subdendrites of the dendrite $K$ (Theorem \ref{zipdend}). \subsection{Dendrites} \label{subsec:1} \begin{definition} A {\em dendrite} is a locally connected continuum containing no simple closed curve. \end{definition} In the case of dendrites the order $Ord(p,X)$ of the point $p$ with respect to $X$ is equal to the number of components of the set $X \setminus \{p\}$. {Points of order 1 in a continuum $X$ are called {\em end points} of $X$; the set of all end points of $X$ will be denoted by $EP(X)$. A point $p$ of a continuum $X$ is called a {\em cut point} of $X$ provided that $X \setminus \{p\}$ is not connected; the set of all cut points of $X$ will be denoted by $CP(X)$. Points of order at least 3 are called {\em ramification points} of $X$; the set of all ramification points of $X$ will be denoted by $RP(X)$. } According to \cite[Theorem 1.1]{Char}, for a continuum $X$ the following conditions are equivalent: $X$ is a dendrite; every two distinct points of $X$ are separated by a third point; each point of $X$ is either a cut point or an end point of $X$; each nondegenerate subcontinuum of $X$ contains uncountably many cut points of $X$; the intersection of every two connected subsets of $X$ is connected; $X$ is locally connected and uniquely arcwise connected. \subsection{Self-similar sets} \label{subsec:selfsim} Let $(X, d)$ be a complete metric space.
A mapping $F: X \to X$ is a contraction if $\mathop{\rm Lip}\nolimits F < 1$.\\ The mapping $S: X \ra X$ is called a similarity if $ d(S(x), S(y)) = r d(x, y) $ for all $x, y\in X$ and some fixed $r$. \begin{definition} Let ${\EuScript S}=\{S_1, S_2, \ldots, S_m\}$ be a system of contraction maps on a complete metric space $(X, d)$. A nonempty compact set $K{\subset} X$ is the attractor of the system ${\EuScript S}$ if $K = \bigcup \limits_{i = 1}^m S_i (K)$. \end{definition} The system ${\EuScript S}$ defines the Hutchinson operator $T$ by the equation $T(A) = \bigcup \limits_{i = 1}^m S_i (A)$. By Hutchinson's Theorem, the attractor $K$ is uniquely defined by ${\EuScript S}$, and for any compact set $A{\subset} X$ the sequence $T^n(A)$ converges to $K$. We also call the subset $K {\subset} X$ self-similar with respect to ${\EuScript S}$. Throughout the whole paper, the maps $S_i\in {\EuScript S}$ are supposed to be similarities and the space $X$ to be $\mathbb{R}^2$. {\bf Notation.} $I=\{1,2,...,m\}$ is the set of indices, ${I^*}=\bigcup\limits_{n=1}^{\infty} I^n$ is the set of all finite $I$-tuples, or multiindices ${\bf {j}}=j_1j_2...j_n$.
By ${\bf {i}}{\bf {j}}$ we denote the concatenation of the corresponding multiindices;\\ we say ${\bf {i}}\sqsubset{\bf {j}}$, if ${\bf {i}}=i_1\ldots i_n$ is the initial segment in ${\bf {j}}=j_1\ldots j_{n+k}$ or ${\bf {j}}={\bf {i}}{\bf {k}}$ for some ${\bf {k}}\in{I^*}$;\\ if ${\bf {i}}\not\sqsubset{\bf {j}}$ and ${\bf {j}}\not\sqsubset{\bf {i}}$, ${\bf {i}}$ and ${\bf {j}}$ are {\em incomparable};\\ we write $S_{\bf {j}}=S_{j_1j_2...j_n}=S_{j_1}S_{j_2}...S_{j_n}$ and for the set $A \subset X$ we denote $S_{\bf {j}}(A)$ by $A_{\bf {j}}$; \\ we also denote by $G_{\EuScript S}=\{S_{\bf {j}}, {\bf {j}}\in{I^*}\}$ the semigroup generated by ${\EuScript S}$.\\ The set of all infinite sequences $I^{{\infty}}=\{{\bf \al}=\al_1\al_2\ldots,\ \ \al_i\in I\}$ is called the {\em index space}; and $\pi:I^{{\infty}}\rightarrow K$ is the {\em index map}, which sends a sequence $\bf\al$ to the point $\bigcap\limits_{n=1}^{\infty} K_{\al_1\ldots\al_n}$. \subsection{Zippers} \label{subsec:zippers} The simplest way to construct a self-similar curve is to take a polygonal line and then replace each of its segments by a smaller copy of the same polygonal line; this construction is called a zipper and was studied in \cite{ATK,Tet06}: \begin{definition} Let $X$ be a complete metric space. A system ${\EuScript S} = \{S_1, \ldots, S_m\}$ of contraction mappings of $X$ to itself is called a {\em zipper} with vertices $\{z_0, \ldots, z_m\}$ and signature $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_m)$, $\varepsilon_i \in\{0,1\}$, if for $ i = 1, \ldots, m$, $S_i (z_0) = z_{i-1+\varepsilon_i}$ and $S_i (z_m) = z_{i-{\varepsilon_i}}$.\end{definition} A zipper ${\EuScript S}$ is a {\em Jordan zipper} if and only if one (and hence every) of the structural parametrizations of its attractor establishes a homeomorphism of the interval $J = [0, 1]$ onto $K({\EuScript S})$. \begin{theorem}\label{Jordan} Let ${\EuScript S} = \{S_1, \ldots, S_m\}$ be a zipper with vertices $\{z_0, \ldots
, z_m\}$ in a complete metric space $X$ such that all contractions $S_j : X\to X$ are injective. If for arbitrary $i, j \in I$ the set $K_i\cap K_j$ is empty for $|i-j| > 1$ and is a singleton for $|i-j| = 1$, then every structural parametrization $\fy: [0, 1]\to K({\EuScript S})$ of $K({\EuScript S})$ is a homeomorphism and $K({\EuScript S})$ is a Jordan arc with endpoints $z_0$ and $z_m$.\end{theorem} \section{Contractible $P$-polygonal systems} \label{sec:2} Let $P$ be a convex polygon in ${\mathbb{R}}^2$ and $V_P=\{A_1, \ldots, A_{n_P}\}$ be the set of its vertices, where $n_P=\#V_P$.\\ Consider a system of contracting similarities ${\EuScript S} = \{S_1, \ldots, S_m\}$ which possesses the following properties:\\ {\bf(D1)}\ For any $k \in I$, the set $P_k = S_k (P)$ is contained in $P$; \\ {\bf(D2)}\ For any $i\neq j$,\ $i, j \in I$, $P_i \bigcap P_j$ is either empty or a common vertex of $P_i$ and $P_j$;\\ {\bf(D3)}\ For any $A_k\in V_P$ there is a map $S_i\in{\EuScript S}$ and a vertex $A_l\in V_P$ such that $S_i(A_l)= A_k$;\\ {\bf(D4)}\ The set ${{\widetilde P}} = \bigcup \limits_{i = 1}^m P_i$ is contractible. \begin{definition}\label{pts} The system $(P,{\EuScript S})$ satisfying the conditions {\bf D1-D4} is called a contractible $P$-polygonal system of similarities. \end{definition} Applying the Hutchinson operator $T (A)=\bigcup\limits_{i\in I} S_i(A)$ of the system ${\EuScript S}$ to the polygon $P$, we get the set ${{\widetilde P}}^{(1)} = \bigcup \limits_{i\in I} P_i$. Taking the iterations of $T$, we define ${{\widetilde P}}^{(n + 1)} = T({\widetilde P}^{(n)})$ and get a nested family of contractible compact sets ${{\widetilde P}}^{(1)}{\supset} {{\widetilde P}}^{(2)}{\supset} \ldots {\supset} {{\widetilde P}}^{(n)}{\supset}\ldots$. By Hutchinson's theorem, the intersection of this nested sequence is the attractor $K$ of the system ${\EuScript S}$.
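The nested iteration $\widetilde P^{(n+1)} = T(\widetilde P^{(n)})$ can be mimicked numerically by applying the Hutchinson operator to a finite point set. A minimal sketch in Python (the two similarities below generate Hata's tree-like set, a classical example of a self-similar dendrite, rather than one of the $P$-polygonal systems considered here):

```python
import numpy as np

c = 0.4 + 0.3j  # |c| = 0.5, so both maps are contracting similarities

def f1(z):
    """First similarity: rotation-reflection with ratio |c|, fixed point 0."""
    return c * np.conj(z)

def f2(z):
    """Second similarity: ratio 1 - |c|^2 = 0.75, fixed point 1."""
    return (1 - abs(c) ** 2) * np.conj(z) + abs(c) ** 2

# Deterministic iteration of the Hutchinson operator T(A) = f1(A) U f2(A),
# starting from the set of the two fixed points {0, 1}.
A = np.array([0.0 + 0j, 1.0 + 0j])
for _ in range(15):
    A = np.concatenate([f1(A), f2(A)])

print(len(A))  # 2 * 2**15 = 65536 points approximating the attractor
```

Since $|f_2(z)| \le 0.75|z| + 0.25$, the closed unit disk is invariant under both maps, so all iterates stay bounded while $T^n(A)$ converges to the attractor in the Hausdorff metric.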
The following theorem was proved by the authors in \cite{STV1,STV2,STV3}: \begin{theorem}\label{main} Let ${\EuScript S}$ be a contractible $P$-polygonal system, and let $K$ be its attractor. Then $K$ is a dendrite.\end{theorem} Since $K$ is a dendrite, for any vertices $A_i,A_j \in V_P$ there is a unique Jordan arc ${\ga}_{ij}{\subset} K$ connecting $A_i,A_j$. The set $\hat\ga=\bigcup\limits_{i\neq j}\ga_{ij}$ is a subcontinuum of the dendrite $K$, all of whose end points are contained in $V_P$, so $\hat\ga$ is a finite dendrite or topological tree \cite[{\bf A.17}]{Char}. \begin{definition}\label{defmt} The union $\hat\ga=\bigcup\limits_{i\neq j}\ga_{ij}$ is called {\em the main tree} of the dendrite $K$. The ramification points of $\hat\ga$ are called {\em main ramification points} of the dendrite $K$. \end{definition} We consider $\hat\ga$ as a topological graph whose vertex set $V_{\hat\ga}$ is the union of $V_P$ and the set of ramification points $RP(\hat\ga)$, while the edges of $\hat\ga$ are the components of $\hat\ga\mmm V_{\hat\ga}$. The following Proposition \cite{STV1} shows the relation between the vertices of $P$ and end points, cut points and ramification points of $\hat\ga$. \begin{proposition}\label{comp} a) For any $x\in\hat\ga$, $\hat\ga=\bigcup\limits_{ j=1}^n\ga_{A_jx}$.\\ b) $A_i$ is a cut point of $\hat\ga$ if there are $j_1,j_2$ such that $\ga_{j_1i}\cap\ga_{j_2i}=\{A_i\}$;\\ c) the only end points of $\hat\ga$ are the vertices $A_j$ such that $A_j\notin CP(\hat\ga)$;\\ d) if $\#\pi^{-1}(A_i)=1$, then $Ord(A_i,K)\le n-1$, otherwise $$Ord(A_i,K)\le (n-1)\left(\left\lceil{\dfrac{\te_{max}}{\te_{min}}}\right\rceil-1\right),$$ where $\te_{max},\te_{min}$ are the maximal and minimal values of the vertex angles of $P$.
\end{proposition} It was proved in \cite{STV1,STV2,STV3} that each cut point of the dendrite $K$ is contained in some image $S_{\bf {j}}(\hat\ga)$ of the main tree: \begin{theorem}\label{order} The following statements are true:\\ a) $CP(K){\subset}\bigcup\limits_{{\bf {j}}\in I^*}S_{\bf {j}}(\hat\ga)$.\\ b) For each cut point $y\in K\mmm\bigcup\limits_{{\bf {j}}\in I^*}S_{\bf {j}}(V_P)$, $\#\pi^{-1}(y)=1$ and there is $S_{\bf {i}}$ and $x\in\hat\ga$, such that $y=S_{\bf {i}}(x)$ and $Ord(y,K)=Ord(x,{\hat\ga})$.\\ c) If $y\in\bigcup\limits_{{\bf {j}}\in I^*}S_{\bf {j}}(V_P)$ and $\#\pi^{-1}(y)=s$, there are multiindices ${\bf {i}}_k, k=1,..,s$ and vertices $x_1,...,x_s$, such that for any $k$, $S_{{\bf {i}}_k}(x_k)=y$ and for any $l\neq k$, $S_{{\bf {i}}_k}(P)\cap S_{{\bf {i}}_l}(P)=\{y\}$\\ and $ Ord(y,K)=\sum\limits_{k=1}^s Ord(x_k,{\hat\ga})\le(n_P-1)\left(\left\lceil{\dfrac{2\pi}{\te_{min}}}\right\rceil-1\right)$. \end{theorem} Moreover, the dimension of the set of end points is greater than that of the set of cut points unless $K$ is a Jordan arc \cite{STV2,STV3}: \begin{theorem}\label{equal} Let $(P,{\EuScript S})$ be a contractible $P$-polyhedral system and $K$ be its attractor. (i) $\dim_H(CP(K))=\dim_H(\hat\ga)\le \dim_H EP(K)=\dim_H(K)$; (ii) $\dim_H(CP(K))=\dim_H(K)$ iff $K$ is a Jordan arc. \end{theorem} \section{Symmetric polygonal systems} \label{sec:3} \begin{definition} \label{sps} Let $P$ be a polygon and $G$ be a non-trivial symmetry group of the polygon $P$. Let ${\EuScript S}$ be a contractible $P$-polygonal system such that for any $g \in G$ and any $S_i\in{\EuScript S}$, there are $g' \in G$ and $S_j\in{\EuScript S}$ such that $g\cdot S_i=S_j\cdot g'$. Then the system of mappings ${\EuScript S} = \{S_i , i = 1, 2, \ldots, m\}$ is called a contractible { $G$-symmetric} $P$-polygonal system.
\end{definition} \begin{center} \includegraphics[width=.23 \textwidth]{trispts.png}\quad \includegraphics[width=.22 \textwidth]{7_penta0.jpg}\quad\includegraphics[width=.18\textwidth]{8_squ3ord.jpg}\quad \includegraphics[width=.2 \textwidth] {3_hex2.jpg} \end{center} For convenience we will call such systems symmetric polygonal systems, or SPS, if this does not cause ambiguity in the choice of $P$ and $G$. \begin{theorem} \label{main1} The attractor $K$ of a { symmetric} polygonal system and its main tree $\hat\ga$ are symmetric with respect to the group $G$. \end{theorem} \begin{proof} Let ${\EuScript S}=\{S_1,\ldots, S_m\}$. Take $g\in G$. The map $g^*:{\EuScript S} \to {\EuScript S}$, sending each $S_i$ to the respective $S_j$, is a permutation of ${\EuScript S}$, therefore $g({\bigcup\limits}_{i=1}^m S_i(P))={\bigcup\limits}_{i=1}^m S_i(P)$, or $g({\widetilde P})={\widetilde P}$. Moreover, it follows from Definition \ref{sps} that for any ${\bf {i}}=i_1 \ldots i_k$ there are ${\bf {j}}=j_1 \ldots j_k$ and $g'\in G$ such that $g\cdot S_{\bf {i}}=S_{\bf {j}}\cdot g'$. Therefore for any $g$, $g({\widetilde P}^k)={\widetilde P}^k$. Since $K=\bigcap\limits_{k=1}^{\infty} {\widetilde P}^k$, $g(K)=K$. Since $g$ preserves the set of vertices of $P$, $g(\hat\ga)=\hat\ga$.\end{proof} \begin{center} \includegraphics[width=.22 \textwidth]{trisptsK.png}\quad \includegraphics[width=.22 \textwidth] {7_penta.jpg}\quad \includegraphics[width=.18 \textwidth] {4_squ3ord00.jpg}\quad \includegraphics[width=.22 \textwidth] {hexasptsK.png} \end{center} \begin{corollary} If ${\EuScript S}$ is a {$G$-symmetric} $P$-polygonal system then ${\EuScript S}^{(n)}=\{S_{\bf {j}}, {\bf {j}}\in I^n\}$ is a {$G$-symmetric} $P$-polygonal system.\end{corollary} \begin{corollary} Suppose ${\EuScript S}=\{S_1,\ldots, S_m\}$ is a {$G$-symmetric} $P$-polygonal system, $K$ is the attractor of ${\EuScript S}$, $g_1,\ldots,g_m\in G$ and ${\EuScript S}'=\{S_1g_1,\ldots,S_mg_m\}$.
Then $K$ is the attractor of the system ${\EuScript S}'$. \end{corollary} \begin{proof}Let $K'$ be the attractor of the system ${\EuScript S}'$ and put ${\widetilde P}' = {\bigcup\limits}_{i=1}^m (S_i \circ g_i (P))$. Observe that for any $i$, $g_i (P)=P$, therefore ${\widetilde P}'={\widetilde P}$ and ${{\widetilde P}}^{'(k)}={{\widetilde P}}^{(k)}$. Then $K'=\bigcap\limits_{k=1}^{\infty} {\widetilde P}^{'(k)} =K$.\end{proof} \begin{definition} Let ${\EuScript S}=\{S_1,\ldots, S_m\}$ be a {$G$-symmetric} $P$-polygonal system. The system $ \widetilde{\EuScript S}=\{S_i\cdot g, S_i\in {\EuScript S}, g\in G\}$ is called the {\em augmented system} for ${\EuScript S}$.\end{definition} The system $ \widetilde{\EuScript S}$ has the same attractor $K$ as ${\EuScript S}$ and generates the augmented semigroup $G(\widetilde{\EuScript S})$ consisting of all maps of the form $S_{\bf {j}}\circ g_i$, where $g_i\in G$. \subsection{The case of regular polygons} \label{subsec:3} \begin{proposition}\label{npod} Let $P$ be a regular $n$-gon and $G$ be the rotation group of $P$. Then the center $O$ of $P$ is the only ramification point of the main tree and $Ord(O,\hat\ga)=n$. \end{proposition} \begin{center} \includegraphics[width=.22 \textwidth]{6_triangle29.jpg}\quad\includegraphics[width=.22 \textwidth]{7_pentaMT.jpg}\quad \includegraphics[width=.22 \textwidth]{hexa001.png} \end{center} \begin{proof} Consider the main tree $\hat\ga$. It is a fine finite system \cite{Kig95}, which is invariant with respect to $G$. Let $f$ be the rotation of $P$ by the angle $2\pi/n$.\\ Let $V$ and $E$ be the numbers of vertices and edges, respectively, of the main tree. For any edge $\lambda \subset \hat\ga$, $f(\lambda)\cap\lambda$ is either empty, or is equal to $\{O\}$, and in the latter case $O$ is the endpoint of both $\la$ and $f(\la)$. In each case all the edges $f^k(\la)$ are different. Therefore $E$ is a multiple of $n$.
\\ If $A'$ is a vertex of $\hat\ga$ and $A'\neq O$, then all the points $f^k(A'), k=1,...,n$ are different, so the number of vertices of $\hat\ga$ different from $O$ is also a multiple of $n$.\\ Since $\hat\ga$ is a tree, $V = E +1$; as $E$ is a multiple of $n$, $V$ cannot be a multiple of $n$, so the set of vertices must contain $O$, which is the only invariant point of $f$. Denote the unique subarc of $\hat\ga$ with endpoints $O$ and $A_k$ by $ \ga_k$. Then for any $k=1,...,n$, $\ga_k=f^k(\ga_n)$. By Proposition \ref{comp}, $\bigcup \limits _{k=1} ^n \ga_k = \hat\ga$. Thus the center $O$ is the only ramification point of $ \hat\ga$ and $Ord(O, \hat\ga) = n$.\end{proof} \begin{corollary} All vertices of the polygon $P$ are end points of the main tree. \end{corollary} \begin{proof} For any $k=1,...,n$ there is a unique arc $\ga_k$ of the main tree meeting the vertex $A_k$ of the polygon $P$, so $Ord(A_k, \hat\ga) = 1$ by Proposition \ref{comp}. Since all the vertex angles of $P$ are equal, for each vertex $A_k$ of $P$ there is a unique $S_k\in{\EuScript S}$ such that $P_k=S_k(P)\ni A_k$, so $\#\pi^{-1}(A_k)=1$ and by Theorem \ref{order}, $Ord(A_k, K)=Ord(A_k, \hat\ga) = 1$. Then all vertices of the polygon $P$ are end points of the main tree as well as of the dendrite $K$.\end{proof} \begin{lemma}\label{gaon} Each arc $\ga_{k}$ is the attractor of a Jordan zipper. \end{lemma} \begin{proof} We prove the statement for the arc $\ga_n$, because for $\ga_k=f^k(\ga_n)$ it follows automatically. If $n>3$, there is a similarity $S_0\in{\EuScript S}$ whose fixed point is $O$. Indeed, there is some $S_0\in{\EuScript S}$ for which $P_0=S_0(P)\ni O$. The point $O$ cannot be a vertex of $P_0$, otherwise the polygons $f(P_0)$ and $P_0$ would intersect in more than one point. Therefore $f(P_0)=P_0$ and $S_0(O)=O$. Observe that for any two vertices $A_i,A_j$ of $P$, the arc $\ga_{A_iA_j}$ is the union $\ga_i\cup\ga_j$.
There is a unique chain of subpolygons $P_{l_k}=S_{l_k}(P), k=0, \ldots,s$ connecting $P_0$ and $P_n$ and containing $\ga_n$, where $S_ {l_0}=S_0$ and $S_ {l_s}=S_n$. For each $k=1,\ldots,s$, there are $i_k$ and $j_k$ such that $\ga_n\cap P_{l_k}=S_{l_k}(f^{i_k}(\ga_n)\cup f^{j_k}(\ga_n))$. Therefore $$\ga_n=\bigcup\limits_{k=1}^sS_{l_k}\left(f^{i_k}(\ga_n)\cup f^{j_k}(\ga_n)\right)\cup S_0(\ga_n).$$ The arcs on the right-hand side satisfy the conditions of Theorem \ref{Jordan}, therefore the system $$\{S_0,S_{l_1}f^{i_1},S_{l_1}f^{j_1},...,S_{l_s}f^{i_s},S_{l_s}f^{j_s}\}$$ is a Jordan zipper whose attractor is a Jordan arc with endpoints $O$ and $A_n$. If $n=3$, it is possible that for some $l_1$, $O$ is a vertex of a triangle $S_{l_1}(P)$ and there is a unique chain of subpolygons $P_{l_k}=S_{l_k}(P), k=1, \ldots,s$, where $S_ {l_s}=S_3$. Repeating the same argument, we get that the system $\{S_{l_1}f^{i_1},S_{l_1}f^{j_1},...,S_{l_s}f^{i_s},S_{l_s}f^{j_s}\}$ is a Jordan zipper whose attractor is a Jordan arc with endpoints $O$ and $A_3$.\end{proof} \begin{corollary} If $P$ is a regular $n$-gon and the symmetry group $G$ of the system ${\EuScript S}$ is the dihedral group $D_n$, then $\ga_{OA_i}$ is a line segment and the set of cut points of $K$ has dimension 1. \end{corollary} \begin{proof} Since $D_n$ contains a symmetry with respect to the straight line containing $O$ and $A_n$, $\ga_n$ itself is a straight line segment.\end{proof} From the above statements we see that Proposition \ref{comp} and Theorem \ref{order}, in the case of $G$-symmetric polygonal systems with $G$ being the rotation group of order $n$ and $P$ a regular $n$-gon, acquire the following form: \begin{proposition}\label{comp1} Let ${\EuScript S}$ be a $G$-symmetric $P$-polygonal system of similarities, where $P$ is a regular $n$-gon and $G$ contains the rotation group of $P$.
Then:\\ a) $V_p{\subset} EP(\hat\ga){\subset} EP(K)$;\\ b) For each cut point $y\in K\mmm\bigcup\limits_{{\bf {j}}\in I^*}S_{\bf {j}}(V_P)$, either $y=S_{\bf {i}}(O)$ for some ${\bf {i}}\in I^*$ and $Ord(y,K)=n$, or $Ord(y,K)=2$.\\ c) For any $y\in\bigcup\limits_{{\bf {j}}\in I^*}S_{\bf {j}}(V_P)$ there is a unique $x\in \bigcup\limits_{i\in I}S_i(V_P)$ such that $$Ord(y,K)=Ord(x,K)=\#\pi^{-1}(y)=\#\pi^{-1}(x)=\#\{i\in I: x \in S_i(V_P)\} \le 1+\left\lceil{\dfrac{4}{n-2}}\right\rceil$$ \end{proposition} \begin{proof}All vertex angles of $P$ are $\te=\pi-\dfrac{2\pi}{n}$, so $ \left\lceil{\dfrac{2\pi}{\te_{min}}}\right\rceil-1=1+\left\lceil{\dfrac{4}{n-2}}\right\rceil$. a) Take a vertex $A_i\in V_P$. There is a unique $j\in I$ such that $A_i\in S_j(V_P)$. For that reason $\#\pi^{-1}(A_i)=1$. Since $S_j(P)$ cannot contain the center $O$, $\#(S_j(V_P)\cap\hat\ga)=2$, therefore by Proposition \ref{comp}, $Ord(A_i,\hat\ga)=1$ and $Ord(A_i,K)=1$, so $A_i\in EP(K)$. b) If for some ${\bf {j}}\in I^*$, $y=S_{\bf {j}}(O)$, then $Ord(y,K)=n$. Since for any point $x\in CP(\hat\ga)\mmm\{O\}$, $Ord(x,\hat\ga)=2$, the same is true for $y=S_{\bf {j}}(x)$ for any ${\bf {j}}\in I^*$. c) Now let ${\EuScript C}=\{C_1,...,C_N\}$ be the full collection of those points $C_k\in\bigcup\limits_{i\in I}S_i(V_P)$ for which $s_k:=\#\{j\in I: S_j(V_P)\ni C_k\}\ge 3$. By Theorem \ref{order}, $\#\pi^{-1}(C_k)=s_k$ and $Ord(C_k,K)=s_k$, while $s_k\le 1+\left\lceil{\dfrac{4}{n-2}}\right\rceil$.\\ Then, if $y=S_{\bf {j}}(C_k)$ for some ${\bf {j}}\in I^*$ and $C_k\in{\EuScript C}$, then $\#\pi^{-1}(y)=s_k=Ord(y,K)$. \end{proof} Applying Proposition \ref{comp1} to different $n$, we get the possible ramification orders for regular $n$-gons:\\ 1. If $n \ge 6$ then all ramification points of $K$ are the images $S_{\bf {j}}(O)$ of the centre $O$ and have order $n$.\\ 2.
If $n = 4$ or $5$ then there is a finite set of ramification points $x_1,...,x_r$, whose order is equal to $3$, such that each $x_k$ is a common vertex of polygons $S_{k1}(P), S_{k2}(P), S_{k3}(P)$. Then each ramification point is represented either as $S_{\bf {j}}(O)$ and has order $n$, or as $S_{\bf {j}}(x_k)$ and has order 3.\\ 3. If $n = 3$ the centre is a ramification point of order 3 and those ramification points which are not images of $O$ have order less than or equal to $5$. \subsection{Self-similar zippers, whose attractors are dendrites} \label{subsec:zip} \begin{theorem}\label{zipdend} Let $({\EuScript S}, P)$ be a $G$-symmetric $P$-polygonal system of similarities. Let $A,B$ be two vertices of the polygon $P$ and $L$ be the line segment $[A,B]$. If ${\EuScript Z}=\{S_1', \ldots, S_k'\}$ is a family of maps from $\widetilde{\EuScript S}$ such that $\tilde L= {\bigcup\limits}_{i=1}^k S_i'(L)$ is a polygonal line connecting $A$ and $B$, then the attractor $K_{\EuScript Z}$ of ${\EuScript Z}$ is a subcontinuum of $K$. If for some subpolygon $P_j$, $\widetilde L\cap P_j$ contains more than one segment, then $K_{\EuScript Z}$ is a dendrite. \end{theorem} \begin{center} \includegraphics[width=.3 \textwidth]{5_pd49.png} \qquad \qquad \includegraphics[width=.3 \textwidth]{5a49.jpg}\\ {\tiny The G-SPS $({\EuScript S}, P)$ with polygonal lines $\widetilde L$ joining the vertices $A, B$, and the attractors of ${\EuScript S}$ and of the zipper ${\EuScript Z}$.}\end{center} \begin{proof} Since ${\EuScript Z}{\subset}\widetilde{\EuScript S}$, the attractor $K_{\EuScript Z}$ is a subset of $K$. The system ${\EuScript Z}$ is a zipper with vertices $A,B$, therefore $K_{\EuScript Z}$ is a continuum, and hence a subdendrite of the dendrite $K$. Let $\ga_{AB}$ be the Jordan arc connecting $A$ and $B$ in $K_{\EuScript Z}$, and, therefore, in $K$. By the proof of Lemma \ref{gaon}, $\ga_{AB}=\ga_{OA}\cup\ga_{OB}$.
If the maps $S_{i_1}', S_{i_2}'$ send $L$ to two segments belonging to the same subpolygon $P_{i_0}$, then $S_{i_1}'(\ga_{AB})\bigcup S_{i_2}'(\ga_{AB})$ is equal to $S_{i_1}'(\ga_{OA}\bigcup \ga_{OB})\bigcup S_{i_2}'(\ga_{OA}\bigcup\ga_{OB})$. The set $\{S_{i_1}'(A),S_{i_1}'(B), S_{i_2}'(A), S_{i_2}'(B)\} $ contains at least 3 different points, therefore $S_{i_1}'(O)$ is a ramification point of $K_{\EuScript Z}$ of order at least 3. \end{proof} \begin{corollary} Let $u_i$ be the number of segments of the intersection $\tilde L\cap P_i$ and $u=\max u_i$. Then the maximal order of ramification points of $K_{\EuScript Z}$ is greater than or equal to $\mathop{\rm min}\nolimits(u+1,n)$. \end{corollary} \begin{proof} Suppose $\widetilde L\bigcap P_i$ contains $u$ segments of $\widetilde L$. Then this intersection contains at least $u+1$ vertices of $P_i$ if $u<n$, and contains $n$ vertices of $P_i$ if $u= n-1$ or $n$. Then the set $K_{\EuScript Z}\cap P_i$ contains at least $u+1$ (resp. exactly $n$) different images of the arc $\ga_{OA}$.\end{proof} \input{referenc} \end{document}
\section{Introduction} Semidefinite programming (SDP) has become a key tool in solving numerous problems across operations research, machine learning, and artificial intelligence. While there are too many applications of SDP to present even a representative sample, inference in graphical models \cite{ExpFamilies,erdogdu2017inference}, multi-camera computer vision \cite{torr2003solving}, and applications of polynomial optimization \cite{Parrilo,lasserre2015introduction} in power systems \cite{7024950} stand out. Under some assumptions \cite{Madani2014}, the rank at the optimum of the SDP relaxation is bounded from above by the tree-width of a certain hypergraph, plus one. When a rank-one solution is not available, it is often not needed \cite{Marecek2017}, as one may instead construct a stronger SDP relaxation. Penalization of the objective is a popular approach for obtaining low-rank solutions, at least in theory \cite{recht2010guaranteed,Lemon,zhou19dis,Fawzi2019}. Notice that, without a further penalization, an interior-point method for SDP provides a solution on the boundary of the feasible set that corresponds to the optimum of the highest rank whenever there are optima of multiple ranks available. The use of a penalization provides a counter-balance in this respect. In practice, however, the penalties are often ignored, as it is believed that their computation is too demanding for large-scale problems and does not guarantee low-rank solutions in general. An alternative approach develops numerical optimization methods that seek \emph{a priori} low-rank solutions. This approach, widely attributed to \cite{BurerMonteiro}, considers a factorization of a semidefinite matricial variable $X = V\cdot V^\top$ with $V\in \R^{n\times k}$ for increasing $1 \le k \ll n$. In general, the resulting problems are non-convex. Early analyses required determinant-based penalty terms \cite{burer2005local}, although no efficient implementations were known.
Under mild assumptions, for large-enough $k$, there is a unique optimum over such a factorization even without a penalization, and it recovers the optimum of the initial SDP problem \cite{boumal2016non}. For smaller values of $k$, it is known that the low-rank relaxation achieves $\mathcal O(1/k)$ relative error \cite{Montanari}. Much more elaborate analyses \cite{erdogdu2018convergence} are now available. Especially when combined with efficient gradient computation, e.g. within low-rank coordinate descent \cite[e.g.]{Marecek2017}, this approach can tackle sufficiently large instances and is increasingly popular. In this paper, we aim to develop a method combining both approaches, i.e., utilize an efficient low-rank-promoting penalty in the Burer-Monteiro approach. We present efficient first-order numerical algorithms for solving the resulting penalized problem, with (almost) linear-time per-iteration complexity. This makes the combined approach applicable to a wide range of practical problems. In a case study, we focus on certain combinatorial optimization problems and inference in graphical models. We show that despite the non-convexity of the penalized problem, our approach successfully recovers rank-one solutions in practice. We compare our solutions against non-penalized SDP, belief propagation, and the state-of-the-art branch-and-bound solvers of~\cite{krislock2014improved}. \paragraph{Contribution.} Our contributions can be summarized as follows. We show \begin{enumerate} \item convergence properties of optimization methods employing a wide class of penalty functions that promote low-rank solutions; \item linear-time algorithms for computing the gradient of these penalty functions; \item computational results on the penalized SDP relaxation of maximum a posteriori (MAP) estimates in Markov random fields (MRF), which considerably improve upon the results obtained by interior-point methods and randomized rounding.
\end{enumerate} This allows for both well-performing and easy-to-analyze low-rank methods for SDPs coming from graphical models, combinatorial optimization, and machine learning. \paragraph{Paper structure.} This paper is organized as follows. First, we define the conic optimization problem together with a penalized form with a list of suitable penalization functions. Next, we present theoretical guarantees for solution recovery. These extend known results for solution recovery to the penalty case. Then, we consider the MAP problem in Markov Random Fields (MRF) and introduce an iterative procedure for it, together with a first-order method for solving a subproblem at each step; we also show how to compute the gradients efficiently. Finally, we provide computational experiments for different inference problems in MRF. \section{Background}\label{sec:setup} SDP is the following conic optimization problem: \begin{align} \min_{X\in \mathbb{S}^n_{+}} \ & \sum_{i \in I} f_i( \tr XS_i ) \tag{SDP} \label{eq:sdp} \\ \text{s.t.:\;} & g_j( \tr X C_j ) \leq 0, \qquad j \in J, \notag \end{align} where $X\in \mathbb{S}^n_{+}$ denotes that the $n \times n$ matrix variable $X$ is symmetric positive semidefinite, $I$ and $J$ are finite index sets, each $f_i$ and $g_j$ is a convex function $\R \to \R$, and $C_j \in \R^{n\times n}$ and $S_i\in \R^{n\times n}$ are constant matrices. In the context of combinatorial optimization, one may also consider even more powerful methods such as the Sum-of-Squares hierarchies of \cite{Parrilo}. However, even an SDP relaxation, which is, in fact, the first level of this hierarchy, may be too computationally challenging. It is usually solved by interior-point methods working in a dimension quadratic in the number of variables, and thus it becomes intractable even for medium-scale problems (with a few thousand variables).
The problem becomes even less scalable for higher orders of the hierarchy, since it requires one to solve an SDP with $n^{\Theta (d)}$ variables, where $d$ is the level of the hierarchy. \subsection{Low-rank Relaxations and Penalized Problem} First, let us formally define our notion of a penalty function and explain related work on first-order methods for SDP. \begin{assumption} Eq.~\eqref{eq:sdp} has an optimum solution with rank~$r$. \end{assumption} Let us consider the following proxy problem: \begin{align} P_{q,\lambda} \doteq & \min_{V\in R^{n\times q}} \sum_{i \in I} f_i( \tr(V^\top S_iV) ) + R_{q,\lambda}(V) \label{eq:prox} \tag{P-SDP}\\ & \;\text{s.t.: } g_j( \tr(V^\top C_j V) ) \leq 0, \; j \in J. \nonumber \end{align} Here $q > r$ and $R_{q,\lambda}(V)$ satisfies the following: \begin{definition}[Strict penalty function]\label{def:prop} A function $\mathcal{R}_{q,\lambda}(V): \R^{n \times q} \to \R$ is a \emph{penalty function} that promotes low-rank solutions if for some integer $q' \le q$ and a multiplier $\lambda \in \R^+$: \begin{align*} \lim_{\lambda \to \infty} \mathcal{R}_{q,\lambda}(V) = \begin{cases} \; \textrm{0} & \; \mbox{ if } \rank(V) < q',\\ \; \infty & \; \mbox{ if } \rank(V) = q. \end{cases} \end{align*} Moreover, if $q' = q$ and $\rank(V) < q$ implies $\mathcal{R}_{q,\lambda}(V) = 0$ for all $\lambda$, then $\mathcal{R}_{q,\lambda}(V)$ is a \emph{strict penalty function}. \end{definition} We use the word {\it penalty} instead of penalty function that promotes low-rank solutions where there is no risk of confusion. This notion of a penalty is rather wide. When multiplied by $\lambda$, a determinant is a prime example. One may also consider functions of the following norms and quasi-norms. \begin{enumerate} \item The nuclear norm: \begin{equation} \|X\|_* = \sum_i\sigma_i, \label{eq::trace_norm} \end{equation} where $\sigma_i$ is the $i$-th singular value, cf. \cite{Lemon}. The norm is also known as the trace norm, Schatten 1-norm, and Ky Fan $n$-norm.
As shown by \cite{Srebro}, in the method of \cite{BurerMonteiro} one can benefit from a bi-Frobenius reformulation: \begin{align*} \|X\|_{*} & = \min_{\substack{U\in\mathbb{R}^{n\!\times\! d} \\ V\in\mathbb{R}^{n\!\times\! d}:X=UV^{T}}}\|U\|_{F}\|V\|_{F} \\ & = \min_{U,V:X=UV^{T}}\frac{\|U\|^{2}_{F}+\|V\|^{2}_{F}}{2}. \end{align*} There are also truncated \cite{HuTruncated} and capped \cite{SunTruncated} variants. \item The Schatten-${p}$ quasi-norm for $p > 0$: \begin{align} \|X\|_{S_{p}}=\left(\sum^{n}_{i=1}\sigma^{p}_{i}(X)\right)^{1/p}, \end{align} where $\sigma_{i}(X)$ denotes the $i$-th singular value of $X$. \item A smoothed variant of the Schatten-${p}$ quasi-norm, due to \cite{Pogodin}, for $p, \varepsilon > 0$: \begin{equation} \schatteneps{X}{p}^p = \sum\limits_{i = 1}^{n} \left(\sigma_i^2 + \varepsilon\right)^{\frac{p}{2}} = \tr\left(X^\top X+\varepsilon I\right)^{\frac{p}{2}}. \label{eq::schatten_norm_smoothed} \end{equation} \item The tri-trace quasi-norm of \cite{Shang2018}: \begin{align} \|X\|_{\textup{Tri-tr}} = \min_{X=UV\Upsilon^\top}\|U\|_{*}\|V\|_{*}\|\Upsilon\|_{*}, \end{align} which is also the Schatten-1/3 quasi-norm. \item The bi-nuclear (BiN) quasi-norm of \cite{Shang2018}: \begin{align} \|X\|_{\textup{BiN}} = \min_{X=UV^{T}}\|U\|_{*}\|V\|_{*}, \end{align} which is also the Schatten-${1/2}$ quasi-norm. \item The Frobenius/nuclear quasi-norm of \cite{Shang2018}: \begin{align} \|X\|_{\textup{F/N}} = \min_{X=UV^{T}}\|U\|_{*}\|V\|_{F}, \end{align} which is also the Schatten-${2/3}$ quasi-norm. \end{enumerate} We also note that there has been considerable interest in the analysis of low-rank approaches without penalization, especially in matrix-completion applications. Much of the analysis goes back to the work of \cite{keshavan2010matrix}; for further important contributions, see \cite{arora2012computing}. \subsection{Entropy Viewpoint} One can view the penalty functions introduced above from an entropy-penalization perspective.
This is useful not only from a methodological standpoint, but also from a computational one. To this end, we consider the Tsallis entropy: \[ \ent^T_\alpha(X) = \frac{1}{1-\alpha} \left( \frac{\tr{X^\alpha}}{(\tr{X})^\alpha} - 1\right). \] The Tsallis entropy is crucial in our study because it generalizes many popular penalties considered earlier. The Schatten $p$-norm coincides with the Tsallis entropy $\ent^T_p$ over the set of matrices with a fixed trace norm, so that the tri-trace and bi-nuclear quasi-norms (items 2--6 above) are covered as well. The Log-Det function, $- \log \det X$, which is also used in low-rank SDP, is, up to an additive constant, the relative (Shannon) entropy taken with respect to the identity matrix, while the Renyi ($\ent^R$) and von Neumann ($\ent^N$) entropies, \begin{align*} & \ent^R_\alpha(X) = \frac{\log \tr (X/\tr X)^\alpha}{1-\alpha} \text{ and }\\ & \ent^N(X) = - \tr(\log(X/\tr X) \cdot X/\tr(X)), \end{align*} respectively, can also be used as penalties promoting a low-rank solution. To the best of our knowledge, neither the Renyi, von Neumann, nor Tsallis entropies have been studied in the context of low-rank SDP. \section{Exact Recovery}\label{sec:theory} Let us now present a unified view of the penalties and their properties: \begin{lemma} Any of: \begin{enumerate} \item $\lambda \det ( X )$; \item $\lambda \sigma_{q}(X)$, where $\sigma_{i}(X)$ denotes the $i$-th singular value of $X$; \item the Tsallis, Renyi, and von Neumann entropies defined on the last $n-q+1$ singular values; \item $\lambda \max \left\{ 0, \frac { \|X\|_{*} }{ \max \{ \sigma_{\min}(X), \sigma_{q}(X) \} } - q \right\},$ \end{enumerate} is a penalty function that promotes low-rank solutions. Moreover, penalties 1--3 are strict. \label{lem1} \end{lemma} \begin{proof} \emph{Sketch}. (1.) The proof is by simple algebra. (2.) If $\sigma_{q}(X)$ is 0, we know the rank is $q - 1$ or less. Otherwise, for large values of $\lambda$, the value of the penalty goes to infinity, and hence $q' = q$.
(3.) The definition of an entropy assumes that $S(0, \ldots, 0) = 0$; thus, all three entropies are strict penalty functions by definition. (4.) First, consider the case where all non-zero singular values are equal. In that case, $ \|X\|_{*} / \sigma_{\min}(X) = \rank(X)$, and subtracting $q$ results either in a non-positive number when the rank is less than $q$, or in a positive number otherwise. If the singular values are not equal, $ \|X\|_{*} / \sigma_{\min}(X) $ provides an upper bound on the rank of $X$, which can be improved as suggested. Using the upper bound results in the value of the penalty tending to infinity for ranks between $q'$ and $q$ in the limit of large $\lambda$. \end{proof} Crucially, under mild assumptions, any penalty allows for the recovery of the optimum of a feasible instance of \eqref{eq:sdp} from the iterations of an algorithm on the non-convex problem in the variable $V \in \R^{n \times r}$, such as the methods of \cite{lowrankMAP} or \cite{BurerMonteiro}. In contrast to the traditional results of \cite{burer2005local}, who consider the $\det$ penalty, we allow for the use of any strict penalty function. \begin{theorem} Assume that we solve the proxy problem \eqref{eq:prox} iteratively and that $\mathcal{R}_{q,\lambda}(V)$ is a strict penalty function that promotes low-rank solutions. In each iteration, if $\mathcal{R}_{q,\lambda}(V) \neq 0$, we increase $\lambda$ (e.g., set $\lambda_{t+1} = \gamma \lambda_t$, with $\gamma > 1$ as some fixed parameter). Furthermore, let the solution found be denoted by $\tilde V_{q}$, with $\rank(\tilde V_{q})= q' < q$. Let us also denote by $\tilde V_{q'} \in \R^{n \times q'}$ some factorization of $\tilde V_{q} \tilde V_{q}^\top$ (such a factorization exists because $\rank(\tilde V_{q})= q'$). Also assume that we have an optimal solution of \eqref{eq:sdp}, $X^*$, with $\rank(X^*) = r$.
If \begin{equation} V_{q'+r+1} \triangleq [ \tilde V_{q'}, {\bf 0}_{n\times r}, {\bf 0}_{n\times 1}] \end{equation} is a local minimum of $P_{q'+r+1,\lambda}$, then $\tilde V_{q'} \tilde V_{q'}^\top$ is a global solution of \eqref{eq:sdp}. \label{strict} \end{theorem} \begin{proof} Let us define a family of matrices for $\tau \in [0,1]$ as follows: \begin{align*} V(\tau) \triangleq [ \sqrt{\tau} \tilde V_{q'}, \sqrt{(1-\tau)} V_* , {\bf 0}_{n\times 1}], \end{align*} where $V_* V_*^\top$ is some factorization of $X^*$ with $V_* \in \R^{n \times r}$. Note that $V(\tau)V(\tau)^\top = \tau \tilde V_{q'} \tilde V_{q'}^\top + (1-\tau) X^*$, so $\forall \tau$ we have $\rank(V(\tau)) < r+q'+1$, and hence $\forall \lambda, \tau: R_{q'+r+1, \lambda}(V(\tau)) = 0$. Now, assume for contradiction that $V_{q'+r+1}$ is a local minimum but $\tilde V_{q'}$ is not a global solution. We first show that $\forall \tau \in [0,1]$, $V(\tau)$ is a feasible solution. Indeed, for any $j \in J$, by the convexity of $g_j$ we have \begin{align*} g_j(\tr(V(\tau)^\top C_j V(\tau))) & = g_j\big(\tau \tr( \tilde V_{q'}^\top C_j \tilde V_{q'} ) + (1-\tau) \tr( X^* C_j )\big) \\ & \leq \tau\, g_j(\tr( \tilde V_{q'}^\top C_j \tilde V_{q'} )) + (1-\tau)\, g_j(\tr( X^* C_j )) \leq 0. \end{align*} Hence $V(\tau)$ is a feasible point for each $\tau \in [0,1]$. Now, let us compute the objective value at this point. By the convexity of the $f_i$, for all $\tau \in [0, 1)$ we have \[ \sum_{i \in I} f_i(\tr( V(\tau)^\top S_i V(\tau))) \leq \tau \sum_{i \in I} f_i(\tr( \tilde V_{q'}^\top S_i \tilde V_{q'} )) + \] \[ + (1-\tau) \sum_{i \in I} f_i(\tr( X^* S_i )) < \sum_{i \in I} f_i(\tr( \tilde V_{q'}^\top S_i \tilde V_{q'} )), \] where the strict inequality uses the assumption that $\tilde V_{q'}$ is not globally optimal. Letting $\tau \to 1$ contradicts the assumption that $V_{q'+r+1} = V(1)$ is a local minimum.
\end{proof} \section{Efficient Implementation for MAP Inference}\label{sec:algo} A pairwise Markov Random Field (MRF) is defined over an arbitrary graph $G = (V, E)$ with $n$ vertices. We associate a binary variable $x_i\in \{-1, +1\}$ with each vertex $i\in V$. Let $\theta_i: \{\pm 1\}\to \mathbb{R}$ and $\theta_{ij}: \{\pm1\}^2 \to \mathbb{R}$, defined for each vertex and edge of the graph, be the vertex and pairwise potentials, respectively. The \emph{a posteriori} distribution of $x$ then follows the Gibbs distribution: \[ p(x|\theta) = \frac{1}{Z(\theta)} e^{U(x;\theta)}, \] with $U(x;\theta) = \sum_{i\in V} \theta_i(x_i) + \sum_{(i,j)\in E} \theta_{ij}(x_i, x_j).$ The maximum \emph{a posteriori} (MAP) estimate is then \[ \hat{x} = \argmax\limits_{x\in \{-1, 1\}^n} p(x|\theta) = \argmax\limits_{x\in \{-1, 1\}^n} U(x;\theta), \tag{MAP} \] which is in its turn an NP-hard binary quadratic optimization problem, \[ \hat x = \argmax\limits_{x\in \{-1, 1\}^n} x^\top S x, \] with an indefinite matrix $S$. The SDP relaxation for this problem is given by \cite{goemans1995improved,Nesterov}: \begin{gather} \min_{X \in \mathbb{S}^+_n} \tr SX, \quad \text{s.t.: } X_{ii} = 1,\label{eq:lin_sdp} \end{gather} which also covers the Ising model in statistical physics and a number of combinatorial optimization problems. We believe that the approach can be extended to the general setup given by Eq.~\eqref{eq:sdp}. An entropy-penalized SDP relaxation of \eqref{eq:lin_sdp} has the form \begin{align} \min_{V \in \mathbb{R}^{n\times k}} & \tr V^\top S V + R_{\lambda}(V), \quad \text{s.t.: } \|V^i\|_2^2 = 1,\label{eq:lin_sdp_rel}\tag{EP-SDP} \end{align} where $V^i$ is the $i$-th row of the matrix $V \in \mathbb{R}^{n\times k}$ and $X = V V^\top$. \subsection{Numerical Method
} To solve Problem~\eqref{eq:lin_sdp_rel}, we use the Augmented Lagrangian method, starting from a sufficiently small value of the penalty parameter $\lambda > 0$ and increasing it in geometric progression, $\lambda_{t+1} = \lambda_{t} \gamma$ with $\gamma > 1$, as summarized in Algorithm \ref{algo:01}. The efficiency of the method is due to the efficient computability of the gradients of the Tsallis, Renyi, and von~Neumann entropies: \begin{algorithm}[] \SetAlgoLined \label{algo:01} \KwData{Quadratic matrix $S$ of the MAP inference problem, starting point $\lambda_0$, $\gamma > 1$, step size policy $\{\eta_k\}_{k\ge 1}$, accuracy parameters $\varepsilon$, $\epsilon$} \KwResult{Solution $V_*$ as a local minimum of \eqref{eq:lin_sdp_rel} of unit rank} \Begin{ $V_0 \leftarrow$ random initialization in $\R^{n\times k}$\; \While{$\tr (V_t^\top V_t) - \lambda_{\max}(V_t^\top V_t) > \varepsilon$}{ Find a local minimum of \eqref{eq:lin_sdp_rel}\!$(S, \lambda_t)$ starting from $V_{t-1}$, and assign it to $V_t$\; \While{$\|\nabla (\tr V^\top S V + R_{\lambda}(V))\|_2 > \epsilon$}{ $V = V - \eta_k \nabla (\tr V^\top S V + R_{\lambda}(V))/\|\nabla (\tr V^\top S V + R_{\lambda}(V))\|_2$\; $V_i \leftarrow V_i/\|V_i\|_2$ for each row $V_i$\; } $\lambda_{t+1} = \lambda_t\cdot \gamma$\; } } {\bf Return:} the first singular vector of $V_t$.\; \caption{Entropy-Penalized SDP.}% \end{algorithm} \begin{lemma} For any matrix $V \in \mathbb{R}^{n\times k}$ with $k = {\mathcal O}(1)$, let $X(V) = V V^\top$. Then, the gradients of $\ent_\alpha^T(X)$, $\ent^R_\alpha(X)$, and $\ent^N(X)$ can be computed in $\mathcal{O}(n)$ time. Moreover, if the number of non-zero elements in the matrix $S$ is $\mathcal{O}(n)$, then the iteration complexity of Algorithm \ref{algo:01} is $\mathcal{O}(n)$. \end{lemma} \begin{proof} We start our analysis with the Tsallis entropy.
First, we compute the gradient of $\ent^T_\alpha$ with respect to $V$; the factor of two arises from differentiating through the symmetric product $X = VV^\top$: \[ \frac{\partial \ent_\alpha^T(X)}{\partial V} = \frac{2\alpha}{1-\alpha}\left(\frac{X^{\alpha-1}}{(\tr{X})^{\alpha}} - \frac{\tr{X^\alpha} }{(\tr{X})^{\alpha + 1}} I \right)V. \] Similarly, for the Renyi, $\ent^R_\alpha(X)$, and von Neumann, $\ent^N(X)$, entropies we have \[ \frac{\partial \ent_\alpha^R(X)}{\partial V} = \frac{2\alpha}{1-\alpha}\left(\frac{X^{\alpha-1}}{\tr{X^{\alpha}}} - \frac{I}{\tr{X}} \right) V \] and \[ \frac{\partial \ent^N(X)}{\partial V} = -\frac{2}{\tr{X}}\left( \log{\frac{X}{\tr{X}}} + \ent^N(X)\, I \right)V. \] Following \cite{Holmes}, the thin singular-value decomposition $V = U_1 D U_2$, with $U_1 \in \R^{n\times k}$, $D\in \R^{k\times k}$ diagonal, and $U_2 \in \R^{k\times k}$, can be computed in $\mathcal O(\min{(nk^2, n^2k)}) = \mathcal O(nk^2)$ time. For any $\alpha > 1$, the product $X^{\alpha-1}\cdot V = U_1 D^{2\alpha - 1} U_2$ can then be computed in $\mathcal O(n)$ time, together with $\tr{X^\alpha} = \tr{D^{2\alpha}}$ and $\tr{X} = \tr{D^2}$. Thus, for a fixed $k$, the computation time of the gradient $\frac{\partial \ent^T_\alpha(X)}{\partial V}$ is linear in its dimension. (Here, for any $\alpha \in (0, 1)$, we use the identity $\partial \lambda_i = {\bf v_i}^\top \partial X\, {\bf v_i}$, with ${\bf v_i}$ the eigenvector associated with the eigenvalue $\lambda_i$.) To finish the proof, it remains to note that a matrix-vector multiplication takes $\mathcal{O}(n)$ time for any matrix with $\mathcal{O}(n)$ non-zero entries. \end{proof} \section{Case Study}\label{sec:case_study} In this section, we compare our penalized algorithm with other conventional approaches to MAP problems. We fix the width of the factorization to $k = 10$, since there is no significant gain in practice for larger values of $k$, cf. \cite{Montanari}. We choose the step size so that $2\eta_k \beta = 1$, where $\beta$ is the Lipschitz constant of the gradient in the $\ell_2$ norm, and set $\gamma = 3/2$. The parameters $\lambda_0$ and $\gamma$ of Algorithm~\ref{algo:01} are usually chosen by a few iterations of random search.
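As a concrete illustration of the above, the following minimal numpy sketch (our own illustration, not the authors' implementation: the random instance, step sizes, and iteration counts are arbitrary choices) runs the inner and outer loops of Algorithm~\ref{algo:01} with the Tsallis penalty for $\alpha = 2$, verifying the penalty gradient (differentiated through $X = VV^\top$) by central finite differences before iterating:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 4
A = rng.normal(size=(n, n))
S = (A + A.T) / 2.0                             # toy indefinite coupling matrix

def tsallis2(V):
    # S^T_2(X) = 1 - tr(X^2)/tr(X)^2 for X = V V^T: zero iff X has rank one.
    X = V @ V.T
    return 1.0 - np.trace(X @ X) / np.trace(X) ** 2

def grad(V, lam):
    # d/dV tr(V^T S V) = 2 S V; the penalty gradient differentiates S^T_2
    # through X = V V^T (the symmetry of X contributes a factor of two).
    X = V @ V.T
    t = np.trace(X)
    g_pen = -4.0 * (X / t**2 - (np.trace(X @ X) / t**3) * np.eye(n)) @ V
    return 2.0 * S @ V + lam * g_pen

V = rng.normal(size=(n, k))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # feasible start: unit rows

# Sanity check of the penalty gradient by central finite differences.
eps = 1e-6
E = np.zeros_like(V); E[0, 0] = eps
fd = (tsallis2(V + E) - tsallis2(V - E)) / (2.0 * eps)
g0 = grad(V, 1.0) - 2.0 * S @ V                 # isolate the penalty gradient
print(abs(fd - g0[0, 0]))                       # should be tiny

lam, gamma, eta = 1.0, 2.0, 0.05
for _ in range(12):                             # outer loop: grow the multiplier
    for _ in range(200):                        # inner loop: normalized steps
        g = grad(V, lam)
        V = V - eta * g / (np.linalg.norm(g) + 1e-12)
        V /= np.linalg.norm(V, axis=1, keepdims=True)
    lam *= gamma

sv = np.linalg.svd(V, compute_uv=False)
print("second/first singular value:", sv[1] / sv[0])
```

On this toy instance, the geometrically growing multiplier drives the ratio of the second to the first singular value of $V$ towards zero, i.e. the iterate approaches a feasible, nearly rank-one solution.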
It is usually enough to have about 35 iterations of penalty updates and a few hundred iterations to find a local minimum using Algorithm~\ref{algo:01}. We emphasize that the matrices we obtain by solving \eqref{eq:lin_sdp_rel} are rank-one on all MAP instances presented; thus, we do not need any further rounding procedure. First, in Table~1, we show the performance of our algorithm on selected hard MAP inference problems from the BiqMac collection\footnote{http://biqmac.uni-klu.ac.at/biqmaclib.html}. We selected a few of the hardest instances (the ``gka$i$f'' family among them)---dense quadratic binary problems with 500 variables. \begin{table}[!t] \label{tbl:new} \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|c|c|c|c|c} Instance & gka1f & gka2f & gka3f & gka4f & gka5f\\ \hline\hline \multicolumn{6}{c}{SDP}\\ \hline\hline objective & 59426 & 97809 & 1347603 & 168616 & 185090 \\ upper bound &66783 & 109826 & 152758 & out of & out of \\ time [s] & 669 & 673 &592 & memory & memory\\ \hline\hline \multicolumn{6}{c}{EP-SDP}\\ \hline\hline objective & 60840 & {\bf 99268} & {\bf 136567} & {\bf 170669} & {\bf 189762} \\ upper bound & n/a & n/a & n/a & n/a & n/a \\ time [s] & 3.3 & 5.0 & 5.3 & 5.2 & 5.7 \\ \hline\hline \multicolumn{6}{c}{Gurobi}\\ \hline\hline objective & {\bf 64678} & 97594 & 131898 & 162875 & 189324 \\ upper bound & 73267 & 112223 & 153726 & 190073 & 218428 \\ time [s] & 70 & 70 & 71 & 70 & 70 \\ \end{tabular} \end{adjustbox} \caption{Results for the BiqMac collection.} \end{table} We compared our algorithm (EP-SDP with the Tsallis entropy and $\alpha=2$) with the plain-vanilla semidefinite programming relaxation solved by an interior-point method, with rounding taken as the best of one thousand randomized roundings of \cite{goemans1995improved}, and with the Gurobi solver for mixed-integer problems.
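For completeness, the randomized-rounding baseline referred to above can be sketched as follows (standard hyperplane rounding on an arbitrary unit-row factor $V$; the instance is synthetic, not one of the benchmark problems):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 5
A = rng.normal(size=(n, n))
S = (A + A.T) / 2.0                              # toy objective matrix
V = rng.normal(size=(n, k))
V /= np.linalg.norm(V, axis=1, keepdims=True)    # stand-in for an SDP factor, X = V V^T

def round_once():
    # Sign of the row projections onto a random hyperplane through the origin.
    r = rng.normal(size=k)
    x = np.sign(V @ r)
    x[x == 0] = 1.0                              # break (measure-zero) ties
    return x

vals = [x @ S @ x for x in (round_once() for _ in range(1000))]
best = max(vals)
print(best, np.mean(vals))                       # best-of-1000 vs average rounding
```

Taking the best of many roundings, as in the experiments, can only improve on the average rounding.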
To avoid any confusion, we solve the corresponding maximization problems; by the objective value, we mean the value at a feasible solution produced by the method (e.g., a rounded solution of the SDP relaxation), which is a lower bound for the corresponding problem. Because these problems are of the same size (but of varying density), the running time of each method is almost constant. It took around 10 minutes for CVXPY to solve the SDP relaxation, and it ran out of memory for the two problems with higher density. Within five seconds, EP-SDP obtains results that are better than what Gurobi can produce in 70 seconds. \begin{table}[!th] \centering \begin{tabular}{l|c|c|c|c} & \multicolumn{4}{c}{GSET Instance}\\ \hline\hline & 1 & 2 & 3& 4 \\ \hline EP-SDP\\(T, $\alpha=2.0$)\!\!& 11485 & 11469 & 11429 & 11442 \\ (T, $\alpha=1.1$)\!\!& 11454 & 11463 & 11444 & 11508 \\ (R, $\alpha=5$)\!& 11508 & {\bf 11519} & 11496 & {\bf 11531} \\ (R, $\alpha=10$) & {\bf 11520} & 11420 & 11523 & 11523 \\ SDP & 11372 & 11363 & 11279 & 11355 \\ Loopy BP & 10210 & 10687 & 10415 & 10389 \\ Mean-Field & 11493 & 11515 & {\bf 11525} & 11512\\ \hline\hline & \multicolumn{4}{c}{GSET Instance}\\ \hline\hline & 5 & 6 & 7& 8 \\ \hline EP-SDP\\(T, $\alpha=2$)& 11427 & 2059 & 1888 & 1866\\ (T, $\alpha=1.1$) & 11506 & 2075 & 1858 & 1895\\ (R, $\alpha=5$) & 11527 & {\bf 2127} & {\bf 1942} & 1954\\ (R, $\alpha=10$) & {\bf 11538} & 2112 & 1940 & {\bf 1958}\\ SDP & 11313 & 1945 & 1728 & 1727\\ Loopy BP & 10143 & 1076 & 964 & 731\\ Mean-Field & 11528 & 2096 & 1906 & 1912\\ \hline\hline \end{tabular} \begin{tabular}{l|c|c|c|c|c} & \multicolumn{5}{c}{GSET Instance}\\ \hline\hline & 9 & 10& 11& 12 & 13 \\ \hline EP-SDP\\(T, $\alpha=2$) & 1933 & 1882 & 532 & 530 & 560 \\ (T, $\alpha=1.1$) & 1969 & 1861 & 544 & 536 & {\bf 568} \\ (R, $\alpha=5$) & 1992 & 1960 & {\bf 550} & {\bf 548} & {\bf 568} \\ (R, $\alpha=10$) & {\bf 2006} & {\bf 1982} & 544 & 546 & 564 \\ SDP & 1767 & 1784 & 524 & 514 & 540 \\ Loopy BP & 1021 &
820 & 424 & 412 & 482 \\ Mean-Field & 1940 & 1902 & 542 & 538 & 564\\ \end{tabular} \label{tbl:02} \caption{Results for the GSET collection.} \end{table} In our study, the parameter $\alpha$ of the entropies $\ent^T_\alpha$, $\ent^N$, and $\ent^R_\alpha$ is chosen on an exponential grid from 1 to 10 with step 1.1. After experimentation, we note that $\alpha = 1.1$ and $\alpha = 5.0$ seem to yield the best results for the low-rank SDP with the Tsallis and Renyi entropies, respectively, although the difference between values of $\alpha \in (1,10)$ is not very significant for either of the two entropies. Table~2 summarizes the results of solving the Max-Cut problem over the GSET collection of sparse graphs\footnote{https://sparse.tamu.edu/Gset}. As we see from the experiments, applying a suitable entropy often outperforms both the plain-vanilla SDP with the classical Goemans-Williamson rounding and the mean-field approximation, as well as the results of the UGM solver\footnote{https://www.cs.ubc.ca/~schmidtm/Software/UGM.html} for loopy belief propagation and mean-field inference. It is worth noting that for several instances of the GSET graph collection, loopy belief propagation provides rather weak results. Usually, strong results of loopy belief propagation are complementary to those of the mean-field approximation, which is supported by our empirical results. Results of both loopy belief propagation and the mean-field approximation can be substantially improved using the linear-programming belief-propagation approach (LP-BP). \begin{figure}[t!]
\centering \begin{tikzpicture}[scale=0.73] \begin{axis}[legend pos=north east, title={},xlabel=power of $\gamma$, ylabel=second singular value of $V$] \addplot table [x=iter, y=lambda2-g2, col sep=comma] {decreasing-rank.txt}; \addlegendentry{$\alpha=2$ and $\gamma = 2$} \addplot table [x=iter, y=g1.5, col sep=comma] {decreasing-rank.txt}; \addlegendentry{\hspace{2.5mm}$\alpha=2$ and $\gamma = 1.5$} \addplot table [x=iter, y=alpha5-g2, col sep=comma] {decreasing-rank.txt}; \addlegendentry{$\alpha=5$ and $\gamma = 2$} \end{axis} \end{tikzpicture} \caption{Rank decrement.}\label{rankdef} \end{figure} \begin{figure}[t!] \centering \begin{tikzpicture}[scale=0.73] \begin{axis}[legend pos=north east, title={},xlabel=power of $\gamma$, ylabel=second singular value of $V$] \addplot table [x=iter, y=10, col sep=comma] {inc-lambda.txt}; \addlegendentry{$\lambda_0=10$ and $\gamma = 1.5$} \addplot table [x=iter, y=40, col sep=comma] {inc-lambda.txt}; \addlegendentry{$\lambda_0=40$ and $\gamma = 1.5$} \addplot table [x=iter, y=100, col sep=comma] {inc-lambda.txt}; \addlegendentry{\hspace{2mm}$\lambda_0=100$ and $\gamma = 1.5$} \end{axis} \end{tikzpicture} \caption{Rank decrement with multistart.}\label{rankdef2} \end{figure} We also want to point out that our iterative algorithm successfully decreases the rank of the solution: the higher the penalization parameter, the lower the rank. We illustrate this in Figure~\ref{rankdef}, where, for Tsallis entropies with $\alpha = 2$ and $\alpha = 5$, we plot the second singular value of the matrix $V$. For this plot, we considered the Max-Cut problem for the first graph from the GSET collection, with the Tsallis entropy as the penalization function. In Figure~\ref{rankdef2}, we illustrate the same effect for a fixed penalization (Tsallis entropy with $\alpha = 2$) and different initial values of the multiplier $\lambda$.
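The quantity plotted in these figures, the second singular value, is directly tied to the entropies used as penalties: as the spectrum of $X = V V^\top$ concentrates on a single value, all three entropies vanish. A small sketch (our illustration, with arbitrary hand-picked spectra) evaluates the three entropies on the normalised spectrum:

```python
import numpy as np

def entropies(X, alpha=2.0):
    # Tsallis, Renyi and von Neumann entropies of the spectrum of X / tr X.
    p = np.clip(np.linalg.eigvalsh(X), 0.0, None)
    p = p / p.sum()
    tsallis = (np.sum(p**alpha) - 1.0) / (1.0 - alpha)
    renyi = np.log(np.sum(p**alpha)) / (1.0 - alpha)
    q = p[p > 0]
    von_neumann = -np.sum(q * np.log(q))
    return tsallis, renyi, von_neumann

# X = V V^T with singular values of V equal to (1, s2, 0): as the second
# singular value shrinks, every entropy decreases to zero at rank one.
for s2 in (1.0, 0.5, 0.1, 0.0):
    X = np.diag([1.0, s2**2, 0.0])
    print(s2, entropies(X))
```

All three entropies are zero exactly when $X$ has rank one, which is why driving the penalty to zero also collapses the second singular value.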
We observe that for different penalization functions and update schemes, the rank of the solution decreases gradually with each step. In practice, our iterative algorithm can be seen as a universal rounding procedure for SDP relaxations. Indeed, if we choose a large-enough penalization update (e.g., $\gamma = 2$, as in Figure 1), we easily obtain a rank-one solution that is no worse, and often substantially better, than solutions obtained by randomized rounding. Overall, we would like to stress that Algorithm~\ref{algo:01} is very fast. This is shown in Figure~3 and Table~3, where we compare the run times of EP-SDP, the low-rank Burer-Monteiro approach (LR-SDP), and interior-point method solvers (SDP) for various Erd\H{o}s-R\'enyi random graphs. From the data, we see that (with the width of the factorization fixed to $k = 10$) the EP-SDP run time increases linearly with the number of vertices. Indeed, throughout the benchmark instances tested, the run time does not exceed a few seconds for each of the test cases. At the same time, the bound is often almost as good as that of the Branch and Bound Biq-Mac Solver of \cite{krislock2014improved}, which requires a significant amount of time. \begin{figure}[t!]
\centering \begin{tikzpicture}[scale=0.73] \begin{axis}[legend pos=north west, title={},xlabel=number of vertices, ylabel=time in seconds] \addplot table [x=vert, y=EPSDP, col sep=comma] {time.txt}; \addlegendentry{EP-SDP} \addplot table [x=vert, y=SDPLR, col sep=comma] {time.txt}; \addlegendentry{LR-SDP} \addplot table [x=vert, y=SDP, col sep=comma] {time.txt}; \addlegendentry{SDP} \end{axis} \end{tikzpicture} \caption{Time complexity.}\label{time} \end{figure} \begin{table}[!t] \label{tbl:time} \centering \begin{tabular}{l|c|c|c} Instance & {\bf EP-SDP} & {\bf LR-SDP} & {\bf SDP}\\ \hline\hline G(50, 0.2) & 0.2s & 0.1s & 0.4s \\ \hline G(100, 0.2) & 0.3s & 0.4s & 1.4s \\ \hline G(200, 0.2) & 0.5s & 1.6s & 7.6s \\ \hline G(300, 0.2) & 0.8s & 3.9s & 21.0s \\ \hline G(400, 0.2) & 1.0s & 6.0s & 45.0s \\ \hline G(500, 0.2) & 1.3s & 8.9s & 85.0s \\ \end{tabular} \caption{Run time for Erd\H{o}s-R\'enyi random graphs.} \end{table} \section{Conclusions}\label{sec:conclusion} This paper presented a unified view of the penalty functions used in low-rank semidefinite programming, using entropy as a penalty. This makes it possible to find a low-rank optimum when there are optima of multiple ranks. Semidefinite programs with an entropy penalty can be solved efficiently using first-order optimization methods with linear-time per-iteration complexity, which makes them applicable to large-scale problems that appear in machine learning and polynomial optimization. Our case study illustrated the practical efficiency on binary MAP inference problems. The next step in this direction is to exploit the structure of the SDP, which seems to be crucial for further scalability. \section*{Acknowledgements} The work of Martin Tak{\' a}{\v c} was partially supported by the U.S. National Science Foundation, under award numbers NSF:CCF:1618717, NSF:CMMI:1663256, and NSF:CCF:1740796. The work at LANL was supported by the U.S.
Department of Energy through the Los Alamos National Laboratory as part of the GMLC ``Emergency Monitoring and Control through new Technologies and Analytics'' project and multiple LANL LDRD projects. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). \bibliographystyle{named}
\section{Introduction} \label{sec:Intro} Although complex networks theory \cite{strogatz2001exploring, newman2003structure} was initially used to describe the structure underpinning individual complex systems, in recent years there has been an explosion in the number of situations in which (potentially large) sets of networks have to be studied in a comparative way. The availability of multiple related networks may be the natural result of analysing different, yet compatible systems - as, for instance, functional brain networks obtained from a large set of healthy people, with the aim of identifying common connectivity patterns \cite{greicius2003functional}; or from control subjects and patients suffering from a given condition \cite{seeley2009neurodegenerative}, to detect differences between them. It can nevertheless also stem from the analysis of a single system across its parameter and temporal dimensions. Following on the previous example, neuroscientists may be interested in characterising the temporal evolution of such networks during a long cognitive task \cite{bassett2011dynamic, zalesky2014time}, or across different frequency bands \cite{buldu2017frequency}. Potential examples are not limited to neuroscience, and indeed appear in all research fields in which complex networks have been applied \cite{costa2011analyzing}, {\it i.e.} across social, biological and technological systems - a clear example of the latter being air transport networks~\cite{neal2014devil, belkoura2016multi}. The analysis of the differences between two or more networks is a two-fold problem. On the one hand, it entails the quantification of such differences \cite{brodka2017quantifying}, {\it e.g.} by calculating a set of topological metrics and comparing their normalised values \cite{costa2007characterization}; on the other hand, it requires understanding the dynamical processes causing such changes.
These two aspects of the problem are orthogonal, and both of them have to be taken into account for the correct understanding of an observed evolution. The fact that two networks are not equal does not imply the presence of a structured evolutionary process, as they may be the result of describing the same system under observational noise. Such a conclusion cannot be drawn even from a statistically significant change in some topological metric: {\it e.g.} a reduction in the modularity may be the result of a random link rewiring, but also of a targeted process aimed at disrupting the modular structure. Even an increase in modularity may be the result of a random process, albeit with low probability. Lastly, and along the same line, one should not correlate the magnitude of the changes with the presence of targeted processes: noise does not necessarily result in small fluctuations only. These two aspects, {\it i.e.} description and structureness, are also of high relevance for real-world applications. For instance, in the specific case of brain functional networks, the presence of an unstructured difference between control subjects and patients may be ascribed to a global loss of brain connectivity, while structured changes may suggest a focused reorganisation of the information flow. The second previously discussed point, {\it i.e.} the understanding of the dynamical processes causing a change, is a specific aspect of the more general problem known as {\it phenotype to genotype} \cite{strohman2002maneuvering, zanin2016phenotype}. While we can only observe the phenotype of a system, in this case the resulting physical or functional network, what we would really like to understand is the genotype that has created it.
If several phenotypes are available, {\it e.g.} when we can observe the temporal evolution of the system, we can in principle use the phenotype's dynamics to (partly) reconstruct the genotype: in other words, we can use the ``difference of structures'' to unveil the underlying ``structure creating such difference''. Inspired by this, we here present a framework designed to answer the following specific question: {\it do the observed changes follow a structure, or are they simply the result of random fluctuations?} This framework is based on {\it a}) the calculation of the {\it difference} between the two observed networks, {\it b}) the representation of such difference as a new {\it difference network}, and {\it c}) the analysis of its structural characteristics. Specifically, we start from the assumption that changes resulting from non-random processes are characterised by correlations, which are reflected in the presence of a meso-scale structure in the difference network. Such a meso-scale can then be detected using a broad-band topological metric, {\it i.e.} the {\it Information Content} \cite{zanin2014information}, and its significance assessed through a statistical test based on ensembles of equivalent random networks. By means of a set of synthetic evolving networks, we show that this approach outperforms alternatives such as those based on cross-network correlations \cite{bianconi2013statistical} or the von Neumann entropy \cite{braunstein2006laplacian, braunstein2006some}. We further demonstrate the usefulness of the proposed solution by analysing three real systems: a technical one (the evolution of the world-wide air transport network), a social one (human contact networks in a hospital), and a biological one (the comparison of functional brain networks corresponding to different frequency bands).
We conclude this work by showing how this approach can be used to construct a {\it network correlogram}, which, among others, can be used to detect the natural frequency of a time-evolving network. \section{Metric definition} \label{sec:metric} \subsection{Information Content} \label{sec:IC} For the sake of completeness, we here include a short overview of the {\it Information Content} metric, which is the basis of the proposed methodology. For a more complete description, the reader may refer to Ref. \cite{zanin2014information}. The rationale behind the definition of the {\it Information Content} is that a regular network, or more generally any network presenting a meso-scale structure, displays strong correlations between the nodes' connectivity patterns. The information encoded by pairs of such correlated nodes is thus redundant, as the connections of one of them almost completely define the second one's. A clear example is yielded by networks with a strong community structure, in which two nodes belonging to the same community usually share most of their neighbours. Following this idea, and given an initial network, the proposed algorithm identifies the pair of nodes whose merging would entail the smallest information loss, {\it i.e.} the pair sharing most of their connections. The analysis of two nodes $i$ and $j$ thus entails, firstly, the creation of a vector of differences $m$, with $m_k = 1 - \delta_{a_{i, k}, a_{j, k}}$, $\delta$ being the Kronecker delta. Secondly, the information encoded by $m$ is assessed through the classical Shannon entropy, defined as: \begin{equation} I_{i, j} = -2N( p_0 \log_2 p_0 + p_1 \log_2 p_1 ), \end{equation} $p_0$ and $p_1$ being respectively the frequencies of zeros and ones in $m$, and $N$ the number of nodes in the network. Note that $I_{i, j}$ represents the quantity of information required to reconstruct $j$'s connections given $i$'s, and thus the quantity of information lost when both nodes are merged.
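A toy computation may help fix ideas. The sketch below (our illustration on a hand-built $10$-node network; the node labels are arbitrary) evaluates $I_{i,j}$ for a pair of nodes with identical neighbourhoods, where no information is lost, and for a pair sharing only half of their connections:

```python
import numpy as np

def pair_info(A, i, j):
    # I_{i,j} = -2N (p0 log2 p0 + p1 log2 p1), with m_k = 1 - delta(a_ik, a_jk).
    m = (A[i] != A[j])
    N = A.shape[0]
    I = 0.0
    for p in (m.mean(), 1.0 - m.mean()):
        if p > 0.0:
            I -= 2.0 * N * p * np.log2(p)
    return I

N = 10
A = np.zeros((N, N), dtype=int)
for nbr in range(2, 10):
    A[0, nbr] = A[nbr, 0] = 1    # node 0 linked to nodes 2..9
    A[1, nbr] = A[nbr, 1] = 1    # node 1: identical neighbourhood
B = A.copy()
for nbr in (6, 7, 8, 9):
    B[1, nbr] = B[nbr, 1] = 0    # node 1 now shares only half of its links

print(pair_info(A, 0, 1))        # 0.0: fully redundant pair, free to merge
print(pair_info(B, 0, 1))        # positive: information is lost when merging
```

The first pair can be merged at no cost, exactly the redundancy the metric is designed to detect.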
The pair of nodes minimising $I$ is then merged, and the quantity of information lost in the process is approximated by $I$. The process is iteratively repeated until one single node remains, the final Information Content $IC$ being the sum of the information lost in all steps. As shown in a previous work \cite{zanin2014information}, low $IC$ values indicate the presence of some kind of regularity in the link arrangement, including communities, hubs, or core-periphery configurations. \subsection{Comparing two networks} \label{sec:Comparing} Consider two networks, each described by a corresponding adjacency matrix $\mathcal{A}_1$ and $\mathcal{A}_2$, which have been observed under different conditions. Firstly, the simplest case involves two independent networks, representing two different systems - albeit of the same size, {\it i.e.} with the same number of nodes. Secondly, these adjacency matrices can represent different layers of a multiplex network \cite{boccaletti2014structure}. Finally, the networks may represent different snapshots of the same time-evolving system \cite{holme2012temporal}. In all cases, changes between $\mathcal{A}_1$ and $\mathcal{A}_2$ can be encoded in a matrix $\mathcal{D} = |\mathcal{A}_1 - \mathcal{A}_2|$, whose element $d_{i, j}$ is equal to $1$ when the corresponding link has changed between the two analysed networks, and zero otherwise. Note that $\mathcal{D}$ can be interpreted as the adjacency matrix of a network whose links depict a corresponding change between $\mathcal{A}_1$ and $\mathcal{A}_2$. With respect to the meso-scale structure of the difference network $\mathcal{D}$, only two situations can be encountered. First, changes between $\mathcal{A}_1$ and $\mathcal{A}_2$ can be random, for instance due to measurement noise, or more generally due to uncorrelated forces; $\mathcal{D}$ would then resemble the adjacency matrix of a random network.
Second, if changes between $\mathcal{A}_1$ and $\mathcal{A}_2$ are somehow correlated, the resulting network should present some kind of meso-scale structure. For instance, if changes only affect the connections of one node, $\mathcal{D}$ will be star-shaped. All intermediate situations, {\it e.g.} with only a part of the links modified at random, can be interpreted as a special (and noisy) case of the latter situation. If changes are not random, and thus are correlated and form a meso-scale structure, such a structure should be detected by the $IC$ metric. An algorithm for the comparison of different networks can thus be designed, composed of the following steps: {\it i}) calculate $\mathcal{D}$ as $|\mathcal{A}_1 - \mathcal{A}_2|$; {\it ii}) calculate the $IC$ of the network $\mathcal{D}$; {\it iii}) compare $IC(\mathcal{D})$ with the value obtained in an ensemble of equivalent random networks. As for the latter point, several ways of normalising the obtained value are available. Firstly, one can simply calculate: \begin{equation} IC^* = \frac{ IC(\mathcal{D}) }{ \mu(IC_r) }, \end{equation} where $\mu(IC_r)$ is the average {\it Information Content} obtained in an ensemble of random networks with the same number of nodes and links as $\mathcal{D}$. Note that $IC^*$ usually takes values in $(0, 1)$, with values close to one indicating a random structure of the network $\mathcal{D}$, and thus a random difference between $\mathcal{A}_1$ and $\mathcal{A}_2$; and values substantially smaller than one indicating the presence of a structure in the changes. Further note that, while $IC^* > 1$ is possible, it would indicate a structure more random than that of a random network, and can thus only be the result of random fluctuations. While $IC^*$ provides a quantitative assessment of the structure of changes, it yields little information about the statistical significance of such structure.
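Assuming the $IC$ of the difference matrix and of an ensemble of equivalent random networks have already been computed, the normalisation step can be sketched as follows; besides the ratio $IC^*$, a Z-score with its one-sided $p$-value under a normal approximation is also shown.

```python
import math

def normalise_ic(ic_obs, ic_random):
    """Normalise IC(D) against an ensemble of IC values obtained on
    equivalent random networks (same number of nodes and links).
    Returns the ratio IC*, the Z-score, and a one-sided p-value under
    a normal approximation of the ensemble distribution."""
    n = len(ic_random)
    mu = sum(ic_random) / n
    var = sum((x - mu) ** 2 for x in ic_random) / (n - 1)
    sigma = math.sqrt(var)
    ic_star = ic_obs / mu            # close to 1 -> random changes
    z = (ic_obs - mu) / sigma        # strongly negative -> structured
    # P(IC <= ic_obs) under N(mu, sigma^2), via the error function.
    p = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return ic_star, z, p
```

An observed $IC$ well below the ensemble mean thus yields $IC^* < 1$, a negative Z-score, and a small $p$-value, flagging a structured change.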
In order to tackle this issue, a normalisation based on a Z-score can be used: \begin{equation} IC^\dagger = \frac{ IC(\mathcal{D}) - \mu(IC_r) }{ \sigma(IC_r) }. \end{equation} $IC^\dagger$ values close to zero indicate random modifications between $\mathcal{A}_1$ and $\mathcal{A}_2$, while negative values indicate modifications driven by some structure. The advantage of this formulation is that $IC^\dagger$ can easily be transformed into a $p$-value, provided $IC_r$ follows a normal distribution - a condition violated only for very small random networks. \begin{figure*}[!tb] \begin{center} \includegraphics[width=0.9\textwidth]{Fig01} \caption{Examples of the application of the proposed algorithm to synthetic evolving networks. From left to right, the four columns respectively represent: the initial adjacency matrix; the final adjacency matrix; the difference matrix; and the evolution of the $\log_{10}$ of the $p$-value of the test assessing the presence of a structured change, as a function of the rewiring $\alpha$ - see main text for details. All calculations have been executed with $100$-node networks, while the depicted adjacency matrices have a smaller size for the sake of clarity. The black line and grey bands on the right hand side respectively depict the average and $1 \sigma$ band, as obtained in $100$ random realisations.}\label{fig:01} \end{center} \end{figure*} \subsection{Validation on synthetic networks} A simple way of validating the proposed algorithm involves the use of a set of controlled evolutions, {\it i.e.} evolutions governed by rules ensuring that the start and end points are known topologies. Given these two networks $\mathcal{A}_{start}$ and $\mathcal{A}_{end}$, we construct a third network $\mathcal{A}$ whose links are drawn from $\mathcal{A}_{end}$ with probability $\alpha$, and from $\mathcal{A}_{start}$ with probability $1 - \alpha$; and finally compare $\mathcal{A}$ with the initial network $\mathcal{A}_{start}$.
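The controlled-evolution construction just described admits a direct sketch (pure Python; `morph` and `difference_matrix` are illustrative names, not part of the original algorithm):

```python
import random

def morph(a_start, a_end, alpha, seed=0):
    """Build a network drawing each (undirected) link from a_end with
    probability alpha and from a_start otherwise."""
    rng = random.Random(seed)
    n = len(a_start)
    a = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            src = a_end if rng.random() < alpha else a_start
            a[i][j] = a[j][i] = src[i][j]
    return a

def difference_matrix(a1, a2):
    """D = |A1 - A2|: 1 where a link changed, 0 elsewhere."""
    n = len(a1)
    return [[abs(a1[i][j] - a2[i][j]) for j in range(n)] for i in range(n)]
```

For $\alpha = 0$ the morphed network coincides with $\mathcal{A}_{start}$ and $\mathcal{D}$ is the null matrix; for $\alpha = 1$ it coincides with $\mathcal{A}_{end}$.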
Note that, for $\alpha = 0$, $\mathcal{A} = \mathcal{A}_{start}$ and $\mathcal{D} = 0_{N \times N}$; on the other hand, $\alpha = 1$ implies that $\mathcal{A} = \mathcal{A}_{end}$ and $\mathcal{D} = |\mathcal{A}_{end} - \mathcal{A}_{start}|$. Therefore, $\alpha$ controls the degree of morphing between $\mathcal{A}_{start}$ and $\mathcal{A}_{end}$. Several evolutions of interest are analysed in Fig. \ref{fig:01}. The four columns, from left to right, respectively represent the initial (rewiring $\alpha = 0$) and final ($\alpha = 1$) networks; $\mathcal{D}$, for the maximum rewiring $\alpha = 1$; and the evolution of the $\log_{10}$ of the $p$-value of $IC^\dagger$, as a function of the rewiring $\alpha$, calculated between the original and the rewired network. While, for the sake of clarity, the depicted adjacency matrices have a small size, all results have been obtained with networks of $100$ nodes and $100$ random realisations. The first row describes the rewiring of a random network into a second random one. As there is neither correlation nor structure between the links that have changed, the resulting matrix $\mathcal{D}$ presents a random connectivity and no meso-scale structure; consequently, the drop in $IC$ never becomes statistically significant, as depicted in the right panel. The second example, while similar, presents an important difference: while both the initial and final networks are random, the latter is obtained by inverting the set of neighbours of one single node - see the corresponding matrix $\mathcal{D}$. Note that, in this case, while the initial and final points are random, the evolution process is a structured one. This is correctly detected by the proposed metric, with the $p$-value dropping below $0.01$ for $\alpha \approx 0.15$. Similar behaviours are observed in the third and fourth examples, which describe two different networks converging towards a community structure.
As creating or modifying a community requires links to be activated and de-activated in a targeted way, the metric detects the presence of a meso-scale structure in $\mathcal{D}$. Finally, the last example consists of a situation in which both the starting and final networks have the same community structure, both being contaminated by random noise. Accordingly, the difference between the two has a random nature, and the $p$-value never becomes statistically significant. Some general conclusions can be drawn from these results. Firstly, and most importantly, the structure of the two networks $\mathcal{A}_1$ and $\mathcal{A}_2$ is not relevant; only the changes required to evolve from the former to the latter are. Specifically, two completely random networks may be associated with a structured change between them; and two well-structured networks may differ in a random fashion. Secondly, the presence of a statistically significant structured process is the result of the trade-off between the fraction of modified links and their organisation. For instance, it is worth noting that in the second example of Fig. \ref{fig:01} a statistically significant result is reached for $\alpha \approx 0.15$, as all modified links belong to the same node, while an $\alpha > 0.5$ is required in the third example. In other words, the change of a few strongly correlated links can be as significant as the change of many links whose relationship is weaker. \subsection{Comparison with other approaches} \label{sec:OtherM} Among the literature dealing with the problem of complex network comparison \cite{brodka2017quantifying}, two alternative approaches are worth considering: the comparison of network topological properties on one hand, and of the raw adjacency matrices on the other. The former approach is the most common: one or more metrics, synthesising the topology of the networks, are calculated and compared.
The main advantage is that it allows comparing networks that are heterogeneous in terms of number of nodes, provided the metrics are normalised against equivalent random networks. As explained in the introduction, this is not equivalent to what is here proposed, as such a comparison cannot explain the nature of the evolutionary process. This section thus focuses on the latter option, {\it i.e.} on strategies for directly comparing two or more adjacency matrices, specifically through the use of correlations and entropy measures. We show that there are several situations in which those strategies' effectiveness is below that yielded by the proposed approach. \subsubsection*{Correlation} An interesting, yet simple way of comparing two networks, or two layers in a multiplex network, is to calculate the correlation between the links present in both of them. In other words, given two networks $\mathcal{A}_1$ and $\mathcal{A}_2$, the correlation expresses the probability that if $a^{\mathcal{A}_1}_{i, j} = 1$, then $a^{\mathcal{A}_2}_{i, j} = 1$. More generally, one can calculate a global overlap $O^{\mathcal{A}_1, \mathcal{A}_2}$ as the total number of pairs of nodes simultaneously connected by a link in networks $\mathcal{A}_1$ and $\mathcal{A}_2$, as proposed in Ref.~\cite{bianconi2013statistical}, {\it i.e.}: \begin{equation} O^{\mathcal{A}_1, \mathcal{A}_2} = \sum _{i < j} a ^{\mathcal{A}_1}_{i, j} a ^{\mathcal{A}_2}_{i, j}. \label{eq:O} \end{equation} Eq. \ref{eq:O} can further be normalised by considering the number of links present in both networks. It has recently been shown \cite{cellai2013percolation, baxter2016correlated} that such global overlap has important implications in the percolation, and thus in the robustness, of multiplex networks - as the presence of correlated (redundant) links slows down the disruption of the giant component of the network under random link removal.
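A minimal sketch of the global overlap of Eq. \ref{eq:O} follows; the normalisation shown (division by the number of links present in either network) is one plausible choice, as the text leaves the exact normalisation open.

```python
def global_overlap(a1, a2):
    """O = sum_{i<j} a1_ij * a2_ij: links present in both networks."""
    n = len(a1)
    return sum(a1[i][j] * a2[i][j] for i in range(n) for j in range(i + 1, n))

def normalised_overlap(a1, a2):
    """Overlap divided by the number of links present in either network
    (an illustrative, Jaccard-like normalisation)."""
    n = len(a1)
    union = sum(max(a1[i][j], a2[i][j])
                for i in range(n) for j in range(i + 1, n))
    return global_overlap(a1, a2) / union if union else 1.0
```

Identical networks thus yield a normalised overlap of one, while reciprocal (complementary) networks yield zero.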
Two extreme situations can be encountered when considering the correlation between two networks: when all links are equal in both networks, and thus the correlation is maximal; and when links are reciprocal, {\it i.e.} $a ^{\mathcal{A}_1}_{i, j} = 1 - a^{\mathcal{A}_2}_{i, j}$, yielding a maximally negative correlation. $\mathcal{D}$ would respectively be a null and a complete matrix, and $IC^\dagger \ll 0$ in both cases - in other words, a strong structure drives the evolution between both networks. More interesting situations arise in the middle range, {\it i.e.} when only part of the links are different. To illustrate, let us consider the situation depicted in the first two rows of Fig. \ref{fig:01}, and suppose the initial and final matrices $\mathcal{A}$ are random and have the same link density of $0.5$. In both cases, $O^{\mathcal{A}_1, \mathcal{A}_2} \approx 0.25$ (as half of the activated links are expected to coincide); yet, the two resulting $IC$ values are completely different. This proves that the global overlap metric $O$ does not provide information on the underlying mechanism driving such differences, as the same correlation value may be the result of random or structured changes. \subsubsection*{von Neumann entropy} The von Neumann entropy ($S_{VN}$) is a metric that was initially introduced in quantum mechanics to assess the degree of mixing of the quantum states encoded in a probability distribution - and hence in a density matrix $\rho$. While the concept of a state probability distribution is not defined for complex networks, the metric can still be calculated over any density matrix, {\it i.e.} any Hermitian and positive semidefinite matrix with unit trace.
As previously shown \cite{braunstein2006laplacian, braunstein2006some}, $S_{VN}$ can be calculated over the Laplacian matrix, rescaled into a density matrix, as: \begin{equation} S_{VN} = - \mathtt{Tr} \frac{\mathcal{L}}{\langle k \rangle N} \log \frac{\mathcal{L}}{\langle k \rangle N}, \end{equation} where $\langle k \rangle$ is the average degree, $N$ the number of nodes composing the network, and $\mathcal{L}$ the corresponding Laplacian matrix, {\it i.e.} the diagonal matrix of node degrees minus the adjacency matrix. The von Neumann entropy has been demonstrated to be a good quantifier of the regularity of a network structure, with higher values obtained in graphs with uniform degree distributions, and smaller values in heterogeneous networks \cite{passerini2009quantifying}. In a way similar to our approach, $S_{VN}$ has been used to compare different networks \cite{de2016spectral}, but with the limitations discussed below. Let us consider two networks with the same number of nodes and links, $\mathcal{A}_r$ and $\mathcal{A}_m$, respectively having a random and a modular structure. More specifically, the elements of the former adjacency matrix are drawn from a binomial distribution $\mathcal{B}(1, 0.5)$, while those of the second are defined as $a_{i, j} = 1$ for $i < N/2, j < N/2$ and $i > N/2, j > N/2$ (and zero otherwise). In the limit of large values of $N$, both networks are characterised by the same expected link density, {\it i.e.} $0.5$, and all nodes are further expected to have the same (or very similar) degree $N / 2$. Due to the dependency of $S_{VN}$ on the degree distribution, both networks are expected to have similar entropy values. It is easy to construct situations in which the difference network $\mathcal{D}$ is equal to $\mathcal{A}_r$ or $\mathcal{A}_m$.
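Since $S_{VN}$ depends on the Laplacian only through its spectrum (the trace of a matrix function equals the sum of the function over the eigenvalues, and $\mathtt{Tr}\,\mathcal{L} = \langle k \rangle N$, so the rescaled eigenvalues sum to one), it can be sketched in pure Python by taking the eigenvalues as input; for a general graph these would be obtained numerically, e.g. with `numpy.linalg.eigvalsh`. The complete-graph helper below uses the standard Laplacian spectrum of $K_n$, namely $\{0\} \cup \{n \text{ with multiplicity } n-1\}$.

```python
import math

def von_neumann_entropy(laplacian_spectrum, avg_degree, n_nodes):
    """S_VN = -Tr(rho log rho), rho = L / (<k> N), from the Laplacian
    eigenvalues; zero eigenvalues contribute nothing, by convention."""
    scale = avg_degree * n_nodes
    mus = [lam / scale for lam in laplacian_spectrum]
    return -sum(mu * math.log(mu) for mu in mus if mu > 0.0)

def s_vn_complete(n):
    """S_VN of the complete graph K_n, whose Laplacian spectrum is
    {0} plus {n} with multiplicity n - 1, and whose average degree
    is n - 1; the closed form is log(n - 1)."""
    spectrum = [0.0] + [float(n)] * (n - 1)
    return von_neumann_entropy(spectrum, n - 1, n)
```

For $K_n$ the rescaled eigenvalues are $n-1$ copies of $1/(n-1)$, so $S_{VN} = \log(n-1)$, consistent with higher entropy for more uniform degree distributions.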
For instance, starting from a random network with a link density of $0.5$, the first case is obtained when this is compared with another random network of the same size and link density; the second case is instead obtained by inverting the activation of links in the upper left and bottom right quarters of the adjacency matrix. The behaviour of the von Neumann entropy in these two situations is depicted in Fig. \ref{fig:01a} - note that the right panels depict the evolution of the Z-score of $S_{VN}$, as calculated against ensembles of equivalent random networks. It is clear that, even though the von Neumann entropy may be an alternative metric for comparing network structures, with a substantially lower computational cost, the proposed methodology is able to detect more complex changes, and is therefore more reliable in real-world situations. \begin{figure*}[!tb] \begin{center} \includegraphics[width=0.8\textwidth]{Fig02} \caption{Evolution of the Z-score of the von Neumann entropy, as a function of the rewiring $\alpha$, for two synthetic networks. Panel meanings are as in Fig. \ref{fig:01}.}\label{fig:01a} \end{center} \end{figure*} \section{Application to real-world systems} \label{sec:Applications} \subsection{World-wide air transport network} \label{sec:ATN} As a first test case, we here consider the network created by flights between the top-50 and top-200 world airports, as extracted from the {\it Sabre Airport Data Intelligence} data set. As previously proposed \cite{sun2017worldwide, sun2017node}, nodes represent airports, pairwise connected when the total number of passengers per month using a direct flight between them is larger than $1000$, {\it i.e.} at least $\approx 33$ passengers per day. $72$ snapshots are available, representing the monthly evolution of the system between January 2010 and December 2015.
The air transport network is known to present a strong seasonality, both on short ({\it i.e.} daily) and long (monthly and yearly) time scales \cite{zanin2013modelling}. This magnifies the importance of using a correct temporal representation, as projecting the system onto a single atemporal network may result in severe topological distortions \cite{rocha2017dynamics}. This fact is here confirmed by Fig. \ref{fig:02}, which represents the evolution of three topological metrics (link density, modularity and assortativity) through time - note the annual sinusoidal behaviour of all curves. For a detailed discussion of the effect of including different sets of airports on the observed topological metrics, the reader can refer to \cite{belkoura2016multi}. \begin{figure*}[!tb] \begin{center} \includegraphics[width=0.8\textwidth]{Fig03} \caption{Evolution of the link density, modularity (as calculated through the Louvain algorithm \cite{blondel2008fast}) and assortativity of the world air transport network through time. Solid and dashed lines respectively represent the networks for the top-50 and top-200 world airports. All series are normalised by subtracting the value at $t=0$, in order to make them start from $0.0$. }\label{fig:02} \end{center} \end{figure*} The evolution of the $\log_{10}$ of the $p$-value of the $IC^\dagger$ test, for all possible pairs of months, is depicted in the top panels of Fig. \ref{fig:03}. Light colours represent changes with a random structure; dark colours, the presence of a meso-scale regularity. In the case of $50$ airports, it is interesting to see bright squares on the main diagonal, of size $6 \times 6$, corresponding to the summer and winter seasons - this is to be expected, as flights seldom change within the same season, and differences are thus the consequence of small and random adjustments in the schedules.
The yearly seasonality of air transport is also evident in the case of $200$ airports, with bright colours concentrating around the $\pm 12$ and $\pm 24$ diagonals. When the time distance between two snapshots is greater than two years, and when consecutive summer/winter pairs are compared, the $IC^\dagger$ test suggests that changes are not random: they thus correspond to systematic reconfigurations of the air transport market, driven by business considerations, which cannot be explained by a random rewiring alone. As a comparison, the bottom left panel of Fig. \ref{fig:03} depicts the evolution of the normalised global overlap $O$ for the case of $50$ airports. While {\it prima facie} the colour map is similar to the one presented in the top left panel of Fig. \ref{fig:03}, several differences can be observed, especially far away from the main diagonal - {\it i.e.} for distances greater than 2 years. In order to clarify these differences, the bottom right panel reports a scatter plot comparing the values yielded by $IC$ and $O$. While there is a general positive correlation, it is possible to find completely different $IC$ values for the same overlap. For instance, for $O \approx 0.95$, one can find instances with $-15 < \log_{10} p$-value $ < -2$. This suggests that small changes, {\it i.e.} high overlaps, can be due to both (almost) random and strongly structured evolutions. The $IC$ thus yields a more complete view of the evolution of the network, providing information (specifically, the nature of the changes) that is disregarded by other metrics. \begin{figure*}[!tb] \begin{center} \includegraphics[width=0.9\textwidth]{Fig04} \caption{Analysis of the world-wide air transport network. (Top) Evolution of the structure of changes, for the top-$50$ (left panel) and top-$200$ (right panel) networks. The colour of each point represents the $\log_{10}$ of the $p$-value of the $IC$ test, with light (dark) shades indicating random (structured) changes.
(Bottom Left) Evolution of the structure, for the top-$50$ networks, as yielded by the normalised global overlap $O$. (Bottom Right) Scatter plot comparing the results yielded by the $IC$ and $O$, for the top-$50$ networks.}\label{fig:03} \end{center} \end{figure*} \subsection{Hospital contact network} \label{sec:Hosp} As a second example, we here consider the temporal network of contacts in the geriatric unit of a Lyon university hospital, including patients and health care workers, as described in Refs. \cite{vanhems2013estimating,Lyon2017}. Nodes represent 46 health care workers and 29 patients, and links close-range interactions between them, as detected by wearable sensors. The full data set spans from Monday, December 6, 2010 at 1:00 pm to Friday, December 10, 2010 at 2:00 pm, with a temporal resolution of 20 seconds. We extracted a set of 97 contact networks by aggregating all contacts made within a one-hour interval, in order to avoid the sparsity characterising higher temporal resolutions. The median density of the network is $0.23$, the median and standard deviation of the average shortest path length are $2.07 \pm 0.38$, and the median clustering coefficient is $0.46$. \begin{figure*}[!tb] \begin{center} \includegraphics[width=0.9\textwidth]{Fig05} \caption{Analysis of the hospital contact networks. (Left) Evolution of the structure of changes; note that the represented values correspond to $IC^*$. (Centre) Hourly evolution of the link density through the data set. (Right) Evolution of the structure as yielded by the normalised global overlap $O$.}\label{fig:04} \end{center} \end{figure*} In a way similar to Fig. \ref{fig:03}, Fig. \ref{fig:04} (Left) represents the evolution of the structure of changes, for all pairs of available networks. Note that, in this case, $IC^*$ is used, such that values close to one (smaller than one) indicate random (respectively, structured) changes.
A clear trend, with a $24$-hour period, can be identified - as confirmed by the central panel, depicting the evolution of the link density across several days. A comparison between $IC$ and the global overlap can be made by considering the right panel of Fig. \ref{fig:04}, representing an equivalent analysis performed with the latter metric. Some interesting situations can be detected. For instance, one can observe that several time windows correspond to a very high $IC$ and, at the same time, to a very low global overlap - see, for instance, the lower part of both colour maps. A high $IC$ can nevertheless also be found in the square on the main diagonal, at around $36$ hours, which corresponds to a high global overlap. The presence of a random change between two snapshots is thus not correlated with their overlap: it can appear both when a few and when most of the links are rewired. More generally, the analysis of this system indicates that it presents two different regimes. On one hand, most of the time the contact network evolves in a structured manner, reflecting the fact that health care workers perform regular tasks. On the other hand, nights are characterised by fewer contacts, which develop in a random fashion - possibly the result of emergencies and other random situations. \subsection{Brain functional networks} \label{sec:brain} As a third case study, we present an analysis of the brain activity of multiple healthy subjects during a resting state, as made available by the Human Connectome Project (HCP) \cite{larson2013adding}. Magnetoencephalographic (MEG) recordings \cite{hamalainen1993magnetoencephalography} were performed on a group of $10$ individuals, obtaining for each of them $248$ time series (each representing one MEG sensor) with $149,646$ points. Note that only a subset of the original group of people has been considered here, in order to ensure homogeneity in the number of channels and time series length.
Functional networks were then reconstructed as described in \cite{buldu2017frequency}: firstly, by extracting the time series corresponding to four standard bands (theta $[3-8]$ Hz, alpha $[8-12]$ Hz, beta $[12-30]$ Hz, and gamma $[30-100]$ Hz); secondly, by calculating the Mutual Information (MI) between each pair of channels; and finally, by binarising the resulting networks, through a threshold defined by surrogate time series obtained with a block-permutation procedure \cite{canolty2006high}. The final result is thus a set of four functional networks per subject, representing brain activity at rest in four frequency bands. For further details about the recording and data processing, the reader is referred to Refs. \cite{larson2013adding, buldu2017frequency}. As a first objective, we here want to show that the proposed algorithm can be used to quantify and describe the nature of the differences between the networks representing different frequency bands. The average and standard deviation of $IC^*$ when comparing each person's four networks are reported in Tab. \ref{tab:Meg}. It can be seen that results consistently fall within the range $(0.78, 0.90)$, thus indicating the presence of structural differences between the networks corresponding to different frequency bands. This is to be expected, as these bands are supposed to correspond to different functional tasks, contributing differently to the overall resting state activity, and are therefore not equivalent \cite{brookes2011investigating, hillebrand2012frequency}.
\begin{table*}[!tb] \centering \begin{tabular}{|l|c|c|c|c|} \hline & {\bf Theta} & {\bf Alpha} & {\bf Beta} & {\bf Gamma} \\ \hline {\bf Theta} & --- & $0.8111 \pm 0.1215$ & $0.8103 \pm 0.1099$ & $0.8943 \pm 0.0446$ \\ \hline {\bf Alpha} & $0.8111 \pm 0.1215$ & --- & $0.7864 \pm 0.1555$ & $0.8166 \pm 0.1041$ \\ \hline {\bf Beta} & $0.8103 \pm 0.1099$ & $0.7864 \pm 0.1555$ & --- & $0.8069 \pm 0.1254$ \\ \hline {\bf Gamma} & $0.8943 \pm 0.0446$ & $0.8166 \pm 0.1041$ & $0.8069 \pm 0.1254$ & --- \\ \hline \end{tabular} \caption{Average and standard deviation of the $IC^*$ for the ten considered subjects, when networks of different frequency bands are pair-wise compared.} \label{tab:Meg} \end{table*} A quite different picture nevertheless arises when one shifts the focus to subjects. Fig. \ref{fig:05} (Left) depicts the average and standard deviation of the $IC^*$ values for each subject, {\it i.e.} corresponding to pair-wise comparing the four networks of each subject. A greater inter-subject variability emerges, with the average $IC^*$ varying between $0.740$ for subject $4$ and $0.927$ for subject $8$. An even stronger effect can be observed for the $IC^*$ between frequency bands alpha and beta, as depicted in Fig. \ref{fig:05} (Right): the same two subjects present values of respectively $0.627$ and $1.217$. This last result highlights an important fact: alpha and beta bands can contribute to the global resting state activity in very different ways. In some subjects, as in subject $4$, they have completely different topologies, while in others (as in subject $8$) their differences are only due to random fluctuations. More generally, different frequency bands interact with each other in a subject-dependent way, thus with high inter-subject variability. These results are aligned with previous MEG studies reporting a low reproducibility for resting states in test-retest experiments \cite{deuker2009reproducibility, telesford2013exploration}.
\begin{figure*}[!tb] \begin{center} \includegraphics[width=0.90\textwidth]{Fig06} \caption{(Left) Average and standard deviation of the $IC^*$ values for all pair-wise comparisons of the four (theta, alpha, beta and gamma) networks, as a function of the considered individual. (Right) $IC^*$ resulting from comparing the networks for the alpha and beta frequency bands, for all ten individuals.}\label{fig:05} \end{center} \end{figure*} \section{Finding a system's natural frequency: network self-correlations and correlograms} \label{sec:Freq} If a set of networks represents the evolution of the connectivity of a system through time, the parallelism with time series analysis can be pushed one step further by defining the equivalent of a network auto-correlation function. This requires calculating the similarity of the sequence of networks with itself, when one of the two instances is time-displaced with respect to the other. Let us denote by $S$ the matrix of similarity, whose element $s_{i, j}$ encodes the similarity of the two networks respectively representing the system at times $i$ and $j$ - note that such a matrix is completely equivalent to the results presented in Figs. \ref{fig:03} and \ref{fig:04}. The auto-correlation of the sequence of $N$ networks, for a time displacement of $t > 0$, is given by: \begin{equation} C(t) = \frac{1}{N - t} \sum _{i = 0} ^{N - t - 1} s_{i + t, i} = \frac{1}{N - t} \sum _{i = 0} ^{N - t - 1} IC( A_{i+t}, A_i ). \label{eq:self-corr} \end{equation} In the r.h.s. of Eq. \ref{eq:self-corr}, the $IC$ measure is used as a proxy of the similarity between two networks; to be more precise, this self-correlation thus assesses how a sequence of networks is {\it intentionally} equivalent to itself, excluding the presence of uncorrelated noise (unintentional changes) in the links. $C(t)$ is, by construction, equivalent to the average of the $t$-diagonal of $S$, or of the matrices depicted in Figs. \ref{fig:03} and \ref{fig:04}.
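Eq. \ref{eq:self-corr} amounts to averaging the $t$-th diagonal of the similarity matrix $S$, whatever similarity measure its entries encode; a minimal sketch:

```python
def correlogram(S):
    """C(t): average of the t-th diagonal of the similarity matrix S,
    for all displacements t = 0, ..., N - 1."""
    N = len(S)
    return [sum(S[i + t][i] for i in range(N - t)) / (N - t)
            for t in range(N)]

# Illustrative similarity matrix for a system that exactly repeats
# itself every `period` snapshots: peaks appear at multiples of `period`.
period, N = 4, 16
S = [[1.0 if (i - j) % period == 0 else 0.0 for j in range(N)]
     for i in range(N)]
C = correlogram(S)
```

The local maxima of the resulting correlogram then sit at multiples of the system's natural period.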
\begin{figure*}[!tb] \begin{center} \includegraphics[width=0.90\textwidth]{Fig07} \caption{Correlograms for the air transport (left panel) and hospital networks (right panel). While the $y$ axes of both panels represent the same concept, the specific measure changes: the $\log_{10}$ of the $p$-value of $IC^\dagger$ in the former case, and $IC^*$ in the latter. See main text for definitions.}\label{fig:06} \end{center} \end{figure*} By calculating $C(t)$ for all values of $t$, it is possible to construct a full correlogram of the evolution of the studied system, with the maxima representing its natural frequencies. In order to illustrate this idea, Fig. \ref{fig:06} depicts the correlograms for the air transport networks (left panel) and the hospital networks (right panel) - the brain functional networks have not been considered here, as they do not represent a temporal evolution. The respective matrices $S$ encode different variants of the $IC$ metric: the $\log_{10}$ of the $p$-value of $IC^\dagger$ for the former (Fig. \ref{fig:03}), and $IC^*$ for the latter (Fig. \ref{fig:04}); as a consequence, the $y$ axes of the two panels have different scales. This is not a problem as long as the meaning is similar; in this case, both $IC^* \rightarrow 1$ and $IC^\dagger \rightarrow 0$ indicate highly similar networks, and both lie in the top part of the graph. As should be expected, the maximum in both correlograms is located at $t = 0$. Local maxima can additionally be found at $24k$ ($k = 1, 2, \dots$) for the hospital data set, corresponding to a daily activity cycle; and at $12k$ ($k = 1, 2, \dots$) in the case of the air transport, indicating a yearly seasonality. Finally, it has to be highlighted that any other metric can be used within Eq. \ref{eq:self-corr} to calculate this correlogram, as for instance the global overlap $O$.
Results would nevertheless have a different meaning, as the $IC$ allows not just quantifying the time required for returning to a given configuration, but also ensuring that differences are not due to structural changes. \section{Discussion and conclusions} \label{sec:Concl} Beyond the assessment of the raw differences between two networks, a more complex and challenging problem is to detect whether these differences are due to random modifications or to organised forces. The two problems are complementary and not necessarily correlated. The network structure of a system may substantially change between two measurements, but still be the same topology deformed by strong observational noise. On the other hand, small changes may be due to the targeted (intentional) attempt of, {\it e.g.}, promoting a node. In this contribution we presented the use of the Information Content \cite{zanin2014information} as a way of assessing the presence of meso-scale structures in the difference between two networks. The effectiveness of the metric has been demonstrated on several synthetic network evolutions, and tested with three real data sets respectively representing social, technological and biological systems. We additionally discussed the differences between the proposed approach and two {\it a priori} similar metrics, {\it i.e.} the network correlation \cite{bianconi2013statistical} and the von Neumann entropy \cite{braunstein2006laplacian, braunstein2006some}. The availability of a similarity metric further allows adapting some standard techniques of time series analysis to the study of the evolution of networked systems. We here considered the case of self-correlations and correlograms, and showed that the natural frequency of the system, in terms of recurrence of intentional network changes, can be estimated from the maxima of the network self-correlation.
While not explicitly discussed here, the proposed analysis can be extended to the more general case of cross-correlation, in which multiple sequences of networks, for instance representing two or more systems, are pair-wise analysed. Correlograms could also be used to select the best time resolution for sampling temporal networks, a topic still to be explored \cite{rocha2017sampling}. As a final thought, a hidden assumption of this work is that the networks to be compared are expected to be topologically compatible, {\it i.e.} to have the same number of nodes. While this holds for multiplex networks, general multi-layer and temporal graphs can have a variable size. The proposed methodology can still be used, provided an initial pre-processing is performed: for instance, the cores composed of nodes common to both networks could be isolated; while some information would be lost, the main evolutionary trends could still be characterised. Furthermore, networks coming from different systems, {\it e.g.} respectively representing brain activity and air transport, could in principle be compared. Nevertheless, it would firstly be necessary to {\it match} nodes between both networks, that is, to create a map relating each node of the first network with the topologically equivalent one in the second, by means of {\it e.g.} the SimRank \cite{jeh2002simrank} or similar algorithms.
\section{Introduction} Accessing long timescales in molecular dynamics (MD) simulations remains a longstanding challenge. Because the integration time step is limited to a few femtoseconds, and despite recent progress in methods and hardware \cite{Lindorff-Larsen2011,Buch2011,Tiwary2015a,Paul2017}, it remains difficult to access timescales of milliseconds and beyond. This leaves many applications outside the realm of MD, including structural rearrangements in biomolecules and the binding and unbinding of ligands and drug molecules. Many of these reactions are so-called `rare events', where the timescale of the reaction is orders of magnitude longer than the time it takes to cross the barrier between the states. In such cases, most of the simulation time ends up being spent simulating the local fluctuations inside free energy basins. To study phenomena that involve basin-to-basin transitions occurring on long timescales, one typically has to rely on some combination of machine parallelism \cite{Dror2012,Lane2013}, advanced strategies for analysing simulations \cite{Prinz2011,Trendelkamp-Schroer2016,Chong2016} and enhanced sampling methods \cite{Bernardi2015,Valsson2016,DeVivo2016,Bruce2018} that allow for efficient exploration of phase space. Metadynamics is one such popular and now commonly used enhanced sampling method; it involves the periodic application of a history-dependent biasing potential to a few selected degrees of freedom, typically called collective variables (CVs) \cite{Laio2002}. Through this bias, the system is discouraged from getting trapped in low energy basins, and one can observe processes that would be far beyond the timescales accessible in normal MD, while still maintaining complete atomic resolution.
Originally designed to explore and reconstruct the equilibrium free energy surface \cite{Bussi2015}, metadynamics has recently been shown to also allow the calculation of kinetic properties \cite{Tiwary2013,McCarty2015,Tiwary2015,Tiwary2015a,Mondal2016,Fleming2016,Tung2016,Sun2017,Fu2017,Wang2017,Tiwary2017,Casasnovas2017,Tiwary2017a}. In particular, inspired by the pioneering work of Grubm{\"u}ller \cite{Grubmueller1995} and Voter \cite{Voter1997}, it has been shown that the unbiased kinetics can be correctly recovered from metadynamics simulations using a method called infrequent metadynamics (InMetaD) \cite{Tiwary2013}. The basic idea in InMetaD is to add a bias to the free energy landscape sufficiently infrequently that only the free energy basins, but not the free energy barriers, experience the biasing potential $V({s},t)$, where $t$ is the simulation time and $s$ is one or more CVs chosen to distinguish the different free energy basins. By adding bias infrequently enough compared to the time spent in barrier regions, the landscape can be filled up to speed up the transitions without perturbing the sequence of state-to-state transitions. The effect of the bias on time can then be reweighted through an acceleration factor: \begin{equation} \alpha(\tau)=\langle e^{\beta V({s},t)} \rangle. \end{equation} Here $\beta$ is the inverse temperature and the angular brackets denote an average over a metadynamics run up until the simulation time $\tau$. In the application of InMetaD to recover the correct rates, there are two key assumptions. First, the state-to-state transitions are of the rare event type: namely, the system is trapped in a basin for a duration long enough that memory of the previous history is lost, and when the system does translocate into another basin, it does so rather rapidly. Second, it is important that no substantial bias is added to the transition state (TS) region during the simulation.
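As a concrete illustration of the reweighting, the acceleration factor $\alpha(\tau)$ defined above can be accumulated as a running average of the instantaneous boost $e^{\beta V(s,t)}$ along the trajectory. The following is a minimal sketch (not the actual PLUMED implementation); `bias_values` stands for the bias experienced by the system at each sampled time:

```python
import numpy as np

def acceleration_factor(bias_values, beta):
    """Running estimate of alpha(tau) = <exp(beta * V(s, t))>,
    averaged over the bias experienced up to each simulation time."""
    boost = np.exp(beta * np.asarray(bias_values, dtype=float))
    return np.cumsum(boost) / np.arange(1, len(boost) + 1)

# Before any bias is deposited, V = 0 and alpha stays exactly 1;
# as bias accumulates, alpha grows and rescales the simulation time.
alpha = acceleration_factor([0.0, 0.0, 0.5, 1.0], beta=1.0)
```

The physical time elapsed up to $\tau$ is then recovered as $\alpha(\tau)\,\tau$.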
This requirement can be met by adjusting the bias deposition frequency, determined by the time between two instances of bias deposition. If the deposition frequency is kept low enough, it becomes possible to keep the TS region unbiased. This second requirement, however, also means that it takes longer to fill up the basin, revealing one of the practical limitations of applying InMetaD. It is this second requirement that we address and improve in this work. In particular, we asked ourselves: can we design a bias deposition scheme in which the frequency is high near the bottoms of the free energy basins, but decreases gradually so as to lower the risk of biasing the TS regions? Our scheme, which we term Frequency-Adaptive Metadynamics (FaMetaD), is illustrated using a ligand (benzene) unbinding from the T4 lysozyme (T4L) L99A mutant as an example (Fig. 1). In particular, we designed a strategy that uses a high frequency at the beginning of the simulation (to fill up the basin quickly) and then progressively slows down (to minimize the risk of perturbing the TS). In this way, we aim to improve the reliability, accuracy and robustness of the calculations without additional computational cost. \begin{figure}[htbp] \begin{center} \mbox{ {\includegraphics[height=6cm]{Fig1a.pdf}} {\includegraphics[height=6cm]{Fig1b.pdf}} } \end{center} \caption{ {\bf A schematic picture to show the application of frequency-adaptive metadynamics (FaMetaD) to calculate the off-rate of protein-ligand systems.} The protein-ligand system (left panel) is benzene (light and dark blue spheres) binding to the L99A mutant of T4 lysozyme (green cartoon) \cite{Wang2017}. Frequency-adaptive metadynamics fills the free energy basin quickly at the beginning of the simulation, but adds the bias more slowly at later stages, when the system moves close to the transition state regions.
The right panel shows a typical trajectory of how the time between deposition of the bias is adjusted on-the-fly in a FaMetaD run. The typical frequencies used in normal metadynamics ($\tau_0$) and InMetaD ($\tau_c$) are labeled by black arrows. } \end{figure} \section{Methods} Our adaptive frequency scheme reads: \begin{equation} \tau_{dep}(t)=\min \left\{ \tau_0 \cdot \max\left(\frac{\alpha(t)}{\theta},1\right),\tau_c \right\} \end{equation} where $\tau_0$ is the initial deposition time between adding a bias, similar to the relatively short time used in normal metadynamics, and $\tau_c$ is a cut-off value for the deposition time, equal to or larger than the deposition time originally proposed in InMetaD. These two parameters bridge normal metadynamics and InMetaD by controlling the minimal and maximal bias deposition frequency. $\alpha(t)$ is the instantaneous acceleration factor at simulation time $t$ in a metadynamics simulation. Through the starting deposition time $\tau_0$, we can modulate the enhancement in our ability to fill free energy wells, relative to an InMetaD run performed with a constant stride of $\tau_c$. In practice, the choice of $\tau_c$ depends on the available computational resources. The key free parameter here is $\theta$, the threshold value of the acceleration factor that triggers the gradual change from normal (frequent) metadynamics to InMetaD. The choice of $\theta$ requires some care. On one hand, $\theta$ should be as large as possible to delay the switch from normal metadynamics to InMetaD, so as to gain a significant enhancement in basin filling. On the other hand, too large a $\theta$ could lead to problems: a transition might occur while the deposition time $\tau_{dep}(t)$ is still shorter than the typical duration $\tau_c$ of a reactive trajectory crossing over from one basin to another.
We now estimate a value for $\theta$ in terms of the expected transition time $\tau_{exp}$ (available from experimental measurements or an estimate), the simulation time $\tau_{sim}$ (determined also by computational resources), and a `safety coefficient' $C_s$. We seek the following to hold true: \begin{equation} \theta \le \frac {\tau_{exp}} {\tau_{sim}C_s}. \end{equation} If we use a protein-drug system as an example, and we (i) can estimate the residence time to be roughly on the order of a second ($\tau_{exp}=1\,{\rm s}$), (ii) aim to observe the transition within a 100 ns metadynamics run ($\tau_{sim}=100\,{\rm ns}$), and (iii) use $C_s=10^2$ to counteract the risk that a transition time falls in the long tail of the Poisson distribution, we thus obtain $\theta=\frac{1\,{\rm s}}{100\,{\rm ns} \times 10^2}=10^5$. As we demonstrate by an example below, choosing too high a value of $\theta$ leads to perturbed (and erroneous) kinetics; an error that may be detected by testing whether the observed distribution of transition times follows a Poisson distribution \cite{Salvalaglio2014}. As per Eq. (2), the frequency is also changed as a function of $\alpha(t)$. In practice, we observed significant fluctuations of $\tau_{dep}(t)$, which might cause problems if $\tau_{dep}(t)$ is small at the transition time point. Therefore we finally modified Eq. (2) to become a monotonically increasing function: \begin{equation} \tau_{dep}^{'}(t)=\max\left(\tau_{dep}((N-1)\Delta t),\tau_{dep}(N \Delta t)\right) \end{equation} where $\tau_{dep}(N\Delta t)$ is the instantaneous deposition time at step $N$ in a metadynamics simulation with a MD time step $\Delta t$. The frequency-adaptive scheme was implemented in a development version of the PLUMED2.2 code \cite{Tribello2014}.
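To make the scheme concrete, the logic of Eq. (2) and its monotonic modification can be sketched in a few lines of Python (an illustration only; the actual implementation lives inside PLUMED and operates on the running acceleration factor):

```python
def tau_dep(alpha_t, tau0, tau_c, theta):
    """Eq. (2): tau_dep(t) = min(tau0 * max(alpha(t) / theta, 1), tau_c)."""
    return min(tau0 * max(alpha_t / theta, 1.0), tau_c)

def tau_dep_monotonic(alphas, tau0, tau_c, theta):
    """Monotonic modification: never let the deposition time decrease."""
    out, prev = [], 0.0
    for a in alphas:
        prev = max(prev, tau_dep(a, tau0, tau_c, theta))
        out.append(prev)
    return out

# Normal-metadynamics pace (tau0) while alpha < theta, then a gradual
# slow-down until the InMetaD-like cutoff tau_c is reached.
times = tau_dep_monotonic([1.0, 50.0, 5e3, 1e6],
                          tau0=2.0, tau_c=200.0, theta=100.0)
# times == [2.0, 2.0, 100.0, 200.0]
```

With these illustrative parameters the deposition time stays at $\tau_0$ until $\alpha(t)$ exceeds $\theta$, then grows linearly with $\alpha(t)$ up to the cap $\tau_c$.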
\begin{table*} \caption{\bf Conformational Transition Times of Ace-Ala3-Nme} \begin{center} \begin{threeparttable} \begin{tabular}{c ccc c c} \hline \rowcolor{Gray} & Parameters: $\tau_0$ (ps), $\tau_c$ (ps), h (kJ/mol) & $\tau_{slow}$ ($\mu s$) & P-value & Cost ($\mu$s) & Set \\ \hline Unbiased MD & T=300K & 11$\pm$2\tnote{a} & & 300 & \\ InMetaD & $\tau_0=200$,$h=0.4$ & 16$\pm$5 & 0.4$\pm$0.3 & 2.8 & \\ & $\tau_0=100$,$h=0.4$ & 12$\pm$3 & 0.3$\pm$0.2 & 1.5 & A \\ & $\tau_0=40$,$h=0.4$ & 19$\pm$6 & 0.3$\pm$0.2 & 0.8 & B \\ & $\tau_0=20$,$h=0.4$ & 16$\pm$5 & 0.1$\pm$0.2 & 0.5 & C \\ & $\tau_0=10$,$h=0.4$ & 32$\pm$15 & 0.1$\pm$0.1 & 0.3 & D \\ & $\tau_0=5$,$h=0.4$ & 92$\pm$38 & 0.02$\pm$0.05 & 0.2 & E \\ & $\tau_0=2$,$h=0.4$ & 98$\pm$61 & 0.01$\pm$0.01 & 0.1 & F \\ FaMetaD & $\tau_0=2$,$\tau_c=200$,$\theta=1$,$h=0.4$ & 15$\pm$3 & 0.4$\pm$0.3 & 1.5 & A \\ & $\tau_0=2$,$\tau_c=200$,$\theta=10$,$h=0.4$ & 14$\pm$3 & 0.5$\pm$0.3 & 0.6 & B \\ & $\tau_0=2$,$\tau_c=200$,$\theta=40$,$h=0.4$ & 19$\pm$6 & 0.2$\pm$0.2 & 0.4 & C \\ & $\tau_0=2$,$\tau_c=200$,$\theta=100$,$h=0.4$ & 24$\pm$7 & 0.1$\pm$0.1 & 0.3 & D \\ & $\tau_0=2$,$\tau_c=200$,$\theta=500$,$h=0.4$ & 29$\pm$10 & 0.1$\pm$0.2 & 0.2 & E \\ & $\tau_0=2$,$\tau_c=200$,$\theta=10000$,$h=0.4$ & 24$\pm$11 & 0.02$\pm$0.03 & 0.1 & F \\ \hline \end{tabular} \begin{tablenotes}[para,flushleft] \footnotesize \item [a] From ref. \cite{Wang2017}. \end{tablenotes} \end{threeparttable} \end{center} \end{table*} \section{Results and Discussion} \subsection{Test on a four-state model system} To benchmark our results, we first consider a five-residue peptide (Ace-Ala3-Nme) as a model system with a non-trivial free energy landscape involving multiple conformational states \cite{Wang2017}. We consider the slowest state-to-state transition time, $\tau_{slow}$, which has been estimated to be $\sim 11 \mu s$ from unbiased MD \cite{Wang2017}, as the benchmark target.
We used the same computational setup and CVs as previously described \cite{Wang2017}. As a baseline, we used InMetaD with a fixed Gaussian bias height ($h=0.4$ kJ/mol) but with different times of bias deposition ranging from 2 ps to 200 ps (40 runs for each parameter set) (Table 1 and Fig. S1). The reliability of the calculated transition times was verified a posteriori using a Kolmogorov-Smirnov test \cite{Salvalaglio2014} to examine whether their cumulative distribution function is Poissonian. If the $p$-value is low (e.g. less than 0.05) then it is likely that the distribution has been perturbed by too aggressive an application of the biasing potential. As expected, we find that simulations that used a low frequency ($\tau_0\ge20$ ps) gave consistent estimates of $\tau_{slow}$, with values close to those observed in unbiased MD, while the simulations with high frequency ($\tau_0\le10$ ps) resulted in substantially longer and less reliable times (Table 1 and Fig. S1). This correlation between the $p$-value and the deposition time in the InMetaD simulations can be explained by the fact that a higher bias frequency results in a higher risk of biasing the TS regions and hence a more non-Poissonian distribution. We also note that the transition times appear to be over-estimated in these cases. To compare the FaMetaD simulations with the InMetaD baseline, we designed six sets (A to F) of FaMetaD simulations to have comparable computational costs (Table 1 and Fig. 2). Overall, the results of FaMetaD show the same trends as InMetaD and support the same conclusions. In other words, both InMetaD and FaMetaD reveal the trade-off between reliability, accuracy and computational cost. There are, however, some important differences that highlight the improvement obtained through FaMetaD.
First, in each case there is a small but notable improvement in the reliability of the calculations, as judged by the greater $p$-values, which suggest that FaMetaD perturbs the TS less than InMetaD. Most important, however, is the accuracy of the calculations. While both InMetaD and FaMetaD achieve accurate results when using the most conservative parameters (sets A--C), there are dramatic differences when more aggressive parameters are used (D--F). Importantly, we observe a much more graceful decline in accuracy with more aggressive parameters in FaMetaD compared to InMetaD. Together, these results suggest that the frequency-adaptive scheme can improve not only the reliability but also the accuracy of the calculation, without increasing the computational burden, and that the reliability is more robust to the choice of the simulation parameters. \begin{figure}[htbp] \begin{center} \mbox{ {\includegraphics[width=12cm]{Fig2.pdf}} } \end{center} \caption{ {\bf Comparing the accuracy, reliability and efficiency of frequency-adaptive metadynamics and infrequent metadynamics.} The upper panel shows the key parameters ($\tau_0$ in InMetaD, $\theta$ in FaMetaD) of the six sets (A to F) of paired simulations. The middle panels show a comparison of the accuracy and reliability of the results. The grey line shows the $\tau_{slow}$ obtained from unbiased MD simulations. The bottom panel shows the computational cost of each set of simulations.
} \end{figure} \subsection{Application on protein-ligand binding} \begin{table*}[ht] \caption{\bf Binding and unbinding times of T4L L99A with benzene and indole} \begin{center} \begin{threeparttable} \begin{tabular}{c cccc} \hline Methods & Parameters: $\tau_0$ (ps), $\tau_c$ (ps), $h$ (kJ/mol) & Time (ms) & p-value & Cost \\ \hline \rowcolor{Gray} \multicolumn{5}{c}{Set 1: Benzene Binding ($\tau_{on}^{BNZ}$)} \\ InMetaD\tnote{b} & $\tau_0=40$,$h=0.4$ & 9$\pm$5 & 0.1$\pm$0.1 & 4.4$\mu$s \\%(221ns/run) \\ FaMetaD & $\tau_0=1$,$\tau_c=100$,$h=0.2$,$\theta=10^3$ & 14$\pm$7 & 0.2$\pm$0.1 & 3.0$\mu$s \\%(169ns/run) \\ \rowcolor{Gray} \multicolumn{5}{c}{Set 2: Benzene Unbinding ($\tau_{off}^{BNZ}$)} \\ InMetaD\tnote{b} & $\tau_0=100$,$h=0.2$ & 168$\pm$59 & 0.4$\pm$0.3 & 6.7$\mu$s \\%(334ns/run) \\ FaMetaD & $\tau_0=1$,$\tau_c=100$,$h=0.2$,$\theta=10^3$ & 176$\pm$68 & 0.3$\pm$0.2 & 5.5$\mu$s \\%(274ns/run) \\ \rowcolor{Gray} \multicolumn{5}{c}{Set 3: Indole Unbinding ($\tau_{off}^{IND}$)} \\ InMetaD & $\tau_0=100$,$h=0.2$ & 102$\pm$87 & 0.2$\pm$0.2 & 4.5$\mu$s \\%(298ns/run) \\ FaMetaD & $\tau_0=1$,$\tau_c=100$,$h=0.2$,$\theta=10^4$ & 168$\pm$95 & 0.2$\pm$0.1 & 2.1$\mu$s \\%(138ns/run) \\ \hline & & & & 26$\mu$s \\ \hline \end{tabular} \begin{tablenotes}[para,flushleft] \footnotesize \item [a] In all simulations, the ligand concentration is $\sim 5$ mM. \item [b] The results are from our recent work \cite{Wang2017}. \end{tablenotes} \end{threeparttable} \end{center} \end{table*} We consider now the more complex case of a ligand binding to or escaping from a buried internal cavity in the L99A mutant of T4 lysozyme, processes which occur on timescales of milliseconds or more \cite{Feher1996,Bouvignies2011,Wang2016}. To benchmark the results, we performed three sets of metadynamics simulations, up to 26$\mu$s in total (including 12 $\mu$s InMetaD simulations of T4L L99A with benzene from our recent work \cite{Wang2017}). 
In each set, we compared the results of InMetaD with those of FaMetaD. In sets 1 and 2 we calculated the time constants for binding ($\tau_{on}^{BNZ}$) and unbinding ($\tau_{off}^{BNZ}$) of benzene, while in set 3 we calculated the time for indole to escape the pocket ($\tau_{off}^{IND}$). We performed 20 independent runs for each set of simulations (using the CHARMM22* force field \cite{Piana2011} for the protein and the CGenFF force field \cite{Vanommeslaeghe2010} for the ligands) to collect the transition times, from which we obtained $\tau_{on}$ and $\tau_{off}$ (Table 2). Again, we find consistency between the results of InMetaD and FaMetaD, with e.g. $\tau_{on}^{BNZ}$ and $\tau_{off}^{BNZ}$ being $\sim 10$ ms and $\sim 170$ ms, respectively. As previously described \cite{Wang2017}, we can use these values to estimate the binding free energy to be $\Delta G_{binding}\approx -21$ kJ/mol, a value that agrees well with calorimetric \cite{Morton1995} and NMR \cite{Feher1996} measurements. In set 1, the InMetaD simulation was performed with $\tau_0=40$ ps and $h=0.4$ kJ/mol. The frequency-adaptive scheme allows us to perform FaMetaD with a weaker bias ($h=0.2$ kJ/mol), ending with a longer and more conservative deposition time ($\tau_c=100$ ps). This parameter set used less simulation time but resulted in a more reliable estimate. In set 2, the FaMetaD simulation was performed using the same $\tau_c=100$ ps and $h=0.2$ kJ/mol as used in the InMetaD simulation, but with $\theta=10^3$. Given that $\tau_{exp}\sim 10-100$ ms, $\tau_{sim}\sim 100$ ns and $C_s=10^2$, according to Eq. (3), this allows us to judge that $\theta=10^3$ is a fairly conservative choice. The estimated values of $\tau_{off}^{BNZ}$ in this set are indistinguishable between InMetaD and FaMetaD within the error bars, but at slightly lower computational cost. In set 3, we used similar parameters as in set 2, except for a somewhat more aggressive $\theta=10^4$.
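As an aside, the binding free energy quoted above can be reproduced with a short back-of-the-envelope script. This is our own sketch, not the authors' analysis pipeline: we assume $T=300$ K, the $\sim$5 mM ligand concentration quoted for the simulations, a 1 M standard state, and rounded benzene times from Table 2:

```python
import math

R, T = 8.314e-3, 300.0      # kJ/(mol K); temperature is an assumption
conc = 5e-3                 # M, ligand concentration quoted in Table 2 notes

tau_on, tau_off = 10e-3, 0.170     # s, rounded benzene values
k_on = 1.0 / (tau_on * conc)       # pseudo-first-order rate -> M^-1 s^-1
k_off = 1.0 / tau_off              # s^-1
K_d = k_off / k_on                 # dissociation constant, M
dG = R * T * math.log(K_d)         # ~ -20 kJ/mol (1 M standard state)
```

The result, roughly $-20$ kJ/mol, is consistent with the $\approx -21$ kJ/mol quoted in the text given the uncertainties on the transition times.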
Again, this parameter set resulted in a $\tau_{off}^{IND}$ of $168\pm95$ ms from FaMetaD, reasonably close to the estimate from InMetaD. Remarkably, FaMetaD allowed us to reduce the computational cost by more than half without loss of accuracy. Overall, the application to the case of L99A binding two ligands suggests that the frequency-adaptive scheme can improve the reliability-accuracy-efficiency balance of metadynamics-based kinetics calculations. \section{Conclusions} Many biological processes occur far from equilibrium, and kinetic properties can play an important role in biology and biochemistry. For example, there has been continued interest in determining ligand residence times in the context of drug optimization \cite{Copeland2006}, and millisecond conformational dynamics in an enzyme has been shown to underlie an intriguing phenomenon of kinetic cooperativity of relevance to disease \cite{Larion2015}. The principle of microscopic reversibility, however, sets limits to how the kinetic properties can be varied independently of thermodynamics. Thus, for example, increasing the residence time of a ligand will also increase its thermodynamic affinity, unless there is a simultaneous drop in the rate of binding. In practice it may therefore in many cases be difficult to disentangle the effects of kinetics and thermodynamics. Taken together, the above considerations suggest the need for improved methods for understanding and ultimately predicting the rate constants of biological processes. Further, the ability to calculate the kinetics of conformational exchange or ligand binding and unbinding provides an alternative approach to determine equilibrium properties \cite{Wang2017}. For these and other reasons, several methods have been developed to enable the estimation of kinetics from simulations \cite{Bruce2018}.
We have here proposed a modification to the already very powerful InMetaD algorithm, which leads to a further improvement in the reliability and accuracy of recovering unbiased transition rates from biased metadynamics simulations. The basic idea in the resulting FaMetaD approach is that, by filling up the basin more rapidly in the beginning and only using an infrequent bias near the barrier, we may spend the computational time where it is needed. We anticipate that our scheme will prove particularly useful in two respects. First, by enabling a decreased bias in the transition state region, we obtain more accurate kinetics at fixed computational cost, leading also to the observed robustness of our approach. Second, in the case of large barriers, it would be prohibitively slow to use a very infrequent bias through the entire duration of the simulation. Thus, FaMetaD provides a practical approach to study rare events that involve escaping deep free energy minima, and which occur on timescales well beyond those accessible to current simulation methods. Here we have opted to examine processes that can be studied using both InMetaD and FaMetaD, but in the future we aim to apply this approach to barrier crossing events that occur on even longer timescales. With more examples in hand, we also expect that in the future it may be possible to design improved biasing schemes to increase the computational efficiency further, while retaining the robustness of the FaMetaD approach. \section{Acknowledgement} The authors thank Tristan Bereau and Claudio Perego for a critical reading of the manuscript. K.L.-L. acknowledges funding by a Hallas-M{\o}ller Stipend from the Novo Nordisk Foundation and the BRAINSTRUC initiative from the Lundbeck Foundation. M.P. acknowledges funding from the National Centre for Computational Design and Discovery of Novel Materials MARVEL and European Union grant ERC-2014-AdG-670227/VARMET.
\section{Introduction} Point-defects in crystalline solids, being either intrinsic like vacancies, self-interstitial atoms, and their small clusters, or extrinsic like impurities and dopants, play a major role in materials properties and their kinetic evolution. Some properties of these point-defects, like their formation and migration energies, are mainly determined by the region in the immediate vicinity of the defect, where the crystal structure is strongly perturbed. An atomic description thus appears natural to model these properties, and atomic simulations relying on either \textit{ab initio}\@\xspace calculations \cite{Freysoldt2014} or empirical potentials have now become a routine tool to study point-defect structures and energies. But point-defects also induce a long-range perturbation of the host lattice, leading to an elastic interaction with other structural defects, impurities or an applied elastic field. An atomic description appears unnecessary to capture the interaction arising from this long-range part, and is sometimes even impossible because of the limited size of the simulation cell in atomic approaches. Elasticity theory then becomes the natural framework. It allows a quantitative description of the point-defect interaction with other defects. Following the seminal work of Eshelby \cite{Eshelby1956}, the simplest elastic model of a point-defect corresponds to a spherical inclusion forced into a spherical hole of slightly different size in an infinite elastic medium. This description accounts for the point-defect relaxation volume and its interaction with a pressure field (size interaction).
It can be enriched by considering an ellipsoidal inclusion, thus leading to an interaction also with the deviatoric component of the stress field (shape interaction), and by assigning different elastic constants to the inclusion (inhomogeneity) to describe the variations of the point-defect ``size'' and ``shape'' with the strain field in which it is immersed. Other elastic descriptions of the point-defect are possible. In particular, it can be modeled by an equivalent distribution of point-forces. The long-range elastic field of the point-defect and its interaction with other stress sources are then fully characterized by the first moment of this force distribution, a second-rank tensor called the elastic dipole. This description is rather natural when modeling point-defects, and it can be used to extract elastic dipoles from atomic simulations. These different descriptions are equivalent in the long-range limit, and allow for a quantitative modeling of the elastic field induced by the point-defect, as long as the elastic anisotropy of the matrix is considered. This article reviews these different elastic models which can be used to describe a point-defect and illustrates their usefulness with chosen examples. After a short reminder of elasticity theory (Sec. \ref{sec:elasticity}), we introduce the different descriptions of a point-defect within elasticity theory (Sec. \ref{sec:point_defect}), favoring the elastic dipole description and showing its equivalence with the infinitesimal Eshelby inclusion as well as with an infinitesimal dislocation loop. The next section (Sec. \ref{sec:para}) describes how the characteristics of the point-defect needed to model it within elasticity theory can be obtained either from atomistic simulations or from experiments. We finally give some applications in Sec. \ref{sec:examples}, where results of such an elastic model are compared to direct atomic simulations to assess its validity.
The usefulness of this elastic description is illustrated in this section for elastodiffusion and for the calculation of bias factors, as well as for the modeling of isolated point-defects in atomistic simulations. \section{Elasticity theory} \label{sec:elasticity} Before describing the modeling of a point-defect within elasticity theory, it is worth recalling the main aspects of the theory \cite{Landau1970}, in particular the underlying assumptions, some definitions and useful results. \subsection{Displacement, distortion and strain} Elasticity theory is based on a continuous description of solid bodies. It relates the forces, either internal or external, exerted on the solid to its deformation. To do so, one first defines the elastic displacement field. If $\vec{R}$ and $\vec{r}$ are the positions of a point in the unstrained and the strained body, respectively, the displacement at this point is given by \begin{equation*} \vec{u}(\vec{R}) = \vec{r} - \vec{R}. \end{equation*} One can then define the distortion tensor $\partial u_i \,/\, \partial R_j$ which expresses how an infinitesimal vector $\vv{\mathrm{d}{R}}$ in the unstrained solid is transformed in $\vv{\mathrm{d}{r}}$ in the strained body through the relation \begin{equation*} \mathrm{d}{r}_i = \left( \delta_{ij} + \frac{\partial u_i}{\partial R_j} \right) \mathrm{d}{R}_j, \end{equation*} where summation over repeated indices is implicit (Einstein convention) and $\delta_{ij}$ is the Kronecker symbol. Of central importance to elasticity theory is the dimensionless strain tensor, defined by \begin{align*} \varepsilon_{ij}(\vec{R}) &= \frac{1}{2}\left[ \left( \delta_{in} + \frac{\partial u_n}{\partial R_i} \right) \left( \delta_{nj} + \frac{\partial u_n}{\partial R_j} \right) - \delta_{ij} \right] \\ &= \frac{1}{2} \left( \frac{\partial u_i}{\partial R_j} + \frac{\partial u_j}{\partial R_i} + \frac{\partial u_n}{\partial R_i}\frac{\partial u_n}{\partial R_j} \right).
\end{align*} This symmetric tensor expresses the change of size and shape of a body as a result of a force acting on it. The length $\mathrm{d}{L}$ of the infinitesimal vector $\vv{\mathrm{d}{R}}$ in the unstrained body is thus transformed into $\mathrm{d}{l}$ in the strained body, through the relation \begin{equation*} \mathrm{d}{l}^2 = \mathrm{d}{L}^2 + 2 \varepsilon_{ij} \mathrm{d}{R}_i \mathrm{d}{R}_j. \end{equation*} Under the assumption of small deformations, which is central to linear elasticity, only the leading-order terms of the distortion are kept. The strain tensor then corresponds to the symmetric part of the distortion tensor, \begin{equation} \varepsilon_{ij}(\vec{R}) = \frac{1}{2} \left( \frac{\partial u_i}{\partial R_j} + \frac{\partial u_j}{\partial R_i} \right). \label{eq:strain_def} \end{equation} The antisymmetric part of the distortion tensor corresponds to an infinitesimal rigid body rotation. It does not lead to any energetic contribution within linear elasticity in the absence of internal torque. With this small deformation assumption, there is no distinction between Lagrangian coordinates $\vec{R}$ and Eulerian coordinates $\vec{r}$ when describing elastic fields. One can equally write, for instance, $\vec{u}(\vec{r})$ or $\vec{u}(\vec{R})$ for the displacement field, which are equivalent to the leading order of the distortion. \subsection{Stress} The force $\vv{\delta F}$ acting on a volume element $\delta V$ of a strained body is composed of two contributions: the sum of external body forces $\vec{f}$, and the internal forces arising from atomic interactions. Because of the mutual cancellation of forces between particles inside the volume $\delta V$, only forces corresponding to the interaction with outside particles appear in this last contribution, which is thus proportional to the surface elements $\vv{\mathrm{d} S}$ defining the volume element $\delta V$.
One obtains \begin{equation*} \delta F_i = \int_{\delta V}{ f_i \mathrm{d}{V}} + \oint_{\delta S}{\sigma_{ij}\mathrm{d} S_j}, \end{equation*} where $\sigma$ is the stress tensor defining internal forces. Considering the mechanical equilibrium of the volume element $\delta V$, the absence of a resultant force leads to the equation \begin{equation} \frac{\partial \sigma_{ij}(\vec{r})}{\partial r_j} + f_i(\vec{r}) = 0, \label{eq:equil_stress} \end{equation} whereas the absence of torque ensures the symmetry of the stress tensor. At the boundary of the strained body, internal forces are balanced by applied forces. If $\vec{T}^{\rm a} \mathrm{d}{S}$ is the force applied on the infinitesimal surface element $\mathrm{d}{S}$, this leads to the boundary condition \begin{equation} \sigma_{ij} n_j = T^{\rm a}_i, \label{eq:equil_stress_boundary} \end{equation} where $\vec{n}$ is the outward-pointing normal to the surface element $\mathrm{d}{S}$. The work $\delta w$ of these internal forces, defined per unit volume, is given by \begin{equation*} \delta w = -\sigma_{ij} \delta{\varepsilon_{ij}}, \end{equation*} where $\delta \varepsilon_{ij}$ is the strain change during the deformation increase, and the sign convention is $\delta w > 0$ when the energy flows outward from the elastic body. This leads to the following thermodynamic definition of the stress tensor \begin{equation*} \sigma_{ij} = \left( \frac{\partial e}{\partial \varepsilon_{ij}} \right)_s = \left( \frac{\partial f}{\partial \varepsilon_{ij}} \right)_T, \end{equation*} where $e$, $s$, and $f=e-Ts$ are the internal energy, entropy, and free energy of the elastic body, defined per unit volume. \subsection{Hooke's law} \label{sec:elast_Hooke} To go further, one needs a constitutive equation for the energy or the free energy.
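Before deriving the constitutive relation, the kinematic definitions above, together with the linear stress-strain relation obtained below, lend themselves to a compact numerical sketch. This is an illustration only: the isotropic form of $C_{ijkl}$ in terms of the Lamé coefficients (quoted later in the text) is used, and the numerical values of $\lambda$ and $\mu$ are made up for the example:

```python
import numpy as np

def small_strain(distortion):
    """Symmetric part of the distortion du_i/dR_j (small-strain definition).
    The antisymmetric remainder is the infinitesimal rigid rotation."""
    d = np.asarray(distortion, dtype=float)
    return 0.5 * (d + d.T)

def isotropic_stiffness(lam, mu):
    """C_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk)."""
    d = np.eye(3)
    return (lam * np.einsum("ij,kl->ijkl", d, d)
            + mu * (np.einsum("ik,jl->ijkl", d, d)
                    + np.einsum("il,jk->ijkl", d, d)))

C = isotropic_stiffness(lam=60.0, mu=26.0)      # GPa, illustrative values
assert np.allclose(C, C.transpose(1, 0, 2, 3))  # minor symmetry C_ijkl = C_jikl
assert np.allclose(C, C.transpose(2, 3, 0, 1))  # major symmetry C_ijkl = C_klij

# Hooke's law sigma_ij = C_ijkl eps_kl for a purely hydrostatic strain:
eps = small_strain(1e-3 * np.eye(3))
sigma = np.einsum("ijkl,kl->ij", C, eps)        # (3*lam + 2*mu)*1e-3/3 per axis
```

A pure infinitesimal rotation fed to `small_strain` returns zero, consistent with the remark that the antisymmetric part of the distortion carries no elastic energy.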
Taking as a reference the undeformed state corresponding to the elastic body at equilibrium without any external force, either body force or applied stress, the energy is at a minimum for $\varepsilon=0$ and then \begin{equation*} \sigma_{ij}(\varepsilon=0) = \left. \frac{\partial e}{\partial \varepsilon_{ij}} \right|_{\varepsilon = 0} = 0. \end{equation*} The leading order terms of the series expansion of the energy are then \begin{equation*} e(T,\varepsilon) = e^0(T) + \frac{1}{2} C_{ijkl}\varepsilon_{ij}\varepsilon_{kl}, \end{equation*} where $e^0(T) = e(T,\varepsilon=0)$ is the energy of the unstrained body at temperature $T$. The elastic constants $C_{ijkl}$ entering this expression are thus defined by \begin{equation*} C_{ijkl} = \frac{\partial^2 e}{\partial \varepsilon_{ij} \partial \varepsilon_{kl}}. \end{equation*} This is a fourth-rank tensor which obeys the minor symmetries $C_{ijkl}=C_{jikl}=C_{ijlk}$ because of the strain tensor symmetry, and also the major symmetry $C_{ijkl}=C_{klij}$ because partial derivatives commute. This leads to at most 21 independent coefficients, which can be further reduced by considering the symmetries of the solid body \cite{Nye1957}. This series expansion of the energy leads to a linear relation between the stress and the strain, Hooke's law, \begin{equation} \sigma_{ij} = C_{ijkl} \varepsilon_{kl}, \label{eq:stress_Hooke} \end{equation} which was summarized in 1678 by Robert Hooke as \emph{Ut tensio, sic vis}.\footnote{As the extension, so the force.} \subsection{Elastic equilibrium, superposition principle} Combining Hooke's law \eqref{eq:stress_Hooke} with the small deformation definition \eqref{eq:strain_def} of the strain tensor and the equilibrium condition \eqref{eq:equil_stress}, one obtains the equation obeyed by the displacement at equilibrium \begin{equation} C_{ijkl} \frac{\partial^2 u_k(\vec{r})}{\partial r_j \partial r_l} + f_i(\vec{r}) = 0.
\label{eq:equil_displacement} \end{equation} The elastic equilibrium is given by the solution which satisfies the boundary conditions, $\sigma_{ij}n_j = T_i^{\rm a}$ for imposed applied forces and $u_i=u_i^{\rm a}$ for imposed applied displacements. As elastic equilibrium is defined by the solution of a linear partial differential equation (Eq. \ref{eq:equil_displacement}), the superposition principle holds. If two elastic fields, characterized by their displacements $\vec{u}^1(\vec{r})$ and $\vec{u}^2(\vec{r})$, correspond to equilibrium for the respective body forces $\vec{f}^1$ and $\vec{f}^2$ and the respective boundary conditions $(\vec{u}^{\rm a1}, \vec{T}^{\rm a1})$ and $(\vec{u}^{\rm a2}, \vec{T}^{\rm a2})$, then the elastic equilibrium for the body forces $\vec{f}^1 + \vec{f}^2$ and the boundary conditions $(\vec{u}^{\rm a1} + \vec{u}^{\rm a2}, \vec{T}^{\rm a1} + \vec{T}^{\rm a2})$ is given by the sum of these two elastic fields. The total elastic energy is composed of the contributions of each elastic field taken separately and an interaction energy given by \begin{equation} \begin{split} E^{\rm int} =& \int_{V}{ \sigma_{ij}^1(\vec{r}) \, \varepsilon_{ij}^2(\vec{r}) \, \mathrm{d}{V} } \\ =& \int_{V}{ \sigma_{ij}^2(\vec{r}) \, \varepsilon_{ij}^1(\vec{r}) \, \mathrm{d}{V} }. \end{split} \label{eq:Eqinter} \end{equation} This equation can be used to define the interaction energy between two defects. The superposition principle also allows one to make use of Green's functions. The elastic Green's function $G_{kn}(\vec{r})$ is the solution of the equilibrium equation for a unit point-force \begin{equation} C_{ijkl} \frac{\partial^2 G_{kn}(\vec{r})}{\partial r_j \partial r_l} + \delta_{in} \dirac(\vec{r}) = 0, \label{eq:equil_Green} \end{equation} where $\dirac(\vec{r})$ is the Dirac delta function, \textit{i.e.}\@\xspace $\delta(\vec{r})=0$ if $\vec{r}\neq\vec{0}$ and $\delta(\vec{0})=\infty$.
$G_{kn}(\vec{r})$ therefore corresponds to the displacement along the $r_{k}$ axis for a unit point-force applied along the $r_{n}$ axis at the origin. The solution of elastic equilibrium for the force distribution $\vec{f}(\vec{r})$ is then given by \begin{align*} u_k(\vec{r}) &= \int_V{ G_{kn}( \vec{r} - \vec{r}^{\,\prime} ) f_n( \vec{r}^{\,\prime} ) \mathrm{d}{V^{\,\prime}} }, \\ \sigma_{ij}(\vec{r}) &= C_{ijkl} \int_V{ G_{kn,l}( \vec{r} - \vec{r}^{\,\prime} ) f_n( \vec{r}^{\,\prime} ) \mathrm{d}{V^{\,\prime}} }, \end{align*} where we have introduced the notation $G_{kn,l} = \partial G_{kn} \,/\, \partial r_l$ for partial derivatives. An analytical expression of the Green's function exists for isotropic elasticity. Considering the elastic constants $C_{ijkl} = \lambda \delta_{ij}\delta_{kl} + \mu( \delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})$, where $\lambda$ and $\mu$ are the Lamé coefficients, the Green's function is given by \begin{equation*} G_{kn}(\vec{r}) = \frac{1}{ 8 \pi \mu} \left[ \frac{\lambda+3\mu}{\lambda+2\mu} \delta_{kn} + \frac{\lambda+\mu}{\lambda+2\mu} \eta_k \eta_n \right] \frac{1}{r}, \end{equation*} with $r = \| \vec{r} \|$ and $\vec{\eta}=\vec{r}/r$. No analytical expression exists in the more general case of elastic anisotropy, but the Green's function, and its successive derivatives, can be calculated efficiently from the elastic constants using the numerical scheme of Barnett \cite{Barnett1972a,Bacon1980}. Whatever the anisotropy, the Green's function and its derivatives will show the same variation with the distance $r$,\footnote{The scaling with the distance $r$ is a consequence of Eq. 
\eqref{eq:equil_Green}, given that the $\delta(\vec{r})$ function is homogeneous of degree $-3$.} leading to the general expressions \begin{equation*} G_{kn}(\vec{r}) = g_{kn}(\vec{\eta}) \frac{1}{r} \textrm{\ , } G_{kn,l}(\vec{r}) = h_{knl}(\vec{\eta}) \frac{1}{r^2} \textrm{\ , \dots } \end{equation*} where the anisotropy enters only in the angular dependence $g_{kn}(\vec{\eta})$, $h_{knl}(\vec{\eta})$, \dots \section{Elastic model of a point-defect} \label{sec:point_defect} Different models can be used to describe a point-defect within elasticity theory. One such model is the elastic dipole. We first describe this model and then demonstrate the analogy with a description of the point-defect as an infinitesimal Eshelby inclusion or an infinitesimal dislocation loop. We finally introduce the polarizability of the point-defect. \subsection{Elastic dipole} \label{sec:dipole_model} A point-defect can be described in a continuous solid body as an equilibrated distribution of point-forces \cite{Siems1968,Leibfried1978,Bacon1980,Teodosiu1982}. Considering a point-defect located at the origin modeled by such a force distribution $\vec{f}(\vec{r}) = \sum_{q=1}^N{ \vec{F}^q \dirac{(\vec{r}-\vec{a}^q)}}$, \textit{i.e.}\@\xspace consisting of $N$ forces $\vec{F}^q$ each acting at position $\vec{a}^q$, the elastic displacement field of the point-defect is, according to linear elasticity theory, given by \begin{equation*} u_i(\vec{r}) = \sum_{q=1}^N{ G_{ij}( \vec{r} - \vec{a}^q ) F^q_j }, \end{equation*} where we have used the elastic Green's function. Far from the point-defect, we have $\| \vec{r} \| \gg \| \vec{a}^q \|$ and we can make a series expansion of the Green's function: \begin{multline*} u_i(\vec{r}) = G_{ij}( \vec{r} ) \sum_{q=1}^N{ F^q_j } \ - \ G_{ij,k}( \vec{r} ) \sum_{q=1}^N{ F^q_j a^q_k } \\ \ + \ \bigO{\left( \| \vec{a}^q \|^2 \right)} . \end{multline*} As the force distribution is equilibrated, its resultant $\sum_q{\vec{F}^q}$ is null.
The displacement is thus given, to the leading order, by \begin{equation} u_i(\vec{r}) = - G_{ij,k}( \vec{r} ) P_{jk}, \label{eq:dipole_displacement} \end{equation} and the corresponding stress field by \begin{equation} \sigma_{ij}(\vec{r}) = - C_{ijkl} G_{km,nl}( \vec{r} ) P_{mn}, \label{eq:dipole_stress} \end{equation} where the elastic dipole is defined as the first moment of the point-force distribution, \begin{equation} P_{jk} = \sum_{q=1}^N{ F^q_j a^q_k }. \label{eq:dipole} \end{equation} This dipole is a second rank tensor which fully characterizes the point-defect within elasticity theory \cite{Siems1968,Leibfried1978,Bacon1980,Teodosiu1982}. It is symmetric because the torque $\sum_q{ \vec{F}^q \times \vec{a}^q }$ must be null for the force distribution to be equilibrated. Equations \eqref{eq:dipole_displacement} and \eqref{eq:dipole_stress} show that the elastic displacement and the stress created by a point-defect are long-ranged, respectively decaying as $1/r^2$ and $1/r^3$ with the distance $r$ to the point-defect. The elastic dipole is directly linked to the point-defect relaxation volume. Considering a finite volume $V$ of external surface $S$ enclosing the point-defect, this relaxation volume is defined as \begin{equation*} \Delta V = \oint_S { u_i(\vec{r}) \, \mathrm{d}{S_i} }, \end{equation*} where $\vec{u}(\vec{r})$ is the superposition of the displacement created by the point-defect (Eq. \ref{eq:dipole_displacement}) and the elastic displacement due to image forces ensuring null tractions on the external surface $S$. 
Using the Gauss theorem, the equilibrium condition \eqref{eq:equil_displacement}, and the elastic dipole definition \eqref{eq:dipole}, one obtains the result \cite{Leibfried1978} \begin{equation} \Delta V = S_{iikl} P_{kl}, \label{eq:dipole_relax_vol} \end{equation} where the elastic compliances $S_{ijkl}$ are the inverse of the elastic constants, \textit{i.e.}\@\xspace $S_{ijkl}C_{klmn} = \frac{1}{2}( \delta_{im}\delta_{jn} + \delta_{in}\delta_{jm} )$. For a crystal with cubic symmetry, this equation can be further simplified \cite{Leibfried1978} to show that the relaxation volume is equal to the trace of the elastic dipole divided by three times the bulk modulus. More generally, as will become clear from the comparison with Eshelby's inclusion, this elastic dipole is the source term defining the relaxation volume of the point-defect. Its trace gives rise to the size interaction, whereas its deviator, \textit{i.e.}\@\xspace the presence of off-diagonal terms and differences in the diagonal components, leads to the shape interaction. Of particular importance is the interaction energy of the point-defect with an external elastic field $\vec{u}^{\rm ext}(\vec{r})$. Considering the point-forces distribution representative of the point-defect, this interaction energy can be simply written as \cite{Bacon1980} \begin{equation*} E^{\rm int} = -\sum_{q=1}^N{ F_i^q \, u_i^{\rm ext}(\vec{a}^q)}. \end{equation*} If we now assume that the external field varies slowly close to the point-defect, we can make a series expansion of the corresponding displacement $\vec{u}^{\rm ext}(\vec{r})$. The interaction energy is then, to first order, \begin{equation*} E^{\rm int} = - u_i^{\rm ext}(\vec{0}) \sum_{q=1}^N{ F_i^q } - u_{i,j}^{\rm ext}(\vec{0}) \sum_{q=1}^N{ F_i^q \, a_j^q}.
\end{equation*} Finally, using the equilibrium properties of the point-forces distribution, one obtains \begin{equation} E^{\rm int} = - P_{ij} \, \varepsilon^{\rm ext}_{ij}(\vec{0}), \label{eq:dipole_Einter} \end{equation} thus showing that the interaction energy is simply the contraction of the elastic dipole with the value of the external strain field at the point-defect location. Higher order contributions to the interaction energy involve successive gradients of the external strain field coupled with higher moments of the multipole expansion of the force distribution, and can generally be safely ignored. This simple expression of the interaction energy is the workhorse of the modeling of point-defects within linear elasticity in a multiscale approach. Rather than working with the elastic dipole tensor, one sometimes uses the so-called $\lambda$-tensor \cite{Nowick1972}, which expresses the variation of the homogeneous strain of the matrix with the point-defect volume concentration $c$, \begin{equation} \lambda_{ij} = \frac{1}{\Omega_{\rm at}} \, \frac{\partial \bar{\varepsilon}_{ij}}{\partial c}, \label{eq:lambda_PD} \end{equation} where $\bar{\varepsilon}$ is the homogeneous strain induced by the point-defects in a stress-free state and $\Omega_{\rm at}$ is the atomic volume of the reference solid. As will become clear when discussing parameterization of the elastic dipole from experiments (\S \ref{sec:para_exp}), these two quantities are simply linked by the relation \begin{equation} P_{ij} = \Omega_{\rm at} \, C_{ijkl} \, \lambda_{kl}. \label{eq:dipole_lamba} \end{equation} Using this $\lambda$-tensor to characterize the point-defect, Eq. \eqref{eq:dipole_Einter} describing its elastic interaction with an external elastic field becomes \begin{equation*} E^{\rm int} = - \Omega_{\rm at} \, \lambda_{ij} \, \sigma_{ij}^{\rm ext}(\vec{0}), \end{equation*} where $\sigma^{\rm ext}_{ij}(\vec{0})$ is the value of the external stress field at the point-defect position.
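As a numerical illustration of the dipole model, the sketch below (Python with NumPy; the Lamé coefficients and dipole components are hypothetical values, not taken from the text) evaluates the displacement field $u_i = -G_{ij,k}\,P_{jk}$ of Eq.~\eqref{eq:dipole_displacement} from the isotropic Green's function quoted above, checks its $1/r^2$ decay, and computes the interaction energy of Eq.~\eqref{eq:dipole_Einter}.

```python
import numpy as np

# Hypothetical isotropic medium and elastic dipole (arbitrary units).
lam, mu = 100.0, 80.0  # Lame coefficients

def green(r_vec):
    """Isotropic elastic Green's function G_kn(r)."""
    r = np.linalg.norm(r_vec)
    eta = r_vec / r
    a = (lam + 3.0*mu) / (lam + 2.0*mu)
    b = (lam + mu) / (lam + 2.0*mu)
    return (a*np.eye(3) + b*np.outer(eta, eta)) / (8.0*np.pi*mu*r)

def green_grad(r_vec, h=1e-6):
    """Central-difference derivative G_kn,l(r)."""
    g = np.zeros((3, 3, 3))
    for l in range(3):
        dr = np.zeros(3); dr[l] = h
        g[:, :, l] = (green(r_vec + dr) - green(r_vec - dr)) / (2.0*h)
    return g

P = np.diag([5.0, 5.0, 8.0])  # hypothetical tetragonal elastic dipole

def u_dipole(r_vec):
    """Dipole displacement field, u_i = -G_ij,k(r) P_jk."""
    return -np.einsum('ijk,jk->i', green_grad(r_vec), P)

# The displacement decays as 1/r^2: doubling the distance divides it by 4.
r0 = np.array([10.0, 4.0, 3.0])
ratio = np.linalg.norm(u_dipole(r0)) / np.linalg.norm(u_dipole(2.0*r0))

# Interaction energy with a homogeneous external strain, E = -P:eps.
eps_ext = np.diag([1e-3, 1e-3, -2e-3])
E_int = -np.einsum('ij,ij->', P, eps_ext)
```

The same scaling check applies to the stress field of Eq.~\eqref{eq:dipole_stress}, which decays one power of $r$ faster.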
\subsection{Analogy with Eshelby's inclusion} The Eshelby inclusion \cite{Eshelby1957,Eshelby1959a} is another widespread model which can be used to describe a point-defect in an elastic continuum. As will be shown below, it is equivalent to the dipole description in the limit of an infinitesimal inclusion. In this model, the point-defect is described as an inclusion of volume $\Omega_{\rm I}$ and of surface $S_{\rm I}$, having the same elastic constants as the matrix. This inclusion undergoes a change of shape described by the eigenstrain $\varepsilon_{ij}^*(\vec{r})$, corresponding to the strain that the inclusion would adopt if it were free to relax, unconstrained by the surrounding matrix. Eshelby proposed a general approach \cite{Eshelby1957} to solve the corresponding equilibrium problem and determine the elastic fields in the inclusion and the surrounding matrix. This solution is obtained by considering the following three steps: \begin{enumerate} \item Take the inclusion out of the matrix and let it adopt its eigenstrain $\varepsilon_{ij}^*(\vec{r})$. At this stage, the stress is null everywhere. \item Strain the inclusion back so that it fits the hole in the matrix. The elastic strain exactly compensates for the eigenstrain, so the stress in the inclusion is $-C_{ijkl}\varepsilon^*_{kl}(\vec{r})$. This operation is performed by applying to the external surface of the inclusion the traction forces corresponding to this stress \begin{equation*} \mathrm{d}{T}_i(\vec{r}) = -C_{ijkl} \, \varepsilon^*_{kl}(\vec{r}) \, \mathrm{d}{S}_j, \end{equation*} where $\vv{\mathrm{d}{S}}$ is an element of the inclusion external surface at the point $\vec{r}$. \item After the inclusion has been welded back into its hole, the traction forces are relaxed.
Using Green's function, the corresponding displacement in the matrix is then \begin{equation*} \begin{split} u_n(\vec{r}) &= \oint_{S_{\rm I}} { G_{ni}(\vec{r}-\vec{r}^{\,\prime}) \, \mathrm{d}{T}_i(\vec{r}^{\,\prime})}, \\ &= -\oint_{S_{\rm I}} { G_{ni}(\vec{r}-\vec{r}^{\,\prime}) \, C_{ijkl} \, \varepsilon^*_{kl}(\vec{r}^{\,\prime}) \, \mathrm{d}{S}_j^{\,\prime}}. \end{split} \end{equation*} \end{enumerate} Applying Gauss theorem and the equilibrium condition satisfied by the eigenstrain $\varepsilon_{ij}^*(\vec{r})$, one obtains the following expression for the elastic displacement in the matrix \begin{equation} u_n(\vec{r}) = -\int_{\Omega_{\rm I}} { G_{ni,j}(\vec{r}-\vec{r}^{\,\prime}) \, C_{ijkl} \, \varepsilon^*_{kl}(\vec{r}^{\,\prime}) \, \mathrm{d}{V^{\,\prime}} }, \label{eq:inclusion_displacement} \end{equation} and for the corresponding stress field \begin{multline} \sigma_{pq}(\vec{r}) = -\int_{\Omega_{\rm I}}{ C_{pqmn} \, G_{ni,jm}(\vec{r}-\vec{r}^{\,\prime}) } \\ { C_{ijkl} \, \varepsilon^*_{kl}(\vec{r}^{\,\prime}) \, \mathrm{d}{V^{\,\prime}} }. \label{eq:inclusion_stress} \end{multline} Inside the inclusion, one needs to add the stress $-C_{ijkl}\varepsilon^*_{kl}(\vec{r})$ corresponding to the strain applied in step 2. Far from the inclusion, we have $\| \vec{r} \| \gg \| \vec{r}^{\,\prime} \|$. We can therefore neglect the variations of the Green's function derivatives inside Eqs. \ref{eq:inclusion_displacement} and \ref{eq:inclusion_stress}. This corresponds to the infinitesimal inclusion assumption. 
For such an infinitesimal inclusion located at the origin, one therefore obtains the following elastic fields \begin{align} u_n(\vec{r}) &= - G_{ni,j}(\vec{r}) \, C_{ijkl} \, \Omega_{\rm I} \, \bar{\varepsilon}^*_{kl}, \label{eq:small_inclusion_displacement} \\ \sigma_{pq}(\vec{r}) &= - C_{pqmn} \, G_{ni,jm}(\vec{r}) \, C_{ijkl} \, \Omega_{\rm I} \, \bar{\varepsilon}^*_{kl}, \label{eq:small_inclusion_stress} \end{align} where we have defined the volume average of the inclusion eigenstrain, $\bar{\varepsilon}_{ij}^* = \frac{1}{\Omega_{\rm I}}\int_{\Omega_{\rm I}}{ \varepsilon^*_{ij}(\vec{r}) \, \mathrm{d}{V}}$. Comparing these expressions with the ones describing the elastic field of an elastic dipole (Eqs. \ref{eq:dipole_displacement} and \ref{eq:dipole_stress}), we see that they are the same for any $\vec{r}$ value provided the dipole tensor and the inclusion eigenstrain satisfy the relation \begin{equation} P_{ij} = \Omega_{\rm I} \, C_{ijkl} \, \bar{\varepsilon}_{kl}^*. \label{eq:small_inclusion_dipole} \end{equation} The descriptions of a point-defect as an elastic dipole, \textit{i.e.}\@\xspace as a distribution of point-forces keeping only the first moment of the distribution, or as an infinitesimal Eshelby inclusion, \textit{i.e.}\@\xspace in the limit of an inclusion volume $\Omega_{\rm I}\to 0$ keeping the product $\Omega_{\rm I}\,\bar{\varepsilon}_{ij}^*$ constant, are therefore equivalent. The point-defect can thus be characterized either by its elastic dipole tensor $P_{ij}$ or by its eigenstrain tensor $Q_{ij}=\Omega_{\rm I}\,\bar{\varepsilon}_{ij}^*$ \cite{Lazar2017}. Of course, the same equivalence is obtained when considering the interaction energy with an external stress field.
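The conversion of Eq.~\eqref{eq:small_inclusion_dipole} is a simple tensor contraction. A minimal sketch, with hypothetical isotropic elastic constants, inclusion volume, and eigenstrain:

```python
import numpy as np

# Equivalent dipole of an infinitesimal Eshelby inclusion:
# P_ij = Omega_I * C_ijkl * eps*_kl, for isotropic constants
# C_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk). All values hypothetical.
lam, mu = 100.0, 80.0
d = np.eye(3)
C = (lam*np.einsum('ij,kl->ijkl', d, d)
     + mu*(np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

Omega_I = 0.012                          # inclusion volume
eps_star = np.diag([0.05, 0.05, 0.02])   # average eigenstrain (tetragonal)

P = Omega_I*np.einsum('ijkl,kl->ij', C, eps_star)
# A tetragonal eigenstrain yields a tetragonal dipole; a purely
# dilatational eigenstrain would give an isotropic one.
```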
For a general inclusion, Eshelby showed that this interaction energy is simply given by \begin{equation} E^{\rm int} = -\int_{\Omega_{\rm I}}{\varepsilon^*_{ij}(\vec{r}) \, \sigma_{ij}^{\rm ext}(\vec{r}) \, \mathrm{d}{V} }, \label{eq:inclusion_Einter} \end{equation} where the integral runs only over the inclusion volume. In the limiting case of an infinitesimal inclusion, one can neglect the variations of the external stress field inside the inclusion. One thus obtains the following interaction energy, \begin{equation} E^{\rm int} = - \Omega_{\rm I} \, \bar{\varepsilon}^*_{ij} \, \sigma_{ij}^{\rm ext}(\vec{0}) , \label{eq:small_inclusion_Einter} \end{equation} which is equivalent to the expression \eqref{eq:dipole_Einter} for an elastic dipole when the equivalence relation \eqref{eq:small_inclusion_dipole} is verified. \subsection{Analogy with dislocation loops} A point-defect can also be considered as an infinitesimal dislocation loop. This appears natural, as dislocation loops are known to be elastically equivalent to platelet Eshelby inclusions \cite{Nabarro1967,Mura1987}. The elastic displacement and stress fields of a dislocation loop of Burgers vector $\vec{b}$ are respectively given by the Burgers and Mura formulae \cite{Hirth1982} \begin{align} \begin{split} u_i(\vec{r}) ={ }& C_{jklm} \, b_m \\ & \quad \int_{A}{ G_{ij,k}(\vec{r} - \vec{r}^{\,\prime}) \, n_l(\vec{r}^{\,\prime}) \, \mathrm{d}{A^{\,\prime}} }, \label{eq:dislo_loop_displacement} \end{split} \\ \begin{split} \sigma_{ij}(\vec{r}) ={ }& C_{ijkl} \, \epsilon_{lnh} C_{pqmn} b_m \\ & \quad \oint_{L}{ G_{kp,q}(\vec{r} - \vec{r}^{\,\prime}) \, \zeta_h(\vec{r}^{\,\prime}) \, \mathrm{d}{l^{\,\prime}} }.
\end{split} \label{eq:dislo_loop_stress} \end{align} The displacement is defined by a surface integral on the surface $A$ enclosed by the dislocation loop, with $\vec{n}(\vec{r}^{\,\prime})$ the local normal to the surface element $\mathrm{d}{A^{\,\prime}}$ in $\vec{r}^{\,\prime}$, and the stress by a line integral along the loop of total line length $L$. $\vec{\zeta}$ is the unit vector along the loop, and $\epsilon_{lnh}$ is the permutation tensor. As for the Eshelby inclusion, far from the loop ($\| \vec{r} \| \gg \| \vec{r}^{\,\prime} \|$), we can use a series expansion of the Green's function derivatives and keep only the leading term. Considering a loop located at the origin, we thus obtain \begin{align} u_i(\vec{r}) =& C_{jklm} \, b_m \, A_l \, G_{ij,k}(\vec{r}) , \label{eq:small_dislo_loop_displacement} \\ \sigma_{pq}(\vec{r}) =& C_{pqin} \, C_{jklm} \, b_m \, A_l \, G_{ij,kn}(\vec{r}) , \label{eq:small_dislo_loop_stress} \end{align} where $\vec{A}$ is the surface vector defining the area of the loop. These expressions are equal to the ones obtained for an elastic dipole \eqref{eq:dipole_displacement} and \eqref{eq:dipole_stress}, with the equivalent dipole tensor of the dislocation loop given by \begin{equation} P_{jk} = - C_{jklm} \, b_m \, A_l. \label{eq:small_dislo_loop_dipole} \end{equation} Turning to the interaction with an external stress field, the interaction energy of the dislocation loop is given by \begin{equation} E^{\rm int} = \int_{A}{ \sigma_{ij}^{\rm ext}(\vec{r}) \, b_i \, n_j \, \mathrm{d}{A} }. \label{eq:dislo_loop_Einter} \end{equation} For an infinitesimal loop, it simply becomes \begin{equation} E^{\rm int} = \sigma_{ij}^{\rm ext}(\vec{0}) \, b_i \, A_j, \label{eq:small_dislo_loop_Einter} \end{equation} which is equivalent to the expression \eqref{eq:dipole_Einter} obtained for an elastic dipole when the equivalent dipole tensor of the dislocation loop is given by Eq. \eqref{eq:small_dislo_loop_dipole}.
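Equation~\eqref{eq:small_dislo_loop_dipole} can be evaluated directly. A sketch for a hypothetical prismatic loop (Burgers vector parallel to the area vector) in an isotropic medium:

```python
import numpy as np

# Equivalent dipole of an infinitesimal dislocation loop,
# P_jk = -C_jklm b_m A_l, with isotropic elastic constants.
# Loop geometry and elastic constants are hypothetical.
lam, mu = 100.0, 80.0
d = np.eye(3)
C = (lam*np.einsum('ij,kl->ijkl', d, d)
     + mu*(np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

b = np.array([0.0, 0.0, 0.25])  # Burgers vector of a prismatic loop
A = np.array([0.0, 0.0, 3.0])   # area vector, parallel to b

P = -np.einsum('jklm,m,l->jk', C, b, A)
# The dipole is diagonal: |P_33| = (lam + 2 mu) b A along the loop normal,
# and |P_11| = |P_22| = lam b A in the two transverse directions.
```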
\subsection{Polarizability} The equivalent point-forces distribution of a point-defect can be altered by an applied elastic field \cite{Kroner1964}. This applied elastic field thus leads to an induced elastic dipole, and the total elastic dipole of the point-defect now depends on the applied strain $\varepsilon^{\rm ext}$: \begin{equation} P_{ij}(\varepsilon^{\rm ext}) = P_{ij}^{0} + \alpha_{ijkl} \varepsilon^{\rm ext}_{kl}, \label{eq:polarizability} \end{equation} where $P_{ij}^{0}$ is the permanent elastic dipole in the absence of an applied strain and $\alpha_{ijkl}$ is the point-defect diaelastic polarizability \cite{Schober1984,Puls1986,Granato1994}. Considering the analogy with Eshelby's inclusion, this polarizability corresponds to an infinitesimal inhomogeneous inclusion, \textit{i.e.}\@\xspace an inclusion with elastic constants different from those of the surrounding matrix. It describes the fact that the matrix close to the point-defect has a different elastic response to an applied strain because of the perturbations of the atomic bonding caused by the point-defect. For the analogy with an infinitesimal dislocation loop, the polarizability corresponds to the fact that the loop can change its shape by glide on its prismatic cylinder (or in its habit plane for a pure glide loop) under the action of the applied elastic field. Following Schober \cite{Schober1984}, the interaction of a point-defect located at the origin with an applied strain is now given by \begin{equation} E^{\rm int} = -P^0_{ij} \, \varepsilon^{\rm ext}_{ij}(\vec{0}) - \frac{1}{2} \, \alpha_{ijkl} \, \varepsilon^{\rm ext}_{ij}(\vec{0}) \, \varepsilon^{\rm ext}_{kl}(\vec{0}). \label{eq:dipole_polar_Einter} \end{equation} This expression of the interaction energy, which includes the defect polarizability, has important consequences for the modeling of point-defects, as it shows that some coupling is possible between two different applied elastic fields.
Considering the point-defect interaction with the two strain fields $\varepsilon^{(1)}$ and $\varepsilon^{(2)}$ originating from two different sources, the interaction energy is now given by \begin{align*} \begin{split} E^{\rm int} ={ }& -P^0_{ij} \left( \varepsilon^{(1)}_{ij} + \varepsilon^{(2)}_{ij} \right) \\ &\qquad - \frac{1}{2} \, \alpha_{ijkl} \left( \varepsilon^{(1)}_{ij} + \varepsilon^{(2)}_{ij} \right) \left( \varepsilon^{(1)}_{kl} + \varepsilon^{(2)}_{kl} \right), \end{split} \\ \begin{split} ={ }& -P^0_{ij} \varepsilon^{(1)}_{ij} - \frac{1}{2} \, \alpha_{ijkl} \, \varepsilon^{(1)}_{ij} \, \varepsilon^{(1)}_{kl} \\ &\qquad -P^0_{ij} \varepsilon^{(2)}_{ij} - \frac{1}{2} \, \alpha_{ijkl} \, \varepsilon^{(2)}_{ij} \, \varepsilon^{(2)}_{kl} \\ &\qquad \qquad - \alpha_{ijkl} \, \varepsilon^{(1)}_{ij} \, \varepsilon^{(2)}_{kl}. \end{split} \end{align*} The last line therefore shows that, without the polarizability, the interaction energy of the point-defect with the two strain fields is simply the superposition of the two interaction energies with each strain field considered separately. A coupling is introduced only through the polarizability. Such a coupling is for instance at the origin of one of the mechanisms proposed to explain creep under irradiation. Indeed, because of the polarizability, the interaction of point-defects, either vacancies or self-interstitial atoms, with dislocations under an applied stress depends on the dislocation orientation with respect to the applied stress. This stronger interaction with some dislocation families leads to a larger drift term in the diffusion equation of the point-defect and thus to a greater absorption of the point-defect by these dislocations, a mechanism known as Stress Induced Preferential Absorption (or SIPA) \cite{Heald1974,Heald1975b,Bullough1975a,Bullough1975b}.
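The cross term isolated in the last line above is easily checked numerically. A sketch with hypothetical dipole and polarizability values (the polarizability tensor is symmetrized so that $\alpha_{ijkl}=\alpha_{klij}$):

```python
import numpy as np

# Only the polarizability couples two external strain fields:
# E(e1+e2) - E(e1) - E(e2) = -alpha : e1 : e2. All values hypothetical.
rng = np.random.default_rng(0)
P0 = np.diag([5.0, 5.0, 8.0])
a = rng.normal(size=(3, 3, 3, 3))
alpha = 0.5*(a + a.transpose(2, 3, 0, 1))  # enforce major symmetry

def E_int(eps):
    """Interaction energy including the diaelastic polarizability."""
    return (-np.einsum('ij,ij->', P0, eps)
            - 0.5*np.einsum('ijkl,ij,kl->', alpha, eps, eps))

e1 = np.diag([1e-3, 0.0, 0.0])  # first external strain field
e2 = np.diag([0.0, 0.0, 2e-3])  # second external strain field

cross = E_int(e1 + e2) - E_int(e1) - E_int(e2)
coupling = -np.einsum('ijkl,ij,kl->', alpha, e1, e2)
# cross and coupling coincide: superposition fails only through alpha.
```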
This polarizability is also the cause, in alloy solid solutions, of the variation of the matrix elastic constants with their solute content. This diaelastic polarizability, caused by the perturbation of the elastic response of the surrounding matrix, manifests itself down to the lowest temperatures, even 0\,K, whatever the characteristic time of the applied strain. At finite temperature there may be another source of polarizability. If the point-defect can adopt different configurations, for instance different variants corresponding to different orientations of the point-defect, as for a carbon interstitial atom in a body-centered cubic Fe matrix, then the occupancy distribution of these configurations will be modified under an applied stress or strain. This possible redistribution of the point-defect gives rise to anelasticity \cite{Nowick1972}, the most famous case being the Snoek relaxation in iron alloys containing interstitial solute atoms like C and N \cite{Snoek1941}. When thermally activated transitions between the different configurations of the point-defect are fast enough compared to the characteristic time of the applied stress, the distribution of the different configurations corresponds to thermal equilibrium. Assuming that all configurations have the same energy in a stress-free state and denoting by $P^{\mu}_{ij}$ the elastic dipole of the configuration $\mu$, the average dipole of the point-defect is then given by \begin{equation*} \langle P_{ij} \rangle = \frac{ \sum_{\mu}{ \exp{\left( P_{kl}^{\mu}\varepsilon_{kl}^{\rm ext}\,/\,kT \right)} P_{ij}^{\mu} } } { \sum_{\mu}{ \exp{\left( P_{kl}^{\mu}\varepsilon_{kl}^{\rm ext}\,/\,kT \right)} } }. \end{equation*} As a consequence, the average elastic dipole of the point-defect distribution now depends on the applied stress and on the temperature, an effect known as paraelasticity \cite{Kroner1964}.
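The Boltzmann average above can be sketched for the textbook case of three orientational variants of a tetragonal defect (dipole values, strain, and temperature are hypothetical):

```python
import numpy as np

# Thermal average of the elastic dipole over three <100> variants of a
# tetragonal defect, with weights exp(P:eps/kT) as in the expression above.
kT = 0.025  # eV, around room temperature
variants = [np.diag([8.0, 5.0, 5.0]),   # tetragonal axis along x (eV)
            np.diag([5.0, 8.0, 5.0]),   # along y
            np.diag([5.0, 5.0, 8.0])]   # along z

def P_avg(eps):
    """Boltzmann-weighted average dipole for an applied strain eps."""
    w = np.array([np.exp(np.einsum('ij,ij->', P, eps)/kT) for P in variants])
    w /= w.sum()
    return sum(wi*Pi for wi, Pi in zip(w, variants))

# Without strain, all variants are equally populated: <P> is isotropic.
P0 = P_avg(np.zeros((3, 3)))
# A tensile strain along x favours the variant with its large component
# along x, so <P_xx> rises above the zero-strain value.
P1 = P_avg(np.diag([1e-3, 0.0, 0.0]))
```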
At temperatures high enough to allow transitions between the different configurations, the interaction energy of the configurations with the applied strain is usually small compared to $kT$. One can make a series expansion of the exponentials to obtain \begin{equation*} \begin{split} \langle P_{ij} \rangle &= \frac{1}{n_{\mathrm{v}}}\sum_{\mu=1}^{n_{\mathrm{v}}}{ P_{ij}^{\mu} } \\ &- \left( \frac{1}{ {n_{\rm v}}^2}\sum_{\mu, \nu=1}^{n_{\mathrm{v}}}{ P_{ij}^{\mu} P_{kl}^{\nu} } - \frac{1}{n_{\mathrm{v}}}\sum_{\mu=1}^{n_{\mathrm{v}}}{ P_{ij}^{\mu} P_{kl}^{\mu} } \right) \frac{ \varepsilon_{kl}^{\rm ext} }{kT}, \end{split} \end{equation*} where $n_{\mathrm{v}}$ is the number of configurations. This leads to the same linear variation of the elastic dipole with the applied strain as for the diaelastic polarizability (Eq. \ref{eq:polarizability}), except that the paraelastic polarizability depends on the temperature. \section{Parameterization of elastic dipoles} \label{sec:para} To properly model a point-defect with continuum elasticity theory, one only needs to know its elastic dipole. It is then possible to describe the elastic displacement (Eq. \ref{eq:dipole_displacement}) or the stress field (Eq. \ref{eq:dipole_stress}) induced by the point-defect, and also to calculate its interaction with an external elastic field (Eq. \ref{eq:dipole_Einter}). This elastic dipole can be determined either using atomistic simulations or from experiments. \subsection{From atomistic simulations} \label{sec:para_atom} Different strategies can be considered for the identification of elastic dipoles in atomistic simulations. The elastic dipole can be directly deduced from the stress existing in the simulation box, from a fit of the atomic displacements, or from a summation of the Kanzaki forces. We examine here these three techniques and discuss their merits and drawbacks.
\subsubsection*{Definition from the stress} Let us consider a simulation box of volume $V$, the equilibrium volume of the pristine bulk material. We introduce one point-defect in the simulation box and assume periodic boundary conditions to preclude any difficulty associated with surfaces. Elasticity theory can be used to predict the variation of the energy of the simulation box subjected to a homogeneous strain $\varepsilon$. Using the interaction energy of a point-defect with an external strain given in Eq.~\eqref{eq:dipole_Einter}, one obtains \begin{equation} E(\varepsilon) = E_0 + E^{\rm PD} + \frac{V}{2}C_{ijkl}\varepsilon_{ij}\varepsilon_{kl} - P_{ij}\varepsilon_{ij}, \label{eq:energy_box_PD} \end{equation} with $E_0$ the bulk reference energy and $E^{\rm PD}$ the point-defect energy, which can contain a contribution from the interactions of the point-defect with its periodic images (see section~\ref{sec:elast_corr}). The average residual stress on the simulation box is obtained by straightforward differentiation as\footnote{See also Refs. \cite{Puchala2008} and \cite{Pasianot2016b} for other proofs.} \begin{equation} \begin{split} \langle \sigma_{ij}(\varepsilon) \rangle &= \frac{1}{V}\frac{\partial E}{\partial \varepsilon_{ij}}, \\ &= C_{ijkl} \varepsilon_{kl} - \frac{1}{V} P_{ij}. \end{split} \label{eq:sigma_Pij} \end{equation} In the particular case where the periodicity vectors are kept fixed between the defective and pristine supercells ($\varepsilon=0$), the elastic dipole is simply the opposite of the residual stress multiplied by the supercell volume: \begin{equation} P_{ij} = -V \langle \sigma_{ij} \rangle. \label{eq:Pij_from_sigma} \end{equation} This residual stress corresponds to the stress increase, after atomic relaxation, due to the introduction of the point-defect into the simulation box.
When this equation is used to determine the elastic dipole in \textit{ab initio}\@\xspace calculations, one should pay attention to the spurious stress which may exist in the equilibrium perfect supercell because of finite convergence criteria of such calculations. This spurious stress has to be subtracted from the stress of the defective supercell, so that the residual stress entering Eq. \ref{eq:Pij_from_sigma} is only the stress increment associated with the introduction of the point-defect. One can also consider the opposite situation, where a homogeneous strain $\bar{\varepsilon}$ has been applied to cancel the residual stress. The elastic dipole is then proportional to this homogeneous strain: \begin{equation} P_{ij} = V C_{ijkl} \bar{\varepsilon}_{kl}. \label{eq:Pij_from_strain} \end{equation} One would nevertheless generally prefer working with fixed periodicity vectors ($\varepsilon=0$), as $\sigma=0$ calculations necessitate an increased number of force calculations, as well as an increased precision for \textit{ab initio}\@\xspace calculations. In the more general case where a homogeneous strain is applied and a residual stress is observed, the elastic dipole can still be derived from these two quantities using Eq. \eqref{eq:sigma_Pij}. This definition of the elastic dipole from the residual stress (Eq. \ref{eq:Pij_from_sigma}), or more generally from both the applied strain and the residual stress (Eq. \ref{eq:sigma_Pij}), is to be related to the dipole tensor measurement first proposed by Gillan \cite{Gillan1981,Gillan1983}, where the elastic dipole is equal to the strain derivative of the formation energy, evaluated at zero strain. Instead of evaluating this derivative numerically, one can simply use the analytical derivative, \textit{i.e.}\@\xspace the stress on the simulation box, which is a standard output of any atomistic simulation code, including \textit{ab initio}\@\xspace calculations.
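In practice, Eq.~\eqref{eq:Pij_from_sigma} amounts to a one-line post-processing of the supercell stress. A minimal sketch, with hypothetical stress values standing in for the output of an atomistic code:

```python
import numpy as np

# Elastic dipole from the residual stress of a periodic supercell with
# fixed periodicity vectors (eps = 0). All numbers are hypothetical
# stand-ins for quantities read from the simulation output.
V = 1000.0  # supercell volume

sigma_defect = np.diag([-2.0e-3, -2.0e-3, -3.2e-3])  # defective supercell
sigma_bulk = np.diag([1.0e-5, 1.0e-5, 1.0e-5])       # spurious bulk stress

# Subtract the spurious stress of the perfect cell, then apply
# P_ij = -V <sigma_ij>:
P = -V*(sigma_defect - sigma_bulk)
```

When a homogeneous strain is also applied, the same post-processing uses the full Eq.~\eqref{eq:sigma_Pij}, $P_{ij} = V(C_{ijkl}\varepsilon_{kl} - \langle\sigma_{ij}\rangle)$.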
This technique to extract elastic dipoles from atomistic simulations has been validated \cite{Subramanian2013,Garnier2014,Varvenne2017}, through successful comparisons of interaction energies between point-defects and external strain fields, as given by direct atomistic simulations and as given by the elasticity theory predictions using the elastic dipole identified through Eq.~\eqref{eq:Pij_from_sigma}. The residual stress therefore leads to quantitative estimates of the elastic dipoles. \subsubsection*{Definition from the displacement field} The elastic dipole can also be obtained from the displacement field, as proposed by Chen \textit{et al}.\@\xspace~\cite{Chen2010a}. Using the displacement field $\vec{u}^{\rm at}(\vec{R})$ obtained after relaxation in atomistic simulations, a least-squares fit of the displacement field $\vec{u}^{\rm el}(\vec{R})$ predicted by elasticity theory can be performed, using the components of the dipole tensor as fit variables. A reasonable cost function for the least-squares fit is \begin{equation} f(P_{ij}) = \sum_{\substack{\vec{R} \\ \|\vec{R}\|>r_{\rm excl}}} \left\|R^2\left[\vec{u}^{\rm el}(\vec{R})-\vec{u}^{\rm at}(\vec{R})\right] \right\|^2 , \label{eq:cost_F} \end{equation} with $r_{\rm excl}$ the radius of a small zone around the point-defect, so as to exclude from the fit the atomic positions where elasticity does not hold. The $R^2$ factor accounts for the scaling of the displacement field with the distance to the point-defect, thus giving a similar weight to all atomic positions included in the fit. For atomistic simulations with periodic boundary conditions, one needs to superimpose the elastic displacements of the point-defect and of its periodic images, which can be done by simple summation, taking care of the conditional convergence of the corresponding sum \cite{Varvenne2017}.
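Since $\vec{u}^{\rm el}(\vec{R})$ is linear in the dipole components, minimising the cost function \eqref{eq:cost_F} reduces to a weighted linear least-squares problem. The sketch below illustrates this with synthetic data in an isotropic medium (no periodic images, and all parameter values hypothetical): displacements are generated from a known dipole, then the dipole is recovered by the fit.

```python
import numpy as np

# Isotropic medium, hypothetical Lame coefficients.
lam, mu = 100.0, 80.0

def green(r_vec):
    """Isotropic elastic Green's function G_kn(r)."""
    r = np.linalg.norm(r_vec); eta = r_vec/r
    a = (lam + 3.0*mu)/(lam + 2.0*mu); b = (lam + mu)/(lam + 2.0*mu)
    return (a*np.eye(3) + b*np.outer(eta, eta)) / (8.0*np.pi*mu*r)

def green_grad(r_vec, h=1e-6):
    """Central-difference derivative G_kn,l(r)."""
    g = np.zeros((3, 3, 3))
    for l in range(3):
        dr = np.zeros(3); dr[l] = h
        g[:, :, l] = (green(r_vec + dr) - green(r_vec - dr)) / (2.0*h)
    return g

# Stand-in "atomistic" displacements generated from a known dipole:
P_true = np.array([[5.0, 1.0, 0.0], [1.0, 5.0, 0.0], [0.0, 0.0, 8.0]])
rng = np.random.default_rng(1)
sites = rng.normal(scale=10.0, size=(150, 3))
sites = sites[np.linalg.norm(sites, axis=1) > 4.0]  # exclusion radius
u_at = [-np.einsum('ijk,jk->i', green_grad(R), P_true) for R in sites]

# Weighted linear system: one row per displacement component, with the
# R^2 weight of the cost function, since u_i = -G_i jk(R) P_jk.
rows, rhs = [], []
for R, u in zip(sites, u_at):
    w = np.dot(R, R)
    G = green_grad(R)
    for i in range(3):
        rows.append(-w*G[i].ravel())
        rhs.append(w*u[i])
P_fit = np.linalg.lstsq(np.array(rows), np.array(rhs),
                        rcond=None)[0].reshape(3, 3)
```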
With large simulation boxes ($\ge 1500$ atoms), the obtained elastic dipole components agree with the values deduced from the residual stress, and the choice of $r_{\rm excl}$ is not critical. The number of atomic positions included in the fit, and for which elasticity is valid, is sufficiently large to avoid issues arising from the defect core zone \cite{Varvenne2017}. In contrast, for small simulation boxes of a few hundred atoms, \textit{i.e.}\@\xspace typical of \textit{ab initio}\@\xspace simulations, the obtained $P_{ij}$ values are highly sensitive to $r_{\rm excl}$, and their convergence with $r_{\rm excl}$ cannot be guaranteed. This fit of the displacement field therefore appears impractical for obtaining precise values of the elastic dipole in \textit{ab initio}\@\xspace calculations. \subsubsection*{Definition from the Kanzaki forces} \begin{figure}[!bt] \begin{center} \subfigure[unrelaxed vacancy]{\hspace*{5mm}\includegraphics[scale=0.75]{fig1a.pdf}\hspace*{5mm}} \subfigure[relaxed vacancy]{\hspace*{5mm}\includegraphics[scale=0.75]{fig1b.pdf}\hspace*{5mm}} \\ \subfigure[$0^{\rm th}$ order approx.]{\hspace*{5mm}\includegraphics[scale=0.75]{fig1c.pdf}\hspace*{5mm}} \subfigure[$1^{\rm st}$ order approx.]{\hspace*{5mm}\includegraphics[scale=0.75]{fig1d.pdf}\hspace*{5mm}} \end{center} \caption{Procedure for the computation of the Kanzaki forces in the case of a vacancy. The white spheres correspond to atoms at their perfect bulk positions, \textit{i.e.}\@\xspace before relaxation, the white square to the vacancy, and the black spheres to the atoms at their relaxed positions around the defect.} \label{fig:scheme_kanzaki} \end{figure} The definition given in Eq. \eqref{eq:dipole} of the elastic dipole as the first moment of the point-force distribution offers a third way to extract this elastic dipole from atomistic simulations. 
This corresponds to the Kanzaki force method \cite{Kanzaki1957,Faux1971,Tewary1973,Leibfried1978,Schober1980,Lidiard1981,Simonelli1994,Domain2004,Hayward2012}. Kanzaki forces are defined as the forces which have to be applied to the atoms in the neighborhood of the point-defect to produce in the pristine crystal the same displacement field as in the defective supercell. Computation of these Kanzaki forces can be performed following the procedure given in Ref.~\cite{Simonelli1994}, which is illustrated for a vacancy in Fig.~\ref{fig:scheme_kanzaki}. Starting from the relaxed structure of the point-defect (Fig. \ref{fig:scheme_kanzaki}b), the defect is restored in the simulation cell, \textit{e.g.}\@\xspace the suppressed atom is added back in the vacancy case (Fig.~\ref{fig:scheme_kanzaki}c). A static force calculation is then performed and provides the opposite of the sought forces on all atoms in the resulting simulation cell. These atomic forces are used to compute the elastic dipole $P_{ij}=\sum_{q} F_j^q a_i^q$, with $\vec{F}^q$ the opposite of the force acting on the atom at $\vec{a}^q$, assuming the point-defect is located at the origin. The summation is usually restricted to atoms located inside a sphere of radius $r_{\rm \infty}$. As the Kanzaki technique is valid only in the harmonic approximation, one checks that the atomic forces entering the elastic dipole definition are in the harmonic regime by restoring larger and larger shells of defect neighbors to their perfect bulk positions \cite{Simonelli1994} (Fig.~\ref{fig:scheme_kanzaki}c-d), computing the forces on the restored structures, and then the elastic dipole. The case where $n$ shells of defect neighbors are restored is referred to as the $n^{\rm th}$ order approximation. As the restored zone becomes larger, the atoms remaining at their relaxed positions are more likely to sit in a harmonic region. 
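The dipole summation itself is straightforward; the following sketch (our notation, with hypothetical radial forces in the usage below) implements $P_{ij}=\sum_q F^q_j a^q_i$ with the cutoff $r_\infty$:

```python
import numpy as np

def kanzaki_dipole(forces, positions, r_inf):
    """First moment of the Kanzaki force distribution,
    P_ij = sum_q a_i^q F_j^q, restricted to atoms within r_inf of the
    defect (taken at the origin). forces and positions: (N,3) arrays,
    with forces the opposite of those computed on the restored structure."""
    a = np.asarray(positions, float)
    F = np.asarray(forces, float)
    keep = np.linalg.norm(a, axis=1) <= r_inf
    return np.einsum('qi,qj->ij', a[keep], F[keep])
```

Re-evaluating this sum for increasing $r_\infty$ directly provides the convergence check of the force summation.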
The convergence of the resulting elastic dipole components with respect to $n$ thus allows one to check the validity of the harmonic approximation. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.65,clip=true,trim=0mm 0mm 5mm 4mm]{fig2.pdf} \end{center} \caption{Elastic dipole components of the SIA octahedral configuration in hcp Zr, as a function of the cutoff radius $r_{\infty}$ of the force summation normalized by the lattice parameter $a$. Values are obtained by the Kanzaki force approach on a simulation box containing 12800 atoms, restoring (a) only the point-defect, and (b) up to $16$ defect neighbor shells. The horizontal lines are the values deduced from the residual stress. Calculations have been performed with the EAM $\#3$ potential of Ref. \cite{Mendelev2007} (see Ref. \cite{Varvenne2017} for more details). } \label{fig:Pij_measurement_EAM} \end{figure} Fig. \ref{fig:Pij_measurement_EAM} provides the elastic dipole values as a function of the cutoff radius $r_{\infty}$, for the octahedral configuration of the self-interstitial atom (SIA) in hcp Zr. Only the point-defect has been restored in Fig. \ref{fig:Pij_measurement_EAM}a (approximation 0), whereas the restoration zone extends to the 16$^{\rm th}$ nearest neighbors in Fig. \ref{fig:Pij_measurement_EAM}b. Constant $P_{ij}$ values are reached for cutoff radii $r_{\infty}\sim 2.5\,a$ and $\sim 4\,a$, respectively, showing that the defect-induced forces are long-ranged \cite{Hayward2012,Varvenne2017}. As a result, the supercell needs to be large enough to avoid convolution of the force field by the periodic boundary conditions, and a high precision on the atomic forces is required. Comparison with the elastic dipole deduced from the residual stress shows that restoring only the point-defect (approximation 0 in Fig. \ref{fig:Pij_measurement_EAM}a) is not sufficient to obtain a quantitative estimate with the Kanzaki method. 
A restoration zone extending at least to the 16$^{\rm th}$ nearest neighbors is necessary for this point-defect to obtain the correct elastic dipole. As the anharmonic region depends on the defect and on the material, one cannot choose \textit{a priori} a radius for the restoration zone, but needs to check the convergence of the elastic dipole with the size of this restoration zone. \subsubsection*{Discussion} These three approaches lead to the same values of the elastic dipole when large enough supercells are used, thus confirming the consistency of this elastic description of the point-defect. This has been checked in Ref. \cite{Varvenne2017} for the vacancy and various configurations of the SIA in hcp Zr. But for the small simulation cells typical of \textit{ab initio}\@\xspace calculations, both the fit of the displacement field and the calculation from the Kanzaki forces are usually not precise enough, because the defect core region, \textit{i.e.}\@\xspace the region which has to be excluded from the displacement fit or restored for the Kanzaki forces, is too large. This is penalizing for \textit{ab initio}\@\xspace calculations, even for point-defects as simple as the H solute or the vacancy in hcp Zr \cite{Varvenne2017,Nazarov2016}. Besides, the Kanzaki technique requires additional calculations to obtain the defect-induced forces and to check that the forces entering the dipole definition are in the harmonic regime. As the restoration zone is extended, the defect-induced forces become smaller and the precision has to be increased. The definition from the residual stress thus appears as the only method leading to reliable $P_{ij}$ values within \textit{ab initio}\@\xspace simulations. It is also easy to apply, as it requires neither post-treatment nor additional calculations: it only uses the homogeneous stress on the simulation box, and the knowledge of the defect position is not needed. 
Of course, all these methods can also be used to determine the diaelastic polarizability. One only needs to obtain the elastic dipole for various applied strains. The linear equation \eqref{eq:polarizability} then yields the stress-free elastic dipole $P^0_{ij}$ and the polarizability $\alpha_{ijkl}$. The most convenient method remains the definition from the residual stress. Taking the polarizability into account, Eq. \eqref{eq:sigma_Pij} now reads \begin{equation} \langle \sigma_{ij}(\varepsilon) \rangle = \left( C_{ijkl} - \frac{1}{V} \alpha_{ijkl} \right) \varepsilon_{kl} - \frac{1}{V} P_{ij}, \label{eq:sigma_polarizability} \end{equation} thus showing that the polarizability is associated with a variation of the elastic constants proportional to the point-defect volume fraction. This linear variation of the elastic constants arising from the point-defect polarizability has been characterized for vacancies and SIAs in face-centered cubic (fcc) copper \cite{Ackland1988}, and for various solute atoms in body-centered cubic (bcc) iron \cite{Bialon2013,Fellinger2017}. \begin{figure}[!bth] \centering \includegraphics[width=0.7\linewidth]{fig3.pdf} \caption{Elastic dipole of a C atom lying in a [001] octahedral interstitial site in bcc Fe as a function of the inverse of the volume $V$ of the supercell. The elastic dipole has been deduced from the residual stress in \textit{ab initio}\@\xspace calculations (see Ref. \cite{Clouet2011b} for more details).} \label{fig:C_dipole} \end{figure} One consequence of the diaelastic polarizability is that the elastic dipole may depend on the size of the supercell with periodic boundary conditions. The strain at the point-defect position is indeed the superposition of the homogeneous strain $\varepsilon_{ij}$ and the strains $\varepsilon_{ij}^{\rm p}$ created by the periodic images of the point-defect. In the $\varepsilon=0$ case for instance, the obtained elastic dipole is then \begin{equation} P_{ij} = P^0_{ij} + \alpha_{ijkl} \varepsilon_{kl}^{\rm p}. 
\label{eq:dipole_PBC} \end{equation} As the strain created by a point-defect varies as the inverse of the cube of the separation distance (Eq. \ref{eq:dipole_stress}), the last term in Eq. \eqref{eq:dipole_PBC} scales with the inverse of the supercell volume. Therefore, when homothetic supercells are used, one generally observes the following volume variation \begin{equation*} P_{ij} = P^{0}_{ij} + \frac{\delta P_{ij}}{V}, \end{equation*} which can be used to extrapolate the elastic dipole to an infinite volume, \textit{i.e.}\@\xspace to the dilute limit \cite{Puchala2008,Clouet2011b,Varvenne2017}. An example of this linear variation with the inverse volume is shown in Fig. \ref{fig:C_dipole} for an interstitial C atom in a bcc Fe matrix. \subsection{From experiments} \label{sec:para_exp} From an experimental perspective, when trying to extract elastic dipoles of point-defects, both the symmetry and the magnitude of the components of the elastic dipole tensor are \textit{a priori} unknown, and possibly also the number of defect types present in the material. We first restrict ourselves to the case where only a single type of point-defect with a known symmetry is present. If the point-defect has a lower symmetry than the host crystal, it can adopt several variants which are equivalent by symmetry but possess different orientations. The energy of a volume $V$ containing different variants of the point-defect and subjected to a homogeneous strain is \begin{multline} E(\varepsilon) = E_0 + E^{\rm PD} + \frac{V}{2}C_{ijkl}\varepsilon_{ij}\varepsilon_{kl} \\ - V \sum_{\mu=1}^{n_{\rm v}}{ c_{\mu} P^{\mu}_{ij} }\varepsilon_{ij}, \end{multline} with $n_{\rm v}$ the total number of different variants and $c_{\mu}$ the volume concentration of variant $\mu$. This relation assumes that the different point-defects are not interacting, which is valid in the dilute limit. 
For zero stress conditions, as is usually the case in experiments, the average strain induced by this assembly of point-defects is \begin{equation} \bar{\varepsilon}_{ij} = S_{ijkl} \sum_{\mu=1}^{n_{\rm v}}{ c_{\mu} P^{\mu}_{kl} }, \label{eq:epsilon_Vegard} \end{equation} with $S_{ijkl}$ the inverse of the elastic constants $C_{ijkl}$. This linear relation between the strain and the point-defect concentrations corresponds to a Vegard's law and allows for many connections with experiments. It generalizes Eq. \ref{eq:Pij_from_strain} to the case of a volume containing a population of the same point-defect with different variants. As mentioned in \S\ref{sec:dipole_model}, point-defects in experiments are sometimes rather characterized by their $\lambda$-tensor \cite{Nowick1972}. Combining the definition of this $\lambda$-tensor (Eq. \ref{eq:lambda_PD}) with Eq. \eqref{eq:epsilon_Vegard}, one shows the equivalence of the two definitions: \begin{equation*} \lambda_{ij}^{\mu} = \frac{1}{\Omega_{\rm at}} \, S_{ijkl} \, P^{\mu}_{kl}, \end{equation*} or equivalently Eq. \eqref{eq:dipole_lamba}. When the point-defect has only one variant, or when a single variant is selected by breaking the symmetry -- through either a phase transformation (\textit{e.g.}\@\xspace martensitic \cite{Roberts1953,Cheng1990}) or the interaction with an applied strain field for instance -- the variations of the material lattice constants with the defect concentration follow the defect symmetry. If the point-defect concentration is known, the elastic dipole components are therefore fully accessible by measuring lattice parameter variations, \textit{e.g.}\@\xspace by dilatometry or X-ray diffraction using the Bragg reflections. On the other hand, for a completely disordered solid solution of point-defects with several variants ($n_{\rm v}>1$), the average distortion induced by the point-defect population does not modify the parent crystal symmetry \cite{Nowick1972}. 
Each variant is equiprobable, \textit{i.e.}\@\xspace $c_{\mu}=c_0/n_{\rm v}$ with $c_0$ the nominal point-defect concentration. The stress-free strain induced by the point-defects (Eq. \ref{eq:epsilon_Vegard}) thus becomes \begin{equation*} \bar{\varepsilon}_{ij} = c_0 \, S_{ijkl} \, \langle P_{kl} \rangle \ \textrm{ with }\ \langle P_{kl} \rangle = \frac{1}{n_{\rm v}} \sum_{\mu=1}^{n_{\rm v}}{ P^{\mu}_{kl} }. \end{equation*} Measurements of the lattice parameter variations with the total defect concentration thus give access only to certain combinations of the $P_{ij}$ components. For instance, for a point-defect in a cubic crystal, like a C solute in an octahedral site of a bcc Fe crystal, one obtains the following variation of the lattice parameter with the solute concentration \begin{equation} a(c_0) = a_0 \left( 1 + \frac{\Tr{(P)}}{3 \left(C_{11}+2C_{12}\right)} \, c_0 \right), \label{eq:lattice_change_cubic} \end{equation} with $C_{11}$ and $C_{12}$ the elastic constants in Voigt notation. This variation can again be characterized using dilatometry or X-ray diffraction. But knowing $\Tr{(P)}$ is not sufficient for a point-defect with a lower symmetry than the cubic symmetry of the crystal, as the elastic dipole then has several independent components (two for the C solute atom in bcc Fe). Additional information is therefore needed to fully characterize the point-defect. For defects having a lower symmetry than their parent crystal, anelastic relaxation experiments may provide such supplementary data \cite{Nowick1972, Nowick1963}. Applying an appropriate stress splits the point-defect energy levels and redistributes the defect populations among the variants. The relaxation of the compliance moduli then gives access to other combinations of the elastic dipole components. 
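The numerical evaluation of Eq. \eqref{eq:lattice_change_cubic} is elementary; the following sketch (ours, with made-up input values) only assumes units chosen so that the bracketed product is dimensionless:

```python
def lattice_parameter(a0, trace_P, C11, C12, c0):
    """Lattice parameter of a cubic crystal containing a random point-defect
    solid solution: a(c0) = a0 * (1 + Tr(P) / (3 (C11 + 2 C12)) * c0).
    Units must be consistent so that Tr(P)/(3(C11+2C12)) * c0 is
    dimensionless (e.g. P in eV, C in eV per unit volume, c0 per volume)."""
    return a0 * (1.0 + trace_P / (3.0 * (C11 + 2.0 * C12)) * c0)
```

Fitting the measured slope of $a(c_0)$ then gives $\Tr{(P)}$ once the elastic constants are known.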
Not all such relaxations are allowed by symmetry, as illustrated by the C solute in bcc Fe, for which only the quantity $|P_{11}-P_{33}|$ is accessible \cite{Swartz1968}. The number of parameters accessible from anelastic measurements is lower than the number of independent components of the defect elastic dipole. This technique must then be used in combination with other measurements, like the variations of the lattice parameter. Alternatively, a useful technique working with a random defect distribution is diffuse Huang scattering. The diffuse scattering of X-rays near Bragg reflections \cite{Trinkaus1972,Bender1983,Michelitsch1996} reflects the distortion scattering caused by the long-range part of the defect-induced displacement field. It thus provides information about the strength of the point-defect elastic dipole. The scattered intensity is proportional -- in the dilute limit -- to the defect concentration and to a linear combination of quadratic expressions of the elastic dipole components. The coefficients of this combination are functions of the crystal elastic constants and of the scattering vector in the vicinity of a given reciprocal lattice vector. Therefore, by an appropriate choice of the relative scattering direction, the quadratic expressions can be determined separately. Except for simple point-defects like a substitutional solute atom or a single vacancy, the defect symmetry may be unknown. Both anelastic relaxation and Huang scattering experiments provide important information for the determination of the defect symmetry. The presence of relaxation peaks in anelasticity is a direct consequence of the defect symmetry \cite{Nowick1972,Nowick1963}. In Huang scattering experiments, information about the defect symmetry is obtained either from the analysis of the morphology of iso-intensity curves or through an appropriate choice of scattering directions to measure the Huang intensity. 
To conclude, when extracting elastic dipoles from experiments, one must usually rely on a combination of several experimental techniques to obtain all the components. \section{Some applications} \label{sec:examples} \subsection{Solute interaction with a dislocation} \begin{figure}[!bt] \centering \subfigure[Screw dislocation ($h=4d_{110}\simeq8.1$\,\AA)]{ \includegraphics[width=0.8\linewidth]{fig4a.png}} \subfigure[Edge dislocation ($h=-9d_{110}\simeq-18.2$\,\AA)]{ \includegraphics[width=0.8\linewidth]{fig4b.png}} \caption{Binding energy $E^{\rm bind} = -E^{\rm int}$ between a screw or an edge dislocation and a C atom in bcc iron for different positions $x$ of the dislocation in its glide plane. The C atom lies in a [100] octahedral interstitial site at a fixed distance $h$ from the dislocation glide plane. Symbols correspond to atomistic simulations and lines to elasticity theory, considering all components of the stress created by the dislocation or only the pressure, and using isotropic or anisotropic elasticity.} \label{fig:dislo_Fe_C} \end{figure} This elastic modeling can be used for instance to describe the interaction of a point-defect with other structural defects. To illustrate, and also validate, this approach, we consider a C interstitial atom interacting with a dislocation in a bcc iron matrix. This interstitial atom occupies the octahedral sites of the bcc lattice. As these sites have a tetragonal symmetry, the elastic dipole $P_{ij}$ of the C atom has two independent components and thus gives rise to both a size and a shape interaction. The interaction energy of the C atom with a dislocation is given by Eq. \eqref{eq:dipole_Einter}, where the external strain $\varepsilon^{\rm ext}_{ij}$ is the strain created by the dislocation at the position of the C atom. This has been compared in Ref. 
\cite{Clouet2008} to direct results of atomistic simulations, using for the C elastic dipole and for the elastic constants the values given by the empirical potential used in the atomistic simulations. Results show that elasticity theory leads to a quantitative prediction when all ingredients are included in the elastic model, \textit{i.e.}\@\xspace when elastic anisotropy is taken into account to calculate the strain field created by the dislocation and when both the dilatation and the tetragonal distortion induced by the C atom are considered (Fig. \ref{fig:dislo_Fe_C}). The agreement between the two techniques is excellent except when the C atom is in the dislocation core. With isotropic elasticity, the agreement with atomistic simulations is only qualitative, and when the shape interaction is not considered, \textit{i.e.}\@\xspace when the C atom is modeled as a simple dilatation center ($P_{ij}=P\,\delta_{ij}$), elasticity theory fails to predict this interaction (Fig. \ref{fig:dislo_Fe_C}). The same comparison between atomistic simulations and elasticity theory has been performed for a vacancy and a SIA interacting with a screw dislocation, again in bcc iron \cite{Hayward2012}. The agreement was not as good as for the C atom. But in this work, the elastic dipoles of the point-defects were obtained from the Kanzaki forces, using the $0^{\rm th}$ order approximation, which is usually not as precise as the definition from the stress (\textit{cf}.\@\xspace \S\,\ref{sec:para_atom}) and may explain some of the discrepancies. One can also use elasticity theory to predict how the migration barriers of the point-defect are modified by a strain field. The migration energy is the energy difference between the saddle point and the stable position. 
Its dependence on an applied strain field $\varepsilon(\vec{r})$ is thus described by \begin{equation} E^{\rm m}[\varepsilon] = E^{\rm m}_0 + P_{ij}^{\rm ini} \varepsilon_{ij}(\vec{r}_{\rm ini}) - P_{ij}^{\rm sad} \varepsilon_{ij}(\vec{r}_{\rm sad}) , \label{eq:Emig_strain} \end{equation} where $P_{ij}^{\rm ini}$ and $P_{ij}^{\rm sad}$ are the elastic dipoles of the point-defect at its initial stable position $\vec{r}_{\rm ini}$ and at the saddle point $\vec{r}_{\rm sad}$ respectively, and $E^{\rm m}_0$ is the migration energy without elastic interaction. Again for a C atom interacting with a dislocation in a bcc Fe matrix, comparison of this expression with results of direct atomistic simulations shows good agreement \cite{Veiga2011}, provided the C atom is far enough from the dislocation core. Similar conclusions on the validity of Eq. \eqref{eq:Emig_strain} for describing the variation of the solute migration energy with an applied strain have been reached for a SIA diffusing in bcc Fe \cite{Chen2010a}, a vacancy in hcp zirconium \cite{Subramanian2013} and a Si impurity in fcc nickel \cite{Garnier2014}. \subsection{Elastodiffusion} This simple model predicting the variation of the migration energy with an applied strain field (Eq. \ref{eq:Emig_strain}) can be used to study elastodiffusion. Elastodiffusion refers to the diffusion variations induced by an elastic field \cite{Dederichs1978}, either externally applied or internal, arising from the presence of structural defects. Important implications exist for materials, such as the transport and segregation of point-defects to dislocations leading to the formation of Cottrell atmospheres \cite{Cottrell1949}, irradiation creep \cite{Woo1984}, or the anisotropic diffusion of dopants in semiconductor thin films \cite{Aziz1997,Daw2001}. 
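The effect of Eq. \eqref{eq:Emig_strain} on a thermally activated jump rate $\Gamma=\nu^0\exp(-E^{\rm m}/kT)$ can be sketched as follows (hypothetical dipole and strain values; the function names are ours):

```python
import numpy as np

KB = 8.617333262e-5  # Boltzmann constant (eV/K)

def migration_energy(Em0, P_ini, P_sad, eps_ini, eps_sad):
    """Strain-dependent migration energy, Eq. (Emig_strain):
    E_m = Em0 + P_ini : eps(r_ini) - P_sad : eps(r_sad),
    with dipoles (eV) and strains given as (3,3) arrays."""
    return (Em0 + np.tensordot(P_ini, eps_ini)
            - np.tensordot(P_sad, eps_sad))

def jump_frequency(nu0, Em, T):
    """Thermally activated jump frequency (1/s) at temperature T (K)."""
    return nu0 * np.exp(-Em / (KB * T))
```

A strain that lowers the saddle-point term relative to the initial one reduces $E^{\rm m}$ and exponentially enhances the jump frequency, which is the microscopic origin of the biased random walk of point-defects in a heterogeneous strain field.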
At the atomic scale, solid state diffusion occurs through a succession of thermally activated atomic jumps between stable positions, with atoms jumping either on vacancy sites or on interstitial sites of the host lattice. Within transition state theory \cite{Vineyard1957}, the frequency of such a transition is given by \begin{equation} \Gamma_{\alpha} = \nu^0_{\alpha} \exp{ \left( - E^{\rm m}_{\alpha}\,/\, kT \right)}, \label{eq:transition_rate} \end{equation} where $\nu^0_{\alpha}$ is the attempt frequency for the transition $\alpha$ and $E^{\rm m}_{\alpha}$ is the migration energy. Under a small strain field, the diffusion network and the site topology of this bulk system are not modified. On the other hand, the strain field modifies the migration energies and the attempt frequencies. As shown in the previous section, the elastic dipole description of the point-defect can predict the modification of the stable and saddle point energies, and thus of the migration energy (Eq. \ref{eq:Emig_strain}). Ignoring the strain effect on the attempt frequencies, the incorporation of the modified energy barriers into stochastic simulations, like atomistic or object kinetic Monte Carlo (OKMC) methods, makes it possible to characterize the elastodiffusion of point-defects. This approach has been used, for instance, to study the directional diffusion of point-defects in the heterogeneous strain field of a dislocation, corresponding to a biased random walk \cite{Veiga2010,Veiga2011,Subramanian2013}. Diffusion in a continuous solid body is characterized by the diffusion tensor $D_{ij}$, which expresses the proportionality between the diffusion flux and the concentration gradient (Fick's law). 
The effect of an applied strain is then described by the fourth-rank elastodiffusion tensor $d_{ijkl}$ \cite{Dederichs1978}, which gives the linear dependence of the diffusion tensor on the strain: \begin{equation} D_{ij} = D_{ij}^0 + d_{ijkl} \, \varepsilon_{kl}. \label{eq:d_ijkl} \end{equation} This elastodiffusion tensor obeys the minor symmetries $d_{ijkl}=d_{jikl}=d_{ijlk}$, because of the symmetry of the diffusion and deformation tensors, and also the crystal symmetries. Starting from the atomistic events as defined by their transition frequencies (Eq. \ref{eq:transition_rate}), the diffusion coefficient, and its variation under an applied strain, can be evaluated from the long-time evolution of the point-defect trajectories in stochastic simulations \cite{Goyal2015}. Alternatively, analytical approaches can be developed to provide closed-form expressions of the diffusion tensor \cite{Howard1964,Allnatt1993}. The elastodiffusion tensor can thus be computed by a perturbative approach, starting from the analytical expression of the diffusion tensor \cite{Dederichs1978,Trinkle2016}. This results in two different contributions: a geometrical contribution caused by the overall change of the jump vectors, and a contribution due to the change in the energy barriers as described by Eq. \eqref{eq:Emig_strain}. This latter contribution is thus a function of the elastic dipoles at the saddle point and stable positions. It is found to have an important magnitude in various systems \cite{Dederichs1978,Trinkle2016}, being for instance predominant for interstitial impurities in hcp Mg \cite{Agarwal2016}. It is temperature-dependent, sometimes with complex non-monotonic variations and even sign changes for some of its components \cite{Agarwal2016}. As noted by Dederichs and Schroeder \cite{Dederichs1978}, the elastic dipole at the saddle point completely determines the stress-induced diffusion anisotropy in cubic crystals. 
Experimental measurement of the elastodiffusion tensor components can therefore provide useful information about the saddle point configurations. Both approaches, relying either on stochastic simulations or on analytical models, are now usually informed with \textit{ab initio}\@\xspace computed formation and migration energies, and attempt frequencies. The elastic modeling of a point-defect through its elastic dipole thus offers a convenient way to transfer the information about the effects of an applied strain, as obtained from atomistic simulations, to the diffusion framework. \subsection{Bias calculations} Point-defect diffusion and absorption by elements of the microstructure such as dislocations, cavities, grain boundaries and precipitates play an important role in the macroscopic evolution of materials. This is especially true under irradiation, since in this case not only vacancies but also self-interstitial atoms (SIAs) migrate to these sinks. Owing to their large dipole tensor components, SIAs generally interact more strongly than vacancies with the stress fields generated by sinks. This leads to a difference in point-defect fluxes to a given sink known as the ``absorption bias''. For example, in the ``dislocation bias model''~\cite{Brailsford1972}, which is one of the most popular models to explain irradiation void swelling, dislocations act as biased sinks: they absorb more interstitials than vacancies. Voids, which produce shorter-range stress fields, are considered as neutral sinks, meaning that their absorption bias is zero. Since SIAs and vacancies are produced in equal numbers, the preferential absorption of SIAs by dislocations leads to a net flux of vacancies to voids and thus to void growth. Similar explanations based on absorption biases have been given to rationalize irradiation creep~\cite{Heald1974} and irradiation growth in hexagonal materials~\cite{Rouchette2014a}. 
In order to predict the kinetics of such phenomena, a precise evaluation of the absorption biases is necessary. Following the rate theory formalism \cite{Brailsford1972}, the absorption bias of a given sink can be written as the relative difference of the sink strengths for interstitials ($k_i^2$) and vacancies ($k_v^2$)~\cite{Heald1975a}. The strength of a sink for a point-defect $\theta$ ($\theta = i, v$) is related to the loss rate $\phi_\theta$ through \begin{equation} \label{eq-flux-sink-strength} \phi_{\theta} = k_{\theta}^2 D_{\theta} c_{\theta}, \end{equation} where $D_\theta$ is the diffusion coefficient free of elastic interactions and $c_{\theta}$ is the volume concentration of $\theta$. The sink strength can be calculated with different methods, for example by solving the diffusion equation around the sink~\cite{Brailsford1972,Dederichs1978} or an associated phase field model~\cite{Rouchette2014}, or by performing object kinetic Monte Carlo (OKMC) simulations~\cite{Heinisch2000,Malerba2007}. It should be noted that the analytical solution of the diffusion equation is limited to a few cases and often requires the defect properties or the stress field to be simplified~\cite{Schroeder1975,Woo1981,Skinner1984}, so that in general numerical simulations are necessary~\cite{Woo1979a,Bullough1981,Dubinko2005,Jourdan2015}. In the following we consider the OKMC approach, owing to its simplicity and its flexibility to introduce complex diffusion mechanisms and the effect of stress fields~\cite{Sivak2011,Subramanian2013,Vattre2016}. In OKMC simulations of sink strengths, a sink is introduced in a simulation box with periodic boundary conditions, and point-defects are generated at a given rate $K$. They diffuse in the box by successive atomic jumps until they are absorbed by the sink. 
For each defect in the simulation box, the jump frequencies of all jumps from the current stable state to the possible final states are calculated, and the next event is chosen according to the standard residence time algorithm~\cite{Gillespie1976,Bortz1975}. The jump frequency of event $\alpha$ is given by Eq. \eqref{eq:transition_rate}, considering the strain dependence of the migration energy through Eq. \eqref{eq:Emig_strain}. The sink strength is deduced from the average number of defects in the box $\overline{N}_{\theta}$ at steady state through the following equation~\cite{Vattre2016}: \begin{equation} \label{eq-sink-strength-from-C} k_{\theta}^2 = \frac{K}{D_{\theta}\overline{N}_{\theta}}, \end{equation} from which the bias is deduced: \begin{equation} \label{eq-bias-definition} B = \frac{k_i^2-k_v^2}{k_i^2}. \end{equation} Another method is often used for the calculation of sink strengths with OKMC~\cite{Heinisch2000,Malerba2007}. For each defect, the number of jumps it performs before being absorbed by the sink is recorded. The sink strength is then deduced from the average number of jumps. Although this method is equivalent to the method based on the average concentration in the non-interacting case, it is no longer valid if elastic interactions are included. In this case the average time before absorption should be measured instead of the average number of jumps, since the jump frequencies now depend on the location of the defect and are usually higher. Therefore, applying this method in the interacting case often leads to an underestimation of sink strengths. As an illustration, we consider the study published in Ref. \cite{Vattre2016}, where sink strengths of semi-coherent interfaces have been calculated with OKMC, taking into account the effect of the strain field generated by the interfaces. The strain is the sum of the coherency strain and of the strain due to the interface dislocations. 
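As a bookkeeping sketch of Eqs. \eqref{eq-sink-strength-from-C} and \eqref{eq-bias-definition} (illustrative numbers only; not code from the cited works):

```python
def sink_strength(K, D, N_bar):
    """k^2 = K / (D * N_bar): sink strength from the defect generation
    rate K, the diffusion coefficient D free of elastic interactions,
    and the mean number of defects N_bar in the box at steady state."""
    return K / (D * N_bar)

def absorption_bias(k2_i, k2_v):
    """B = (k_i^2 - k_v^2) / k_i^2: relative difference of the
    interstitial and vacancy sink strengths."""
    return (k2_i - k2_v) / k2_i
```

A positive $B$ means the sink preferentially absorbs interstitials; a negative $B$, vacancies.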
This strain field has been calculated by a semi-analytical method within the framework of anisotropic elasticity \cite{Vattre2013,Vattre2015,Vattre2016}. We consider the case of a twist grain boundary in Ag, which produces a purely deviatoric strain field. Two grain boundaries separated by a distance $d$ are introduced in the box and periodic boundary conditions are applied. The dipole tensors of vacancies and SIAs in Ag have been computed by DFT for both stable and saddle positions~\cite{Vattre2016}, using the residual stress definition (Eq. \ref{eq:Pij_from_sigma}). In the ground state, the elastic dipole of the vacancy is isotropic and that of the SIA is almost isotropic. On the other hand, the elastic dipole tensors have a significant deviatoric component for both point-defects at their saddle point. \begin{figure}[!bt] \centering \includegraphics[width=\linewidth]{fig5.pdf} \caption{Sink strengths of a twist grain boundary ($\theta = 7.5$\textdegree) for (a) vacancies and (b) SIAs, and (c) absorption bias, as a function of the layer thickness $d$ (see Ref. \cite{Vattre2016} for more details).} \label{fig-sink-strengths-and-bias} \end{figure} Sink strengths of the twist grain boundary are shown in Fig.~\ref{fig-sink-strengths-and-bias}(a,b) as a function of the layer thickness $d$ and compared to the analytical result with no elastic interactions, $k^2 = 12/d^2$. Sink strengths for both vacancies and SIAs are significantly increased when elastic interactions are included and when the anisotropy at the saddle point is taken into account, especially for thin layers. However, if the saddle point is considered isotropic, the non-interacting case is recovered. This is due to the deviatoric character of the strain field: since the dipole tensor of the vacancy in its ground state is purely hydrostatic, the interaction energy of a vacancy with the strain field is zero and there is no thermodynamic driving force for the absorption of the vacancy. 
A similar result is obtained for SIAs, because of their almost purely hydrostatic dipole in their ground state. Fig.~\ref{fig-sink-strengths-and-bias}c shows the evolution of the bias. For this interface, saddle point anisotropy leads to a negative bias, meaning that vacancies are preferentially absorbed compared to interstitials. This approach has also been recently used for the calculation of the sink strength of straight dislocations and cavities in aluminum \cite{Carpentier2017}. In both cases, saddle point anisotropy appears to have a significant influence on the sink strengths. This confirms analytical results obtained with various levels of approximation \cite{Skinner1984,Borodin1993,Borodin1994}. \subsection{Isolated defect in atomistic simulations} \label{sec:elast_corr} The elastic modeling of point-defects is also useful in the context of atomistic simulations. Such simulations, in particular \textit{ab initio}\@\xspace calculations, are now indispensable for obtaining point-defect energetics, such as formation and migration energies \cite{Freysoldt2014}. However, an ongoing issue is the difficulty of obtaining the properties of isolated defects. One can use atomistic simulations with controlled surfaces to model an isolated point-defect \cite{Sinclair1978,Rao1998,Liu2007,Zhang2013,Huber2016}, but then the excess energy associated with the point-defect can be exactly separated from that of the external surfaces or interfaces only for interatomic potentials with a finite interaction cutoff, corresponding to short-range empirical potentials like EAM. For more complex potentials or for \textit{ab initio}\@\xspace calculations, the absence of any interaction cutoff prevents an unambiguous definition of the point-defect energy. A supercell approach relying on periodic boundary conditions is therefore usually preferred.
The combined effect of periodic boundary conditions and of the limited size of such calculations, for numerical cost reasons, makes it difficult to converge the computed properties for defects inducing long-range effects. This problem is well-known in the context of charged point-defects, where long-range Coulomb interactions exist between the defect and its periodic images and for which corrective schemes have been developed \cite{Leslie1985,Makov1995,Taylor2011}. For neutral defects, interactions between periodic images also exist. These interactions are of elastic origin and decay as the inverse cube of the separation distance. Consequently, the computed excess energies are those of a periodic array of interacting point-defects, and converge with the inverse of the supercell volume to the energy of the isolated defect. This can be problematic for defects inducing large distortions, like SIAs or clusters, or for atomistic calculations where only small supercells are affordable. The elastic description of a point-defect allows one to calculate this spurious elastic interaction associated with periodic boundary conditions and thereby obtain the energy properties of the isolated point-defect \cite{Varvenne2013}. After atomic relaxation, the excess energy of a supercell containing one point-defect is given by: \begin{equation} \label{eq:E_DP} E^{\rm PD}_{\rm PBC}(\bar{\varepsilon}=0) = E_{\infty}^{\rm PD} + \frac{1}{2} E_{\rm PBC}^{\rm int}, \end{equation} where $E_{\infty}^{\rm PD}$ is the excess energy of the isolated defect and $E_{\rm PBC}^{\rm int}$ is the interaction energy of the defect with its periodic images. The factor $1/2$ arises because only half of the interaction energy is attributed to the defect itself, the other half being carried by its periodic images. Continuum linear elasticity theory can be used to evaluate this elastic interaction. If the point-defect is characterized by the elastic dipole $P_{ij}$, following Eq.
\ref{eq:dipole_Einter}, this interaction energy is given by \begin{equation} E_{\rm PBC}^{\rm int} = - P_{ij} \, \varepsilon^{\rm PBC}_{ij}, \label{eq:Epint} \end{equation} with $\varepsilon^{\rm PBC}_{ij}$ the strain created by the defect periodic images. It can be obtained by direct summation \begin{equation} \varepsilon^{\rm PBC}_{ij} = -{\sum_{n,m,p}}'G_{ik,jl}(n\vec{a}_1+m\vec{a}_2+p\vec{a}_3 ) \, P_{kl}, \label{eq:eps_p} \end{equation} with $\vec{a}_1$, $\vec{a}_2$ and $\vec{a}_3$ the periodicity vectors of the supercell. The prime sign indicates that the diverging term ($n=m=p=0$) has been excluded from the sum. As the second derivative of the Green's function $G_{ik,jl}(\vec{r})$ decays as $1/r^3$, this sum is only conditionally convergent. It can be regularized following the numerical scheme proposed by Cai \cite{Cai2003}. After computing the point-defect energy with an atomistic simulation code, this energy can be corrected by subtracting the interaction energy with the periodic images (Eq. \ref{eq:E_DP}) to obtain the properties of the isolated defect. This interaction energy is computed from the elastic constants of the perfect crystal, which are needed to evaluate the Green's function and its derivative (\textit{cf}.\@\xspace \S\,\ref{sec:elast_Hooke}), and from the residual stress of the defective supercell to determine the point-defect elastic dipole (\textit{cf}.\@\xspace \S\,\ref{sec:para_atom}). This is therefore a simple post-treatment, which does not involve any fitting procedure and which can be performed using the \textsc{Aneto} program provided as supplemental material of Ref. \cite{Varvenne2013}. We have assumed in Eq. \eqref{eq:E_DP} that the supercell containing the point-defect has the same periodicity vectors as the perfect supercell, \textit{i.e.}\@\xspace the applied homogeneous strain $\bar{\varepsilon}$ is zero. This corresponds to the simplest boundary conditions in atomistic simulations of point-defects.
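As an illustration of Eq. \eqref{eq:eps_p}, the following sketch evaluates a truncated direct sum using the isotropic (Kelvin) Green's function, with second derivatives taken by central finite differences. The elastic constants and dipole values are placeholders only; a real calculation would use the anisotropic Green's function of the material and the regularization scheme of Ref. \cite{Cai2003}, since the truncated sum is only conditionally convergent.

```python
import numpy as np

MU, NU = 1.0, 0.3  # illustrative isotropic shear modulus and Poisson ratio

def green(r):
    """Isotropic elastic Green's function G_ij(r) (Kelvin solution)."""
    rn = np.linalg.norm(r)
    pref = 1.0 / (16.0 * np.pi * MU * (1.0 - NU) * rn)
    return pref * ((3.0 - 4.0 * NU) * np.eye(3) + np.outer(r, r) / rn**2)

def green_dd(r, h=1e-3):
    """Second derivatives G_ik,jl(r) by central finite differences."""
    g2 = np.zeros((3, 3, 3, 3))
    for j in range(3):
        for l in range(3):
            ej = np.zeros(3); ej[j] = h
            el = np.zeros(3); el[l] = h
            g2[:, :, j, l] = (green(r + ej + el) - green(r + ej - el)
                              - green(r - ej + el) + green(r - ej - el)) / (4.0 * h * h)
    return g2

def strain_images(P, a, nmax=4):
    """Truncated direct sum of Eq. (eps_p): strain at the defect from its images."""
    eps = np.zeros((3, 3))
    for n in range(-nmax, nmax + 1):
        for m in range(-nmax, nmax + 1):
            for p in range(-nmax, nmax + 1):
                if n == m == p == 0:
                    continue  # skip the diverging self term
                R = n * a[0] + m * a[1] + p * a[2]
                # eps_ij = -G_ik,jl P_kl, symmetrized over (i, j) as a strain
                e = -np.einsum('ikjl,kl->ij', green_dd(R), P)
                eps += 0.5 * (e + e.T)
    return eps

def interaction_energy(P, eps):
    """E_int = -P_ij eps_ij (Eq. Epint)."""
    return -np.einsum('ij,ij', P, eps)
```

Since $G_{ik,jl}$ is homogeneous of degree $-3$, doubling the supercell vectors divides the truncated strain sum by eight, which gives a simple consistency check of the $1/r^3$ decay quoted above.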
Sometimes, however, one prefers to also relax the periodicity vectors so as to cancel the stress in the supercell. Both these $\bar{\varepsilon}=0$ and $\sigma=0$ conditions converge to the same energy $E^{\rm PD}_{\infty}$ in the thermodynamic limit, but they yield different energies for too small supercells. The elastic model can be further developed to rationalize this difference \cite{Puchala2008,Varvenne2013}. For $\sigma=0$ conditions, a strain $\bar{\varepsilon}$ is applied to the defective supercell to cancel its stress. Eq. \eqref{eq:E_DP} therefore needs to be complemented with the energy contribution of this deformation \begin{equation*} \Delta E(\bar{\varepsilon}) = \frac{V}{2}C_{ijkl}\bar{\varepsilon}_{ij}\bar{\varepsilon}_{kl} - P_{ij} \bar{\varepsilon}_{ij}. \end{equation*} This applied strain $\bar{\varepsilon}$ in zero stress calculations is linked to the elastic dipole by Eq. \eqref{eq:Pij_from_strain}. The excess energy of the supercell containing one point-defect is thus now given by \begin{equation} \label{eq:E_DP_sig0} \begin{split} E^{\rm PD}_{\rm PBC}(\sigma=0) &= E_{\infty}^{\rm PD} + \frac{1}{2} E_{\rm PBC}^{\rm int} - \frac{1}{2V}S_{ijkl}P_{ij}P_{kl} \\ & = E^{\rm PD}_{\rm PBC}(\bar{\varepsilon}=0) - \frac{1}{2V}S_{ijkl}P_{ij}P_{kl}, \end{split} \end{equation} where the elastic compliances $S_{ijkl}$ of the bulk material form the inverse tensor of the elastic constants $C_{ijkl}$. This equation shows that $\bar{\varepsilon}=0$ and $\sigma=0$ conditions lead to point-defect excess energies differing by a term proportional to the inverse of the supercell volume and to the square of the elastic dipole. This difference will therefore be important for small supercells and/or point-defects inducing a large perturbation of the host lattice. Once corrected through Eqs. \eqref{eq:E_DP} or \eqref{eq:E_DP_sig0}, both approaches should lead to the same value. $\sigma=0$ calculations therefore appear unnecessary.
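The shift between the two boundary conditions in Eq. \eqref{eq:E_DP_sig0} is easily evaluated. Below is a minimal sketch for an isotropic crystal; the Young's modulus, Poisson ratio and dipole values are illustrative only, and an anisotropic material would require the full compliance tensor obtained by inverting $C_{ijkl}$.

```python
import numpy as np

def isotropic_compliances(E, nu):
    """Isotropic compliance tensor S_ijkl from Young's modulus E and Poisson ratio nu."""
    S = np.zeros((3, 3, 3, 3))
    d = np.eye(3)
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    S[i, j, k, l] = ((1.0 + nu) / (2.0 * E)
                                     * (d[i, k] * d[j, l] + d[i, l] * d[j, k])
                                     - nu / E * d[i, j] * d[k, l])
    return S

def zero_stress_shift(P, S, V):
    """E(sigma=0) - E(eps_bar=0) = -S_ijkl P_ij P_kl / (2V), per Eq. (E_DP_sig0)."""
    return -np.einsum('ijkl,ij,kl', S, P, P) / (2.0 * V)
```

For a hydrostatic dipole $P_{ij} = p\,\delta_{ij}$ the contraction reduces to $3(1-2\nu)p^2/E$, i.e. $p^2/K$ with $K$ the bulk modulus, so the shift is $-p^2/(2VK)$: it vanishes in the large-supercell limit, as stated in the text.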
\begin{figure}[!ht] \includegraphics[scale=0.8]{fig6.pdf} \caption{Formation energy of a SIA cluster containing eight interstitials in bcc iron calculated for fixed periodicity vectors ($\bar{\varepsilon} = 0$) or at zero stress ($\sigma=0$) for different sizes of the simulation cell: (a) C15 aggregate and (b) parallel-dumbbell configuration with a $\langle111\rangle$ orientation. Atomistic simulations are performed either with the M07 empirical potential \cite{Marinica2012} (EAM) or with \textit{ab initio}\@\xspace calculations (GGA). Filled symbols refer to uncorrected results and open symbols to the results corrected by the elastic model (see Ref. \cite{Varvenne2013} for more details).} \label{fig:Ef_8sia111c15} \end{figure} We illustrate the usefulness of this elastic post-treatment on an atomistic study of SIA clusters in bcc iron. These clusters appear under irradiation and can adopt different morphologies \cite{Marinica2012}. In particular, some clusters can have a 3D structure with an underlying crystal symmetry corresponding to the C15 Laves phase, and others have a planar structure corresponding to dislocation loop clusters with $1/2\,\langle111\rangle$ Burgers vectors. The formation energies of two different configurations of a cluster containing 8 SIAs, a C15 aggregate and a planar aggregate of parallel dumbbells with a $\langle 111 \rangle$ orientation, are shown in Fig.~\ref{fig:Ef_8sia111c15} for different supercell sizes. They were first calculated with an empirical EAM potential \cite{Marinica2012}: with fixed periodicity vectors ($\bar{\varepsilon}=0$), one needs at least $2000$ atoms for the C15 aggregate and $4000$ atoms for the $\langle 111 \rangle$ planar configuration to get a formation energy converged to a precision better than $0.1$\,eV. The convergence is slightly faster for zero stress calculations ($\sigma=0$) in the case of the C15 aggregate (Fig.
\ref{fig:Ef_8sia111c15}a), but the opposite is true in the case of the $\langle 111 \rangle$ planar configuration (Fig. \ref{fig:Ef_8sia111c15}b). When we add the elastic correction, the convergence is improved for both cluster configurations. The corrected $\bar{\varepsilon}=0$ and $\sigma=0$ calculations then lead to the same formation energies, except for the smallest simulation cell ($128$ lattice sites) in the case of the $\langle 111 \rangle$ cluster. These formation energies have also been obtained with \textit{ab initio}\@\xspace calculations for a simulation cell containing $250$ lattice sites (Fig. \ref{fig:Ef_8sia111c15}). Uncorrected $\bar{\varepsilon}=0$ calculations lead to an energy difference $\Delta E = -5.6$\,eV between the C15 and the $\langle 111 \rangle$ planar configuration, whereas this energy difference is only $\Delta E = -0.6$\,eV in $\sigma = 0$ calculations. This variation of the energy difference is rationalized once the elastic correction is added, and a good precision is obtained with this approach coupling \textit{ab initio}\@\xspace calculations and elasticity theory, with an energy difference of $\Delta E = 3.5 \pm 0.2$\,eV. This elastic correction has been shown to accelerate the convergence of the point-defect formation and/or migration energies obtained from atomistic simulations, in particular from \textit{ab initio}\@\xspace calculations, in numerous other cases, such as SIAs in hcp Zr \cite{Varvenne2013,Pasianot2016a}, the vacancy in diamond silicon \cite{Varvenne2013}, or solute interstitials in bcc iron \cite{Souissi2016}. \section{Conclusions} Elasticity theory thus provides an efficient framework for modeling point-defects. When the point-defect is described as an equilibrated distribution of point forces, its long-range elastic field and its interaction with other elastic fields are fully characterized by the first moment of this force distribution, a second-rank symmetric tensor called the elastic dipole.
This description is equivalent to an infinitesimal Eshelby inclusion or an infinitesimal dislocation loop. Knowing only the elastic constants of the matrix and the elastic dipole, a quantitative modeling of the point-defect and its interactions is thus obtained. The value of this elastic dipole can either be deduced from experimental data, like Vegard's law parameters, or extracted from atomistic simulations. In the latter case, care must be taken to avoid finite-size effects, in particular for \textit{ab initio}\@\xspace calculations. The definition through the residual stress appears to be the most precise way to obtain the dipole tensors. The elastic description offers a convenient framework to bridge the scales between an atomic and a continuum description so as to consider the interaction of the point-defects with various complex elastic fields. This upscaling approach has already proven its efficiency in the modeling of elastodiffusion or in the calculation of absorption bias under irradiation. As the numerical evaluation of the elastic Green's function and its derivatives nowadays presents no technical difficulty, such an elastic model also offers a convenient route to simulate the evolution of a whole population of point-defects in a complex microstructure, accounting for their mutual interactions and their interactions with other structural defects, in the same spirit in which dislocation dynamics simulations are now routinely used to model the evolution of a dislocation microstructure. \vspace{0.5cm} \linespread{1} \small \textbf{Acknowledgements} - This work was performed using HPC resources from GENCI-CINES and -TGCC (Grants 2017-096847). The research was partly funded by the European Atomic Energy Community’s (Euratom) Seventh Framework Program FP7 under grant agreement No. 604862 (MatISSE project) and in the framework of the EERA (European Energy Research Alliance) Joint Program on Nuclear Materials. \section*{References} \bibliographystyle{elsarticle-num}
\section{Introduction} \label{sec:intro} Polycyclic aromatic hydrocarbons (PAHs) are a highly abundant family of astronomical molecules that produce dominant infrared (IR) emission features in the 3-20 \textmu m~ spectral range, primarily at 3.3, 6.2, 7.7, 8.6, 11.2, 12.7 and 16.4 \textmu m~ (e.g., \citealt{leger1984,allamandola1985,allamandola1989,peeters2002}). They are common in the interstellar medium and in carbon-rich circumstellar environments, and as such most lines of sight are littered with PAH emission features (for a review see \citealt{tielens2008} and references therein). The PAH emission bands are highly variable, changing in band profile, absolute strength and \textit{relative} strength from environment to environment (e.g., \citealt{hony2001,peeters2002,galliano2008b}). These variations have been linked to local physical conditions such as the radiation field strength, electron density and gas temperature, demonstrating the potential of PAHs as diagnostic probes of their local environment (e.g., \citealt{sloan2007,galliano2008b,boersma2013,stock2016}). PAHs are excited by ultraviolet and visible photons, glowing brightly near regions of ongoing star formation. Conversely, PAH emission is generally weak in sight-lines with no illuminating source. \citet{golriz2014} studied AGB stars in the Galactic bulge and identified PAH emission in background positions of their observations. The Galactic bulge consists mostly of an old population of stars (10 $\pm$ 2.5 Gyr; \citealt{ortolani1995,zoccali2003}). An intermediate-age (1-3 Gyr) stellar population also exists, as evidenced by Mira variables that have evolved from a population of 1.5-2 M$_\sun$ stars \citep{groenewegen2005,blommaert2006}. Many of the bulge stars are in the asymptotic giant branch (AGB) phase, or at the tip of the red giant branch phase (e.g. \citealt{omont1999,ojha2003}).
We emphasize that we are only examining background (off-star) positions; altogether, the source of PAH excitation in this environment is not immediately obvious, particularly in conjunction with the strong fine-structure lines observed towards these positions. As such, we aim to characterize the PAH, dust and fine-structure line emission toward the Galactic bulge, and correlate these properties to the local physical conditions. We present a mid- and far-IR study of emission towards the Galactic bulge in four fields. One field of observations (C32) is on the edge of the Galactic center lobe (GCL), a several hundred parsec feature slightly north of the Galactic plane. It was first identified in radio continuum emission by \citet{sofue1984}, who found that it spans roughly 185 pc $\times$ 210 pc between $l=359.2\deg - 0.2\deg$ and $b=0.2\deg - 1.2\deg$. We use known properties of the GCL to interpret the PAH, dust and fine-structure line emission towards C32. Another field (C35) appears to reside near the edge of a complementary lobe south of the Galactic plane. Fields OGLE and NGC 6522 are further south of the plane, but generally have limited survey coverage in comparison. We thus focus especially on C32 and C35 in this analysis. We detail our observations and data reduction methods in Sec.~\ref{sec:obs} and accompanying data analysis in Sec.~\ref{sec:inventory}. Results are presented in Sec.~\ref{sec:results} and we discuss relevant implications in Sec.~\ref{sec:discussion}. Lastly, a brief summary of this work is presented in Sec.~\ref{sec:conclusion}.
These fields lie at different projected distances to the Galactic bulge (Fig.~\ref{fig:halpha}), notably with C32 and C35 being diametrically opposed across the Galactic center at (l, b) = (0.0\textdegree, 1.0\textdegree), (0.0\textdegree, -1.0\textdegree), respectively. The OGLE and NGC 6522 fields are further south of the Galactic plane, near (0.4\textdegree, -2.1\textdegree) and (1.0\textdegree, -3.8\textdegree), respectively. Note that NGC 6522 is in Baade's Window. \begin{figure} \centering \includegraphics[width=1\linewidth, clip=true, trim=4.1cm 0.7cm 5.8cm 1.4cm]{fig1.pdf} \caption{ An overview of our \textit{Spitzer}/IRS fields (labeled rectangles) overlaid on the 8.3 \textmu m~ band from the \textit{Midcourse Space Experiment} (MSX; \citealt{mill1994}) survey of the Galactic plane \citep{price2001}. The field of C32 is coincident with the boundary of the Galactic center lobe, which is the large wispy arc (located within the dashed circle, drawn to help guide the eye). Coordinates are presented in degrees. } \label{fig:halpha} \end{figure} The spectroscopic observations were acquired with the Infrared Spectrograph (IRS; \citealt{houck2004}) on the \textit{Spitzer} Space Telescope \citep{werner2004}. These data were obtained from the NASA/IPAC \textit{Spitzer} Heritage Archive.\footnote{\url{http://sha.ipac.caltech.edu/applications/Spitzer/SHA/}} The low-resolution ($R\sim100$) data span approximately 5-40 \textmu m, using the short low (SL) and long low (LL) modules. These data were previously examined by \citet{golriz2014} for the purpose of studying bulge AGB stars, which lie at the center of each pointing. However, we are only interested in the off-source emission. A summary of our observations is presented in Table~\ref{table:obs}, which is based on the sample of \citet{golriz2014}, their Table 1. 
Each field consists of multiple spectral maps, with corresponding unique identifiers, as illustrated in Figs.~\ref{fig:irac_c32c35},~\ref{fig:irac_ogle} and~\ref{fig:irac_ngc6522}, for the fields of C32, C35, OGLE and NGC 6522, respectively. Our sample contains a total of 47 separate pointings across these four fields. \begin{table*} \begin{center} \resizebox{0.85\linewidth}{!}{ \begin{tabular}{lcccrrc} \toprule \toprule ID & Object$^a$ & RA (J2000) & Dec. (J2000) & l (deg.) & b (deg.) & AOR key$^b$ \\ \midrule C32-1 & J174117.5-282957 & 17:41:17.50 & -28:29:57.50 & 359.874 & 1.037 & 10421504 \\ C32-2 & J174122.7-283146 & 17:41:22.70 & -28:31:47.00 & 359.858 & 1.005 & 10421504 \\ C32-3 & J174123.6-282723 & 17:41:23.56 & -28:27:24.20 & 359.922 & 1.041 & 10422784 \\ C32-4 & J174126.6-282702 & 17:41:26.60 & -28:27:02.20 & 359.933 & 1.034 & 10421504 \\ C32-5 & J174127.3-282851 & 17:41:27.26 & -28:28:52.10 & 359.908 & 1.016 & 10421504 \\ C32-6 & J174127.9-282816 & 17:41:27.88 & -28:28:17.10 & 359.918 & 1.019 & 10421504 \\ C32-7 & J174128.5-282733 & 17:41:28.51 & -28:27:33.80 & 359.929 & 1.024 & 10421504 \\ C32-8 & J174130.2-282801 & 17:41:30.15 & -28:28:01.30 & 359.926 & 1.015 & 10422784 \\ C32-9 & J174134.6-282431 & 17:41:34.60 & -28:24:31.40 & 359.984 & 1.032 & 10421504 \\ C32-10 & J174139.5-282428 & 17:41:39.48 & -28:24:28.20 & 359.994 & 1.017 & 10421504 \\ C32-11 & J174140.0-282521 & 17:41:39.94 & -28:25:21.20 & 359.982 & 1.008 & 10421504 \\ C32-12 & J174155.3-281638 & 17:41:55.27 & -28:16:38.70 & 0.135 & 1.037 & 10421504 \\ C32-13 & J174157.6-282237 & 17:41:57.53 & -28:22:37.70 & 0.055 & 0.977 & 10421504 \\ C32-14 & J174158.8-281849 & 17:41:58.73 & -28:18:49.20 & 0.111 & 1.007 & 10421504 \\ C32-15 & J174203.7-281729 & 17:42:03.69 & -28:17:29.90 & 0.139 & 1.003 & 10421504 \\ C32-16 & J174206.85-281832 & 17:42:06.86 & -28:18:32.40 & 0.131 & 0.984 & 10421504 \\ C35-1 & J174917.0-293502 & 17:49:16.96 & -29:35:02.70 & 359.859 & -1.019 & 10421248 \\ C35-2 & 
J174924.1-293522 & 17:49:23.99 & -29:35:22.20 & 359.868 & -1.044 & 10421248 \\ C35-3 & J174943.7-292154 & 17:49:43.65 & -29:21:54.50 & 0.097 & -0.989 & 10421248 \\ C35-4 & J174948.1-292104 & 17:49:48.05 & -29:21:04.80 & 0.117 & -0.996 & 10421248 \\ C35-5 & J174951.7-292108 & 17:49:51.65 & -29:21:08.70 & 0.122 & -1.008 & 10421248 \\ OGLE-1 & J175432.0-295326 & 17:54:31.94 & -29:53:26.50 & 0.176 & -2.156 & 10422528 \\ OGLE-2 & J175456.8-294157 & 17:54:56.80 & -29:41:57.40 & 0.387 & -2.137 & 10422528 \\ OGLE-3 & J175459.0-294701 & 17:54:58.98 & -29:47:01.40 & 0.318 & -2.186 & 10422528 \\ OGLE-4 & J175511.9-294027 & 17:55:11.90 & -29:40:27.80 & 0.436 & -2.171 & 10423040 \\ OGLE-5 & J175515.4-294122 & 17:55:15.41 & -29:41:22.80 & 0.429 & -2.190 & 10423040 \\ OGLE-6 & J175517.0-294131 & 17:55:16.97 & -29:41:31.90 & 0.430 & -2.196 & 10423040 \\ OGLE-7 & J175521.7-293912 & 17:55:21.70 & -29:39:13.00 & 0.472 & -2.192 & 10423040 \\ NGC 6522-1 & J180234.8-295958 & 18:02:34.78 & -29:59:58.90 & 0.950 & -3.722 & 10421760 \\ NGC 6522-2 & J180238.8-295954 & 18:02:38.72 & -29:59:54.60 & 0.958 & -3.734 & 10421760 \\ NGC 6522-3 & J180248.9-295430 & 18:02:48.90 & -29:54:31.00 & 1.054 & -3.722 & 10422016 \\ NGC 6522-4 & J180249.5-295853 & 18:02:49.44 & -29:58:53.40 & 0.992 & -3.759 & 10422272 \\ NGC 6522-5 & J180259.6-300254 & 18:02:59.51 & -30:02:54.30 & 0.951 & -3.824 & 10421760 \\ NGC 6522-6 & J180301.6-300001 & 18:03:01.60 & -30:00:01.10 & 0.997 & -3.807 & 10422272 \\ NGC 6522-7 & J180304.8-295258 & 18:03:04.80 & -29:52:59.30 & 1.105 & -3.760 & 10422272 \\ NGC 6522-8 & J180305.3-295515 & 18:03:05.25 & -29:55:15.90 & 1.072 & -3.780 & 10421760 \\ NGC 6522-9 & J180305.4-295527 & 18:03:05.33 & -29:55:27.80 & 1.070 & -3.782 & 10422016 \\ NGC 6522-10 & J180308.2-295747 & 18:03:08.11 & -29:57:48.00 & 1.040 & -3.809 & 10422016 \\ NGC 6522-11 & J180308.6-300526 & 18:03:08.52 & -30:05:26.50 & 0.930 & -3.873 & 10421760 \\ NGC 6522-12 & J180308.7-295220 & 18:03:08.69 & -29:52:20.40 & 1.121 & 
-3.767 & 10421760 \\ NGC 6522-13 & J180311.5-295747 & 18:03:11.47 & -29:57:47.20 & 1.047 & -3.820 & 10421760 \\ NGC 6522-14 & J180313.9-295621 & 18:03:13.88 & -29:56:20.90 & 1.072 & -3.816 & 10422016 \\ NGC 6522-15 & J180316.1-295538 & 18:03:15.99 & -29:55:38.30 & 1.086 & -3.817 & 10422272 \\ NGC 6522-16 & J180323.9-295410 & 18:03:23.84 & -29:54:10.70 & 1.121 & -3.830 & 10422016 \\ NGC 6522-17 & J180328.4-295545 & 18:03:28.36 & -29:55:45.40 & 1.106 & -3.856 & 10421760 \\ NGC 6522-18 & J180333.3-295911 & 18:03:33.26 & -29:59:11.50 & 1.065 & -3.900 & 10422016 \\ NGC 6522-19 & J180334.1-295958 & 18:03:34.07 & -29:59:58.80 & 1.055 & -3.909 & 10421760 \\ \bottomrule \end{tabular} } \end{center} This table is adapted from \citet{golriz2014}, their Table 1. The coordinates are the central positions of the slit. $^a$References: \citet{omont2003,ojha2003,blommaert2006}. $^b$The AOR key uniquely identifies \textit{Spitzer} Space Telescope observations. \caption{Spitzer/IRS observations} \label{table:obs} \end{table*} Photometric observations between 12 and 500 \textmu m~ are included from three sources to augment the mid-IR spectroscopy. First, we include data from the \textit{Herschel} Space Observatory \citep{pilbratt2010} infrared Galactic plane survey (Hi-GAL; \citealt{molinari2010}), which observed the Galactic plane with the Photoconductor Array Camera and Spectrometer (PACS; \citealt{poglitsch2010}) and Spectral and Photometric Imaging REceiver (SPIRE; \citealt{griffin2010}). Second, we include photometric observations taken by the \textit{AKARI} Space Telescope \citep{murakami2007} with its Far-Infrared Surveyor (FIS) instrument \citep{kawada2007}, released as part of the \textit{AKARI} all-sky survey maps \citep{doi2015}. And third, we analyze images obtained by the \textit{Infrared Astronomical Satellite} (\textit{IRAS}), which were released via Improved Reprocessing of the \textit{IRAS} Survey (IRIS; \citealt{iris2005}). 
The photometric observations are summarized in Table~\ref{table:photometry}. Archival H$\alpha$~ images of the Galactic bulge region are acquired from the Southern H$\alpha$~ Sky Survey Atlas (SHASSA; \citealt{gaustad2001}). Calibration errors of the Hi-GAL photometric observations have been estimated as 5\% for \textit{Herschel}/PACS and 4\% for \textit{Herschel}/SPIRE \citep{molinari2016}. For the IRIS sample of \textit{IRAS} observations, the errors are approximately 15\%, 18\%, 11\% and 20\% for the 12, 25, 60 and 100 \textmu m~ bands, respectively \citep{iris2005}. The \textit{AKARI} photometric errors can be up to 20\%, 30\%, 40\% and 40\% for the 60, 90, 140 and 160 \textmu m~ filters \citep{kawada2007}. \begin{table*} \begin{center} \resizebox{1\linewidth}{!}{ \begin{tabular}{l l l l c} \toprule \toprule Observatory & Instrument & Nominal filters & Data origin & Data reference \\ \midrule \textit{Herschel} Space Observatory & PACS, SPIRE & 70, 160, 250, 350, 500 \textmu m & Hi-GAL & 1\\ \textit{AKARI} & FIS & 60, 90, 140, 160 \textmu m & All-sky survey maps & 2\\ \textit{Infrared Astronomical Satellite} (IRAS) & & 12, 25, 60, 100 \textmu m & IRIS & 3\\ Cerro Tololo Inter-American Observatory & & 656 nm & SHASSA & 4 \\ \bottomrule \end{tabular} } \end{center} \centering References: (1) Hi-GAL: The \textit{Herschel} Infrared Galactic Plane Survey \citep{molinari2010}; (2) \textit{AKARI} Far-Infrared All-Sky Survey Maps \citep{doi2015}; (3) IRIS: Improved Reprocessing of the \textit{IRAS} Survey \citep{iris2005}; (4) SHASSA: Southern H$\alpha$~ Sky Survey Atlas \citep{gaustad2001}.
\caption{Photometric observations} \label{table:photometry} \end{table*} \begin{figure} \begin{center} \includegraphics[width=1\linewidth, clip=true, trim=1cm 1.2cm 2cm 0.8cm] {fig2a.pdf} \includegraphics[width=0.98\linewidth, clip=true, trim=2.6cm 1.3cm 4cm 1.8cm] {fig2b.pdf} \caption{The IRS apertures for C32 (\textit{top}) and C35 (\textit{bottom}) are overlaid on an Infrared Array Camera (IRAC; \citealt{fazio2004}) 8 \textmu m~ image. The numbered labels correspond to the overlapping SL and LL apertures (the short and long blue rectangles, respectively), e.g. C32-1 (c.f. Table~\ref{table:obs}).} \label{fig:irac_c32c35} \end{center} \end{figure} \subsection{Data reduction} The \textit{Spitzer}/IRS data were reduced using the \texttt{CUBISM} tool \citep{smith2007cubism}, beginning with the basic calibrated data processed by the \textit{Spitzer} Science Center\footnote{\url{http://ssc.spitzer.caltech.edu/}} (pipeline version S18.18). The \texttt{CUBISM} tool performs coaddition and bad pixel cleaning and produces full spectral cubes. Since we are only interested in the background (off-star) emission, no further reduction was performed (apart from identifying additional cosmic ray spikes or bad pixels). The slits were rebinned with a $2\times 2$-pixel aperture, matching the point-spread function of IRS so as to sample independent pixels. Additional details of this approach are described by \citet{peeters2012}. The photometric data were retrieved fully processed, so no additional reduction was necessary. \subsection{Aperture overlap} \label{sec:apertures} The IRS/SL and IRS/LL apertures are oriented approximately perpendicular to one another, such that the intended astronomical target lies at their intersection. Since we are interested in the background emission in the IRS observations, i.e., not the point of intersection, we cannot measure the 5-14 \textmu m~ and 14-38 \textmu m~ spectra at the same spatial position, which introduces an offset.
After combining nod positions, each SL aperture in our observations typically spans 83\arcsec, while the LL apertures cover 233\arcsec. The maximum separation between SL and LL pointings is thus approximately 118\arcsec, with mean separations near 59\arcsec. We analyzed our spectra in two ways: first, we measured the median spectra in each module (avoiding the stellar emission zone), and stitched the median components together to produce a single full spectrum from 5-38 \textmu m; and second, we analyzed the SL and LL spectra entirely independently. No systematics were identified when comparing the separate spectra to the fully stitched median spectra, so we use the latter to represent the astronomical background emission for each pointing. It should be noted that some of the positions in each field are close enough to other pointings such that (e.g.,) the SL aperture of one position may overlap the LL aperture of another (see Fig.~\ref{fig:irac_c32c35}). However, we are interested in the general behavior of the spectra across the fields, rather than small position-to-position variations. We use apertures that are tightly clustered to verify consistency in our results. In C32 and C35, calibration offsets between the SL and LL modules and orders are very minor (on average less than 5\%, and reaching a maximum of 10\%), despite the spatial offset between the apertures. In the OGLE and NGC 6522 fields, however, the SL and LL modules are frequently mismatched by 10-20\%, and in one instance as high as 40\%. Because of this, we do not scale the SL and LL modules to each other in OGLE and NGC 6522. In all OGLE and NGC 6522 positions, the individual LL1 and LL2 orders are well matched, but the SL1 and SL2 orders are not (for instance, a factor of 0.30 is typically needed to bring SL2 in line with SL1 for the OGLE and NGC 6522 fields). 
As such, we exclude the OGLE and NGC 6522 measurements when examining PAH band strength ratios later in the text (Section~\ref{sec:corr} and accompanying figures). \section{Spectral analysis} \label{sec:inventory} In the spectrum of pointing 1 of field C32 (C32-1 for short; Fig.~\ref{fig:spectrum1}), many PAH emission features are prominent, including bands at 6.2, 7.7, 8.6, 11.2, 12.7, 16.4, 17.4 and 17.8 \textmu m. Weaker PAH emission at 12.0 \textmu m, 15.8 \textmu m~ and possibly 14.0 \textmu m~ may also be present. A smoothly rising dust continuum is visible, as are plateaus between 5-10 \textmu m, 10-15 \textmu m~ and 15-18 \textmu m. Atomic emission lines are also present, including 12.8 \textmu m~ [Ne~\textsc{ii}], 15.5 \textmu m~ [Ne~\textsc{iii}], 18.7 \textmu m~ [S~\textsc{iii}], 25.9 \textmu m~ [O~\textsc{iv}], 33.5 \textmu m~ [S~\textsc{iii}] and the 34.8 \textmu m~ [Si~\textsc{ii}] line, in addition to H$_2$ lines at 9.7, 12.3, 17.0 \textmu m~ and 28.2 \textmu m. It is possible in some instances that emission from the 25.99 \textmu m~ [Fe~\textsc{ii}] line is blended with the 25.89 \textmu m~ [O~\textsc{iv}] line. Our measurements of line centroids suggest, however, that we are observing [O~\textsc{iv}] the majority of the time, if any blend is present at all. As such, we henceforth assume the emission is from the 25.89 \textmu m~ [O~\textsc{iv}] line. \begin{figure*} \centering \includegraphics[width=0.75\linewidth]{fig3.pdf} \caption{ The median spectrum of C32-1. The thick black curve is the local spline continuum, while the dashed black line is an estimate of the zodiacal dust emission. Three plateaus are denoted by the green shading. These are determined by measuring the area between the local spline continuum and straight lines fit between the emission at 5, 10, 15 and 18 \textmu m; see Section~\ref{sec:inventory}. The prominent emission features are identified with dotted vertical lines.
} \label{fig:spectrum1} \end{figure*} The emission features in each spectrum are isolated from the underlying continuum by fitting a local spline continuum, allowing us to measure the band and line fluxes (Fig.~\ref{fig:spectrum1}). This spline is anchored at a series of wavelengths where only continuum emission is expected. This is a common approach in the literature for measuring the strengths of the PAH emission features (e.g. \citealt{vankerckhoven2000,hony2001,peeters2002,galliano2008b}). Other possible methods for measuring these bands include fitting Drude profiles (PAHFIT; \citet{smith2007}) or Lorentzian profiles \citep{boulanger1998,smith2007,galliano2008b}. \citet{galliano2008b} showed that the measured PAH fluxes will vary from method to method, but the overall trends and conclusions reached using these methods are consistent. Once the continuum has been identified and subtracted, the emission features are measured as follows: the PAH features are directly integrated, except for the 11.0 and 12.7 \textmu m~ bands, which are blended with the 11.2 \textmu m~ emission and the 12.8 \textmu m~ [Ne~\textsc{ii}] line, respectively. There may also be 12.3 \textmu m~ H$_2$ emission blended with the 12.7 \textmu m~ PAH band. The 12.7 \textmu m~ PAH band is isolated from the 12.3 \textmu m~ H$_2$ and 12.8 \textmu m~ [Ne~\textsc{ii}] lines by fitting a template of the 12.7 \textmu m~ PAH emission (for details see \citealt{stock2014} and \citealt{shannon2015}). The 11.0 \textmu m~ emission is fit with a Gaussian while keeping the shape of the lightly blended 11.2 \textmu m~ band fixed. The other atomic and molecular lines are fit with Gaussians whose widths are fixed to the instrument's spectral resolution. A common method for estimating the plateau strengths is to fit straight lines between the continuum emission at 5, 10, 15 and 18 \textmu m~ \citep{peeters2012,peeters2017}.
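The essence of this decomposition can be sketched in a few lines of Python (an illustrative toy with synthetic data and invented numbers, not our measurement code); scipy's `CubicSpline` plays the role of the local spline continuum and `curve_fit` that of the Gaussian line fits:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import trapezoid
from scipy.optimize import curve_fit

def spline_continuum(wave, flux, anchors):
    """Local spline continuum through feature-free anchor wavelengths."""
    return CubicSpline(anchors, np.interp(anchors, wave, flux))(wave)

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Toy spectrum: a linear continuum plus an 11.2 micron band.
wave = np.linspace(10.0, 12.5, 400)
flux = 15.0 + 1.2 * (wave - 10.0) + gaussian(wave, 8.0, 11.2, 0.12)

# Subtract the spline continuum and directly integrate the band.
cont = spline_continuum(wave, flux, anchors=[10.0, 10.6, 12.0, 12.5])
band = (wave > 10.8) & (wave < 11.6)
f112 = trapezoid((flux - cont)[band], wave[band])

# Unresolved lines are fit with Gaussians instead; in practice the width
# would be fixed to the instrumental resolution rather than left free.
popt, _ = curve_fit(gaussian, wave, flux - cont, p0=[5.0, 11.2, 0.1])
```

The straight-line plateau baseline between the continuum points at 5, 10, 15 and 18 \textmu m~ can be constructed in the same way with `np.interp` over those four anchors.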
The difference between this curve and the local spline then defines the plateau regions, which we directly integrate (Fig.~\ref{fig:spectrum1}). \section{Results} \label{sec:results} \subsection{Composite images} We present composite 3-color images of the C32 and C35 fields in Figs.~\ref{fig:rgb_c32} and~\ref{fig:rgb_c35}, respectively. Each composite is constructed from \textit{Spitzer}/IRAC (8 \textmu m), \textit{Herschel}/SPIRE (250 \textmu m) and the SHASSA H$\alpha$~ survey (656 nm). In C32, there appears to be a region of elevated H$\alpha$~ emission (or ``channel'' hereafter) that bisects the field. The 8 \textmu m~ and 250 \textmu m~ emission appear to peak on either side of this channel. A similar elevated H$\alpha$~ region/channel is apparent in C35 (Fig.~\ref{fig:rgb_c35}), coincident with positions 3, 4 and 5. There was insufficient coverage to prepare similar figures for the OGLE and NGC 6522 fields. \begin{figure*} \centering \includegraphics[width=0.7\linewidth, clip=false, trim=0.57cm 0.30cm 0.65cm 0.27cm]{fig4.pdf} \caption{ A composite image of C32, constructed with images from Spitzer/IRAC 8 \textmu m~ (red), Herschel/SPIRE 250 \textmu m~ (green) and 656 nm H$\alpha$~ emission (blue). The green rectangles identify the SL and LL apertures (less elongated and more elongated, respectively; c.f. Fig.~\ref{fig:irac_c32c35} and Table~\ref{table:obs} for identifications). An elevated H$\alpha$~ emission zone (or channel) that bisects the IRS apertures is visible. } \label{fig:rgb_c32} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.7\linewidth]{fig5.pdf} \caption{ A composite image of C35, with Galactic longitude (in degrees) on the x axis and Galactic latitude (in degrees) on the y axis. The image is composed of emission from Spitzer/IRAC 8 \textmu m~ (red), Herschel/SPIRE 250 \textmu m~ (green) and 656 nm H$\alpha$~ (blue).
The leftmost C35 pointings (corresponding to positions C35-3, C35-4 and C35-5) are coincident with an elevated H$\alpha$~ emission region or channel (c.f. Fig.~\ref{fig:irac_c32c35}). } \label{fig:rgb_c35} \end{figure*} \subsection{The spectra} Median spectra for each position of our four fields (C32, C35, OGLE, NGC 6522) are shown in Fig.~\ref{fig:meanspec}. We have included an estimate for the zodiacal light emission in these spectra using the Spitzer Zodiacal Light Model\footnote{\url{http://irsa.ipac.caltech.edu/data/SPITZER/docs/dataanalysistools/tools/contributed/general/zodiacallight/}}. C32 displays remarkably similar spectra across the sixteen apertures, both in overall continuum shape and individual emission features (Fig.~\ref{fig:meanspec}a). Deviations in continuum brightness between C32 positions are typically less than 5 MJy/sr on $\sim$20-40 MJy/sr continua. We observe significant contribution from zodiacal light, but it does not dominate the overall continuum of these spectra. The PAH features and plateaus appear to be similar in strength and shape across all positions. Only the atomic fine-structure lines show significant variations, as the 18.7 \textmu m~ [S~\textsc{iii}], 33.5 \textmu m~ [S~\textsc{iii}] and 34.8 \textmu m~ [Si~\textsc{ii}] emission lines vary by a factor of approximately two in peak intensity. The 25.89 \textmu m~ [O~\textsc{iv}] line, typically a tracer of shocked gas \citep{simpson2007}, is also visible, though it appears to vary little across the field. The C35 spectra (Fig.~\ref{fig:meanspec}b) are very similar in appearance to the spectra of C32. The only spectral differences within the C35 field are variations in continuum shape beyond $\sim25$ \textmu m, with position 2 being slightly flatter than the other positions. Fine-structure line variations are observed, similar to those of C32, and the 25.89 \textmu m~ [O~\textsc{iv}] line is again clearly present.
The contribution from zodiacal light is essentially identical to that observed toward C32; at long wavelengths the rising dust continuum diverges from the zodiacal dust emission. Spectra for the OGLE field (Fig.~\ref{fig:meanspec}c) display a strong jump between modules SL and LL (near 14.5 \textmu m) in some positions (see Section~\ref{sec:apertures}). Beyond 15 \textmu m~ all spectra within the field have comparable continua, with typical surface brightnesses of approximately 24 MJy/sr. The overall shape of the OGLE spectra essentially traces the zodiacal dust emission, in contrast to C32 and C35. PAH features are visible in the OGLE spectra, though some are weak and/or difficult to detect above the noise (e.g., the 12.7 \textmu m~ feature in OGLE-6). There is clear 11.2 \textmu m~ emission in several of the OGLE positions. The 15-20 \textmu m~ PAH emission is distinct from that seen towards the C32 and C35 fields, seemingly extending to 20 \textmu m~ instead of 18 \textmu m---such emission has been previously observed by \citet{vankerckhoven2000} and \citet{peeters2006}. The 25.89 \textmu m~ [O~\textsc{iv}] emission line is again observed, along with H$_2$, Ne and S emission lines. Turning to the final field, the emission in NGC 6522 is dominated by zodiacal dust emission (Fig.~\ref{fig:meanspec}d). These spectra are very noisy and show almost no PAH emission features, apart from possibly very weak 7.7 and 11.2 \textmu m~ bands in some positions. However, the 15-20 \textmu m~ plateau is present and very strong in NGC 6522. The 33.5 \textmu m~ [S~\textsc{iii}] and 34.8 \textmu m~ [Si~\textsc{ii}] lines are also present at relatively low signal-to-noise. The 25.89 \textmu m~ [O~\textsc{iv}] emission line cannot be detected, if it is present at all. \begin{figure*} \centering \includegraphics[width=1.0\linewidth]{fig6.pdf} \caption{Spectra for C32, C35, OGLE and NGC 6522 are shown in panels (a), (b), (c) and (d), respectively.
These C32 and C35 spectra were created by stitching the emission from the SL and LL modules, which are not necessarily spatially coincident (see Section~\ref{sec:apertures}), and taking a median over each pointing. The OGLE and NGC 6522 SL and LL spectra are not stitched at their overlap (denoted by the vertical dotted line; see Sec.~\ref{sec:apertures}). The colors and inset labels indicate individual positions (c.f., Table~\ref{table:obs}). The dashed black lines below the spectra are estimates for the zodiacal light emission in each field. } \label{fig:meanspec} \end{figure*} We compare all four fields, overlaid, in Fig.~\ref{fig:specall4}, with a single median spectrum being constructed from all positions in each field. The zodiacal dust emission has been removed from these data. The C32 and C35 spectra are extremely similar, differing only in the strength of their atomic fine-structure lines and possibly the continuum near 12-14 \textmu m. The 18.71 and 33.48 \textmu m~ [S~\textsc{iii}] lines and 34.82 \textmu m~ [Si~\textsc{ii}] line are on average much weaker in C35 than C32. Recall these fields are at similar Galactocentric distances, but on opposite sides of the Galactic plane, residing near (l, b) = (0.0\textdegree, 1.0\textdegree) and (0.0\textdegree, -1.0\textdegree), respectively. The NGC 6522 and OGLE fields, which are further distant at (0.4\textdegree, -2.1\textdegree) and (1.0\textdegree, -3.8\textdegree), respectively, are dominated by zodiacal emission and quite alike after subtraction. The emission features in the OGLE field are a bit brighter than those of NGC 6522, and its continuum emission beyond 28 \textmu m~ rises and slightly diverges from the NGC 6522 spectra. Otherwise, they are quite similar. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{fig7.pdf} \caption{ A comparison of the median spectra in each field after subtracting the zodiacal dust emission. 
Surprisingly, the C32 and C35 spectra are exceedingly similar in all but the atomic fine-structure lines. The continua of OGLE and NGC 6522 have essentially disappeared, showing that they were dominated by the zodiacal light. } \label{fig:specall4} \end{figure} \subsection{Variability of the emission features} \label{sec:refappendix} Here we discuss the emission features within fields C32 and C35. The tabulated fluxes of all measured quantities are presented in the Appendix. Additionally, maps of all band/line strengths measured within these fields are presented in the Appendix (Figs.~\ref{fig:c32maps},~\ref{fig:c32maps2},~\ref{fig:c35maps} and~\ref{fig:c35maps2}). Due to the limited number of reliable measurements within the OGLE and NGC 6522 fields (c.f. Tables~\ref{table:pahfluxes} and~\ref{table:atomicfluxes}) no such figures are prepared for these sources. We first examine select emission features in C32 (Fig.~\ref{fig:mapc32}), overlaid on a J-band image from the Two Micron All Sky Survey (2MASS; \citealt{skrutskie2006}). The 7.7 and 11.2 \textmu m~ PAH bands are weakest in the central part of the C32 field (near the apertures of C32-9, C32-10 and C32-11) and peak towards the outer regions of the field (near C32-2 and C32-12; c.f., Fig.~\ref{fig:irac_c32c35}). Conversely, the fine-structure lines all peak near the central part of the map, suggesting an anticorrelation (e.g., the 18.7 \textmu m~ [S~\textsc{iii}] emission). Taking into account the three-color image of C32 (Fig.~\ref{fig:rgb_c32}), we infer that there is a correspondence between emission strengths and the H$\alpha$~ channel: where the H$\alpha$~ emission is strong, the atomic lines are brightest and PAH bands weakest. The 25.89 \textmu m~ [O~\textsc{iv}] emission is brightest within the H$\alpha$~ emission channel, though it is slightly offset from the 33.5 \textmu m~ [S~\textsc{iii}] emission.
The 12.8 \textmu m~ [Ne~\textsc{ii}] line is detected throughout C32 (Fig.~\ref{fig:c32maps2}), with significant variability near the H$\alpha$~ channel: the emission is three times brighter in the H$\alpha$~ channel than in the neighboring leftward positions (positions 12, 14, 15 and 16). The 15.5 \textmu m~ [Ne~\textsc{iii}] line is much less variable, and is only detected in the central part of the field (in five positions). The [Ne~\textsc{iii}]/[Ne~\textsc{ii}] flux ratio shows no monotonic trend, though note the [Ne~\textsc{iii}] line is relatively noisy in these spectra. Turning to the 17.0 \textmu m~ and 28.2 \textmu m~ H$_2$ lines, these are weakest in the H$\alpha$~ channel and strongest towards the far-right pointing (C32-1), furthest from the channel. \begin{figure} \begin{center} \includegraphics[width=1\linewidth] {fig8.pdf} \caption{ Maps of select emission band fluxes in the C32 field, averaged over each aperture and in units of W~m$^{-2}$~sr$^{-1}$, overlaid on a 2MASS J-band image. From top to bottom: the 7.7 \textmu m~ PAH band, the 11.2 \textmu m~ PAH band, the PAH 7.7/11.2 flux ratio, the 33.5 \textmu m~ [S~\textsc{iii}] line, and the 25.89 \textmu m~ [O~\textsc{iv}] line. The sulphur fine-structure line peaks in the central part of the map, where the PAH emission is generally weakest (roughly coincident with the elevated H$\alpha$~ emission; see Fig.~\ref{fig:irac_c32c35}). Similar figures for the other PAH bands and atomic/molecular lines are found in Figs.~\ref{fig:c32maps} and~\ref{fig:c32maps2}, respectively. } \label{fig:mapc32} \end{center} \end{figure} We prepare a similar figure for C35 (Fig.~\ref{fig:mapc35}), with the full set of maps located in the Appendix (Figs.~\ref{fig:c35maps} and~\ref{fig:c35maps2}). This field has only five positions (two of which, C35-4 and C35-5, share an LL aperture), so our ability to trace smooth variations is limited.
What is clear, however, is that positions 3, 4 and 5, which are coincident with the elevated H$\alpha$~ emission (Fig.~\ref{fig:rgb_c35}), have elevated fine-structure line emission relative to positions 1 and 2. The PAH emission is relatively flat across the field, though perhaps slightly higher in the H$\alpha$~ channel (positions 3, 4 and 5). These positions also exhibit a $\sim$10\% higher PAH 7.7/11.2 \textmu m~ flux ratio than the other locations. The 17.0 \textmu m~ H$_2$ emission is generally greater towards positions 3, 4 and 5, which is the opposite of the behavior seen in C32. The OGLE and NGC 6522 spectra are generally too noisy for this type of analysis, though the 34.8 \textmu m~ [Si~\textsc{ii}] emission is well detected, peaking strongly at position OGLE-7 within this field. Within NGC 6522, the Si emission is essentially flat across the field, with the possible exception of NGC 6522-1. \begin{figure} \begin{center} \includegraphics[width=1\linewidth] {fig9.pdf} \caption{Maps of select emission band fluxes in the C35 field, averaged over each aperture and in units of W~m$^{-2}$~sr$^{-1}$, overlaid on a 2MASS J-band image. From top to bottom: the 7.7 \textmu m~ PAH band, the 11.2 \textmu m~ PAH band, the PAH 7.7/11.2 flux ratio, the 33.5 \textmu m~ [S~\textsc{iii}] line, and the 25.89 \textmu m~ [O~\textsc{iv}] line. The PAH emission varies weakly across the field, while the fine-structure lines are significantly stronger towards the left in this orientation, where elevated H$\alpha$~ emission is present (positions 3, 4 and 5; see Figs.~\ref{fig:irac_c32c35} and~\ref{fig:rgb_c35}).
We present similar figures for the other PAH bands and atomic/molecular lines in Figs.~\ref{fig:c35maps} and~\ref{fig:c35maps2}, respectively.} \label{fig:mapc35} \end{center} \end{figure} \subsection{PAH flux ratio correlations} \label{sec:corr} A common method for indirectly tracing systematic variations in PAH populations is to evaluate PAH band flux ratios across environments (e.g., \citealt{galliano2008b}). In Fig.~\ref{fig:corr} we examine the emission strengths of the 6.2 and 7.7 \textmu m~ PAH bands using the 11.2 \textmu m~ band as a normalization factor. The 6.2 and 7.7 \textmu m~ features are strong in ionized PAHs, while the 11.2 \textmu m~ band is strong in neutral PAHs. As such, the 6.2/11.2 and 7.7/11.2 ratios trace PAH ionization (e.g., \citealt{allamandola1999}). A weak correlation is observed (with weighted Pearson correlation coefficient $r=0.47$ in C32). Our data span a range in 6.2/11.2 of 0.5-1.2 in C32, 0.9-1.0 in C35, and 0.4-0.6 in OGLE; in contrast, the 7.7/11.2 ratio varies little in our spectra. \citet{peeters2017} analyzed spectral maps of the reflection nebula NGC 2023 and found a high correlation coefficient between the 6.2 and 7.7 \textmu m~ bands ($r>0.97$). We plot their line of best fit for comparison in this figure (Fig.~\ref{fig:corr}). Our data exhibit much lower 6.2/11.2 flux ratios than in NGC 2023, with the C32 and C35 measurements roughly consistent with an extrapolation of the best fit line towards lower ratios. We also include best fit lines for W49A, a large star-forming region \citep{stock2014}, with one fit for ultra-compact \HII~regions alone and one for their entire sample (which includes diffuse sight-lines; see their paper for details). Both of their data sets have correlation coefficients $r>0.8$. The W49A 6.2/11.2 flux ratios reach the low ratios that we observe in our sample ($\sim$0.6).
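For reference, the weighted Pearson coefficient quoted above can be computed as follows (a minimal sketch; the toy numbers and the choice of weights are ours, purely for illustration):

```python
import numpy as np

def weighted_pearson(x, y, w):
    """Weighted Pearson correlation coefficient; with equal weights this
    reduces to the ordinary r. In practice the weights would come from
    the flux-ratio uncertainties (e.g. inverse variances)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.asarray(w, float) / np.sum(w)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (y - my) ** 2))

# Perfectly linear toy ratios recover r = 1 regardless of the weights.
x = np.array([0.5, 0.7, 0.9, 1.1])
r = weighted_pearson(x, 2.0 * x + 0.1, w=[1.0, 2.0, 1.0, 0.5])
```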
The generally weak correlation of the 6.2 and 7.7 \textmu m~ bands in our fields toward the Galactic bulge suggests that we are probing relatively small variations in environmental conditions. For instance, the small range in 7.7/11.2 flux ratio we observe should correspond to a relatively narrow range in PAH ionization fraction (see \citealt{galliano2008b}, their Fig.~18). Furthermore we conclude that, relative to NGC 2023 and W49, our environments contain a higher fraction of neutral PAHs. For comparison, we also plot the flux ratios of the Orion Bar PDR \citep{peeters2002}, the diffuse ISM \citep{bernard1994,boulanger1996a} and a pointing toward the superwind of the starburst galaxy M82 (\citealt{beirao2015}, their region 1)\footnote{Note that we remeasured the PAH emission in the M82 spectrum taken from \citet{beirao2015} with a spline continuum for this analysis, as the authors used a different decomposition approach (PAHFIT).}. These sources are generally consistent with our 7.7/11.2 and 6.2/11.2 flux ratios, with the M82 spectrum being the best match to our Galactic bulge flux ratios. \begin{figure} \centering \includegraphics[width=1\linewidth]{fig10.pdf} \caption{ Flux correlations between the 6.2 and 7.7 \textmu m~ PAH bands, normalized to the 11.2 \textmu m~ PAH emission, for C32 (blue diamonds), C35 (green circles) and OGLE fields (red triangles). In parentheses is the weighted Pearson correlation coefficient for each field (omitted if fewer than 3 data points). The lines shown are lines of best fit from \textit{different} data sets: the dashed black line for NGC 2023 \citep{peeters2017}, the solid gray line for W49A \citep{stock2014}, and the solid black line for ultracompact \HII~regions in W49A \citep{stock2014}. The extents of the solid lines indicate the range in the data from which they were originally determined. Extrapolations of these best fits are shown with the light dotted lines. 
The black symbols correspond to three environments for comparison: the Orion Bar PDR \citep{peeters2002}, the diffuse ISM \citep{bernard1994,boulanger1996a} and a pointing toward the M82 superwind (\citealt{beirao2015}, their region 1). These are explored in Sec.~\ref{sec:enviro_comparison}. } \label{fig:corr} \end{figure} \subsection{Correlations amongst the entire sample} \label{sec:corrmatrix} Expanding on the 6.2 and 7.7 \textmu m~ band correlation explored in Sec.~\ref{sec:corr}, we now consider correlations between all measured quantities in C32 and C35: PAH bands, plateaus and atomic and molecular emission lines. For this analysis we exclude features with few detections (i.e., the 17.8 \textmu m~ PAH band and the 15.5 \textmu m~ [Ne~\textsc{iii}] and 28.2 \textmu m~ emission lines). We present a correlation matrix summarizing our results in Fig.~\ref{fig:corrmatrix}, ordered by complete-linkage hierarchical clustering. In this clustering method, the distance between two clusters is defined as the maximum (Euclidean) distance between any pair of members drawn from the two clusters. Based on the matrix, we make a few remarks. Generally, most PAH features correlate with each other. They also correlate with the plateau emission and the 17.0 \textmu m~ H$_2$ line. As an exception, the 12.7 and 16.4 \textmu m~ PAH emission bands exhibit a weak correlation ($r=0.45$) with each other but little else. The 12.7 \textmu m~ band must be isolated from its blended neighbor at 12.8 \textmu m~ ([Ne~\textsc{ii}]), which may explain why there are few statistically significant detections. The solitary nature of the 16.4 \textmu m~ band is peculiar, though it may be due to systematic effects in the continuum determination near this location (which is on the rising red wing of the 15-18 \textmu m~ plateau).
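Such an ordering can be sketched with scipy's complete-linkage implementation (an illustration only: here we use a correlation-based distance, 1 - |r|, rather than the Euclidean distances used for the figure, and the toy matrix is ours):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def cluster_order(corr, labels):
    """Order variables by complete-linkage clustering so that strongly
    (anti)correlated quantities end up adjacent in the matrix."""
    dist = 1.0 - np.abs(np.asarray(corr, float))
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="complete")
    return [labels[i] for i in leaves_list(Z)]

# Toy matrix: A and C correlate strongly; B is unrelated to both.
corr = np.array([[1.0, 0.1, 0.9],
                 [0.1, 1.0, 0.2],
                 [0.9, 0.2, 1.0]])
order = cluster_order(corr, ["A", "B", "C"])  # A and C come out adjacent
```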
The fine-structure lines of 12.8 \textmu m~ [Ne~\textsc{ii}], 18.7 \textmu m~ [S~\textsc{iii}], 33.5 \textmu m~ [S~\textsc{iii}] and 34.8 \textmu m~ [Si~\textsc{ii}] are highly correlated with each other and anticorrelated with all other quantities. These fine-structure lines all originate in ionized gas. The S and Ne ions have comparable ionization potentials (21-23 eV), whereas the Si ion has a potential of approximately 8 eV. The 25.89 \textmu m~ [O~\textsc{iv}] line, whose parent ion has a much higher ionization potential (approximately 55 eV), correlates with the 6.2 and 7.7 \textmu m~ PAH bands, suggesting it is prominent in environments that favor ionized PAHs. \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{fig11.pdf} \caption{ Correlation matrix for quantities measured in the spectra of C32 and C35: PAH band fluxes, atomic and molecular line emission, and plateau strengths (see Section~\ref{sec:corrmatrix} for details). The matrix is symmetric, with the Pearson R correlation coefficient presented in the lower half. The upper half is a representation of the correlation coefficient with color-coded squares: positive correlations are blue, negative correlations are red (with diagonal white hatching); the saturation of the color is proportional to the absolute value of the correlation coefficient. P-values are also displayed in the upper half (if p-value $\ge 0.01$), where correlations with p $\le 0.05$ are generally considered significant. The quantities are ordered based on a hierarchical clustering algorithm (black squares). Note the abbreviation ``Plat'' refers to the PAH plateaus. } \label{fig:corrmatrix} \end{figure*} \subsection{Spectral energy distributions} Using our photometric images, we have constructed spectral energy distributions (SEDs) for each position in our fields.
This was accomplished by sampling the surface brightness of the photometric observations at each pixel in our \textit{Spitzer}/IRS apertures in the following way: if the IRS pixels were entirely contained within a larger pixel of the photometric images, the corresponding surface brightness was adopted; if any IRS pixel overlapped multiple photometric pixels, the mean surface brightness was adopted. In no cases were the IRS pixels larger than the spatial resolution of the photometric images. Fig.~\ref{fig:seds} displays the resulting SEDs for the 16 pointings toward C32 and the four pointings toward C35 (ignoring the fifth C35 position, as it shares the same LL aperture as C35-4). We do not construct SEDs for OGLE and NGC 6522, as there is no coverage from \textit{Herschel} Hi-GAL at these positions (and thus no $>200$ \textmu m~ photometry). Most of the C32 positions have consistent photometric brightnesses within their uncertainties (particularly with their neighboring positions, e.g. \textit{AKARI} 60 \textmu m~ and \textit{Herschel} 70 \textmu m), with the stark exception of the measurements near 160 \textmu m. Specifically, the \textit{AKARI} measurements at 140 and 160 \textmu m~ are approximately twice the surface brightness of the \textit{Herschel}/PACS 160 \textmu m~ measurement. The discrepancy is likely due to the greater calibration uncertainty of the \textit{AKARI} measurements; some residual striping was also observed in the \textit{AKARI} photometric images, which may play a role. The \textit{IRAS}, \textit{AKARI} and \textit{Herschel} measurements all agree at shorter wavelengths ($< 100$ \textmu m). For the purpose of this analysis, we addressed the discrepancy in two ways: first, by including \textit{AKARI}/FIS measurements at 140 and 160 \textmu m; and second, by including only the \textit{Herschel}/PACS measurement at 160 \textmu m. All other measurements are uncontroversial and therefore included to construct the resulting SED.
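The sampling rule described above amounts to the following sketch (a nearest-pixel lookup plus averaging; the function name and toy values are ours, purely for illustration):

```python
import numpy as np

def aperture_surface_brightness(photo_map, pix_scale, irs_pixels):
    """Mean surface brightness of a photometric map over a set of IRS
    pixel centres (in arcsec). An IRS pixel inside a single map pixel
    adopts that pixel's value; IRS pixels spread over several map
    pixels contribute each underlying value to the mean."""
    vals = []
    for x, y in irs_pixels:
        j, i = int(x // pix_scale), int(y // pix_scale)
        vals.append(photo_map[i, j])
    return float(np.mean(vals))

# Toy 2x2 map with 10-arcsec pixels; two IRS pixels landing in two
# different map pixels (values 10 and 20) average to 15.
m = np.array([[10.0, 20.0],
              [30.0, 40.0]])
sb = aperture_surface_brightness(m, 10.0, irs_pixels=[(5.0, 5.0), (15.0, 5.0)])
```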
There is little variation in the C32 photometric measurements (Fig.~\ref{fig:seds}) across the field at both short wavelengths ($<60$ \textmu m) and long wavelengths (350, 500 \textmu m; note however that the coverage of the 350 \textmu m~ image is limited, and thus the 350 \textmu m~ emission could not be measured at all positions). In the $60-250$ \textmu m~ range, surface brightnesses vary between individual C32 positions, sometimes by as much as 50\%. At 160 \textmu m, there appears to be a general, though not strictly monotonic, segregation in brightness of C32 positions west of the channel (toward higher brightnesses) and positions east of the channel (toward lower brightnesses). The \textit{Herschel} and \textit{AKARI} measurements (aside from the 160 \textmu m~ filter) show that the C32-1 position is consistently the brightest in the field. This is also true for the \textit{IRAS} measurements, though there is less variation between positions in these data (possibly due to its coarser spatial resolution relative to \textit{AKARI} and \textit{Herschel}). The field of C35 exhibits similar trends: we observe consistent measurements between the three observatories, apart from the $\sim$160 \textmu m~ discrepancy. In addition, surface brightness variations within the C35 field are present in the $60-250$ \textmu m~ range. Perhaps most notable is that positions C35-1 and C35-2 cluster together in surface brightness, while positions C35-3 and C35-4 are clustered and offset to higher overall brightnesses. Positions 3 and 4 are coincident with elevated H$\alpha$~ emission (Fig.~\ref{fig:irac_c32c35}), which suggests that environmental conditions vary across the C35 field. This separation and clustering can also be seen in several of the C35 maps (Figs.~\ref{fig:c35maps} and~\ref{fig:c35maps2}).
\begin{figure} \centering \subfigure{ \includegraphics[width=\linewidth] {fig12a.pdf} \label{fig:sed_c32} } \\ \subfigure{ \includegraphics[width=\linewidth] {fig12b.pdf} \label{fig:sed_c35} } \caption{ Spectral energy distributions for C32 (top) and C35 (bottom), constructed with photometric observations from the \textit{Herschel} Space Observatory, the \textit{Infrared Astronomical Satellite} (\textit{IRAS}) and \textit{AKARI} (see Table~\ref{table:photometry}). A clear separation between individual positions is present at 160 \textmu m, 250 \textmu m~ and 350 \textmu m, particularly in C35. } \label{fig:seds} \end{figure} \subsection{Summary} The spectra of C32 and C35 appear exceptionally similar in continuum shape and PAH feature strength, even after removing the contribution from zodiacal dust. However, spectral variations within the fields are present when examined in detail. We use a composite image of C32 (Fig.~\ref{fig:rgb_c32}) to identify an elevated H$\alpha$~ emission region (or channel). This H$\alpha$~ channel is linked to weak PAH emission and strong fine-structure emission. The 25.89 \textmu m~ [O~\textsc{iv}] line, which is a tracer of shocked gas, is detected across the field but is strongest within and near the H$\alpha$~ channel. SEDs show that there are significant surface brightness variations across the field, sometimes by as much as 50\%, indicating variable dust properties. At 160 \textmu m~ there appears to be a general segregation in brightness between positions on either side of the channel. We examine fewer positions within C35 than C32 but find similar trends: the median spectra in C35 are alike but the composite image identifies significant H$\alpha$~ emission near positions 3, 4 and 5 (Fig.~\ref{fig:rgb_c35}). The fine-structure lines peak in this region, as in C32.
The 25.89 \textmu m~ [O~\textsc{iv}] line is also observed across the field, but no clear variation is apparent between on-channel and off-channel regions, in contrast to the field of C32. Our SEDs show that the C35 positions coincident with the H$\alpha$~ channel are systematically brighter than the off-channel locations (positions 1 and 2). The PAH emission strength in C35 does not vary to the degree observed in C32, but there is a correspondence with H$\alpha$~ structure: the 7.7/11.2 PAH ratio is $\sim$10\% higher here when compared to off-channel positions. A relatively weak correlation is identified between the 6.2 and 7.7 \textmu m~ PAH band fluxes across our sample, which is normally one of the strongest PAH correlations measured. This discrepancy likely arises because we are probing a limited range of PAH flux ratios. It is remarkable, though, that our ratios are similar to those in ultra-compact \HII~regions \citep{stock2014} and do not coincide with those of reflection nebulae and the more diffuse ISM. In this respect, we note that the fine-structure line emission we observe is not similar to that observed toward these ultra-compact \HII~regions with similar PAH ratios: ultra-compact \HII~regions exhibit higher degrees of ionization, with strong 10.5 \textmu m~ [S~\textsc{iv}] emission and much stronger 12.8 \textmu m~ [Ne~\textsc{ii}] emission than we see in our spectra \citep{stock2014}. These lines of sight are complex, in general, but we have several tools to help us understand the processes at play: a prominent H$\alpha$~ emission structure, 25.89 \textmu m~ [O~\textsc{iv}] emission possibly tracing shocked gas, variable dust grain surface brightnesses and PAH emission variations that show a morphological link to the H$\alpha$~ channel. To understand which environments we are observing along these sight-lines, we need to discern whether these emission features are merely coincident or are linked to the same physical conditions and environments.
\section{Discussion} \label{sec:discussion} \subsection{The Galactic Bulge environment} We wish to identify which environments we are probing on these sight-lines and the corresponding influence each has on the observed spectra. To address this, we first examine the general Galactic bulge environment. The Galactic plane and fields C32 and C35 are displayed in Fig.~\ref{fig:radio} (spanning approximately $-1.3<b<1.3$). This composite image combines 857 GHz emission measured with the High Frequency Instrument \citep{lamarre2010} of the ESA \textit{Planck} mission \citep{tauber2010}, 8 \textmu m~ PAH emission measured with \textit{Spitzer}/IRAC and H$\alpha$~ emission from SHASSA. \begin{figure} \centering \includegraphics[width=1\linewidth]{fig13.pdf} \caption{ A composite image of the Galactic center. The vertical and horizontal axes are Galactic latitude and longitude, respectively. The image consists of \textit{Planck} 857 GHz emission (red), 8 \textmu m~ \textit{Spitzer}/IRAC PAH emission (green) and SHASSA H$\alpha$~ emission (blue). The C32 and C35 fields are centered on the white crosses (c.f. Figs.~\ref{fig:rgb_c32},~\ref{fig:rgb_c35} for specific aperture locations). The northern loop, as clearly seen in blue, is the Galactic center lobe (GCL); a corresponding southern lobe is also present but less distinctive. Together, the lobes are tilted slightly with respect to the Galactic plane. } \label{fig:radio} \end{figure} North of the plane is the prominent $\Omega$-shaped emission feature known as the Galactic center lobe (GCL), which here can be seen as bounded by the H$\alpha$~ and 8 \textmu m~ emission arcs. This feature is $\sim$200~pc in diameter, spanning $l=359.2\deg - 0.2\deg$ and $b=0.2\deg - 1.2\deg$, and was first identified by \citet{sofue1984} from 10 GHz radio continuum emission.
It is thought to generally have the shape of a telescope dome \citep{bland2003}, with nested shells of radio line emission, radio continuum emission and dust/PAH emission, respectively, from interior to exterior \citep{law2010}. Quantitatively, the general structure (as measured from the center of the GCL) is: radio line emission from a 15-pc thick shell at radius $r=40$ pc, surrounded by radio continuum emission from a 15-pc thick shell at radius $r=55$ pc, and finally a 5-pc thick dust/PAH shell at radius $r=65$ pc \citep{law2010}. Calculations suggest the GCL has a mass of 5$\times$10$^6$ M$_\odot$ and an energy content of approximately 10$^{54}$ to 10$^{55}$ ergs \citep{bland2003}. C32 is on the eastern spur of the GCL, towards its highest-latitude boundary. The H$\alpha$~ emission in this region is consistent with emission expectations from radio recombination line studies of local ionized gas \citep{law2009,law2010}. A complementary southern lobe to the GCL is visible in Fig.~\ref{fig:radio}, previously identified via radio line emission by \citet{alves2015}. While the (northern) GCL exhibits $\sim$0 km s$^{-1}$ velocities, the southern lobe has velocities of $\sim$15 km s$^{-1}$. The entire bipolar structure appears to be inclined roughly 20\textdegree~W of N \citep{bland2003}. In general, the symmetry of the northern and southern lobes suggests a common origin, though in the south the eastern boundary appears to be a relatively complex environment. C35 is on the lobe boundary in the south, similar to the complementary location of C32 in the north. However, due to the overall $\sim$20\textdegree~tilt of the structure, physical conditions towards C32 and C35 may differ. This is reflected in the weaker fine-structure lines observed toward C35 (relative to C32), though the dust continuum and PAH emission strengths appear essentially unchanged between the two fields.
The GCL is generally attributed to an outflow emanating from the Galactic plane roughly 7-10 Myr ago \citep{lutz1999,bland2003}. The starburst model of stellar winds and/or supernovae \citep{veilleux2005} is consistent with the energy requirements implied for the GCL, and this is thought to be the most likely formation scenario \citep{law2010}. Calculations indicate the observed dust temperatures on the GCL boundary cannot be sustained by radiative heating alone, suggesting shock and/or turbulent heating may be present at its boundary \citep{bland2003}. The 25.89 \textmu m~ [O~\textsc{iv}] line we observe in both C32 \textit{and} C35 spectra is an indication of ongoing shock processing in each environment. Also present in Fig.~\ref{fig:radio} is a feature in H$\alpha$~ emission and 857 GHz radio emission between the Galactic center and (l, b) $\approx$ (0.06\textdegree, 0.25\textdegree)---i.e., directed northward out of the plane. Wisps of 8 \textmu m~ or H$\alpha$~ emission may be present that provide weak filamentary connections between this feature and the H$\alpha$~ arc near C32, though it is unclear if they are causally linked---it is possible that the feature is simply a foreground object or an outburst of some kind. On the southern side of the plane, we do not identify a similarly isolated feature. Instead, a large, complex H$\alpha$~ zone (of size $\sim 0.6\deg \times 0.6\deg$) is centered near (l, b) = (359.9\textdegree, -0.4\textdegree). \subsection{Comparison by object type} \label{sec:enviro_comparison} \begin{figure*} \centering \includegraphics[width=1\linewidth]{fig14.pdf} \caption{ A comparison of the mid-IR emission of the C32 and C35 fields (green and grey lines, respectively) and other environments: the Orion Bar PDR (blue), the ISM near the Galactic center (solid black), the diffuse ISM (dotted black) and the superwind of M82 (red). The spectra are normalized to the surface brightness at 7.7 \textmu m.
Note that, for clarity, the peaks of the 12.8 \textmu m~ [Ne~\textsc{ii}], 17.0 \textmu m~ H$_2$ and 18.7 \textmu m~ [S~\textsc{iii}] lines have been truncated for the Orion Bar spectrum and Galactic center ISM spectrum. } \label{fig:pahcompare} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.85\linewidth]{fig15a.pdf} \includegraphics[width=0.85\linewidth]{fig15b.pdf} \caption{ A comparison of the M82 spectrum to our C32 and C35 spectra after subtracting a local spline continuum (c.f. Fig.~\ref{fig:pahcompare}). The spectra are normalized to the surface brightness at 7.7 \textmu m~ and the residuals are shown below each panel. See Section~\ref{sec:enviro_comparison} for details. } \label{fig:m82} \end{figure*} \begin{figure} \centering \includegraphics[width=1\linewidth]{fig16a.pdf} \includegraphics[width=1\linewidth]{fig16b.pdf} \includegraphics[width=1\linewidth]{fig16c.pdf} \caption{ PAH flux ratio correlation plots amongst the 6.2, 7.7 and 8.6 \textmu m~ bands for our Galactic bulge sample and three other environments: the Orion Bar (a PDR), diffuse ISM cirrus emission and a pointing toward the M82 superwind. The corresponding mid-IR spectra are presented in Fig.~\ref{fig:pahcompare}. The Pearson correlation coefficient is reported in parentheses for C32 and C35. } \label{fig:morecorrs} \end{figure} To better understand our observations toward the Galactic bulge, we compare our spectra to spectra representative of distinct astrophysical environments in Fig.~\ref{fig:pahcompare}. 
\paragraph{Mid-IR.} For the mid-IR comparison, we include i) a spectrum of the Orion Bar, a bright star-forming region \citep{peeters2002}, with data acquired with the Short Wavelength Spectrometer (SWS; \citealt{degraauw1996}) on board the Infrared Space Observatory (ISO; \citealt{kessler1996}); ii) a spectrum of high galactic latitude cirrus clouds to represent the diffuse ISM \citep{bernard1994,boulanger1996a}, with data acquired with ISOCAM \citep{boulanger1996b}; iii) a \textit{Spitzer}/IRS ISM pointing from near the Galactic center located at (l, b) = (0.1152\textdegree, 0.2345\textdegree) (\citealt{simpson2007}, their position 38); and iv) a spectrum of the superwind in the starburst galaxy M82 to represent a galactic wind driven by star formation activity (\citealt{beirao2015}, their region 1) positioned 200\arcsec~from the center of M82 along its minor axis\footnote{While the positions of our FOVs at the distance of M82 (3.3 Mpc) correspond to approximately 9-19\arcsec\, along M82's minor axis, the superwind of M82 is likely more extended. For example, \citet{grimes2005} found that the size of the galactic wind in X-ray emission correlates with FIR luminosity, while \citet{mccormick2013} report a roughly constant ratio of the minor axis scale height of the 8 $\mu$m IRAC emission to the FIR luminosity. These relationships indicate that M82's galactic wind is 4-30 times more extended than that of the Milky Way. The corresponding M82 minor axis positions then range from 35\arcsec\, to 570\arcsec.}). The spectra are normalized to the peak surface brightness at 7.7 \textmu m~ for the sake of comparison (Fig.~\ref{fig:pahcompare}). Qualitatively, the spectra of C32, C35 and the Galactic center ISM pointing have similar continuum shapes and PAH features. The diffuse ISM and M82 superwind spectra have a weaker underlying continuum but seemingly comparable PAH emission strengths.
The PDR spectrum rises steeply beyond 15 \textmu m, diverging from these spectra, yet exhibits similar PAH band strengths. Focusing solely on the PAH emission, the spectrum of the M82 galactic wind is a surprisingly close match to the spectrum of C35 (see Fig.~\ref{fig:m82}, upper panel). The relative PAH strengths and profiles are virtually identical. The continuum-subtracted spectrum of C32 is similar to M82 (Fig.~\ref{fig:m82}, lower panel), though the PAH ratios are slightly different -- and the 12.8 \textmu m~ [Ne~\textsc{ii}] line is clearly present in the former but not the latter. We therefore conclude that the wind-swept PAHs in these regions are exposed to comparable physical conditions, despite the differences in the underlying continua (Fig.~\ref{fig:pahcompare}). The PAH emission in the region 1 position of the M82 superwind is quite similar to other positions north of the plane (regions 5, 6, 7 and 9 in Fig.~4 of \citealt{beirao2015}). The authors show that the 6.2/7.7 flux ratio is very similar for all of these regions. Strong PAH emission is also detected in regions 3, 8, 10, 11 and 12 (the latter three of which are south of the plane), but these spectra display a flatter continuum. The authors suggest that the northern wind, of which region 1 is a member, contains PAHs that are larger and more ionized when compared to PAHs in the southern wind or the starburst disc. To quantify the PAH feature strengths we examine correlations between the 6.2, 7.7 and 8.6 \textmu m~ PAH bands in Fig.~\ref{fig:morecorrs}. Each is normalized to the 11.2 \textmu m~ feature. The diffuse ISM pointing exhibits a somewhat depressed 8.6/11.2 ratio when compared to our Galactic bulge observations, while the Orion Bar has a relatively low 7.7/11.2 ratio when compared with the sample. The M82 superwind position is a close match to our bulge spectra for all three PAH flux ratios.
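The band-ratio correlations of Fig.~\ref{fig:morecorrs} reduce to a simple per-pixel computation. A minimal sketch is given below; the per-pixel band fluxes are synthetic stand-ins (with a hypothetical ionization proxy driving both ratios), not the measured values:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_pix = 50  # hypothetical number of apertures/pixels in a field

# Synthetic per-pixel PAH band ratios: both 6.2/11.2 and 7.7/11.2 are driven
# by the same underlying quantity (here an illustrative ionization proxy),
# plus independent measurement noise.
ion = rng.uniform(0.5, 1.5, n_pix)
r62 = 1.2 * ion + rng.normal(0.0, 0.05, n_pix)  # 6.2/11.2 ratio
r77 = 3.0 * ion + rng.normal(0.0, 0.10, n_pix)  # 7.7/11.2 ratio

# Pearson correlation coefficient, as quoted in parentheses in Fig. 16
r, p = pearsonr(r62, r77)
```

Because a common physical driver underlies both synthetic ratios, the recovered Pearson coefficient is close to unity, mimicking the tight 6.2--7.7 correlations seen in the C32 and C35 fields.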
The C35 ratios tend to be closely clustered relative to the span in flux ratios of the C32 field, which may suggest that physical conditions (and correspondingly PAH band fluxes) are changing more rapidly across the C32 field than across C35. The OGLE bulge spectra have systematically low 6.2/11.2 and 7.7/11.2 ratios when compared with C32 and C35, which may be a result of the OGLE field being further from the Galactic center---these ratios generally trace PAH ionization, and thus lower 6.2/11.2 and 7.7/11.2 PAH ratios are consistent with the weak fine-structure lines observed in the OGLE field (Fig.~\ref{fig:specall4}). \paragraph{Far-IR.} We implement a modified blackbody fit to our SEDs, $I_{\nu} \propto \nu^{\beta} B_{\nu}(T)$, to characterize the properties of the thermal dust in our fields, where $B_{\nu}$ is the Planck function, $T$ the dust temperature and $\beta$ the spectral emissivity index (similar to the approach of \citealt{arab2012}). We fit only the photometric observations beyond 50 \textmu m, which characterize the emission of large grains in thermal equilibrium (i.e., we do not account for the emission of stochastically heated PAHs and very small grains). A fit is performed for each pixel in the C32 and C35 fields. Given the photometric discrepancy between \textit{AKARI} and \textit{Herschel}/PACS, we use only the latter for our blackbody fitting. \begin{figure} \centering \includegraphics[width=1\linewidth]{fig17.pdf} \caption{ Spectral energy distributions for C32, C35, the diffuse ISM (high galactic latitude cirrus) and the Orion Bar are fit with a modified blackbody. Here, ``front'' refers to a position in front of the Orion Bar (i.e., closer to the exciting star), and conversely, ``behind'' is further from the exciting star than the Bar itself. The fit parameters are included in the legend. } \label{fig:c1} \end{figure} The results of this process are presented in Fig.~\ref{fig:c1}.
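The fitting procedure can be sketched as follows. This is a minimal illustration using synthetic photometry at hypothetical far-IR wavelengths; the actual fits are performed per pixel on the \textit{Herschel}/PACS maps:

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-27, 1.381e-16, 2.998e10  # Planck, Boltzmann, c (cgs)

def modified_blackbody(nu, A, T, beta):
    """I_nu = A * (nu / 1e12 Hz)^beta * B_nu(T): optically thin emission with a
    power-law emissivity; nu is normalized to keep the fit well conditioned."""
    b_nu = (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))
    return A * (nu / 1e12) ** beta * b_nu

# Synthetic photometry beyond 50 micron (wavelengths in micron)
wav_um = np.array([70.0, 100.0, 160.0, 250.0])
nu = c / (wav_um * 1e-4)                       # frequencies in Hz
i_nu = modified_blackbody(nu, 1.0, 21.0, 1.8)  # fake "observed" fluxes

# Least-squares fit for (A, T, beta)
popt, _ = curve_fit(modified_blackbody, nu, i_nu, p0=(0.5, 25.0, 1.5))
A_fit, T_fit, beta_fit = popt
```

With noiseless synthetic data the fit recovers the input $T$ and $\beta$; on real photometry the well-known $T$--$\beta$ degeneracy makes the quoted uncertainties on both parameters correlated.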
We determine a mean dust temperature of 22.2$\pm$1.6 K in C32 and 20.9$\pm$1.3 K in C35, which are consistent with each other. The $\beta$ parameter is 1.6$\pm$0.2 in C32 and 1.9$\pm$0.2 in C35, consistent within their uncertainties. We also fit the SED of diffuse cirrus emission from \textit{COBE} satellite observations \citep{bernard1994,boulanger1996a}. The diffuse cirrus emission has a dust temperature comparable to C32 and C35 (19.2$\pm$0.6 K), but the fit returns a significantly higher $\beta$ value (2.7$\pm$0.1). We also fit the FIR emission for the Orion Bar, known for its edge-on view of the transition from \HII~region to molecular cloud \citep{tielens1993}. We examine SEDs for three positions in the Orion Bar previously identified by \citet{arab2012}, their Fig.~4: on the bar itself, 30\arcsec~ahead of the bar (closer to the illuminating source), and 38\arcsec~behind the bar (farther from the illuminating source). The emission in Orion is very bright, dwarfing the C32 and C35 emission by roughly three orders of magnitude at $\sim70$ \textmu m. The Orion FIR continuum monotonically decreases beyond 70 \textmu m, indicating that the SED peak falls shortward of our coverage and thus that hotter grains are present in Orion than in the other environments. Our blackbody fits show that grain temperatures in front of, on and behind the bar are approximately 68.2$\pm$16.6 K, 47.6$\pm$7.3 K and 36.3$\pm$4.0 K, respectively. \citet{arab2012} report for the same positions dust temperatures of $70.6\pm10.5$ K, $48.8\pm4.0$ K and $37.1\pm2.5$ K, respectively, in good agreement. \vspace{4mm} We conclude that the thermal dust we observe toward C32 and C35 is very similar to the dust observed towards high galactic latitude, diffuse cirrus clouds. Moreover, the PAH correlation plots indicate that the Orion Bar and diffuse ISM pointings have similar (though not identical) PAH feature strength ratios to our bulge spectra.
\subsection{PAH size comparison} \begin{figure} \centering \includegraphics[width=1\linewidth]{fig18a.pdf} \includegraphics[width=1\linewidth]{fig18b.pdf} \caption{ Estimating average PAH sizes within the field of C32. \textit{Top:} The ratio of the 15-20 \textmu m~ PAH features to the 6-9 \textmu m~ features is overlaid on a 2MASS J-band image. Higher values of this ratio imply larger average PAH sizes, as described and quantified by \citet{boersma2010}. \textit{Bottom:} The inferred average PAH sizes in C32. Note that mean sizes are generally higher where there is increased H$\alpha$~ emission, suggesting PAH processing (e.g., the destruction of smaller PAHs). } \label{fig:pahsizes} \end{figure} \citet{boersma2010} determined a relationship between mean PAH size and the ratio of the 15-20 \textmu m~ PAH band flux to the 6-9 \textmu m~ PAH band flux (their Fig.~19), based on the NASA Ames PAH IR Spectroscopic Database\footnote{\url{www.astrochemistry.org/pahdb/}} \citep[PAHdb;][]{bauschlicher2010,boersma2014_amesdb}. Note that the 15-20 \textmu m~ bands reflect C-C-C vibrational modes, while the 6-9 \textmu m~ bands are associated with C-C vibrations (see \citealt{boersma2010} for assumptions and methodology). \citet{tappe2012} examined \textit{Spitzer}/IRS spectra of the supernova remnant N132D in the Large Magellanic Cloud. PAH emission is clearly present in multiple positions in and around the blast wave in N132D, but the 15-20 \textmu m~ PAH bands are not present on the shock boundary itself, suggesting their carriers are destroyed. By using the approach of \citet{boersma2010}, \citet{tappe2012} examined the PAH size characteristics towards N132D. These authors find mean PAH sizes of N$_c = 4000-6000$, far larger than those observed in more typical environments. We perform the same analysis for C32 and C35.
In C32, the 15-20/6-9 ratio varies between approximately $0.20-0.35$ (Fig.~\ref{fig:pahsizes}), comparable to the mean values measured by \citet{boersma2010} and much less than the ratios of $\sim10$ for supernova remnant N132D \citep{tappe2012}. Note that the sample of \citet{boersma2010} contains a mixture of reflection nebulae, \HII~regions, planetary nebulae and an average galaxy spectrum prepared by \citet{smith2007} from the Spitzer Infrared Nearby Galaxies Survey (SINGS; \citealt{kennicutt2003}). Our ratios correspond to PAH sizes of approximately N$_c = 1010 - 1060$. The largest mean PAH sizes in C32 are generally coincident with the H$\alpha$~ channel toward the center of the field. Similar 15-20/6-9 ratios are observed in C35 (approximately between $0.21-0.25$, resulting in PAH sizes of roughly N$_c = 1010 - 1025$), but the ratio is largely homogeneous across the field, with perhaps a smaller mean size in position 3 where there is elevated H$\alpha$~ emission (in contrast to the trend seen in C32). We also compare our observations to those of supernova remnant N63A \citep{caulet2012}. From the observed positions, we selected the ``NE'' and ``SE'' lobes, based on the similarity of C32 and C35 to these regions in atomic line diagnostic plots (their Fig.~9). On the NE and SE lobes, the 15-20/6-9 ratio is approximately 0.116 and 0.058, respectively. These correspond (very roughly) to PAH sizes of N$_c \sim700$ and $\sim520$, respectively. PAHs are also present in the superwind of starburst galaxy M82 \citep{beirao2015}. The implied PAH sizes from the 15-20/6-9 ratio are generally in the N$_c\sim700-1000$ range, with perhaps two or three positions exceeding N$_c=1000$. In comparison with supernova remnants, we find that the mean PAH sizes in C32 and C35 are much smaller than those observed in supernova remnant N132D \citep{tappe2012} and significantly larger than PAHs in supernova remnant N63A \citep{caulet2012}.
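For orientation, the ratio-to-size conversion can be sketched as a monotonic interpolation through the (ratio, N$_c$) pairs quoted above. This is purely illustrative -- the actual relation is read from Fig.~19 of \citet{boersma2010} -- and the anchor values below are simply the numbers quoted in this section:

```python
import numpy as np

# (15-20)/(6-9) PAH band flux ratio -> mean PAH size N_C (carbon atoms).
# Illustrative piecewise-linear interpolation through the (ratio, N_C)
# pairs quoted in the text; the real mapping is Boersma et al. (2010), Fig. 19.
ratio_anchors = np.array([0.058, 0.116, 0.20, 0.35])
nc_anchors = np.array([520.0, 700.0, 1010.0, 1060.0])

def mean_pah_size(ratio):
    """Approximate mean PAH size for a given 15-20/6-9 band flux ratio."""
    return np.interp(ratio, ratio_anchors, nc_anchors)
```

For example, the C35 range of $0.21-0.25$ maps to N$_c \approx 1013-1027$ under this sketch, consistent with the $\sim1010-1025$ quoted above.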
The infrared spectra of supernova remnants can be very complex, and considering the variation in PAH sizes between remnants N132D and N63A, it is clear that PAH processing is highly dependent on the local conditions and energetics. In contrast, the derived PAH sizes for C32 and C35 are similar to those determined for all of the non-supernova-remnant objects and are thus quite typical. On a smaller spatial scale, we note that within the C32 field, larger typical PAH sizes are observed in the central region/H$\alpha$~ channel, which suggests that smaller PAHs are being preferentially destroyed in this region. \subsection{The 17 \textmu m~ PAH plateau} The emission plateau centered near 17 \textmu m~ was originally identified by \cite{vankerckhoven2000} using \textit{ISO} observations of \HII~regions, YSOs and evolved stars. In these data, the plateau emission generally spans 15-20 \textmu m. These authors report that, overall, the shape of the plateau is very similar between sources, with few sources exhibiting discrete emission features on top of the broad plateau, most noticeably at 16.4 \textmu m. Thanks to \textit{Spitzer}'s superb sensitivity, the spectral details of the 15-20 \textmu m~ PAH emission features have been further revealed (e.g., \citealt{werner2004,peeters2004b,sellgren2007,boersma2010,peeters2012,shannon2015}). Most commonly observed are a set of discrete emission features at 15.8, 16.4, 17.4 and 17.8 \textmu m~ located on top of a broader emission band centered at 17 \textmu m. The latter plateau specifically appears to span 15-18 \textmu m~ and is thus flanked by the 15.8 and 17.8 \textmu m~ bands. This is very distinct from the broad, nearly flat-topped plateau from 15-20 \textmu m~ observed in \HII~regions \citep{vankerckhoven2000,peeters2006}. This dichotomous behavior is somewhat mysterious. Here we examine the nature of the plateau within our bulge observations.
Prior to any continuum subtraction, the spectra in our sample exhibit strong 15-18 \textmu m~ bands and plateau emission in the C32 and C35 fields (Fig.~\ref{fig:specall4}). In NGC 6522, the plateau emission appears to extend to approximately 20 \textmu m, with only the slightest hint of 15-18 \textmu m~ emission bands. The OGLE field shows a mixture of both the broad plateau and weak 15-18 \textmu m~ emission bands. \begin{figure} \centering \includegraphics[width=1\linewidth]{fig19.pdf} \caption{The median emission in the 15-20 \textmu m~ range for each field after subtracting a continuum to isolate the plateaus (see Fig.~\ref{fig:spectrum1}). Emission near 17 \textmu m~ dominates the 15-20 \textmu m~ region of C32 and C35, but it is of comparable surface brightness to the 18-20 \textmu m~ emission in the OGLE and NGC 6522 fields.} \label{fig:plateaus} \end{figure} We examine the continuum-subtracted PAH emission of our observations in Fig.~\ref{fig:plateaus} (see Fig.~\ref{fig:spectrum1} for a sample continuum). Residual emission is present in NGC 6522 between approximately 16 and 20.5 \textmu m, despite the lack of a prominent 17 \textmu m~ bump. This resembles the broad 15-20 \textmu m~ plateau emission seen towards \HII~regions \citep{vankerckhoven2000,peeters2004b,peeters2006}. The emission in the C32 and C35 fields conversely is dominated by the discrete PAH emission features and the 15-18 \textmu m~ plateau centered on 17 \textmu m. The PAH emission in the OGLE field is somewhat intermediate, with a weak but discernible 17 \textmu m~ bump. Note however that despite these differences in the 15-18 \textmu m~ PAH emission, all four fields have comparable (residual) emission between 18 and 20.5 \textmu m~ after continuum subtraction (excluding the 18.7 \textmu m~ [S~\textsc{iii}] line). 
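The plateau isolation rests on a smooth continuum drawn under the 15-20 \textmu m~ emission. A minimal sketch of such a subtraction is given below, using a synthetic spectrum (smooth continuum plus a Gaussian stand-in for the 17 \textmu m~ bump) and hypothetical anchor wavelengths in presumed feature-free regions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

wav = np.linspace(14.0, 21.0, 200)  # wavelength grid in micron

# Synthetic spectrum: a smooth, rising dust continuum plus a Gaussian
# stand-in for the 17 micron plateau/bump.
continuum = 0.5 + 0.04 * (wav - 14.0)
plateau = 0.3 * np.exp(-0.5 * ((wav - 17.0) / 1.0) ** 2)
flux = continuum + plateau

# Spline continuum through anchor points chosen in feature-free regions;
# subtracting it isolates the plateau emission.
anchors = np.array([14.0, 14.5, 20.5, 21.0])
spline = CubicSpline(anchors, np.interp(anchors, wav, flux))
residual = flux - spline(wav)
```

As the comparison of the plateau shapes in Fig.~\ref{fig:plateaus} illustrates, the choice of anchor points (here arbitrary) is exactly where the systematic uncertainty discussed below enters: anchors placed within genuine plateau emission will absorb part of the plateau into the continuum.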
It is possible that C$_{60}$ emission near 18.9 \textmu m~ could be present in these sources \citep{cami2010,sellgren2010}, but it likely does not affect the residual emission we observe for two reasons: (1) the 17.4 \textmu m~ band is very weak, suggesting a weak 18.9 \textmu m~ C$_{60}$ band, if any, and (2) the full width at half maximum of the 18.9 \textmu m~ C$_{60}$ feature is not nearly as wide as the residual/plateau emission. In summary, this suggests that in addition to the visible 15-18 \textmu m~ plateau emission (as clearly seen in the C32 and C35 fields) a second broad emission component may be present. It is unclear if this second component spans the entire 16-20.5 \textmu m~ range, and thus lies underneath the 15-18 \textmu m~ plateau emission, or whether it is only adjacent, spanning 18-20.5 \textmu m. The origin of the dual nature of the plateau emission (i.e., extending to 18 \textmu m~ vs. 20 \textmu m) is unknown. One possibility is that it represents a systematic effect in these analyses: the plateau shape and extent are influenced by the way in which the underlying continuum is drawn. However, in this work the 15-18 \textmu m~ plateau (as seen in C32 and C35) can be observed prior to continuum subtraction, as can the 15-20 \textmu m~ plateau (as seen in NGC 6522), which minimizes this bias to some degree. Another possible systematic effect is that the \textit{Spitzer}/IRS modules LL1 and LL2 overlap in the 20-21 \textmu m~ range, so a calibration offset may affect how the continuum is defined and thus the plateau shape. However, the \textit{ISO} spectra had no such stitching point near this spectral range, so it is perhaps an unlikely source of the discrepancy.
Density functional theory calculations of C-C-C PAH bending modes (which are the vibrational modes attributed to this emission region) predict that the 15-20 \textmu m~ region is dominated in emission intensity by modes in the 15-18 \textmu m~ range (where the 15.8, 16.4, 17.4 and 17.8 \textmu m~ PAH features reside; \citealt{ricca2010}). A small but perhaps important fraction of modes are predicted to emit between 18-20 \textmu m. If we assume that the residual emission we observe in the 18-20 \textmu m~ zone is not a spurious result, then we must conclude that the carriers of this emission region are relatively insensitive to radiation field strength, as similar emission is observed in all four fields---despite varying PAH feature strengths and fine-structure line intensities between these fields. Thus, a possible carrier could be large, compact PAHs (cf. the grandPAH hypothesis; \citealt{andrews2015}). \subsection{PAH abundances} We calculate the total far-infrared flux based on the modified blackbody fits to the SEDs. Combining this with the total PAH flux, we can use the formalism of \citet{tielensbook} to deduce the fraction of carbon locked in PAHs. We determine that the percentage of carbon locked in PAHs along our sight-lines toward C32 is approximately $2.9\pm0.4$\%, while toward C35 it is slightly lower at $2.3\pm0.3$\%. These values are slightly depressed relative to values typical for the ISM ($\sim$3.5\%, \citealt{tielensbook}). In C32, this fraction is minimal at the ``knee'' of the H$\alpha$~ channel at position C32-10 ($2.0\pm0.4$\%). The maximum for the C32 field occurs towards the far right of the field at position C32-02 ($3.7\pm0.4$\%). No systematic variation across the C35 field is detected. \subsection{Implications} To first order, the dust and PAH emission properties are the same toward C32 and C35, despite their being diametrically opposed across the Galactic center.
This can be explained by the fact that C32 and C35 both lie on boundaries of emission lobes (northern and southern, relative to the Galactic plane, respectively), which are thought to originate from a starburst roughly 7 Myr ago \citep{lutz1999,bland2003,law2010}. The dust grain temperatures ($\sim$20 K) in these environments are indistinguishable from those of diffuse cirrus grains. Similarly, the PAH band ratios measured toward cirrus clouds are similar, but not identical, to those observed in our bulge environments. Shock/turbulent heating is expected in these environments, consistent with the presence of the shock-tracing 25.89 \textmu m~ [O~\textsc{iv}] line. When examined in detail, spatial variations within C32 and C35 are present. In C32 in particular, there is a region of strong H$\alpha$~ emission that is correlated with elevated fine-structure and [O~\textsc{iv}] line intensities and anticorrelated with PAH emission strength. This part of the C32 field may represent a transition zone where the outflow enters the surrounding medium. The small variation seen in PAH characteristics in the C32 and C35 fields, despite significant variations in the fine-structure line emission, is perhaps akin to the similarity found by \citet{andrews2015} in the PDR peaks of three reflection nebulae. This may then be further evidence of the presence of a group of stable PAH molecules dominating the emission bands (i.e., grandPAHs; \citealt{andrews2015}). Moreover, the 15-20 \textmu m~ emission plateau exhibits underlying residual emission between 18-20 \textmu m~ that does not vary among the four fields we have examined, which may be additional evidence for a PAH population relatively insensitive to variations in radiation field strength. \section{Conclusions} \label{sec:conclusion} We have analyzed \textit{Spitzer}/IRS spectra of diffuse emission toward the Galactic bulge.
Combined with mid- and far-infrared photometry, we have investigated the spectral characteristics of our observations in the context of the local bulge environment. Our primary conclusions are as follows: \begin{enumerate} \item There is an evolution in spectral appearance with increasing distance from the Galactic center. The spectra of C32 and C35 (located at (l, b) = (0.0\textdegree, 1.0\textdegree) and (0.0\textdegree, -1.0\textdegree), respectively) are exceedingly similar, including strong PAH bands, fine-structure lines, plateaus, molecular emission and continuum emission. All of these features weaken at the position of the OGLE field, located at (l, b) = (0.4\textdegree, -2.1\textdegree). Its continuum emission is almost entirely dominated by zodiacal dust, though a weak rising continuum beyond 30 \textmu m~ is still present. At the most distant location, that of NGC 6522 (1.0\textdegree, -3.8\textdegree), the continuum emission is almost entirely due to zodiacal dust, and the PAH bands, plateaus and atomic lines are generally barely detectable. \item The similarity of the C32 and C35 spectra may be explained by their locations: they both lie on boundaries of the northern and southern outflow lobes, the former of which is known as the Galactic center lobe. \item The PAH features in C35 are an almost exact match, in relative strength and profile shape, to the PAH features in the M82 superwind after removing their continua (at approximately 200\arcsec~from the center of M82, along its minor axis; region 1 of \citealt{beirao2015}). The C32 PAH features are similar to those of the M82 superwind, but to a lesser extent. We thus have a local analog of the galactic winds commonly observed in star-forming galaxies such as M82.
\item Within our fields, the strengths of the PAH and fine-structure features are related to a region of elevated H$\alpha$~ emission, which generally traces the lobe boundaries: generally, where the H$\alpha$~ emission is bright, the fine-structure lines are bright and the PAH bands are weak. \item In contrast to the PAHs, the 25.89 \textmu m~ [O~\textsc{iv}] line peaks in/near the H$\alpha$~ channel in C32. This line is thought to be a tracer of shocked gas, confirming the presence of an outflow impacting the ambient medium---i.e., the Galactic center lobe. The [O~\textsc{iv}] line is also detected in the south (in C35), which is located within the more complex southern lobe environment. \item SED fitting indicates that the temperatures of thermal dust grains in C32 and C35 are $\sim$20 K, consistent with the temperature found for the diffuse ISM cirrus spectrum. These grains are expected to be heated not only radiatively but also by shock and/or turbulent heating. \item We infer that $2.9\pm0.4$\% and $2.3\pm0.3$\% of the total carbon along the sight-lines to C32 and C35, respectively, is locked in PAHs. This is somewhat less than the typical ISM expectation of 3.5\%. \item Unless it reflects a systematic error, underlying 15-20 \textmu m~ emission extends to 20 \textmu m~ in all four of our fields; the strength of the 17 \textmu m~ bump relative to this underlying emission then determines whether the plateau appears to span only 15-18 \textmu m~ or the full 15-20 \textmu m. \item While distinct from the PAH sizes obtained towards supernova remnants, mean PAH sizes in C32 and C35 are comparable to those seen towards reflection nebulae, planetary nebulae and \HII~regions, and are thus typical. The average PAH sizes towards different positions in the superwind of M82 are somewhat smaller than those in C32 and C35. \end{enumerate} The extreme similarity between the spectra towards C32 and C35, which are diametrically opposed across the Galactic center, is in some sense quite peculiar.
Although they are on outflow boundaries, it is not immediately obvious why they are so alike. Not only are their dust continua comparable, but their plateaus, PAH bands, H$_2$ lines and 25.89 \textmu m~ [O~\textsc{iv}] emission strengths are also very similar. Only the atomic fine-structure lines appear to differ significantly between them. The PAH similarities may point to a small number of stable PAHs, grandPAHs, dominating the PAH emission bands in these environments. The natural questions to ask next are: are these sight-lines typical of the bulge environment? Is there a systematic dependence on Galactocentric distance? Will we find similar PAH, dust and line emission at other regions of the outflow boundaries? And finally, what is the relationship between emission towards the bulge and emission towards the general diffuse ISM? Further detailed study of the exact variations within C32 may also be pertinent. We have focused on the position-to-position variations within the field, but careful analysis of the pixel-to-pixel variations may help us understand the link between H$\alpha$~ excitation, the possible release of Si and Fe from dust, and PAH excitation/destruction. The use of recent, higher-resolution multi-wavelength imaging (e.g., H$\alpha$) and comparisons to other extragalactic environments can help us answer these questions and probe starburst events and their influence on the PAH population(s). \section*{Acknowledgments} We thank Kris Sellgren for many helpful suggestions that improved the quality of this manuscript. We also thank William T. Reach for helping us estimate the zodiacal emission in our spectra. For kindly providing their data, we thank Pedro Beir{\~a}o, Adeline Caulet and Janet Simpson. The authors acknowledge support from NSERC discovery grants. MJS acknowledges support from a QEII-GSST scholarship.
The IRS was a collaborative venture between Cornell University and Ball Aerospace Corporation funded by NASA through the Jet Propulsion Laboratory and Ames Research Center \citep{houck2004}. The Southern H$\alpha$~ Sky Survey Atlas (SHASSA) \citep{gaustad2001} was produced with support from the National Science Foundation. This research is based on observations with \textit{AKARI}, a JAXA project with the participation of ESA. This research has made use of NASA's Astrophysics Data System Bibliographic Services, and the SIMBAD database, operated at CDS, Strasbourg, France. This work has also made use of the Matplotlib Python plotting library \citep{hunter2007} and the Seaborn Python visualization library\footnote{\url{http://dx.doi.org/10.5281/zenodo.19108}}. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
\section{Introduction} Though machine learning has advanced greatly in recent years in areas such as computer vision and natural language processing, limited interpretability hinders its impact in areas that require clear evidence for decision making, such as health care and economics. In these domains, the most widely used machine learning models are linear regression and decision trees, which people can easily understand. To deploy cutting-edge machine learning in such domains, transparent mechanisms are needed to explain sophisticated models to users. Limited interpretability also hampers the improvement of machine learning models. The black-box behavior makes the models difficult to diagnose; without a good understanding of how a model works, much effort is wasted tuning model parameters. Thus, machine learning researchers have been trying to better interpret machine learning models. Recent progress includes designing specific neural network structures that impose linear constraints on the weights of input variables in the decision function \cite{choi2016retain}, using model-structure-based heuristics to decompose prediction scores \cite{yang2016predicting}, approximating any model linearly in a local area \cite{ribeiro2016should}, and tracing predictions back to the training data using influence functions \cite{koh2017understanding}. However, all of these works provide only local interpretation. That is, the interpretation is generated for a particular data sample. This is not desirable when we want a general picture of the model. Modern machine learning models are usually trained and tested on millions of data samples, and it is not practical for a researcher to review the interpretation of each of them for model diagnostics. Moreover, when machine learning models are used to inform population-level decisions, such as an economic policy change, a global effect estimate is more helpful than thousands of local explanations.
This paper addresses the resulting gap between local and global machine learning model interpretation. By ``interpreting a machine learning model globally'', we mean representing a trained machine learning model in an aggregated and human-understandable way. This is done by extracting the most important rules that the model learned from training data and would apply to testing data. These rules affect a substantial portion of the data from the model's perspective and are therefore useful for informing decisions that affect all data samples. The simplest example of such rules are the coefficients of a linear regression model, which give the change in the output per unit change in each input variable. By the linearity assumption, the coefficients are identical for every data sample, which is why they are widely used for effect estimation in observational studies and randomized experiments. Another globally interpretable machine learning model is the decision tree, which presents its decision rules in a straightforward tree structure. However, both linear regression and decision trees lack high predictive power, so people have to trade off predictability against interpretability when both properties are desired. In this work, we propose a new method, Global Interpretation via Recursive Partitioning (GIRP), to build a global interpretation tree for a wide range of machine learning models based on their local explanations. That is, we recursively partition the input variable space by maximizing the difference, between the divided subspaces, in the contribution of input variables averaged from local explanations. We end up with a binary tree, which we call the interpretation tree, describing a set of decision rules that approximates the original machine learning model. Figure \ref{flow} describes the work flow of building a global model interpretation.
Given a trained machine learning model and the data used to explain it, we first generate a contribution matrix from local explanations, either using model-specific heuristics or a local model interpreter \cite{ribeiro2016should}. The contribution matrix is then passed to our Global Interpretation via Recursive Partitioning (GIRP) algorithm, which returns an interpretation tree that describes the machine learning model in general terms and is fully comprehensible by human beings. The key contributions of our paper are as follows: \begin{itemize} \item We propose an efficient and effective method that addresses the gap in the literature on globally interpreting machine learning models. The interpretation takes the form of an easily understandable binary tree that can be used to diagnose the interpreted model or inform population-level decisions. \item The CART-like algorithm \cite{cart84} that we use to build the interpretation tree can model interactions between input variables. Thus, we are able to find heterogeneity in variable importance among different subgroups of the data. \item In our experiments, we show that our method can reveal whether a particular machine learning model is behaving reasonably or has overfit to some spurious pattern. \end{itemize} The rest of the paper is organized as follows. Section 2 reviews work closely connected to ours. Section 3 presents the Global Interpretation via Recursive Partitioning (GIRP) algorithm. Section 4 applies GIRP to predictive models in computer vision, natural language processing, and health care (structured tabular data). Section 5 concludes the paper.
\begin{figure}[!t] \centering{} \includegraphics[width=80mm]{flow2.png} \caption{Work flow of global model interpretation} \label{flow} \end{figure} \section{Related Work} Four lines of existing work are closely related to our method: local model interpretation, global model interpretation, recursive partitioning for effect estimation, and feature selection. There are several ways to achieve local model interpretation. First, the model can be structured so that the output is linear in the input variables, and the weights then serve as a measure of importance. For example, \cite{choi2016retain} uses the neural attention mechanism \cite{bahdanau2014neural} to generate interpretable attention weights in recurrent neural networks; however, due to the stochastic training process, these attention weights have been shown to be unstable \cite{yang2017machine}. Second, model-specific heuristics can decompose the predicted scores over the input variables; \cite{yang2016predicting} describes such methods for regularized regression and gradient boosted machines. Third, locally approximating a sophisticated model with a simple interpretable one can explain individual predictions; gradient vectors and sparse linear models have been tried as local explainers \cite{kononenko2010efficient,baehrens2010explain,ribeiro2016should,yang2018visual}. Finally, influence functions from robust statistics can trace a particular prediction back to the training data responsible for it \cite{koh2017understanding}. In summary, all local model interpretation methods work at the level of a single data sample, generating the contributions of input variables to the final predicted score for that sample.
Others have tried to directly build globally interpretable models, including additive models for predicting pneumonia risk \cite{caruana2015intelligible} and rule sets generated from a sparse Bayesian generative model \cite{letham2015interpretable}. However, these models are usually specifically structured to preserve interpretability and are therefore limited in predictive power. \cite{craven1996extracting} uses queries to build a tree that approximates a neural network, and \cite{atzmueller2011mining} discusses presenting machine learning models at different levels of abstraction. Recursive partitioning, with its resulting tree structure, is an intuitive way to present rule sets and to model interactions between input variables. It has long been used to analyze heterogeneity in subgroup analysis of survey data \cite{morgan1963problems}, and has recently been applied to study heterogeneous causal or treatment effects \cite{su2009subgroup,athey2016recursive}. It is a good fit for our global model interpretation task because we want to extract the rules a machine learning model has found, and these rules are shaped by interactions between input variables. Feature selection methods select a subset of important features from the input variables when the machine learning model is trained. Model interpretability benefits from this process because it reduces the dimension of the input, making the model compact and easier to present \cite{tibshirani1996regression}; this is especially useful when the input is very high dimensional \cite{mungloo2017meta}. Feature selection can be conducted either before model fitting \cite{hall1999correlation} or embedded into it \cite{tibshirani1996regression,xu2014gradient}. Although feature selection and global model interpretation both aim to extract the most important variables or combinations of variables, they differ in that global model interpretation is a post-fitting process.
We represent the trained model in a compact and comprehensible way with good fidelity to the original model. The goal is not to make predictions with this representation but to understand how the original model predicts. In contrast, feature selection discards unimportant variables, and predictions are then based solely on the selected ones. \section{Global Interpretation via Recursive Partitioning} We follow the CART \cite{cart84} work flow to build the interpretation tree: growing a large initial tree, pruning it, and using a validation set to select the best tree size. Before describing the tree-building process in detail, we first introduce the contribution matrix, which our method takes as input. \subsection{Contribution Matrix} As mentioned above, local model interpretation methods \cite{kononenko2010efficient,baehrens2010explain,ribeiro2016should,choi2016retain,yang2016predicting} can generate the contribution of each input variable to the final predicted score for a specific data sample. In detail, for a machine learning model that takes $N$ input variables, given a new data sample, such a method generates a quantity $c^{i}$ for the $i$-th variable $v^{i}$ measuring the importance of this variable in the prediction made. We call this quantity the contribution of variable $v^{i}$. If there are $M$ data samples in total, we can assemble a contribution matrix as shown in Table \ref{comp}, where $c_{j}^{i}$ is the contribution of variable $v^{i}$ to the predicted score $p_{j}$ of sample $s_{j}$. Each row of the contribution matrix thus represents how the model weighs the variables in the corresponding prediction. \begin{table*}[t] \begin{center} \begin{tabular}{p{2.5cm}|p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}|p{2cm}} \hline Contribution & Var 1 & Var 2 & Var 3 & ... & Var $i$ & ... & Var $N$ & Predicted Score\\ \hline Sample 1 & $c_{1}^{1}$ & $c_{1}^{2}$ & $c_{1}^{3}$ & ... & $c_{1}^{i}$ &...
& $c_{1}^{N}$ & $p_{1}$\\ Sample 2 & $c_{2}^{1}$ & $c_{2}^{2}$ & $c_{2}^{3}$ & ... & $c_{2}^{i}$ &... & $c_{2}^{N}$ & $p_{2}$\\ Sample 3 & $c_{3}^{1}$ & $c_{3}^{2}$ & $c_{3}^{3}$ & ... & $c_{3}^{i}$ &... & $c_{3}^{N}$ & $p_{3}$\\ ... & ... & ... & ... & ... & ... & ... & ... & ... \\ Sample $j$ & $c_{j}^{1}$ & $c_{j}^{2}$ & $c_{j}^{3}$ & ... & $c_{j}^{i}$ &... & $c_{j}^{N}$ & $p_{j}$\\ ... & ... & ... & ... & ... & ... & ... & ... & ... \\ Sample $M$ & $c_{M}^{1}$ & $c_{M}^{2}$ & $c_{M}^{3}$ & ... & $c_{M}^{i}$ &... & $c_{M}^{N}$ & $p_{M}$\\ \hline \end{tabular} \end{center} \caption{ Contribution matrix generated from local model interpretations of every single data sample. $c_{j}^{i}$ is the contribution of variable $v^{i}$ to the predicted score $p_{j}$ of sample $s_{j}$.} \label{comp} \end{table*} It is straightforward to obtain the contribution matrix when features are explicit and individual contributions are generated along with predictions, as in linear regression and \cite{choi2016retain,yang2016predicting}. In other cases, however, we need a workaround to identify the variables to which contributions can be attributed. When analyzing convolutional neural networks, \cite{ribeiro2016should} first segments the images to obtain units that can carry contributions. In our experiment diagnosing a scene classification deep learning model, we likewise apply a semantic segmentation algorithm to break the scene images into semantically meaningful segments. These workarounds are problem specific and affect the formation of the contribution matrix. \subsection{Growing a Large Initial Tree} We can now take the first step in building the interpretation tree: growing a large initial tree. We adopt the same greedy process as CART. For any input variable $i$, we can apply a split based on the values of variable $i$ to divide all data samples into two subgroups. Note that the split is based on the input variable value, not on the contribution $c^{i}$.
We use $v^{i}$ to denote the input values, to distinguish them from the contributions $c^{i}$. The type of split depends on the type of variable $v^{i}$. If it is binary, the split criterion can be ``$v^{i} = 1$?''. If $v^{i}$ is ordinal, we can apply the criterion ``$v^{i} < d$?'' where $d$ is some constant. If $v^{i}$ is categorical, letting $D$ denote a subset of its possible values, we can apply ``$v^{i} \in D$?'' as the split criterion. For convenience, assume that all data samples meeting the split criterion go to the right subset $S_{R}$ and the others go to the left subset $S_{L}$. For these two subsets, consider the quantity \begin{equation} G(split_{i}) = \left(\frac{\sum_{S_{L}}c_{j}^{i}}{|S_{L}|} - \frac{\sum_{S_{R}}c_{j}^{i}}{|S_{R}|}\right) \label{eq1} \end{equation} \noindent where $split_{i}$ denotes a split over variable $v^{i}$. The first term is the average contribution of variable $v^{i}$ in the left subset $S_{L}$; the second is the same average for the right subset $S_{R}$. The difference between the two terms measures how differently variable $v^{i}$ contributes to the predicted score in $S_{L}$ and $S_{R}$: the larger this difference, the more discriminative the model considers variable $v^{i}$ to be. We therefore use $|G(split_{i})|$ as a measure of split strength in terms of variable importance; maximizing it identifies the most important variable from the model's perspective. We search all possible splits over all variables to find the best initial split. After dividing the data into $S_{R}$ and $S_{L}$, we follow CART's greedy approach and recursively partition $S_{R}$, $S_{L}$, and their child nodes until we reach a pre-set threshold on maximum tree depth or on the minimum number of samples in a node. The result of this step is a large initial tree that explicitly represents the most discriminative rules implicitly contained in the model.
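As a minimal sketch of the split search of Equation (\ref{eq1}) — not the authors' implementation; the NumPy-based representation, function names, and the restriction to ordinal thresholds are illustrative assumptions — the greedy step can be written as:

```python
import numpy as np

def split_gain(contrib, values, i, threshold):
    """G(split_i): difference in mean contribution of variable i
    between the left subset (v_i < threshold) and the right subset."""
    left = values[:, i] < threshold
    right = ~left
    if left.sum() == 0 or right.sum() == 0:
        return 0.0  # degenerate split: no strength
    return contrib[left, i].mean() - contrib[right, i].mean()

def best_split(contrib, values):
    """Search all variables and candidate thresholds for the split
    maximizing |G(split_i)|, as in the greedy tree-growing step.
    Returns (variable index, threshold, |gain|)."""
    best = (None, None, 0.0)
    for i in range(values.shape[1]):
        for t in np.unique(values[:, i])[1:]:  # candidate cut points
            g = abs(split_gain(contrib, values, i, t))
            if g > best[2]:
                best = (i, t, g)
    return best
```

The same search would be re-run inside each child node, on the rows routed there, to grow the full tree.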
We denote this large initial tree by $T_{0}$. \subsection{Pruning} Because the initial tree is grown greedily, the rules contained in $T_{0}$ are overly optimistic and may not generalize well. We therefore prune $T_{0}$ to improve generalizability. Consider the internal (non-leaf) nodes of $T_{0}$. Each such node contains a split $t$ with split strength $G(t)$ defined by Equation (\ref{eq1}). For any interpretation tree $T$ with internal nodes $t$, we define \begin{equation} G(T) = \sum_{t \in T}|G(t)| \end{equation} \noindent as the total split strength of $T$, which we generally want to maximize. To control the complexity of $T$, we add a term penalizing the number of nodes in the tree: \begin{equation} G_{\lambda}(T) = \sum_{t \in T}|G(t)| - \lambda|T| \end{equation} \noindent where $|T|$ is the number of internal nodes in $T$. To maximize $G_{\lambda}(T)$, internal nodes whose $|G(t)|$ is less than $\lambda$ should be removed; the larger $\lambda$, the more nodes are removed and the simpler the resulting tree, and vice versa. But how do we decide which internal nodes to remove? We first define a new quantity for each internal node $t$. Let $T_{t}$ denote the subtree of $T_{0}$ rooted at $t$. Then \begin{equation} g(T_{t}) = \frac{|G(T_{t})|}{|T_{t}|} \label{eq4} \end{equation} \noindent is the average split strength of the internal nodes in the subtree $T_{t}$. With $g(T_{t})$ defined, we iteratively remove the subtree with the smallest $g(T_{t})$ from the initial tree $T_{0}$. This yields a series of nested trees $\{T_{K}, T_{K-1}, ..., T_{k}, T_{k-1}, ..., T_{0}\}$, where $T_{K}$ is the null tree containing only one node.
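The weakest-link pruning step can be sketched as follows, under the assumption of a simple dict-based binary tree in which each internal node stores its split strength under the key "gain" (a hypothetical representation for illustration, not the paper's code):

```python
def tree_strength(node):
    """Return (G(T_t), |T_t|): total split strength and internal-node
    count of the subtree rooted at `node`. Leaves carry no split."""
    if node.get("left") is None:  # leaf
        return 0.0, 0
    gl, nl = tree_strength(node["left"])
    gr, nr = tree_strength(node["right"])
    return abs(node["gain"]) + gl + gr, 1 + nl + nr

def weakest_link(node):
    """Internal node whose subtree has the smallest average split
    strength g(T_t) = G(T_t) / |T_t| -- the next pruning candidate."""
    if node.get("left") is None:
        return None, float("inf")
    g, n = tree_strength(node)
    best_node, best_g = node, g / n
    for child in (node["left"], node["right"]):
        cand, cg = weakest_link(child)
        if cg < best_g:
            best_node, best_g = cand, cg
    return best_node, best_g

def prune_sequence(root):
    """Iteratively collapse the weakest link, yielding the total
    strength G(T) of each nested tree T_0, T_1, ..., T_K
    (T_K is the single-node null tree with strength 0)."""
    seq = [tree_strength(root)[0]]
    while root.get("left") is not None:
        node, _ = weakest_link(root)
        node["left"] = node["right"] = None  # collapse to a leaf
        seq.append(tree_strength(root)[0])
    return seq
```

Each collapse removes the whole subtree with the lowest average strength, so the surviving trees are nested, matching the sequence used in the next step.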
\cite{cart84} proves that these nested trees created by the iterative pruning process correspond to a series of $\lambda$ values, with $\lambda_{K} > \lambda_{K-1} > ... > \lambda_{k} > ... > \lambda_{0} = 0$. \subsection{Selecting the Best-Sized Tree} How can we decide which $T_{k}$ is the best-sized final interpretation tree, i.e., which value of $\lambda_{k}$ is best? We use a held-out validation set for this decision. We feed the validation data into each of $\{T_{K}, T_{K-1}, ..., T_{k}, T_{k-1}, ...,T_{0}\}$ and calculate, for each internal node $t$, \begin{equation} G_{validation}(t) = sgn(G(t))\left(\frac{\sum_{S_{L}}c_{j}^{i}}{|S_{L}|} - \frac{\sum_{S_{R}}c_{j}^{i}}{|S_{R}|}\right) \label{eq5} \end{equation} \noindent where $sgn()$ is the sign function and the averages are now taken over the validation samples falling into $S_{L}$ and $S_{R}$. We then select as the best-sized tree the $T_{k}$ with the largest $G_{validation}(T_{k})$: \begin{equation} G_{validation}(T_{k}) = \sum_{t \in T_{k}}G_{validation}(t) \label{eq6} \end{equation} \subsection{Choice of Hyperparameters} The only two hyperparameters in our approach are the maximum depth of the interpretation tree and the minimum number of data samples in a leaf or internal node. Both are chosen empirically depending on the problem setting. \begin{table}[!t] \begin{center} \begin{tabular}{lp{6.2cm}} \hline \textbf{Algorithm:} & Global Interpretation via Recursive Partitioning (GIRP)\\ \hline \textbf{Step 1: }& Randomly split out a held-out validation dataset.
The remaining data are fed to the trained machine learning model to obtain the contribution matrix;\\ \textbf{Step 2: }& Use Equation (\ref{eq1}) to split the initial node;\\ \textbf{Step 3: }& Recursively partition the left and right child nodes $S_{L}$ and $S_{R}$ to grow the full tree $T_{0}$, until reaching the maximum tree depth or the minimum number of data samples in a node;\\ \textbf{Step 4: }& Use Equation (\ref{eq4}) to calculate the average split strength of each internal node of $T_{0}$. Iteratively remove internal nodes, starting from the one with the smallest split strength, to obtain a series of nested trees $\{T_{K}, T_{K-1}, ..., T_{k}, T_{k-1}, ...,T_{0}\}$;\\ \textbf{Step 5: }& Use the held-out validation set and Equations (\ref{eq5}) and (\ref{eq6}) to calculate $G_{validation}(T_{k})$ for each of $\{T_{K}, T_{K-1}, ..., T_{k}, T_{k-1}, ...,T_{0}\}$. The tree with the largest $G_{validation}(T_{k})$ is selected as the best-sized interpretation tree;\\ \hline \end{tabular} \end{center} \caption{ The complete algorithm for generating the interpretation tree.} \label{alg} \end{table} The full algorithm is summarized in Table \ref{alg}. We now demonstrate it on multiple datasets using various machine learning models. \section{Experiments} In this section, we interpret different types of machine learning model on different tasks by representing them as interpretation trees. First, we apply the proposed Global Interpretation via Recursive Partitioning (GIRP) algorithm to a scene understanding deep learning model in computer vision. Second, we analyze a text classification task to see which words matter to a random forest classifier. Finally, intensive care unit (ICU) mortality prediction with a recurrent neural network on medical records demonstrates our approach on tabular data. The three cases differ in how the contribution matrix is obtained, which we explain in detail for each.
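The validation-based selection in Step 5 (Equations (\ref{eq5}) and (\ref{eq6})) can be sketched as below. This is an illustrative fragment with assumed names, presuming the per-node validation contributions have already been collected by routing validation samples through each split:

```python
import math

def validation_gain(train_sign, val_left, val_right):
    """G_validation(t): the sign of G(t) measured on training data,
    times the left-vs-right mean-contribution gap recomputed on the
    validation samples routed through split t. A split whose gap
    fails to replicate (or flips sign) on validation data scores <= 0."""
    gap = sum(val_left) / len(val_left) - sum(val_right) / len(val_right)
    return math.copysign(1.0, train_sign) * gap

def select_best_tree(candidate_scores):
    """Select the nested tree T_k maximizing G_validation(T_k), the
    sum of per-node validation gains; `candidate_scores` maps each
    candidate index k to the list of G_validation(t) over its
    internal nodes."""
    return max(candidate_scores, key=lambda k: sum(candidate_scores[k]))
```

Because pruned nodes cannot contribute their (possibly negative) validation gains, this criterion trades tree size against how well the training-set splits replicate on held-out data.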
\subsection{Scene Understanding} Like many computer vision tasks greatly advanced by deep learning, scene understanding has seen breakthroughs in accuracy thanks to multi-million-item labeled datasets and large-scale deep neural networks \cite{zhou2016places}. However, like most successful neural network architectures for computer vision, scene understanding networks are not easily understandable: they are trained end-to-end and act as black boxes. Moreover, \cite{nguyen2015deep} shows that many popular network architectures are easily fooled. Although some workarounds have been proposed to examine the evidence behind neural predictions \cite{bau2017network}, they operate at the single-prediction level and cannot be used efficiently when there are millions of training and testing samples. People thus need a tool to extract, from the whole dataset, the general rules contained in a model; if these rules make sense to humans, we can better trust the model to generalize in the real world. In our demonstration, we interpret a deep residual network trained for scene understanding on the MIT Places 365 dataset \cite{he2016deep,zhou2016places}. Specifically, we feed images with ground truth labels ``kitchen'', ``living room'', ``bedroom'', and ``bathroom'' from the validation dataset, 100 images per category, to the trained model and collect the predicted probabilities for these four categories. To obtain the contribution matrix, we apply a scene parsing algorithm, the dilated convolutional network \cite{yu2015multi,zhou2017scene}, to segment each image into semantically meaningful parts. We then perturb each part with noise and re-evaluate the perturbed image in the scene understanding network to obtain new prediction scores for the four categories.
Using the varying scores, the contribution of each semantic part of the image can be estimated via a sparse linear regression model, as done by the local model interpreter \cite{ribeiro2016should}. We thereby obtain the contribution of each semantic category to the prediction scores of the four scene categories, which together form the contribution matrix. Figure \ref{pic} illustrates this process. \begin{figure*}[!tb] \centering{} \includegraphics[width=160mm]{picexample.png} \caption{Rows 1 to 4 each show one example image from the four categories ``bedroom'', ``living room'', ``kitchen'', and ``bathroom'' in the MIT Places 365 scene understanding dataset \cite{zhou2016places}. The first column is the raw image. The second column shows the semantic categories found in the image by the semantic segmentation algorithm, the dilated convolutional network \cite{yu2015multi}. Column 3 shows the actual semantic segmentation, which yields several superpixels per image. Using the local prediction interpreter \cite{ribeiro2016should}, we obtain the contribution of each superpixel, i.e., semantic category, to the predicted scores. Column 4 highlights in green the important semantic superpixels (those with the highest contribution scores) for the corresponding ground truth category. For the ``bedroom'' image, the ``bed'' and ``floor'' superpixels are important. For the ``living room'' image, ``sofa'', ``window pane'', and ``fireplace'' are important. For the ``kitchen'' image, ``cabinet'' is the most important. Finally, for the ``bathroom'' image, ``toilet'' and ``screen door'' play the most important role.
All these explanations seem reasonable to a human observer.} \label{pic} \end{figure*} After obtaining the contribution matrix, which measures the importance of each semantic category for each scene category in each image, we run our Global Interpretation via Recursive Partitioning (GIRP) algorithm to generate an interpretation tree for each of ``kitchen'', ``living room'', ``bedroom'', and ``bathroom''. We set the maximum depth of the resulting tree to 100 and require each node to contain at least 20 images. The results are shown in Figure \ref{itcv}. Only the first four levels of each resulting tree are presented due to space limits; the actual best-sized trees are usually 5 to 10 levels deep. Each node in an interpretation tree shows the number of images it contains. The accuracy number measures the proportion of images correctly identified as the ground truth category of the tree. The split variable of each node is also shown, and the contribution numbers give the average contribution of the split variable in the left and right child nodes. From the trees, we see that for the ``kitchen'', ``living room'', ``bedroom'', and ``bathroom'' scenes, the model finds ``cabinet'', ``sofa'', ``bed'', and ``toilet'' to be the most discriminative semantic categories, which matches common sense. Our approach also reveals some useful rules the model follows: for example, the ``sofa'', ``cushion'', and ``fireplace'' combination achieves an accuracy of 0.958 for identifying ``living room'', while the ``cabinet'', ``stove'', and ``dishwasher'' combination attains a perfect accuracy of 1 for ``kitchen''. These findings increase our confidence in the black-box residual-network-based scene understanding model, because it picks the right important objects in the scene to make its decisions.
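The perturb-and-fit attribution used above can be sketched as follows. This is a simplified stand-in for the sparse linear explainer of \cite{ribeiro2016should}: it uses a plain (unpenalized) least-squares fit, and `predict` is an assumed callable standing for a query to the scene model on an image with the masked-out segments perturbed:

```python
import numpy as np

def segment_contributions(predict, n_segments, n_perturb=200, seed=0):
    """LIME-style contribution estimate: randomly switch semantic
    segments on/off, query the model on each perturbed input, and fit
    a linear model whose coefficients approximate the per-segment
    contributions. `predict` maps a 0/1 mask over segments to a score."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_perturb, n_segments)).astype(float)
    scores = np.array([predict(m) for m in masks])
    X = np.hstack([masks, np.ones((n_perturb, 1))])  # intercept column
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return coef[:-1]  # drop the intercept: one contribution per segment
```

Run once per (image, scene category) pair, the returned vector fills one row of the contribution matrix.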
\begin{figure*}[t] \begin{minipage}[t]{\columnwidth}% \begin{center} \subfloat[Living room]{\includegraphics[width=90mm]{215.png} \label{qta} } \par\end{center}% \end{minipage}\hfill{}% \begin{minipage}[t]{\columnwidth}% \begin{center} \subfloat[Kitchen]{\includegraphics[width=90mm]{203.png} } \par\end{center}% \end{minipage}\hfill{}% \begin{minipage}[t]{\columnwidth}% \begin{center} \subfloat[Bedroom]{\includegraphics[width=90mm]{52.png} \label{qtd} } \par\end{center}% \end{minipage}\hfill{}% \begin{minipage}[t]{\columnwidth}% \begin{center} \subfloat[Bathroom]{\includegraphics[width=90mm]{45.png} \label{qte} } \par\end{center}% \end{minipage}\hfill{}% \caption{Interpretation trees learned for scene categories ``living room'', ``kitchen'', ``bedroom'', and ``bathroom''.} \label{itcv} \end{figure*} \subsection{Text Classification} We now turn to text classification. \cite{ribeiro2016should} reports that text classifiers pick up unreasonable words when discriminating articles about ``Christianity'' from ones about ``Atheism'' in a subset of the 20-newsgroups corpus. While they showcase this phenomenon on a few randomly picked articles, we want to check whether, at the corpus level, the model relies on words unrelated to either concept. To this end, we use our proposed approach to generate an interpretation tree using the words in the articles as features. We train a random forest classifier \cite{liaw2002classification} with 500 trees, which achieves an accuracy of 0.92 on the test set for classifying ``Christianity'' versus ``Atheism'' articles. A TF-IDF \cite{sparck1972statistical} vectorizer transforms each article into a vector before it is sent to the classifier. We then need the contribution matrix to build the interpretation tree.
We use the local interpreter of \cite{ribeiro2016should}, which removes each word from an article one at a time, monitors the change in predicted score, and fits a regression to evaluate the contribution of each word. Running our Global Interpretation via Recursive Partitioning algorithm yields the interpretation tree shown in Figure \ref{txt}. The maximum tree depth is set to 100 and the minimum number of data samples per node to 50. The results show that most words in the tree are not closely related to either ``Christianity'' or ``Atheism'', except ``God'' and ``Christians'' in the lower levels. The most important words pulled out, ``Posting'', ``Rutgers'', and ``com'', look like spurious correlations captured by the model; \cite{ribeiro2016should} reports an imbalanced frequency of these words between the two classes in the corpus. The model has clearly overfit to these patterns and would not generalize well to new articles. This finding suggests that it would be better practice to train text classifiers on multiple corpora, so that they are less likely to overfit to corpus-specific features. This text classification example shows that GIRP and the interpretation tree can be used to diagnose models that overfit the data and to reveal incorrectly learned patterns. \begin{figure*}[t] \begin{minipage}[t]{\columnwidth}% \begin{center} \subfloat[Text Classification]{\includegraphics[width=90mm]{text.png} \label{txt} } \par\end{center}% \end{minipage}\hfill{}% \begin{minipage}[t]{\columnwidth}% \begin{center} \subfloat[ICU Mortality]{\includegraphics[width=90mm]{icu.png} \label{icu} } \par\end{center}% \end{minipage}\hfill{}% \caption{\textbf{Left: Text Classification. }The interpretation tree explaining the random forest model classifying ``Christianity''- and ``Atheism''-related articles.
Unfortunately, we can see from the tree that the model picks up unreasonable words such as ``Posting'', ``rutgers'', and ``com'' as important features, so we can expect poor generalizability. \textbf{Right: ICU Mortality. }The interpretation tree explaining the recurrent neural network model that predicts ICU mortality from past medical records. The codes found by the algorithm are relevant to high or low mortality risk.} \end{figure*} \subsection{Tabular Data: Predicting ICU Mortality}\label{tb} Structured tabular data are ubiquitous in relational databases representing all kinds of events or transactions; hospitals, for instance, use standardized codes to record diagnoses, procedures, and pharmacy orders. The MIMIC database \cite{johnson2016mimic} is one such medical database: it contains intensive care unit (ICU) medical records and is publicly available. We apply the RETAIN algorithm \cite{choi2016retain} to the MIMIC database to predict mortality in the ICU. RETAIN is a specifically designed interpretable recurrent neural network that uses the neural attention mechanism \cite{bahdanau2014neural} to produce the contributions of past medical events to a predicted new event. However, due to the stochastic optimization process, \cite{yang2017machine} has shown that these contributions are unstable when the recurrent neural network is re-trained on re-sampled training data. We apply our proposed Global Interpretation via Recursive Partitioning (GIRP) to see whether the global interpretation of the RETAIN model makes sense even when the local interpretations are unstable. Past diagnosis codes are used to predict whether a patient will die in the ICU, and the contribution of each diagnosis code is generated by RETAIN along with the prediction. For ease of interpretation, we aggregate the diagnosis codes into discrete time frames, although RETAIN predicts on continuous time series.
We collect these contributions, organize them into the contribution matrix, and pass it to GIRP to generate the interpretation tree shown in Figure \ref{icu}. The maximum tree depth is set to 100 and the minimum number of data samples per node to 100. The most relevant diagnosis found by the algorithm is ``convalescence and palliative care in 1 month'', which indicates a mortality of 85.3\%. This makes sense: such a diagnosis likely means that most treatments have been tried and the doctors can do little more for the patient. On the other hand, ``other perinatal jaundice in 1 month'' appears to be a strong protective factor against death in the ICU, which is also reasonable, because jaundice mostly requires urgent care but is not life threatening. For other codes we refrain from commenting, lacking the necessary medical knowledge; however, such a figure may help doctors if it surfaces relations between medical conditions and ICU death that have not been well investigated in medical practice. In this way, our method could help discover new important factors or interactions related to an outcome in complicated settings such as health care, enabling black-box predictive models to be used for knowledge discovery. \section{Conclusion and Discussion} In this paper, we propose a simple but effective method for interpreting black-box machine learning models globally from local explanations of single data samples. A global interpretation is more condensed than a collection of local explanations, and is thus more efficient for diagnosing a trained model or extracting knowledge from it. We show that our Global Interpretation via Recursive Partitioning (GIRP) algorithm can represent many types of machine learning models in a compact manner, and we demonstrate it with various kinds of models on different tasks.
We have shown that the deep residual network looks for the right objects when classifying scenes. In contrast, in text classification, the interpretation tree indicates that the random forest classifier focuses on the wrong words when discriminating texts on different topics. The proposed method is also useful for extracting decision rules from sophisticated models; such rules are hidden in black-box models but are critical to know if we want to influence the outcome. We showcase this by extracting disease comorbidities leading to high mortality in the intensive care unit. In conclusion, our method helps people understand machine learning models efficiently, making it easier to check whether a model behaves reasonably and to make use of the knowledge it discovers. However, the proposed method is limited in several ways. First, we lack a quantitative measure of the fidelity of the interpretation tree to the original model. Although the interpretation tree is developed directly from contributions generated by the original model, some detail is lost when we extract the general rules, and we do not know how important these details are to the model's high predictive power. Second, although we present split strength as a measure of variable importance in the interpretation tree, the confidence in this measure is unknown. Linear methods are popular in evidence-based studies partly because confidence intervals are easy to estimate for them; owing to the complexity of the underlying probability distributions of sophisticated machine learning methods, it is difficult to estimate confidence intervals for the split strengths in the interpretation tree. Finally, the proposed method needs a contribution matrix as input, which is difficult to obtain when a feature representation of the input variables is not well established, as in speech recognition and computer vision.
For example, in many vision tasks, even though local visual explanations are available via image segmentation \cite{ribeiro2016should,yang2016supervoxel,yang2018visual}, it is difficult to aggregate them into high-level visual features. This is closely connected to the broader problem, and to theories such as the information bottleneck \cite{tishby2000information,tishby2015deep}, of understanding how machine learning models identify important high-level features and ignore noise. Additionally, even when an explicit feature representation is available to form the contribution matrix, group effects of features are not well captured by the current method. Mechanisms similar to the group LASSO \cite{yuan2006model} could be added to address this problem. Each of these limitations points to a promising direction for future work. We want to quantify the fidelity of the interpretation tree to the original explained model. We are considering bootstrapping methods \cite{efron1979bootstrap} for confidence interval estimation, since direct estimation of the underlying probability distributions is difficult. Finally, representation learning methods could be incorporated into the algorithm when the contribution matrix is difficult to obtain. All of these pose exciting challenges for making machine learning more transparent to human beings. \bibliographystyle{IEEEtran}
\section{Introduction} \label{intro} The hypothesis of cosmic inflation \cite{Guth:1980zm,Linde:1981mu,Albrecht:1982wi,Linde:1983gd} provides a plausible explanation of the large scale homogeneity of the universe and, more importantly, of the primordial density perturbations that evolve into cosmic structure. A simple way inflation can occur is based on a slow-rolling scalar field $\phi$ called the inflaton. Once the Lagrangian for the inflaton field and the thermal history of the universe after inflation are specified, values for observational parameters can be calculated and compared with constraints coming from measurements of the cosmic microwave background (CMB) anisotropies \cite{Ade:2015xua,Ade:2015lrj}. The observational parameters, in particular the scalar spectral index $n_s$ and the tensor-to-scalar ratio $r$, have been calculated for various inflationary potentials (see \cite{Martin:2013tda} for a comprehensive subset). An assumption often made in these calculations is that the inflaton is minimally coupled. On the other hand, a renormalizable scalar field theory in curved space-time also requires the non-minimal coupling $\xi\phi^2R$ between the inflaton and the Ricci scalar \cite{Callan:1970ze,Freedman:1974ze,Buchbinder:1992rb}. For a given potential, depending on the value of the non-minimal coupling parameter $\xi$, the inflationary predictions, and even whether inflation occurs at all, can change \cite{Abbott:1981rg,Spokoiny:1984bd,Lucchin:1985ip,Futamase:1987ua,Fakir:1990eg,Salopek:1988qh,Amendola:1990nn,Faraoni:1996rf,Faraoni:2004pi}. Here we will investigate how the value of the non-minimal coupling parameter $\xi$ affects the inflationary predictions for potentials where the inflaton has a non-zero vacuum expectation value (VEV) $v$ after inflation. In terms of the redefined field $\varphi\equiv\phi-v$, the non-minimal coupling in the Lagrangian includes a linear term in $\varphi$ as well as a quadratic term.
Under some conditions on $\xi$ and $v$ that are discussed in \sektion{nonminimal}, this leads to inflationary predictions that approach those of the Starobinsky ($R^2$ inflation) model \cite{Starobinsky:1980te}, which is in good agreement with the current observations \cite{Ade:2015lrj}. The Starobinsky-like behaviour is obtained not just for the well-known non-minimally coupled quartic potential case but also when inflation occurs near the quadratic minimum of the potential, for inflaton values above (below) the VEV and $\xi>0$ ($\xi<0$). A reason for considering a non-zero VEV after inflation is that such potentials can be associated with symmetry breaking in the early universe. After a general discussion of inflation with non-minimal coupling for such potentials, we then analyze in detail two archetypal symmetry breaking potentials, namely the double-well potential (\sektion{double}) and the Coleman-Weinberg potential (\sektion{cw}). Although both potentials with non-minimal coupling were previously considered, there are some gaps and disagreements in the literature which we address in these sections. For each potential, we display the observational parameter values as functions of $v$ for selected $\xi$ values as well as the regions in the $v$-$\xi$ plane for which the spectral index $n_s$ and tensor-to-scalar ratio $r$ values are compatible with the current observations. \Sektion{small} suggests modifying the double-well potential to obtain a small field inflation (hilltop) potential, which unlike the other two potentials can fit observations for inflaton values below the VEV and $\xi,\,v\ll1$. Finally, \sektion{conclude} concludes the paper with a summary of our results and a remark on perturbative unitarity violation. It is worth mentioning that we use the metric formulation of gravity throughout the paper. 
For inflation with a non-minimally coupled scalar field, the Palatini formulation leads to different predictions for cosmological parameters \cite{Bauer:2008zj}. In particular, the attractor behaviour leading to the predictions of the Starobinsky model is lost, and $r$ can be much smaller compared to the metric formulation \cite{Bauer:2008zj,Jarv:2017azx}. \section{Inflation with non-minimal coupling} \label{nonminimal} Suppose we have a non-minimally coupled scalar field $\phi$ with a canonical kinetic term and a potential $V_J(\phi)$: \begin{equation} \label{vjphi} \frac{\mathcal{L}_J}{\sqrt{-g}}=\frac12F(\phi)R-\frac12g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V_J(\phi)\,, \end{equation} where the subscript $J$ indicates that the Lagrangian is specified in a Jordan frame. Here, for $V_J(\phi)$ we will be considering symmetry-breaking type of potentials where the inflaton $\phi$ takes positive values and has a non-zero vacuum expectation value (VEV) $v$ after inflation. Our choice for $F(\phi)$ consists of a constant $m^2$ term and a non-minimal coupling $\xi\phi^2R$ between the inflaton and the Ricci scalar. The constant term is familiar from the Einstein-Hilbert action, and the $\xi\phi^2R$ term is required in a renormalizable scalar field theory in curved space-time \cite{Callan:1970ze,Freedman:1974ze,Buchbinder:1992rb}. We are using units where the reduced Planck scale $m_P=1/\sqrt{8\pi G}\approx2.4\times10^{18}\text{ GeV}$ is set equal to unity, so we require $F(\phi)\to1$ after inflation. Therefore taking $m^2=1-\xi v^2$, we have $F(\phi)=m^2+\xi\phi^2=1+\xi(\phi^2-v^2)$. The $\xi v^2$ term in $F(\phi)$ can be neglected in some specific models such as when the standard model Higgs is the inflaton \cite{Bezrukov:2007ep,Atkins:2012yn}, but may well play an important role in other models. 
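As a small numerical illustration of this normalization (our own sketch, not part of the original analysis; the function name is ours), in reduced Planck units:

```python
def F(phi, xi, v):
    """Non-minimal function F(phi) = m^2 + xi*phi^2 with m^2 = 1 - xi*v^2,
    i.e. F(phi) = 1 + xi*(phi^2 - v^2), in reduced Planck units (m_P = 1)."""
    return 1.0 + xi * (phi**2 - v**2)

# F returns to the Einstein-Hilbert normalization F = 1 at the VEV phi = v.
print(F(0.01, 1.0e4, 0.01))   # 1.0
# For GUT-scale v ~ 0.01 and |xi| ~ 1e4 the xi*v^2 term is order one:
# at phi = 0 the effective coupling F(0) = 1 - xi*v^2 can even vanish.
print(F(0.0, 1.0e4, 0.01))    # ~ 0 (up to machine precision)
print(F(0.05, 1.0e4, 0.01))   # about 25, i.e. F >> 1 during inflation
```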
For instance, if inflation is associated with symmetry breaking at or near the grand unified theory scale $v\sim0.01$, $|\xi|v^2\gtrsim1$ is possible for values of $|\xi|\gtrsim10^4$, similar to the values required for standard model Higgs inflation. As we will see, as cosmological scales exit the horizon, $F(\phi)\gtrsim1$ in the observationally favored region of parameters, and the Starobinsky-like regime corresponds to $F(\phi)\gg1$. The effective gravitational constant $G_N=1/[8\pi F(\phi)]$ remains positive throughout the evolution of the field. Indeed, if we switch to the Einstein frame and make a field redefinition so that the kinetic term is again canonical, we see that $G_N=0$ is only reached at infinite values of the field (see \sektion{calculate} and ref. \cite{Linde:2011nh}). This implies, in particular, that if $\xi v^2>1$ there can be no transition from the symmetric ($\phi=0$) phase to the broken-symmetry ($\phi=v$) phase. Nevertheless, we include this case in our investigations since the field evolution does not have to start from the symmetric phase, and could for example start from values above the VEV, as would be expected for chaotic initial conditions \cite{Linde:1983gd}. It has been appreciated \cite{Whitt:1984pd,Salopek:1988qh,Barbon:2009ya,Linde:2011nh,Kallosh:2013hoa,Kallosh:2013maa,Kallosh:2013tua,Kehagias:2013mya,Giudice:2014toa,Galante:2014ifa} that different choices for $F(\phi)$ and $V_J(\phi)$ can share the attractor point of the Starobinsky ($R^2$ inflation) model \cite{Starobinsky:1980te,Mukhanov:1981xt}, which predicts \begin{equation}\label{starpoint} n_s=1-\frac{2}{N}\,,\quad r=\frac{12}{N^2}\,,\quad \frac{\mathrm{d} n_s}{\mathrm{d}\ln k}=-\frac{2}{N^2}\,, \end{equation} to leading order in the number of e-folds $N$, where $\mathrm{d} n_s/\mathrm{d} \ln k$ is the running of the spectral index.
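The leading-order Starobinsky point \eq{starpoint} is easy to evaluate numerically; the following minimal sketch (ours; the function name is an assumption) returns all three observables for a given $N$:

```python
def starobinsky_predictions(N):
    """Leading-order Starobinsky (R^2) model predictions as functions of the
    e-fold number N: returns (n_s, r, dn_s/dlnk), from
    n_s = 1 - 2/N, r = 12/N^2, running = -2/N^2."""
    return 1.0 - 2.0 / N, 12.0 / N**2, -2.0 / N**2

n_s, r, running = starobinsky_predictions(55.0)
# For N = 55: n_s ~ 0.964 and r ~ 0.004, well inside the Planck contours.
```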
An example relevant to our discussion is \begin{equation}\label{3950} F(\phi)=1+\xi \phi^n\,,\qquad V_J(\phi)\propto \phi^{2n}\,, \end{equation} which is a special case of the strong coupling attractor model discussed in \cite{Kallosh:2013tua} (see also \cite{Barbon:2009ya}). In terms of the redefined field $\varphi\equiv\phi-v$ so that $\varphi=0$ after inflation, $F(\varphi)=1+\xi\varphi^2(1+2v/\varphi)$ includes a linear term in $\varphi$ as well as a quadratic term. If $\phi^2-v^2\gg v^2$ ($\varphi^2\gg v^2$) as cosmological scales exit the horizon, it means the inflaton is away from the minimum and $F(\varphi)\approx1+\xi\varphi^2$. Then \eq{3950} is satisfied for $V_J(\varphi)\propto\varphi^4$, the non-minimally coupled quartic model well-known since the late eighties \cite{Fakir:1990eg,Salopek:1988qh,Okada:2010jf,Bezrukov:2013fca}. On the other hand, if $|\phi^2-v^2|\ll v^2$ ($\varphi^2\ll v^2$) as cosmological scales exit the horizon, $F(\varphi)\approx1+2\xi v\varphi$ so that \eq{3950} is satisfied for $V_J(\varphi)\propto\varphi^2$, with $\xi>0$ ($\xi<0$) for inflaton values above (below) the VEV during inflation. Since a generic potential will be quadratic close enough to its minimum, it seems that a generic $V_J(\phi)$ can share the predictions of the Starobinsky model up to leading order in the number of e-folds $N$. For this to happen, $\xi$ and $v$ values should satisfy some constraints which we discuss in \sektion{star}. Before that, we briefly review how to calculate the observational parameters for inflation with non-minimal coupling. 
\subsection{Calculating the observational parameters} \label{calculate} For calculating the observational parameters given \eq{vjphi}, it is convenient to switch to the Einstein ($E$) frame by applying a Weyl rescaling $g_{\mu\nu}=\tilde{g}_{\mu\nu}/F(\phi)$, so that the Lagrangian density takes the form \cite{Fujii:2003pa} \begin{equation} \label{LE} \frac{\mathcal{L}_E}{\sqrt{-\tilde{g}}}=\frac12\tilde{R}-\frac{1}{2Z(\phi)}\tilde{g}^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V_E(\phi)\,, \end{equation} where \begin{equation} \label{Zphi} \frac{1}{Z(\phi)}=\frac32\frac{F'(\phi)^2}{F(\phi)^2}+\frac{1}{F(\phi)}\,,\qquad V_E(\phi)=\frac{V_J(\phi)}{F(\phi)^2}\,, \end{equation} and $F'\equiv\mathrm{d} F/\mathrm{d}\phi$. If we make a field redefinition \begin{equation}\label{redefine} \mathrm{d}\sigma=\frac{\mathrm{d}\phi}{\sqrt{Z(\phi)}}\,, \end{equation} we obtain the Lagrangian density for a minimally coupled scalar field $\sigma$ with a canonical kinetic term. For $F(\phi)=1+\xi(\phi^2-v^2)$, \eq{Zphi} gives \begin{equation} \label{Zphiexplicit} \frac{1}{Z(\phi)}=\frac{1+\xi(\phi^2-v^2)+6\xi^2\phi^2}{\left[1+\xi(\phi^2-v^2)\right]^2}\,. \end{equation} It will be useful to consider some simplifying cases of this expression: \begin{description}[style=multiline,font=\normalfont] \item[1.] Weak coupling limit \\ If $|\xi(\phi^2-v^2)|\ll1$ and $6\xi^2\phi^2\ll1$, $\phi\approx\sigma$ and $V_J(\phi)\approx V_E(\sigma)$. (Provided $|\xi|\ll1/6$, these conditions will be satisfied when $|\xi|v^2\ll1$ for inflation below the VEV, and $|\xi|\phi^2\ll1$ for inflation above the VEV.) Then, the inflationary predictions are approximately the same as for minimal coupling in general. Note, however, that if $V_J(\phi)$ is very flat as cosmological scales exit the horizon, then even a small correction in the potential can significantly alter the inflationary predictions, as we will discuss in \sektion{small}. \item[2.] 
Induced gravity limit \cite{Zee:1978wi} \\ In this limit ($\xi v^2=1$, $F(\phi)=\xi\phi^2$), \eq{Zphiexplicit} simplifies to $Z(\phi)=\xi\phi^2/(1+6\xi)$ and using \eq{redefine}, we obtain \begin{equation}\label{induced} \phi=v\exp\left(\sqrt{\frac{\xi}{1+6\xi}}\sigma\right)\,, \end{equation} where we took $\sigma(v)=0$. \item[3.] Strong coupling limit \\ If $6\xi^2\phi^2\gg|\xi(\phi^2-v^2)|\gg1$, we have \begin{equation} \label{strong} \frac{1}{Z(\phi)}\approx\frac{6\phi^2}{(\phi^2-v^2)^2}\,. \end{equation} Using \eq{redefine}, we obtain $\phi^2-v^2\propto e^{2\sigma/\sqrt6}$, where $\sigma$ is positive during inflation. This exponential behaviour in terms of the canonical field $\sigma$ makes it difficult to satisfy observations except for the special cases discussed in \sektion{star}, where the Einstein frame potential $V_E(\sigma)$ has a plateau due to cancellations between $V_J(\phi)$ and $F(\phi)^2$. \end{description} Once the Einstein frame potential is expressed in terms of the canonical $\sigma$ field, the observational parameters can be calculated using the slow-roll parameters (see ref. \cite{Lyth:2009zz} for a review and references): \begin{equation}\label{slowroll1} \epsilon =\frac{1}{2}\left( \frac{V_{\sigma} }{V}\right) ^{2}\,, \quad \eta = \frac{V_{\sigma \sigma} }{V} \,, \quad \xi ^{2} = \frac{V_{\sigma} V_{\sigma \sigma\sigma} }{V^{2}}\,, \end{equation} where $\sigma$'s in the subscript denote derivatives, and the slow-roll parameter $\xi^2$ is not to be confused with the non-minimal coupling parameter $\xi$. The spectral index $n_s$, the tensor-to-scalar ratio $r$ and the running of the spectral index $\mathrm{d} n_s/\mathrm{d} \ln k$ are given in the slow-roll approximation by \begin{equation}\label{nsralpha1} n_s = 1 - 6 \epsilon + 2 \eta \,,\quad r = 16 \epsilon \,,\quad \frac{\mathrm{d} n_s}{\mathrm{d}\ln k} = 16 \epsilon \eta - 24 \epsilon^2 - 2 \xi^2\,.
\end{equation} The amplitude of the curvature perturbation $\Delta_\mathcal{R}$ is given by \begin{equation} \label{perturb1} \Delta_\mathcal{R}=\frac{1}{2\sqrt{3}\pi}\frac{V^{3/2}}{|V_{\sigma}|}\,, \end{equation} which should satisfy $\Delta_\mathcal{R}^2\approx 2.4\times10^{-9}$ from the Planck measurement \cite{Ade:2015xua} with the pivot scale chosen at $k_* = 0.002$ Mpc$^{-1}$. The number of e-folds is given by \begin{equation} \label{efold1} N_*=\int^{\sigma_*}_{\sigma_e}\frac{V\,\mathrm{d}\sigma}{V_{\sigma}}\,, \end{equation} where the subscript ``$_*$'' denotes quantities when the scale corresponding to $k_*$ exited the horizon, and $\sigma_e$ is the inflaton value at the end of inflation, which we estimate by $\epsilon(\sigma_e) = 1$. Unfortunately, for general values of $\xi$ and $v$, it is difficult and inconvenient to express the potential in terms of the canonical field $\sigma$. We therefore rewrite these slow-roll expressions in terms of the original field $\phi$ for the numerical calculations, following the approach in ref. \cite{Linde:2011nh}. Using \eq{redefine}, \eq{slowroll1} can be written as \begin{equation}\label{slowroll2} \epsilon=Z\epsilon_{\phi}\,,\quad \eta=Z\eta_{\phi}+\mathrm{sgn}(V')Z'\sqrt{\frac{\epsilon_{\phi}}{2}}\,,\quad \xi^2=Z\left(Z\xi^2_{\phi}+3\,\mathrm{sgn}(V')Z'\eta_{\phi}\sqrt{\frac{\epsilon_{\phi}}{2}}+Z''\epsilon_{\phi}\right)\,, \end{equation} where we defined \begin{equation} \epsilon_{\phi} =\frac{1}{2}\left( \frac{V^{\prime} }{V}\right) ^{2}\,, \quad \eta_{\phi} = \frac{V^{\prime \prime} }{V} \,, \quad \xi ^{2} _{\phi}= \frac{V^{\prime} V^{\prime \prime\prime} }{V^{2}}\,. \end{equation} Similarly, \eq{perturb1} and \eq{efold1} can be written as \begin{eqnarray}\label{perturb2} \Delta_\mathcal{R}&=&\frac{1}{2\sqrt{3}\pi}\frac{V^{3/2}}{\sqrt{Z}|V^{\prime}|}\,,\\ \label{efold2} N_*&=&\mathrm{sgn}(V')\int^{\phi_*}_{\phi_e}\frac{\mathrm{d}\phi}{Z(\phi)\sqrt{2\epsilon_{\phi}}}\,.
\end{eqnarray} To calculate the numerical values of $n_s$, $r$ and $\mathrm{d} n_s/\mathrm{d}\ln k$ we also need a numerical value of $N_*$. Assuming a standard thermal history after inflation, \begin{equation} \label{efolds} N_*\approx64.7+\frac12\ln\frac{\rho_*}{m^4_P}-\frac{1}{3(1+\omega_r)}\ln\frac{\rho_e}{m^4_P} +\left(\frac{1}{3(1+\omega_r)}-\frac14\right)\ln\frac{\rho_r}{m^4_P}\,. \end{equation} Here $\rho_{e}=(3/2)V(\phi_{e})$ is the energy density at the end of inflation, $\rho_r$ is the energy density at the end of reheating and $\omega_r$ is the equation of state parameter during reheating, which we take to be constant.\footnote{For a derivation of \eq{efolds} see e.g. ref. \cite{Liddle:2003as}. Note that $N_*$ is defined in the Einstein frame. The number of e-folds in the Jordan frame $N_*^J=N_*+(1/2)\ln[F(\phi_*)/F(\phi_e)]$ can be noticeably different in the strong coupling limit \cite{Lerner:2009xg,Lerner:2009na,Burns:2016ric,Karam:2017zno}. However, in a Jordan frame calculation the additional term in $N_*^J$ would appear in both eqs. \ref{efold2} and \ref{efolds}, leaving the physically observable quantities unchanged, as expected \cite{Postma:2014vaa}.} Using \eq{perturb1}, we can express $\rho_*$ in terms of $r$: \begin{equation}\label{nstarandr} \rho_*\approx V(\phi_*)=\frac{3\pi^2\Delta_\mathcal{R}^2 r}{2}\,. \end{equation} To represent a plausible range of $N_*$, we consider three cases: In the high-$N$ case $\omega_{r}$ is taken to be 1/3, which is equivalent to assuming instant reheating. In the middle-$N$ case we take $\omega_{r}=0$ and the reheat temperature $T_r=10^9$ GeV, calculating $\rho_r$ using the standard model value for the number of relativistic degrees of freedom ($g_*=106.75$). In the low-$N$ case we take $T_r=100$ GeV (again with $\omega_{r}=0$).\footnote{$T_r$ as low as 10 MeV is consistent with big bang nucleosynthesis; however, it is difficult to explain how baryogenesis could occur at such low temperatures.} The $n_s$ vs.
$r$ curve for each case is shown in figure \ref{higgs_above_reheat} for the double-well potential (discussed in \sektion{double}) along with the 68\% and 95\% confidence level (CL) contours given by the Planck collaboration (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua}. The figure shows that for the double-well potential, the fiducial $N_*$ values of 50 and 60 that are often used essentially coincide with the range expected from a standard thermal history after inflation. This is also the case for the Coleman-Weinberg potential discussed in \sektion{cw}. However, $N_*$ is smaller (e.g. between approximately 45 and 55 if $v\sim0.01$) for the small field inflation models discussed in \sektion{small} due to inflation occurring at a lower energy scale. We have carried out all the calculations in this article up to the leading order in the slow-roll parameters. Higher order corrections slightly change the values of the observational parameters (see e.g. refs. \cite{Lyth:2009zz,Karam:2017zno}). However, the uncertainty in the values of these parameters due to the reheating stage is much larger compared to the theoretical errors associated with the slow-roll approximation. \begin{figure}[!t] \centering \includegraphics[angle=0, width=10cm]{higgs_above_reheat.pdf} \caption{$n_s$-$r$ predictions for varying $\xi$ values and different reheating scenarios as explained in the text. The dots on the curves correspond to $\xi=10^{-2.5},\,10^{-2},\,10^{-1.5},\,0.1,$ and $\,1$, top to bottom. 
The pink (red) contour corresponds to the 95\% (68\%) CL contour given by the Planck collaboration (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua}.} \label{higgs_above_reheat} \end{figure} \subsection{The Starobinsky conditions} \label{star} As mentioned in the beginning of \sektion{nonminimal}, for a potential $V_J(\varphi)$ which is quartic away from the minimum or quadratic close to the minimum, if some conditions on $\xi$ and $v$ values are satisfied, predictions approach the Starobinsky point given by \eq{starpoint} on the $n_s$-$r$ plane. Following the discussion in ref. \cite{Galante:2014ifa}, we will now derive these conditions using the relation of the Starobinsky point with the order and residue of the leading pole in the kinetic term. Let's write the Einstein frame Lagrangian density in terms of $\chi(\phi)\equiv 1/F(\phi)$: \begin{equation} \label{LEchi} \frac{\mathcal{L}_E}{\sqrt{-\tilde{g}}}=\frac12\tilde{R}-\frac{1}{2}K(\chi)\tilde{g}^{\mu\nu}\partial_{\mu}\chi\partial_{\nu}\chi-V_E(\chi)\,. \end{equation} Suppose $K(\chi)$ is given by a Laurent series with a leading pole located at $\chi=0$ whereas $V_E(\chi)$ is given by a Taylor series starting from a non-vanishing constant term $U_0$ as cosmological scales exit the horizon: \begin{equation} \label{vechi} K(\chi)=\frac{a_p}{\chi^p}+\cdots\,,\qquad V_E(\chi)=U_0(1-c\chi+\cdots)\,. \end{equation} In analogy with motion of a particle for $L=(1/2)m\dot{x}^2-V(x)$ and $m\to\infty$, slow-roll inflation occurs for $\chi\to0$. We can calculate the inflationary predictions using the usual slow-roll expressions (see \sektion{calculate}) and $\mathrm{d}\sigma=\sqrt{K(\chi)}\mathrm{d}\chi$, obtaining \cite{Galante:2014ifa} \begin{equation} \label{Galante} N_*\approx\frac{a_p\chi_*^{1-p}}{c(p-1)}\,,\quad n_s\approx1-\frac{p}{(p-1)N_*}\,,\quad r\approx8\left(\frac{c^{p-2}a_p}{[(p-1)N_*]^p}\right)^{\frac{1}{p-1}}\,. 
\end{equation} For a standard thermal history after inflation, the current data \cite{Ade:2015xua,Ade:2015lrj} favors $n_s\approx1-2/N_*$, which corresponds to the case $p=2$. Note that for this case $r=8a_2/N_*^2$ does not depend on $c$, which is to be expected since the kinetic term is invariant under $\chi\to c\chi$. The Starobinsky model predictions given by \eq{starpoint} correspond to $p=2$ and $a_2=3/2$. Now consider inflation with $F(\phi)=1+\xi(\phi^2-v^2)$, so that $\chi\to0$ corresponds to $\xi(\phi^2-v^2)\gg1$. This implies that we can look for Starobinsky-like solutions, with inflaton values above (below) the VEV if $\xi>0$ ($\xi<0$). Using \eq{Zphi}, we obtain \begin{equation} \label{Zchi} K(\chi)=\frac{3}{2\chi^2}+\frac{1}{4\xi\chi^2\left[1-\chi(1-\xi v^2)\right]}\,. \end{equation} Note that this equation differs from eq. (22) of ref. \cite{Galante:2014ifa} due to the $\xi v^2$ term in $F(\phi)$. As a consequence, the kinetic term can remain positive after as well as during inflation for both signs of $\xi$. First, consider above VEV solutions satisfying $\phi^2-v^2\gg v^2$ as cosmological scales exit the horizon. In this case \begin{equation} \label{Zchi2} K(\chi)\approx\frac{3\alpha}{2\chi^2}\,, \text{ where }\alpha\equiv1+\frac{1}{6\xi}\approx \left\{ \arraycolsep=1.4pt\def1.5{1.5} \begin{array}{rl} 1 & \text{if } \xi \gg \frac16,\\ \frac{1}{6\xi} & \text{if } \xi \ll \frac16.\\ \end{array} \right. \end{equation} Note that from \eq{Galante}, $\chi_*\approx a_2/(cN_*)$ with $a_2=3\alpha/2$, so $\chi_*\ll1$ corresponds to $\xi\gg1/(4cN_*)$. Also, the assumption $\phi^2-v^2\gg v^2$ corresponds to $\xi v^2\ll 2cN_*/(3\alpha)$. When these conditions are satisfied, the leading order inflationary predictions coincide with those of the $\alpha$--attractor models \cite{Ferrara:2013rsa,Kallosh:2013yoa,Galante:2014ifa}, namely, \begin{equation}\label{alphaattractor} n_s=1-\frac{2}{N_*}\,,\quad r=\frac{12\alpha}{N_*^2}\,. 
\end{equation} Second, let's assume that $|\phi^2-v^2|\ll v^2$ as cosmological scales exit the horizon. Further assuming $6|\xi|v^2\gg|\phi^2-v^2|$, \eq{Zchi} simplifies to $K(\chi)\approx3/(2\chi^2)$, that is, $p=2$ and $a_2=3/2$. Therefore the Starobinsky model predictions given by \eq{starpoint} are obtained for inflaton values both above and below the VEV whenever these two assumptions are satisfied. Using $\chi_*\approx 3/(2cN_*)$, the two assumptions correspond to $|\xi|v^2\gg(2cN_*)/3$ and $\xi^2v^2\gg cN_*/9$, respectively. As long as these conditions are satisfied and $V_E(\chi)$ is given by \eq{vechi}, the inflationary predictions will match the Starobinsky model predictions up to leading order in $N_*$. From \eq{vechi} we obtain \begin{equation} V_J(\phi)\approx U_0\xi^2(\phi^2-v^2)^2\left(1+\frac{2-c}{\xi(\phi^2-v^2)}+\cdots\right)\,, \end{equation} which implies that regardless of the value of $c$ (as long as it is $\ll1/\chi$), Starobinsky-like solutions are obtained if $V_J(\phi)$ is approximately given by the double well potential for which $c=2$, as cosmological scales exit the horizon. Therefore in terms of $\varphi\equiv\phi-v$, the potentials satisfying \eq{vechi} can be written as \begin{equation} V_J(\varphi)\propto\varphi^4\left(1+2\frac{v}{\varphi}\right)^2\,. \end{equation} This again shows that the Starobinsky point given by \eq{starpoint} is obtained for the $\varphi^4$ potential away from the minimum ($\varphi^2\gg v^2$), and the $\varphi^2$ potential near the minimum ($\varphi^2\ll v^2$). From $\chi_*\approx3/(4N_*)$ (for $c=2$), the value of $\varphi$ as cosmological scales exit the horizon is $\varphi_*^2\approx4N_*/(3\xi)$ and $|\varphi_*|\approx2N_*/(3|\xi| v)$ for these two cases, respectively. We can summarize the Starobinsky conditions as follows. 
The inflationary predictions for $V_J(\phi)$ coincide with the Starobinsky model predictions up to leading order in $N_*$ if: \begin{description}[style=multiline,font=\normalfont] \item[1.] The inflaton is above the VEV, $V_J(\phi)$ is quartic for $\phi^2\gg v^2$, $\xi v^2\ll4N_*/3$ and $\xi\gg1/6$. (On the other hand, from \eq{Zchi2}, $r\approx2/(\xi N_*^2)$ for $1/(8N_*)\ll\xi\ll1/6$.) \item[2.] The potential is quadratic around the minimum for $\varphi^2\ll v^2$ and \begin{equation}\label{star2} \xi^2v^2\gg \frac{2N_*}{9} \text{ if } |\xi|<\frac16\,,\quad |\xi| v^2\gg \frac{4N_*}{3} \text{ if } |\xi|>\frac16\,,\\ \end{equation} \end{description} where $\xi>0$ ($\xi<0$) for inflaton values above (below) the VEV. These conditions satisfy the strong coupling limit \eq{strong}, so that a plateau type Einstein frame potential is obtained during inflation in terms of the canonical scalar field: \begin{equation} V_E(\sigma)\approx U_0\left(1-e^{-2\sigma/\sqrt6}\right)\,. \end{equation} Both 1. and 2. are special cases of the strong coupling attractor model \eq{3950}, with $n=2$ and $n=1$, respectively. The $n=1$ case is discussed in ref. \cite{Kehagias:2013mya} and also belongs to the class of the induced inflation models discussed in ref. \cite{Giudice:2014toa}. \section{Double-well potential} \label{double} In this section we analyze the prototypical symmetry breaking potential \cite{Goldstone:1961eq} \begin{equation} V_J(\phi)=V_0\left[1-\left(\frac{\phi}{v}\right)^2\right]^2\,, \end{equation} referred to as the double-well potential, the Higgs potential or the Landau-Ginzburg potential. First, let us briefly review inflation with this potential for the minimal coupling case, which was analyzed in several papers, see e.g. refs. \cite{Vilenkin:1994pv,Linde:1994wt,Destri:2007pv,Kallosh:2007wm,Smith:2008pf,Rehman:2008qs,Martin:2013tda,Okada:2014lxa,Ashoorioon:2014jja}. 
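For the minimally coupled case, the leading-order slow-roll predictions of this potential can be obtained with a few lines of numerics. The following is our own sketch (reduced Planck units; above-VEV branch only; the function name and bisection tolerances are our choices), solving $\epsilon(\phi_e)=1$ and the e-fold condition for $\phi_*$:

```python
import math

def ns_r_double_well_above(v, N_target=60.0):
    """Leading-order slow-roll n_s and r for the minimally coupled double-well
    potential V = V0 [1 - (phi/v)^2]^2, inflaton above the VEV.
    Reduced Planck units (m_P = 1); V0 drops out. Variable x = phi / v > 1."""
    eps = lambda x: 8.0 * x**2 / (v * (x**2 - 1.0))**2                # (V'/V)^2 / 2
    eta = lambda x: 4.0 * (3.0 * x**2 - 1.0) / (v * (x**2 - 1.0))**2  # V''/V

    # End of inflation: eps(x_e) = 1; eps decreases monotonically for x > 1.
    lo, hi = 1.0 + 1e-12, 1.0e3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if eps(mid) > 1.0 else (lo, mid)
    x_e = 0.5 * (lo + hi)

    # e-folds N = int (V/V') dphi = (v^2/4) [x^2/2 - ln x] from x_e to x_*.
    N = lambda x: 0.25 * v**2 * (0.5 * x**2 - math.log(x))
    lo, hi = x_e, 1.0e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if N(mid) - N(x_e) > N_target else (mid, hi)
    x_star = 0.5 * (lo + hi)

    return 1.0 - 6.0 * eps(x_star) + 2.0 * eta(x_star), 16.0 * eps(x_star)
```

In the $v^2\gg4N_*$ limit this reproduces the quadratic-like predictions $n_s\approx1-2/N_*$, $r\approx8/N_*$, and in the $v^2\ll4N_*$ limit the quartic-like ones $n_s\approx1-3/N_*$, $r\approx16/N_*$.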
When inflation occurs near the minimum, in terms of $\varphi\equiv\phi-v$, the potential is approximately quadratic: $V\approx(4V_0/v^2)\varphi^2$. Since $\varphi_*^2\approx4N_*$ for quadratic inflation, the observable part of inflation occurs near the minimum for $v^2\gg4N_*$. Then the quadratic potential predictions of \begin{equation}\label{quadratic} n_s\approx1-\frac{2}{N_*}\,,\quad r\approx\frac{8}{N_*}\,,\quad \frac{\mathrm{d} n_s}{\mathrm{d}\ln k}\approx-\frac{2}{N_*^2}\,, \end{equation} are obtained for inflation both below and above the VEV. For inflation above the VEV, if $v^2\ll4N_*$ then we have quartic inflation with $n_s\approx1-3/N_*$ and $r\approx16/N_*$. The predictions interpolate between the quadratic and quartic limits for $v^2\sim4N_*$, remaining outside the 95\% CL Planck contour (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua} for all $v$. For inflation below the VEV, on the other hand, if $v^2\ll4N_*$ then $\phi\ll v$ as cosmological scales exit the horizon, so the potential is effectively of the new inflation (small field or hilltop inflation) type \begin{equation} V(\phi)\approx V_0\left[1-2\left(\frac{\phi}{v}\right)^2\right]\,, \end{equation} which implies a strongly red tilted spectrum $n_s\approx1-8/v^2$ with suppressed $r$. As a result, although both the $v^2\ll4N_*$ and $v^2\gg4N_*$ limits are ruled out, the $n_s$-$r$ values are inside the 68\% CL Planck contour (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua} for a narrow range around $v^2\sim4N_*$ (specifically, between $v=19$ and 25 for the high-$N$ case), see figures \ref{higgs_vxi_figure} and \ref{higgs_below}. Note that all the figures in this section are obtained for the high-$N$ case, using the equations given in \sektion{calculate}. In particular, from \eq{efold2} we obtain: \begin{equation}\nonumber N_*=\frac18(1+6\xi)(\phi_*^2-\phi_e^2)+\frac{v^2}{4}\ln\frac{\phi_e}{\phi_*}+\frac34\ln\frac{1+\xi(\phi_e^2-v^2)}{1+\xi(\phi_*^2-v^2)}\,.
\end{equation} \begin{figure}[!t] \begin{center} \scalebox{0.45}{\includegraphics{higgs_below_negative_vxi.pdf}}\hspace{0.3cm} \scalebox{0.45}{\includegraphics{higgs_below_positive_vxi.pdf}} \\ \vspace{0.3cm} \scalebox{0.45}{\includegraphics{higgs_above_positive_vxi.pdf}} \end{center}\vspace{-0.5cm} \caption{Light green (green) regions in the $v$-$\xi$ plane predict $n_s$ and $r$ values inside the 95\% (68\%) CL Planck contour (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua}. The Starobinsky conditions \eq{star2} and \eq{higgsabove2} are satisfied above the dashed lines.} \label{higgs_vxi_figure} \end{figure} \begin{figure}[!t] \centering \includegraphics[angle=0, width=12cm]{higgs_below.pdf} \caption{Observational parameter values as functions of $v$ for selected $\xi$ values. The pink (red) contour corresponds to the 95\% (68\%) CL contour given by the Planck collaboration (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua}.} \label{higgs_below} \end{figure} From the discussions in \sektion{nonminimal}, we expect that we need $\xi<0$ ($\xi>0$) to improve the fit to the observations for inflation below (above) the VEV. Indeed, figures \ref{higgs_vxi_figure} and \ref{higgs_below} show that for inflation below the VEV, the predictions move out of the 95\% CL Planck contour for $\xi\gtrsim10^{-3}$. For inflation above the VEV, the $\xi=0$ case interpolating between quadratic and quartic inflation is already out of the Planck range, as is the $\xi<0$ case which leads to an even redder spectrum and larger $r$. It was discussed in \sektion{nonminimal} that if certain constraints on $v$ and $\xi$ values are satisfied, a potential quadratic near its minimum or a potential quartic away from it correspond to special cases of the strong coupling attractor model of ref. \cite{Kallosh:2013tua}. The double-well potential satisfies both conditions. The Einstein frame potential can be written as \begin{equation} V_E(\chi)=\frac{V_0}{\xi^2v^4}(1-2\chi+\chi^2)\,. 
\end{equation} Therefore the discussion in \sektion{star} is directly applicable. In particular, \eq{star2} implies that for inflation below the VEV, as $|\xi|$ is increased for a given $v$, the predictions eventually approach the Starobinsky point given by \eq{starpoint}. As can be seen from \fig{higgs_vxi_figure}, this transition is rather abrupt for $v^2\ll8N_*$ and occurs near $|\xi|v^2=4N_*/3$. This means that if the Starobinsky point is excluded by future observations, the entire parameter space $v^2\ll8N_*$ will be ruled out for below VEV inflation with this potential. For above VEV inflation with $\xi v^2\ll4N_*/3$ (which includes the induced gravity case $\xi v^2=1$) and $\xi\gg1/(8 N_*)$, the inflationary predictions are given by \eq{alphaattractor}. Thus, the $n_s$-$r$ values are in the 95\% (68\%) CL contours for $\xi\gtrsim0.004$ (0.007) for the high-$N$ case, see figures \ref{higgs_vxi_figure} and \ref{higgs_above}. Note that for very low reheat temperatures the $n_s$-$r$ values can move out of the 68\% CL contour, see \fig{higgs_above_reheat}. For negligible values of $v$, as is the case in standard model Higgs inflation \cite{Bezrukov:2007ep}, the model is reduced to the non-minimally coupled quartic inflation model. Our results in this limit agree with previous results \cite{Fakir:1990eg,Salopek:1988qh,Okada:2010jf,Bezrukov:2013fca}. \begin{figure}[!t] \centering \includegraphics[angle=0, width=12cm]{higgs_above.pdf} \caption{Observational parameter values as functions of $v$ for selected $\xi$ values. The pink (red) contour corresponds to the 95\% (68\%) CL contour given by the Planck collaboration (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua}.} \label{higgs_above} \end{figure} Combining \eq{alphaattractor} with \eq{star2} we see that for above VEV inflation the Starobinsky point is obtained when: \begin{equation} \label{higgsabove2} \xi^2v^2\gg \frac{2N_*}{9} \text{ if } 0<\xi<\frac16\,,\quad \forall v \text{ if } \xi\gg\frac16\,. 
\end{equation} In the induced gravity limit, using \eq{induced}, the Einstein frame potential can be written as \begin{equation} V_E(\sigma)=\frac{V_0}{\xi^2 v^4}\left(1-\exp\left[\frac{-2\sigma}{\sqrt{6\alpha}}\right]\right)^2\,, \end{equation} coinciding with the $\alpha$-$\beta$ model of refs. \cite{Ferrara:2013rsa,Kallosh:2013yoa}. Thus the inflationary predictions approach the quadratic potential predictions given by \eq{quadratic} for $\xi\ll1/(16N_*)$, and \eq{alphaattractor} for larger $\xi$. The double-well potential in the induced gravity limit was previously considered for inflation in refs. \cite{Accetta:1985du,Lucchin:1985ip,Kaiser:1994vs,Cerioni:2009kn,Burns:2016ric}. Ref. \cite{Kaiser:1994vs} also calculated $n_s$ for $\xi v^2$ values between 0 and 1. Our results agree with ref. \cite{Kaiser:1994vs} for $\xi v^2\ll1$ but not for the induced gravity limit $\xi v^2\to1$. For the latter case our results agree with refs. \cite{Cerioni:2009kn,Burns:2016ric}. Finally, ref. \cite{Linde:2011nh} analyzed the double-well potential with non-minimal coupling in detail (see also ref. \cite{Tronconi:2017wps} for the $\xi>0$ case). The difference between our work and theirs is that we take $F(\phi)=m^2+\xi\phi^2=1+\xi(\phi^2-v^2)$ as explained in \sektion{nonminimal}, whereas they take $F(\phi)=1+\xi\phi^2$. As a consequence, although the predictions on the $n_s$-$r$ plane look generally similar, there are a few differences between our and their results. Namely, for inflation below the VEV their predictions approach the Starobinsky point given by \eq{starpoint} when $\xi v^2\to-1$, whereas our predictions approach it when \eq{star2} is satisfied. For above VEV inflation, their predictions approach \eq{starpoint} only for large values of $\xi$, whereas our predictions approach it when \eq{higgsabove2} is satisfied. \section{Small field inflation potentials}\label{small} Consider new inflation type models where the inflaton is below the VEV during inflation. 
For the double-well potential, we see that consistency with observations requires $|\xi|v^2\gtrsim4N_*/3$, so that very large $|\xi|$ values are needed for sub-Planckian values of the VEV $v$. On the other hand, a potential which is flatter near the origin could be compatible with observations even if $|\xi|\ll1/6$. As an example we take a simple generalization of the double-well potential: \begin{equation}\label{generalizedhiggs} V_J(\phi)=V_0\left[1-\left(\frac{\phi}{v}\right)^p\right]^2\,,\quad(p>2)\,. \end{equation} In the weak coupling limit $|\xi|\ll1/6$ and $|\xi|v^2\ll1$, we have $\phi\approx\sigma$. If $v^2\ll4N_*$ then $\sigma\ll v$ during inflation, and the Einstein frame potential can be written as \begin{equation}\label{sfipotential} V_E(\sigma)\approx V_0\left[1-\left(\frac{\sigma}{\mu}\right)^p-2\xi\sigma^2\right]\,, \end{equation} where we have defined $\mu=v/2^{1/p}$.\footnote{This potential also arises in some supersymmetric new inflation models \cite{Izawa:1996dv,Kawasaki:2003zv,Yamaguchi:2004tn,Senoguz:2004ky} and was analyzed in refs. \cite{Boyanovsky:2007ry,Destri:2009wn}.} For $\xi=0$, this small field inflation potential (also called hilltop potential) appears often in the literature, see for example refs. \cite{Boubekeur:2005zm,Lyth:2009zz,Martin:2013tda} and references therein. Using eqs. \ref{slowroll1}, \ref{nsralpha1} and \ref{efold1}, we obtain \begin{equation}\label{nsrminimal} n_s\approx1-\frac{2(p-1)}{(p-2)N_*}\,,\quad r\approx128\left(\frac{16\mu^{2p}}{p^2[4(p-2)N_*]^{2p-2}}\right)^\frac{1}{p-2}\,,\end{equation} which shows that $r$ is suppressed and $n_s$ tends to be smaller than the range favored by observations. To be more specific, let us consider the most optimistic high-$N$ case, where using eqs. \ref{efolds}, \ref{nstarandr}, and $\rho_*\approx\rho_e$, we have \begin{equation} N_*\approx64.7+\frac14\ln\frac{3\pi^2\Delta_\mathcal{R}^2 r}{2}\,.
\end{equation} Note that the energy scale during inflation is lower for lower $\mu$ values, which correspond to lower $r$, $N_*$ and therefore $n_s$ values as well. For $\mu=0.01$, $n_s$ can be inside the 95\% CL contour given by the Planck collaboration (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua} only for $p\ge10$, see \fig{sfi_figure}. If $\mu\lesssim6\times10^{-8}$, $n_s$ is outside the 95\% CL contour for any $p$ value. \begin{figure}[!t] \centering \includegraphics[angle=0, width=14cm]{small_field.pdf} \caption{$n_s$ values as functions of $\xi$ for $\mu=0.01$ and selected $p$ values. Left panel: high-$N$ scenario, right panel: low-$N$ scenario (see \sektion{nonminimal}). The pink (red) line corresponds to the 95\% (68\%) CL contour given by the Planck collaboration (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua}.} \label{sfi_figure} \end{figure} Repeating the calculation for the potential given by \eq{sfipotential}, we obtain \begin{equation}\label{sfinsr} n_s\approx1+\frac{8(p-1)\xi}{1-e^{4(p-2)\xi N_*}}-8\xi\,,\quad r\approx\frac{128\xi^2\mu^2(4\xi\mu^2/p)^{2/(p-2)}e^{8(p-2)\xi N_*}}{ \left(e^{4(p-2)\xi N_*}-1\right)^{2(p-1)/(p-2)}}\,. \end{equation} These expressions are in excellent agreement with the numerical results given in \fig{sfi_figure}, which were calculated using the Jordan frame potential given by \eq{generalizedhiggs}. They show that the $n_s$ values increase and the fit to observational data improves provided $\xi\sim1/[4(p-2)N_*]$. In particular, $n_s$ can be inside the 95\% CL contour for much smaller VEVs, namely for $\mu\gtrsim2\times10^{-9}$, $2\times10^{-17}$, $7\times10^{-23}$ for $p=6$, 8, 10 respectively. \section{Coleman-Weinberg potential} \label{cw} Symmetry breaking due to the Coleman-Weinberg mechanism \cite{Coleman:1973jx} has been associated with inflation since the early eighties, when the first new inflation models were proposed \cite{Linde:1981mu,Albrecht:1982wi,Shafi:1983bd}.
In these models the effective potential can be written as \cite{Albrecht:1984qt,Linde:2005ht} \begin{equation} \label{potpot} V_J(\phi)= A \phi^4 \left[\ln\left( \frac{\phi}{v}\right) -\frac{1}{4}\right] + \frac{A v^4}{4}\,. \end{equation} For a minimally coupled scalar, the inflationary predictions of this potential were analyzed in ref. \cite{Shafi:2006cs} (see also refs. \cite{Smith:2008pf,Rehman:2008qs,Martin:2013tda,Barenboim:2013wra,Okada:2014lxa,Kannike:2014mia,Senoguz:2015lba}). They are generally similar to the predictions of the double-well potential: Again, for $v^2\gg4N_*$ inflation occurs around the quadratic minimum $V\approx2Av^2\varphi^2$, leading to \eq{quadratic}. For inflation above the VEV, the predictions again interpolate between the quadratic and quartic limits, remaining out of the 95\% CL Planck contour. For inflation below the VEV, on the other hand, if $v^2\ll4N_*$ then $\phi\ll v$ as cosmological scales exit the horizon, so the potential is effectively of the new inflation (small field or hilltop inflation) type \begin{equation}\label{approxpot} V(\phi)\approx V_0\left[1-\left(\frac{\phi}{\mu}\right)^4\right]\,, \end{equation} which predicts $n_s\approx1-3/N_*$, $\alpha\approx-3/N_*^2$ and a tiny $r$ as given by \eq{nsrminimal}. Comparing with \eq{potpot}, the parameter $\mu$ in \eq{approxpot} is given by \begin{equation} \mu^4\approx-\frac{v^4}{4}\left(\ln\frac{\phi_*}{v}-\frac14\right)^{-1}\,, \end{equation} where using \eq{efold1} we obtain \begin{equation} \left(\frac{\phi_*}{v}\right)^2\approx-\frac{v^2}{16N_*}\left[W_{-1}\left(-\frac{v^2}{16N_*}\right)\right]^{-1}\,. \end{equation} Here, $W_{-1}$ is the lower branch of the Lambert function, satisfying $W(z)e^{ W(z)}=z$.
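Since $W_{-1}$ has no closed form, a short numerical sketch can make the two expressions above concrete. The snippet below is our own illustration (function names are not from the paper): it solves $W(z)e^{W(z)}=z$ on the lower branch by Newton iteration and evaluates $\phi_*/v$ and the effective hilltop scale $\mu$.

```python
import math

def lambert_w_minus1(z, tol=1e-12):
    # Lower branch W_{-1} of w*exp(w) = z, valid for -1/e < z < 0.
    assert -1.0 / math.e < z < 0.0
    w = math.log(-z) - math.log(-math.log(-z))  # asymptotic seed for small |z|
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / ((w + 1.0) * ew)
        w -= step
        if abs(step) < tol:
            break
    return w

def hilltop_scale(v, nstar):
    # (phi_*/v)^2 = z / W_{-1}(z) with z = -v^2/(16 N_*); then mu from the
    # relation above (positive because phi_* < v makes the log negative).
    z = -v**2 / (16.0 * nstar)
    phi_ratio = math.sqrt(z / lambert_w_minus1(z))
    mu4 = -(v**4 / 4.0) / (math.log(phi_ratio) - 0.25)
    return phi_ratio, mu4**0.25
```

For instance, $v=5$ and $N_*=60$ give $\phi_*/v\approx0.07$, confirming that the field sits far below the VEV when cosmological scales exit the horizon.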
Similarly to the double-well potential, although both the $v^2\ll4N_*$ and $v^2\gg4N_*$ limits are ruled out for inflation below the VEV, the $n_s$-$r$ values are in the 68\% CL Planck contour (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua} for a narrow range around $v^2\sim4N_*$ (specifically, between $v=20$ and 38 for the high-$N$ case), see figures \ref{cw_vxi_figure} and \ref{cw_below}. Note that all the figures in this section are also obtained for the high-$N$ case. \begin{figure}[!t] \begin{center} \scalebox{0.45}{\includegraphics{cw_below_negative_vxi.pdf}}\hspace{0.3cm} \scalebox{0.42}{\includegraphics{cw_below_positive_vxi.pdf}} \\ \vspace{0.3cm} \scalebox{0.54}{\includegraphics{cw_above_positive_vxi.pdf}} \end{center}\vspace{-0.5cm} \caption{Light green (green) regions in the $v$-$\xi$ plane predict $n_s$ and $r$ values inside the 95\% (68\%) CL Planck contour (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua}. The Starobinsky conditions \eq{star2} are satisfied above the dashed lines. The dotted line corresponds to $\xi v^2=1$. } \label{cw_vxi_figure} \end{figure} \begin{figure}[!b] \centering \includegraphics[angle=0, width=12cm]{cw_below.pdf} \caption{Observational parameter values as functions of $v$ for selected $\xi$ values. The pink (red) contour corresponds to the 95\% (68\%) CL contour given by the Planck collaboration (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua}.} \label{cw_below} \end{figure} From \fig{cw_vxi_figure} we see that below VEV inflation predictions are incompatible with the observational data for $|\xi|\ll1/6$ and $v^2\lesssim1$. This is expected from the discussion in \sektion{small} since under these conditions the Coleman-Weinberg potential is approximately given by \eq{sfipotential} with $p=4$ during inflation. As is clear from \fig{sfi_figure} and \eq{sfinsr}, a small non-minimal coupling cannot bring $n_s$ into agreement with observations. 
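This can be checked directly from \eq{sfinsr} with $p=4$. The sketch below (our own illustration, not the authors' code) scans the spectral index over the non-minimal coupling; the e-fold number $N_*\approx49$, appropriate for the very low energy scale (tiny $r$) of this scenario, is an assumption on our part.

```python
import math

def ns_hilltop(p, xi, nstar):
    # Spectral index from eq. (sfinsr); reduces to eq. (nsrminimal) as xi -> 0.
    x = 4.0 * (p - 2) * xi * nstar
    return 1.0 + 8.0 * (p - 1) * xi / (1.0 - math.exp(x)) - 8.0 * xi

# For p = 4 no choice of xi rescues the spectral index:
nstar = 49.0                 # assumed e-fold number for this low-scale scenario
best = max(ns_hilltop(4, xi, nstar) for xi in (i * 1e-4 for i in range(1, 2000)))
```

The maximum sits near $\xi\simeq1/[4(p-2)N_*]$ and stays below $n_s\approx0.945$ for this assumed $N_*$.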
For instance, $n_s$ remains $\le0.945$ for $v=0.01$ even under the most favorable instant reheating assumption. This result agrees with refs. \cite{Iso:2014gka,Kaneta:2017lnj}, but disagrees with ref. \cite{Panotopoulos:2014hwa}, where the quartic term in \eq{sfipotential} is erroneously neglected. The effect of the non-minimal coupling on the predictions of inflation below the VEV is similar for the Coleman-Weinberg and double-well potentials. Both potentials are quadratic near their minima, so that the Starobinsky point given by \eq{starpoint} is obtained when \eq{star2} holds. Numerically, approaching the Starobinsky point requires even larger $|\xi|v^2$ for the Coleman-Weinberg potential, as can be seen from \fig{cw_vxi_figure}. The predictions move out of the 95\% CL Planck contour for $\xi\gtrsim3\times10^{-3}$. \begin{figure}[!t] \centering \includegraphics[angle=0, width=12cm]{cw_above1.pdf} \caption{Observational parameter values as functions of $v$ for selected $\xi$ values. The pink (red) contour corresponds to the 95\% (68\%) CL contour given by the Planck collaboration (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua}.} \label{cw_above1} \end{figure} For inflation above the VEV, the $\xi=0$ case interpolating between quadratic and quartic inflation is already out of the Planck range, as is the $\xi<0$ case which leads to an even redder spectrum and larger $r$ (see \fig{cw_above1}). The $\xi>0$ case, analyzed in ref. \cite{Marzola:2015xbh}, is more subtle since the Coleman-Weinberg potential in the Jordan frame is not simply quartic away from the minimum but also contains a logarithmic factor. There is similarly a logarithmic factor in the Einstein frame potential written in terms of $\chi$: \begin{equation} \label{vechi3} V_E(\chi)\approx\frac{A}{2\xi^2}\left[-\ln(\xi v^2\chi)\right](1-2\chi)\,.
\end{equation} Thus, the predictions of \eq{alphaattractor} are approached only when the logarithmic factor can be treated as constant, that is, when the contribution from its derivative can be neglected. Taking the derivative of \eq{vechi3} and using $\chi_*\approx 3\alpha/(4N_*)$ (see \sektion{star}), we find that this requires \begin{equation} \xi v^2\ll\frac{4N_*}{3\alpha}\exp\left[-\frac{2N_*}{3\alpha}\right]\text{ for } \xi\gg\frac{1}{8N_*}\,. \end{equation} Numerically, the $n_s$-$r$ values are in the 95\% (68\%) CL contours for $\xi\gtrsim0.005$ (0.008), assuming the high-$N$ case and $v\ll1$, see figures \ref{cw_vxi_figure} and \ref{cw_above1}. The Starobinsky point given by \eq{starpoint} is obtained both for $\xi v^2\gg4N_*/3$ and for $\xi\gg1/6$ with extremely small values of $\xi v^2$, whereas the predictions move out of the observationally favored region in the $n_s$-$r$ plane as $\xi v^2$ approaches $4N_*/3$ (see \fig{cw_above2}). \begin{figure}[!t] \centering \includegraphics[angle=0, width=12cm]{cw_above2.pdf} \caption{Observational parameter values as functions of $v$ for selected $\xi$ values. The pink (red) contour corresponds to the 95\% (68\%) CL contour given by the Planck collaboration (Planck TT+lowP+BKP+lensing+ext) \cite{Ade:2015xua}.} \label{cw_above2} \end{figure} Using \eq{induced} we can write the Einstein frame potential in the induced gravity limit ($\xi v^2=1$) as follows \cite{Kannike:2015kda}: \begin{equation} V(\sigma)=\frac{A}{4\xi^2}\left(4\sqrt{\frac{\xi}{1+6\xi}}\sigma +\exp\left(-4\sqrt{\frac{\xi}{1+6\xi}}\sigma\right)-1\right)\,, \end{equation} where $\sigma>0$ ($\sigma<0$) during inflation above (below) the VEV.
Analysis of this potential \cite{Cerioni:2009kn,Kannike:2015kda,Karam:2017zno} shows that inflation below the VEV is not compatible with the current observational data, whereas above VEV inflation predictions interpolate between the linear potential and quadratic potential predictions for $v^2\ll2N_*$ and $v^2\gg2N_*$, respectively. The linear potential predictions \begin{equation}\label{linear} n_s\approx1-\frac{3}{2N_*}\,,\quad r\approx\frac{4}{N_*}\,,\quad \frac{\mathrm{d} n_s}{\mathrm{d}\ln k}\approx-\frac{3}{2N_*^2}\,, \end{equation} are in the 95\% CL Planck contour, which explains the light green region around the dotted line in \fig{cw_vxi_figure}. \section{Conclusion} \label{conclude} In this work we discussed the inflationary predictions of models with the Lagrangian \begin{equation} \frac{\mathcal{L}_J}{\sqrt{-g}}=\frac12(m^2+\xi\phi^2)R-\frac12g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V_J(\phi)\,, \end{equation} where the inflaton $\phi$ has a non-zero VEV $v$ after inflation, and $m^2=1-\xi v^2$. In terms of the redefined field $\varphi\equiv\phi-v$, the non-minimal coupling in the Lagrangian includes a linear term in $\varphi$ as well as a quadratic term. This leads to an attractor behaviour where the predictions approach the Starobinsky model predictions, not just for the well-known non-minimally coupled quartic potential case but also when the inflaton is near the minimum ($\varphi^2\ll v^2$) as cosmological scales exit the horizon. After discussing the conditions under which Starobinsky-like behaviour is obtained in \sektion{nonminimal}, we analyzed two prototypical symmetry breaking potentials: the double-well potential in \sektion{double} and the Coleman-Weinberg potential in \sektion{cw}. For each potential, we displayed the regions in the $v$-$\xi$ plane for which the spectral index $n_s$ and the tensor-to-scalar ratio $r$ values are compatible with the current observations.
If $\xi>0$ ($\xi<0$) for inflation above (below) the VEV, large portions of the $v$-$\xi$ plane lead to predictions compatible with the current constraints on $n_s$ and $r$, see figures \ref{higgs_vxi_figure} and \ref{cw_vxi_figure}. Most of these portions lead to predictions approaching the Starobinsky model predictions, so the allowed parameter space would shrink drastically if future observations rule out the Starobinsky model. In particular, if the upper bound on $r$ becomes $<0.002$, both the double-well and Coleman-Weinberg potentials would be ruled out as inflationary models, for any value of $v$ and $\xi$.\footnote{All the results in this article are based on the metric formulation of gravity. If a different approach such as the Palatini formulation is used, $r$ could be much smaller \cite{Bauer:2008zj,Jarv:2017azx}.} Although we have displayed the inflationary predictions for a wide range of $v$ and $\xi$ values, it is questionable whether the entire range can be theoretically justified. In particular, $v\gtrsim 1$ can be difficult to realize starting from a fundamental theory \cite{Baumann:2014nda}. The value of $\xi$ is ambiguous unless the inflationary part of the Lagrangian is embedded in a specific theory (see e.g. \cite{Muta:1991mw,Faraoni:2004pi}). However, for the well-known non-minimally coupled quartic potential solution, expanding the action around the vacuum reveals a cut-off scale $\Lambda= 1/\xi$ \cite{Burgess:2009ea,Barbon:2009ya,Hertzberg:2010dc}, and requiring this to be higher than the energy scale during inflation corresponds to $\xi\lesssim300$ using \eq{nstarandr} and \eq{starpoint}. On the other hand, ref. \cite{Bezrukov:2010jz} has emphasized that the cut-off scale depends on the background value of the field and can remain above the relevant energy scales during and after inflation. In any case, consistency with observations only requires $\xi\gtrsim0.005$ for this solution.
Starobinsky-like inflationary predictions also arise when \eq{star2} is satisfied. The observable part of inflation then occurs near the minimum where the potential is $\propto\varphi^2$, and $F(\varphi)\approx1+2\xi v\varphi$. This is another special case of the strong coupling attractor model \cite{Kallosh:2013tua}. Interestingly, even though the Starobinsky-like regime corresponds to $|\xi|v\gg 1$ (with $\xi>0$ and $\xi<0$ for inflation above and below the VEV, respectively), the cut-off remains at the Planck scale \cite{Kehagias:2013mya,Giudice:2014toa}. Thus, although consistency with observations requires large values of $|\xi|$ for sub-Planckian VEVs, no perturbative unitarity violation is expected at the scale $1/\xi$ around the vacuum, unlike in the non-minimally coupled quartic potential case.\footnote{The cut-off scale around the vacuum changes when \eq{star2} is satisfied, since expanding the Einstein frame Lagrangian for small values of $\varphi$, the leading order kinetic term is no longer canonically normalized but instead given by $1+6\xi^2 v^2\gg1$.} Finally, in \sektion{small}, we briefly considered a higher order version of the double-well potential, which for sub-Planckian VEVs corresponds to a small field (hilltop) inflation potential, with an additional quadratic term coming from the non-minimal coupling. Unlike the two above-mentioned potentials, for this potential inflation below a sub-Planckian VEV can be compatible with observations for a positive $\xi\lesssim0.005$, and a tiny $r$ is predicted. All the considered potentials predict a running of the spectral index that is too small to be observed in the near future, with $\mathrm{d} n_s/\mathrm{d}\ln k$ typically around $-2/N_*^2$. \section*{Acknowledgements} VNŞ thanks Diederik Roest for a useful discussion. This work is supported by T\"UB\.ITAK (The Scientific and Technological Research Council of Turkey) project number 116F385. \makeatletter \interlinepenalty=10000
\section{Introduction} Spreading processes are ubiquitous in nature: the contagion of diseases \citep{Anderson1991}, herd behavior in animals \citep{Sumpter2008}, the diffusion of innovations \citep{Rogers2010}, rumor spreading \citep{Daley1965}, the evolution of social movements \citep{Gonzalez-Bailon2013}, the propagation of hashtags in Twitter \citep{Alvarez_2015}, etc. All these processes share similar dynamics: in a population of initially neutral (disease-free, ignorant of some information, etc.) agents (humans, animals or even bots), some of them start carrying some information, pathogen, or behavior, i.e. they adopt this innovation. Through a transmission process they can pass it on to other agents, thereby starting the process of adoption diffusion. The diffusion of adoption has been extensively studied and modeled in several fields including Biology, Physics and the Social Sciences \citep{Goel2012,LopezPintado2008,RevModPhys.81.591,RevModPhys.87.925}. In general, new adopters have been in contact with one or several adopters, through two main mechanisms: in disease-like models \citep{Kermack1927,Weiss1971}, adoption takes place with an adoption probability per contact with an adopter which is constant irrespective of the number of adopters; in threshold-like models \citep{LopezPintado2008,Kermack1927,Weiss1971,Granovetter1978}, adoption happens only after a critical number of adopters has been reached. There are also models of ``generalized contagion'' \citep{Dodds2004}, in which both disease-like and threshold behaviors are special cases. However, while these models describe individual adoption probabilities, most of the related empirical research has been based on aggregated data, typically cumulative adoption curves \citep{Bass1969,Young2009}. Recent studies have focused on individuals' behavior, where the number of adopters accessed by each individual can be measured \citep{Milgram1969,Dasgupta2008,Romero2011,Gallup2012}.
These measurements have a direct connection with the form of the adoption probability. In this paper we explore the probability function obtained by \citep{Milgram1969} from a social experiment. They analyzed the correlation between the size of a group looking at the same point in the street and the number of passersby that joined the behavior of looking at that point. The results of the experiment can be fitted with a Hill function for the probability of adoption \citep{Gallup2012}. We will show that the shape of the adoption probability leads to two different behaviors depending on the parameter values: either a continuous or a discontinuous phase transition. This provides a simple model that describes both regimes within the same framework, depending only on two parameters, with a probability function linked to empirical data. \section{Results} An agent that has not yet adopted does so with some probability when interacting with an adopter, which turns her into an adopter too. After adoption, the agent is ``recovered'' at a certain rate $\mu$ and becomes again a potential adopter. Here, we study the consequences of the form of the adoption probability. The transition from adopter to non-adopter is assumed to occur at a constant rate $\mu$. In the standard SIS (susceptible-infected-susceptible) model \citep{Anderson1991}, the adoption probability (from susceptible to infected, S $\to$ I) $\beta$ is constant for each interaction with an adopter. In general, the adoption probability can be a general function of the number of adopted neighbors $n$: \begin{eqnarray}\label{pcomp} P(n) = \lambda' f(n) \; .
\end{eqnarray} In this contribution we will consider the function proposed by Ref.~\citep{Gallup2012}, \begin{eqnarray}\label{Gallup_eq} f(n) = \frac{n^a}{T^a + n^a} \; , \end{eqnarray} where $\lambda'$ is the persuasion capacity (equivalent to $\beta=\lambda'$ for $T=0$ and $a=1$); $a$ is the adoption coefficient (or Hill coefficient), which controls how fast or slow this probability increases with $n$; and $T$ is the adoption threshold, which fixes the number of adopters needed to reach half the persuasion limit. $\lambda'$, $T$ and $a$ are real positive numbers. This type of function is known as a Hill function and has been used in models of population growth and decline \citep{Basios2016a,Gonze2013,santillan2008}. The evolution of such a system in an annealed degree regular network (a network where all the nodes have the same number of neighbors, or degree, $k$ but where they are chosen randomly in the population at each interaction) is determined by \begin{eqnarray} \frac{d\rho}{dt'}&=&-\mu \rho + (1-\rho) A , \end{eqnarray} where $\rho$ is the density of adopters and $A$ is the probability of adoption given the density $\rho$, \begin{eqnarray} A = \sum_{n=0}^k P(n) \left( \begin{array} {c} k \\ n \end{array}\right)\rho^n (1-\rho)^{k-n} \; . \end{eqnarray} The number of infected neighbors is assumed to be binomially distributed, with a success probability equal to the global density of infected agents. Without loss of generality we get rid of the parameter $\mu$ by changing the timescale and rescaling the persuasion capacity $\lambda'$, \begin{eqnarray} \; t=\mu t'\;\\ \; \lambda=\frac{\lambda'}{\mu}\;, \end{eqnarray} which is equivalent to setting $\mu=1$. The equilibrium solutions for the system are determined by the condition \begin{eqnarray}\label{eq} -\rho^* + (1-\rho^*) A^* =0 \; .
\end{eqnarray} Given particular values of $a$ and $T$, there are at most three possible solutions for $\rho^*$ (Figure~\ref{diagram}): i) $\rho^*=0$, corresponding to the adoption-free regime, ii) $\rho^*=\rho^{up}$, represented by the upper branch, and iii) $\rho^*=\rho^{down}$, the lower branch. The stability of the fixed points can be easily checked by linear stability analysis. The solution $\rho^*=0$ changes stability at \begin{eqnarray}\label{lcorte} \lambda_0=\frac{1}{kf(1)}, \end{eqnarray} being stable for $\lambda<\lambda_0$ and unstable otherwise. As can be seen in Figure~\ref{diagram}, if the solution $\rho^*=0$ intersects the upper branch, then that branch is stable and the solution $\rho^*=0$ changes stability via a transcritical bifurcation. Then for $\lambda>\lambda_0$ and for any initial $\rho_0\neq 0$ the system will end up in the fixed point $\rho^{up}$ (Figure \ref{diagram}\ref{a}). If, on the contrary, the solution $\rho^*=0$ intersects the lower branch, the latter is unstable and there is a region $\lambda_1<\lambda<\lambda_0$ for which two stable solutions ($\rho^*=0$ and $\rho^{up}$) coexist, separated by the unstable solution $\rho^{down}$ (Figure \ref{diagram}\ref{c}). For $\lambda=\lambda_1$ the two fixed points of opposite stability annihilate through a saddle-node bifurcation, while at $\lambda=\lambda_0$ we still have a transcritical bifurcation. Therefore, in that region the final state of the system will be the upper-branch solution $\rho^{up}$ if the initial density $\rho_0>\rho^{down}$ and $0$ otherwise, so we can observe hysteresis. For $\lambda>\lambda_0$ and for any initial $\rho_0>0$ the system will end at $\rho^{up}$. Note that $\lambda_0$ is the critical point only for continuous transitions, while for discontinuous ones it is $\lambda_1$. The sign of the derivative of the $\rho^*$ function at the intersection of $\rho^*=0$ and the other branches determines the type of transition.
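The fixed-point structure just described is easy to explore numerically. The following sketch (our own illustration, with function names of our choosing) builds the binomial average $A(\rho)$ for the Hill probability, relaxes the mean-field equation to its attractor by forward Euler integration, and evaluates $\lambda_0$ from \eq{lcorte}.

```python
import math

def f(n, a, T):
    # Hill adoption function, eq. (Gallup_eq); f(0) = 0.
    return n**a / (T**a + n**a) if n > 0 else 0.0

def A(rho, lam, k, a, T):
    # Adoption probability at density rho: binomial average of P(n) = lam*f(n).
    return sum(lam * f(n, a, T) * math.comb(k, n) * rho**n * (1 - rho)**(k - n)
               for n in range(k + 1))

def attractor(lam, k, a, T, rho0, steps=20000, dt=0.01):
    # Euler integration of d(rho)/dt = -rho + (1 - rho) * A(rho) from rho0.
    rho = rho0
    for _ in range(steps):
        rho += dt * (-rho + (1.0 - rho) * A(rho, lam, k, a, T))
    return rho

def lambda_0(k, a, T):
    # Stability threshold of the rho* = 0 solution, eq. (lcorte).
    return 1.0 / (k * f(1, a, T))
```

For the continuous case of panel \ref{a} ($k=10$, $a=1.2$, $T=3$) this gives $\lambda_0\approx0.47$; starting from a small seed, the density relaxes to $0$ below $\lambda_0$ and to the upper branch above it.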
If the derivative is positive ($\rho^*=0$ intersects $\rho^{up}$), the transition is continuous, while if it is negative ($\rho^*=0$ intersects $\rho^{down}$), the transition is discontinuous (Eqs.~(\ref{cont}) and (\ref{disc}), respectively). \begin{subequations}\label{conditions} \begin{eqnarray} \; \left.\frac{d\rho^*}{d\lambda}\right|_{\lambda_0}>0 \; \Longrightarrow \; f(2) < \frac{2k}{k-1} f(1) \; \label{cont} \\ \; \left.\frac{d\rho^*}{d\lambda}\right|_{\lambda_0}<0 \; \Longrightarrow \; f(2) > \frac{2k}{k-1} f(1) \; . \label{disc} \end{eqnarray} \end{subequations} For the particular case $f(2)=\frac{2k}{k-1} f(1)$, both $\lambda_0$ and $\lambda_1$ coincide. For this condition one can show, by approximating Eq.~\ref{eq} to third order in $\rho^*$, that the bifurcation diagram is that of a supercritical pitchfork bifurcation, i.e., the equation is equivalent to $\dot{x}=rx-x^3$ (Figure \ref{diagram}\ref{b}). In this case, the final fate of the system is similar to the continuous case. For $\lambda<\lambda_0$ there is no global adoption and the system ends at $\rho^*=0$, while for $\lambda>\lambda_0$ any initial condition $\rho_0\ne0$ will bring the system to $\rho^{up}$. Simulations using a microscopic model are also included in the plots of Figure \ref{diagram}. This microscopic model simulates SIS dynamics on a degree regular network with $k=10$ that changes at each timestep. At each step, an agent is selected; if it is an adopter, it recovers with probability $\mu$; if not, it adopts with probability $P(n)$, where $n$ is the number of adopters among $k$ randomly chosen agents. There is an initial seed of infected agents which we fix to $1\%$ of the total population. In panels \ref{a} and \ref{b} of Figure \ref{diagram} the results of the simulations are shown as blue dots over the analytical solution. For panel \ref{c}, simulations are shown in panel \ref{d}.
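A minimal implementation of this annealed microscopic model might look as follows. This is a sketch under our own choices of population size, recovery probability and number of sweeps; recall that $\lambda=\lambda'/\mu$, so the per-contact adoption probability is $\lambda\mu f(n)$.

```python
import random

def simulate(lam, k=10, a=1.2, T=3.0, N=1000, mu=0.1, sweeps=200, seed=7):
    # Annealed dynamics: at each update an agent is picked at random; adopters
    # recover with probability mu, non-adopters adopt with probability
    # lam * mu * f(n), where n counts adopters among k freshly drawn contacts.
    rng = random.Random(seed)
    adopter = [False] * N
    for i in range(N // 100):                 # 1% initial seed of adopters
        adopter[i] = True
    for _ in range(sweeps * N):
        i = rng.randrange(N)
        if adopter[i]:
            if rng.random() < mu:
                adopter[i] = False
        else:
            n = sum(adopter[rng.randrange(N)] for _ in range(k))
            if rng.random() < lam * mu * n**a / (T**a + n**a):
                adopter[i] = True
    return sum(adopter) / N
```

With these (assumed) parameters the subcritical run dies out, while a run well above $\lambda_0$ fluctuates around the mean-field upper branch.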
As can be seen, the system exhibits hysteresis in the region $\lambda_1 < \lambda < \lambda_0$, where there is bistability. The system ends at $\rho^{up}$ or at $\rho^*=0$, depending on the initial condition. Fig.~\ref{diagram} also illustrates the two different kinds of transitions. The density of adopters stays at zero until a critical value of $\lambda$, where the system goes to $\rho^{up}$ by either a continuous transition or a discontinuous transition. As can be observed, for a given value of $T$, the size of the jump increases with $a$. For values of $a \sim 1$ the system resembles the epidemic-like models, while for values $a>1$ the transition is threshold-like. \begin{figure*}[h] \begin{center} \renewcommand{\thesubfigure}{A} \subfloat[\label{a}]{\includegraphics[width=0.45\textwidth]{figures/diagr_a12pos.eps}} \renewcommand{\thesubfigure}{B} \subfloat[\label{b}]{\includegraphics[width=0.45\textwidth]{figures/diagr_a153pos.eps}}\\ \renewcommand{\thesubfigure}{C} \subfloat[\label{c}]{\includegraphics[width=0.45\textwidth]{figures/diagr_a18pos.eps}} \renewcommand{\thesubfigure}{D} \subfloat[\label{d}]{\includegraphics[width=0.45\textwidth]{figures/histeresis.eps}} \caption{Complete solutions of Eq.~(\ref{eq}) are shown in black for $T=3$, $k=10$ and $a=1.2, 1.53, 1.8$ (panels \ref{a}, \ref{b} and \ref{c} respectively). Continuous lines represent stable solutions. Note that when the solution $\rho^*=0$ meets the upper branch at $\lambda_0$, the transition is continuous (\ref{a}). When it meets the lower branch (\ref{c}), two stable solutions, 0 and $\rho^{up}$, coexist in the region $\lambda_1<\lambda<\lambda_0$, and the transition is discontinuous. Simulations of the microscopic model are shown as blue points in panels \ref{a} and \ref{b}. For panel \ref{c} the simulation is shown in panel \ref{d}, which magnifies the region $\lambda_1<\lambda<\lambda_0$, showing the hysteresis of the system. Panel \ref{b} illustrates the case $\lambda_0=\lambda_1$.
\label{diagram}} \end{center} \end{figure*} For our choice of $f(n)$ (Eq.~\ref{Gallup_eq}), the conditions in Eqs.~\ref{conditions} give bounds on the parameter regions for which the transition is of one type or the other: \begin{subequations}\label{conditions2} \begin{eqnarray} \text{Cont.: } && T < \left( \frac{2^a (k+1)}{2^a(k-1)-2k} \right)^{\frac{1}{a}} \; \label{cont2} \\ \text{Disc.: } && T > \left( \frac{2^a (k+1)}{2^a(k-1)-2k} \right)^{\frac{1}{a}} \; . \label{disc2} \end{eqnarray} \end{subequations} Fig.~\ref{discontR} shows this parameter space for $k=5,10,20$. The white region represents the parameter combinations leading to a continuous transition, while the light gray region corresponds to a discontinuous transition. The dark gray region is delimited by the condition $\lambda_0 \leq1$ on Eq. (\ref{lcorte}), that is, that the value where both curves meet is in the range $\lambda \le 1$, \begin{eqnarray}\label{condition0} T< (k-1)^{\frac{1}{a}} \; . \end{eqnarray} When this constraint is violated, as in the dark gray region of the plot, there is only one possible solution, $\rho^*=0$. Both conditions together, Eqs.~\ref{conditions2} and \ref{condition0}, predict the values of the parameters for which the model shows one type of transition or the other, or none. For example, in panel \ref{3b} of Figure~\ref{discontR}, a continuous transition is allowed for all values of $a \in [1,2]$ and some values of $T \in [0,10]$, while the discontinuous transition is only possible for values of $a$ higher than 1.25 and values of $T$ higher than 1.5. As can be seen in Figure~\ref{discontR}, for small values of $k$ there are only continuous transitions, while for higher values of $k$ discontinuous transitions are also allowed. Moreover, the higher the value of $k$, the larger the part of parameter space that allows for $\rho \neq 0$ solutions.
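The bounds in Eqs.~\ref{conditions2} and \ref{condition0} translate into a simple classifier of the phase diagram (a sketch of our own; function names are not from the paper):

```python
def critical_T(a, k):
    # Boundary curve of eq. (conditions2); returns None when 2^a (k-1) <= 2k,
    # in which case only continuous transitions are possible.
    den = 2**a * (k - 1) - 2 * k
    if den <= 0:
        return None
    return (2**a * (k + 1) / den)**(1.0 / a)

def transition_type(a, T, k):
    # Classify the transition for Hill parameters (a, T) on a k-regular network.
    if T >= (k - 1)**(1.0 / a):          # eq. (condition0) violated: lambda_0 > 1
        return "none"
    Tc = critical_T(a, k)
    return "continuous" if Tc is None or T < Tc else "discontinuous"
```

For $k=10$ and $T=3$ this yields a continuous transition at $a=1.2$ and a discontinuous one at $a=1.8$, and places the pitchfork boundary very close to $a=1.53$, matching the three panels of Fig.~\ref{diagram}.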
\begin{figure*}[h] \setcounter{subfigure}{0} \begin{center} \renewcommand{\thesubfigure}{A} \subfloat[\label{3a}]{\includegraphics[width=0.45\textwidth]{figures/fig3k5.eps}} \renewcommand{\thesubfigure}{B} \subfloat[\label{3b}]{\includegraphics[width=0.45\textwidth]{figures/fig3k10.eps}}\\ \renewcommand{\thesubfigure}{C} \subfloat[\label{3c}]{\includegraphics[width=0.45\textwidth]{figures/fig3k20.eps}} \caption{Parameter space for a regular random network with $k=5,10,20$ (panels \ref{3a}, \ref{3b} and \ref{3c} respectively). The white area corresponds to continuous transitions, while the light gray area corresponds to discontinuous transitions. Both areas are separated by the curve given by equation \eqref{conditions2}, corresponding to a supercritical pitchfork bifurcation diagram. In the dark gray area only the solution $\rho^*=0$ exists, i.e., there is no global adoption.\label{discontR}} \end{center} \end{figure*} Finally, we perform simulations to characterize numerically the behavior of the system using a similar microscopic model on a quenched regular random network. Again, at each timestep an agent is selected; if she is an adopter, she recovers with probability $\mu$; if not, she adopts with probability $P(n)$, where $n$ now refers to the number of adopters in her network neighborhood, which is fixed. There is an initial seed of adopters equal to $1\%$ of the total population. The long-term values of the fraction of adopters $\rho_{\infty}$ are shown in Figure~\ref{simR} for 10 realizations and different values of $a$ for $T=1.2,3$. The realizations are not averaged, in order to show their low dispersion (inset of upper panel in Figure~\ref{simR} and lower panel of Figure~\ref{simR}). \begin{figure*}[h] \begin{center} \includegraphics[width=.6\textwidth]{figures/T12.eps} \includegraphics[width=.6\textwidth]{figures/PcompT3.eps} \caption{Simulations of the microscopic model on a degree regular random network with degree $k=10$. Individuals might adopt with probability $P(n)$.
Upper panel shows the results for $T=1.2$ and lower panel for $T=3$ for different values of $a$. For $T=1.2$ the transitions are continuous for any $a$ (inset, same color code as lower panel). The upper panel shows the region of the critical point for the simulations of the microscopic model on the quenched network (pink), the simulations on the annealed network (blue) and the exact solution (black line) of the equation for $a=1.0$, respectively. For $T=3$ there are continuous or discontinuous transitions depending on the value of $a$.\label{simR}} \end{center} \end{figure*} As Fig.~\ref{discontR} indicates, for $T=1.2$ and $k=10$ the system always exhibits a continuous transition, regardless of the value of $a\in [1,2]$ (inset of the upper panel). For $T=3$ and $k=10$, for values of $a$ higher than 1.5 the transition is discontinuous, as shown in Figure~\ref{discontR}. The upper panel of Figure \ref{simR} zooms in on the region of the critical point for the case of $a=1.0$. It shows the simulations of the microscopic model on a quenched degree regular random network (pink), on an annealed degree regular random network (blue) and the exact solution of the equation (black). As can be seen, there is a small discrepancy for the model on the quenched version of the network. This is because, when the topology is fixed, correlations appear; in particular, the approximation that adopters are binomially distributed among the neighbors with a success probability equal to the global fraction of adopters breaks down. As in the cases presented above, the simulations on the annealed network and the exact solution agree. For both microscopic models, the type of transition is predicted by the parameter space represented in Figure \ref{discontR}.
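The annealed version of the microscopic dynamics can be sketched in a few lines. For illustration we assume an adoption probability $P(n)=\min(1,\lambda (n/T)^a)$, consistent with $f(n)=(n/T)^a$, and a recovery probability $\mu=0.2$; these parameter values, the seed size and the function name are our own choices, not taken from the text:

```python
import numpy as np

def simulate_annealed(lam, a=1.0, T=3.0, k=10, mu=0.2,
                      N=1000, sweeps=200, seed_frac=0.01, rng_seed=0):
    """Asynchronous annealed dynamics: at each step pick an agent;
    adopters recover with prob. mu, non-adopters adopt with prob.
    P(n) = min(1, lam*(n/T)**a), n = adopters among k random agents."""
    rng = np.random.default_rng(rng_seed)
    state = np.zeros(N, dtype=bool)
    state[: int(seed_frac * N)] = True          # initial seed of adopters
    for _ in range(sweeps * N):
        i = rng.integers(N)
        if state[i]:
            if rng.random() < mu:
                state[i] = False
        else:
            # annealed network: neighbors are redrawn at every step
            n = state[rng.integers(N, size=k)].sum()
            if rng.random() < min(1.0, lam * (n / T) ** a):
                state[i] = True
    return state.mean()

lo = simulate_annealed(lam=0.02)   # well below threshold: seed dies out
hi = simulate_annealed(lam=0.9)    # well above threshold: endemic branch
print(lo, hi)
```

Well below the threshold the seed dies out, while well above it the system settles on the endemic branch $\rho^{up}$, consistent with the phase diagram.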
\section{Conclusions} We have analyzed a model of social contagion (SIS-like) on degree regular random networks with an adoption probability measured in empirical data in \citep{Gallup2012} that interpolates between the cases of epidemic-like spreading and threshold-like dynamics. We show that this simple model displays both continuous and discontinuous transitions from a disease-free state to an endemic state. We find the values of the parameters that separate these transitions, as well as the critical persuasion capacities $\lambda$, by applying standard linear stability and bifurcation theory tools. The simplicity of the model allows for relaxing some of the assumptions considered here. For example, the stability condition given by Eq.~\ref{lcorte} resembles the structure of the critical point in the SIS model on uncorrelated random networks with arbitrary degree distributions. Following this similarity, we conjecture that the threshold of our model in complex networks will be given by $\lambda_0= \langle k \rangle/(\langle k^2 \rangle f(1))$. Thus degree heterogeneity will lead to a vanishing threshold unless $f(1) \to 0$ as $N\to \infty$. This can be achieved, for example, by considering that $T= c\, k_{\max}$. Alternatively, an interesting variation is to consider that the adoption probability depends not on the absolute number of adopters but on the fraction of them. Besides, heterogeneity can emerge not only at the degree level, but also in the distributions of the adoption threshold $T$ and the adoption coefficient $a$; furthermore, these can be correlated with the degree of the nodes. How heterogeneity affects the nature of the transition needs to be explored in detail. Another possible line of research is adding non-Markovianity to the dynamics, for example by letting the adoption probability depend not only on the state of the neighboring agents, but also on some internal time which takes into account when an agent tries to convince another one to adopt the innovation.
Our results highlight that neither the structure of the interaction network nor the dynamics alone is responsible for the type of transition that the system displays. Moreover, this simplified framework is able to capture these seemingly disparate types of transitions, which are usually taken as a signature of different dynamics. Finally, the choice of the adoption probability curve is based on empirical measurements from \citep{Gallup2012}, which highlights the relevance of our results for realistic modeling of social phenomena.
\section{Introduction} Born's interpretation of $|\psi|^2$ as the position probability density is one of the most fundamental axioms of quantum mechanics, as it provides a link between the mathematical formalism and empirical results \cite{M. Born}. This axiom has been remarkably successful in predicting position probability densities in non-relativistic quantum mechanics. Although in the relativistic regime the quadratic relation between the position probability density and the wave function has been confirmed by recent high-accuracy single-photon multi-slit experiments \cite{Sinha 2010, Hickmann 2011, Gagnonn 2014, Sawant 2014, Kauten 2017}, a satisfactory mathematical expression for the position probability density of relativistic bosons has not yet been found. In the simplest case, finding a well-defined position probability density for free spinless particles is a long-standing problem (see, e.g., \cite{Weinberg 1995, Nikolic 2007}): the time component of the well-known Klein-Gordon conserved current, ${J_{KG}^\mu}{=}i({\phi}^{*}\partial^\mu \phi -\phi {\partial^\mu \phi}^{*})$, may be negative in some regions of space-time and cannot be interpreted as a position probability density \cite{footnote1}.
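The indefiniteness of $J^0_{KG}$ is easy to exhibit explicitly: for a superposition of a positive-frequency and a negative-frequency plane wave with different momenta and unequal amplitudes, the density oscillates in $x$ and changes sign. A minimal numerical check at $t=0$ (units $\hbar=c=m=1$; the amplitudes and momenta are our own illustrative choices):

```python
import numpy as np

# superposition of a positive-frequency mode (momentum p1) and a
# negative-frequency mode (momentum p2); units hbar = c = m = 1
m, p1, p2, a, b = 1.0, 0.0, 2.0, 1.2, 1.0
E1, E2 = np.sqrt(p1**2 + m**2), np.sqrt(p2**2 + m**2)

x = np.linspace(-10, 10, 4001)
phi  = a * np.exp(1j * p1 * x) + b * np.exp(1j * p2 * x)   # t = 0 snapshot
# d(phi)/dt at t = 0: the +frequency mode carries -iE1, the -frequency +iE2
dphi = -1j * E1 * a * np.exp(1j * p1 * x) + 1j * E2 * b * np.exp(1j * p2 * x)

# time component of the Klein-Gordon current
J0 = (1j * (np.conj(phi) * dphi - phi * np.conj(dphi))).real
print(J0.min() < 0 < J0.max())  # True: J0_KG changes sign in x
```

Since $J^0_{KG}$ takes both signs, it cannot serve as a probability density.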
One may suggest using $|\phi|^2$ as the probability density, in analogy with the non-relativistic theory \cite{Kowalski, Rembielinski, Horwitz 1985}; in this case it is easy to see that the Klein-Gordon equation, \begin{equation}\label{1} \Box \phi+m^2\phi=0, \end{equation} leads to the following continuity equation for $|\phi|^2$ \cite{Kowalski}: \begin{equation}\label{2} \partial_t\rho_B+\nabla.\textit{\textbf{J}}_{B}=0, \end{equation} where \begin{equation}\label{3} \rho_B=|\phi(x)|^2=N\int \tilde{\phi}(p) \tilde{\phi}^{*}(k) e^{i(p-k).x}\ d^4p\ d^4k, \end{equation} \begin{equation}\label{4} \textit{\textbf{J}}_{B}=N\int \tilde{\phi}(p) \tilde{\phi}^{*}(k)\ e^{i(p-k).x}\ \textit{\textbf{u}}(p,k)\ d^4p\ d^4k, \end{equation} \begin{equation}\label{5} \textit{\textbf{u}}(p,k)=\frac {\textit{\textbf{p}}+\textit{\textbf{k}} }{p_0+k_0}, \end{equation} and $\tilde\phi(p)$ is the Fourier transform of the wave function. It should be noted that, although $|\phi|^2$ is non-negative and conserved, it cannot be considered as a position probability density: due to Lorentz length contraction, the probability density cannot be a scalar \cite{Sakurai}. In other words, $J_B^\mu=(\rho_B,\textit{\textbf{J}}_{B})$ is not a four-vector and therefore cannot be interpreted as a relativistic probability current density \cite{Horwitz 1985}. In addition, Born's probability density leads to faster-than-light particle propagation \cite{Peskin 1995, Padmanabhan 2016}. In principle, a reasonable probability current must satisfy the following conditions: \\ I) Lorentz transformation: \ $J^{\mu'}=\Lambda_\mu^{\mu'} J^\mu $, \\ II) probability conservation: \ $\partial_\mu J^\mu=0$, \\ III) future-orientation: \ $J^0\geq 0$, \\ IV) causal propagation: \ $J^\mu J_\mu\geq 0$. \\ The last condition is necessary as it ensures the causal propagation of particles. In fact, several other currents have been suggested for the Klein-Gordon equation \cite{Mostafazadeh 2006, E. Marx 1972}, all of which fail to satisfy at least one of the above conditions. The aim of this paper is to propose a proper expression for the relativistic probability current that satisfies all of the aforementioned conditions. \section{Position Distribution} According to equations (\ref{3}) and (\ref{4}), we suggest the following expression as the relativistic probability current \cite{footnote2}: \begin{equation}\label{6} J^\mu{=}\int \tilde{\phi}(p) \ \tilde{\phi}^{*}(k)\ e^{i(p-k).x}\ u^\mu(p,k)\ d^4p\ d^4k, \end{equation} where $u^\mu(p,k)$ is an unknown function that must be determined by theoretical constraints. In this regard, condition (I) implies that $u^\mu(p,k)$ is a four-vector. The general form of a four-vector constructed from $p$ and $k$ is given by \begin{equation}\label{7} u^{\mu}(p,k)=\alpha (p^\mu+k^\mu) + \beta (p^\mu-k^\mu), \end{equation} where $\alpha$ and $\beta$ are scalar coefficients. Next, the conservation condition (II) leads to $\beta =0$. Also, in principle, the coefficient $\alpha$ should be determined using conditions (III) and (IV). This procedure is straightforward in $(1+1)$-dimensions, and a possible choice is (see appendix A) \begin{equation}\label{8} \alpha(p,k)= \frac{ \xi}{\sqrt{(p+k)^2}}, \end{equation} where $\xi=\frac{1}{2}(\frac{k^0}{|k^0|}+\frac{p^0}{|p^0|})$. So finally we obtain $u^{\mu}(p,k)$ as follows: \begin{equation}\label{9} u^{\mu}(p,k)= \xi\frac{p^{\mu}+k^{\mu}}{\sqrt{(p+k)^2}}. \end{equation} Equation (\ref{9}) can be rewritten as $u^{\mu}=|\xi|\gamma(1,\textit{\textbf{u}})$, in which $\textit{\textbf{u}}$ is the velocity-vector defined in equation (\ref{5}) and $\gamma=1/\sqrt{1-{\textit{\textbf{u}}}^2}$ is the corresponding Lorentz factor. In fact, the expression (\ref{6}) is the simplest covariant generalization of the equations (\ref{3}) and (\ref{4}).
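The properties of $u^\mu$ in Eq.~(\ref{9}) can be checked directly: for two on-shell momenta with positive energies, $u^\mu$ is a future-directed unit timelike vector ($u^\mu u_\mu=\xi^2=1$, $u^0>0$), while for opposite energy signs $\xi=0$ and the contribution drops out. A quick numerical sketch in $(1+1)$ dimensions with metric signature $(+,-)$ (the helper names are ours):

```python
import numpy as np

def onshell(p1, m=1.0, sign=+1):
    """(1+1)-momentum on the mass shell, p0 = sign * sqrt(p1^2 + m^2)."""
    return np.array([sign * np.hypot(p1, m), p1])

def u_mu(p, k):
    """Eq. (9): u = xi * (p + k) / sqrt((p + k)^2), metric (+,-)."""
    xi = 0.5 * (np.sign(p[0]) + np.sign(k[0]))
    if xi == 0.0:                      # mixed energy signs: vanishes
        return np.zeros(2)
    s = p + k
    return xi * s / np.sqrt(s[0]**2 - s[1]**2)

rng = np.random.default_rng(1)
for _ in range(100):
    p, k = onshell(rng.normal()), onshell(rng.normal())
    u = u_mu(p, k)
    norm2 = u[0]**2 - u[1]**2
    assert abs(norm2 - 1.0) < 1e-10 and u[0] > 0  # unit, future-directed

print("u.u = 1 and u^0 > 0 for all sampled positive-energy pairs")
```

This confirms conditions (III) and (IV) mode by mode on the positive-energy mass shell.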
The only difference between this expression and the Born probability current, $J_B^\mu$, is the factor $|\xi|\gamma$: \begin{equation}\label{10} \rho(x)=N\int |\xi|\gamma \tilde\phi(p) \tilde\phi^{*}(k)\ e^{i(p-k).x}\ d^4p\ d^4k, \end{equation} \begin{equation}\label{11} \textit{\textbf{J}}(x)=N\int |\xi|\gamma \textit{\textbf{u}}\ \tilde\phi(p) \tilde\phi^{*}(k) e^{i(p-k).x}\ d^4p\ d^4k. \end{equation} The factor $\gamma$ arises naturally in accordance with Lorentz contraction, and the factor $|\xi|$ prohibits the occurrence of \textit{Zitterbewegung} behavior \cite{Schrodinger, Krekora}. In appendix A, it is shown that, for massive particles in $(1+1)$-dimensions, equations (\ref{10}) and (\ref{11}) can be rewritten in the position representation as follows: \begin{equation}\label{12} \rho=| {\mathcal{D}}^+ \phi_+|^{2}+| {\mathcal{D}^-}\phi_+|^{2}+|{\mathcal{D}^+}\phi_- |^{2}+| {\mathcal{D}^-}\phi_- |^{2}, \end{equation} \begin{equation}\label{13} J=\left(| {\mathcal{D}}^+ \phi_+|^{2}-| {\mathcal{D}^-}\phi_+|^{2}+|{\mathcal{D}^+}\phi_- |^{2}-| {\mathcal{D}^-}\phi_- |^{2}\right) c, \end{equation} where $\phi_{\pm}$ are the positive- and negative-frequency components of the wave function, $\phi=\phi_{+}+\phi_-$, and ${\mathcal{D}}^{\pm}$ are pseudo-differential operators defined as follows: \begin{equation}\label{14} {\mathcal{D}^{\pm}}\equiv \sqrt{\frac{1}{2} (\sqrt{1-\lambda_c^2\frac{d^2}{dx^2}}{\mp}i\lambda_c\frac{d}{dx})}, \end{equation} in which $\lambda_c\equiv\hbar/mc$ is the Compton wavelength. From equations (\ref{12}) and (\ref{13}) it is clear that the probability density is unambiguously positive definite and that $|J/{\rho}|\leq c$. When the wave function has only a positive-energy part \cite{Newton wave funcion}, $\phi=\phi_+$, the Klein-Gordon equation leads to \begin{equation}\label{15} i\hbar \frac{\partial \phi}{\partial t} =\sqrt{-\nabla^2+m^2} \phi.
\end{equation} and equations (\ref{12}) and (\ref{13}) reduce to the following simpler forms: \begin{equation}\label{16} \rho=| {\mathcal{D}}^+ \phi|^{2}+| {\mathcal{D}^-}\phi|^{2}, \end{equation} \begin{equation}\label{17} J=\left(| {\mathcal{D}}^+ \phi|^{2}-| {\mathcal{D}^-}\phi|^{2}\right) c. \end{equation} In this case, in the non-relativistic regime ($c\to \infty$), equation (\ref{15}) reduces to the non-relativistic Schr\"{o}dinger equation, while equations (\ref{16}) and (\ref{17}) reduce to the non-relativistic probability density, $|\phi|^2$, and the conventional Schr\"{o}dinger probability current, $(\hbar/m) \Im({\phi}^*\partial_x\phi)$, respectively. \begin{figure}[t] \includegraphics[width=0.45\textwidth, height=0.60\textwidth, trim={0cm 4cm 0cm 4cm}]{col2.pdf} \caption{\label{fig:epsart}(a) The first component of the Klein-Gordon current $J^0_{KG}$ (dashed line), the Born probability density $\rho_B$ (dash-dotted line) and the relativistic probability density $\rho$ (solid line) for the Gaussian wave function (\ref{19}) with $\sigma_p/mc=1000$ and $\bar{p}/mc=0$. (b) The deviation measure $\chi$ for the Gaussian wave function (\ref{19}).}\label{fig1} \end{figure} To compare the relativistic probability density (\ref{16}) with $|\phi|^2$, in Figure \ref{fig1} we plot $\chi$ (a measure of the deviation from the Born probability), defined as \begin{equation}\label{18} \chi=\int_{-\infty}^\infty \left|\rho-|\phi|^2\right|\ dx, \end{equation} for the Gaussian wave function \begin{equation}\label{19} \tilde{\phi}(p)=N e^{-(p-\bar{p})^2/\sigma_p^2}. \end{equation} From Figure \ref{fig1}(b) it is clear that, when the momentum uncertainty is small compared to $mc$, the deviation of the relativistic probability density from the Born probability density is negligible, even when the group velocity of the wave packet is comparable to the velocity of light.
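Equations (\ref{16}) and (\ref{17}) are straightforward to evaluate numerically, since in momentum space $\mathcal{D}^\pm$ act as multiplication by $\sqrt{(E_p\pm p)/2}$ in units $\hbar=m=c=1$. The sketch below (the grid sizes, the packet width $\sigma_p=0.05\,mc$ and the $\sqrt{m/E_p}$ momentum-space weight of appendix B are our choices) verifies positivity, the causality bound $|J|\leq c\rho$, and the near agreement with the Born density for a narrow packet:

```python
import numpy as np

# units hbar = m = c = 1; symbols of D^{+-} are sqrt((E_p +- p)/2)
N, L = 4096, 400.0
dx = L / N
p  = 2 * np.pi * np.fft.fftfreq(N, d=dx)
E  = np.sqrt(p**2 + 1.0)

sigma_p = 0.05                                  # narrow packet, sigma_p << mc
phik = np.exp(-(p / sigma_p)**2) / np.sqrt(E)   # sqrt(m/E_p) weight

def to_x(fk):
    # phi(x_n) = sum_p fk(p) exp(i p x_n) on the periodic grid
    return np.fft.ifft(fk) * N

phi    = to_x(phik)
Dp, Dm = to_x(np.sqrt((E + p) / 2) * phik), to_x(np.sqrt((E - p) / 2) * phik)
rho    = np.abs(Dp)**2 + np.abs(Dm)**2          # Eq. (16)
J      = np.abs(Dp)**2 - np.abs(Dm)**2          # Eq. (17), in units of c
born   = np.abs(phi)**2

assert rho.min() >= -1e-10                      # positive definite
assert np.all(np.abs(J) <= rho + 1e-10)         # |J| <= c * rho
print(abs(rho.sum() - born.sum()) / born.sum()) # small for sigma_p << mc
```

The printed relative deviation of the total probability from the Born value is tiny here, mirroring the smallness of $\chi$ in Fig.~\ref{fig1}(b) for narrow packets.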
Finally, it should be noted that although the expression for the probability density in terms of the wave function, equation (\ref{16}), is non-local, there is no inconsistency with special relativity. In fact, this non-locality is essential for introducing a self-consistent relativistic probability density; since the relativistic wave function can propagate outside the light cone, a local relation between the wave function and the probability density, for instance $\rho=|\phi|^2$, leads to faster-than-light particle propagation \cite{Padmanabhan 2016,Peskin 1995}. In the following section, the relativistic requirements imposed on the definition of the probability density are further discussed, together with an account of how our suggested expression satisfies them. \section{Localization and Causality} It is well known that there are problems with the concept of a "localized particle" in relativistic quantum mechanics \cite{Foldy Wouthuysen 1950, Newton 1949, Rosenstein 1981, Rosenstein 1987, Thaller 1992, Hegerfeldt 1974, Skagerstam 1976, Perez 1977, Hegerfeldt 1980, Hegerfeldt 1985, Bracken 1999, Bracken 2005, Bracken 2007}. The notion of localization is closely related to the concept of position probability density. In this section we briefly review these problems and demonstrate how our definition of the relativistic probability density circumvents such obstacles. One of the earliest attempts to analyze the notion of a localized particle in relativistic quantum mechanics was made by Newton and Wigner. In 1949 they uniquely derived a relativistic position operator and its eigenstates using some justifiable postulates about exactly localized states \cite{Newton 1949}. However, the Newton-Wigner position operator, although arising from seemingly reasonable postulates, suffers from the following drawbacks: \begin{itemize} \item A state which is exactly localized in one reference frame, i.e.
an eigenstate of the Newton-Wigner position operator, is not localized in other reference frames \cite{Newton 1949}. \item The definition of the position probability density based on the Newton-Wigner position operator, i.e. $\rho_{NW}=|\psi_{NW}|^2$ \cite{Newton wave funcion}, leads to faster-than-light particle propagation \cite{Rosenstein 1981, Rosenstein 1987, Thaller 1992}. \end{itemize} These difficulties indicate that the Newton-Wigner position operator is not quite acceptable. Moreover, it has been shown that, in general, any strict localization leads to superluminal propagation \citep{Hegerfeldt 1974, Skagerstam 1976, Perez 1977, Hegerfeldt 1980, Hegerfeldt 1985}. An apparent way out of this problem is to assume that such strict localization is not possible. This implies that a proper relativistic self-adjoint position operator does not exist \cite{Thaller 1992, Hegerfeldt 1985}, and hence defining the position distribution via the projection-valued measure associated with a position operator is not realizable \cite{note}. A possible treatment is to introduce a reasonable probability density without recourse to a position operator \cite{Hegerfeldt 1974}, as the one presented in this paper. It must be emphasized that the problem of superluminal propagation is not just a characteristic of the Newton-Wigner probability density. Hegerfeldt proved \cite{Hegerfeldt 1974, Skagerstam 1976, Hegerfeldt 1980}, on very general grounds and for any reasonable definition of the probability density, that a particle initially localized with probability 1 in a finite volume of space immediately develops infinite "tails". In what follows, we prove a theorem showing that our probability density keeps the particle from strict localization, which is the main requirement of Hegerfeldt's theorem. This is similar to what Thaller proved for the case of the Dirac probability density \cite{Thaller 1992}.
\textbf{Theorem.} Let $\rho$ be the probability density associated with an arbitrary positive-energy wave function $\phi$, presented in equation (\ref{16}). Then \begin{equation}\label{20} \textnormal{Supp}(\rho)=\mathbb{R} \end{equation} where $\textnormal{Supp}(\rho)$ stands for the support of $\rho$, defined as \begin{center} $\textnormal{Supp}(\rho)\equiv \textnormal{Closure of} \ \{x\in \mathbb{R} \ | \ \rho(x)\neq 0 \}.$ \end{center} \textbf{Proof.} From equation (\ref{16}) it is clear that, for a particle to be strictly localized in a compact subset of $\mathbb{R}$, the supports of $\mathcal{D}^+\phi$ and $\mathcal{D}^-\phi$ should be compact subsets of $\mathbb{R}$. On the other hand, by the Paley-Wiener-Schwartz theorem \cite{Paley Wiener 1934, Schwartz 1952}, the Fourier transform of a compactly supported function is guaranteed to be analytic everywhere on the complex plane. But the Fourier transforms of $\mathcal{D}^+\phi$ and $\mathcal{D}^-\phi$ cannot be simultaneously analytic, since they are related to each other by \begin{equation}\label{21} \widetilde{{\mathcal{D}^+}\phi}(p)=\frac{1}{m}(\sqrt{p^2+m^2}+p)\widetilde{{\mathcal{D}^-}\phi}(p). \end{equation} The branch cut of $\sqrt{p^2+m^2}$ starting at $p=im$ means that $\widetilde{{\mathcal{D}^+}\phi}$ and $\widetilde{{\mathcal{D}^-}\phi}$ cannot both be analytic when $p$ is imaginary with magnitude $m$. This proves the theorem. $\blacksquare$ \\ The above theorem implies that there is no state for which the probability of finding the particle in a set $\Delta$ is 1 unless $\Delta =\mathbb{R}$. Nevertheless, the strict localization of a particle is irrelevant for most practical purposes, and it is quite sufficient to adopt an appropriate notion of localization with adjustable precision. It must be emphasized that, although our probability density has tails extending to infinity, arbitrarily small values of the position uncertainty are possible.
In fact, for any point of space, $a\in \mathbb{R}$, there is a sequence of wave functions, $\{\phi_n\}_{n=1}^{\infty}$, whose corresponding probability density sequence, $\{\rho_n\}_{n=1}^{\infty}$, approaches $\delta(x-a)$; see appendix B. This fact indicates that the particle can be localized arbitrarily sharply in the vicinity of any given point. This notion of "arbitrarily precise localization" differs from the "exact localization" introduced by Newton and Wigner, and was initially employed by Bracken and Melloy for the case of free Dirac electrons \cite{Bracken 1999, Bracken 2005, Bracken 2007}. It naturally avoids the problems plaguing Newton-Wigner's exact localization: firstly, the localization defined in this sense has correct properties under Lorentz transformations, as $J_\mu$ is a covariant vector \cite{Bracken 1999}; secondly, since the velocity of the probability flow, $J/\rho$, is less than the speed of light, the propagation of the particle is guaranteed to be causal. \section{Momentum Distribution} Since the position probability density deviates from $|\phi(x)|^2$ in the relativistic regime, one may ask whether the momentum probability distribution deviates from $|\tilde{\phi}(p)|^2$ as well. To answer this question, we note that, based on the quantum theory of measurement, every physical measurement can be described as a position measurement: in principle, the variables that account for the outcome of an experiment are ultimately particle positions \cite{Bell, Holland, Durr, Wheeler}. This fact has been made clear by John Bell \cite{Bell}: \textit{"In physics the only observations we must consider are position observations, if only the positions of instrument pointers. ...
If you make axioms, rather than definitions and theorems, about the "measurement" of anything else, then you commit redundancy and risk inconsistency."} In this regard, it has been shown in non-relativistic quantum mechanics that the Born rule for any observable can be derived from the Born rule for the positions of particles \cite{Bell, Holland, Durr}. Here we aim to propose a derivation of the relativistic momentum probability density from the relativistic position probability density (\ref{16}). The argument is based on Feynman's method for initially confined systems, namely time-of-flight measurements \cite{Holland, Feynman 1965, Park 1968, application}. Suppose the wave function is initially confined to a region $\Delta$ centered around the origin $x_0=0$ and is negligible elsewhere. After allowing the wave function to propagate freely for a considerable amount of time, a measurement of the position $x$ of the particle is effected. The probability of the particle's momentum lying inside the element $dp$ around the point $p$ at $t = 0$ is equal to the probability of finding the particle's position in the element $dx$ around the point $x=vt$, provided the limit $t\to \infty$ is taken in order to discard the effect of the uncertainty in the initial position. So we have: \begin{equation}\label{22} g(p)dp=\lim_{t\to\infty}[\rho(x,t)dx]_{x=vt}, \end{equation} where $g(p)$ represents the momentum probability density and $v=p/E$. Using the relativistic position probability density (\ref{16}), equation (\ref{22}) leads to \begin{equation}\label{23} g(p)=\dfrac{m^2}{E^3}\lim_{t\to\infty}t\left(| {\mathcal{D}}^+ \phi|^{2}+| {\mathcal{D}^-}\phi|^{2}\right)_{x=pt/E}. \end{equation} In the non-relativistic regime ($c\to \infty$), equation (\ref{23}) leads to \begin{equation}\label{24} g(p)=\dfrac{1}{m}\lim_{t\to\infty}[t|\phi(x,t)|^2]_{x=pt/m}.
\end{equation} In this case, the Schr\"{o}dinger equation for an initially confined wave function leads to $\phi(pt/m,t)\sim t^{-1/2}\tilde\phi(p)$ as $t\to\infty$, and so equation (\ref{24}) reduces to the Born rule in momentum space, $g(p)=|\tilde\phi(p)|^2$ \cite{Feynman 1965, Holland}. However, finding an explicit expression for the momentum probability density in the relativistic regime is not straightforward; therefore, a numerical calculation of $g(p)$ for the Gaussian wave packet (\ref{19}) is presented in Figure \ref{kkk}. This numerical study indicates that the relativistic momentum probability density deviates significantly from the Born rule only when the width of the wave function in momentum space is greater than $mc$. \begin{figure}[t] \includegraphics[width=0.45\textwidth, height=0.60\textwidth, trim={1cm 0cm 1cm 0cm}]{m.pdf} \caption{The plot of the relativistic and non-relativistic momentum probability density for the Gaussian wave function (\ref{19}) with $\bar{p}=0$.} \label{kkk} \end{figure} \section{Conclusion and outlook} In this paper, in the simple case of a single free spinless particle in $(1+1)$-dimensions, we have extracted a "reasonable" probability density current. By "reasonable" we mean that the current i) is manifestly covariant, ii) is conserved, iii) has a non-negative first component, iv) does not lead to faster-than-light particle propagation and v) reduces to the Born probability current density in the non-relativistic limit. These conditions naturally give rise to the given probability density current. Therefore, at least in $(1+1)$-dimensions, a probabilistic interpretation of the relativistic spinless wave function is possible. Extending this study to $(3+1)$-dimensional interacting particle systems will be the next step. Such systems should be described by quantum field theory.
The state of a system in quantum field theory is an arbitrary vector in the appropriate Fock space and may well involve a superposition of states of different particle numbers, namely $\ket{\Psi}= \sum_n \int {\tilde{\phi}}_n(p_1,..,p_n) \ket{p_1,..,p_n}$. It evolves according to the appropriate Schr\"{o}dinger equation, $i \partial_t\ket{\Psi}=\text{H} \ket{\Psi} $, where $\text{H}$ is the Hamiltonian operator in the Schr\"{o}dinger picture. In the presence of interactions this equation leads to a system of coupled integro-differential equations for the multi-particle wave functions, $\phi_n $, a recurrent procedure in the literature of light-front quantization \cite{Brodsky 1998}. In future works, we aim to find a probabilistic interpretation for these wave equations in position space. \section{Acknowledgement} We thank M.M. Sheikh-Jabbari, Y. Rokni and H. Abedi for helpful discussions and A. Deriglazov for carefully reading the manuscript and his useful comments. \section{Appendix A} In this appendix, we derive equations (\ref{8}), (\ref{12}) and (\ref{13}) in $(1+1)$-dimensions. Without loss of generality, the wave function can be expanded as a linear combination of plane waves: \begin{equation}\label{25} \phi(x)=\sum_{n} A_n e^{ip_n.x}. \end{equation} Plugging this into (\ref{6}) and using (\ref{7}) yields \begin{eqnarray}\label{26} \begin{split} J^0\pm J^1=\sum_{n,m} A_n A_m^* \alpha(p_n,p_m) (p^\pm_n + p^\pm_m) e^{i(p_n-p_m).x}, \end{split} \end{eqnarray} where $p^\pm_n=p^0_n \pm p^1_n$. In $(1+1)$-dimensions, the conditions $ J^\mu J_\mu\geq0$ and $J^0\geq0$ lead to \begin{equation}\label{27} J^0\pm J^1\geq 0, \end{equation} for arbitrary wave-functions.
Therefore we can consider \begin{equation}\label{28} \alpha(p_n,p_m)= \frac{[F^\pm(p_n)] \ [F^\pm(p_m)]^*+[F^\pm(p_n)]^* \ [F^\pm(p_m)]}{p^\pm_n+p^\pm_m}, \end{equation} which leads to the following positive definite expression for $J^0\pm J^1$: \begin{equation}\label{29} J^0\pm J^1= \left|\sum_n F^\pm(p_n) A_n e^{ip_n.x}\right|^2+\left|\sum_n [F^\pm(p_n)]^* A_n e^{ip_n.x}\right|^2, \end{equation} where $F^\pm(p_n)$ is an unknown function which must be determined. Since the only scalar that can be constructed from $p_n$ is the rest mass, dimensional analysis leads to $|\alpha(p_n,p_n)|=\frac{1}{2m_0}$; the factor $1/2$ is a convention and can be absorbed into the normalization constant. Therefore, \begin{equation}\label{30} F^\pm(p_n)=e^{i\lambda^{\pm}(p_n)}\sqrt{{p^\pm_n}/{2m_0}}. \end{equation} Whether one substitutes $F^+$ or $F^-$, the resulting $\alpha$ is the same. This fact can be used to determine the phase of $F^\pm$ as $\lambda^\pm(p_n)=\pm l \pi$, where $l$ is an integer. Then we have \begin{equation}\label{31} \alpha(p_n,p_m)= \frac{[\sqrt{p^\pm_n}] \ [\sqrt{p^\pm_m}]^*+[\sqrt{p^\pm_n}]^* \ [\sqrt{p^\pm_m}]}{2m_0(p^\pm_n+p^\pm_m)}. \end{equation} A straightforward but tedious calculation shows that $\alpha(p_n,p_m)$ can be rewritten as equation (\ref{8}), which ensures that $\alpha(p_n,p_m)$ is a scalar. Also, from equations (\ref{29}) and (\ref{30}) it is clear that \begin{equation}\label{32} \begin{split} J^0= |\sum_n \sqrt{\frac{p^+_n}{4m_0}} A_n e^{ip_n.x}|^2+|\sum_n [\sqrt{\frac{p^+_n}{4m_0}}]^* A_n e^{ip_n.x}|^2 \\ +|\sum_n \sqrt{\frac{p^-_n}{4m_0}} A_n e^{ip_n.x}|^2 +|\sum_n [\sqrt{\frac{p^-_n}{4m_0}}]^* A_n e^{ip_n.x}|^2, \end{split} \end{equation} \begin{equation}\label{33} \begin{split} J^1= |\sum_n \sqrt{\frac{p^+_n}{4m_0}} A_n e^{ip_n.x}|^2+|\sum_n [\sqrt{\frac{p^+_n}{4m_0}}]^* A_n e^{ip_n.x}|^2 \\ -|\sum_n \sqrt{\frac{p^-_n}{4m_0}} A_n e^{ip_n.x}|^2 -|\sum_n [\sqrt{\frac{p^-_n}{4m_0}}]^* A_n e^{ip_n.x}|^2.
\end{split} \end{equation} Finally, using the definition of the ${\mathcal{D}^{\pm}}$ operators, (\ref{14}), equations (\ref{32}) and (\ref{33}) reduce to (\ref{12}) and (\ref{13}). \section{Appendix B} In this appendix, we show that there is a sequence of positive-energy wave functions, $\{\phi_n\}_{n=1}^{\infty}$, whose corresponding probability density sequence, $\{\rho_n\}_{n=1}^{\infty}$, approaches $\delta(x-a)$ as a generalized function \cite{Lighthill}. This argument closely parallels that of Bracken and Melloy \cite{Bracken 1999} for the case of the Dirac electron. Consider the following sequence of positive-energy wave functions: \begin{equation}\label{B1} \phi_n(x)=\int \sqrt{\frac{m}{n E(p)}}f(\frac{p}{n})e^{ip(x-a)} dp, \end{equation} in which $E_p=\sqrt{p^2+m^2}$ and $f(p)$ is a normalized Gaussian function, $\int |f(p)|^2dp=1$, defined as: \begin{equation}\label{B2} f(p)=(1/{m\sqrt{\pi}})^{\frac{1}{2}} e^{-p^2/2m^2}. \end{equation} Substituting Eq.~(\ref{B1}) into Eq.~(\ref{16}) yields \begin{equation}\label{B3} \begin{split} \rho_n(x)=\frac{1}{n}\left|\int S_+(p)f(p)e^{ip(x-a)}dp\right|^2 \\+\frac{1}{n} \left|\int S_-(p)f(p)e^{ip(x-a)} dp\right|^2 , \end{split} \end{equation} where $S_{\pm}=\sqrt{\frac{E_p\pm p}{2 E_p}}$. Using the convolution theorem, the Fourier transform of the probability density, $\tilde{\rho}_n(p)$, can be written as \begin{equation}\label{B4} \tilde{\rho}_n(p)=R_n(p)\frac{e^{-ipa}}{\sqrt{2\pi}}, \end{equation} in which \begin{equation}\label{B5} R_n(p)=\frac{1}{n}\int f(\frac{q-p}{n})f(\frac{q}{n}) \Gamma(q-p,q) dq, \end{equation} \begin{equation}\label{B6} \Gamma(q,p)=S_+(q)S_+(p)+S_-(q)S_-(p). \end{equation} Since the Fourier transform of $\delta(x-a)$ is $\frac{e^{-ipa}}{\sqrt{2\pi}}$, we need to show that $\lim_{n\to\infty} R_n(p)=1$. For this, we consider $q=nr$ and rewrite $R_n(p)$ as \begin{equation}\label{B7} R_n(p)=\int f(r-\frac{p}{n})f(r) \Gamma(nr-p,nr) dr.
\end{equation} A straightforward calculation shows that $\Gamma(q,p)$ can be rewritten as \begin{equation}\label{B8} \Gamma(q,p)=G_1(q)G_1(p)+G_2(q)G_2(p), \end{equation} in which \begin{equation}\label{B9} G_1(p)=\sqrt{\frac{E_p+m}{2E_p}}, \end{equation} \begin{equation}\label{B9b} G_2(p)=\frac{p}{\sqrt{2E_p(E_p+m)}}. \end{equation} By Taylor's theorem, we have \begin{equation}\label{B10} f(r-\frac{p}{n})=f(r)-\frac{p}{n} f'(r-\eta\frac{p}{n}), \end{equation} \begin{equation}\label{B11} G_i(nr-p)=G_i(nr)-\frac{p}{n} G_i'(nr-\theta p), \end{equation} where $0\leq\eta\leq 1$ and $0\leq\theta\leq 1$. Using equations (\ref{B10}) and (\ref{B11}), Eq.~(\ref{B7}) becomes \begin{equation}\label{B12} R_n(p)=A_n(p)+B_n(p)+C_n(p)+D_n(p), \end{equation} where \begin{equation}\label{B13} A_n(p)=\int |f(r)|^2 \Gamma(nr,nr)dr, \end{equation} \begin{equation}\label{B14} B_n(p)=-\frac{p}{n}\int f'(r-\eta \frac{p}{n})f(r) \Gamma(nr,nr)dr, \end{equation} \begin{equation}\label{B15} C_n(p)=\frac{p^2}{n}\int f'(r-\eta\frac{p}{n})f(r) \Upsilon(nr-\theta p,nr) dr, \end{equation} \begin{equation}\label{B16} D_n(p)=-p\int |f(r)|^2 \Upsilon(nr-\theta p,nr) dr, \end{equation} \begin{equation}\label{B17} \Upsilon(q,p)= G_1'(q)G_1(p)+ G_2'(q)G_2(p). \end{equation} Finally, after tedious calculations we get \begin{equation}\label{B18} \begin{aligned} &A_n(p)=1, \\ \lim_{n\to\infty} B_n(p)=\lim_{n\to\infty} &C_n(p)=\lim_{n\to\infty} D_n(p)=0, \end{aligned} \end{equation} which shows that the probability density sequence, $\{ \rho_n(x)\}$, converges to $\delta(x-a)$ as $n$ tends to infinity.
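The limits in (\ref{B18}) can also be checked numerically by evaluating the integral (\ref{B7}) on a grid (units $m=1$; the grid and the test momentum $p=2$ are our own choices):

```python
import numpy as np

m = 1.0
r = np.linspace(-12.0, 12.0, 48001)
dr = r[1] - r[0]

def f(q):
    # normalized Gaussian of Eq. (B2)
    return (1.0 / (m * np.sqrt(np.pi))) ** 0.5 * np.exp(-q**2 / (2 * m**2))

def G1(q):
    E = np.sqrt(q**2 + m**2)
    return np.sqrt((E + m) / (2 * E))

def G2(q):
    E = np.sqrt(q**2 + m**2)
    return q / np.sqrt(2 * E * (E + m))

def R(n, p):
    # Eq. (B7), with Gamma expressed through G1, G2 as in Eq. (B8)
    Gamma = G1(n * r - p) * G1(n * r) + G2(n * r - p) * G2(n * r)
    return np.sum(f(r - p / n) * f(r) * Gamma) * dr

errs = [abs(R(n, 2.0) - 1.0) for n in (1, 10, 100)]
print(errs)   # |R_n(p) - 1| decreases towards zero as n grows
```

The error shrinks roughly like $1/n$, consistent with $R_n(p)\to 1$ and hence with $\rho_n \to \delta(x-a)$.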
\renewcommand{\theequation}{\thesection.\arabic{equation}} \allowdisplaybreaks[3] \usepackage[hang]{footmisc} \setlength{\footnotemargin}{3.5mm} \usepackage[compress,square,numbers]{natbib} \usepackage[colorlinks,linktocpage,linkcolor=blue,citecolor=blue,urlcolor=blue]{hyperref} \renewcommand{\d}{\mathrm{d}} \renewcommand{\b}{\textrm{b}} \linespread{1.1} \begin{document} \numberwithin{equation}{section} \thispagestyle{empty} \begin{flushright} \small KOBE-COSMO-18-01\\ \small MAD-TH-17-07 \normalsize \end{flushright} \vspace*{1cm} \begin{center} {\LARGE \bf A Tower Weak Gravity Conjecture} \vspace{0.5cm} {\LARGE \bf from Infrared Consistency} \vspace{1cm} {\large Stefano Andriolo${}^{1}$, Daniel Junghans${}^{2}$, Toshifumi Noumi${}^{3}$ and Gary Shiu${}^{1,4}$}\\ \vspace{1cm} ${}^1$ Department of Physics and Jockey Club Institute for Advanced Study\\ Hong Kong University of Science and Technology, Hong Kong\\ \vspace{0.5cm} ${}^2$ Institut f{\"{u}}r Theoretische Physik, Ruprecht-Karls-Universit{\"{a}}t Heidelberg,\\ Philosophenweg 19, 69120 Heidelberg, Germany\\ \vspace{0.5cm} ${}^3$ Department of Physics, Kobe University\\ Kobe 657-8501, Japan\\ \vspace{0.5cm} ${}^4$ Department of Physics, University of Wisconsin-Madison\\ Madison, WI 53706, USA\\ \vspace{1.5cm} \begin{abstract} We analyze infrared consistency conditions of 3D and 4D effective field theories with massive scalars or fermions charged under multiple $U(1)$ gauge fields. At low energies, one can integrate out the massive particles and thus obtain a one-loop effective action for the gauge fields. 
In the regime where charge-independent contributions to higher-derivative terms in the action are sufficiently small, it is then possible to derive constraints on the charge-to-mass ratios of the massive particles from requiring that photons propagate causally and have an analytic S-matrix. We thus find that the theories need to contain bifundamentals and satisfy a version of the weak gravity conjecture known as the convex-hull condition. Demanding self-consistency of the constraints under Kaluza-Klein compactification, we furthermore show that, for scalars, they imply a stronger version of the weak gravity conjecture in which the charge-to-mass ratios of an infinite tower of particles are bounded from below. We find that the tower must again include bifundamentals but does not necessarily have to occupy a charge (sub-)lattice. \end{abstract} \end{center} \newpage \setcounter{tocdepth}{2} \tableofcontents \section{Introduction} Many aspects of quantum gravity can conveniently be analyzed within the framework of effective field theory (EFT). At low energies, the dynamics are expected to be governed by an effective action with only a small number of degrees of freedom, while most of the complicated details of the underlying microscopic theory do not play a role. A natural question to ask is then whether all EFTs one can write down arise as the low-energy limit of a consistent quantum gravity theory. Based on thought experiments and general expectations about the properties of quantum gravity, it has been argued that this is most likely not the case. Instead, the EFTs which admit a UV completion into a theory of quantum gravity (termed the ``landscape'') are distinguished from those which do not (the ``swampland'') by a number of rules and consistency conditions. 
\medskip A well-known proposal for such a condition is the weak gravity conjecture (WGC) \cite{ArkaniHamed:2006dz}, which asserts that a $U(1)$ gauge theory coupled to gravity needs to contain at least one particle with mass $m$ and charge $q$ whose charge-to-mass ratio satisfies a lower bound \begin{equation} z = \frac{gq}{m} \ge \mathcal{O}(1) \label{wgc} \end{equation} in Planck units. Here, $g$ is the gauge coupling constant, and the precise numerical value of the bound depends on the considered theory. Depending on the version of the conjecture, the particle may also be required to satisfy additional properties such as being the lightest particle in the theory. \medskip The WGC has a natural generalization to $p$-branes charged under $(p+1)$-form fields. In particular, the $0$-form (axion) version of the conjecture has generated a lot of activity in recent years since it may imply strong constraints on large-field inflation \cite{Rudelius:2014wla, delaFuente:2014aca, Montero:2015ofa, Brown:2015iha, Brown:2015lia, Hebecker:2015rya, Bachlechner:2015qja, Junghans:2015hba, Rudelius:2015xta, Heidenreich:2015wga, Kooner:2015rza, Hebecker:2015zss, Palti:2015xra, Baume:2016psm, Hebecker:2016dsw, Hebecker:2017wsu, Hebecker:2017uix, Blumenhagen:2017cxt} (see also \cite{Ibanez:2015fcv,Hebecker:2015zss,Brown:2016nqt} for an application to cosmological relaxation). Moreover, it has recently been realized that the conjecture is closely related to the swampland conjecture \cite{Ooguri:2006in, Klaewer:2016kiy, Palti:2017elp}, to cosmic censorship \cite{Crisford:2017gsb,Cottrell:2016bty} and to instabilities of non-supersymmetric AdS vacua \cite{Ooguri:2016pdq, Danielsson:2016mtx, Freivogel:2016qwc, Ooguri:2017njy, Danielsson:2017max}. In particular, the last relation implies an intriguing constraint that the WGC imposes on neutrino physics \cite{Ooguri:2016pdq}. 
Possible {\it correlated} consequences in particle physics and cosmology were explored in \cite{Ibanez:2017kvh, Ibanez:2017oqr, Hamada:2017yji}. Various other extensions/applications of the WGC and related quantum gravity conjectures have furthermore been discussed in the recent works \cite{Montero:2017yja, Montero:2017mdq, Hebecker:2017lxm, Lust:2017wrl, Ibanez:2017vfl, rudelius2017}. See \cite{Brennan:2017rbf} for a recent review. \medskip Another way to generalize the WGC is to consider theories in which the gauge group contains multiple $U(1)$ factors. In \cite{Cheung:2014vva}, it was argued based on black hole arguments that such theories are only consistent with quantum gravity if they satisfy a convex-hull condition, i.e., they require a set of particles such that the convex hull of the charge-to-mass vectors contains a ball of radius $\mathcal{O}(1)$. An even stronger version of the WGC---the so-called lattice WGC---can be motivated if one additionally demands that the WGC is self-consistent under Kaluza-Klein compactification \cite{Heidenreich:2015nta}. Requiring consistency under dimensional reduction suggests that a bound on the charge-to-mass vectors has to be satisfied by the whole charge lattice. This stronger version of the WGC was subsequently shown to not always hold \cite{Montero:2016tif,Heidenreich:2016aqi}, though there are examples in string theory where the particles satisfying the WGC occupy a proper sub-lattice \cite{Montero:2016tif,Heidenreich:2016aqi}.\footnote{There appear to be string theory examples where BPS states do not span a (sub-)lattice. We thank Eran Palti for private communication on this point. See \cite{palti2018} for work relating this to the swampland and weak gravity conjectures.} \medskip In view of the potentially far-reaching implications of the WGC, it is of obvious importance to understand which of the many versions of it, if any, holds in quantum gravity. 
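As a concrete illustration of the convex-hull condition discussed above, the following sketch (ours, not from the literature; units where the required ball radius is $1$) computes the radius of the largest origin-centered ball inside the convex hull of a set of charge-to-mass vectors in a toy two-$U(1)$ example:

```python
import numpy as np

def inscribed_radius(points):
    """Radius of the largest origin-centered ball inside the convex hull of
    2D points (assumes the origin is interior and every point is a hull
    vertex, as in the symmetric examples below)."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.argsort(np.arctan2(pts[:, 1], pts[:, 0]))]  # sort by angle
    radius = np.inf
    for a, b in zip(pts, np.roll(pts, -1, axis=0)):
        # distance from the origin to the line through adjacent vertices a, b
        radius = min(radius, abs(a[0] * b[1] - a[1] * b[0]) / np.linalg.norm(b - a))
    return radius

z = 1.2  # each particle individually satisfies the single-U(1) bound
diagonal = [(z, 0), (-z, 0), (0, z), (0, -z)]        # one particle per U(1)
bifundamental = [(z, z), (z, -z), (-z, z), (-z, -z)]  # charged under both

print(inscribed_radius(diagonal))       # z/sqrt(2) < 1: hull condition fails
print(inscribed_radius(bifundamental))  # z >= 1: condition satisfied
```

The diagonal spectrum fails the convex-hull condition even though every particle obeys the single-$U(1)$ bound, which is precisely why the multi-$U(1)$ condition is stronger.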
While the conjecture was originally motivated by general black hole arguments and circumstantial evidence in string theory, there have been efforts in the recent literature to make this more precise and bring us closer to proving the conjecture. Indeed, this has been achieved with some success, at least in specific setups, using AdS/CFT \cite{Nakayama:2015hga, Harlow:2015lma, Benjamin:2016fhe, Montero:2016tif} or from entropy considerations \cite{Cottrell:2016bty}.\footnote{Subsequent works \cite{Fisher:2017dbc,Cheung:2018cwt} using arguments along the lines of \cite{Cottrell:2016bty} have appeared. However, despite what their titles suggest, \cite{Fisher:2017dbc,Cheung:2018cwt} do not present proofs of the WGC. The entropy corrections formulae used in \cite{Fisher:2017dbc} cannot be applied in the regime of macroscopic black holes, nor away from extremality, which is where conflicts with the WGC were argued to arise. Ref.~\cite{Cheung:2018cwt} made an interesting connection between the WGC and the positivity of entropy corrections. It is not known, however, if the latter follows from some fundamental consistency conditions. Logarithmic corrections to extremal black hole entropy are not universally positive. See, e.g., \cite{Sen:2011ba, Sen:2014aja}.} \medskip Another possibility is to derive WGC-like bounds from infrared consistency conditions of the EFTs. In \cite{Adams:2006sv}, it was shown that the requirements of causal photon propagation and an analytic S-matrix imply that the coefficients of certain higher-derivative terms in the effective action must be positive or zero. Since these coefficients receive loop corrections from charged particles, one can reformulate the positivity constraints in terms of a bound of the form \eqref{wgc}. 
This idea was used in \cite{Cheung:2014ega} to show that the WGC must indeed hold in simple EFTs with a single $U(1)$ gauge field, provided that a certain parameter containing the charge-independent contributions to the higher-derivative terms is sufficiently small. The value of this parameter depends on the UV completion of the EFT and can thus be interpreted as encoding the microscopic properties of the quantum gravity theory. \medskip In the present paper, we apply the ideas of \cite{Adams:2006sv, Cheung:2014ega} to 3D and 4D EFTs with multiple $U(1)$ gauge fields. We find that causality and analyticity constraints then again lead to lower bounds for the charge-to-mass ratios of particles in the regime where charge-independent contributions to the higher-derivative terms in the effective action are small. We are thus able to recover the convex-hull condition without making reference to arguments involving black holes. However, compared to the single-$U(1)$ case analyzed in \cite{Cheung:2014ega}, we also find qualitatively new effects. Specifically, one of the constraints we find yields an \emph{upper} bound instead of a lower bound on the charge-to-mass ratios, unless the theory contains particles charged under multiple $U(1)$'s. This constraint is due to the requirement that photons travel subluminally in backgrounds generated by different gauge fields and therefore does not appear in theories with just a single $U(1)$ factor. We interpret it as evidence that the WGC for theories with multiple $U(1)$'s should be stronger than the convex-hull condition (which can also be satisfied in theories with a diagonal charge matrix). \medskip In order to substantiate this claim, we then analyze the self-consistency of the causality and analyticity constraints under the compactification of a class of 4D EFTs on a circle. We find that, due to the Kaluza-Klein gauge field, the constraints become stronger in the compactified theories. 
In particular, for scalar theories, they can no longer be satisfied by a finite number of particles with bounded charge-to-mass ratios. In order that both ordinary and Kaluza-Klein photons travel subluminally, the theories instead need to contain an infinite tower of particles satisfying such a bound. Interestingly, we find that the tower must include bifundamentals but does not necessarily have to occupy the full charge lattice. Our result thus suggests a very specific version of the WGC, which, to our knowledge, is compatible with all known examples in string theory. \medskip Let us take stock of our findings. The requirement for an ultraviolet-completable theory to be well-behaved upon compactification has been used as a guiding principle for distinguishing the landscape from the swampland \cite{Brown:2015iha,Heidenreich:2015nta, Montero:2017yja}. Rather than assuming consistency of some conjectured principles upon Kaluza-Klein reduction, the present work examines potential inconsistencies directly in the lower-dimensional theory. In a sense, our result is a more direct test of consistency of the theory under dimensional reduction, as causality and unitarity are well-tested principles of Nature. While our analysis applies to theories with multiple $U(1)$'s, we consider for simplicity phases of the theories where all the $U(1)$ gauge fields remain massless. It would be interesting to extend our study to cases where some of the $U(1)$'s gain a mass in the infrared \cite{Shiu:2015uva, Shiu:2015xda, Saraswat:2016eaz}. We leave this investigation to future work. \medskip This paper is organized as follows. In Sec.~\ref{sec:3d}, we derive causality and analyticity constraints for 3D EFTs with multiple $U(1)$ gauge fields and use them to obtain bounds on the charge-to-mass ratios of charged particles. In Sec.~\ref{sec:4d}, we repeat our analysis for 4D EFTs. 
In Sec.~\ref{sec:comp}, we study the Kaluza-Klein compactification of 4D EFTs on a circle and argue that causality and analyticity then imply a strong form of the WGC in which the charge-to-mass ratios of an infinite tower of particles are bounded from below. We conclude in Sec.~\ref{concl} with a discussion of our results. The details of several longer computations can be found in Apps.~\ref{app:formulae}--\ref{app:reduction}. \section{Infrared Consistency in $D=3$} \label{sec:3d} In this and the following section, we derive a class of bounds on charge-to-mass ratios of matter in theories with multiple $U(1)$ gauge symmetries, by generalizing the analysis in~\cite{Cheung:2014ega}. We focus on 3D in this section and then extend our arguments to 4D in the next section. \medskip As we explain in Sec.~\ref{setup3d}, our starting point is the low-energy EFT of multiple photons, whose EFT parameters depend on the charge-to-mass ratios of matter fields that have been integrated out. The positivity bounds on these EFT parameters, derived in Secs.~\ref{causality3d}--\ref{analyticity3d} from causality and analyticity, are then translated into bounds on the charge-to-mass ratios in Sec.~\ref{ex:3d}. We demonstrate that, in addition to an ordinary WGC-type lower bound, this includes a new upper bound on the charge-to-mass ratios unless the theory contains particles charged under multiple $U(1)$'s. As we discuss in Sec.~\ref{sec:comp}, our new bound turns out to be crucial to motivating the tower WGC. \subsection{Setup} \label{setup3d} Let us suppose that the dynamics below a cutoff scale $\Lambda$ is captured by massive charged particles coupled to gravity and $N$ $U(1)$ gauge fields. 
For concreteness, we consider a Wilsonian effective action of the form\footnote{Throughout the paper, we use the mostly-plus convention for the metric.} \begin{align} \label{startingEFT_fermion} \Gamma & = \int \d^3 x \sqrt{-g} \bigg[ \frac{M_3}{2} R - \frac{1}{4} \sum_iF_i^2 \bigg] +\text{C.S.} +\text{H.O.} + \left\{\begin{matrix*}[l] \Gamma_\text{scalar} \\ \Gamma_\text{fermion} \end{matrix*}\right. \,, \end{align} where we consider either scalar or fermionic matter fields with \begin{align} \Gamma_\text{scalar} &= \int \d^3 x \sqrt{-g}\, \sum_a \left(-|D_{\mu} \phi_a|^2 - m_a^2 |\phi_a|^2\right), \\ \Gamma_\text{fermion} &= \int \d^3 x \sqrt{-g}\, \sum_a\bar\psi_a (-\slashed{D} - m_a) \psi _a\,. \label{3dfermion-action} \end{align} Here, $M_3$ is the 3D Planck mass. In what follows, we use $i,j,...$ to label the $N$ photons and $a,b,...$ for the massive charged particles. The covariant derivative is defined by \begin{equation} D_{\mu} = \nabla_\mu + i \sum_iq_{ai} g_i A_{i\mu} \,, \end{equation} where $q_{ai}$ is the $i$-th $U(1)$ charge of the particle $a$ and $g_i$ is the gauge coupling of the $i$-th $U(1)$. ``C.S." denotes parity-violating Chern-Simons terms which can in general appear in the effective action (notice that parity is already violated by the presence of fermion masses in \eqref{3dfermion-action}). ``H.O." stands for higher-dimensional operators, which depend on the UV completion beyond the cutoff $\Lambda$ and are given by combinations of Riemann tensors and gauge field strengths. 
In 3D, the Riemann tensor is completely determined by the Ricci tensor, and terms involving the latter can be eliminated by a field redefinition at the four-derivative level (see App.~\ref{simplify_action}).\footnote{This is, however, different in 4D, see Sec.~\ref{setup4d}.} The general form of the higher-dimensional operators is therefore \begin{align} \label{HOops} \text{H.O.} = \sum_{i,j,k,l} c_{ijkl} (F_i \cdot F_j) (F_k \cdot F_l) \end{align} up to terms with more than four derivatives. \medskip The charge-to-mass ratio of a scalar or a fermion is defined by \begin{align} \label{3dz} z_{ai}\equiv \frac{q_{ai}g_i\sqrt{M_3}}{|m_a|}\,, \end{align} and this is what we would like to constrain in the following by requiring that the EFT is consistent in the IR. \medskip For this purpose, we integrate out the massive charged particles to obtain a 1-loop effective action of gravity and $N$ photons. Since the calculation is quite long, we only present the results here and refer the interested reader to App.~\ref{app:heatkernel} for the details. As we derive there, integrating out the particles yields higher-derivative corrections to the effective action which are given by products of Riemann tensors and gauge field strengths. At the four-derivative level, they are schematically of the form $R^2$, $R F^2$ and $F^4$. However, as stated above, the curvature dependence of these terms can be eliminated by a field redefinition such that, subsequently, all corrections are of the form $F^4$ (see App.~\ref{simplify_action}). 
Up to terms with more than four derivatives, we thus find \begin{equation} \label{3dlagrangian} \Gamma_1 = \int \d^3 x \sqrt{-g} \left[ \frac{M_3}{2} R - \frac{1}{4} \sum_{i,j}\delta_{ij} F_i \cdot F_j + \sum_{i,j,k,l} C_{ijkl} (F_i \cdot F_j) (F_k \cdot F_l) \right] +\widetilde{\text{C.S.}} \end{equation} with \begin{align} \label{cijkl} C_{ijkl} &= c_{ijkl} + \sum_a\frac{1}{1920\pi|m_a|M_3^2} \cdot \left\{\begin{matrix*}[l] \left[ \frac{7}{8}z_{ai}z_{aj}z_{ak}z_{al} + \frac{3}{2} z_{ai}z_{aj} \delta_{kl} - z_{ai}z_{ak} \delta_{jl} \right. \\[0.5em] \left. \qquad + \frac{1}{2}\delta_{ij}\delta_{kl} + \delta_{ik}\delta_{jl} \right] & \text{(scalars)} \\[1.2em] \left[ z_{ai}z_{aj}z_{ak}z_{al} + z_{ai}z_{aj} \delta_{kl} - \frac{3}{2} z_{ai}z_{ak} \delta_{jl} \right. \\[0.5em] \left. \qquad -\frac{1}{2}\delta_{ij}\delta_{kl} + \frac{3}{2} \delta_{ik}\delta_{jl}\right] & \text{(fermions)\,.} \end{matrix*}\right. \end{align} Here, $\widetilde{\text{C.S.}} = \text{C.S.} + \text{C.S.}_{\text{1-loop}} $ is the 1-loop corrected Chern-Simons term, which generates a mass for the corresponding photons if nonzero. Indeed, the Chern-Simons level is shifted by fermion loop effects \cite{Dunne:1998qy}. In this paper, we would like to analyze massless $U(1)$'s and therefore focus on the case where the total Chern-Simons term vanishes, $\widetilde{\text{C.S.}}=0$.\footnote{It was argued in \cite{Montero:2017yja} that EFTs consistent with quantum gravity must contain phases in which a different type of Chern-Simons term is nonzero, which involves a coupling of a $U(1)$ gauge field to other form fields. For simplicity, we will assume a phase in which such terms are not present.} \medskip According to \eqref{cijkl}, the $C_{ijkl}$ coefficients are given schematically by \begin{align} C_{ijkl} \sim {\cal O}(z^4) + {\cal O}(z^2) + {\cal O}(z^0) \,. 
\end{align} As illustrated in Fig.~\ref{fig:diagrams}, diagrams involving loops of the scalars/fermions we integrated out contribute to all three types of terms in $C_{ijkl}$. The ${\cal O}(z^0)$ term furthermore receives contributions from photon loops and the higher-order operators denoted by ``H.O." in \eqref{startingEFT_fermion}. This implies in particular that the magnitude of the ${\cal O}(z^0)$ term depends on the UV completion of the EFT and should consequently be viewed as an (a priori unknown) boundary condition encoding the microscopic properties of the quantum gravity theory \cite{Cheung:2014ega}. \begin{figure}[t] \centering \includegraphics[scale=1.1]{diagrams} \caption{\label{fig:diagrams}\emph{Typical diagrams for the effective $F^4$ operator after integrating out scalars/fermions. On the left, the scalar/fermion loop induces $F^4$ through four gauge couplings. In the middle, the loop induces an $RF^2$ term through two gauge couplings and one gravitational coupling, hence it is $\mathcal{O}(z^2)$. After using the tree-level equation of motion, $R\sim F^2$, it is converted to $F^4$. On the right, the loop induces $R^2$, which is converted to $F^4$ with an $\mathcal{O}(z^0)$ coefficient.}} \end{figure} \medskip An interesting observation by Cheung and Remmen~\cite{Cheung:2014ega} is that the positivity of the EFT parameters $C_{ijkl}$ implies a WGC-type lower bound on the charge-to-mass ratios if the $\mathcal{O}(z^0)$ terms mentioned above are in a certain range. This was shown in \cite{Cheung:2014ega} for EFTs with a single $U(1)$ gauge field, where the effective action only depends on one parameter $C_{1111}$. We extend their argument to the multiple-$U(1)$ case in the rest of this section. First, in the next two subsections, we show that both causality and analyticity imply a positivity bound on (a particular combination of) the $C_{ijkl}$'s. In Sec.~\ref{ex:3d}, we then use this bound to constrain the charge-to-mass ratios. 
There, we show that an ordinary WGC-type lower bound follows in a certain range of the $\mathcal{O}(z^0)$ term, which can be understood as a 3D analogue of the convex-hull condition. Interestingly, we also find that a \emph{new} upper bound on the charge-to-mass ratios shows up unless the theory contains particles charged under multiple $U(1)$'s. This new constraint will be crucial in order to motivate the tower WGC in Sec.~\ref{sec:comp}. \medskip Before proceeding with the discussion, let us briefly summarize the parameter range where our argument is applicable. Throughout the discussion, we assume that the gauge and gravitational interactions are in the perturbative regime: \begin{align} \frac{|qg|}{\sqrt{|m|}}\ll1\,, \qquad \frac{|m|}{M_3}\ll1\,, \end{align} where we dropped the $(a,i)$ indices. Moreover, restricting to terms with at most four derivatives in the effective action is tantamount to working in the weak-field limit (for both gravity and photons). This means working in the regime where \begin{equation} \frac{|qgF| }{m^2} \ll 1 \,, \qquad \frac{|R|}{m^2} \ll 1 \,. \end{equation} Since the charge-to-mass ratio~\eqref{3dz} takes the form \begin{align} z=\frac{qg}{\sqrt{|m|}}\left|\frac{m}{M_3}\right|^{-1/2}\,, \end{align} we may cover a parametrically wide range of the charge-to-mass ratios: \begin{align} \left|\frac{qg}{\sqrt{|m|}}\right|\ll|z| \ll \left|\frac{m}{M_3}\right|^{-1/2}\,. \end{align} In particular, $z\sim\mathcal{O}(1)$ near the WGC bound is in our regime of validity. 
\subsection{Causality Constraints} \label{causality3d} We now study the IR consistency of the effective Lagrangian~\eqref{3dlagrangian} with vanishing Chern-Simons term, \begin{equation} \label{photon-eft} \Gamma_1 = \int \d^3 x \sqrt{-g} \left[ \frac{M_3}{2} R - \frac{1}{4} \sum_{i,j}\delta_{ij} F_i \cdot F_j + \sum_{i,j,k,l} C_{ijkl} (F_i \cdot F_j) (F_k \cdot F_l) \right] \,, \end{equation} where $C_{ijkl}$ is given by \eqref{cijkl} and depends on the charges and masses of the matter fields that have been integrated out. In 3D, a massless vector field is dual to a massless scalar field. Instead of \eqref{photon-eft}, we can therefore consider the dual scalar theory \begin{equation} \label{finalEFT_dualised} \Gamma_1 = \int \d^3 x \sqrt{-g} \left[ \frac{M_3}{2} R - \frac{1}{2} \sum_{i,j}\delta_{ij} \partial\phi_i \cdot \partial\phi_j + 4 \sum_{i,j,k,l}C_{ijkl} (\partial\phi_{i} \cdot\partial\phi_j)(\partial\phi_{k}\cdot \partial\phi_l) \right] \,, \end{equation} which is valid up to terms with more than four derivatives and obtained by the usual procedure of integrating out an auxiliary field (see App.~\ref{app:dualization}). \medskip Following the ideas in \cite{Adams:2006sv, Cheung:2014ega}, we can now derive a bound on the $C_{ijkl}$'s by requiring that fluctuations of the fields around nontrivial backgrounds are subluminal.\footnote{To be precise, we need to discuss the global causal structure. It turns out, however, that the subluminality argument we make here practically reproduces the same condition. See~\cite{Adams:2006sv} for more details.} For this purpose, we expand $g_{\mu\nu}$ and $\phi_i$ around their background values, denoted with a bar: \begin{align} g_{\mu\nu} = \bar g_{\mu\nu} + h_{\mu\nu} \,, \qquad \phi_i = \bar\phi_i + \varphi_i \,. \end{align} Since the graviton is non-dynamical in 3D \cite{Deser:1983tn}, we set $h_{\mu\nu} = 0$. 
\medskip For simplicity, let us assume a constant electromagnetic background field $\overline{\partial_\alpha\phi_i} = w_{i \alpha}$. Here and in what follows, we take the local Lorentz frame and use $\alpha,\beta,\ldots$ for local Lorentz indices. The metric is given by $\eta_{\alpha\beta}=(-++)$ in particular. At quadratic order in the fluctuations $\varphi_i$, the Lagrangian then takes the form\footnote{We define symmetrized and anti-symmetrized quantities as $A_{(ij)}=A_{ij}+A_{ji}$ and $A_{[ij]}=A_{ij}-A_{ji}$, respectively.} \begin{equation} \label{finalEFT_fluctuations} {\cal L} = -\frac{1}{2} \sum_{i,j}\delta_{ij} \partial\varphi_i \cdot \partial\varphi_j + 4 \sum_{i,j,k,l}C_{(ij)(kl)} (w_i \cdot \partial\varphi_j) (w_k \cdot \partial\varphi_l) \,, \end{equation} where we used the leading-order equations of motion $\partial^2 \varphi_i = 0$ (amounting to a field redefinition) to simplify the expression. In momentum space, it may be rewritten as \begin{align} \mathcal{L}=\frac{1}{2}\sum_{i,j}K_{ij}(w\cdot k)\varphi_{i}(k)\varphi_{j}(-k)\,, \end{align} where $K_{ij}$ is the kinetic operator \begin{align} K_{ij}(w\cdot k)=-\delta_{ij}k^2+4D_{ij}(w\cdot k)\,, \end{align} and $D_{ij}$ is the correction to the dispersion, \begin{align} \label{dispersion3D} D_{ij}(w\cdot k)=\sum_{k,l}\left( C_{(ik)(jl)}+C_{(jk)(il)}\right)(w_{k}\cdot k)(w_l\cdot k). \end{align} \medskip To discuss subluminality, let us diagonalize the kinetic operator as \begin{align} \widetilde{K}_{ij}={\rm diag}\Big(-k^2+4D_1(w\cdot k),-k^2+4D_2(w\cdot k),\ldots , -k^2+ 4D_N(w\cdot k)\Big)\,, \end{align} where we denote by $D_i$ the eigenvalues of the matrix $D_{ij}$. We then have $N$ modes with the dispersion relations \begin{align} -k^2+ 4D_i(w\cdot k)=0\,. \label{dispersion1} \end{align} The subluminality of the fluctuations therefore implies \begin{align} D_i(w\cdot k)\geq0 \quad\text{on the shell}. 
\label{dispersion2} \end{align} Here, the dispersion relations \eqref{dispersion1} should be considered order by order in the weak-field expansion. Recall that the corrections $D_i(w\cdot k)$ to the leading-order equation $k^2=0$ originate from four-derivative terms in the effective action \eqref{finalEFT_dualised}. Up to higher-order corrections which we neglect, it is then valid to rephrase \eqref{dispersion2} as \begin{align} D_i(w\cdot k)\geq0 \quad\text{for any null vector $k_\alpha$ and any $w_{i\alpha}$}. \end{align} In terms of the original matrix $D_{ij}$, we thus have \begin{align} \sum_{i,j}D_{ij}(w\cdot k)u_iu_j\geq0 \end{align} for arbitrary real $u_i$ and $w_{i\alpha}k^\alpha$. Writing $v_i \equiv w_{i\alpha}k^\alpha$ for convenience, we can rewrite this as \begin{align} \label{bound_on_D_3d} \sum_{i,j,k,l}C_{(ij)(kl)}u_iv_ju_kv_l\geq0 \end{align} for arbitrary real $\vec{u}$ and $\vec{v}$. In the following, without loss of generality, we assume that $\vec{u}$ and $\vec{v}$ are unit vectors. \medskip The positivity constraint \eqref{bound_on_D_3d} is one of the main results of this paper. We will see below that it can also be obtained from requiring analyticity of the photon S-matrix and that it can be used to obtain WGC-like bounds on the charge-to-mass ratios of the particles we integrated out before. \subsection{Analyticity Constraints} \label{analyticity3d} \begin{figure}[t] \centering \includegraphics[scale=1.2]{contour} \caption{\label{fig:contour}\emph{The blue curve in the left figure is the integration contour for Eq.~\eqref{int_IR}, which captures the IR physics. On the other hand, the one in the right figure is for Eq.~\eqref{int_UV}, which carries the UV information. The integrand has a pole at the origin and discontinuities on the real axis associated with on-shell intermediate states (depicted in red).}} \end{figure} We now derive the same positivity constraint by using the optical theorem and analyticity of scattering amplitudes. 
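As a quick aside, the algebraic step from the quadratic form $\sum_{i,j}D_{ij}u_iu_j$ built from \eqref{dispersion3D} to the quartic bound \eqref{bound_on_D_3d} can be checked by brute force. The sketch below (ours; generic random coefficients, using the paper's convention $A_{(ij)}=A_{ij}+A_{ji}$, with the summed indices of $D_{ij}$ renamed $a,b$ to avoid clashing with the free photon indices) verifies that the two forms agree up to an overall factor of two:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3  # number of U(1) factors
C = rng.standard_normal((N, N, N, N))  # generic coefficients C_{ijkl}

def Csym(i, j, k, l):
    # C_{(ij)(kl)} with the convention A_{(ij)} = A_{ij} + A_{ji}
    return C[i, j, k, l] + C[j, i, k, l] + C[i, j, l, k] + C[j, i, l, k]

def D(v):
    # D_{ij} = sum_{a,b} (C_{(ia)(jb)} + C_{(ja)(ib)}) v_a v_b,
    # where v_a stands for w_a . k
    return np.array([[sum((Csym(i, a, j, b) + Csym(j, a, i, b)) * v[a] * v[b]
                          for a in range(N) for b in range(N))
                      for j in range(N)] for i in range(N)])

def quartic(u, v):
    # sum_{i,j,k,l} C_{(ij)(kl)} u_i v_j u_k v_l
    return sum(Csym(i, j, k, l) * u[i] * v[j] * u[k] * v[l]
               for i in range(N) for j in range(N)
               for k in range(N) for l in range(N))

u, v = rng.standard_normal(N), rng.standard_normal(N)
lhs, rhs = u @ D(v) @ u, 2.0 * quartic(u, v)
print(np.isclose(lhs, rhs))  # True: positivity of D_ij is the quartic bound
```

Hence requiring $D_{ij}$ to be positive semi-definite for all backgrounds is the same statement as positivity of the quartic form for all real $\vec u,\vec v$.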
The key is that we may relate IR amplitudes to the UV ones by virtue of analyticity. Following~\cite{Adams:2006sv}, let us consider a contour integral, \begin{align} \oint \frac{\d s}{2\pi i}\frac{\mathcal{M}(1_i,2_j,3_k,4_l;s)}{s^3}\,, \end{align} of the photon forward scattering amplitude $\mathcal{M}(1_i,2_j,3_k,4_l;s)$, where, e.g., $1_i$ means that the first photon is for the $i$-th $U(1)$ and $s$ is the Mandelstam variable satisfying $s=-(k_1+k_2)^2$. The integration contour is defined such that it encircles the origin $s=0$ (see Fig.~\ref{fig:contour}). We then evaluate this integral in two different ways, based on the IR and UV viewpoints. First, our effective Lagrangian~\eqref{finalEFT_dualised} tells us that the photon forward scattering takes the form \begin{align} \mathcal{M}(1_i,2_j,3_k,4_l;s)=\left( C_{(ij)(kl)}+C_{(kl)(ij)}+C_{(il)(kj)}+C_{(kj)(il)}\right)s^2+\mathcal{O}(s^3)\,, \end{align} where $C_{ijkl}$ is defined in Eq.~\eqref{cijkl}. The integral is then expressed in the IR language as \begin{align} \label{int_IR} \oint \frac{\d s}{2\pi i}\frac{\mathcal{M}(1_i,2_j,3_k,4_l;s)}{s^3}= C_{(ij)(kl)}+C_{(kl)(ij)}+C_{(il)(kj)}+C_{(kj)(il)}\,. \end{align} \medskip It is further convenient to introduce a crossing-symmetric combination of amplitudes, \begin{align} \mathcal{M}(s)=\sum_{i,j,k,l}u_iv_ju_kv_l\mathcal{M}(1_i,2_j,3_k,4_l;s)\,, \end{align} where $\vec{u}$ and $\vec{v}$ are arbitrary real unit vectors and \begin{equation} \oint \frac{\d s}{2\pi i}\frac{\mathcal{M}(s)}{s^3} = 4\sum_{i,j,k,l}u_iv_ju_kv_l C_{(ij)(kl)}\,. 
\end{equation} We may then deform the integration contour as (see Fig.~\ref{fig:contour}) \begin{align} \label{int_UV} \oint \frac{\d s}{2\pi i}\frac{\mathcal{M}(s)}{s^3} =\left(\int_{-\infty}^{-s_0} + \int_{s_0}^\infty\right) \frac{\d s}{2\pi i}\frac{{\rm Disc}[\mathcal{M}(s)]}{s^3}\,, \end{align} where we assumed that the scattering amplitude is analytic away from the real axis of $s$ and satisfies the Froissart bound, which allows us to drop the boundary contributions at infinity. Here, $s_0$ is the square of the lowest energy at which non-analyticity appears. Analyticity also implies that the discontinuity function is nothing but the imaginary part of the amplitude: \begin{align} {\rm Disc}[\mathcal{M}(s)] &=\mathcal{M}(s+i\epsilon)-\mathcal{M}(s-i\epsilon) =2i\,{\rm Im}\,\mathcal{M}(s+i\epsilon)\,. \end{align} Here, we used the Schwarz reflection principle, which implies that $\mathcal{M}(s-i\epsilon)=\mathcal{M}^*(s+i\epsilon)$ for real $s$. We therefore have \begin{align} \label{eq:UV_IR} \oint \frac{\d s}{2\pi i}\frac{\mathcal{M}(s)}{s^3} =\left(\int_{-\infty}^{-s_0} + \int_{s_0}^\infty\right) \frac{\d s}{\pi}\frac{{\rm Im}\,\mathcal{M}(s)}{s^3} =\frac{2}{\pi} \int_{s_0}^\infty \d s\frac{{\rm Im}\,\mathcal{M}(s)}{s^3}\,, \end{align} where at the second equality we used the crossing-symmetric property of $\mathcal{M}(s)$. Notice here that the l.h.s.\ is evaluated in the IR, whereas the r.h.s.\ is an integration over the UV region. This is how the UV information is encoded in the IR observables. \medskip To show that the r.h.s.\ is positive, we use the optical theorem, which states that \begin{align} 2{\rm Im}\,\mathcal{M}(1_i,2_j,3_k,4_l;s) =\sum_{n}\mathcal{M}_{ij\to n}(s)\mathcal{M}_{kl\to n}^*(s)\,, \end{align} where the r.h.s.\ is a sum over the partial waves with the intermediate on-shell state $n$. 
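The logic of the sum rule \eqref{eq:UV_IR} can be made tangible with a toy amplitude. The sketch below (our illustration, not the photon amplitude of the text) takes the crossing-symmetric function $\mathcal{M}(s)=s^2/(M^2-s)+s^2/(M^2+s)$, whose only non-analyticities are simple poles at $s=\pm M^2$ mimicking a single UV state, and checks numerically that the contour integral around the origin reproduces the low-energy $s^2$ coefficient $2/M^2$ fixed by the "UV" poles:

```python
import numpy as np

M2 = 2.0  # pole position s0 = M^2 of a single toy UV state

def amp(s):
    # crossing symmetric: amp(-s) = amp(s), simple poles at s = +-M^2
    return s**2 / (M2 - s) + s**2 / (M2 + s)

# IR side: (1/2 pi i) oint amp(s)/s^3 ds on a small circle |s| = 0.5.
# With s = r e^{i theta}, ds = i s dtheta, so the integrand becomes amp/s^2.
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
s = 0.5 * np.exp(1j * theta)
c2 = np.mean(amp(s) / s**2).real  # coefficient of s^2 in the IR expansion

# UV side: minus the residues of amp(s)/s^3 at s = +-M^2, i.e. 2/M^2 > 0
c2_uv = 2.0 / M2

print(c2, c2_uv)  # both equal 1.0 for M^2 = 2
```

The coefficient comes out positive because the $s$-channel pole contributes with a definite sign, which is the toy version of the positivity argument in the text.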
It is easy to see that the imaginary part of the crossing-symmetric amplitude is positive, \begin{align} 2{\rm Im}\,\mathcal{M}(s)=\sum_{n}\Big|\sum_{i,j}\mathcal{M}_{ij\to n}u_iv_j\Big|^2\geq0\,. \end{align} All in all, we arrive at the positivity bound \begin{align} 4\sum_{i,j,k,l}u_iv_ju_kv_l C_{(ij)(kl)} =\oint \frac{\d s}{2\pi i}\frac{\mathcal{M}(s)}{s^3} =\frac{2}{\pi} \int_{s_0}^\infty \d s\frac{{\rm Im}\,\mathcal{M}(s)}{s^3}\geq0\,, \end{align} which is the same as the one derived in the previous subsection from causality. \subsection{Bounds on Charge-to-Mass Ratios} \label{ex:3d} We now use the positivity condition~\eqref{bound_on_D_3d} on the EFT parameters to derive bounds on the charge-to-mass ratios of the scalars and fermions. Using \eqref{cijkl}, we find \begin{align} \nonumber 0 &\le M_3^2\sum_{i,j,k,l}C_{(ij)(kl)}u_iu_kv_jv_l \\ &=\sum_{a}\frac{1}{480\pi|m_a|}\bigg[ |\vec{u}\cdot \vec{z}_a|^2|\vec{v}\cdot \vec{z}_a|^2 - \frac{3}{8}|\vec{u}\cdot \vec{z}_a|^2 - \frac{3}{8}|\vec{v}\cdot \vec{z}_a|^2 + \frac{1}{4}(\vec{u}\cdot\vec{v})(\vec{u}\cdot \vec{z}_a)(\vec{v}\cdot \vec{z}_a) \bigg] \notag \\ &\quad\, +\gamma_f(\vec{u},\vec{v}) \label{inequality1} \end{align} for fermions and \begin{align} \nonumber 0 &\le M_3^2\sum_{i,j,k,l}C_{(ij)(kl)}u_iu_kv_jv_l \\ &=\sum_{a}\frac{7}{3840\pi|m_a|}\bigg[ |\vec{u}\cdot \vec{z}_a|^2|\vec{v}\cdot \vec{z}_a|^2 -\frac{2}{7}|\vec{u}\cdot \vec{z}_a|^2 -\frac{2}{7}|\vec{v}\cdot \vec{z}_a|^2 +\frac{8}{7}(\vec{u}\cdot\vec{v})(\vec{u}\cdot \vec{z}_a)(\vec{v}\cdot \vec{z}_a) \bigg] \notag \\ &\quad\, +\gamma_s(\vec{u},\vec{v}) \label{inequality2} \end{align} for scalars. Here, the functions $\gamma_{f/s}(\vec{u},\vec{v})$ are defined such that they contain all $\mathcal{O}(z^0)$ contributions to the inequalities, i.e., those which are independent of the $U(1)$ charges.
They are given by \begin{align} \gamma_f(\vec{u},\vec{v})&= \sum_{a}\frac{1}{480\pi|m_a|} \left( \frac{3}{4} +\frac{1}{4}(\vec{u}\cdot\vec{v})^2 \right) +M_3^2\sum_{i,j,k,l}c_{(ij)(kl)}u_iu_kv_jv_l\,,\\ \gamma_s(\vec{u},\vec{v})&= \sum_{a}\frac{7}{3840\pi|m_a|} \left( \frac{4}{7}+\frac{8}{7}(\vec{u}\cdot\vec{v})^2 \right) +M_3^2\sum_{i,j,k,l}c_{(ij)(kl)}u_iu_kv_jv_l\,. \end{align} As we mentioned in Sec.~\ref{setup3d}, the coefficients $c_{ijkl}$ in the above expressions depend on the details of the UV completion of the EFT, which is why we leave them as arbitrary numbers. The values of $\gamma_f(\vec{u},\vec{v})$ and $\gamma_s(\vec{u},\vec{v})$ are therefore in general unknown. Let us stress, however, that $\gamma_f(\vec{u},\vec{v})$ and $\gamma_s(\vec{u},\vec{v})$ depend on the cutoff scale $\Lambda$ of the EFT. For example, we can imagine raising or lowering $\Lambda$ such that some particles whose masses were originally above the cutoff scale are now below it, or vice versa. In general, the inequalities \eqref{inequality1} and \eqref{inequality2} are trivially satisfied if $\gamma_f(\vec{u},\vec{v})$, $\gamma_s(\vec{u},\vec{v})$ are positive and large enough. However, whenever they are sufficiently small in a given EFT at some energy scale $\Lambda$, this leads to nontrivial bounds on the charge-to-mass ratios. \medskip Since $\vec{u}$ and $\vec{v}$ are arbitrary unit vectors, we may obtain the strongest bounds on the charge-to-mass ratios by scanning over all the choices of $\vec{u}$ and $\vec{v}$. However, it is illustrative and sufficient for our purpose to focus on two extremal cases: $\vec{u}=\vec{v}$ and $\vec{u}\cdot\vec{v}=0$.
Let us begin with the case $\vec{u}=\vec{v}$, under which the positivity conditions \eqref{inequality1} and \eqref{inequality2} are reduced to \begin{align} & \sum_{a}\frac{1}{480\pi|m_a|} |\vec{u}\cdot \vec{z}_a|^2\Big(|\vec{u}\cdot \vec{z}_a|^2- \frac{1}{2}\Big) +\gamma_f(\vec{u},\vec{u})\geq0 && \text{(fermions)}\,, \label{inequality3a} \\ & \sum_{a}\frac{7}{3840\pi|m_a|} |\vec{u}\cdot \vec{z}_a|^2\left(|\vec{u}\cdot \vec{z}_a|^2 + \frac{4}{7}\right) +\gamma_s(\vec{u},\vec{u})\geq0 && \text{(scalars)}\,. \label{inequality3b} \end{align} Note that the inequalities have a different $z$-dependence in the fermion and scalar cases. In particular, the scalar contribution to \eqref{inequality3b} is always positive, so that the condition is trivially satisfied unless $\gamma_s(\vec{u},\vec{u})$ is negative. In the fermionic case \eqref{inequality3a}, the condition is trivially satisfied if $\gamma_f(\vec{u},\vec{u})$ is positive and large enough but provides nontrivial bounds on charge-to-mass ratios when $\gamma_f(\vec{u},\vec{u})$ is in a certain range. As an illustrative case, let us consider $\gamma_f(\vec{u},\vec{u})=0$: the inequality then requires the existence of a particle satisfying\footnote{Our numerical bound differs from the one obtained in \cite{Cheung:2014ega} for the single-$U(1)$ case due to a different convention for the charge-to-mass ratio, i.e., $z_\text{\cite{Cheung:2014ega}}=\frac{\sqrt{2} qg\sqrt{M_3}}{|m|}$ in units where $M_3=\frac{1}{2}$. Similarly, in 4D, $z_\text{\cite{Cheung:2014ega}}=\frac{\sqrt{2}qgM_4}{|m|}$ in units where $M_4=\frac{1}{\sqrt{2}}$.} \begin{align} |\vec{u}\cdot\vec{z}_a|^2\geq \frac{1}{2}\,. \label{3dchc} \end{align} Since this condition has to be satisfied for an arbitrary unit vector $\vec{u}$, it means that the charge-to-mass vectors $\vec z_a$ span a convex hull which contains a ball of radius $1/\sqrt{2}$.
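The reduction from \eqref{inequality1} and \eqref{inequality2} to \eqref{inequality3a} and \eqref{inequality3b} is simple coefficient bookkeeping: for $\vec{u}=\vec{v}$ one has $\vec{u}\cdot\vec{v}=1$ and $|\vec{v}\cdot\vec{z}_a|^2=|\vec{u}\cdot\vec{z}_a|^2$. A sympy sketch of this check (ours), writing $X=|\vec{u}\cdot\vec{z}_a|^2$:

```python
import sympy as sp

X = sp.symbols('X', nonnegative=True)  # X = |u.z_a|^2, with v = u so u.v = 1
# brackets of the fermion/scalar positivity conditions evaluated at u = v
fermion = X*X - sp.Rational(3, 8)*X - sp.Rational(3, 8)*X + sp.Rational(1, 4)*X
scalar  = X*X - sp.Rational(2, 7)*X - sp.Rational(2, 7)*X + sp.Rational(8, 7)*X
# fermions: X*(X - 1/2), negative only for |u.z_a|^2 < 1/2
print(sp.expand(fermion - X*(X - sp.Rational(1, 2))))  # 0
# scalars: X*(X + 4/7), manifestly non-negative
print(sp.expand(scalar - X*(X + sp.Rational(4, 7))))   # 0
```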
Notice that the bound on the charge-to-mass ratio becomes stronger (weaker) if $\gamma_f(\vec{u},\vec{u})$ gives a negative (positive) contribution. This is a natural extension of the original Cheung-Remmen argument~\cite{Cheung:2014ega} to the multiple $U(1)$ case. \medskip Our result is the 3D analogue of the convex-hull condition, which was originally motivated in 4D using black-hole arguments \cite{Cheung:2014vva}. Since there are no black holes in 3D (unless one considers AdS boundary conditions), there is no extremality bound to compare with the numerical factor in our bound \eqref{3dchc}. Nevertheless, our bound states that the convex hull of the charge-to-mass vectors $\vec z_a$ must contain a ball of radius $\mathcal{O}(1)$. 3D EFTs in which $\gamma_f(\vec{u},\vec{u})$ or $\gamma_s(\vec{u},\vec{u})$ is small enough at a given cutoff scale $\Lambda$ must therefore satisfy a convex-hull condition even though this is not required by any arguments involving black hole decay.\footnote{For nonzero $\gamma_f(\vec{u},\vec{u})$, $\gamma_s(\vec{u},\vec{u})$ with general $c_{ijkl}$, the numerical bound depends on $\vec u$ and is therefore not necessarily isotropic, i.e., the object contained in the convex hull of the charge-to-mass vectors need not be a ball.} \medskip Perhaps even more interestingly, a new type of bound may be obtained by choosing $\vec{u}\cdot\vec{v}=0$ in \eqref{inequality1} and \eqref{inequality2}. This yields \begin{align} & \sum_{a}\frac{1}{480\pi|m_a|}\bigg[ |\vec{u}\cdot \vec{z}_a|^2|\vec{v}\cdot \vec{z}_a|^2 - \frac{3}{8}|\vec{u}\cdot \vec{z}_a|^2 - \frac{3}{8}|\vec{v}\cdot \vec{z}_a|^2\bigg] +\gamma_f(\vec{u},\vec{v})|_{\vec{v}\perp\vec{u}} \ge 0 && \text{(fermions)} \,, \\ & \sum_{a}\frac{7}{3840\pi|m_a|}\bigg[ |\vec{u}\cdot \vec{z}_a|^2|\vec{v}\cdot \vec{z}_a|^2 -\frac{2}{7}|\vec{u}\cdot \vec{z}_a|^2 -\frac{2}{7}|\vec{v}\cdot \vec{z}_a|^2\bigg] +\gamma_s(\vec{u},\vec{v})|_{\vec{v}\perp\vec{u}} \ge 0 && \text{(scalars)} \,.
\end{align} If $\gamma_f(\vec{u},\vec{v})|_{\vec{v}\perp\vec{u}}$ and $\gamma_s(\vec{u},\vec{v})|_{\vec{v}\perp\vec{u}}$ are below a critical value, these inequalities can only be satisfied if the first terms in the brackets are nonzero. Hence, for any choice of $\vec u$, $\vec v$ with $\vec u \cdot \vec v = 0$, there must then exist at least one particle satisfying both $\vec u \cdot \vec z_a \neq 0$ and $\vec v \cdot \vec z_a \neq 0$. This implies in particular that it is not consistent to have a theory in which $\gamma_{f}(\vec{u},\vec{v})|_{\vec{v}\perp\vec{u}}$ or $\gamma_{s}(\vec{u},\vec{v})|_{\vec{v}\perp\vec{u}}$ is small and the charge vectors of all particles are orthogonal to one another. We may rephrase this as the statement that we require the existence of bifundamentals for \emph{any} (orthogonal) basis choice for the $U(1)$ gauge fields. \begin{figure}[t] \centering \vspace{-4em} \includegraphics[scale=0.45]{examples} \put(-305,108){$\scriptstyle z_1$} \put(-163,108){$\scriptstyle z_1$} \put(-20,108){$\scriptstyle z_1$} \put(-355,182){$\scriptstyle z_2$} \put(-213,182){$\scriptstyle z_2$} \put(-70,182){$\scriptstyle z_2$}\\[-2em] \caption{\label{fig:examples}\emph{Positivity constraints for 3D EFTs with two $U(1)$'s and particles with charge-to-mass vectors $\vec z_a$ (orange arrows). The first example does not satisfy the convex-hull condition (with the positivity bound indicated by the blue circle), and the second one does not have bifundamental particles for all basis choices of the $U(1)$ gauge fields. 
The third example is consistent with both positivity constraints.}} \end{figure} \medskip As a simple example, consider a theory with two $U(1)$'s and take $u_i=\delta_{i1}$ and $v_i=\delta_{i2}$: \begin{align} & \sum_{a}\frac{1}{480\pi|m_a|} \bigg[ z_{a1}^2z_{a2}^2 -\frac{3}{8}(z_{a1}^2+z_{a2}^2) \bigg] +\gamma_f(\delta_{i1},\delta_{i2})\geq0 && \text{(fermions)}\,,\\ &\sum_{a}\frac{7}{3840\pi|m_a|} \bigg[ z_{a1}^2z_{a2}^2 -\frac{2}{7}(z_{a1}^2+z_{a2}^2) \bigg] +\gamma_s(\delta_{i1},\delta_{i2})\geq0 && \text{(scalars)}\,. \end{align} If $\gamma_f(\delta_{i1},\delta_{i2})$ and $\gamma_s(\delta_{i1},\delta_{i2})$ are sufficiently small, these inequalities can only be satisfied for nonzero $z_{a1}^2z_{a2}^2$, i.e., we require at least one bifundamental particle. This requirement together with the convex-hull condition is, for example, realized in a theory which has \mbox{(anti-)particles} with charge vectors $(\pm 1,\pm 1)$ and appropriately chosen masses. However, the same argument can now also be repeated for any other $\vec u$, $\vec v$ satisfying $\vec u \cdot \vec v =0$, e.g., for the choice $u_i=\frac{1}{\sqrt{2}}(\delta_{i1}+\delta_{i2})\equiv\delta_{i+}$ and $v_i=\frac{1}{\sqrt{2}}(\delta_{i1}-\delta_{i2})\equiv\delta_{i-}$. This yields \begin{align} & \sum_{a}\frac{1}{480\pi|m_a|} \bigg[ \frac{1}{4}(z_{a1}+z_{a2})^2(z_{a1}-z_{a2})^2 -\frac{3}{8}(z_{a1}^2+z_{a2}^2) \bigg] +\gamma_f(\delta_{i+},\delta_{i-})\geq0 && \text{(fermions)}\,,\\ &\sum_{a}\frac{7}{3840\pi|m_a|} \bigg[ \frac{1}{4}(z_{a1}+z_{a2})^2(z_{a1}-z_{a2})^2 -\frac{2}{7}(z_{a1}^2+z_{a2}^2) \bigg] +\gamma_s(\delta_{i+},\delta_{i-})\geq0 && \text{(scalars)}\,. \end{align} In order for the first terms in the brackets to be nonzero, we also need at least one particle charged under both $A^\prime_{1\mu}=\frac{1}{\sqrt{2}}(A_{1\mu}+A_{2\mu})$ and $A^\prime_{2\mu}=\frac{1}{\sqrt{2}}(A_{1\mu}-A_{2\mu})$. 
A theory with only orthogonal charge vectors such as $(\pm 1,\pm 1)$ is therefore not consistent for sufficiently small $\gamma_f(\delta_{i+},\delta_{i-})$, $\gamma_s(\delta_{i+},\delta_{i-})$. The positivity constraints for EFTs with two $U(1)$'s are illustrated in Fig.~\ref{fig:examples}. \medskip To summarize, the positivity condition \eqref{bound_on_D_3d} for $\vec{u}=\vec{v}$ leads to bounds similar to the convex-hull type WGC bounds unless the UV-sensitive parameters $\gamma_f(\vec u,\vec u)$ and $\gamma_s(\vec u,\vec u)$ are large enough. Furthermore, the positivity condition for $\vec{u}\cdot\vec{v}=0$ requires the existence of bifundamental particles for all basis choices of the $U(1)$'s unless $\gamma_f(\vec u,\vec v)|_{\vec{v}\perp\vec{u}}$ and $\gamma_s(\vec u,\vec v)|_{\vec{v}\perp\vec{u}}$ are positive and large enough. The second type of condition turns out to be useful when we discuss the tower WGC later. \section{Infrared Consistency in $D=4$} \label{sec:4d} Let us now move on to discuss causality and analyticity constraints in 4D EFTs. Since gravity is dynamical in 4D, such an analysis is somewhat more complicated than in the previously discussed 3D case. Nevertheless, we will be able to obtain positivity bounds similar to those obtained in the previous section, which we will again use to derive bounds on the charge-to-mass ratios of charged particles. \subsection{Setup} \label{setup4d} The starting Wilsonian EFT (with cutoff $\Lambda$) is \begin{align} \label{4deffaction} & \Gamma = \int \d^4 x \sqrt{-g} \bigg[ \frac{M_4^2}{2} R - \frac{1}{4} \sum_i F_i^2 \bigg] + \text{H.O.} + \left\{\begin{matrix*}[l] \Gamma_\text{scalar} \\ \Gamma_\text{fermion} \end{matrix*}\right. 
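The basis dependence in the example above can also be checked numerically. The following sketch (ours, with the hypothetical charge vectors $(\pm1,\pm1)$ from the text) evaluates the crucial $z^4$ term $|\vec{u}\cdot\vec{z}_a|^2|\vec{v}\cdot\vec{z}_a|^2$ for each particle in both bases:

```python
import numpy as np

# hypothetical spectrum: (anti-)particles with charge-to-mass vectors (+-1, +-1)
charges = [np.array([1., 1.]), np.array([1., -1.]),
           np.array([-1., -1.]), np.array([-1., 1.])]

def z4_terms(u, v):
    """|u.z|^2 |v.z|^2 per particle -- the term that must be nonzero."""
    return [float((z @ u)**2 * (z @ v)**2) for z in charges]

# original basis u = e1, v = e2: every particle is a bifundamental
print(z4_terms(np.array([1., 0.]), np.array([0., 1.])))   # [1.0, 1.0, 1.0, 1.0]

# rotated basis (e1 +- e2)/sqrt(2): the z^4 term vanishes for every particle
s2 = 1/np.sqrt(2)
print(z4_terms(np.array([s2, s2]), np.array([s2, -s2])))  # [0.0, 0.0, 0.0, 0.0]
```

In the rotated basis the $z^4$ term vanishes for every particle, so the inequalities indeed fail once the $\gamma$ terms are small enough.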
\,, \end{align} where $M_4$ is the 4D Planck mass and we consider scalars or fermions with actions \begin{align} \Gamma_\text{scalar} &= \int \d^4 x \sqrt{-g}\, \sum_a \left(-|D_{\mu} \phi_a|^2 - m_a^2 |\phi_a|^2\right), \\ \Gamma_\text{fermion} &= \int \d^4 x \sqrt{-g}\, \sum_a\bar\psi_a (-\slashed{D} - m_a) \psi _a\,. \end{align} We define the charge-to-mass ratio of a matter field as \begin{align} z_{ai}\equiv \frac{q_{ai}g_i M_4}{|m_a|}\,. \end{align} \medskip Note that, unlike in 3D, there is no Chern-Simons term in \eqref{4deffaction}. However, as before, we allow higher-dimensional operators whose coefficients depend on the UV completion of the EFT. They may, for example, be generated by loops of heavy particles with masses above the cutoff scale and are therefore arbitrary from the low-energy point of view. In general, the operators are given by combinations of Riemann tensors and gauge field strengths (and derivatives thereof) but some of them can be eliminated by field redefinitions. The general form of the higher-dimensional operators is thus (see App.~\ref{simplify_action} for more details) \begin{align} \label{ho4d} \text{H.O.} = \sum_{i,j,k,l} \left[ c_{1ijkl}(F_i\cdot F_j)(F_k\cdot F_l) + c_{2ijkl}(F_i\cdot \tilde{F}_j)(F_k\cdot \tilde{F}_l) \right] +\sum_{i,j}c_{3ij}W^{\mu\nu\rho\sigma}F_{i\mu\nu}F_{j\rho\sigma} \end{align} up to terms with more than four derivatives. Here, $\tilde F_{i\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\rho\lambda}F_i^{\rho\lambda}$ is the dual gauge field strength, $W_{\mu\nu\rho\sigma} = R_{\mu\nu\rho\sigma} - \frac{1}{2}(g_{\mu[\rho}R_{\sigma]\nu}-g_{\nu[\rho}R_{\sigma]\mu}) + \frac{1}{6} R g_{\mu[\rho}g_{\sigma]\nu}$ is the Weyl tensor and $c_{1ijkl}$, $c_{2ijkl}$, $c_{3ij}$ are undetermined coefficients. \medskip As in 3D, we further integrate out charged matter in order to get the final EFT we are interested in. 
The final 1-loop 4-derivative effective action then reads (see App.~\ref{app:heatkernel} for the explicit computation) \begin{align} \Gamma_1 & = \int \d^4 x \sqrt{-g} \left[ \frac{M_4^2}{2} R - \frac{1}{4} \sum_i F_i \cdot F_i +\sum_{i,j,k,l} C_{1ijkl}(F_i\cdot F_j)(F_k\cdot F_l) \right. \notag \\ &\quad\, \qquad\qquad\quad\, \left. +\sum_{i,j,k,l} C_{2ijkl}(F_i\cdot \tilde{F}_j)(F_k\cdot \tilde{F}_l) +\sum_{i,j}C_{3ij}W^{\mu\nu\rho\sigma}F_{i\mu\nu}F_{j\rho\sigma} \right] \,, \end{align} where we again used field redefinitions to eliminate some of the terms (see App.~\ref{simplify_action}). The coefficients are given by \begin{align} C_{1ijkl} &= c_{1ijkl} + \frac{1}{2880\pi^2M_4^4} \sum_a \left( \frac{7}{8}z_{ai}z_{aj}z_{ak}z_{al} - z_{ai}z_{ak}\delta_{jl} + \frac{3}{4}\mathcal{I}_a \delta_{ik}\delta_{jl}\right) \,, \label{4dcoeff1}\\ C_{2ijkl} &= c_{2ijkl} + \frac{1}{2880\pi^2M_4^4} \sum_a \left( \frac{1}{8}z_{ai}z_{aj}z_{ak}z_{al} - z_{aj}z_{ak}\delta_{il} + \frac{3}{4}\mathcal{I}_a \delta_{il}\delta_{jk}\right) \,, \label{4dcoeff2}\\ C_{3ij} &= c_{3ij} - \sum_a \frac{z_{ai}z_{aj}}{2880\pi^2M_4^2} \label{4dcoeff3} \end{align} for the case of scalar matter and by \begin{align} C_{1ijkl} &= c_{1ijkl} + \frac{1}{2880\pi^2M_4^4} \sum_a \left( 2 z_{ai}z_{aj}z_{ak}z_{al} -\frac{11}{2} z_{ai}z_{ak}\delta_{jl} + \frac{9}{4}\mathcal{I}_a \delta_{ik}\delta_{jl}\right) \,, \label{4dcoeff4} \\ C_{2ijkl} &= c_{2ijkl} + \frac{1}{2880\pi^2M_4^4} \sum_a \left( \frac{7}{2}z_{ai}z_{aj}z_{ak}z_{al} -\frac{11}{2} z_{aj}z_{ak}\delta_{il} + \frac{9}{4}\mathcal{I}_a \delta_{il}\delta_{jk}\right) \,, \label{4dcoeff5} \\ C_{3ij} &= c_{3ij} + \sum_a \frac{z_{ai}z_{aj}}{1440\pi^2M_4^2} \label{4dcoeff6} \end{align} for fermions, where $\mathcal{I}_a = 2\ln\frac{\Lambda}{|m_a|}-\gamma$ and $\gamma$ is the Euler-Mascheroni constant. 
Similarly to our 3D analysis, we observe the structure $C_{1ijkl}, C_{2ijkl} \sim \mathcal{O}(z^4)+\mathcal{O}(z^2)+\mathcal{O}(z^0)$, $C_{3ij} \sim \mathcal{O}(z^2)+\mathcal{O}(z^0)$. Here, the scalar/fermion loops contribute to all three types of terms, while the $\mathcal{O}(z^0)$ terms receive a further contribution from the UV-sensitive operators \eqref{ho4d}. Furthermore, there are contributions from graviton and photon loops which are also of the order $\mathcal{O}(z^0)$. We refrain from computing these contributions explicitly and absorb them into the unknown coefficients $c_{1ijkl}$, $c_{2ijkl}$ and $c_{3ij}$ without loss of generality. \medskip Let us again point out the domain of validity of our results. Considering matter loops with photon and graviton legs, we find that our 1-loop effective action is valid in the perturbative regime \begin{equation} \alpha \equiv |qg| \ll 1 \,, \qquad \beta \equiv \frac{|m|}{M_4} \ll 1 \,, \end{equation} where we dropped the $(a,i)$ indices for simplicity. Restricting to terms with at most four derivatives is valid in the weak-field regime \begin{equation} \frac{|qg F|}{m^2} \ll 1 \,, \qquad \frac{|R|}{m^2} \ll 1 \,. \end{equation} Since, in 4D, the charge-to-mass ratio $z$ satisfies $\displaystyle |z| = \frac{|qg| M_4}{|m|} = \frac{\alpha}{\beta}$, our EFT expansion is valid for a wide range of values \begin{equation} \alpha \ll |z| \ll \frac{1}{\beta} \, \end{equation} including the regime $z \sim \mathcal{O}(1)$ around the WGC bound. \subsection{Causality Constraints} Let us now discuss subluminality constraints on the IR-effective Lagrangian \begin{align} \nonumber \mathcal{L}&= \frac{M_4^2}{2} R - \frac{1}{4} \sum_i F_i \cdot F_i +\sum_{i,j,k,l}\left[ C_{1ijkl}(F_i\cdot F_j)(F_k\cdot F_l) +C_{2ijkl}(F_i\cdot \tilde{F}_j)(F_k\cdot \tilde{F}_l) \right] \\ \label{4DEFT} &\quad +\sum_{i,j}C_{3ij}W^{\mu\nu\rho\sigma}F_{i\mu\nu}F_{j\rho\sigma}\,. 
\end{align} Just as in the 3D case, we turn on background gauge fields and a background metric and then require subluminality of fluctuations on this background to constrain the EFT parameters. However, in contrast to the 3D case, the graviton propagates in 4D and kinetically mixes with the photons in the presence of nontrivial electromagnetic backgrounds. To avoid technical complications due to such kinetic mixings, we follow Cheung and Remmen~\cite{Cheung:2014ega} and consider propagation in a thermal photon gas, where the electromagnetic fields have vanishing thermal average, $\overline{F_{i\mu\nu}}=0$, but nonzero, constant variance $\overline{F_{i\mu\nu}F_{j\rho\sigma}}\neq0$. \medskip In such a thermal photon gas, the part of the Lagrangian quadratic in the gauge field fluctuations takes the form \begin{align} \nonumber \mathcal{L}&=-\frac{1}{4}\sum_{i}F_{i\alpha\beta}F_i^{\alpha\beta} +\sum_{i,j,k,l}\left(C_{1(ij)(kl)}\overline{F_{j\alpha\beta}F_{l\gamma\delta}} +C_{2(ij)(kl)}\overline{\tilde{F}_{j\alpha\beta}\tilde{F}_{l\gamma\delta}}\right)F_i^{\alpha\beta}F_k^{\gamma\delta} \\ &\quad +\sum_{i,j}C_{3ij}\overline{W}_{\alpha\beta\gamma\delta}F_i^{\alpha\beta}F_j^{\gamma\delta}\,. \end{align} Here, we focus on the geometric-optics limit, where the photon wavelength is much shorter than the spacetime curvature scale. The indices $\alpha,\beta,\ldots$ are again for locally flat coordinates. We also performed a field redefinition to simplify the action. To make the argument more concrete, let us make the following ansatz for the photon background:\footnote{The energy density $\rho$ and pressure $p$ of a photon gas with temperature $T$ are given by $\rho=3p=\pi^2T^4/15$.
As explained, e.g., in~\cite{Cheung:2014ega}, we therefore have $\overline{F_{\alpha\beta}F_{\gamma\delta}}=\overline{\tilde{F}_{\alpha\beta}\tilde{F}_{\gamma\delta}} =\frac{\pi^2}{45}T^4(\delta_{\alpha\gamma}\delta_{\beta\delta}-\delta_{\alpha\delta}\delta_{\beta\gamma}) $ in the rest frame of the photon gas.} \begin{align} \overline{F_{i\alpha\beta}F_{j\gamma\delta}}=\overline{\tilde{F}_{i\alpha\beta}\tilde{F}_{j\gamma\delta}}=\frac{\pi^2}{45}T_{ij}^4(\delta_{\alpha\gamma}\delta_{\beta\delta}-\delta_{\alpha\delta}\delta_{\beta\gamma})\,, \label{photongas} \end{align} where the Kronecker delta $\delta_{\alpha\beta}$ breaks Lorentz invariance. The symmetric matrix $T_{ij}^4$ specifies the properties of the photon gas. For example, when photons of all $N$ gauge fields are in thermal equilibrium such that they have the same temperature $T$, the matrix takes the form $T_{ij}^4=\delta_{ij}T^4$. Later, we will consider more general situations. \medskip Under the assumption \eqref{photongas}, the kinetic matrix for $2N$ helicity modes of $N$ photons simplifies in momentum space as \begin{align} \label{K4D} &K_{ij}= -\delta_{ij}k^2+D_{ij}\delta_{\alpha\beta}k^\alpha k^\beta \end{align} with $D_{ij}$ defined by \begin{align} D_{ij}=\frac{4\pi^2}{45}\sum_{k,l}\left( C_{1(ik)(jl)}+C_{1(jl)(ik)}+C_{2(ik)(jl)}+C_{2(jl)(ik)} \right)T^4_{kl}\,, \end{align} where we dropped helicity indices because the dispersion is helicity-independent in our setup. We also used $\overline{W}_{\alpha\beta\gamma\delta}=0$ because the FRW spacetime sourced by the background photons is conformally flat. The kinetic matrix may be diagonalized to \begin{align} \widetilde{K}_{ij}={\rm diag}\Big(\big(k_0^2-|\vec{k}|^2\big)+D_1\big(k_0^2+|\vec{k}|^2\big),\ldots ,\big(k_0^2-|\vec{k}|^2\big)+D_N\big(k_0^2+|\vec{k}|^2\big)\Big)\,, \end{align} where the $D_i$'s are the eigenvalues of the matrix $D_{ij}$. 
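The subluminality condition quoted below follows from solving each diagonal entry of $\widetilde{K}_{ij}$ for the dispersion relation. A sympy sketch (ours), assuming the mostly-plus conventions implicit in \eqref{K4D}, i.e., $\delta_{\alpha\beta}k^\alpha k^\beta=k_0^2+|\vec{k}|^2$ and $k^2=-k_0^2+|\vec{k}|^2$:

```python
import sympy as sp

D = sp.symbols('D', real=True)
k0, k = sp.symbols('k0 k', positive=True)
# one diagonal entry of the kinetic matrix set to zero:
# (k0^2 - k^2) + D*(k0^2 + k^2) = 0
k0sq = sp.solve((k0**2 - k**2) + D*(k0**2 + k**2), k0**2)[0]
v2 = k0sq / k**2  # squared phase velocity
# v2 = (1 - D)/(1 + D), which is <= 1 exactly when D >= 0 (for |D| < 1)
print(sp.simplify(v2 - (1 - D)/(1 + D)))   # 0
print(v2.subs(D, sp.Rational(1, 10)) < 1)  # True
```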
Subluminality requires that all eigenvalues $D_i$ of $D_{ij}$ should be non-negative, which can be rephrased as \begin{align} \label{4D_causality1} \sum_{i,j}D_{ij}u_iu_j\geq0 \end{align} for an arbitrary real vector $u_i$. \medskip Finally, let us take a closer look at the constraint~\eqref{4D_causality1} for several photon gas setups. First, when all photons have the same temperature, $T_{ij}^4=\delta_{ij}T^4$, we obtain the condition \begin{align} \sum_{i,j,k}\left( C_{1(ik)(jk)}+C_{2(ik)(jk)} \right)u_iu_j\geq0 \quad \forall u_i\,. \end{align} A stronger condition may be obtained by considering the case where photons of different gauge fields have different temperatures $T_i$, i.e., for $T_{ij}^4={\rm diag}(T_1^4,\ldots,T_N^4)$. By requiring subluminality for an arbitrary choice of the photon temperatures $T_i$, we arrive at the condition \begin{align} \sum_{i,j}\left( C_{1(ik)(jk)}+C_{2(ik)(jk)} \right)u_iu_j\geq0 \quad \forall k, \,u_i\,. \end{align} In order to further generalize this bound, we observe that $T_{ij}^4$ is not $SO(N)$ invariant anymore when each photon has a different temperature. By rotating the photon basis or, equivalently, by considering the case when some linear combination of $N$ photons has a definite temperature, we obtain the condition \begin{align} \label{4Dcausality} \sum_{i,j,k,l}\left(C_{1(ij)(kl)}+C_{2(ij)(kl)}\right)u_iu_kv_jv_l\geq0 \end{align} for arbitrary real vectors $\vec{u}$ and $\vec{v}$, which we assume to be unit vectors without loss of generality. In this paper, we use the condition~\eqref{4Dcausality} to constrain the IR-effective Lagrangian. As we will see, the same condition arises from an analyticity argument under some assumptions. \subsection{Analyticity Constraints} We now discuss constraints from the analyticity of scattering amplitudes. 
In the setup~\eqref{4DEFT}, 4-point amplitudes with the forward-type helicity structure are given by \begin{align} \mathcal{M}(1_i^+,2_j^+,3_k^-,4_l^-)&=\mathcal{M}(1_i^-,2_j^-,3_k^+,4_l^+) \notag \\ & =\left( C_{1(ij)(kl)}+C_{1(kl)(ij)}+C_{2(ij)(kl)}+C_{2(kl)(ij)} \right)s^2 \notag \\ &\quad\, +\text{(graviton exchange)}\,, \\ \mathcal{M}(1_i^+,2_j^-,3_k^-,4_l^+)&=\mathcal{M}(1_i^-,2_j^+,3_k^+,4_l^-)\notag \\ &=\left( C_{1(ij)(kl)}+C_{1(kl)(ij)}+C_{2(ij)(kl)}+C_{2(kl)(ij)} \right)u^2\notag \\ &\quad\, +\text{(graviton exchange)}\,, \end{align} where $C_{1ijkl}$ and $C_{2ijkl}$ are defined in Eqs.~\eqref{4dcoeff1}, \eqref{4dcoeff2}, \eqref{4dcoeff4} and \eqref{4dcoeff5}. The second term on the r.h.s.\ of each equation is from the single graviton exchange in the tree-level Einstein-Maxwell theory. As in the single photon case~\cite{Cheung:2014ega}, this contribution is singular $\sim s^2/t$ in the forward limit $t\to0$ and dominates over the 1-loop corrections from charged matter. Because of this singularity, it is not clear whether it is possible to derive a rigorous bound on higher-dimensional operators using analyticity arguments. However, in order to compare our multiple-photon setup with the single-photon case, let us follow~\cite{Cheung:2014ega} and compute the positivity bound on higher-dimensional operators by simply dropping the singular contribution due to graviton exchange. \medskip To apply the analyticity argument, it is convenient to introduce a linear combination, \begin{align} \nonumber \mathcal{M}_{ijkl}&= \mathcal{M}(1_i^+,2_j^+,3_k^-,4_l^-) +\mathcal{M}(1_i^-,2_j^-,3_k^+,4_l^+) +\mathcal{M}(1_i^+,2_j^-,3_k^-,4_l^+) +\mathcal{M}(1_i^-,2_j^+,3_k^+,4_l^-) \\ &=2\left( C_{1(ij)(kl)}+C_{1(kl)(ij)}+C_{2(ij)(kl)}+C_{2(kl)(ij)} \right)(s^2+u^2)\notag \\ &\quad\,+\text{(graviton exchange)} \,, \end{align} which is $s$-$u$ symmetric with respect to helicities. 
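Since all external photons are massless, the forward limit $t\to0$ sets $u=-s$, so $\mathcal{M}_{ijkl}$ behaves as $\sim s^2$ and the contour argument of Sec.~\ref{analyticity3d} goes through unchanged. A schematic residue check (ours, with $C$ standing for the symmetrized sum of the $C_1$ and $C_2$ coefficients and the graviton pole dropped):

```python
import sympy as sp

s, C = sp.symbols('s C')
u = -s  # forward limit t = 0 with massless photons: s + t + u = 0
# schematic forward combination 2*C*(s^2 + u^2), graviton exchange omitted
M = 2*C*(s**2 + u**2)
print(sp.residue(M/s**3, s, 0))  # 4*C: positivity of the residue bounds C
```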
We further symmetrize the photon index as \begin{align} \mathcal{M}(\vec u,\vec v)=\sum_{i,j,k,l}u_iu_kv_jv_l\mathcal{M}_{ijkl}\,, \end{align} where $u_i$ and $v_i$ are real unit vectors. Just like in the 3D case, such a symmetric combination gives a positivity bound after using the optical theorem. Following the argument of Sec.~\ref{analyticity3d} and neglecting the contribution from graviton exchange, we arrive at the bound \begin{align} \label{bound_on_D} \sum_{i,j,k,l}\left(C_{1(ij)(kl)}+C_{2(ij)(kl)}\right)u_iu_kv_jv_l\geq0 \quad \forall u_i \,, v_i \,. \end{align} Although the argument here is not rigorous because of the singularity due to graviton exchange, we thus obtained the same bound as from the subluminality constraints. \subsection{Bounds on Charge-to-Mass Ratios} We now use the positivity conditions~\eqref{bound_on_D} on the EFT parameters to derive bounds on charge-to-mass ratios. Substituting either \eqref{4dcoeff1}, \eqref{4dcoeff2} or \eqref{4dcoeff4}, \eqref{4dcoeff5}, we may compute the l.h.s.\ for our 4D setup as \begin{align} \nonumber &M_4^4 \sum_{i,j,k,l}\left(C_{1(ij)(kl)}+C_{2(ij)(kl)}\right)u_iu_kv_jv_l \\ \nonumber &=\alpha_{f/s} \sum_{a}\bigg[ |\vec{u}\cdot \vec{z}_a|^2|\vec{v}\cdot \vec{z}_a|^2 - \frac{1}{2}\Big(|\vec{u}\cdot \vec{z}_a|^2+|\vec{v}\cdot \vec{z}_a|^2 +2(\vec{u}\cdot\vec{v})(\vec{u}\cdot \vec{z}_a)(\vec{v}\cdot \vec{z}_a) \Big) \bigg] \\ &\quad +\gamma_{f/s}(\vec{u},\vec{v})\,, \end{align} where $\alpha_f=11/1440\pi^2$ for fermions and $\alpha_s=1/720\pi^2$ for scalars. 
The functions $\gamma_{f/s}(\vec{u},\vec{v})$ are again defined such that they contain all $\mathcal{O}(z^0)$ contributions to the inequalities: \begin{align} \gamma_f(\vec{u},\vec{v}) &= \frac{9}{2880\pi^2} \sum_a \mathcal{I}_a \left( 1 + (\vec u \cdot \vec v)^2 \right) + M_4^4 \sum_{i,j,k,l}\left(c_{1(ij)(kl)}+c_{2(ij)(kl)}\right)u_iu_kv_jv_l\,,\\ \gamma_s(\vec{u},\vec{v}) &= \frac{3}{2880\pi^2} \sum_a \mathcal{I}_a \left( 1 + (\vec u \cdot \vec v)^2 \right) + M_4^4 \sum_{i,j,k,l}\left(c_{1(ij)(kl)}+c_{2(ij)(kl)}\right)u_iu_kv_jv_l \end{align} with $\mathcal{I}_a = 2\ln\frac{\Lambda}{|m_a|}-\gamma$. For any choice of $\vec u,\vec v$, the positivity constraint~\eqref{bound_on_D} then implies nontrivial bounds on the charge-to-mass ratios when $\gamma_{f/s}(\vec{u},\vec{v})$ is in a certain range. Just as we did in 3D, let us now focus on the two illustrative cases $\vec{u}=\vec{v}$ and $\vec{u}\cdot\vec{v}=0$. \medskip First, we consider the case $\vec{u}=\vec{v}$ in which the positivity condition is reduced to \begin{align} \sum_{a}\alpha_{f/s} |\vec{u}\cdot \vec{z}_a|^2\Big(|\vec{u}\cdot \vec{z}_a|^2-2\Big)+\gamma_{f/s}(\vec{u},\vec{u})\geq0\,. \end{align} This condition is trivially satisfied if $\gamma_{f/s}(\vec{u},\vec{u})$ is positive and large enough but provides nontrivial bounds whenever it is sufficiently small. Let us again take $\gamma_{f/s}(\vec{u},\vec{u})=0$ for illustration. The inequality then simplifies to \begin{align} \sum_{a}\alpha_{f/s} |\vec{u}\cdot \vec{z}_a|^2\Big(|\vec{u}\cdot \vec{z}_a|^2-2\Big)\geq0\,, \end{align} which implies the existence of a super-extremal particle satisfying \begin{align} |\vec{u}\cdot\vec{z}_a|\geq \sqrt{2} \,. \end{align} Since we may take an arbitrary unit vector $\vec{u}$, we arrive at the convex-hull condition, which requires a super-extremal particle in any direction of the charge space. 
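As in 3D, the $\vec{u}=\vec{v}$ reduction is simple bookkeeping; a sympy sketch (ours), again writing $X=|\vec{u}\cdot\vec{z}_a|^2$:

```python
import sympy as sp

X = sp.symbols('X', nonnegative=True)  # X = |u.z_a|^2 at u = v (so u.v = 1)
# bracket of the 4D positivity condition evaluated at u = v:
bracket = X*X - sp.Rational(1, 2)*(X + X + 2*X)
print(sp.factor(bracket))  # X*(X - 2): negative precisely for 0 < |u.z_a|^2 < 2
```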
Note that, for $\gamma_{f/s}(\vec{u},\vec{u})=0$, our bound on the charge-to-mass ratios is in fact numerically stronger than a super-extremality bound (which would only require $|\vec{u}\cdot\vec{z}_a|\geq \frac{1}{\sqrt{2}}$). The bound becomes stronger (weaker) if $\gamma_{f/s}(\vec{u},\vec{u})$ is negative (positive). \medskip Another illustrative example is the case $\vec{u}\cdot\vec{v}=0$. For concreteness, let us take $u_i=\delta_{i1}$ and $v_i=\delta_{i2}$, which yields \begin{align} \sum_{a}\alpha_{f/s}\bigg[ z_{a1}^2z_{a2}^2 -\frac{1}{2}z_{a1}^2-\frac{1}{2} z_{a2}^2 \bigg]+ \gamma_{f/s}(\delta_{i1},\delta_{i2}) \geq0\,. \end{align} For sufficiently small $\gamma_{f/s}(\delta_{i1},\delta_{i2})$, this inequality can only be satisfied if $z_{a1}^2z_{a2}^2$ is nonzero, i.e., it implies the existence of at least one bifundamental particle. This is true unless $\gamma_{f/s}(\delta_{i1},\delta_{i2})$ satisfies \begin{align} \gamma_{f/s}(\delta_{i1},\delta_{i2})&\geq \frac{1}{2} \sum_{a}\alpha_{f/s} (z_{a1}^2+z_{a2}^2) \,. \end{align} We can repeat the above argument for any other choice of $\vec u, \vec v$ satisfying $\vec u \cdot \vec v=0$. Following the argument in Sec.~\ref{ex:3d}, we therefore conclude that, for sufficiently small $\gamma_{f/s}(\vec u,\vec v)$, bifundamental particles are required to exist for any orthogonal basis choice of the $U(1)$ gauge fields. \section{Compactification and the Tower WGC} \label{sec:comp} In this section, we analyze causality and analyticity constraints of 4D EFTs compactified on a circle. It was shown in \cite{Heidenreich:2015nta} that a theory which satisfies the convex-hull condition proposed in \cite{Cheung:2014vva} does not necessarily satisfy it after compactification since then also charges under the KK $U(1)$ have to be considered. This was interpreted in \cite{Heidenreich:2015nta} as evidence for a stronger form of the WGC, i.e., the lattice WGC, which is robust under compactification. 
Here, we want to check whether we can conclude anything analogous from the study of infrared consistency conditions. We will find that, in comparison to the results of the previous section, the causality and analyticity constraints indeed become stronger in the compactified theories. This suggests a particular version of the WGC in which the charge-to-mass ratios of an infinite tower of particles are bounded from below. \subsection{Setup} \label{sec:comp-setup} Our starting point is a 4D EFT with metric $G_{MN}$ and either a scalar or a fermion charged under a $U(1)$ gauge field $A_M$. Here and in the following, we denote the 4D coordinates by $x^M=(x^\mu, x^3)$ with $\mu=0,1,2$ and the coordinate along the circle by $x^3$. We consider the effective action \begin{equation} \label{ea} \Gamma = \int \d^4 x \sqrt{-G} \left( \frac{M_4^2}{2} R - \frac{1}{4}F^2 \right) + \text{H.O.} + \left\{\begin{matrix*}[l] \Gamma_\text{scalar} \\ \Gamma_\text{fermion} \end{matrix*}\right.\,, \end{equation} where ``H.O.'' denotes possible higher-derivative terms and \begin{align} \Gamma_\text{scalar} &= \int \d^4 x \sqrt{-G}\, \left(-\left|\partial_M \Phi + iqg_4 A_M \Phi\right|^2 - m^2 \left|\Phi\right|^2 \right)\,, \\ \Gamma_\text{fermion} &= \int \d^4 x \sqrt{-G}\, \bar\Psi (- \,\slash\!\!\!\! \nabla - i q g_4 \,\slash\!\!\!\! A - m) \Psi\,. \label{eaf} \end{align} \medskip We now compactify this theory on a circle with radius $r$. To this end, we decompose the 4D metric $G_{MN}$ as \begin{equation} G_{\mu\nu} = \textrm{e}^\lambda g_{\mu\nu} + r^2\textrm{e}^{-\lambda}B_\mu B_\nu\,, \quad G_{\mu 3} = -r\textrm{e}^{-\lambda}B_\mu\,, \quad G_{33} = \textrm{e}^{-\lambda}\,. \end{equation} Here, $g_{\mu\nu}$ is the 3D metric, $\lambda$ is the radion, and $B_\mu$ is the graviphoton with field strength $H_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu$. The gauge field $A_M$ is decomposed into a 3D vector $A_\mu$ and an axion $A_3$.
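A quick consistency check of this ansatz (ours, using sympy with a diagonal 3D metric for simplicity) is that its determinant factorizes as $\det G=\mathrm{e}^{2\lambda}\det g$, so that $\sqrt{-G}=\mathrm{e}^{\lambda}\sqrt{-g}$ and the graviphoton drops out of the volume element:

```python
import sympy as sp

lam, r = sp.symbols('lam r', positive=True)
B = sp.Matrix(sp.symbols('B0:3'))   # graviphoton components
g = sp.diag(*sp.symbols('g0:3'))    # diagonal 3D metric, for simplicity
G = sp.zeros(4, 4)                  # 4D metric in the KK ansatz
G[:3, :3] = sp.exp(lam)*g + r**2*sp.exp(-lam)*(B*B.T)
G[:3, 3] = -r*sp.exp(-lam)*B
G[3, :3] = (-r*sp.exp(-lam)*B).T
G[3, 3] = sp.exp(-lam)
# det G - e^{2 lam} det g should simplify to zero: B_mu cancels out
print(sp.simplify(G.det() - sp.exp(2*lam)*g.det()))
```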
We then make the usual mode expansion \begin{align} g_{\mu\nu}(x^\mu,x^3) &= g^{(0)}_{\mu\nu}(x^\mu) + \sum_{n\neq 0} \frac{g^{(n)}_{\mu\nu}(x^\mu)}{\sqrt{\pi r}M_4} \textrm{e}^{inx^3/r}\,, \\ B_{\mu}(x^\mu,x^3) &= \sum_n \frac{B^{(n)}_{\mu}(x^\mu)}{\sqrt{\pi r }rM_4} \textrm{e}^{inx^3/r}\,, \\ \lambda(x^\mu,x^3) &= \sum_n \frac{\lambda^{(n)}(x^\mu)}{\sqrt{\pi r}M_4} \textrm{e}^{inx^3/r}\,, \\ A_\mu(x^\mu,x^3) &= \sum_n \frac{A_\mu^{(n)}(x^\mu)}{\sqrt{2\pi r}} \textrm{e}^{inx^3/r}\,, \\ A_3(x^\mu,x^3) &= \sum_n \frac{A_3^{(n)}(x^\mu)}{\sqrt{2\pi r}} \textrm{e}^{inx^3/r}\,, \end{align} where the reality of the 4D fields imposes the conditions $g^{(n)*}_{\mu\nu}=g^{(-n)}_{\mu\nu}$, $B^{(n)*}_{\mu}=B^{(-n)}_{\mu}$, etc. and we have chosen the prefactors in the expansions such that the fields are canonically normalized for $\lambda=0$. Analogously, we can expand the 4D scalar field, \begin{equation} \Phi(x^\mu,x^3) = \sum_n \frac{\phi^{(n)}(x^\mu)}{\sqrt{2\pi r}} \textrm{e}^{inx^3/r}\,. \end{equation} The 4D spinor $\Psi$ is decomposed into two 3D spinors $\psi$ and $\chi$ with mode expansions \begin{equation} \label{spinor_4D_to_3D} \psi(x^\mu,x^3) = \sum_n \frac{\psi^{(n)}(x^\mu)}{\sqrt{2\pi r}}\textrm{e}^{inx^3/r}\,, \qquad \chi(x^\mu,x^3) = \sum_n \frac{\chi^{(n)}(x^\mu)}{\sqrt{2\pi r}}\textrm{e}^{inx^3/r}\,. \end{equation} \medskip Since $G_{MN}$ has two propagating degrees of freedom, we expect that the combined degrees of freedom of $g^{(n)}_{\mu\nu}$, $B^{(n)}_{\mu}$ and $\lambda^{(n)}$ should also equal two for each KK level $n$. For $n=0$, we have a massless spin-2 field, a massless vector and a massless scalar in 3D, which indeed adds up to two degrees of freedom. For each $n\neq 0$, $g^{(n)}_{\mu\nu}$ eats up the vector and the scalar via a St\"{u}ckelberg mechanism\footnote{We thank Gianluca Zoccarato for a useful discussion on this point.} (see, e.g., \cite{Hinterbichler:2011tt} for a review). 
A massive spin-2 field in 3D has two degrees of freedom and, hence, we again arrive at the expected number. Similarly, one can check that $A^{(n)}_3$ is eaten by $A^{(n)}_{\mu}$ for all $n\neq 0$. The degrees of freedom of $A^{(n)}_{\mu}$ and $A^{(n)}_3$ thus add up to two for each KK level, in agreement with the two degrees of freedom of $A_M$ in 4D. \medskip For simplicity, we will assume a suitable stabilization mechanism such that the zero modes of the radion $\lambda^{(0)}$ and the axion $A_3^{(0)}$ are stabilized at $\lambda^{(0)} = A_3^{(0)} = 0$. Their precise masses are irrelevant for our analysis since they are uncharged and the constraints we want to derive are only sensitive to loops of \emph{charged} particles. The KK gravitons $g^{(n)}_{\mu\nu}$ and KK photons $A^{(n)}_{\mu}$, on the other hand, are charged under the KK $U(1)$ such that they generally contribute to the causality/analyticity constraints. We will see below, however, that there is a regime in which we can draw conclusions without having to know their precise contributions. The different types of fields in the spectrum of the compactified theory are summarized in Table \ref{Tab:fields}. 
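As a quick sanity check of the degree-of-freedom bookkeeping above (not part of the derivation), one can use the standard counting formulas for gravitons and vectors in $D$ spacetime dimensions:

```python
# physical polarizations of spin-2 and spin-1 fields in D spacetime dimensions
def dof_graviton(D, massive):
    return (D + 1) * (D - 2) // 2 if massive else D * (D - 3) // 2

def dof_vector(D, massive):
    return D - 1 if massive else D - 2

D = 3
# n = 0 level: massless graviton + graviphoton B_mu + radion lambda
n0 = dof_graviton(D, False) + dof_vector(D, False) + 1
# n != 0 level: massive graviton that has eaten the vector and the scalar
n_massive = dof_graviton(D, True)
print(n0, n_massive)  # 2 2 -- matching the two dof of G_MN in 4D
```

The same counting confirms that a massive 3D vector ($A_\mu^{(n)}$ after eating $A_3^{(n)}$) carries two degrees of freedom, matching the 4D gauge field $A_M$.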
\begin{table}[t] \centering \setlength{\tabcolsep}{12pt} \renewcommand{\arraystretch}{1.2} \begin{tabular}{cccc} \toprule field type & scalar case & fermion case & charge \\ \midrule massless real & $B_\mu^{(0)}$, $A_\mu^{(0)}$ & $B_\mu^{(0)}$, $A_\mu^{(0)}$ & $(0,0)$ \\ massive real & $\lambda^{(0)}$, $A_3^{(0)}$ & $\lambda^{(0)}$, $A_3^{(0)}$ & $(0,0)$ \\ massive complex & $g_{\mu\nu}^{(n\neq 0)}$, $A_\mu^{(n\neq 0)}$ & $g_{\mu\nu}^{(n\neq 0)}$, $A_\mu^{(n\neq 0)}$ & $(n,0)$ \\ & $\phi^{(n)}$ & $\psi^{(n)}$, $\chi^{(n)}$ & $(n,q)$ \\ \bottomrule \end{tabular} \vspace{5pt} \caption{\label{Tab:fields} \emph{Spectrum of 3D fields and their charges under $B_\mu^{(0)}$, $A_\mu^{(0)}$.}} \end{table} \medskip Our strategy in the next subsection will be to integrate out all massive fields in order to obtain a low-energy effective action which only depends on the massless gauge fields $A^{(0)}_\mu$ and $B^{(0)}_\mu$. Imposing causality/analyticity constraints as in the previous sections will then lead to inequalities for the charge-to-mass ratios of the massive fields with respect to the KK $U(1)$ and the original $U(1)$. As in the previous sections, we will perform the path integration in the one-loop approximation. It is therefore sufficient to restrict to terms in the action which are at most quadratic in any of the massive fields. \medskip Let us now rewrite the action \eqref{ea} in terms of the 3D fields, keeping in mind the above remarks. We define the 3D couplings \begin{equation} \label{couplings_3d} M_3 = 2\pi r M_4^2 \,, \qquad g_3 = \frac{g_4}{\sqrt{2\pi r}}\,,\qquad g_\text{KK} = \frac{\sqrt{2}}{\sqrt{M_3}r}\,. 
\end{equation} The Einstein-Maxwell part of the action then reads\footnote{Here, we omit couplings to $\lambda^{(0)}$ and $A_3^{(0)}$ as well as couplings between the KK modes and derivatives of zero modes (such as $R^{(0)} g^{(n)*}g^{(n)}$) because they are not relevant for our analysis below.} \begin{align} \Gamma &\supset \int \d^3x \sqrt{-g^{(0)}} \left[ \frac{M_3}{2} R^{(0)} -\frac{1}{4} H^{(0)2} -\frac{1}{4} F^{(0)2} + \sum_{n\ge 1} \left(-\frac{1}{2}D_\lambda g_{\mu\nu}^{(n)*} D^\lambda g^{(n)\mu\nu} \right.\right. \notag \\ &\quad\, \left.\left. + D_\lambda g_{\mu\nu}^{(n)*} D^\nu g^{(n)\mu\lambda} -\frac{1}{2} D_\mu g^{(n)*} D_\nu g^{(n)\mu\nu}- \frac{1}{2}D_\nu g^{(n)*\mu\nu}D_\mu g^{(n)}+\frac{1}{2}D_\mu g^{(n)*} D^\mu g^{(n)} \right.\right. \notag \\ &\quad\, \left.\left. - \frac{n^2}{2r^2} (g_{\mu\nu}^{(n)*}g^{(n)\mu\nu}-g^{(n)*}g^{(n)}) - \frac{1}{2} |F^{(n)}|^2 - \frac{n^2}{r^2} A_{\mu}^{(n)*}A^{(n)\mu} \right) \right]\,, \label{em-kk} \end{align} where $F^{(n)}_{\mu\nu} = D_\mu A^{(n)}_\nu - D_\nu A^{(n)}_\mu$ and $D_\mu = \nabla_\mu + in g_\text{KK} B_\mu^{(0)}$. We furthermore obtain the scalar action \begin{equation} \Gamma_\text{scalar} = \sum_n \int \d^3 x \sqrt{-g^{(0)}}\, \left(-\left|\partial_\mu \phi^{(n)} + iqg_3 A^{(0)}_\mu \phi^{(n)} + in g_\text{KK}B^{(0)}_\mu \phi^{(n)}\right|^2 - m_n^2 \left|\phi^{(n)}\right|^2 \right) \end{equation} with masses \begin{equation} m_n = \sqrt{m^2+\frac{n^2}{r^2}} \label{m_n} \end{equation} and the fermion action \begin{align} \Gamma_\text{fermion} &= \sum_n \int \d^3 x \sqrt{-g^{(0)}} \bigg[ - \bar\psi^{(n)} (\,\slash\!\!\!\! \nabla+iqg_3\, \slash\!\!\!\! A^{(0)}+ing_\text{KK}\,\slash\!\!\!\! B^{(0)}) \psi^{(n)} \notag \\ &\quad\, - \bar\chi^{(n)} (\,\slash\!\!\!\! \nabla+iqg_3 \,\slash\!\!\!\! A^{(0)}+ing_\text{KK}\,\slash\!\!\!\! 
B^{(0)}) \chi^{(n)} - m (\bar\psi^{(n)} \chi^{(n)} + \bar\chi^{(n)} \psi^{(n)}) \notag \\ &\quad\, - \bar\psi^{(n)} \left(\frac{n}{r} + \frac{i}{4\sqrt{2M_3}} \, \slash\!\!\!\! H^{(0)}\right) \psi^{(n)} - \bar\chi^{(n)} \left(-\frac{n}{r} - \frac{i}{4\sqrt{2M_3}} \, \slash\!\!\!\! H^{(0)}\right) \chi^{(n)} \bigg]\,, \label{3d-dirac} \end{align} where $\,\slash\!\!\!\! H^{(0)} = \gamma^\mu\gamma^\nu H^{(0)}_{\mu\nu}$. The mass eigenvalues of the fermions are $\pm m_n$ with $m_n$ again given by \eqref{m_n}. Also note that the scalars and fermions are charged under the two $U(1)$'s with $\vec{q_n} = (n,q)$. \medskip We finally comment on the regime of validity of our approach. In order to observe compactification effects in our analysis, the KK scale $r^{-1}$ should lie below the cutoff of the 4D EFT. Furthermore, we expect to find the strongest constraints in the regime where the compactification radius is small in units of $m$ since this was also the case for the black hole arguments of \cite{Heidenreich:2015nta}. We therefore consider the following hierarchy of scales: \begin{equation} \label{regime} m \ll r^{-1} < \Lambda < M_4\,. \end{equation} Since we consider an EFT with cutoff $\Lambda$, we should keep all KK modes with masses $m_n \lesssim \Lambda$. For $m \ll \Lambda$, this implies that we should sum over all $n$ with $|n| \lesssim r\Lambda$. As discussed in Sec.~\ref{setup3d}, the one-loop approximation of the effective action is justified in the regime $|n| g_\text{KK}/\sqrt{m_n} \ll 1$. One can check that this is indeed the case for all $|n| \lesssim r\Lambda$ if we respect the hierarchy \eqref{regime}. \subsection{Bounds on Charge-to-Mass Ratios} Let us for the moment ignore the KK gravitons $g_{\mu\nu}^{(n\neq 0)}$ and KK photons $A_\mu^{(n\neq 0)}$ and assume that the only charged particles in the theory are the KK tower of scalars $\phi^{(n)}$ or fermions $\psi^{(n)}, \chi^{(n)}$. 
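Before proceeding, the one-loop validity claim above can be checked with a numerical sketch (the scale choices below are made-up illustrative values respecting the hierarchy \eqref{regime}):

```python
import math

# illustrative scales obeying m << 1/r < Lambda < M_4 (arbitrary choices)
M4 = 1.0
Lam = 1e-2 * M4          # cutoff well below the 4D Planck scale
r = 1e2 / Lam            # KK scale 1/r two orders below the cutoff
M3 = 2 * math.pi * r * M4**2
g_kk = math.sqrt(2) / (math.sqrt(M3) * r)

# expansion parameter |n| g_kk / sqrt(m_n) for the heaviest mode kept
n = int(r * Lam)         # |n| ~ r * Lambda
m_n = n / r              # m << 1/r, so m_n is dominated by n/r
eps = n * g_kk / math.sqrt(m_n)
print(eps)               # << 1: one-loop approximation under control
```

Analytically, $|n|g_\text{KK}/\sqrt{m_n}\simeq\sqrt{\Lambda/(\pi r M_4^2)}$ for the heaviest modes, which is small whenever the hierarchy \eqref{regime} holds.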
Integrating out this KK tower, we then obtain an effective action analogous to the one in Sec.~\ref{sec:3d}, where we now consider the special case of the gauge group $U(1)_\text{KK}\times U(1)$. As before, this yields inequalities of the form \begin{align} & \sum_n \frac{1}{|m_n|} \left(\lambda_1 z_{n1}^4 + \lambda_2 z_{n1}^2 \right)+ \gamma_1 \ge 0 \,, \\ & \sum_n \frac{1}{|m_n|} \left(\lambda_3 z_{n2}^4 + \lambda_4 z_{n2}^2 \right) + \gamma_2 \ge 0 \,, \\ & \sum_n \frac{1}{|m_n|} \left(\lambda_5 z_{n1}^2 z_{n2}^2 + \lambda_6 z_{n1}^2 + \lambda_7 z_{n2}^2 \right) + \gamma_3 \ge 0 \,, \end{align} where the parameters \begin{equation} \gamma_1 \equiv 3840\pi\gamma_{f/s}(\delta_{i1},\delta_{i1})\,, \quad \gamma_2 \equiv 3840\pi\gamma_{f/s}(\delta_{i2},\delta_{i2})\,, \quad \gamma_3 \equiv 3840\pi\gamma_{f/s}(\delta_{i1},\delta_{i2}) \end{equation} contain all ${\cal O}(z^0)$ contributions as usual. We stress again that these contributions depend on the UV completion of the 4D EFT but also on the properties of the particles we integrated out. The latter implies in particular that the values of the $\gamma_i$ coefficients generally depend on the mass scales \eqref{regime}, i.e., $\gamma_i = \gamma_i (m,r,\Lambda)$. We will see below that, if $\gamma_3$ happens to drop below a critical value in a given EFT for some $m$, $r$, $\Lambda$, this implies a bound on the charge-to-mass ratios of a whole tower of 4D particles. \medskip The values of the $\lambda_i$'s depend on the spin and the couplings of the charged particles. As discussed in Sec.~\ref{sec:comp-setup}, we consider particles with charge vectors $\vec{q_n} = (n,q)$ and masses $m_n =\sqrt{m^2+\frac{n^2}{r^2}}$, where $|n| \lesssim r\Lambda$. The charge-to-mass ratios are therefore \begin{equation} \label{ctm} z_{n1} = \frac{ng_\text{KK}\sqrt{M_3}}{\sqrt{m^2 + \frac{n^2}{r^2}}}\,, \qquad z_{n2} = \frac{qg_3\sqrt{M_3}}{\sqrt{m^2 + \frac{n^2}{r^2}}}\,. 
\end{equation} The values of the $\lambda_i$'s are given in Table \ref{Tab:scalars_red} and computed in App.~\ref{app:reduction}. Notice that, for scalars, the computation is straightforward: a scalar charged under a single $U(1)$ in 4D corresponds to a tower of scalars charged under $U(1)_\text{KK}\times U(1)$ in 3D. Therefore, the $\lambda_i$ coefficients are the same as in the $U(1)^2$ case without compactification, which was already discussed in Sec.~\ref{sec:3d}. However, for fermions, the $\lambda_i$ coefficients are different from those derived in Sec.~\ref{sec:3d} due to the appearance of extra interaction terms $\sim \slashed{H}^{(0)}$ in the 3D action \eqref{3d-dirac} which are not present in the standard Dirac Lagrangian (see App.~\ref{app:reduction} for details). \medskip We now include the effect of the KK gravitons and KK photons which we have neglected so far. According to \eqref{em-kk}, these particles have masses $\tilde m_n = \hat m_n = \frac{n}{r}$ and are charged under the KK $U(1)$ but not under the ordinary $U(1)$ such that \begin{equation} \tilde z_{n1}=\hat z_{n1}=\sqrt{2}\,, \qquad \tilde z_{n2}=\hat z_{n2}=0\,. \label{ctm-kk} \end{equation} Here, we dressed masses and charge-to-mass ratios with tildes for the KK gravitons and hats for the KK photons in order to distinguish them from the corresponding quantities for the scalars/fermions. 
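As a sanity check of \eqref{couplings_3d} and \eqref{ctm-kk} (the numerical inputs below are arbitrary illustrative values): the KK charge-to-mass ratio equals $\sqrt{2}$ independently of $n$ and $r$, and $g_3\sqrt{M_3}=g_4M_4$, which lets one trade 3D for 4D quantities in the charge-to-mass ratios.

```python
import math

M4, g4, r = 2.3, 0.7, 5.1               # arbitrary illustrative inputs
M3 = 2 * math.pi * r * M4**2             # 3D Planck mass, eq. (couplings_3d)
g3 = g4 / math.sqrt(2 * math.pi * r)     # 3D gauge coupling
g_kk = math.sqrt(2) / (math.sqrt(M3) * r)

# KK gravitons/photons: charge n*g_kk, mass n/r  =>  z_1 = sqrt(2) for any n
n = 7
z1 = n * g_kk * math.sqrt(M3) / (n / r)
print(z1)                                # 1.4142...

# identity used later: g3*sqrt(M3) = g4*M4, so z_{02} = q g4 M4 / m
print(math.isclose(g3 * math.sqrt(M3), g4 * M4))  # True
```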
Including the contributions due to loops of these particles, the inequalities become \begin{align} & \sum_n \frac{1}{|m_n|} \left(\lambda_1 z_{n1}^4 + \lambda_2 z_{n1}^2 \right) + \sum_{n\neq 0} \frac{1}{|\tilde m_n|} \left(\tilde \lambda_1 \tilde z_{n1}^4 + \tilde \lambda_2 \tilde z_{n1}^2 \right) + \sum_{n\neq 0} \frac{1}{|\hat m_n|} \left(\hat \lambda_1 \hat z_{n1}^4 + \hat \lambda_2 \hat z_{n1}^2 \right) \notag \\ &\quad\, + \gamma_1 \ge 0 \,, \label{ineq1a} \\ & \sum_n \frac{1}{|m_n|} \left(\lambda_3 z_{n2}^4 + \lambda_4 z_{n2}^2 \right) + \gamma_2 \ge 0 \,, \label{ineq1b} \\ & \sum_n \frac{1}{|m_n|} \left(\lambda_5 z_{n1}^2 z_{n2}^2 + \lambda_6 z_{n1}^2 + \lambda_7 z_{n2}^2 \right) + \sum_{n\neq 0} \frac{\tilde \lambda_6 \tilde z_{n1}^2 }{|\tilde m_n|} + \sum_{n\neq 0} \frac{\hat \lambda_6 \hat z_{n1}^2 }{|\hat m_n|} + \gamma_3 \ge 0 \,. \label{ineq1c} \end{align} Note that \eqref{ineq1b} is only sensitive to particles charged under $A_\mu^{(0)}$ and therefore unaffected by the KK gravitons and KK photons (apart from possible $\mathcal{O}(z^0)$ contributions to $\gamma_2$). On the other hand, \eqref{ineq1a} and \eqref{ineq1c} depend on particles charged under $B_\mu^{(0)}$ and thus receive corrections from the KK gravitons and KK photons, which are encoded in the coefficients $\tilde \lambda_i,\hat \lambda_i$. It is in principle possible to compute these coefficients, but we will see below that their exact values are not required for our argument. \medskip As in the previous sections, the inequalities \eqref{ineq1a}--\eqref{ineq1c} are trivially satisfied if the $\gamma_i$ coefficients are above a critical value but they yield nontrivial bounds on the charge-to-mass ratios for sufficiently small $\gamma_i$. For concreteness, let us make our usual assumption $\gamma_i=0$ in the following. For positive (negative) $\gamma_i$, the bounds on the charge-to-mass ratios become weaker (stronger). 
Substituting the charge-to-mass ratios \eqref{ctm}, \eqref{ctm-kk} into the inequalities \eqref{ineq1a}--\eqref{ineq1c} and performing the sums, we find \begin{align} & \sum_n \frac{z_{n1}^4}{|m_n|} = \sum_{n\neq 0} \frac{\tilde z_{n1}^4}{|\tilde m_n|} = \sum_{n\neq 0} \frac{\hat z_{n1}^4}{|\hat m_n|} \simeq 8r \Psi^{(0)}\left(r\Lambda+1\right)+8r\gamma\,, \label{z-sums1} \\ & \sum_n \frac{z_{n1}^2}{|m_n|} = \sum_{n\neq 0} \frac{\tilde z_{n1}^2}{|\tilde m_n|} = \sum_{n\neq 0} \frac{\hat z_{n1}^2}{|\hat m_n|} \simeq 4r \Psi^{(0)}\left(r\Lambda+1\right)+4r\gamma\,, \\ & \sum_n \frac{z_{n1}^2z_{n2}^2}{|m_n|} \simeq 2 m^2r^3 z_{02}^2 \left[2\zeta(3)+\Psi^{(2)}\left(r\Lambda+1\right)\right]\,, \\ & \sum_n \frac{z_{n2}^4}{|m_n|} \simeq \frac{z_{02}^4}{m}\,, \\ & \sum_n \frac{z_{n2}^2}{|m_n|} \simeq \frac{z_{02}^2}{m}\,, \label{z-sums2} \end{align} where $\Psi^{(n)}(x)=\frac{\d^n}{\d x^n}\frac{\Gamma^\prime(x)}{\Gamma(x)}$ is the $n$th polygamma function, $\gamma$ is the Euler-Mascheroni constant and we truncated the summation to modes lighter than the cutoff scale (i.e., $|n| \lesssim r\Lambda$) and expanded in $mr$, which is consistent in the regime \eqref{regime}. Note that, if there is a large hierarchy $\Lambda \gg r^{-1}$, we have $\Psi^{(0)}\left(r\Lambda+1\right) \simeq \ln(r\Lambda)$. Our discussion below will be applicable both for moderately large $\Lambda \gtrsim r^{-1}$ and for $\Lambda \gg r^{-1}$, where in the latter case we will assume the regime $mr \ln(r\Lambda)\ll 1$. As a consequence, the leading order expressions for the inequalities are \begin{align} & 2\left( \lambda_1+\tilde\lambda_1+\hat\lambda_1 \right) + \lambda_2+\tilde\lambda_2+\hat\lambda_2 \ge 0 \,, \label{ineq2a} \\ & \lambda_3 z^4_{02} + \lambda_4 z^2_{02} \ge 0 \,, \label{ineq2b} \\ & \lambda_7 z^2_{02} \ge 0 \,. \label{ineq2c} \end{align} \medskip Whether the first inequality \eqref{ineq2a} is satisfied or not depends crucially on the contributions of the KK gravitons and KK photons. 
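The leading-order sums \eqref{z-sums1} and the one below it can be verified numerically; the check below (a rough sketch with illustrative parameters deep in the regime $mr\ll1$) uses the fact that $\Psi^{(0)}(N+1)+\gamma$ is just the harmonic number $H_N$, with $N=r\Lambda$.

```python
import math

def kk_sums(m, r, N):
    """Exact sums over KK modes n = -N..N of z_{n1}^4/m_n and z_{n1}^2/m_n,
    with z_{n1} = sqrt(2) n/(r m_n) and m_n = sqrt(m^2 + n^2/r^2)."""
    s4 = s2 = 0.0
    for n in range(1, N + 1):            # n = 0 vanishes; factor 2 for +/- n
        m_n = math.sqrt(m**2 + n**2 / r**2)
        z1_sq = 2 * n**2 / (r * m_n)**2
        s4 += 2 * z1_sq**2 / m_n
        s2 += 2 * z1_sq / m_n
    return s4, s2

m, r, N = 1e-4, 1.0, 100                 # regime m*r << 1, N = r*Lambda
H_N = sum(1.0 / k for k in range(1, N + 1))   # = Psi^(0)(N+1) + gamma
s4, s2 = kk_sums(m, r, N)
print(s4 / (8 * r * H_N), s2 / (4 * r * H_N))   # both ratios -> 1
```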
Since we have not computed $\tilde \lambda_i, \hat \lambda_i$, we cannot draw any conclusions about the IR consistency of the EFT based on this inequality. The second inequality \eqref{ineq2b} is of the form we already found in the case without compactification. Consulting Table \ref{Tab:scalars_red}, we find that, for scalars, it is trivially satisfied (for our choice $\gamma_2=0$), while, for fermions, it leads to a WGC-like bound for the charge-to-mass ratio of the $n=0$ mode $z_{02}$, \begin{equation} z^2_{02} \left( z^2_{02} -\frac{1}{2} \right) \ge 0\,. \label{wgc-fermion} \end{equation} \begin{table}[t] \centering \setlength{\tabcolsep}{12pt} \renewcommand{\arraystretch}{1.2} \begin{tabular}{cccc} \toprule $\lambda_i$ & & scalar & fermion \\ \midrule $\lambda_1$ & & $7$ & $- \frac{7}{8} $ \\ $\lambda_2$ & & $4$ & $ \frac{59}{2} $\\ $\lambda_3$ & & $7$ & $ 16 $ \\ $\lambda_4$ & & $4$ & $- 8 $ \\ $\lambda_5$ & & $7$ & $ \frac{17}{2}$ \\ $\lambda_6$ & & $ - 2$ & $- \frac{7}{2}$ \\ $\lambda_7$ & & $ - 2$ & $9$ \\ \bottomrule \end{tabular} \vspace{5pt} \caption{\label{Tab:scalars_red} \emph{Values of the $\lambda_i$ coefficients for scalar and fermion case. We factorized out the common factor $\frac{1}{3840\pi}$.}} \end{table} \medskip The most interesting inequality is \eqref{ineq2c}. Recall that it is due to IR consistency conditions mixing the ordinary $U(1)$ and the KK $U(1)$. We therefore expect this inequality to yield the strongest constraints among the three, analogously to the analysis of \cite{Heidenreich:2015nta} where black holes charged under both $U(1)$'s led to the strongest constraints. Interestingly, the dependence on the KK graviton and KK photon loops has completely vanished in \eqref{ineq2c} in the limit of small compactification radii. From Table \ref{Tab:scalars_red}, we can now read off the sign of $\lambda_7$: \begin{equation} \text{scalar:}\quad \lambda_7 < 0\,, \qquad \text{fermion:}\quad \lambda_7 > 0\,. 
\end{equation} Hence, in the scalar case, \eqref{ineq2c} is violated for \emph{all} values of $z_{02}$, i.e., the effective theory in the regime \eqref{regime} is inconsistent in the IR. In the fermionic case, on the other hand, \eqref{ineq2c} is always satisfied such that the only nontrivial constraint on the charge-to-mass ratio is due to \eqref{wgc-fermion}.\footnote{Curiously, this is opposite to the results of \cite{Cottrell:2016bty}, where a black hole entropy calculation led to stronger constraints for fermions than for scalars.} Note that the constraints on fermions may be stronger in the case that $\gamma_3 < 0$. However, the corresponding inequalities can then still be satisfied by choosing a large enough $z_{02}$, while this is not possible in the scalar case. Let us also point out that further obstructions to satisfying the inequalities may exist both for scalars and fermions in regimes where the contributions from the KK gravitons and KK photons become relevant. We leave the difficult task of computing these contributions for future work and continue to discuss the scalar case in the following. \subsection{The Tower WGC} How can the IR inconsistency indicated by \eqref{ineq2c} be cured in the absence of fermions? A possible resolution of the problem would be to postulate a restriction on the mass scales such that considering the regime \eqref{regime} is not allowed in the first place. This is reminiscent of a minimal radius found in \cite{Heidenreich:2015nta} below which the convex-hull condition could not be satisfied anymore. We cannot exclude that such a restriction exists in some EFTs such that the problem discussed above is avoided in these theories. \medskip However, there is another way to satisfy the inequalities which seems to be more in line with existing examples in string theory (where a version of the WGC stronger than the convex-hull condition holds).
As we will demonstrate in the following, this is possible if one replaces the single 4D scalar by a whole \emph{tower} of 4D particles whose masses and charges under the gauge field $A_M$ need to satisfy certain conditions. One can check that the one-loop corrections due to these extra particles then contribute to the effective action such that all three inequalities \eqref{ineq1a}--\eqref{ineq1c} can be satisfied simultaneously. In particular, the term proportional to $\lambda_5$ in \eqref{ineq1c} is then not suppressed anymore compared to the other terms and thus helps to satisfy the inequality. \medskip To see this, consider a 4D EFT including $M$ scalars $\Phi_l$ with masses $m_l \in [m, m_\text{max}]$ and charges $q_l$ under the 4D gauge field $A_M$, where $l=0,\ldots,M-1$ is an index counting the 4D particles. After compactification, we then obtain a separate KK tower of 3D particles for each $l$. The relevant inequality \eqref{ineq1c} thus becomes \begin{align} & \sum_{n,l} \frac{1}{|m_{nl}|} \left(\lambda_5 z_{nl1}^2 z_{nl2}^2 + \lambda_6 z_{nl1}^2 + \lambda_7 z_{nl2}^2 \right) + \sum_{n\neq 0} \frac{\tilde \lambda_6 \tilde z_{n1}^2 }{|\tilde m_n|} + \sum_{n\neq 0} \frac{\hat \lambda_6 \hat z_{n1}^2 }{|\hat m_n|} + \gamma_3 \ge 0 \, \label{ineq4c} \end{align} with $m_{nl}=\sqrt{m_l^2 + \frac{n^2}{r^2}}$ and \begin{equation} z_{nl1} = \frac{ng_\text{KK}\sqrt{M_3}}{\sqrt{m_l^2 + \frac{n^2}{r^2}}}\,, \qquad z_{nl2} = \frac{q_lg_3\sqrt{M_3}}{\sqrt{m_l^2 + \frac{n^2}{r^2}}}\,. \end{equation} For scalars with $m_l$ much smaller than the KK scale, the sums over the KK states in \eqref{ineq4c} can be evaluated explicitly by expanding in $m_l r$ as in the single scalar case we considered before. On the other hand, for scalars with $m_l$ of the order of the KK scale $r^{-1}$ or larger, closed-form expressions for the sums are not available. 
However, we can still qualitatively understand their behavior up to $\mathcal{O}(1)$ factors by splitting the sums into three regimes: $n \sim m_l r$, $n \ll m_l r$ and $n \gg m_l r$, where in the last two regimes we can expand in $n/(m_lr)$ or $m_lr/n$, respectively. We thus find \begin{align} & z_{nl1}^2 = \frac{2n^2}{n^2+m_l^2r^2} \simeq 0\,, && z_{nl2}^2 = z_l^2 \frac{m_l^2r^2}{n^2+m_l^2r^2} \simeq z_l^2\,, && (m_lr \gg n) \notag \\ & z_{nl1}^2 = \frac{2n^2}{n^2+m_l^2r^2} \simeq \mathcal{O}(1)\,, && z_{nl2}^2 = z_l^2 \frac{m_l^2r^2}{n^2+m_l^2r^2} \simeq \mathcal{O}(1)z_l^2\,, && (m_lr \sim n) \notag \\ & z_{nl1}^2 = \frac{2n^2}{n^2+m_l^2r^2} \simeq 2\,, && z_{nl2}^2 = z_l^2 \frac{m_l^2r^2}{n^2+m_l^2r^2} \simeq 0\,, && (m_lr \ll n) \end{align} where $z_l = z_{0l2} = \frac{q_l g_3 \sqrt{M_3}}{|m_l|}=\frac{q_l g_4 M_4}{|m_l|}$ is the 4D charge-to-mass ratio. This yields \begin{align} & \sum_{n} \frac{1}{|m_{nl}|} \left(\lambda_5 z_{nl1}^2 z_{nl2}^2 + \lambda_6 z_{nl1}^2 + \lambda_7 z_{nl2}^2 \right) \notag \\ &\simeq \left\{ \begin{array}{ll} \lambda_7 \frac{z_{l}^2}{m_l} & \quad (m_l \ll r^{-1}, \Lambda \gtrsim r^{-1}) \vspace{1em} \\ \lambda_7 \frac{z_{l}^2}{m_l} +4\lambda_6 r \ln(r\Lambda) & \quad (m_l \ll r^{-1}, \Lambda \gg r^{-1}) \vspace{1em} \\ \mathcal{O}(1) \lambda_5 r z_{l}^2 + \mathcal{O}(1) \lambda_6r + \mathcal{O}(1) \lambda_7 r z_{l}^2 & \quad (m_l \sim r^{-1}, \Lambda \gtrsim r^{-1}) \vspace{1em} \\ 4 \lambda_6 r \ln(r\Lambda) & \quad (m_l \sim r^{-1}, \Lambda \gg r^{-1})\vspace{1em} \\ \mathcal{O}(1) \lambda_5 r z_{l}^2 + \mathcal{O}(1) \lambda_6r + \mathcal{O}(1) \lambda_7 r z_{l}^2 & \quad (m_l \gg r^{-1}, \Lambda \gtrsim m_l) \vspace{1em} \\ 4 \lambda_6 r \ln(\frac{\Lambda}{m_l}) & \quad (m_l \gg r^{-1}, \Lambda \gg m_l)\,. \end{array} \right. 
\label{small-mass} \end{align} It is now straightforward to derive some basic properties the particle tower must have in order that the inequality \eqref{ineq4c} can be satisfied: \begin{itemize} \item Because of \eqref{small-mass} and Table \ref{Tab:scalars_red}, the KK sum for a single 4D particle $l$ contributes positively to the inequality \eqref{ineq4c} only when $(m_l, \Lambda)$ are in the third or fifth regimes of~\eqref{small-mass}. Moreover, the charge-to-mass ratio has to satisfy a bound $z_{l} \gtrsim \mathcal{O}(1)$ in order for the positive $\lambda_5$ term to dominate over the negative $\lambda_6$ and $\lambda_7$ terms. \item In the third and fifth regimes of \eqref{small-mass}, the scalar mass $m_l$ is near the cutoff scale, $m_l \lesssim \Lambda$. In particular, the KK sums for scalars with $m_l \ll \Lambda$ always contribute negatively to the inequality \eqref{ineq4c}. \item Suppose that the mass $m$ of the lightest scalar is much smaller than the KK and cutoff scales: $m\ll r^{-1},\Lambda$. We can see from \eqref{small-mass} that the positive contribution from a scalar $l$ with a mass close to the cutoff $m_l\lesssim\Lambda$ is suppressed by a factor $mr \ll 1$ compared to the negative contribution of the lightest scalar with mass $m$, unless its charge-to-mass ratio $z_{l}$ is parametrically larger than that of the lightest scalar. Assuming particles with finite charge-to-mass ratios, the number of particles in the tower thus needs to be at least of the order $(mr)^{-1}$. In the limit of small radii $mr \to 0$, this corresponds to a tower with an infinite number of particles. \end{itemize} \medskip Note that the above constraints also imply a bound on the cutoff scale. We have seen that the tower needs to contain particles satisfying $z_l=\frac{q_l g_4 M_4}{m_l} \gtrsim \mathcal{O}(1)$, where $m_l$ is required to be of the order of the cutoff scale. 
It follows that $\Lambda \lesssim q_l g_4 M_4$, i.e., the cutoff needs to be much smaller than the 4D Planck scale in order for the EFT to be consistent. We thus also reproduce the magnetic WGC \cite{ArkaniHamed:2006dz} from our arguments. \medskip As a simple example for the case $\Lambda \gg r^{-1}$, consider a 4D EFT including a tower of scalars $\Phi_l$ with masses $m_l=\sqrt{m^2+l^2\mu^2}$ and charges $q_l=(l+1)q$ under the 4D gauge field $A_M$. Here, $\mu$ denotes a mass scale by which the masses of the particles with different $l$ are separated. Analogously to our discussion in Sec.~\ref{sec:comp-setup}, the compactified theory then has a scalar particle spectrum labelled by indices $n,l$ with \begin{equation} \vec z_{nl} = \left( z_{nl1}, z_{nl2} \right) = \frac{\sqrt{M_3}}{m_{nl}}\left( n g_\text{KK}, (l+1)qg_3 \right)\,, \quad m_{nl} = \sqrt{m^2+\frac{n^2}{r^2}+l^2\mu^2}\,. \end{equation} The 3D particles thus fill a 2-dimensional charge lattice, see Fig.~\ref{fig:lattice}. \begin{figure}[t] \centering \includegraphics[scale=0.8]{lattice} \caption{\label{fig:lattice}\emph{Starting with a tower of 4D particles, after compactification the charges form a 2D lattice. Consistency requires summing over all modes such that $m_{nl}^2=m^2+\frac{n^2}{r^2}+l^2\mu^2 \le \Lambda^2$. }} \end{figure} \medskip Inserting these values into \eqref{ineq1a}--\eqref{ineq1c}, using \eqref{couplings_3d} and summing over all modes with $\frac{n^2}{r^2}+l^2\mu^2 \lesssim \Lambda^2$, we find \begin{align} & 3\lambda_1 + 2\lambda_2 \ge 0\,, \label{ineq3a} \\ & 3\lambda_3 q^2g_3^2 M_3 + 4\lambda_4 \mu^2 \ge 0 \,, \\ & q^2g_3^2 M_3\left( \frac{1}{2}\lambda_5+\lambda_7\right)+ 2\lambda_6 \mu^2 \ge 0\, \label{ineq3b} \end{align} at leading order in an expansion in $(r\Lambda)^{-1}$, $\mu/\Lambda$. We can now substitute the $\lambda_i$ values from Table \ref{Tab:scalars_red} and use again \eqref{couplings_3d} to express the inequalities in terms of 4D couplings.
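The substitution can be sketched with exact rational arithmetic, using the scalar-case values from Table \ref{Tab:scalars_red} (with the common factor $\frac{1}{3840\pi}$ stripped) and the identity $g_3^2 M_3 = g_4^2 M_4^2$ following from \eqref{couplings_3d}:

```python
from fractions import Fraction as F

# scalar-case lambda values from the Table (common positive factor stripped)
lam = {1: F(7), 2: F(4), 3: F(7), 4: F(4), 5: F(7), 6: F(-2), 7: F(-2)}

# first inequality (ineq3a): 3*lam1 + 2*lam2 >= 0 holds identically
assert 3 * lam[1] + 2 * lam[2] > 0
# middle inequality: both lam3 and lam4 are positive for scalars -> trivial
assert lam[3] > 0 and lam[4] > 0

# (ineq3b): q^2 g3^2 M3 (lam5/2 + lam7) + 2 lam6 mu^2 >= 0.
# With g3^2 M3 = (g4 M4)^2 this bounds mu^2 / (q g4 M4)^2 from above:
coeff = (lam[5] / 2 + lam[7]) / (-2 * lam[6])
print(coeff)  # 3/8
```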
We thus find that \eqref{ineq3a}--\eqref{ineq3b} are satisfied if \begin{equation} \mu \le \sqrt{\frac{3}{8}} q g_4 M_4\,, \end{equation} i.e., the mass separation of the 4D particles is bounded from above. \medskip It is obvious from \eqref{ineq1a}--\eqref{ineq1c} that the above example is not the only way to satisfy the inequalities. Since they are not sensitive to each charge-to-mass ratio individually but only to their sum, more general distributions of particles will also suffice to satisfy them. For example, one could introduce variations in the mass separation between different particles in the tower or consider a tower with particles of different spins, etc. Another interesting possibility compatible with our constraints is to leave some of the charges unoccupied or filled by particles with negligible charge-to-mass ratios. This implies in particular that the tower of particles for which the charge-to-mass ratios are bounded from below need not necessarily occupy a charge lattice or even a charge sub-lattice. As long as the sum over the full tower behaves similarly to our example at leading order, our constraints can still be satisfied. \medskip The form of the WGC suggested by our analysis is thus stronger than the convex-hull condition but less restrictive than the (sub-)lattice WGC. It will be interesting to see whether there are examples in string theory which satisfy our version of the WGC but violate the stronger proposal of the (sub-)lattice WGC. \section{Conclusions} \label{concl} In this work, we have argued for a specific version of the WGC in the presence of multiple $U(1)$ gauge fields by exploiting infrared consistency conditions of low-energy EFTs. When the UV-sensitive EFT parameters $\gamma_{f/s}$ are in a certain range, our analysis leads to the following three constraints on the matter content: \begin{itemize} \item The theories must contain particles consistent with the convex-hull type lower bounds on charge-to-mass ratios.
\item The theories must contain bifundamental particles in any basis choice for the $U(1)$ gauge fields. \item The scalar theories must contain a tower of particles whose charge-to-mass ratios satisfy a lower bound. \end{itemize} This suggests that the convex-hull condition, which was originally motivated by black hole arguments, may not be strong enough. On the other hand, it is interesting that the constraints we find are flexible enough to not require the tower of particles to fill a full charge lattice. Instead, a sub-lattice or even a non-periodic occupation of charges is also consistent with our findings. Our version of the WGC is thus less restrictive than the previously proposed lattice WGC. It would be interesting to see whether there are string theory examples confirming such behavior. \medskip As stated before, a crucial assumption of our work (and the earlier work \cite{Cheung:2014ega}) is that the $\gamma_{f/s}$ parameters encoding charge-independent corrections to the higher-derivative terms in the EFT are sufficiently small when evaluated at some cutoff scale $\Lambda$. These parameters are sensitive to the UV-completion of the EFT and can therefore be interpreted as a quantum gravity input that the EFT analysis alone cannot fix. It is not surprising that such an extra ingredient is necessary to arrive at the above conclusions. After all, a key property of conditions separating the swampland from the landscape is precisely that they are \emph{not} visible from a pure low-energy perspective. Our results are therefore not a general ``proof'' of the WGC. Rather, we believe that our work, together with the earlier work \cite{Cheung:2014ega}, could provide a useful different perspective on the WGC by relating it to the values of the $\gamma_{f/s}$ parameters. An intriguing possibility is that the parameters are forced to be below the critical value in EFTs compatible with quantum gravity, or at least in certain classes thereof.
In that case, we have shown that a specific version of the WGC automatically follows. We stress, however, that the converse is not true: it is in principle possible that the $\gamma_{f/s}$ parameters can take arbitrary values in quantum gravity theories, and that theories in which these parameters are large still satisfy the WGC for reasons unrelated to our constraints (see also a comment in \cite{Harlow:2015lma}). \medskip Our work is in the same spirit as a number of recent results showing that the original formulation of the WGC can be related to other claims which at first sight appear to be quite different. Thus, depending on the considered setup, the WGC can be understood as a statement about black hole decay \cite{ArkaniHamed:2006dz}, field space variations \cite{Klaewer:2016kiy, Palti:2017elp}, CFT states \cite{Nakayama:2015hga, Harlow:2015lma, Benjamin:2016fhe, Montero:2016tif}, cosmic censorship \cite{Crisford:2017gsb, Cottrell:2016bty}, instabilities of AdS vacua \cite{Ooguri:2016pdq, Danielsson:2016mtx, Freivogel:2016qwc, Ooguri:2017njy}, or, as we argued here, the smallness of certain EFT parameters. From our point of view, it is far from obvious at the moment which of the many formulations of the WGC will ultimately turn out to be the most helpful, the easiest to prove or the most fundamental one. \medskip Our work suggests several opportunities for further research. Straightforward generalizations include an analysis of theories in dimensions greater than $4$, with a more general matter content, or with a nonzero cosmological constant. It would also be interesting to consider compactifications on manifolds other than a circle and check whether this yields further constraints in addition to those found in Sec.~\ref{sec:comp}. \medskip Another interesting route is to test our general arguments in concrete string compactifications.
In particular, it would be nice to check explicitly whether there are obstructions to the assumptions that went into our analysis, for example, regarding the matter content, the hierarchy of scales, or the stabilization of the radion. One may also investigate how our analysis changes when the radion is left unstabilized and how this relates to the arguments of \cite{Palti:2017elp}, where a form of the WGC in the presence of massless scalar fields was conjectured. \medskip An important extension of our work would also be to derive the value of the $\gamma_{f/s}$ parameters in explicit string models. Unfortunately, higher-derivative corrections of the form necessary for our analysis are at present only partially known in string/M-theory compactifications (see, e.g., \cite{Grimm:2013gma, Grimm:2017okk} for the computation of some $R^2$ terms). The computation of these corrections is technically challenging but not impossible. This might allow us to prove that, at least for certain classes of compactifications, the WGC is indeed implied by analyticity and causality of the EFT. \section*{Acknowledgements} We would like to thank Billy Cottrell, Eran Palti, Pablo Soler and Gianluca Zoccarato for useful discussions. SA is supported in part by the Research Grants Council (RGC) of Hong Kong through grants HKUST4/CRF/13G, 604213, and 16304414. DJ is supported in part by the DFG Transregional Collaborative Research Centre TRR 33 ``The Dark Universe''. TN is in part supported by Grant-in-Aid for Scientific Research (B) No. 17H02894 from the Japan Society for the Promotion of Science (JSPS). GS is supported in part by the DOE grant DE-SC0017647 and the Kellett Award of the University of Wisconsin. \newpage
\subsection{Contributions of this work} Over the course of single activity sessions, we measure different quantities capturing user behaviour, e.g.,\ propensity to engage in social interactions, or amount of produced content, and finally contrast results between bots and humans. The present study advances our understanding of bot and human user behavior in the following ways: \begin{itemize} \item We reveal the presence of short-term behavioural trends among humans that are instead absent in the case of bots. Such trends may be explained by a deterioration of human users' performance (in terms of quality and quantity of produced content), and by an increasing engagement in social interactions over the course of an online session; in both cases, we would not expect bots to be affected, and indeed we record no significant evidence of any short-term temporal evolution for this category of users. \item In the spirit of the research line on bot detection, we codify our findings in a set of highly predictive features capable of separating human activity sessions from those of bots; we then design and evaluate the performance of a machine learning framework that leverages these features to detect bot activity sessions. This can prove extremely valuable when trying to detect so-called \textit{cyborgs}, users that are controlled in part by humans and in part by bots. Our session classification system achieves 94\% AUC (\textit{Area Under the ROC curve}). The addition of the features identified by our analysis yields an average improvement over the baseline of up to 14\% AUC.
\end{itemize} \section{Introduction} \input{intro} \section{Data \& Methods} \label{data} \input{data} \section{Experimental Analysis} \label{exp} \input{exp} \section{Predictions} \label{pred} \input{pred} \section{Discussion} \input{disc} \section{Related work} \input{rw} \section{Conclusions} \input{conc} \begin{acks} \input{ack} \end{acks} \balance \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:Intro} Dust is an essential component in understanding star formation properties of galaxies both observationally and theoretically. Because dust absorbs stellar ultraviolet (UV)--optical light and reemits it in the infrared (IR) \citep[e.g.][]{2000ApJ...533..682C,2002A&A...383..801B,2012ApJ...755..144T}, a precise estimation of the star formation rate (SFR) in galaxies requires correction for dust extinction \citep[e.g.][]{1999ApJ...519....1S,2010A&A...514A...4T,2012ARA&A..50..531K}. The analysis of \cite{2005A&A...440L..17T} revealed that a higher fraction of star formation is hidden by dust at $z\sim 1$ than at $z\sim 0$, where $z$ is the redshift, and that more than half of the star formation activity is enshrouded by dust at $0.5 \le z \le 1.2$ (see also \cite{2013A&A...554A..70B} and references therein). There are some interesting theoretical issues in which dust plays a significant role. Dust is an efficient catalyst of molecular hydrogen (H$_{2}$) formation in the interstellar medium (ISM) \citep[e.g.][]{1963ApJ...138..393G,2004ApJ...604..222C,2009A&A...496..365C}. \cite{2002MNRAS.337..921H} showed that early dust production at high redshift dramatically enhances the H$_2$ abundance, which probably leads to an enhancement of star formation activity in galaxies. Dust has an impact on gas dynamics in dusty clouds through radiation pressure \citep[e.g.][for a recent development]{2017MNRAS.466L.123I}. In addition, the typical mass of the final fragments in star-forming clouds is also regulated by dust cooling \citep[][]{1998MNRAS.299..554W,2000ApJ...534..809O,2005MNRAS.359..211L,2005ApJ...626..627O,2006MNRAS.369.1437S}; this effect could have a dramatic impact on the stellar initial mass function. Moreover, dust eventually becomes an ingredient of planets in protostellar discs.
The evolution of the total dust amount in a galaxy can be broadly understood in the chemical evolution framework, since dust evolution is strongly linked to metal enrichment \citep{1998ApJ...496..145L,2014MNRAS.440.1562M}. As shown in a variety of chemical evolution models, the increase of the dust amount is mainly driven by dust condensation in stellar ejecta and dust growth in the dense ISM, while the decrease occurs when the dust is swept up by supernova (SN) shocks \citep{1998ApJ...501..643D,2008A&A...479..453Z,2008A&A...479..669C,2011EP&S...63.1027I,2011MNRAS.416.1340H,2013EP&S...65..213A,2017MNRAS.471.4615G}. When we consider the properties of dust grains in the ISM, not only the total dust abundance but also the grain size distribution is of fundamental importance \citep[e.g.][]{1977ApJ...217..425M,2013ApJ...770...27N}. In particular, the extinction curve (i.e.\ the wavelength dependence of the absorption and scattering cross-section) depends sensitively on the grain size distribution \citep[][]{1983Natur.306..625B,2017MNRAS.469..870H}. In addition, the total grain surface area, which depends on the grain size distribution, governs the rate of grain-surface H$_2$ formation \citep[][]{1976ApJ...207..131B,2011ApJ...735...44Y}. Dust evolution is driven by interactions not only with gas particles but also with dust itself in the ISM \citep[see][and references therein for the processes described in what follows]{2013MNRAS.432..637A}. Dust grains are produced by SNe and asymptotic giant branch (AGB) stars, and after being injected into the ISM, they suffer destruction in SN shocks sweeping the ISM. Dust grains grow by accreting surrounding gas-phase metals in the dense ISM. Dust grains interact with themselves via collisional processes such as coagulation and shattering.
The rates of the above grain processing mechanisms in the ISM (dust destruction, accretion, coagulation, and shattering) depend not only on the local physical condition of the gas but also on the dust abundance and metallicity. Moreover, as found by \cite{2012MNRAS.424L..34K}, the efficiency of interstellar processing could depend strongly on the grain size distribution. Therefore, for a complete understanding of dust evolution, we must consider not only the evolution of dust abundance but also that of grain size distribution. \cite{2013MNRAS.432..637A} constructed a full framework for treating the evolution of grain size distribution consistently with the enrichment of metals and dust in a galaxy. They treated all the above processes of dust evolution and revealed that all of these processes are necessary for a comprehensive understanding of the observed dust-to-gas mass ratios and extinction curves in nearby galaxies (see \citealt{2015MNRAS.447L..16N} for an extension of their model to high redshift). To focus on the dust evolution, they treated a galaxy as a one-zone object. As a consequence of their modelling, they succeeded in providing a tool to understand not only observed gas-to-dust mass ratios but also extinction curves in galaxies \citep[][]{2014MNRAS.440..134A}. Since the dust evolution is affected by the physical condition of the ISM where it resides, it is important to deeply understand the hydrodynamical evolution of the ISM in a spatially resolved way. Hydrodynamical simulations have indeed been a powerful tool to clarify galaxy formation and evolution. 
{They provide a significant advantage over simple one-zone calculations, which generally need to introduce some strong assumptions such as instantaneous mixing and homogeneity.} Many cosmological hydrodynamic simulations have reproduced and predicted the observed galaxy mass and luminosity functions \citep[e.g.][]{2001ApJ...558..497N,2004MNRAS.350..385N,2012MNRAS.419.1280C,2012MNRAS.427..403J,2013ApJ...766...94J,2014MNRAS.440..731S,2014ApJ...780..145T,2014MNRAS.444.1518V,2015arXiv150900800S,2015MNRAS.446..521S, 2015MNRAS.454.2277S,2017MNRAS.470.3300D,2017MNRAS.465.2936M, 2018MNRAS.473.4077P}. There have been some attempts to include dust evolution in cosmological hydrodynamical simulations. { \cite{2010MNRAS.403..620D} calculated dust formation and destruction by SNe in their cosmological simulation and predicted the submillimetre fluxes from high-redshift Lyman break galaxies.} \cite{2015MNRAS.451..418Y} calculated the radiation transfer of UV light based on the spatial distribution of metals in their zoom-in simulations, and estimated the IR luminosities of individual high-$z$ galaxies. They assumed a constant dust-to-metal ratio, and did not explicitly treat the dust evolution. {\citet{2013MNRAS.432.2298B,2013MNRAS.436.2254B,2015MNRAS.449.1625B} treated} dust as a separate component from gas, dark matter and star particles and solved the interaction between dust and gas. They calculated H$_2$ formation on dust surfaces and dust evolution consistently to investigate the spatial distribution of dust and molecular gas in galaxies. \cite{2016MNRAS.457.3775M} traced dust evolution along with the hydrodynamical evolution of the gas by performing cosmological zoom-in simulations. They revealed the importance of dust growth by accretion, and pointed out the necessity of a more realistic treatment of dust destruction and feedback by SNe. 
In addition, \cite{2017MNRAS.468.1505M} compared statistical properties of dust, especially, the dust mass function and the comoving dust mass density, and found that their simulation broadly reproduced the observation in the present-day Universe, although it tended to underestimate the dust abundance in high-$z$ dusty galaxies. \cite{2016ApJ...831..147Z} analyzed dust evolution in an isolated Milky Way-like galaxy by post-processing the simulation of \cite{2013MNRAS.432..653D}. They put particular focus on dust growth by accretion and examined a gas-temperature-dependent sticking coefficient in accretion, in order to reproduce the relation between silicon depletion and gas density. All these simulations only traced the dust abundance, but did not treat the grain size distribution. As mentioned above, the grain size distribution affects the dust evolution. For the grain size distribution, in addition to the processes included in the above simulations, shattering and coagulation are important. Implementation of grain size distributions in hydrodynamical simulations has not been successful, mainly because of the high computational cost. Calculating the grain size distribution in a fully self-consistent manner over the cosmic age is computationally expensive even in a one-zone calculation as shown by \cite{2013MNRAS.432..637A}. Recently, \cite{2018MNRAS.tmp.1185M} implemented a full grain size distribution in their hydrodynamical simulation. However, their simulation is still limited to an isolated galaxy. For the purpose of treating the evolution of grain size distribution within the available computational capability, \citet[][hereafter A17]{2017MNRAS.466..105A} and \cite{2017MNRAS.469..870H} adopt the two-size approximation formulated by \cite{2015MNRAS.447.2937H}, in which the entire grain size range is represented by two size ranges divided at around $a \simeq 0.03\, \mu$m ($a$ is the grain radius).
\cite{2015MNRAS.447.2937H} confirmed that the two-size approximation gives the same evolutionary behavior of the grain size distribution and extinction curve as calculated by the full treatment of \cite{2013MNRAS.432..637A,2014MNRAS.440..134A}. Because this two-size approximation reduces the computational cost, it provides a feasible way to compute the evolution of the grain size distribution in hydrodynamical simulations. Consequently, we can not only compute the spatial variations in dust abundance, but also examine the grain size distribution as a function of time and metallicity. The hydrodynamical simulations in A17, \cite{2017MNRAS.469..870H} and \cite{2018arXiv180406855G} treated the dust evolution using the two-size approximation in a manner consistent with the local physical states such as the local gas density and temperature. They succeeded in theoretically predicting spatial inhomogeneity in the dust abundance (A17), extinction curves \citep{2017MNRAS.469..870H} and the relation between dust-to-gas mass ratio and oxygen abundance \citep[][]{2018arXiv180406855G}. However, in A17 and \cite{2017MNRAS.469..870H}, only a single isolated spiral galaxy was simulated, and \cite{2018arXiv180406855G} performed zoom-in simulations of only four massive clusters. Therefore, no statistical information on dusty galaxies was available. In order to obtain general evolutionary features of galaxies, a simulation on a cosmological spatial scale and time-scale is desired. Such a cosmological simulation is capable of predicting the evolution of a large number of galaxies. \cite{2016MNRAS.457.3775M,2017MNRAS.468.1505M} implemented dust evolution in a cosmological hydrodynamic simulation, and succeeded in predicting statistical properties of galaxies such as the dust mass function and the scaling relations of dust abundance with quantities characterizing galaxies. However, they did not include the evolution of grain size distribution.
As mentioned above, the evolution of grain size distribution is of fundamental importance in understanding the dust evolution. Another important point of cosmological simulations is that they are able to predict the dust and metal enrichment of the intergalactic medium (IGM). \cite{2010MNRAS.405.1025M} observationally revealed that dust grains indeed exist in the circumgalactic medium (CGM) and IGM. The CGM is defined as the medium located from $\gtrsim $ a few tens of kpc to $\sim 1$ Mpc from the galaxy centre \citep{2010MNRAS.405.1025M}. \cite{2012ApJ...754..116M} estimated the abundance and radial profile of dust in the CGM using Mg \textsc{ii} absorbers as tracers. On the theoretical side, \cite{2003MNRAS.341L...7I} showed that dust grains in the IGM affect the thermal history of the IGM through photoelectric heating. They also pointed out that the efficiency of photoelectric heating depends on the grain size. Therefore, clarifying the evolution of dust abundance and grain size distribution in the IGM is important. In addition, \citet{2016SSRv..202...79N} and \citet{2011MNRAS.412.1059Z} mention that the metal distribution in the IGM is sensitive to feedback (energy injection) models. Because dust grains are created by metal condensation and spread to the ISM and the IGM in a way dependent on the feedback strength \citep{2017MNRAS.469..870H}, the distribution of dust grains could also be useful for testing feedback models. In this paper, we perform cosmological $N$-body/SPH simulations with \textsc{gadget3-osaka} developed in A17 based on the original \textsc{gadget} code \citep[][]{2005MNRAS.364.1105S}. We particularly focus on the overall dust properties in a cosmological volume. We also test the statistical properties of the dust content in galaxies, and examine the dust enrichment in the IGM as a result of cosmic structure formation and SN feedback.
The cosmic history of dust enrichment and the evolution of grain size distribution on a cosmological spatial scale and time-scale are the main topics of this paper. The detailed analysis of individual galaxies is described in a separate paper (Hou et al., in preparation). This paper is organized as follows. In Section \ref{model}, we explain the model of dust evolution in the cosmological simulation. We present the simulation results in Section \ref{sec:result}. We discuss the parameter dependence in Section 4. We conclude in Section 5. Throughout this paper, we adopt $Z_{\odot} = 0.02$ for the solar metallicity following \cite{2015MNRAS.447.2937H}. This value, used as a simple metallicity normalization, does not affect our main results. We adopt the following cosmological parameters \citep{2016A&A...594A..13P}: baryon density parameter $\Omega_{\rm b} = 0.049$, total matter density parameter $\Omega_{\rm m} =0.32$, cosmological constant parameter $\Omega_\Lambda =0.68$, Hubble constant $H_{0} = 67$ km s$^{-1}$ Mpc$^{-1}$, power spectrum index $n_{\rm s}=0.9645$, and density fluctuation normalization $\sigma_{8}=0.831$. In this paper, we also use $h \equiv H_{0} / (100$ km s$^{-1}$ Mpc$^{-1})=0.67$ for the non-dimensional Hubble constant.
\begin{table} \centering \begin{minipage}{90mm} \caption{Simulation setup} \label{table:simulation} \begin{tabular}{cccccc}\\ \hline Name & Boxsize & $N$ & $\varepsilon_{\rm grav} $ & $m_{\rm dm}$ & $m_{\rm gas}^{\rm init}$ \\ & [$h^{-1}$Mpc] &&[$h^{-1}$kpc] &[$h^{-1}{\rm M}_{\odot}$]&[$h^{-1}{\rm M}_{\odot}$] \\ \hline L50N512 & 50 & $2\times 512^{3}$ & 3 &$6.89 \times 10^{7}$& $1.28 \times 10^{7}$\\ L50N256 & 50 & $2\times 256^{3}$ & 5 &$5.51 \times 10^{8}$& $1.02 \times 10^{8}$ \\ \hline \end{tabular} \textit{Notes.} $N$, $\varepsilon_{\rm grav} $, $m_{\rm dm}$ and $m_{\rm gas}^{\rm init}$ are the number of particles, the gravitational softening length, the mass of a dark matter particle and the initial mass of a gas particle, respectively. \end{minipage} \end{table} \section{Model}\label{model} \subsection{Galaxy evolution simulation} We basically use the same simulation code as used in A17 (\textsc{gadget3-osaka}), but here we perform a cosmological simulation. We generated initial conditions at $z=99$ with \textsc{MUSIC} \citep[][]{2011MNRAS.415.2101H}. The basic features of the simulation other than the dust implementation are described in a separate paper (Shimizu et al., in preparation). Below we explain our simulation focusing on the differences from A17. We performed cosmological hydrodynamical simulations with a comoving 50$h^{-1}$ Mpc box. The initial numbers of particles are $2\times 256^{3}$ and $2\times 512^{3}$ (referred to as L50N256 and L50N512, respectively). Other simulation parameters are listed in Table \ref{table:simulation}. In our simulation, stars are formed in dense and cold gas particles.
Star particles are created from gas particles whose number density is greater than $n_{\rm th}=0.1$ cm$^{-3}$ and temperature is less than $T_{\rm th}=10^{4}$ K with the following SFR: \begin{eqnarray} \dfrac{d\rho_{\ast}}{dt}=\varepsilon_{\ast}\dfrac{\rho_{\rm gas}}{t_{\rm ff}}, \end{eqnarray} where $\rho_{\ast}$ and $\rho_{\rm gas}$ are the local mass density of newly formed stars and gas, respectively, $\varepsilon_{\ast}$ is the star formation efficiency in a free-fall time (we adopt $\varepsilon_{\ast}=0.05$ in this paper), and $t_{\rm ff}\equiv \sqrt{3\pi \slash \left( 32 G \rho_{\rm gas} \right)}$ is the local free-fall time. The temperature threshold is used to search for regions where Lyman $\alpha$ cooling has produced favourable conditions for gas collapse (or star formation) \citep[e.g.][]{1993ApJS...88..253S}. The threshold density is determined by the extrapolation of so-called Larson's law \citep{1981MNRAS.194..809L} to our spatial resolution ($\sim 1$ kpc): this density criterion serves to choose regions where bound objects like molecular clouds are potentially formed. We also confirmed that the resulting SFR is roughly consistent with the Schmidt--Kennicutt law in isolated galaxies (Shimizu et al., in preparation). We include the metal and dust production by not only Type II SNe but also Type Ia SNe and AGB stars. The formation of various heavy elements is treated by implementing the \textsf{CELib} package \citep[][]{2017AJ....153...85S} in {\sc gadget3-osaka}. Energy input from massive stars (stellar feedback) is also considered. \begin{figure} \includegraphics[width=8.3cm]{fig1} \caption{ (a) Stellar mass function at $z=0$ for L50N512 (blue solid line) and L50N256 (red dashed line) (see Table \ref{table:simulation} for these two simulations). The observational data are taken from \citet{2013ApJ...767...50M} and \citet{2014ApJ...783...85T} as shown in the legend.
(b) Cosmic star formation rate density (SFRD) with L50N512 (blue solid line) and L50N256 (red dashed line). They are shown with observational data based on far-UV galaxy samples \citep[][]{2005ApJ...619L..47S} and Lyman break galaxies \citep[][]{2009ApJ...705..936B,2009ApJ...692..778R}. {Taking the observational detection limits into account,} we only consider the galaxies with star formation rate $\psi > 0.7\,{\rm M}_{\odot}\, {\rm yr}^{-1}$ ({corresponding to the} absolute AB magnitude $M_{\rm AB}<-18$ according to the conversion formula in \citet{1998ApJ...498..106M} ) for a fair comparison. } \label{fig:SFRD} \end{figure} \begin{figure*} \includegraphics[width=16.5cm]{fig2} \caption{ Time evolution of {the} projected density field $(\int \rho (\textbf{x}) dz)$ of gas and small and large grains at $z=6.1,\,2.4,\,1.0,\,0.0$, where $\rho(\textbf{x})$ is the density at position $\textbf{x}$ of each component. The depth of the integration is 50 $h^{-1}$Mpc. The comoving box size is 50 $h^{-1}$Mpc and the colour indicates the log-scale density of each component in units of $M_{\odot}\, {\rm kpc}^{-2}$ (shown by the colour bar). } \label{fig:dustDistribution} \end{figure*} The model is successful in reproducing the main statistical properties of star formation and stellar content in galaxies, especially, the cosmic star formation rate density (SFRD) and the stellar mass function at $z=0$ \citep[][]{2005ApJ...619L..47S,2009ApJ...705..936B,2009ApJ...692..778R,2013ApJ...767...50M,2014ApJ...783...85T}, except at the very massive end of the stellar mass function, as shown in Fig.~\ref{fig:SFRD}. Comparing the results of L50N512 and L50N256, we find that the stellar mass function and the SFRD at low-z ($z\lesssim 3$) do not significantly depend on the mass resolution. 
However, the SFRD at high $z$ ($z\gtrsim 3$) does depend on the mass resolution, because more low-mass galaxies can form in a higher-resolution simulation. Feedback of active galactic nuclei (AGN) is not included to avoid further complexity not related to dust production, although it might resolve the discrepancy in the stellar mass function at the very massive end. AGN feedback has been implemented in many cosmological simulations and shown to be responsible for suppressing the formation of massive objects \citep[e.g.][]{2013MNRAS.436.3031V, 2014MNRAS.444.1518V, 2015MNRAS.452..575S,2017MNRAS.465.3291W,2017arXiv171200023F}. However, the treatment of AGN feedback depends on subgrid models, which involve the choice of model parameters \citep[e.g.][]{2013MNRAS.436.3031V}. In addition, AGN feedback is also related to the formation of supermassive black holes. Although the relation between the growth of supermassive black holes and dust enrichment is an interesting topic \citep{2011MNRAS.416.1916V}, we choose to focus on the processes directly related to dust formation in this paper. Therefore, we leave the influence of AGN feedback for future work. \subsection{Basic treatment of dust evolution}\label{subsec:dust_ev} In this paper, we basically adopt the dust evolution model used in our previous isolated-galaxy simulation \citep[A17;][]{2017MNRAS.469..870H}. We represent the whole range of grain radii by large and small grain populations roughly separated at $a\sim 0.03~\micron$ according to \citet{2015MNRAS.447.2937H}. We set the typical radii of the large and small grain populations as $0.1\, \mu$m and $5\times 10^{-3}\, \mu$m, respectively.
The abundances of the two dust populations on a gas particle are represented by the dust-to-gas mass ratios, $\mathcal{D}_{\rm L}$ and $\mathcal{D}_{\rm S}$, as \begin{eqnarray} \mathcal{D}_{\rm L} = \dfrac{m_{\rm L}}{m_{{\rm g}}}\, ,\\ \mathcal{D}_{\rm S} = \dfrac{m_{\rm S}}{m_{{\rm g}}}\, , \end{eqnarray} where $m_{{\rm g}}$ is the mass of the gas particle, and $m_{\rm L}$ and $m_{\rm S}$ are the total mass of large and small grains in the gas particle, respectively. Hereafter, we refer to $\mathcal{D}_{\rm L}$ ( $\mathcal{D}_{\rm S}$ ) as the large (small) grain abundance. The total dust-to-gas ratio $\mathcal{D}_\mathrm{tot}$ is defined as \begin{eqnarray} \mathcal{D}_{\rm tot} \equiv \mathcal{D}_{\rm L} + \mathcal{D}_{\rm S}. \end{eqnarray} In our simulation, each gas particle has its own dust abundance $\mathcal{D}_{{\rm L}(i)}$ and $\mathcal{D}_{{\rm S}(i)}$, where suffix $(i)$ indicates the label for the gas particle. Based on the two-size model, we calculate the formation and destruction of large and small dust grains on each gas particle using variables and outputs in the simulation as described below (see A17, especially their equations 13 and 14 for further details). 
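As a concrete illustration, this per-particle bookkeeping can be sketched in a few lines of Python (a stand-alone toy snippet, not the actual \textsc{gadget3-osaka} implementation; the particle masses are made-up values):

```python
# Toy sketch (not simulation code): each gas particle carries the dust-to-gas
# mass ratios of the large- and small-grain populations, D_L and D_S.

class GasParticle:
    """Minimal container mirroring the two-size dust bookkeeping."""

    def __init__(self, m_gas, m_large, m_small):
        self.m_gas = m_gas      # gas particle mass
        self.m_large = m_large  # total mass of large grains (typical a = 0.1 um)
        self.m_small = m_small  # total mass of small grains (typical a = 0.005 um)

    @property
    def D_L(self):
        return self.m_large / self.m_gas

    @property
    def D_S(self):
        return self.m_small / self.m_gas

    @property
    def D_tot(self):
        # total dust-to-gas ratio, D_tot = D_L + D_S
        return self.D_L + self.D_S

# made-up masses in h^-1 Msun
p = GasParticle(m_gas=1.0e7, m_large=8.0e4, m_small=2.0e4)
print(p.D_L, p.D_S, p.D_tot)
```
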
We calculate the time evolution of the large and small grain abundances in the $i$-th particle at time $t$ as (Appendix \ref{appendixA}) \begin{eqnarray} \dfrac{\mathrm{d}\mathcal{D}_{{\rm L}(i)}(t)}{\mathrm{d}t} &=&- \left( \dfrac{\mathcal{D}_{{\rm L}(i)}(t)}{\tau_{\rm sh}} - \dfrac{\mathcal{D}_{{\rm S}(i)}(t)}{\tau_{\rm co}}\right)- \dfrac{\mathcal{D}_{{\rm L}(i)}(t)}{\tau_{\rm sp}(a_{\rm L})}\notag \\ & & +\left[ \dfrac{\mathrm{d}\mathcal{D}_{{\rm L}(i)}(t)}{\mathrm{d}t}\right]_{{\rm Source}} - \left[ \dfrac{\mathrm{d}\mathcal{D}_{{\rm L}(i)}(t)}{\mathrm{d}t}\right]_{{\rm SNe}} \notag \\ & &-\frac{\mathcal{D}_{{\rm L}(i)}(t)}{m_{\mathrm{g}(i)}}\dfrac{\mathrm{d} m_{{\rm g}(i)}^{\rm return}}{\mathrm{d} t}\, , \label{eq:timeL} \\ \dfrac{\mathrm{d}\mathcal{D}_{{\rm S}(i)}(t)}{\mathrm{d}t} &=& \left(\dfrac{\mathcal{D}_{{\rm L}(i)}(t)}{\tau_{\rm sh}} - \dfrac{\mathcal{D}_{{\rm S}(i)}(t)}{\tau_{\rm co}} + \dfrac{\mathcal{D}_{{\rm S}(i)}(t)}{\tau_{\rm acc}}\right)\, \notag \\ & &- \dfrac{\mathcal{D}_{{\rm S}(i)}(t)}{\tau_{\rm sp}(a_{\rm S})} -\left[ \dfrac{\mathrm{d}\mathcal{D}_{{\rm S}(i)}(t)}{\mathrm{d}t}\right]_{{\rm SNe}}\notag \\ & &-\frac{\mathcal{D}_{{\rm S}(i)}(t)}{m_{\mathrm{g}(i)}} \dfrac{\mathrm{d} m_{{\rm g}(i)}^{\rm return}}{\mathrm{d} t}\, ,\label{eq:timeS} \end{eqnarray} where $\tau_{\rm sh}$, $\tau_{\rm co}$, and $\tau_{\rm acc}$ are the time-scales of shattering, coagulation, and accretion, respectively, and $\mathrm{d} m_{{\rm g}(i)}^{\rm return}/{\mathrm{d} t}$ is the gas ejection rate from stars (note that the ejected gas dilutes the dust-to-gas ratio).\footnote{Equations (13) and (14) in A17 need to be corrected by including this dilution term. However, A17 correctly included this term in their code and calculations.}
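To make the structure of the leading terms in equations (\ref{eq:timeL}) and (\ref{eq:timeS}) explicit, here is a minimal explicit-Euler sketch of the shattering/coagulation exchange and the accretion growth term alone (stellar sources, SN destruction, sputtering and dilution are omitted; all numerical values are placeholders):

```python
import math

def step(D_L, D_S, dt, tau_sh, tau_co, tau_acc):
    """One Euler step for the large/small dust-to-gas ratios, keeping only
    the shattering, coagulation and accretion terms of the two equations."""
    exchange = D_L / tau_sh - D_S / tau_co  # net large -> small transfer rate
    dD_L = -exchange
    dD_S = exchange + D_S / tau_acc         # accretion only grows small grains
    return D_L + dD_L * dt, D_S + dD_S * dt

# With accretion switched off, shattering and coagulation only exchange mass
# between the two populations, so D_L + D_S is conserved.
D_L, D_S = step(8e-3, 2e-3, dt=1e5, tau_sh=5e7, tau_co=3e6, tau_acc=math.inf)
print(D_L + D_S)
```
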
The parameter $\tau_\mathrm{sp}(a)$ is the sputtering time-scale as a function of grain radius in the hot gas not associated with SNe (see Section \ref{subsec:sput}), and the terms with `Source' and `SNe' describe the stellar dust production and SN destruction, respectively. In our formulation, these time-scales depend on the gas density, dust abundances, and/or metallicity, but the dependence on those quantities is not explicitly shown here for brevity of notation. The formation and destruction terms are evaluated as \begin{eqnarray} \left[ \dfrac{\mathrm{d}\mathcal{D}_{{\rm L}(i)} (t)}{\mathrm{d}t}\right]_{\rm Source}\mathrm{d}t&=& f_{\rm in}\dfrac{\Delta m_{\rm metal}}{m_{{\rm g}(i)}} \left( 1-\delta \right), \label{eq:dustsource}\\ \left[ \dfrac{\mathrm{d}\mathcal{D}_{{\rm L, S}(i)} (t)}{\mathrm{d}t}\right]_{\rm SNe}\mathrm{d}t&=& \left[ 1-( 1 - \eta )^{N_{\rm SNe}}\right] \mathcal{D}_{{\rm L, S}(i)}(t), \label{eq:dustdestruct-1} \end{eqnarray} where $f_\mathrm{in}$ is the dust condensation efficiency of metals in the stellar ejecta (we assume $f_\mathrm{in}=0.1$ following A17), $\Delta m_{\rm metal}$ is the ejected metal mass from stars, $\delta$ is the destroyed fraction of newly formed dust, $N_{\rm SNe}$ is the number of SNe that affect the gas particle of interest (note that, because a star particle represents a cluster of $\sim 10^{6}$--$10^{7}$ stars and contains a number of massive stars, a number of SNe are treated as a single explosion from the star particle), and $\eta$ is the destroyed fraction of preexisting dust by a single SN (see the evaluations of $\delta$, $N_\mathrm{SNe}$ and $\eta$ in A17).\footnote{In A17, their equation (19), corresponding to equation (\ref{eq:dustdestruct-1}), used the notation $\mathcal{D}_{{(\rm SNe\slash L, S)}(i)}(t)$, which should simply be $\mathcal{D}_{{\rm L, S}(i)}(t)$.} The time-scale parameters of accretion, coagulation, and shattering are determined in the following way (see A17 and references therein for the
detailed derivation). Since accretion and coagulation occur in the dense clouds, which cannot be resolved in our simulations, we adopt a subgrid model. We assume that dense ($n_\mathrm{gas}>1\,{\rm cm}^{-3}$, where $n_\mathrm{gas}$ is the gas number density) and cold ($T_\mathrm{gas}<10^{4}\,{\rm K}$, where $T_\mathrm{gas}$ is the gas temperature) gas particles, which are referred to as the \textit{dense gas particles}, host dense clouds with a temperature of 50 K and a density of $10^{3}\,{\rm cm}^{-3}$. Because the cosmological simulations in this paper resolve only less dense gas than the single-galaxy simulation in A17, we apply a looser condition for the identification of the dense gas particles. {We assume that the dense clouds occupy a mass fraction of $f_\mathrm{dense}=0.1$ in the dense gas particles (see Section 2.3 of A17 for the definition of $f_\mathrm{dense}$).} Accretion and coagulation are assumed to occur only in the dense gas particles. The time-scales of accretion and coagulation ($\tau_\mathrm{acc}$ and $\tau_\mathrm{co}$) are evaluated as follows: \begin{eqnarray} \tau_{\rm acc}&=&\begin{cases} 1.2\times 10^{6}{\rm ~yr}\left(\dfrac{Z}{Z_{\odot}} \right)^{-1}\left(1-\dfrac{\mathcal{D}_{\rm tot}}{Z}\right)^{-1}\slash f_{\rm dense}\\ \hspace{3cm}(\mbox{in {\it dense gas particles}})~,\\ \infty~\mbox{(otherwise)}\,, \end{cases} \end{eqnarray} \begin{eqnarray} \tau_{\rm co}&=& \begin{cases} 2.71 \times 10^{5}{\rm ~yr}\left( \dfrac{\mathcal{D}_{\rm S}}{0.01} \right)^{-1} \left( \dfrac{v_{\rm co}}{0.1 {\rm ~km}{\rm ~s}^{-1}} \right)^{-1} \slash f_{\rm dense}\\ \hspace{3cm}(\mbox{in {\it dense gas particles}})~,\\ \infty~({\rm otherwise})~,\label{coagulation1} \end{cases} \end{eqnarray} where $Z$ is the metallicity of the gas particle and $v_{\rm co}$ is the grain velocity relevant for coagulation (see A17).
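The two expressions above translate directly into code; the following is a minimal sketch for a dense gas particle, in which the adopted solar metallicity $Z_{\odot}=0.02$ is an illustrative assumption (the paper's exact value may differ).

```python
def tau_acc_yr(Z, D_tot, Z_sun=0.02, f_dense=0.1):
    """Accretion time-scale (yr) in a dense gas particle.

    Z is the gas metallicity, D_tot the total dust-to-gas ratio.
    Z_sun = 0.02 is an illustrative assumed solar metallicity.
    """
    return 1.2e6 * (Z / Z_sun) ** -1 * (1.0 - D_tot / Z) ** -1 / f_dense


def tau_co_yr(D_S, v_co_kms=0.1, f_dense=0.1):
    """Coagulation time-scale (yr) in a dense gas particle.

    D_S is the small-grain dust-to-gas ratio and v_co_kms the
    grain velocity in km/s.
    """
    return 2.71e5 * (D_S / 0.01) ** -1 * (v_co_kms / 0.1) ** -1 / f_dense
```

Outside the dense gas particles both functions would simply return infinity, matching the `otherwise' branches of the case expressions above. Note how the $(1-\mathcal{D}_{\rm tot}/Z)^{-1}$ factor makes accretion stall as the dust-to-gas ratio approaches the metallicity.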
Shattering is assumed to occur only in the diffuse gas whose number density is smaller than 1 cm$^{-3}$: \begin{eqnarray} \tau_{\rm sh}&=& \begin{cases} 5.41 \times 10^{7}~{\rm yr} \left(\dfrac{\mathcal{D}_{\rm L}}{0.01}\right)^{-1} \left(\dfrac{n_{\rm gas}}{1~\mathrm{cm}^{-3}}\right)^{-1}\\ \hspace{3cm}(n_{\rm gas} < 1\,{\rm cm}^{-3})~,\\ \infty~(n_{\rm gas} \ge 1\,{\rm cm}^{-3} )~.\label{shattering2} \end{cases} \end{eqnarray} \subsection{Sputtering not directly associated with SNe}\label{subsec:sput} The sputtering terms in equations (\ref{eq:timeL}) and (\ref{eq:timeS}) were not included in our previous model (A17). Dust grains are destroyed by sputtering in high-temperature ($\gtrsim 10^6$ K) regions such as the X-ray-emitting hot gas in the CGM or IGM \citep{1995ApJ...448...84T}. Note that we have already counted the dust destruction in SN shocks. Thus, to avoid double-counting the destruction, we extract the diffuse hot gas not associated with SNe by imposing a density threshold for sputtering at $n_{\rm th}^{\rm sp}=0.01$ cm$^{-3}$, and consider the dust destruction by sputtering only in regions with $n_\mathrm{gas}<n_\mathrm{th}^\mathrm{sp}$ and $T_\mathrm{gas}>10^{6}\,{\rm K}$. We adopt the following destruction time-scale based on \citet{1995ApJ...448...84T} \citep[see also][]{1979ApJ...231...77D,2006ApJ...648..435N,2015MNRAS.454.1620H}: \begin{eqnarray} \tau_{\rm sp}(a) = \begin{cases} 2.1\times 10^{5}~{\rm yr}\left( \dfrac{a}{1\, \mu{\rm m}} \right)\left( \dfrac{n_{\rm gas}}{1\, {\rm cm}^{-3}} \right)^{-1}\\ ~~~(\mbox{if}~n_{\rm gas}<n_{\rm th}^{\rm sp}~\mbox{and}~T_\mathrm{gas} >10^{6}\,{\rm K}),\\ \infty~\mbox{otherwise}. \end{cases} \label{eq:sputtering} \end{eqnarray} \subsection{Time-step for dust treatment} Some of the time-scales concerning dust evolution {processes} could be shorter than the hydrodynamical time-step adopted in the \textsc{gadget-3} code \citep{2001NewA....6...79S}.
In this case, we calculate the dust evolution by dividing a single hydrodynamical time-step into multiple sub-cycles. We set the sub-cycle time-step $\Delta t_{\rm sub}$ as follows: \begin{eqnarray} \Delta t_{\rm sub} &=&\varepsilon_{\rm sub} \left[{\rm max}\left( \tau_{\rm hydro}^{-1},~ \tau_{\rm acc}^{-1},~ \tau_{\rm sh}^{-1},~ \tau_{\rm co}^{-1} \right)\right]^{-1}~, \end{eqnarray} where $\varepsilon_{\rm sub} $ is a constant which controls the accuracy of the calculation. Because we use the fourth-order classical Runge-Kutta method for the dust evolution, the error of time integration should be suppressed as $\propto \varepsilon_{\rm sub}^{4}$. In this paper, we set $\varepsilon_{\rm sub} = 0.1$. \subsection{Galaxy identification and definition of the IGM}\label{subsec:id} We analyze the dust associated with galaxies and that contained in the IGM separately. {In order to} distinguish between these two regions, we identify galaxies, and define the intergalactic space as the regions not associated with the galaxies. In order to identify galaxies in a simulation snapshot, we use \textsc{P-Star groupfinder} \citep{2001MNRAS.328..726S}. In what follows, we give a brief summary of the algorithm following \cite{2004MNRAS.350..385N}. First, the baryonic (gas + stars) density peaks in the smoothed density field are identified. Second, the densities of $N_{\rm ngb}$ nearest neighbor particles around the density peak are measured, and the peak-density particle is considered as a `head particle' if all the neighbor particles have lower densities. In this paper, we adopt $N_{\rm ngb}= 128$ and 512 for L50N256 and L50N512, respectively. Finally, the gas and star particles near the head particle above a density threshold described by \cite{2004MNRAS.350..385N} are grouped and identified as a galaxy. 
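The head-particle selection (the second step of the algorithm summarized above) can be sketched as follows. The data layout and the precomputed neighbour lists are assumptions for illustration; in practice they would come from the smoothed baryonic density estimate and a neighbour search over $N_{\rm ngb}$ particles.

```python
def find_head_particles(density, neighbors):
    """Pick 'head particles': particles denser than all of their
    nearest neighbours (sketch of step two of the group finder).

    density   -- list of particle densities
    neighbors -- neighbors[i] is the list of indices of the nearest
                 neighbours of particle i (assumed precomputed,
                 e.g. via a k-d tree lookup)
    """
    heads = []
    for i, nbrs in enumerate(neighbors):
        # A head particle must be a strict local density maximum.
        if all(density[i] > density[j] for j in nbrs):
            heads.append(i)
    return heads
```

Each head particle then seeds a galaxy: the surrounding gas and star particles above the density threshold of \cite{2004MNRAS.350..385N} are grouped around it.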
In our definition, we identify the objects whose stellar mass is greater than $10^{8} M_{\odot}$ as galaxies, since smaller structures are affected by our finite mass resolution. The IGM is defined as the medium not belonging to the galaxies identified above. \section{Results}\label{sec:result} First, we test whether the dust enrichment is successfully implemented by comparing our theoretical prediction with the observed dust abundance at $z=0$, where the relation between dust abundance and metallicity is well studied. As mentioned in the Introduction, we focus on the dust properties in galaxies, the CGM and the IGM. Second, we present the evolution of the cosmic dust abundance. In particular, we predict the grain size distribution (i.e.\ small/large grain abundance) in a cosmological volume for the first time. We also show the dust mass function of galaxies and present the radial profile of dust mass around massive galaxies in the CGM. Finally, we show the cosmic extinction (extinction {on cosmological scales}) and compare it with the corresponding observational results. \subsection{Relation between dust abundance and metallicity} Because dust production is strongly associated with metal enrichment, the relation between dust abundance and metallicity provides a strong test of dust evolution models \citep{1998ApJ...496..145L,1998ApJ...501..643D}. Some cosmological models with dust implementations also use this relation as a critical test \citep[][]{2017MNRAS.471.3152P, 2018MNRAS.473.4538G}. We show the galaxy distribution on the $\mathcal{D}_\mathrm{tot}$--$Z$ (total dust-to-gas ratio vs.\ metallicity) diagram at $z=0$ in Fig.\ \ref{fig:d-z}. Each point indicates an individual galaxy and its colour indicates the total stellar mass ($M_\ast$). We observe {in Fig.\ \ref{fig:d-z}} that the metallicity and dust abundance are low for $M_{\ast}\lesssim 10^{8.5}\, M_{\odot}$.
{If the metallicity is low, dust growth by accretion is not efficient, so that most of the dust is produced by stellar sources. Thus, the dust-to-gas ratio follows $\mathcal{D}_{\rm tot}\simeq f_{\rm in}Z$ for low-mass galaxies. \begin{figure} \includegraphics[width=8.3cm]{fig3} \caption{Relation between dust-to-gas mass ratio $(\mathcal{D}_{\rm tot})$ and metallicity $Z$ of individual galaxies. The colour indicates the logarithmic stellar mass (M$_{\sun}$). The yellow and red lines represent the linear relation of the stellar yield $(\mathcal{D}_{\rm tot}=f_{\rm in}Z)$ and the saturation limit $(\mathcal{D}_{\rm tot}=Z)$, respectively. The black stars denote the observational results reported by \citet[][]{2014A&A...563A..31R}. } \label{fig:d-z} \end{figure} In the middle mass range $10^{8.5}\,M_{\odot}\lesssim M_{\ast} \lesssim 10^{10}\,M_{\odot}$, $\mathcal{D}_{\rm tot}$ increases steeply at $Z\gtrsim 0.1Z_{\odot}$. In {these} galaxies, star formation occurs continuously and metal enrichment proceeds. As a consequence, dust growth by accretion occurs and the dust-to-gas ratio increases steeply. {The relation between dust-to-gas ratio and metallicity in this galaxy mass range is also consistent with} the observational trend in a nearby star-forming galaxy sample in \cite{2014A&A...563A..31R}. At the massive end, $M_\ast\gtrsim 10^{10}\,M_{\odot}$, where the metallicity is high, dust growth by accretion is saturated because of the limit $\mathcal{D}_\mathrm{tot}\leq Z$. The $\mathcal{D}_\mathrm{tot}$--$Z$ relation of the simulated high-metallicity galaxies lies within the dispersion of the observational data points. Some observational data are in the area of $\mathcal{D}_\mathrm{tot}>Z$, which is unphysical; there could still be a significant uncertainty in the observational dust mass estimates. In our simulation, there are some outliers located far below the line of $\mathcal{D}_\mathrm{tot}=f_\mathrm{in}Z$.
In our model, such an extremely low dust-to-metal ratio can only be produced by SN destruction. Interestingly, some observational data points also show such an extremely low dust-to-metal ratio. However, we emphasize that those outliers {account for a tiny fraction} of the entire galaxy population in our model, and that most of the galaxies show a clear correlation between dust-to-gas ratio and metallicity. \subsection{Cosmic dust abundance}\label{subsec:Omega} \begin{figure*} \includegraphics[width=18cm]{fig4} \caption{ Redshift evolution of cosmic dust abundance $\Omega_{\rm dust}(z)$ for various components. The blue dashed line labelled `galaxy+IGM' shows the total amount of dust on gas particles inside simulated galaxies and the IGM. The red dot-long-dashed line labelled `galaxy' presents only the amount of dust on gas particles inside simulated galaxies (the ISM and a part of the CGM). The purple solid line labelled `IGM' is the amount of dust in the IGM. For the IGM component, we also show the small grain abundance with {the mosgreen} dotted line. The orange dot-short-dashed line labelled `star' shows the total amount of dust {absorbed into} stars. We compare our results against the following observational results: the halo component of dust grains at $0.6\lesssim z \lesssim 2$ ($\blacklozenge$) and $z=0.5$ ($\bigcirc$) from \citet{2012ApJ...754..116M}. The open star is from \citet{2010MNRAS.405.1025M}. For the total abundance of dust grains in the Universe, $\bigtriangleup, \rhd, \bigtriangledown, \lhd$, $\Box$ and pentagon are from \citet{2007MNRAS.379.1022D}, \citet{2013MNRAS.433..695C}, \citet{2011MNRAS.417.1510D}, \citet{2011arXiv1103.4191F}, \citet{2004ApJ...616..643F} and \citet{2010MNRAS.405.1025M}, respectively. The hatched region and the dot-filled region are from \citet{2012ApJ...760...14D} and \citet{2013ApJ...768...58T}, respectively. 
Upper limits of large and small grain abundance in the IGM from \citet{2003MNRAS.341L...7I} are shown {with the purple (`L') and {mosgreen} (`S') arrows}, respectively. } \label{fig:OmegaDust} \end{figure} We show the evolution of the cosmic dust density normalized to the critical density of the Universe, i.e.\ $\Omega_{\rm dust}(z)$, in Fig.~\ref{fig:OmegaDust}. We also separately show the dust abundances in galaxies and the IGM based on the galaxy identification explained in Section \ref{subsec:id}; that is, we sum up all the dust mass contained in galaxies and subtract it from the total dust mass to obtain the IGM dust abundance. To specify each component, we put superscript `L' and `S' for large and small grains, respectively, no superscript for the total dust abundance, and subscript `gal' and `IGM' for dust in the galaxies and the IGM, respectively. For example, $\Omega^{\rm L}_{\rm dust, IGM}$ {and $\Omega^{\rm S}_{\rm dust, IGM}$} denote {the comoving densities of large and small grains in the IGM, respectively,} and $\Omega_\mathrm{dust,IGM}=\Omega^\mathrm{S}_\mathrm{dust,IGM}+\Omega^\mathrm{L}_\mathrm{dust,IGM}$. Galaxies are enriched with dust through stellar dust production and dust growth by accretion. Star formation starts {at} $z\sim 15$ in our simulation, and dust in galaxies increases continuously. {As galaxies grow through their star formation activity} (Fig.~\ref{fig:SFRD}b), they are also enriched with metals and dust. The increase of metallicity further drives the dust enrichment through dust growth by accretion in the dense gas. The cosmic dust abundance continues to increase down to $z\sim 2$ (Fig.~\ref{fig:OmegaDust}). Since accretion dominates the abundance of small grains in the metal-rich environment, the small grain abundance continues to increase even at $z<2$ (Fig.~\ref{fig:OmegaDust}).
In our simulation, the comoving dust density peaks at $z=1$--2, which coincides with the most dust-enshrouded epoch in the Universe derived from \textit{Herschel} observations \citep{2013A&A...554A..70B,2018MNRAS.475.2891D}. { The cosmic dust density declines slightly at $z\lesssim 1$ because of astration. To support this, we also show the {comoving dust density removed by astration} in Fig.~\ref{fig:OmegaDust}. At $z \sim 2$,} more than half of the dust grains are consumed by stars (astration). Interestingly, more than 80 per cent of the dust grains have been absorbed into stars by $z=0$. We also show the dust mass evolution in the IGM in Fig.~\ref{fig:OmegaDust}. Both metals and dust are produced and spread into the IGM as a result of stellar production and feedback. The IGM is enriched with dust continuously; throughout all redshifts, almost all dust grains remain in their host galaxies, but about 10 per cent of the dust is ejected out of galaxies into the IGM. In Fig.~\ref{fig:OmegaDust}, we also present the observational data for the total dust abundance in various environments, which are obtained from integration of galactic dust emission and from fluctuation analysis of the cosmic infrared background radiation (CIRB). The total dust abundance in the simulated galaxies broadly agrees with the observed data from the CIRB \citep{2013ApJ...768...58T, 2012ApJ...760...14D} and the dust abundance in disc galaxies \citep[][]{2007MNRAS.379.1022D}. However, we tend to overestimate the galactic dust amount compared with the observational estimates. This is because our identification of simulated galaxies includes their circum-galactic regions. As we show below {in Section \ref{subsec:circum}}, a significant fraction of dust is contained in the CGM in our simulation.
Observationally, \cite{2010MNRAS.405.1025M} and \cite{2015ApJ...813....7P} found, {based on the analysis of reddening of background QSOs, that the total dust mass contained in the CGM is comparable to that in the galactic discs}. The dust abundance estimated from Mg \textsc{ii} absorbers \citep{2012ApJ...754..116M} also indicates that a significant amount of dust is contained in the CGM. They argue that Mg \textsc{ii} absorbers trace the CGM environment based on their impact parameters. \cite{2003MNRAS.341L...7I} constrained the dust abundance in the IGM based on the observed thermal history of the IGM.\footnote{They define the radii of large grains as $10^{-2}\,\mu{\rm m}\le a \le 10^{-1}\,\mu{\rm m}$ and those of small grains as $a \sim 10^{-3}\,\mu{\rm m}$.} They argued that, {if photoelectric heating by dust is significant, } it heats the IGM too much to be consistent with its observed thermal history. By calculating the gas heating rate by dust grains and comparing it with the observed IGM temperatures at $2\lesssim z \lesssim 4$ \citep{2000MNRAS.318..817S}, they obtained upper limits of $\Omega^{\rm L}_{\rm dust}<7\times 10^{-6}$ and $\Omega^{\rm S}_{\rm dust}<7\times 10^{-7}$ at $z\gtrsim 2$. Our simulation results are consistent with these upper limits. A possible source of uncertainty in our galaxy/IGM dust abundance is the finite spatial resolution. We only identify structures with $M_\ast >10^8$ M$_\odot$ as galaxies (Section \ref{subsec:id}). In other words, small `galaxies' whose stellar masses are less than $10^{8} M_{\odot}$ are regarded not as galaxies but as part of the IGM. On the other hand, observations also suffer from a similar problem because of their finite sensitivity and spatial resolution.
Although it is extremely difficult to correct for {the limited computational and observational} capabilities, the above rough match between theory and observations indicates that the dust abundance in the cosmic volume can be broadly understood by the processes we included in the simulation (mainly stellar dust production and dust growth by accretion). In Fig.~\ref{fig:dustLS}, we show the time evolution of small and large grains for galaxies and the IGM. To clarify the relative abundance, we also show the small-to-large grain abundance ratio, $\Omega^{\rm S}_{\rm dust}\slash \Omega^{\rm L}_{\rm dust}$. The dust abundance in the Universe is always dominated by large grains. In galaxies, the redshift dependence of the small-to-large grain abundance ratio is flat. The large grain formation is dominated by stellar dust production and coagulation, while the small grain formation is governed by shattering and accretion. If we sum up all galaxies, the statistics are dominated by small galaxies, in which the major part of the dust is produced by stellar sources. The abundance of small grains is significant only in massive ($M_\ast\gtrsim 10^{10}\,{\rm M}_{\odot}$) galaxies, where shattering and accretion {are efficient because of their high metallicity} (Hou et al., in preparation). In contrast, the small-to-large grain abundance ratio in the IGM monotonically increases from high redshift down to $z\sim 0$. The dust abundance in the IGM is fully dominated by large grains at high redshift because dust grains are ejected into the IGM before being processed by shattering and accretion, as discussed in \citet{2017MNRAS.469..870H}. The increase of the small-to-large grain abundance ratio in the IGM {is due to the supply of small grains formed by shattering and accretion. Massive galaxies are assembled at low redshift {$(z \lesssim 2)$}, and the ISM in massive galaxies contains a large amount of small grains. As a consequence, more small grains are supplied at lower redshift.
\textit{In situ} small-grain formation in the IGM by shattering may also be possible. However, we find that this path of small-grain formation is negligible because the grain--grain collision time-scale is longer than the cosmic age in the IGM.} Indeed, we do not see any enhancement of the small grain abundance relative to the large grain abundance in the CGM, as shown in Section \ref{subsec:circum} (Fig.\ \ref{fig:profile}). As shown later in this section, sputtering decreases both small and large grain abundances almost equally. Thus, the processing in the IGM is not important in determining the grain size distribution in the IGM. There are no observational data on the grain size distribution in the IGM to compare with, while there are some observational clues in the CGM. Thus, the dust properties in the CGM may provide some insight into the IGM dust. As mentioned above, the dust properties in the CGM could be traced by Mg \textsc{ii} absorbers as argued by \citet{2012ApJ...754..116M}, who analyzed the background quasar (QSO: quasi-stellar object) data taken by the Sloan Digital Sky Survey \citep[SDSS;][]{2000AJ....120.1579Y} in combination with {\it Galaxy Evolution Explorer} \citep[\textit{GALEX};][]{2005ApJ...619L...1M} data. The reddening curves of Mg \textsc{ii} absorbers are fitted well with the Small Magellanic Cloud (SMC) extinction curve, which indicates that dust grains smaller than 0.03~$\micron$ {are} abundant \citep[][]{1992ApJ...395..130P,2001ApJ...548..296W}. However, our result shows that large grains dominate the IGM dust abundance (Fig.~\ref{fig:OmegaDust}). Thus, there is a tension between our simulation results and the observed reddening curves for Mg \textsc{ii} absorbers. We discuss this further in Sections~\ref{subsec:circum} and \ref{subsec:ext}.
\begin{figure} \includegraphics[width=8.3cm]{fig5} \caption{Panels ({\it a}) and ({\it b}): time evolution of the cosmic dust abundance of large (thick solid line) and small grains (thick long-dashed-short-dashed line) in galaxies and in the IGM, respectively. The sum of large and small grains (`Total') is shown by {the} red thick dashed line in each panel. Panel ({\it c}): small-to-large grain abundance ratio for galaxies (blue long-dashed-short-dashed line), the IGM (red solid line), and the total (black dotted line). Thin lines in each panel represent the time evolution of each component without sputtering (i.e.\ $\tau_{\rm sp}(a)\to +\infty$).} \label{fig:dustLS} \end{figure} We show the impact of sputtering in the hot gas (not associated with SNe) on the time evolution of the cosmic dust abundance in Fig.~\ref{fig:dustLS}. {As explained in Section \ref{subsec:sput}, we select the hot gas not associated with SNe (mainly associated with the CGM and IGM) with {the} criterion $n_\mathrm{gas}<0.01$ cm$^{-3}$ and $T_\mathrm{gas}>10^6$ K.} In Fig.~\ref{fig:dustLS}, we indeed confirm that sputtering in the diffuse hot gas has little effect on the dust abundance in galaxies. In the IGM, in contrast, 70--80 per cent of the IGM dust could be destroyed by sputtering. Although small grains are more sensitive to sputtering than large grains, the ratio $\Omega^\mathrm{S}_{\rm dust} / \Omega^\mathrm{L}_{\rm dust}$ is not significantly affected by sputtering, as we observe in Fig.~\ref{fig:dustLS}c. This is because the sputtering time-scales for both large and small grains are much shorter than the hydrodynamical time-scale, and almost all dust grains that suffer from sputtering are destroyed on the spot regardless of the grain size. \subsection{Dust mass function} \begin{figure} \includegraphics[width=8.3cm]{fig6} \caption{Dust mass function of simulated galaxies at $z=2.4$ (panel ({\it a})) and $z=0$ (panel ({\it b})) compared with observational results.
The solid and dashed lines represent the results of L50N512 and L50N256, respectively. The symbols $\star, \bigcirc, \bigtriangledown, \lhd, \Box$ and $\bigtriangleup$ show the observational data taken from \citet{2003MNRAS.341..589D}, \citet{2000MNRAS.315..115D}, \citet{2011MNRAS.417.1510D}, \citet{2013MNRAS.433..695C}, \citet{2015MNRAS.452..397C} and \citet{2005MNRAS.364.1253V}, respectively, while $\rhd$ is obtained from the {\it IRAS} PSCz-extrapolated mass function shown in \citet{2005MNRAS.364.1253V}. } \label{fig:dustMF} \end{figure} The dust abundance in galaxies can also be expressed in the form of the dust mass function, which is the distribution function of dust mass in galaxies, as shown in Fig.~\ref{fig:dustMF}. {We compare the dust mass function of the simulated galaxies with observed dust mass functions. We have to keep in mind that observational estimates of dust mass depend on the adopted dust mass absorption coefficient (dust emissivity per dust mass). The observationally derived dust mass is simply proportional to $\kappa_{850}^{-1}$ ($\kappa_{850}$ is the dust mass absorption coefficient at a wavelength of 850 $\micron$), while the dust mass in our simulation is not affected by $\kappa_{850}$.} We adopt a uniform dust mass absorption coefficient $\kappa_{\rm 850}=0.77\,{\rm cm}^{2}{\rm g}^{-1}$ at 850 $\micron$ with a wavelength dependence of $\propto\lambda^{-2}$ according to \cite{2000MNRAS.315..115D}. This value of $\kappa_{850}$ is intermediate between the values for graphite and silicates given by \cite{1984ApJ...285...89D} and \cite{1993MNRAS.263..607H}. \cite{2003ARA&A..41..241D} claimed $\kappa_{850}=0.383\,{\rm cm}^{2}{\rm g}^{-1}$, which is approximately half of the above value. Thus, it is important to note that there is a factor 2--3 uncertainty in the dust mass derived from the observed IR emission.
Our simulation produces a larger number of galaxies with high dust mass ($M_\mathrm{d}\gtrsim 10^8$\,M$_{\sun}$) at $z=0$ than at $z=2.4$, while the observational data indicate the opposite. As a result, although we reproduce the dust mass function at $z=2.4$ well (if we consider a factor 2--3 uncertainty in the observational dust mass estimates), we overpredict the number of high-$M_\mathrm{d}$ galaxies at $z=0$. Recalling that our simulation also overproduces the massive end of the galaxy stellar mass function (Fig.~\ref{fig:SFRD}b), we argue that the overproduction of dusty galaxies is linked to that of massive galaxies. As discussed above, a possible reason is that we did not include AGN feedback. We expect that AGN feedback suppresses the dust mass in two ways: (i) loss of dust by outflow, and/or (ii) suppression of star formation and subsequent chemical enrichment. In Fig.~\ref{fig:dustMF}, the cut-off at the low-$M_\mathrm{d}$ end ($\sim 10^4$ M$_{\odot}$) is determined by the spatial resolution of the simulation. Our simulation tends to underproduce the dust mass function at $M_\mathrm{d}\sim 10^5$--$10^7$\,M$_{\sun}$, although it is marginally within the scatter of {the} observational data. {The slope at $M_\mathrm{d}\sim 10^5$--$10^8$ M$_{\odot}$ is similar to that of the observed dust mass function. We should recall again that, on the observational side, there is a factor 2--3 uncertainty in the dust mass.} Therefore, we conclude only that our simulation {qualitatively accounts for the observational dust mass function.} For comparison, we also show the dust mass function for the lower resolution run, L50N256. Although the mass resolution of L50N256 is eight times worse than that of L50N512, the amplitudes and cut-offs of the mass functions at the massive end roughly agree with each other. This means that the mass resolution does not strongly affect the statistical properties of dust mass in galaxies.
As expected, the number of low-mass galaxies with $M_\mathrm{d}\lesssim 10^{5}\,M_{\odot}$ is higher for L50N512 than for L50N256 because of the higher mass resolution. \subsection{Circum-galactic dust around massive galaxies}\label{subsec:circum} \cite{2010MNRAS.405.1025M} detected a large abundance of dust in the CGM by {analyzing the} correlation between the reddening of background QSOs and a large number of galaxies in the SDSS sample, whose median redshift is $z\sim 0.3$. \cite{2015ApJ...813....7P} found a similar radial profile of CGM dust at $z\sim 0.05$. These studies showed that the dust mass in galaxy halos is comparable to that in galactic discs. In order to examine if our simulation also reproduces such a large dust abundance in galaxy halos (or in the CGM), we compare the radial profile of the surface density of dust up to $\sim 1$ Mpc from the galaxy centre. We select 1617 simulated galaxies in a stellar mass range similar to that of the sample of \citet{2010MNRAS.405.1025M}\footnote{\citet{2010MNRAS.405.1025M} sampled galaxies with luminosity $L\simeq 0.45L^{\ast}$ at $z=0.3$, where $L^{\ast}$ is the characteristic luminosity of the luminosity function at that redshift. \cite{2014ApJ...783...85T} reported the characteristic mass of the stellar mass function to be $\simeq 10^{11.05}$\,M$_{\odot}$. If we assume $L\propto M_*$, a galaxy with $0.45L^*$ would have $M_* \simeq 5\times 10^{10}$ M$_\odot$. Thus, we assume that \cite{2010MNRAS.405.1025M}'s sample has a stellar mass range of $10^{10}<M_*/{\rm M}_{\odot}<10^{11}$.}, and compute the average radial profile of dust surface densities {in the following way}. We first extract the $(2\, {\rm Mpc})^{3}$ region around each simulated galaxy at $z=0$, and divide each region into a $200^{3}$ cubic grid. Thus, the effective resolution of the radial profile is $\sim 10$\,kpc.
By calculating the averaged dust mass density at each grid point for small and large grains, we obtain the distribution of dust grains around the galaxies. Then, the 3-dimensional dust distribution is projected onto a 2-dimensional surface to obtain the radial profile of surface density as plotted in Fig.~\ref{fig:profile} (the projected radius is denoted as $r$). \begin{figure} \includegraphics[width=8.3cm]{fig7}\caption{ Panel (a): Averaged surface density profile of metals and dust around massive galaxies ($r$ is the distance from the galaxy centre). We sampled all galaxies whose stellar mass is in the range of $10^{10}M_{\odot}\le M_{\ast} \le 10^{11}M_{\odot}$. The number of galaxies is 1617. The surface densities of metals and dust are presented with the dashed (purple) and solid (blue) lines, respectively. The blue and red dotted lines show the surface densities of large and small grains, respectively. Points with error bars are the observed radial profile of dust \citep{2010MNRAS.405.1025M}. Panel (b): Radial profile of dust-to-metal surface density ratio. The dashed line shows the constant dust-to-metal ratio adopted by \citet{2011MNRAS.412.1059Z}. } \label{fig:profile} \end{figure} In Fig.~\ref{fig:profile}a, we compare our simulation result with {the observations}. We find that our total dust mass (the sum of small and large grains) can account for the observed dust abundance at $r\gtrsim 30$\,kpc. We find that large grains dominate the total dust abundance in the entire circum-galactic region. The small-to-large grain mass ratio is $\sim$1/16 at $r=10$\,kpc and $\sim$1/10 at $r=1$\,Mpc. The dominance of large grains is also observed in the IGM as discussed in Section~\ref{subsec:Omega}. The slightly lower small-to-large grain abundance ratio at the small radius is due to the continuous supply of large grains by {coagulation}. We also plot the dust-to-metal surface density ratio, $\Sigma_{\rm dust}\slash \Sigma_{\rm metal}$, in Fig.~\ref{fig:profile}b. 
The ratio is high ($\simeq$\,0.6) in the central region ($r<20\,{\rm kpc}$), while it drops dramatically to $\sim$0.25 at $r\simeq 20$\,kpc, and slowly decreases at $r>20\,{\rm kpc}$ because dust grains are destroyed by sputtering in hot ($T_\mathrm{gas}>10^6$ K) gas. At $r\gtrsim 200\, {\rm kpc}$, it drops to $\Sigma_{\rm dust}\slash \Sigma_{\rm metal}\simeq$\,0.14. {A part of the dust still survives because not all the gas in the CGM is hot.} We compare the value of $\Sigma_{\rm dust}\slash \Sigma_{\rm metal}$ {obtained in our simulation with that derived} by \cite{2011MNRAS.412.1059Z}. {They} only calculated the spatial distribution of metals in the IGM and assumed a fixed dust-to-metal ratio {(i.e.\ they did not calculate dust evolution)} to obtain the dust distribution. They showed that $\Sigma_{\rm dust}\slash \Sigma_{\rm metal} \simeq 0.24$ explains the dust abundance at $r\sim 1h^{-1}$\,Mpc obtained by \cite{2010MNRAS.405.1025M}. Although they only normalized the amplitude of extinction at $1 h^{-1}$\,Mpc, their prediction agreed with the result of \cite{2010MNRAS.405.1025M} over a wide radius range of $0.01\,h^{-1}{\rm Mpc}\lesssim r \lesssim 8\,h^{-1}{\rm Mpc}$. {Interestingly, our model, which incorporates dust evolution in the simulation,} also predicts $\Sigma_{\rm dust}\slash \Sigma_{\rm metal} \sim 0.25$ at $0.02\,h^{-1}{\rm Mpc} \lesssim r \lesssim 0.15\,h^{-1}{\rm Mpc}$ without any tuning of the dust-to-metal ratio. \subsection{Extinction of circum-galactic and intergalactic dust}\label{subsec:ext} The extinction over a cosmic distance (hereafter referred to as the cosmic extinction) also puts a constraint on the dust properties in the cosmic volume.
We calculate the cosmic extinction up to redshift $z$, $A_{\lambda}(z)$, in units of magnitude as \citep[][]{2010MNRAS.405.1025M, 2018arXiv180400848H} \begin{eqnarray} A_{\lambda}(z) \!\!\!\!&=& \!\!\!\!2.5 (\log_{10}\mathrm{e})\notag\\ & \times & \!\!\!\!\displaystyle\int_{0}^{z} \left(\kappa^{\rm S}(\lambda^{\prime })\rho_{\rm dust}^{\rm S}(z^{\prime }) + \kappa^{\rm L}(\lambda^{\prime })\rho_{\rm dust}^{\rm L}(z^{\prime }) \right) \dfrac{c(1+z^{\prime })^{2}}{H(z^{\prime })}dz^{\prime }~,\notag \\ \lambda &=& \lambda^{\prime }~(1+z^{\prime })\,,\notag\\ H(z)&=&H_{0}\sqrt{\Omega_{\rm m}(1+z)^{3}+\Omega_{\Lambda}\,}\, , \label{eq.12} \end{eqnarray} where $\kappa^{\rm S}(\lambda )$ and $\kappa^\mathrm{L}(\lambda )$ are the dust mass extinction coefficients for small and large grains, respectively, as a function of wavelength, $\rho_\mathrm{dust}^\mathrm{S}(z)$ and $\rho_\mathrm{dust}^\mathrm{L}(z)$ are the comoving mass densities of small and large grains, respectively, as a function of redshift, $c$ is the speed of light, and $H(z)$ is the Hubble parameter at $z$. For the expression of $H(z)$, we assumed a flat Universe. We adopt the mass extinction coefficients from \citet{2017MNRAS.469..870H}, who assumed spherical and homogeneous dust grains with a mixture of silicate and graphite \citep[][]{1984ApJ...285...89D} and applied the Mie theory {\citep[][]{1983asls.book.....B} based on the same optical constants as in \citet[][]{2001ApJ...548..296W}.} In this paper, the mass fractions of silicate and carbonaceous dust are assumed to be 0.54 and 0.46, respectively \citep[][]{2009MNRAS.394.1061H}. Although these mass fractions are a reasonable starting point (since we do not have any knowledge of the grain composition in the IGM), they should vary depending on the evolutionary stage of a galaxy \citep[e.g.][]{2011A&A...525A..61P, 2015ApJ...810...39B}. The production rate of each element also changes as a function of galaxy age \citep[e.g.][]{2014MNRAS.438.2765C}.
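A numerical evaluation of equation (\ref{eq.12}) can be sketched as follows. This is a minimal midpoint-rule integration: the cosmological parameters, the $H_0$ value (in s$^{-1}$) and any test coefficients are placeholders, not the values adopted in this paper.

```python
import math

def cosmic_extinction_mag(z_max, kappa_S, kappa_L, rho_S, rho_L,
                          lam_obs=0.55, H0=2.2e-18, Om=0.3, OL=0.7, n=200):
    """Midpoint-rule evaluation of the cosmic extinction A_lambda(z).

    kappa_S/kappa_L: mass extinction coefficients [cm^2 g^-1] as
    functions of rest-frame wavelength [micron]; rho_S/rho_L:
    comoving dust mass densities [g cm^-3] as functions of redshift.
    H0 [s^-1] and the density parameters are illustrative defaults.
    """
    c = 2.998e10  # speed of light [cm s^-1]
    dz = z_max / n
    total = 0.0
    for i in range(n):
        zp = (i + 0.5) * dz              # midpoint of the i-th redshift bin
        lam_rest = lam_obs / (1.0 + zp)  # lambda' = lambda / (1 + z')
        H = H0 * math.sqrt(Om * (1.0 + zp) ** 3 + OL)  # flat Universe
        total += (kappa_S(lam_rest) * rho_S(zp)
                  + kappa_L(lam_rest) * rho_L(zp)) * c * (1.0 + zp) ** 2 / H * dz
    return 2.5 * math.log10(math.e) * total
```

For smooth integrands a few hundred bins suffice; the published curves would of course use the tabulated $\kappa(\lambda)$ of \citet{2017MNRAS.469..870H} and the simulated $\rho_\mathrm{dust}(z)$ rather than analytic stand-ins.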
However, our conclusions regarding the cosmic extinction are not affected by the detailed mixture of the two dust species within the uncertainties in the observational constraints. An important conclusion drawn later about a flat extinction curve shape also holds regardless of the mixture ratio. We consider the cosmic extinction at $z>0.3$, where some observational constraints are available. Because the comoving distance to $z=0.3$ is 1.23 Gpc, which is much larger than the comoving scale of the large-scale structure, $\sim 100h^{-1}$\,Mpc \citep[e.g.][]{2005ApJ...633..560E}, the use of the mean density without considering the spatial inhomogeneity is justified. Moreover, the observational constraints we adopt are derived from the analysis of a large number of objects in a large cosmic volume; thus, only averaged quantities are relevant for the analysis in this subsection. We adopt the $V$-band wavelength (0.55 $\micron$) in the observer's frame for $\lambda$, and show $A_V(z)$ in Fig.\ \ref{fig:extinction}. For the dust mass densities, $\rho_\mathrm{dust}^\mathrm{S}$ and $\rho_\mathrm{dust}^\mathrm{L}$, we examine two cases: in the first case, we adopt all the dust in the simulation box, while in the second case, we count only the dust in the IGM. The latter case is motivated by the fact that the observations of background QSOs used to derive the cosmic extinction are biased against lines of sight passing through galaxies, where the extinction is high. That is, lines of sight without galaxies are preferentially sampled in the measurement of the cosmic extinction \citep[][]{2005A&A...444..461V}. Thus, we expect that observational results should lie between those two cases. \begin{figure} \includegraphics[width=8.3cm]{fig8} \caption{Cosmic dust extinction in the $V$ band. The solid (green) and dashed (red) lines show the extinction by the IGM component and all dust grains in the simulation box, respectively.
The numbers `1', `2', `3' and `4' are observational constraints obtained by \citet{2009ApJ...696.1727M}, \citet{2009JCAP...06..012A}, \citet{2003JCAP...09..009M} and \citet{2008MNRAS.385.1053M}, respectively. The circle indicates the estimate based on the statistics of halo extinction by \citet{2010MNRAS.405.1025M}. } \label{fig:extinction} \end{figure} \begin{figure} \includegraphics[width=8.3cm]{fig9} \caption{Estimated reddening curves of Mg\,\textsc{ii} absorbers. The solid (blue) and dashed (green) lines represent the curves at $z=1$ and 2, respectively. The shaded areas associated with these lines show the range of the column densities assumed in the calculation. The filled circles and triangles with error bars are observational data taken from \citet{2012ApJ...754..116M}.} \label{fig:extinction8} \end{figure} Cosmic dust extinction has been investigated in various ways. \cite{2009ApJ...696.1727M} (marked by `1' in Fig.\ \ref{fig:extinction}) focused on the effect of dust extinction on the apparent relation between the luminosity distance and the angular diameter distance of distant galaxies and obtained an upper limit on the cosmic dust extinction at $z\sim 0.35$. \cite{2009JCAP...06..012A} (marked by `2' in Fig.\ \ref{fig:extinction}) put an upper limit on dust extinction by using the fact that dust grains decrease the apparent luminosity of SNe Ia and could affect the cosmological parameter estimates. \cite{2003JCAP...09..009M} (marked by `3' in Fig.\ \ref{fig:extinction}) attempted to find systematic reddening in the SDSS QSO sample. They did not find such a systematic reddening and set an upper limit of $A_{\rm V}<0.20$ at $z=1$. \cite{2008MNRAS.385.1053M} (marked by `4' in Fig.\ \ref{fig:extinction}) observed the reddening of a statistical sample of Mg\,\textsc{ii} absorbers. Since they only counted Mg\,\textsc{ii} absorbers, the estimated extinction is regarded as a lower limit.
\citet{2010MNRAS.405.1025M} estimated the cosmic extinction at $z\sim 0.3$ on the assumption that it is dominated by dust in galaxy halos. As mentioned above, we expect that the actually observed cosmic extinction lies between the cosmic extinction arising from all the dust in the cosmic volume and that contributed by the IGM dust only (shown by the dashed and solid lines in Fig.~\ref{fig:extinction}, respectively). In Fig.\ \ref{fig:extinction}, we indeed find that the observational constraints are broadly located between those two lines. This indicates that our dust abundance in the cosmic volume agrees with the observational constraints on the cosmic extinction. The wavelength dependence of extinction could provide useful information on the grain size \citep[e.g.][]{1977ApJ...217..425M}. Therefore, we further calculate the extinction (or reddening) curve, which can be compared with observations. Following \cite{2012ApJ...754..116M}, we assume that Mg \textsc{ii} absorbers trace the medium in galaxy halos. \cite{2015ApJ...813....7P} have shown that the reddening curves in galaxy halos are indeed similar to those of Mg \textsc{ii} absorbers. Because the column density is important for the reddening, Mg \textsc{ii} absorbers are advantageous in that their column densities are known.
The extinction at wavelength $\lambda$ is estimated by the following formula \citep[][]{2018arXiv180400848H}: \begin{eqnarray} A_{\lambda}=2.5(\log_{10}\mathrm{e})\mu m_{\rm H}N_{\rm H}^\mathrm{MgII} \left(\kappa^{\rm L}(\lambda)\mathcal{D}_{\rm L}^{\rm MgII} +\kappa^{\rm S}(\lambda)\mathcal{D}_{\rm S}^{\rm MgII}\right)\, , \label{eq:A_lambda} \end{eqnarray} where $\mu$ is the gas mass per hydrogen ($\mu =1.4$), $m_{\rm H}$ is the mass of a hydrogen atom, $N_{\rm H}^\mathrm{MgII}$ is the hydrogen column density of an Mg \textsc{ii} absorber, and $\mathcal{D}_{\rm L}^{\rm MgII}$ and $\mathcal{D}_{\rm S}^{\rm MgII}$ are the large and small grain abundances (dust-to-gas ratios) in Mg \textsc{ii} absorbers, respectively. We adopt $N_{\rm H}=10^{19.5}\,{\rm cm}^{-2}$, which is derived as a geometric mean for an Mg \textsc{ii} absorber sample by \cite{2009MNRAS.393..808M}. According to \cite{2009MNRAS.393..808M}, the dust-to-gas ratio is 60--80 per cent of the Milky Way value if we use $A_V /N_\mathrm{H}^\mathrm{MgII}$ as the indicator of dust-to-gas ratio. Assuming the typical dust-to-gas ratio of the Milky Way to be 0.01 (or slightly less) \citep[e.g.][]{1992ApJ...395..130P}, we adopt a total dust-to-gas ratio for Mg \textsc{ii} absorbers of $\mathcal{D}_\mathrm{tot}^\mathrm{MgII}=0.006$. We assume that the small-to-large grain abundance ratio in Mg \textsc{ii} absorbers is equal to that in the IGM: \begin{eqnarray} \mathcal{D}_{\rm L}^{\rm MgII}&=&\dfrac{\Omega^{\rm L}_{\rm dust,IGM}}{\Omega_{\rm dust,IGM}}\mathcal{D}_{\rm tot}^{\rm MgII},\\ \mathcal{D}_{\rm S}^{\rm MgII}&=&\dfrac{\Omega^{\rm S}_{\rm dust,IGM}}{\Omega_{\rm dust,IGM}}\mathcal{D}_{\rm tot}^{\rm MgII}, \end{eqnarray} where $\Omega^{\rm L}_{\rm dust,IGM}$, $\Omega^{\rm S}_{\rm dust,IGM}$, and $\Omega_{\rm dust,IGM}$ are introduced in Section \ref{subsec:Omega}.
Considering the uncertainties in the column density and the dust-to-gas ratio, we examine an order-of-magnitude range of $N_\mathrm{H}^\mathrm{MgII}=10^{19}$--$10^{20}$ cm$^{-2}$ (while we fix $\mathcal{D}_\mathrm{tot}^\mathrm{MgII}=0.006$ because of the degeneracy between $N_\mathrm{H}$ and $\mathcal{D}_\mathrm{tot}^\mathrm{MgII}$). For the reddening, we plot the difference from the extinction in the $i$ band, $A_\lambda -A_\mathrm{i}$ (note that only such a `reddening' is measurable observationally), in Fig.\ \ref{fig:extinction8}. Here, we show the wavelength in the observer's frame [thus, we apply $\lambda /(1+z)$ for $\lambda$ in equation (\ref{eq:A_lambda})]. The wavelength dependence of reddening is referred to as the reddening curve. We plot the reddening curves of Mg \textsc{ii} absorbers at $z=1$ and 2 in Fig.\ \ref{fig:extinction8}. Our model predicts reddening curves marginally consistent with the observational data at $z=1$ except at the shortest wavelength. There is a clear discrepancy between the reddening curve in our simulation and the observational data at $z=2$. The estimated extinction curve is very flat because the dust abundance in the IGM is dominated by large grains in our simulation, as shown in Section \ref{subsec:Omega}. Therefore, there is a tension between the simulation and the observation in terms of the grain size distribution at $z=2$. On the other hand, the abundance of small grains at $z=1$ is greater than that at $z=2$; accordingly, the estimated extinction curve at $z=1$ is relatively steep and able to explain a couple of the observational points. We further discuss the discrepancy between our results and the observational data in Section \ref{subsec:issues_IGM}.
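Equation (\ref{eq:A_lambda}) amounts to a simple per-absorber opacity estimate. A minimal Python sketch of it follows; the power-law extinction coefficient used in the example is a hypothetical stand-in for the tabulated curves of \citet{2017MNRAS.469..870H}, and the $i$-band wavelength is approximate.

```python
import numpy as np

MU = 1.4          # gas mass per hydrogen (mu in the text)
M_H = 1.6726e-24  # hydrogen atom mass [g]

def absorber_extinction(lam_obs_um, z, n_h, d_large, d_small, kappa_l, kappa_s):
    """A_lambda [mag] of one Mg II absorber, following equation (A_lambda).
    lam_obs_um is the observer-frame wavelength [micron]; the mass extinction
    coefficients kappa_* [cm^2 g^-1] are evaluated at the absorber rest-frame
    wavelength lam_obs_um / (1 + z). n_h is the hydrogen column [cm^-2] and
    d_* are the large/small grain dust-to-gas ratios."""
    lam_rest = lam_obs_um / (1.0 + z)
    optical_depth = MU * M_H * n_h * (kappa_l(lam_rest) * d_large
                                      + kappa_s(lam_rest) * d_small)
    return 2.5 * np.log10(np.e) * optical_depth

# The observable reddening relative to the i band (~0.75 micron) is then
# A(lam) - A(i): absorber_extinction(lam, z, ...) - absorber_extinction(0.75, z, ...)
```

For any extinction coefficient that decreases with wavelength, this construction automatically yields $A_\lambda > A_i$ at $\lambda < \lambda_i$, i.e. a reddening curve rising toward the blue.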
\section{Discussion} \subsection{Comparison with other theoretical studies on CGM/IGM dust}\label{subsec:trans_halo} The spatial distribution of dust in the CGM and IGM has been predicted by cosmological hydrodynamical simulations \citep[e.g.][]{2016MNRAS.457.3775M, 2017MNRAS.468.1505M} and semi-analytic models \citep[e.g.][]{2017MNRAS.471.3152P}. The radial profile of dust surface density in \cite{2017MNRAS.468.1505M} is steeper than the observational data of \cite{2010MNRAS.405.1025M} in the range $r<100~{\rm kpc}$ and flatter at $r>100~{\rm kpc}$. \cite{2017MNRAS.468.1505M} underproduced the dust abundances relative to those reported by \cite{2010MNRAS.405.1025M}. We predict a milder slope for the radial profile than that in \cite{2017MNRAS.468.1505M} except in the central region. As a consequence, our result broadly agrees with the profile observed by \cite{2010MNRAS.405.1025M}. \cite{2017MNRAS.471.3152P} estimated the dust abundance in hot halos as a function of halo mass, and predicted a halo dust mass comparable to that observed by \cite{2015ApJ...813....7P} for galaxies with $M_\ast\sim 10^{10}$ M$_\odot$. The spatial dust distribution in the CGM and IGM depends strongly on the feedback model. The distribution of dust grains around massive galaxies was also simulated by \cite{2011MNRAS.412.1059Z} based on their metallicity distribution under a fixed dust-to-metal ratio. They adopted a momentum-driven wind model, which was originally claimed by \cite{2005ApJ...621..227M} and \citet{2005ApJ...618..569M} to be a physically reasonable feedback model. \citet{2011MNRAS.412.1059Z} tuned the dust-to-metal ratio and varied the feedback strength.
In our work, we roughly reproduced the observed radial profile without fixing the dust-to-metal ratio: in the central region ($r \lesssim 20$\,kpc), dust growth by accretion is active and the dust-to-metal ratio is high ($\simeq 0.6$), whereas it drops to $\sim 0.3$ at $r\sim 20$\,kpc and gradually decreases to $\sim 0.14$ at $r\sim 1$\,Mpc. This decrease is due to the dust destruction in the hot gas. Therefore, our model suggests that there is a significant variation in the dust-to-metal ratio of the CGM/IGM. Importantly, there is a systematic decrease in the dust-to-metal ratio with distance from the galaxy centre. It would be interesting to reexamine \citet{2011MNRAS.412.1059Z}'s analysis by taking the variation in the dust-to-metal ratio into account. \subsection{Issues for circum-galactic and intergalactic extinctions}\label{subsec:issues_IGM} As mentioned in Section \ref{subsec:circum}, small grains are deficient in the IGM and CGM in our simulation. This leads to a flat reddening curve, which does not fit the actually observed curve for Mg \textsc{ii} absorbers. A possible reason for this discrepancy is that our simulation does not fully include the physical processes important for dust in the CGM and IGM. For example, we did not include radiation pressure, which could be important in transporting dust out of galaxies \citep[][]{1991ApJ...381..137F,2005MNRAS.358..379B}. However, there is no physical reason why radiation pressure should preferentially transport small grains; thus, radiation pressure will not make the reddening curve in the IGM and CGM steeper. Another possibility is that our simulation fails to treat the hydrodynamical effects (especially shocks and turbulence) in galaxy outflows because of its limited spatial resolution. Some authors argue that Mg \textsc{ii} absorbers originate from outflows driven by active star formation \citep[][]{1996ApJ...472...73N,2001ApJ...562..641B}.
In such a high-velocity environment, shocks or high-velocity turbulence could be induced. Both shocks \citep[][]{1996ApJ...469..740J} and turbulence \citep[][]{2004ApJ...616..895Y,2009MNRAS.394.1061H} can cause grain shattering. Since our simulation is not capable of resolving shocks and turbulence, this production path of small grains by shattering could not be treated. It would be interesting to investigate the possibility of shattering in outflows using higher resolution zoom-in simulations. The lack of small grains leads to a significantly flatter reddening curve than that observed by \cite{2012ApJ...754..116M}, as shown in Fig.\ \ref{fig:extinction8}. The assumption on the reddening or extinction curve is crucial in converting the observed reddening to the dust abundance. \cite{2010MNRAS.405.1025M} and \citet{2012ApJ...754..116M} adopted the SMC extinction curve to derive the dust abundance in galaxy halos. A steeper extinction curve requires a smaller amount of dust to explain a given amount of reddening. Therefore, if our calculation is correct and the extinction curve is flat, the CGM and IGM contain more dust than estimated by \citet{2010MNRAS.405.1025M} and \citet{2012ApJ...754..116M}. However, the observational analysis of \citet{2015ApJ...813....7P} indicates that galaxy halos at $z\sim 0.05$ have a steep extinction curve similar to the SMC curve. Therefore, more theoretical effort is required to investigate the possibilities of small grain production in the CGM and IGM. \subsection{Possible importance of AGN feedback} In order to concentrate on the issues related to stellar processes, we did not include AGN feedback in this paper. As we showed above, the stellar mass function and the dust mass function are both overestimated at the massive end (Figs.\ \ref{fig:SFRD} and \ref{fig:dustMF}).
Such an overestimate could be resolved by implementing AGN feedback as done in various models \citep[e.g.][]{2017NatAs...1E.165H,2015MNRAS.452..575S,2017MNRAS.465.3291W,2017arXiv171200023F}, some of which took radiative feedback from AGNs into account \citep[e.g.][]{2014MNRAS.444.1518V}. The importance of AGN feedback for gas heating has also been reported observationally \citep[e.g.][]{2012ARA&A..50..455F}. We expect that AGN feedback suppresses the formation of dense clouds, where dust grains grow efficiently. This suppression of dust growth, together with the effect of mass ejection, will decrease the number of dust-rich galaxies. \cite{2014MNRAS.444.1518V} claimed that the gas heated by AGNs could enhance dust destruction by sputtering. This affects the dust abundance in the CGM and IGM. The above effects overall suppress the dust abundance, especially in or around massive galaxies. However, the effects of AGN may not be so simple. \cite{2011A&A...525A..61P} already developed a model that can treat the evolution of dust in QSOs. In their model, QSOs are regarded as progenitors of elliptical galaxies. Therefore, the development of AGNs could be tightly related to the host galaxy properties \citep[see also][]{2011MNRAS.416.1916V}. \cite{2017arXiv170107200H} suggest that AGN feedback could create a cycle of gas cooling and heating, which would cause a cyclic increase and decrease of the dust mass with the period of the AGN feedback. \cite{2002ApJ...567L.107E} suggested positive feedback on the dust abundance by pointing out that dust could condense in AGN-driven winds. Therefore, clarifying the effects of AGN feedback on the dust abundance requires inclusion of the above negative and positive influences and an appropriate modelling of the host galaxies. \section{Conclusion} We investigate dust evolution using a cosmological hydrodynamical simulation, extending our previous single-galaxy simulations in A17 and \cite{2017MNRAS.469..870H}.
The grain size distribution is treated in a bimodal form of large and small grains, with a boundary grain radius of $\sim 0.03~\micron$. In our simulation, we take into account not only dust generation by SNe and AGB stars but also dust growth by accretion. We also include other interstellar processing mechanisms such as dust destruction by SN shocks, coagulation in the dense ISM, and shattering in the diffuse ISM. For dust destruction, coagulation, and accretion, we adopt the sub-grid models developed in A17. We first confirm that the relation between dust-to-gas ratio and metallicity for the simulated galaxies is consistent with the observed relation at $z=0$. In particular, the nonlinear increase of the dust-to-gas ratio as a function of metallicity at $Z\gtrsim 0.1$ Z$_\odot$ is well described by the transition from dust production dominated by stellar sources to that dominated by accretion (dust growth). The consistency with the observational data suggests that the implementation of dust abundance evolution is successful in our simulation. After confirming this, we put particular focus on the cosmological-volume properties of dust without going into a detailed analysis of individual galaxies, which will be reported in a separate paper (Hou et al., in preparation). We present the comoving dust mass density as a function of redshift. The comoving dust mass density in our simulation is roughly consistent with that derived from the analysis of observed infrared radiation at $z<2.4$. In our simulation, the peak of the comoving dust density lies at $z=1$--2, which coincides with the most dust-enshrouded epoch in the Universe derived from \textit{Herschel} observations \citep{2013A&A...554A..70B}. We also find that the dust abundance in the IGM is always dominated by large grains. The statistical properties of dust in galaxies are also investigated using the dust mass function. Our simulation reproduces the dust mass function at $z=2.4$ well.
At $z=0$, it broadly accounts for the observed slope of the dust mass function at $M_\mathrm{d}\sim 10^5$--$10^8$ M$_{\sun}$, but overestimates the massive end. The excess could be remedied by including AGN feedback in the simulation. We further investigate the dust properties in the IGM and CGM. For the CGM, we examine the radial profile of dust surface density around galaxies with $M_\ast =10^{10}$--10$^{11}$\,M$_\odot$ up to $r=1$\,Mpc, and find that we reproduce the observed radial profile at $r>20$\,kpc. This means that our stellar feedback model successfully transports the dust formed in galaxies to the circum-galactic region. However, we also find that the predicted dust abundance, dominated by large grains, is not consistent with the steep reddening curve derived for Mg \textsc{ii} absorbers at $z=1$ and 2 \citep{2012ApJ...754..116M}. This indicates that our simulation still fails to include a mechanism that supplies small grains to the CGM/IGM. We predict that the dust-to-metal ratio in the halo decreases with increasing $r$, since dust grains in the CGM/IGM are destroyed by the hot gas via sputtering. Using the dust properties in the simulation, we predict a cosmic extinction of $A_{V}\sim 10^{-2}$ at $z=0.3$ and $A_{V}\sim10^{-1}$ at $z=2.0$. Both values satisfy the observational constraints. \section*{Acknowledgment} We thank the anonymous referee for careful reading and useful comments. We are grateful to Volker Springel for providing us with the original version of the \textsc{gadget-3} code. Numerical computations were carried out on Cray XC30 at the Center for Computational Astrophysics, National Astronomical Observatory of Japan and XL at the Theoretical Institute for Advanced Research in Astrophysics (TIARA) in Academia Sinica. HH thanks the Ministry of Science and Technology for financial support through MOST 105-2112-M-001-027-MY3 and MOST 107-2923-M-001-003-MY3. This work was in part supported by JSPS KAKENHI Grant Number JP17H01111.
\bibliographystyle{mnras}
\section{Introduction} Let $C$ denote a smooth, irreducible, complex projective curve of genus $g \geq 3$. Let $U_C(2, d)$ be the moduli space of semistable, degree $d$, rank $2$ vector bundles on $C$ and let $U^s_C(2,d)$ be the open dense subset of stable bundles (more precisely, when $d$ is odd one has $U_C(2, d)=U^s_C(2,d)$). Let $B^k_{2,d} \subseteq U_C(2, d)$ be the {\em Brill-Noether locus} consisting of vector bundles $\mathcal F$ having $h^0(\mathcal F)\ge k$, for a positive integer $k$. Traditionally, one denotes by $W^k_d$ the Brill-Noether locus $B^{k+1}_{1,d}$ of line bundles $L\in \mbox{Pic}^d(C)$ having $h^0(L)\ge k+1$, for a non-negative integer $k$. In what follows, we sometimes identify line bundles with the corresponding divisor classes, interchangeably using multiplicative and additive notation. For the case of rank $2$ vector bundles, we simply put $B^k_d:=B^k_{2,d}$, for which it is well-known that the {\em expected dimension} of $B_d^k \cap U^s_C(2,d)$ is $\rho_d^{k}:=4g-3-ik$, where $i:=k+2g-2-d$ (cf.\;\cite{Sun}). Recall that, as customary, an irreducible component of $B_d^k$ is said to be {\em regular} if it is reduced with the expected dimension, and {\em superabundant} otherwise. In the range $0 \le d \le 2g-2$, $B^1_d$ has been deeply studied on any curve $C$ by several authors (cf.\;e.g.\;\cite{Sun,L}). Concerning $B^2_d$, using a degeneration argument, N. Sundaram \cite{Sun} proved that $B^2_d$ is non-empty for any $C$ and for odd $d$ such that $g\le d\le 2g-3$. M. Teixidor i Bigas generalized Sundaram's result as follows: \begin{theorem}[\cite{Teixidor1}]\label{Teixidor} Given a non-singular curve $C$ of genus $g$ and an integer $d$, where \linebreak $3\le d\le 2g-1$, $B^2_d \cap U^s_C(2,d)$ has a component $\mathcal B$ of (expected) dimension $\rho^2_d=2d-3$ and a general point on it corresponds to a vector bundle whose space of sections has dimension $2$. If $C$ is general (i.e.
$C$ is a curve {\em with general moduli}), this is the only component of $B^2_d\cap U^s_C(2,d)$. Moreover, $B^2_d\cap U^s_C(2,d)$ has extra components if and only if $W^1_n$ is non-empty, with $\dim W^1_n\ge d+2n-2g-1$, for some integer $n$ such that $2n<d$. \end{theorem} \noindent \begin{remark}\label{rem:TeixRes} {\normalfont The previous result is sharp concerning non-emptiness of $B^2_d \cap U^s_C(2,d)$; indeed, on any smooth curve $C$ of genus $g \geq 3$ one has $B^2_d \cap U^s_C(2,d) = \emptyset$ for $d = 0,\; 1,\; 2$ (cf. \cite{Teixidor1}). Moreover, Theorem \ref{Teixidor} has a {\em residual version}, giving information also on the isomorphic Brill-Noether locus \linebreak $B^{d-2g+4}_{4g-4-d}\cap U^s_C(2,4g-4-d)$. Indeed, for any non--negative integer $i$, if one sets $k_i := d-2g+2+i$ and $$B_d^{k_i} : = \{\mathcal F \in U_C(2,d)\;|\: h^0(\mathcal F) \ge k_i\} = \{\mathcal F \in U_C(2,d)\;|\: h^1(\mathcal F) \ge i\},$$one has natural isomorphisms $B_d^{k_i} \simeq B_{4g-4-d}^{i}$, which arise from the natural correspondence \linebreak $\mathcal F \to \omega_C \otimes \mathcal F^*$, from Serre duality and from semistability. Under this natural {\em residual correspondence} one has: } \end{remark} \begin{theorem}[Residual Version of Theorem \ref{Teixidor}]\label{TeixidorRes} Given a non-singular curve $C$ of genus $g$ and an integer $d$, where $2g-3\le d\le 4g-7$, let $k_2:= d-2g+4$. Then, $B^{k_2}_d \cap U^s_C(2,d)$ has a component $\mathcal B$ of (expected) dimension $\rho^{k_2}_d=8g-2d-11$ and a general point on it corresponds to a vector bundle whose space of sections has dimension $k_2$. If $C$ is general, this is the only component of $B^{k_2}_d \cap U^s_C(2,d)$. Moreover, $B^{k_2}_d \cap U^s_C(2,d)$ has extra components if and only if $W^1_n$ is non-empty with $\dim W^1_n\ge 2g + 2n - d - 5$, for some integer $n$ such that $2n<4g-4-d$.
\end{theorem} Inspired by Theorem \ref{TeixidorRes}, in \cite{CFK} we focused on $B^{k_2}_d$ as above, for $C$ a {\em general $\nu$-gonal curve} of genus $g$, i.e. $C$ corresponding to a general point of the {\em $\nu$-gonal stratum} $\mathcal M^1_{g,\nu} \subset \mathcal M_g$. Observe that in this case, as a consequence of Theorem \ref{TeixidorRes}, $B^{k_2}_d \cap U^s_C(2,d)$ is empty for $d = 4g-4,\; 4g-5, \;4g-6$ and it consists only of the irreducible component $\mathcal B$ as in Theorem \ref{TeixidorRes}, for any $4g-4-2\nu \leq d \leq 4g-7$ (cf.\;Remark\;\ref{rem:important} below). Concerning the remaining values of $d$, the aim of this paper is twofold. The first is to substantially improve \cite[Theorem\;3.1]{CFK}, where we proved the following result: \begin{theorem}\label{thm3.1}(cf.\;\cite[Theorem\;3.1]{CFK}) Let $$3\le \nu \le \frac{g+8}{4} \;\; {\rm and} \;\;3g-1\le d\le 4g-6-2 \nu$$be integers. Then, the reduced components of $B^{k_2}_d \cap U^s_C(2,d)$ are exactly two, which we denote by $B_{\rm reg}$ and $B_{\rm sup}$: \begin{enumerate} \item[(i)] The component $B_{\rm reg}$ is {\em regular}, i.e. generically smooth and of dimension $\rho^{k_2}_d=8g-2d-11$. A general element $\mathcal F$ of $B_{\rm reg}$ fits in an exact sequence $$ 0\to \omega_C(-D) \to \mathcal F\to \omega_C(-p)\to 0,$$where $p\in C$ and $D\in C^{(4g-5-d)}$ are general. Specifically, $s_1(\mathcal F) \ge 1$ (resp., $2$) if $d$ is odd (resp., even). Moreover, $\omega_C(-p)$ is of minimal degree among special quotient line bundles of $\mathcal F$ and $\mathcal F$ is very ample for $\nu \ge 4$; \item[(ii)] The component $B_{\rm sup}$ is generically smooth, of dimension $6g-d-2 \nu -6 > \rho^{k_2}_d$, i.e. $B_{\rm sup}$ is {\em superabundant}.
A general element $\mathcal F$ of $B_{\rm sup}$ is very ample and fits in an exact sequence $$0\to N\to \mathcal F\to \omega_C\otimes A^{\vee} \to 0,$$where $A \in \pic^{\nu}(C)$ is such that $|A| = g^1_{\nu}$ on $C$ and where $N\in\pic^{d-2g+2+\nu}(C)$ is general. Moreover, $s_1(\mathcal F)=4g-4 - d - 2\nu$ and $\omega_C \otimes A^{\vee}$ is of minimal degree among quotient line bundles of $\mathcal F$. \end{enumerate} \end{theorem} In the present paper, under the conditions \begin{eqnarray}\label{eq:ourbounds} \nu \geq 3 \;\; {\rm and} \;\;3g-5\le d\le 4g-5-2 \nu, \end{eqnarray} we first prove the following: \begin{theorem}\label{thm:main2} For $\nu$ and $d$ as in \eqref{eq:ourbounds}, the irreducible components of $B^{k_2}_d \cap U^s_C (2,d)$ are exactly two, which we denote by $B_{\rm reg}$ and $B_{\rm sup}$. \begin{enumerate} \item[(i)] The component $B_{\rm reg}$ is {\em regular} and {\em uniruled}. A general element $\mathcal F$ of $B_{\rm reg}$ fits in an exact sequence \begin{equation}\label{exactB0} 0\to \omega_C(-D) \to \mathcal F\to \omega_C(-p) \to 0, \end{equation}where $p\in C$ and $D\in C^{(4g-5-d)}$ are general. Moreover, $\omega_C(-p)$ is of minimal degree among special quotient line bundles of $\mathcal F$. \item[(ii)] If $3g-3 \leq d \leq 4g-5-2\nu$, the component $B_{\rm sup}$ is generically smooth, of dimension $6g-d-2 \nu -6 > \rho^{k_2}_d$, i.e. $B_{\rm sup}$ is {\em superabundant}, and {\em ruled}. \noindent If otherwise $d = 3g-5,\;3g-4$, the component $B_{\rm sup}$ is of dimension $6g-d-2 \nu -6 \geq \rho^{k_2}_d$, where equality holds only for $\nu=\frac{g}{2}$ and $d=3g-5$; the component $B_{\rm sup}$ is {\em ruled} and {\em superabundant}, being non--reduced.
In any case, a general element $\mathcal F$ of $B_{\rm sup}$ fits in an exact sequence \begin{equation}\label{exactB1} 0\to N\to \mathcal F\to \omega_C \otimes A^{\vee} \to 0, \end{equation} where $A \in \pic^{\nu}(C)$ is such that $|A| = g^1_{\nu}$ on $C$ and where $N\in\pic^{d-2g+2+\nu}(C)$ is general. Moreover, $s(\mathcal F)=4g-4 - d - 2\nu$ and $\omega_C \otimes A^{\vee}$ is of minimal degree among quotient line bundles of $\mathcal F$. \end{enumerate} \end{theorem} \begin{remark}{\normalfont (i) Notice that Theorem \ref{thm:main2} substantially improves \cite[Theorem 3.1]{CFK} (i.e. Theorem \ref{thm3.1} reported above) from several points of view. First of all, Theorem \ref{thm:main2} holds for any $\nu \geq 3$ and for any $3g-5 \leq d \leq 4g-5-2\nu$, whereas \cite[Theorem 3.1]{CFK} was proved under the assumptions $3 \leq \nu \leq \frac{g+8}{4} \;\; {\rm and} \;\; 3g-1 \leq d \leq 4g-6-2\nu$ (cf. \cite[formula\;(3.1)]{CFK}). Moreover, no reducedness assumption appears in the statement of Theorem \ref{thm:main2}, as occurred in \cite[Theorem 3.1]{CFK}. In fact, for $d = 3g-5,\; 3g-4$, in this paper we discover non--reduced components of $B_d^{k_2} \cap U^s_C(2,d)$. Finally, Theorem \ref{thm:main2} contains important information on the {\em birational structure} of these irreducible components (i.e. ruledness, uniruledness, etc.). \vskip 3pt \noindent (ii) Theorem \ref{thm:main2} also exhibits the reducibility of $B^{k_2}_d\cap U_C^s(2,d)$ postulated by Theorem \ref{TeixidorRes} for a general $\nu$-gonal curve, in the interval $3g-5\le d\le 4g-5-2 \nu$. Indeed, in such cases, one can take $n = \nu$ and $W^1_{\nu} = \{A\}$. \vskip 3pt \noindent (iii) Theorem \ref{thm:main2} moreover shows that Teixidor i Bigas's component $\mathcal B$ in Theorem \ref{TeixidorRes} coincides with our component $B_{\rm reg}$ (cf. Remark \ref{rem:min} below).
\vskip 3pt \noindent (iv) The strategies used in the proof of Theorem \ref{thm:main2} can also be applied to the remaining values of $d$, namely $2g-3 \leq d \leq 3g-6$. One can find non--emptiness results in these cases as well; nevertheless, the description of the (birational) geometry of $B^{k_2}_d$ and of a general point of any of its irreducible components is not as precise as in the statement of Theorem \ref{thm:main2}. \vskip 3pt \noindent (v) Other consequences of Theorem \ref{thm:main2} are also discussed (cf.\;Corollary\;\ref{fixeddeterminant} below). } \end{remark} The second aim of the paper concerns Hilbert schemes of surface scrolls. Precisely, after proving some sufficient very--ampleness conditions for bundles arising from Theorem \ref{thm:main2} (cf.\;Theorem \ref{thm:veryampleness} below), we study suitable components of the Hilbert scheme $\mathcal H_{d,g,k_2-1}$ parametrizing smooth surface scrolls $S$ of degree $d$, sectional genus $g$ and speciality $2$, which are linearly normal in $\mathbb{P}^{k_2-1}$. The range of $d$ in which we study Hilbert schemes is taken from Theorem \ref{thm:veryampleness} and from \eqref{eq:ourbounds}. Notice indeed that the inequality $d\leq 4g-5-2\nu$ in \eqref{eq:ourbounds} yields $d\leq 4g-11$ for $\nu=3$, and hence the study of Hilbert schemes of surface scrolls will be considered in this maximal range $d\leq 4g-11$. Precisely, we show: \begin{theorem}\label{thm:Hilb} Consider the Hilbert scheme $\mathcal H_{d,g,k_2-1}$ as above. Then: \vskip 2pt \noindent (i) for $3g-1\leq d \leq 4g-11$, $\mathcal H_{d,g,k_2-1}$ contains an irreducible component, denoted by $\mathcal H_{\rm reg}$, whose general point corresponds to a smooth scroll $S$ as above, arising from a stable bundle as in \eqref{exactB0} where $C$ is a curve with general moduli (i.e. $\mathcal H_{\rm reg}$ dominates $\mathcal M_g$).
Moreover, $\mathcal H_{\rm reg}$ is generically smooth, of (expected) dimension $$\dim\; \mathcal H_{\rm reg} =7g-7 + k_2(k_2-2) = 7g-7 + (d-2g+4)(d-2g+2),$$i.e. it is a {\em regular} component of $\mathcal H_{d,g,k_2-1}$. Scrolls arising from general (very ample) bundles in $B_{\rm reg}$ as in Theorem \ref{thm:main2} (i) fill up a closed subset $\mathcal Y' \subsetneq \mathcal H_{\rm reg}$, where $\mathcal Y'$ dominates $\mathcal M^1_{g,\nu}$; \vskip 2pt \noindent (ii) for $3g-2\leq d\leq 4g-11$ (resp., $d=3g-3$), $\mathcal H_{d,g,k_2-1}$ carries distinct irreducible components $\mathcal H_{\rm sup, \nu}$, for any $\nu$ with $3\leq \nu \leq [\frac{4g-5-d}{2}]$ (resp., $4\leq \nu \leq [\frac{4g-5-d}{2}]$). $\mathcal H_{\rm sup, \nu}$ is generically smooth of dimension $$\dim\; \mathcal H_{\rm sup, \nu} =8g-d-12 + k_2^2 = 8g-d - 12 + (d - 2g + 4)^2,$$ which is higher than the expected one, so it is a {\em superabundant} component of $\mathcal H_{d,g,k_2-1}$, unless $d=3g-3$. In the case $d=3g-3$, $\mathcal H_{\rm sup, \nu}$ is a regular component for every possible $\nu$. For any such $d$, $\mathcal H_{\rm sup, \nu}$ dominates $\mathcal M^1_{g,\nu}$ and its general point corresponds to a smooth scroll $S$ as above, arising from a general $\mathcal F \in B_{\rm sup}$ as in Theorem \ref{thm:main2}. \end{theorem} \noindent The rest of the paper will be concerned with the proof of the aforementioned results. In what follows, we may sometimes abuse notation and identify divisor classes with the corresponding line bundles, interchangeably using additive and multiplicative notation when this does not create ambiguity. For standard terminology, we refer the reader to \cite{H}. \smallskip \noindent {\bf Acknowledgements}. The authors thank KIAS and the Dipartimento di Matematica, Universit\`a di Roma ``Tor Vergata'', for the warm atmosphere and hospitality during the collaboration and the preparation of this article.
\color{black} The authors are indebted to the referee for the careful reading of the first version of the paper and for valuable comments and suggestions which have certainly improved the readability of the paper.\color{black} \section{Preliminaries}\label{S:Pre} In what follows, $C$ will always denote a smooth, irreducible, projective curve of genus $g \geq 3$. We recall some standard notation and results frequently used below. Given a rank $2$ vector bundle $\mathcal F$ on $C$, the \textit{Segre invariant} $s(\mathcal F) \in \mathbb{Z}$ of $\mathcal F$ is defined by$$s(\mathcal F) = \min_{N \subset \mathcal F} \left\{ \deg\; \mathcal F - 2\; \deg \; N \right\},$$ where $N$ runs through all the sub-line bundles of $\mathcal F$. One has $s(\mathcal F) = s(\mathcal F \otimes L)$, for any line bundle $L$, and $s(\mathcal F) = s (\mathcal F^*)$, where $\mathcal F^*$ denotes the dual bundle of $\mathcal F$. A sub-line bundle $ N \subset \mathcal F$ is called a \textit{maximal sub-line bundle} of $\mathcal F$ if $\deg \; N$ is maximal among all sub-line bundles of $\mathcal F$; in such a case $\mathcal F/N$ is a \textit{minimal quotient line bundle} of $\mathcal F$, i.e. is of minimal degree among quotient line bundles of $\mathcal F$. In particular, $\mathcal F$ is {\em semistable} (resp. {\em stable}) if and only if $s(\mathcal F) \ge 0$ (resp. $s(\mathcal F) > 0$). Let $\delta$ be a positive integer. Consider $L\in \pic^\delta(C)$ and $N\in\pic^{d-\delta}(C)$. The extension space $\ext^1(L,N)$ parametrizes isomorphism classes of extensions and any vector $u\in\ext^1(L,N)$ gives rise to a degree $d$, rank $2$ vector bundle $\mathcal F_u$, fitting in an exact sequence \begin{equation}\label{degree} (u):\;\; 0 \to N \to \mathcal F_u \to L \to 0. \end{equation} \noindent In order for $\mathcal F_u$ as above to be semistable, a necessary condition is \begin{equation}\label{eq:neccond} 2\delta-d \ge s(\mathcal F_u)\ge 0. 
\end{equation} In such a case, the Riemann-Roch theorem gives \begin{equation}\label{eq:m} \dim \; \ext^1(L,N)= \begin{cases} \ 2\delta-d+g-1\ &\text{ if } L\ncong N \\ \ g\ &\text{ if } L\cong N. \end{cases} \end{equation} Since we will deal with {\em special} rank 2 vector bundles $\mathcal F_u$, i.e. those with $h^1(\mathcal F_u) >0$, any such $\mathcal F_u$ always admits a special quotient line bundle. Recall the following: \begin{theorem}\label{CF}(\cite[Lemma\;4.1]{CF}) Let $\mathcal F$ be a semistable, special, rank $2$ vector bundle on $C$ of degree $d\ge 2g-2$. Then there exists a special, effective line bundle $L$ on $C$, of degree $\delta \leq d$, $N \in {\rm Pic}^{d-\delta}(C)$ and $ u \in \ext^1(L,N)$ such that $\mathcal F = \mathcal F_u$ as in \eqref{degree}. \end{theorem} Take $\mathcal F_u$ as in \eqref{degree}. When $(u)$ does not split, it defines a point $[(u)] \in \mathbb P (\ext^1(L,N)) \cong \mathbb P(H^0(K_C+L-N)^*) := \mathbb{P}$. When the natural map $\varphi:=\varphi_{|K_C+L-N|}: C\to\mathbb P$ is a morphism, set $X:=\varphi(C)\subset \mathbb P$. For any positive integer $h$ denote by $\Sec_h(X)$ the $h^{\rm th}$ {\em secant variety} of $X$, defined as the closure of the union of all linear subspaces $\langle \varphi(D)\rangle\subset\mathbb P$, for general effective divisors $D$ of degree $h$ on $C$. One has $\dim\;\Sec_h(X)\ =\min\,\{\dim\:\mathbb P,\;2h-1\}$. Recall: \begin{theorem} (\cite[Proposition 1.1]{LN})\label{LN} Let $2\delta-d\ge 2$; then $\varphi$ is a morphism and, for any integer $$s \equiv 2\delta-d\ \text{ (mod\ 2) } \; \; \; \mbox{such that} \; \; \; 4+ d-2\delta\le s\le 2\delta-d,$$one has $$ s (\mathcal F_u)\ge s \Leftrightarrow [(u)]\notin \Sec_{\frac{1}{2}(2\delta-d+s-2)}(X). $$ \end{theorem} \section{Proof of Theorem \ref{thm:main2}}\label{S:proof} This section will be devoted to the proof of Theorem \ref{thm:main2}, which will be done in several steps (cf.\;\S\S\;\ref{ss:extension},\;\ref{ss:regular},\;\ref{ss:superabundant} below).
\begin{remark}\label{rem:important} {\normalfont Notice first that, when $C$ is a general $\nu$-gonal curve, then $B^{k_2}_d \cap U^s_C(2,d)$: \vskip 2pt \noindent $(a)$ is empty, for $d = 4g-4,\; 4g-5, \;4g-6$ (cf. Remark \ref{rem:TeixRes}); \vskip 2pt \noindent $(b)$ consists only of the component $\mathcal B$ (of expected dimension $\rho_d^{k_2} = 8g-2d-11$) in Theorem \ref{TeixidorRes}, for any $4g-4-2\nu \leq d \leq 4g-7$. Indeed, conditions in Theorem \ref{TeixidorRes} guaranteeing reducibility are: $$2n < 4g-4-d, \;\; W^1_n \neq \emptyset \;\; {\rm and} \; \; \dim\;W^1_n \geq 2g + 2n - d - 5.$$One must have $2\nu \leq 2 n$, since $C$ has no $g^1_n$ for $n < \nu$ (cf. \cite{AC}). Therefore $2 \nu \leq 2 n < 4g-4-d$ forces $d \leq 4g-5-2\nu$, which explains why $B_d^{k_2} \cap U^s_C(2,d)$ must be irreducible for $d \geq 4g-4-2\nu$. } \end{remark} \noindent The previous remark motivates why we focus on $d$ as in \eqref{eq:ourbounds} in our Theorem \ref{thm:main2}. Before proving it, observe first \color{black} that a direct consequence of it is the following.\color{black} \begin{corollary}\label{fixeddeterminant} With assumptions as in Theorem \ref{thm:main2}, let $M \in {\rm Pic}^d(C)$ be general. Then the Brill-Noether locus $B_M^{k_2}(C)$, parametrizing semistable rank 2 vector bundles of given determinant $M$, with at least $k_2 = d-2g+4$ independent global sections, is not empty, even if its expected dimension $$\rho_M^{k_2} := 3g-3 - 2k_2= 3g-3 - 2(d-2g+4)$$is negative for $d > \frac{7g-11}{2}$. \end{corollary} \begin{proof} Take $\mathcal F \in B_{\rm sup}$ general, as in Theorem \ref{thm:main2} (ii). From \eqref{exactB1}, one has $$\det(\mathcal F) \cong K_C-A+N,$$which is general in ${\rm Pic}^d(C)$, since $N \in {\rm Pic}^{d-2g+2+\nu}(C)$ is general by assumption in Theorem \ref{thm:main2} (ii). Therefore, the determinantal map $$B_{\rm sup} \stackrel{det}{\longrightarrow} {\rm Pic}^d(C)$$is dominant.
Thus $B_M^{k_2}(C)$ contains $B_{\rm sup} \cap U_C(2,M)$, where $U_C(2,M)$ is the moduli space of semistable vector bundles of determinant $M$ (i.e. the fiber in $U_C(2,d)$ over $M$ via the map $det$). Notice that $\dim\;B_{\rm sup} \cap U_C(2,M) = 5g-6-2\nu-d > 0$, since $M= K_C - A + N \cong M':= K_C - A + N'$ if and only if $N \cong N'$ (see Remark \ref{rem:fixeddeterminant} for the classification of $B_{\rm sup} \cap U_C(2,M)$). \end{proof} \begin{remark} \label{rem:fixeddeterminant} {\normalfont (i) From the construction of $B_{\rm sup}$ conducted in \S\;\ref{ss:superabundant}, it will be clear that \linebreak $B_{\rm sup} \cap U_C(2,M)$ is birational to $\mathbb{P}(\ext^1(K_C-A, N))$, where $M \cong K_C-A+N \in {\rm Pic}^d(C)$ is general. \noindent (ii) Differently from $B_{\rm sup}$, the component $B_{\rm reg}$ cannot dominate ${\rm Pic}^d(C)$ for $d >\frac{7g-11}{2}$; indeed, in such a case $$\dim\;B_{\rm reg} = 8g-2d-11 < g = \dim \; {\rm Pic}^d(C).$$ Finally, by Theorem \ref{TeixidorRes}, if $C$ is with general moduli and $M \in {\rm Pic}^d(C)$ is general, with $d > \frac{7g-11}{2}$, then $B_M^{k_2}(C) = \emptyset$. In view of the {\em residual correspondence} as in Remark \ref{rem:TeixRes}(ii), this also implies that, for $d < \frac{g+3}{2}$, one has $B_M^2(C) = \emptyset$ for $M \in {\rm Pic}^d(C)$ and $C$ general. This extends to even degrees $d$ (and via completely different methods) what was found in \cite[Example\;6.2]{LNS}. } \end{remark} \subsection{Components via extensions}\label{ss:extension} To prove Theorem \ref{thm:main2} we make use of Theorem \ref{CF}, from which we know that, for any possible irreducible component of $B_d^{k_2}$, its general bundle $\mathcal F$ arises as an extension \eqref{degree}, with $h^1(L)>0$. The following preliminary result in particular restricts the possibilities for $h^1(L)$.
\begin{lemma}\label{speciality3} There is no irreducible component of $B_d^{k_2}$ whose general member $\mathcal F$ is of speciality $i := h^1(\mathcal F) \geq 3$. \end{lemma} \begin{proof} If $\mathcal F \in B_d^{k_2}$ is such that $h^1(\mathcal F) = i \ge 3$, then by the Riemann-Roch theorem $h^0(\mathcal F) = d - 2g + 2 + i = k_2 +(i-2) = k_i > k_2$. Thus $\mathcal F \in {\rm Sing} (B_d^{k_2})$ (cf. \cite[p.\;189]{ACGH}). Therefore the statement follows from \cite[Lemme 2.6]{L}, from which one deduces that no component of $B_d^{k_2}$ can be entirely contained in, or coincide with, a component of $B_d^{k_i}$, for any $i \geq 3$ (the proof is identical to that in \cite[pp.\;101-102]{L} for $B^0_d$, $1 \leq d \leq 2g-2$, which uses elementary transformations of vector bundles). \end{proof} From the previous lemma, a general element $\mathcal F$ of any possible component of $B_d^{k_2} $ is therefore presented via an extension \eqref{degree} for which either $$ h^1(L) =1 \ \mbox{ or } \ h^1(L) =2.$$The proof of Theorem \ref{thm:main2} splits into the two subsections \S\;\ref{ss:regular} and \S\;\ref{ss:superabundant} below, dealing respectively with the cases $h^1(L) =1$ and $h^1(L) =2$. We will show in particular that the case $h^1(L) =1$ (resp., $h^1(L) =2$) produces only the component $B_{\rm reg}$ (resp., $B_{\rm sup}$) as in Theorem \ref{thm:main2}. \subsection{The regular component $B_{\rm reg}$} \label{ss:regular} In this subsection we prove that there exists only one component of $B_d^{k_2} \color{black} \cap U_C^s(2,d) \color{black}$ whose general bundle $\mathcal F$ is presented via an extension \eqref{degree} with $h^1(L) =1$, and that this component is exactly $B_{\rm reg}$ as in Theorem \ref{thm:main2} (i).
To do this recall first that, for any exact sequence $(u)$ as in \eqref{degree}, if one denotes by $$\partial_u : H^0(L) \to H^1(N)$$ the corresponding coboundary map, then, for any integer $t>0$, the locus \begin{equation}\label{W1} \mathcal W_t:=\{u\in\ext^1(L,N)\ |\ {\rm corank} (\partial_u)\ge t\}\subseteq \ext^1(L,N), \end{equation}has a natural structure of determinantal scheme (cf. \cite[\S\,5.2]{CF}). Observe further that, by \eqref{eq:ourbounds}, the Brill-Noether locus $W^0_{4g-5-d}$ on\;$C$ is non-empty and irreducible, and $h^0(D) =1$ for general $D \in W^0_{4g-5-d}$, \color{black} where $\deg\;D = 4g-5-d \leq g$.\color{black} \begin{lemma}\label{lem:i=1.2} Let $D \in W^0_{4g-5-d}$ and $p\in C$ be general and let $\mathcal W_1 \subseteq \ext^1(K_C-p,K_C-D)$ be as in \eqref{W1}. Then, $\mathcal W_1$ is a sub-vector space of $\ext^1(K_C-p,K_C-D)$ of dimension $4g-6-d$. Moreover, for $u\in \mathcal W_1$ general, the corresponding rank $2$ vector bundle $\mathcal F_u$ is stable, with: \begin{enumerate} \item[(a)] $h^1(\mathcal F_u)=2$; \item[(b)] $s(\mathcal F_u) \ge 1$ (resp., $2$) if $d$ is odd (resp., even). \end{enumerate} \end{lemma} \begin{proof} This is a simplified and more general version of \cite[Proof of Lemma\;3.7]{CFK}. First we prove the assertions on $\mathcal W_1$; from the assumptions we have: \begin{equation}\label{degree0} \begin{CD} &(u)& : 0\to &K_C-D&\to &\ \ \mathcal F\ \ & \to \ &\ \ K_C-p\ \ &\to 0\\ &\deg& &d-2g+3&& d&& 2g-3&\\ &h^0& &d-3g+5&& && g-1&\\ &h^1& &1&& && 1&. \end{CD} \end{equation} Notice that $\mathcal W_1 \neq \emptyset$, as $(K_C-p) \oplus (K_C-D) \in \mathcal W_1$, and that $u \in \mathcal W_1$ if and only if $\partial_u = 0$, since $h^1(K_C-D) = 1$.
Recalling that $\partial_u$ is induced by the cup-product $$\cup: H^0(K_C-p) \otimes H^1(p-D) \to H^1(K_C-D),$$we have the natural induced morphism \[ \begin{array}{ccc} H^1(p-D) & \stackrel{\Phi}{\longrightarrow} & {\rm Hom} \left(H^0(K_C-p),\; H^1(K_C-D)\right) \\ u & \longrightarrow & \partial_u \end{array} \]which shows that {\small $${\mathcal W}_1 = \ker \left(H^1(p-D) \stackrel{\Phi}{\to} H^0(K_C-p)^{\vee} \otimes H^1(K_C-D)\right) \cong \left(\coker \left(H^0(K_C+D-p) \stackrel{\Phi^{\vee}}{\leftarrow} H^0(K_C-p) \otimes H^0(D)\right)\right)^{\vee}.$$ }Therefore $\mathcal W_1 = \left(\im \; \Phi^{\vee}\right)^{\perp}$ is a sub-vector space of $\ext^1(K_C-p,K_C-D)$. \color{black} Since $h^0(D) =1$, the morphism $\Phi^{\vee}$ is injective, hence $\mathcal W_1$ is of codimension $(g-1)$. \color{black} From \eqref{degree0} and the definition of $\mathcal W_1$, any $u \in \mathcal W_1$ gives $h^1(\mathcal F_u) =2$, which in particular proves (a). To show that, for $u \in \mathcal W_1$ general, the bundle $\mathcal F_u$ also satisfies (b), we follow a strategy similar to that in the proof of \cite[Lemma\;3.7]{CFK}. Precisely, we consider the linear subspace $$\widehat{\mathcal W}_1 := \mathbb{P} (\mathcal W_1) \subset\mathbb P:= \mathbb{P}\left(\ext^1(K_C-p, K_C-D)\right)$$which has dimension $4g-7-d$. Consider the natural morphism $ C\stackrel{\varphi}{\longrightarrow} \mathbb P$, given by the complete linear system $|K_C+D-p|$, and set $X := \varphi\left(C\right)$. Setting $\delta := 2g-3$ and considering \eqref{eq:ourbounds}, one has $2\delta - d \geq 2 \nu -1 \geq 5$; thus one can apply Theorem \ref{LN}. Therefore, taking any integer $s$ such that $ s\equiv 2\delta-d\ {\rm{ (mod \;2) \ \ and }} $ $ 0 \le s\le 2\delta - d$ one has $$\dim \Sec_{\frac{1}{2}(2\delta-d+s-2)}(X) =2\delta - d + s - 3 = 4g - 9 - d + s \le 4g-7-d = \dim \widehat{\mathcal W}_1$$if and only if $s\le 2$, where the equality holds if and only if $s=2$.
Thus, for $d$ odd, $s(\mathcal F_u) \geq 1$ for $u \in \mathcal W_1$ general by Theorem \ref{LN}. For $d$ even (in which case $s$ must necessarily be taken equal to $2$ and $\dim \widehat{\mathcal W}_1 = \dim \Sec_{\frac{1}{2}(4g-6-d)}(X) = 4g-7-d$) one has $\widehat{\mathcal W}_1 \neq \Sec_{\frac{1}{2}(4g-6-d)}(X)$, since $\widehat{\mathcal W}_1$ is a linear space whereas $\Sec_{\frac{1}{2}(4g-6-d)}(X)$ is non-degenerate, as $X \subset \mathbb P$ is non-degenerate; thus, by Theorem \ref{LN}, in this case for general $u \in \widehat{\mathcal W}_1$ one has $s(\mathcal F_u ) \geq 2$. In any case $\mathcal F_u$ general is stable and satisfies (b). \end{proof} To construct the locus $B_{\rm reg}$ and to show that it is actually a component of $B_d^{k_2}$ as in Theorem \ref{thm:main2} (i), notice first that as in \cite[Sect.\;3.2]{CFK} one has a natural projective bundle $\mathbb P(\mathcal E_{d})\to S$, where \linebreak $S \subseteq W^0_{4g-5-d}\times C$ is a suitable open dense subset; namely, $\mathbb P(\mathcal E_{d})$ is the family of \linebreak $\mathbb P(\ext^1(K_C-p,K_C-D))$'s as $(D, p) \in S$ varies. Since, for any such $(D, p) \in S$, $\widehat{\mathcal W}_1$ is a linear space of (constant) dimension $4g-7-d$, one has an irreducible projective variety $$\widehat{\mathcal W}_1^{Tot}:= \left\{ (D,p,u) \in \mathbb P(\mathcal E_{d}) \; | \; H^0(K_C-p) \stackrel{\partial_u = 0}{\longrightarrow} H^1(K_C-D)\right\},$$which is ruled over $S$, of dimension $$\dim \widehat{\mathcal W}_1^{Tot} = \dim S + 4g - 7 - d = (4g - d - 4) + (4g - 7 - d) = 8 g - 2d - 11 = \rho_d^{k_2},$$where $\dim S = \dim W^0_{4g-5-d} + 1 = (4g-5-d) + 1 = 4g-d-4$. From Lemma \ref{lem:i=1.2}, one has the natural (rational) map \[ \begin{array}{ccc} \widehat{\mathcal W}_1^{Tot}& \stackrel{\pi}{\dashrightarrow} & U^s_C(2,d) \\ (D,p, u) & \longrightarrow &\mathcal F_u \end{array} \] and $\im(\pi) \subseteq B^{k_2}_d \cap U_C^s(2,d)$.
\begin{proposition}\label{thm:i=1.3} The closure $B_{\rm reg}$ of $\im(\pi)$ in $U_C(2,d)$ is a generically smooth component of $B^{k_2}_d \color{black} \cap U^s_C(2,d)$ \color{black} with (expected) dimension $\rho_d^{k_2} = 8g-11-2d$, i.e. $B_{\rm reg}$ is {\em regular}. Moreover, $B_{\rm reg}$ is {\em uniruled}, being finitely dominated by $\widehat{\mathcal W}_1^{Tot}$. The general point of $B_{\rm reg}$ arises as in Lemma \ref{lem:i=1.2}. \end{proposition} \begin{proof} The proof of the first sentence is identical to that in \cite[Proposition\;3.9]{CFK}. The fact that $B_{\rm reg}$ is uniruled follows from the ruledness of $\widehat{\mathcal W}_1^{Tot}$ and the \color{black}generic \color{black} finiteness of the map $\pi$ (as it is proved in \cite[Lemma\;6.2]{CF}, cf. also \cite[Proposition\;3.9]{CFK}). \end{proof} Next, we show the uniqueness of $B_{\rm reg} $ among possible components of $B_d^{k_2} \color{black} \cap U_C^s(2,d) \color{black}$, whose general bundle $\mathcal F$ is presented via an extension \eqref{degree} with $h^1(L) =1$. To do this, we will make use of the following: \begin{theorem}(\cite[Theorem 5.8 and Corollary 5.9]{CF})\label{CF5.8} Let $C$ be a smooth, irreducible, projective curve of genus $g\ge 3$, $L \in \pic^{\delta}(C)$ and $N \in \pic^{d-\delta}(C)$. Set $$l := h^0(L), \;\; r:= h^1(N), \;\; m:=\dim\;\ext^1(L,N).$$Assume that $$ r\ge1,\ l\ge\max\{1,r-1\},\ m \ge l+1.$$Then: \begin{enumerate} \item[(i)] $\mathcal W_1$ as in \eqref{W1} is irreducible of dimension $m- (l-r+1)$; \item[(ii)] if $l \geq r$, then $\mathcal W_1 \subsetneq \ext^1(L,N)$. Moreover for general $u \in \ext^1(L,N)$, $\partial_u$ is surjective whereas for general $w \in \mathcal W_1$, ${\rm corank} (\partial_w)=1$. \end{enumerate} \end{theorem} \begin{proposition}\label{lem:i=1.1} Let $\mathcal B$ be any component of $B^{k_2}_d $, with $\dim \mathcal B \geq \rho_d^{k_2}$. 
Assume that a general element $\mathcal F$ in $\mathcal B$ fits in \eqref{degree}, with $h^1(\mathcal F)=2$ and $h^1(L)=1$. Then, $\mathcal B$ coincides with the component $B_{\rm reg}$ as in Proposition\;\ref{thm:i=1.3}. \end{proposition} \begin{proof} The strategy of the proof is similar to that of \cite[Prop.\;3.13]{CFK}; the main difference lies in the different bounds \eqref{eq:ourbounds}. Let $\delta := \deg \;L$; then, $\frac{3g-5}{2} \leq \delta\le 2g-2$, as it follows from the facts that $L$ is special and $\mathcal F$ is semistable with $d \geq 3g-5$ from \eqref{eq:ourbounds}. Hence, \color{black} using \eqref{eq:ourbounds}, \color{black} one has \begin{equation}\label{degn} g -3 \le \deg \; N=d-\delta\le \frac{d}{2} \le 2g - \frac{5}{2} - \nu. \end{equation} By \eqref{degree}, since $h^1(\mathcal F) =2$ and $h^1(L)=1$, the line bundle $N$ is special with $r: = h^1(N) \geq 1$ and the corresponding coboundary map $\partial_u$ has to be of corank one. Set $l:=h^0(L)$; by $h^1(L) =1$ one has $l=\delta-g+2$. First we want to show that $l \geq r$; observe indeed that $3d \geq 9g - 15 \geq 8g-10$, where the first inequality follows from \eqref{eq:ourbounds} whereas the latter from $g \geq 2 \nu \geq 6$, again by \eqref{eq:ourbounds}. Therefore$$3d \geq 8g -10 \; \Rightarrow \; d \geq 8g - 2d - 10 \; \Rightarrow \frac{d}{2} \geq 4g - d - 5.$$By semistability of $\mathcal F$, the last inequality in particular implies $\delta \geq \frac{d}{2} \geq 4g - d - 5$, which is equivalent to $l= \delta - g + 2 \geq \frac{2g-1-d+\delta}{2} \geq r$, the last inequality following from $r-1 \leq \frac{\deg(K_C-N)-1}{2}$ by Clifford's theorem, as $C$ is non--hyperelliptic. Now set $m:=\dim \; {\rm Ext}^1(L,N)$; we want to prove that $m\ge l+1$. From \eqref{eq:m}, one has \linebreak $m\geq g+2\delta-d -1$ so it suffices to show that $g+2\delta-d -1 \geq \delta - g + 3$.
This is equivalent to $d-\delta \leq 2g-4$, which certainly holds since $\deg\;N = d-\delta \leq \frac{d}{2} \leq 2g - \frac{5}{2} - \nu$, as it follows from semistability and from \eqref{eq:ourbounds}. Summing up, since $l \geq r$ and $m \geq l +1$, we are in a position to apply Theorem \ref{CF5.8}, from which we get that $$\emptyset \neq \mathcal W_1=\{u\in\ext^1(L,N)\ |\ {\rm corank} (\partial_u)\ge1\} \subsetneq {\rm Ext}^1(L,N)$$ is irreducible and $ \dim {\mathcal W}_1 = m-l+r-1 = m - \delta +g +r - 3$. Using the same strategy as above (cf. also the proof of \cite[Prop.\;3.13]{CFK}), for a suitable open dense subset $S \subseteq W^{r-1}_{2g-2 +\delta -d}\times C^{(2g-2-\delta)}$, one can construct a projective bundle $\mathbb P(\mathcal E_d)\to S$ and an irreducible subvariety $\widehat{\mathcal W}^{Tot}_1 \subsetneq \mathbb P(\mathcal E_d)$, fitting in the diagram: \[\begin{array}{ccc} \widehat{\mathcal W}^{Tot}_1 & \stackrel{\pi}{\dashrightarrow} & \mathcal B \subset B_{d}^{k_2}\\ \downarrow & & \\ S & & \end{array} \]whose general fiber over $S$ is $\widehat{\mathcal W}_1:= \mathbb P(\mathcal W_1)$, which is the projectivization of the affine irreducible variety $\mathcal W_1 \subsetneq {\rm Ext}^1(L,N)$, and such that the component $\mathcal B$ has to be the image of $ \widehat{\mathcal W}^{Tot}_1$ via a dominant rational map $\pi$ as above (cf.\;\cite[Sect.\;6]{CF} for details). From the given parametric construction of $\mathcal B$, one must have $$ \dim \mathcal B \leq \dim W^{r-1}_{2g-2-d+\delta}+2g-2-\delta+\dim \widehat{\mathcal W}_1.$$ Observe that, from \eqref{degn}, one has $\deg \; K_C-N \le g+1$. To conclude the proof for $\deg \; K_C-N \le g-1$ one can refer to \cite[proof of Prop.\;3.13]{CFK}. Assume therefore $\deg \; K_C-N= g+a$ where $a\in\{0,1\}$; thus $\deg\; N = d-\delta = g-2-a$.
If $r\ge a+2$, then we have $h^0(N) = r-a-1\ge 1$, hence $N\in W^{r-a-2}_{g-2-a} \subsetneq \text{Pic}^{g-2-a}(C)$; otherwise, if $r= a+1$, by $\deg\; N = g-2-a$ one deduces that $N\in \text{Pic}^{g-2-a}(C)$ is general. Hence we get \begin{eqnarray*} \dim\mathcal B \leq \begin{cases} \ \dim \text{Pic}^{g-2-a}(C)+(2g-2-\delta)+ m - \delta +g +r - 4\ &\text{ if } r= a+1 \\ \ \dim W^{r-a-2}_{g-2-a}+(2g-2-\delta)+ m - \delta +g +r - 4\ &\text{ if } r\ge a+2. \end{cases} \end{eqnarray*} Consider the second case $r\ge a+2$; since $r\ge 2$, $N$ cannot be isomorphic to $L$, which, from \eqref{eq:m}, implies $m=2\delta -d+g-1$. Hence from above we have \begin{eqnarray*} \dim\mathcal B & \leq & \ \dim W^{r-a-2}_{g-2-a}+(2g-2-\delta)+ m - \delta +g +r - 4\ \\ & \leq & (g-2-a)-2(r-a-2)-1 + (2g-2-\delta)+ (2\delta -d+g-1) - \delta +g +r - 4 \\ & = & 5g-6+a-r-d = 6g-2d+\delta-8-r, \end{eqnarray*}where the second inequality follows from Martens' theorem \cite[Theorem\;(5.1)]{ACGH} applied to $N$ whereas the last equality comes from $g=d-\delta+2+a$. This gives $ \rho^{k_2}_d =8g - 2d - 11 \leq \dim \mathcal B \le 6g-2d+\delta-8-r$, which cannot occur: since $\delta \leq 2g-2$ and $r \geq 2$, one has $6g-2d+\delta-8-r \leq 8g-2d-12 < \rho^{k_2}_d$. Assume now $r= a+1$. If $L \cong N$, then $m=g$ by \eqref{eq:m}, so \begin{eqnarray*} \dim \; \mathcal B \leq g+(2g-2-\delta)+ m - \delta +g +r - 4 = 5g - 2 \delta - 5 + a = 6g-d-\delta-7 , \end{eqnarray*}where the last equality follows from $g= d - \delta + 2 + a$. Therefore, from $\rho_d^{k_2} \leq \dim \;\mathcal B \leq 6g-d-\delta-7$ one would have $\deg \; N = d - \delta \geq 2g-4$, contradicting \eqref{degn}. If otherwise $L\ncong N$, then \begin{eqnarray*} \dim\mathcal B \leq g+(2g-2-\delta)+ m - \delta +g +r - 4=6g-2d+\delta-8, \end{eqnarray*}where the last equality follows from \eqref{eq:m} and $g= d - \delta + 2 + a$.
As above, from $\rho_d^{k_2} \leq \dim \;\mathcal B \leq 6g-2d +\delta-8$, one gets $\delta \geq 2g-3$, which implies that either $L\cong K_C$ or $L\cong K_C(-p)$, for some $p\in C$. Then one concludes as in the last part of the proof of \cite[Prop.\;3.13]{CFK}. \end{proof} \begin{remark}\label{rem:min} {\normalfont The proof of Proposition \ref{lem:i=1.1} shows that $K_C-p$ is minimal among special quotient line bundles for $\mathcal F$ general in $B_{\rm reg}$, completely proving Theorem \ref{thm:main2} (i). Note moreover that \eqref{exactB0} implies that $\mathcal F$ general in $B_{\rm reg}$ also admits a {\em presentation} via a canonical quotient, i.e. it fits in $0 \to K_C-D-p \to \mathcal F \to K_C\to 0 $, whose residual presentation coincides with that in the proof of \cite[Theorem]{Teixidor1}. In other words, the component $B_{\rm reg}$ coincides with the component $\mathcal B$ in \cite[Theorem]{Teixidor1}; this is the only component when $C$ is with general moduli. } \end{remark} \subsection{The superabundant component $B_{\rm sup}$}\label{ss:superabundant} To finish the proof of Theorem \ref{thm:main2}, it remains to study possible components $\mathcal B$ for which $\mathcal F\in \mathcal B $ general is such that $h^1(\mathcal F) =h^1(L)=2$, with $\mathcal F$ fitting in a suitable exact sequence as in \eqref{degree}. To do this, we first need the following: \begin{lemma}\label{lem:i=2.2} Let $\mathcal F$ be a rank 2 vector bundle arising as a general extension in ${\rm Ext}^1(K_C-A, N)$, where $N$ is any line bundle in ${\rm Pic}^{d-2g+2 + \nu}(C)$, with $d$ and $\nu$ as in \eqref{eq:ourbounds}. Then: \begin{enumerate} \item[(a)] $\mathcal F$ is stable with $s(\mathcal F)= 4g-4-2\nu-d$, i.e. $K_C-A$ is a minimal quotient of $\mathcal F$; \item[(b)] If moreover $N$ is non-special, then $h^1(\mathcal F) = h^1(K_C-A) = 2$.
\end{enumerate} \end{lemma} \begin{proof} (b) is a trivial consequence of the exact sequence $0 \to N \to \mathcal F \to K_C-A \to 0$ and the assumption on $N$; in particular, for any $u \in {\rm Ext}^1(K_C-A, N)$, one has $h^1(\mathcal F_u) = 2$. To prove (a), in order to ease notation, we set $L:= K_C-A$ and $\delta:=\deg\;L = \deg \; K_C -A = 2g-2-\nu$. \noindent $\bullet$ For $3g-5 \leq d \leq 4g-6 - 2 \nu$, one can reason as in the proof of \cite[Theorem 3.1]{CFK}. Indeed, the upper bound on $d$ implies $2\delta-d = 2 (2g-2-\nu) - d \ge 2$, so one can apply Theorem \ref{LN} with $s = 2\delta - d$ and $ C\stackrel{|K_C+L-N|}{\longrightarrow} X\subset\mathbb P:=\mathbb P(\ext^1(L,N))$. With these choices, one has $$\dim\left(\Sec_{\frac{1}{2}(2(2\delta-d)-2)}(X)\right)=2(2\delta-d)-3<2 \delta-d+g-2= \dim\;\mathbb P,$$where the last equality follows from \eqref{eq:m} and the fact that $L = K_C-A \ncong N$, as $\deg (L-N) = 2 \delta - d \geq 2$, whereas the strict inequality in the middle follows from \eqref{eq:ourbounds}, as $2\delta-d=4g-4-2 \nu -d \leq g +1 - 2 \nu \leq g-5$. Thus, $\mathcal F= \mathcal F_u$ arising from $u \in {\rm Ext}^1(K_C-A, N)$ general is of degree $d$ and stable, since $s(\mathcal F_u) = 2 \delta - d = 4g-4 - 2\nu -d \geq 2$; the equality $s(\mathcal F_u) = 2 \delta - d = 4g-4 - 2\nu -d$ follows from Theorem \ref{LN} and the fact that $u \in {\rm Ext}^1(K_C-A, N)$. \vskip 2pt \noindent $\bullet$ For $d = 4g-5-2\nu$, Theorem \ref{LN} does not apply, as in this case one has $2 \delta - d =1$. On the other hand, since $d$ is odd, proving stability of $\mathcal F=\mathcal F_u$ general as above is equivalent to showing that $\mathcal F_u$ is not unstable. Assume, by contradiction, that there exists a sub-line bundle $M \hookrightarrow \mathcal F_u$ such that $\deg\; M \geq 2 g - 2 - \nu > \frac{d}{2}$.
We would get therefore the following commutative diagram: \[ \xymatrix@C-2mm@R-2mm{ & & 0 \ar[d] \\ & & M \ar[d] \ar[dr]^{\varphi} \\ 0 \ar[r] & N \ar[r] & \mathcal F_u \ar[r] & K_C-A \ar[r] & 0. } \]Since $\deg \; N = 2g - 3 - \nu$, $\varphi$ is not the zero-map. On the other hand, $\varphi$ can be neither strictly injective (for degree reasons) nor an isomorphism (otherwise $\mathcal F_u \cong N \oplus (K_C-A)$, contradicting the generality of $u \in {\rm Ext}^1(K_C-A, N)$). \end{proof} Now we can prove that $B_{\rm sup}$ as in Theorem \ref{thm:main2} (ii) is a component of $B^{k_2}_d$. The definition of the locus $B_{\rm sup} \subset B_d^{k_2} \cap U_C^s(2,d)$ follows from Lemma \ref{lem:i=2.2} and the construction in \cite[\S\;3.1]{CFK}, which still works under condition \eqref{eq:ourbounds}; precisely, using the diagram after \cite[Lemma\;3.3]{CFK}, one can consider a vector bundle $\mathcal E_{d,\nu}$ on a suitable open, dense subset $S \subseteq \pic^{d-2g+2+\nu}(C)$, whose rank is $ \dim{\ext^1(K_C-A,N)}= 5g-5- 2 \nu -d $ as in \eqref{eq:m}, since $K_C-A \ncong N$ (cf.\;\cite[pp.\;166-167]{ACGH}). Taking the associated projective bundle $\mathbb P(\mathcal E_{d,\nu})\to S$ (consisting of the family of $\mathbb P\left(\ext^1(K_C-A,N)\right)$'s as $N$ varies in $S$) one has$$ \dim \mathbb P(\mathcal E_{d,\nu}) =g+ (5g-5- 2 \nu -d) -1=6g-6-2 \nu -d. $$From Lemma \ref{lem:i=2.2}, one has a natural (rational) map \begin{eqnarray*} &\mathbb P(\mathcal E_{d,\nu})\stackrel{\pi_{d,\nu}}{\dashrightarrow} &U_C(2,d) \\ &(N, u)\to &\mathcal F_u; \end{eqnarray*} which gives $ \im (\pi_{d,\nu})\subseteq B^{k_2}_d \cap U^s_C(2,d)$. Once we show that $\pi_{d,\nu}$ is birational onto its image, we will get that the closure $B_{\rm sup}$ of $\im (\pi_{d,\nu})$ in $U_C(2,d)$ is {\em ruled}, being birational to $ \mathbb P(\mathcal E_{d,\nu})$ which is ruled over $\pic^{d-2g+2+\nu}(C)$, and such that $\dim \; B_{\rm sup} = 6g - 6 - 2\nu - d$. 
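As a consistency check (an observation added here for orientation, not needed in the sequel), one can compare this dimension with the Brill-Noether number: $$\dim \; \mathbb P(\mathcal E_{d,\nu}) - \rho_d^{k_2} = (6g-6-2\nu-d) - (8g-2d-11) = d-2g+5-2\nu \geq g-2\nu \geq 0,$$where the first inequality uses $d \geq 3g-5$ and the second uses $g \geq 2\nu$, both from \eqref{eq:ourbounds}; moreover, the excess vanishes only in the extremal case $(\nu,d) = (\frac{g}{2}, 3g-5)$ (cf. Remark \ref{rem:sup_reg}).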
\begin{claim}\label{cl:birat} $\pi_{d,\nu}$ is birational onto its image. \end{claim} \begin{proof}[Proof of Claim \ref{cl:birat}] Let $\Gamma \subset F := \mathbb{P}(\mathcal F_u)$ be the section of the ruled surface $F$ corresponding to the quotient $\mathcal F_u \twoheadrightarrow K_C-A$. $\Gamma$ is the only section of degree $2g-2-\nu$ and speciality $2$ of $F$, since $K_C-A$ is the only line bundle with these properties on $C$. Moreover $\Gamma$ is also linearly isolated. This guarantees that $\pi_{d,\nu}$ is birational onto its image (for more details see the proof of \cite[Lemma\;6.2]{CF}). \end{proof} Now we can show the following: \begin{theorem}\label{lem:i=2.1} Under assumptions \eqref{eq:ourbounds}, $B_{\rm sup}$ is an irreducible component of $B^{k_2}_d \color{black} \cap U^s_C(2,d)$ \color{black} which is {\em superabundant}. Moreover, it is: \begin{enumerate} \item[(i)] generically smooth, if $d\geq 3g-3$, \item[(ii)] non-reduced, if $d=3g-4, \ 3g-5$. \end{enumerate} \end{theorem} \begin{proof} The result will follow once we prove that, for general $\mathcal F \in B_{\rm sup}$, \begin{equation} \label{tan_sup} \dim T_{\mathcal F}(B^{k_2}_d)=\begin{cases}\dim B_{\rm sup} &\mbox{ if } d\geq 3g-3\\ \dim B_{\rm sup} +3 g-3 -d &\mbox{ if } d= 3g-4, \ 3g-5\end{cases} \end{equation}and moreover, for $d = 3g-4,\;3g-5$, $B_{\rm sup}$ is actually a component of $B^{k_2}_d$. Concerning tangent space computations, one can consider the Petri map of a general $\mathcal F\in B_{\rm sup}$: $$ \mu_\mathcal F: H^0(\mathcal F)\otimes H^0(\omega_C\otimes \mathcal F^*)\to H^0(\omega_C\otimes \mathcal F\otimes \mathcal F^*). $$By construction of $B_{\rm sup}$ as a birational image of $\mathbb P(\mathcal E_{d,\nu})$, $\mathcal F$ general fits in an exact sequence as \eqref{exactB1}, with $N \in {\rm Pic}^{d-2g+2 + \nu}(C)$ general; by \eqref{eq:ourbounds} one therefore has $h^1(N)=0$.
Thus, we have $$H^0(\mathcal F)\simeq H^0(N)\oplus H^0(K_C-A) \;\;\; {\rm and} \;\;\; H^0(\omega_C\otimes \mathcal F^*)\simeq H^0(A).$$In particular, $\mu_\mathcal F$ reads as \begin{equation*} \begin{CD} \left(H^0(N)\oplus H^0(K_C-A)\right)&\;\otimes\;& \;H^0(A)&\;\; \stackrel {\mu_\mathcal F }{\longrightarrow}\;\; &H^0(\omega_C\otimes \mathcal F\otimes \mathcal F^*).\\ \end{CD} \end{equation*} Consider the following natural multiplication maps: \begin{eqnarray} \mu_{A,N}:& H^0(A)\otimes H^0(N)\to H^0(N+A)\label{muA}\\ \mu_{0,A}: & H^0(A)\otimes H^0(K_C-A)\to H^0(K_C)\label{mu0}. \end{eqnarray} \begin{claim}\label{cl:ker} $\ker(\mu_\mathcal F)\simeq \ker(\mu_{0,A})\oplus \ker(\mu_{A,N}) $. \end{claim} \begin{proof}[Proof of Claim \ref{cl:ker}] The proof is a simplified and extended version of \cite[Proof of Claim 3.5]{CFK}. Since $h^1(N) = h^1(N+A) = 0$, one has the following commutative diagram \begin{equation*}\label{eq1b} {\small \begin{array}{ccl} 0&&0\\[1ex] \downarrow &&\downarrow\\[1ex] \color{black} H^0(A) \otimes \color{black} H^0(N) & \stackrel{\mu_{A,N}}{\longrightarrow} & H^0(N+A) \\[1ex] \downarrow && \downarrow \\[1ex] \color{black} H^0(A) \otimes \color{black} H^0(\mathcal F) & \stackrel{\mu_\mathcal F}{\longrightarrow} & H^0(\mathcal F \otimes A) \subset H^0 (\omega_C \otimes \mathcal F\otimes \mathcal F^*) \\[1ex] \downarrow &&\downarrow \\[1ex] H^0(A)\otimes H^0(K_C-A) & \stackrel{\mu_{0,A}}{\longrightarrow} & H^0(K_C) \\[1ex] \downarrow &&\downarrow\\[1ex] 0&&0 \end{array} } \end{equation*}where the column on the left comes from the $H^0$-cohomology sequence of \eqref{exactB1} tensored by $H^0(A)$, whereas the column on the right comes from \eqref{exactB1} tensored by $A$ and then \color{black} taking \color{black} cohomology. The previous diagram proves the statement. 
\end{proof} \noindent By Claim \ref{cl:ker}, one has \begin{eqnarray*} \dim T_{\mathcal F}(B^{k_2}_d)&=&4g-3-h^0(\mathcal F)h^1(\mathcal F)+\dim\;\ker\;\mu_\mathcal F \\ &=&4g-3-2(d-2g+4)+\dim\;\ker\;\mu_{0,A}+\dim\;\ker\;\mu_{A,N}. \end{eqnarray*} From \eqref{muA} and \eqref{mu0}, we have \begin{equation}\label{eq:kers} \ker(\mu_{0,A})\simeq H^0(K_C-2A)\cong H^1(2A)^*\;\; {\rm and} \;\; \ker(\mu_{A,N})\simeq H^0(N-A), \end{equation}as it follows from the base-point-free pencil trick. From \cite[Theorem\;(2.6)]{ACGH} and \cite[p. 869,\;Theorem\;(12.16)]{ACG}, one has \begin{equation}\label{eq:h12A} h^1(2A)=g + 2 -2 \nu. \end{equation}As for $\ker(\mu_{A,N})$, the generality of $N$ implies that $N-A$ is general of its degree, which is $\deg (N-A) = \deg N - \nu =d-2g+2$. Therefore it follows that $$\begin{cases} h^1 (N-A) =0, \mbox{ equivalently, } h^0 (N-A) =d-3g+3, &\mbox{ for } d\geq 3g-3\\ h^0 (N-A) = 0, &\mbox{ for } d= 3g-4, \ 3g-5. \end{cases}$$Thus we have $$ \dim T_{\mathcal F}(B^{k_2}_d) =\begin{cases} 4g-3-2(d-2g+4)+(g + 2 -2 \nu) + (d-3g+3) &\mbox{ if } d\geq 3g-3\\ 4g-3-2(d-2g+4)+(g + 2 -2 \nu) &\mbox{ if } d=3g-4, \ d=3g-5, \end{cases}$$ which gives \eqref{tan_sup} since $\dim B_{\rm sup} = 6g-6-2 \nu -d.$ The fact that $B_{\rm sup}$ is actually a (non--reduced) component of $B_d^{k_2}$, for $d = 3g-4,\;3g-5$, will be a direct consequence of the previous computations and the next more general result. \end{proof} In the next lemma we will prove that $B_{\rm sup}$ is the only component of $B_{d}^{k_2}$ for which the general bundle $\mathcal F$ is such that $h^1(\mathcal F) = h^1(L) =2$, for suitable $L$ as in \eqref{degree}. In particular, this will also imply that, for $d = 3g-4,\;3g-5$, $B_{\rm sup}$ cannot be strictly contained in a component of $B_{d}^{k_2}$, finishing the proof of Theorem \ref{lem:i=2.1}.
\begin{lemma}\label{lemsup} Assume that $\mathcal B$ is any irreducible component of $B^{k_2}_d$ such that a general $\mathcal F \in \mathcal B$ fits in an exact sequence like \eqref{degree}, with $h^1(\mathcal F)=h^1(L)=2$. Then $\mathcal B = B_{\rm sup}$. \end{lemma} \begin{proof} Since $\mathcal F$ is semistable, from \eqref{eq:neccond} and \eqref{eq:ourbounds} one has $\deg \; L\;\ge \frac{3g-5}{2}$. Moreover, since $C$ is a general $\nu$-gonal curve and $h^1(L)=2$, from \cite[Theorem\;2.6]{AC} we have $K_C - L \cong A +B_b$, where $B_b \in C^{(b)}$ is a base locus of degree $b \geq 0$. By assumption, $\mathcal F$ corresponds to a suitable $v \in {\rm Ext}^1(K_C-A-B_b, N_b)$, for some $N_b \in {\rm Pic}^{d-2g+2+\nu+b}(C)$. Moreover, again by assumption, $L = K_C-A-B_b$ is such that $h^1(L) = h^1(K_C-A-B_b)= h^1(\mathcal F) = 2$; therefore, by taking cohomology in $0 \to N_b \to \mathcal F \to K_C-A-B_b \to 0$, irrespective of the speciality of $N_b$, the corresponding coboundary map $H^0(K_C-A-B_b) \stackrel{\partial_v}{\longrightarrow} H^1(N_b)$ has to be surjective. From semicontinuity on the (affine) space ${\rm Ext}^1(K_C-A-B_b, N_b)$ and the fact that semistability is an open condition, it follows that for a general $u \in {\rm Ext}^1(K_C-A-B_b, N_b)$ the coboundary map $\partial_u$ is surjective too and $\mathcal F_u$ is semistable of speciality $2$. Since $\mathcal B$ is by assumption a component of $B^{k_2}_d$ and since a general $u$ specializes to $v \in {\rm Ext}^1(K_C-A-B_b, N_b)$, one has that the general $\mathcal F \in \mathcal B$ has to come from a general $u \in {\rm Ext}^1(K_C-A-B_b, N_b)$, for some $B_b \in C^{(b)}$ and some $N_b \in {\rm Pic}^{d-2g+2+\nu+b}(C)$.
On the other hand, a general extension as $$(*)\;\;\; 0 \to N_b \to \mathcal F_u \to K_C-A-B_b \to 0$$ is a flat specialization of a general extension of the form $$(**)\;\;\; 0 \to N \to \mathcal F \to K_C-A \to 0,$$where $N \cong N_b -B_b$; indeed, extensions $(**)$ are parametrized by $\ext^1(K_C-A, N) \cong H^1(N+A -K_C)$ whereas extensions $(*)$ are parametrized by $\ext^1(K_C-A-B_b,N+B_b) \cong H^1(N + 2B_b + A - K_C)$, and the existence of such a flat specialization is guaranteed by the surjectivity $$H^1(N+A -K_C) \twoheadrightarrow H^1(N + 2 B_b + A - K_C),$$which follows from the exact sequence $0 \to \mathcal O_C \to \mathcal O_C(2 B_b) \to \mathcal O_{2B_b} \to 0$ tensored by $N+A -K_C$ (cf. \cite[pp.\;101-102]{L} for the use of {\em elementary transformations} of vector bundles to interpret the above surjectivity). From Lemma \ref{lem:i=2.2} (a), semicontinuity and the construction of $B_{\rm sup}$, a general extension $(**)$ gives rise to a point of $B_{\rm sup}$; by specialization of a general $(**)$ to a general $(*)$, one can conclude that $\mathcal B \subseteq B_{\rm sup}$, i.e. $\mathcal B = B_{\rm sup}$. \end{proof} \begin{remark}\label{rem:sup_reg} {\normalfont Notice that \eqref{eq:ourbounds} implies $\nu \leq \frac{g}{2}$; more precisely, when $\nu =\frac{g}{2}$, one can easily compute that the only admissible value for $d$ in \eqref{eq:ourbounds} is $d=3g-5$. In such a case, i.e. $(\nu, d) = (\frac{g}{2}, 3g-5)$, one has $\dim \; B_{\rm reg} = \dim \; B_{\rm sup} = \rho_{3g-5}^{g-1} = 2g-1$. On the other hand, for any $d$ and $\nu$ as in \eqref{eq:ourbounds}, \eqref{tan_sup} states that $\dim T_{\mathcal F_u}(B^{k_2}_d) > \rho ^{k_2} _d$ for general $\mathcal F_u \in B_{\rm sup}$ whereas, from Proposition \ref{thm:i=1.3}, $\dim T_{\mathcal F_v} (B^{k_2}_d) =\rho ^{k_2}_d $ for general $\mathcal F_v \in B_{\rm reg}$.
Thus, for $(\nu, d) = (\frac{g}{2}, 3g-5)$, $B_{\rm reg}$ and $B_{\rm sup}$ are distinct irreducible components of $B^{g-1}_{3g-5}$, both of the expected dimension: the first is regular, whereas the second is superabundant, being non-reduced. } \end{remark} \section{Very-ampleness of vector bundles as in Theorem \ref{thm:main2}}\label{S:va} In this section, we will find sufficient conditions guaranteeing that a general bundle in $B_{\rm reg}$ (respectively, in $B_{\rm sup}$) is very ample. Concerning the component $B_{\rm reg}$, we already observed that, as predicted also by Theorem \ref{TeixidorRes}, its construction makes sense also on $C$ with general moduli, where it is actually the unique component of $B_{d}^{k_2} \cap U^s_C(2,d)$. The construction and the properties of $B_{\rm reg}$ in this case are similar to those in \S\;\ref{ss:regular}. We will therefore find very--ampleness conditions for a general $\mathcal F \in B_{\rm reg}$ also for $C$ with general moduli, since we will use them in \S\;\ref{ss:Hreg}. As for $B_{\rm sup}$ on a general $\nu$--gonal curve $C$ as above, in order to find sufficient very--ampleness conditions for $\mathcal F \in B_{\rm sup}$ general, we will make use of the following: \begin{lemma}\label{S2} (cf. \cite[Corollary\;1]{KK}) On a general $\nu$-gonal curve $C$ of genus $g \ge 2\nu-2$, with $\nu \ge 3$, there does not exist a $g^r_{\nu - 2 + 2r}$ with $\nu - 2 + 2r \le g-1$, $r \ge 2$. \end{lemma} \begin{theorem}\label{thm:veryampleness} Take notation as in Theorem \ref{thm:main2}. \noindent (i) If $C$ is a general $\nu$--gonal curve, with $d$ and $\nu$ as in \eqref{eq:ourbounds}, a general $\mathcal F \in B_{\rm reg}$ is very ample for $\nu \ge 4$ and $d \ge 3g-1$. If $C$ is a curve with general moduli, a general $\mathcal F \in B_{\rm reg}$ is very ample for $d \ge 3g-1$.
\vskip 2pt \noindent (ii) If $C$ is a general $\nu$--gonal curve, with $d$ and $\nu$ as in \eqref{eq:ourbounds}, a general $\mathcal F \in B_{\rm sup}$ is very ample for $d+\nu \ge 3g+1$. \end{theorem} \begin{proof} (i) When $C$ is a general $\nu$-gonal curve, for $d$ and $\nu$ as in \eqref{eq:ourbounds}, the strategy of \cite[Lemma\;3.7(c)]{CFK} extends to \eqref{eq:ourbounds} for $\nu \geq 4$. Indeed, observe that $K_C-p$ is very ample, as it follows from the Riemann-Roch theorem; moreover, as in \cite[Claim\;3.8]{CFK}, for general $D \in W^0_{4g-5-d}$, $K_C-D$ is very ample if $\nu \ge 4$ and $d \ge 3g-1$. Here we remark that the condition $d \ge 3g-1$ was crucially used in the proof of the claim. Thus, as $\mathcal F \in B_{\rm reg}$ general fits in \eqref{exactB0}, part $(i)$ is proved in this case. When otherwise $C$ is with general moduli, $K_C-p$ is very ample. Since $\deg (K_C-D) = d-2g+3$ and $K_C-D$ is of speciality $1$, one has $h^0(K_C-D) = d-3g+5$, and $K_C-D$ is very ample as soon as the latter quantity is at least $4$, i.e. for $d \ge 3g-1$. \noindent (ii) This part extends the proof of \cite[Lemma\;3.3(c)]{CFK} to \eqref{eq:ourbounds}. Observe first that $K_C-A$ is very ample: if not, the Riemann-Roch theorem would give the existence of a $g^2_{\nu+2}$ on $C$, which is not allowed by Lemma \ref{S2} above. At the same time, since $\deg(N)=d-2g+2+\nu \geq g +3$ for $d+\nu \ge 3g+1$, a general $N$ is very ample too. Thus $\mathcal F_u$ as in \eqref{exactB1} is very ample as well. \end{proof} \section{Hilbert schemes of surface scrolls}\label{S:Hilbert} In this section, we consider some natural consequences of Theorems \ref{thm:main2} and \ref{thm:veryampleness} for Hilbert schemes of surface scrolls in projective spaces.
Precisely, with assumptions as in Theorem \ref{thm:veryampleness}, a general $\mathcal F \in B_{\rm reg}$ (respectively, $\mathcal F \in B_{\rm sup}$) gives rise to the projective bundle $\Pp(\Ff)\stackrel{\rho}{\to} C$ ($\rho$ being the fiber map), which is embedded via $\vert \OO_{\Pp(\Ff)}(1)\vert$ as a smooth scroll $S$ of degree $d$ and sectional genus $g$, linearly normal in the projective space $\Pp^{k_2-1}= \Pp^{d-2g+3}$, as $k_2= d-2g+4$. We will say that the pair $(C, \Ff)$ {\em determines} $S$, equivalently that $S$ is {\em associated to} $(C, \Ff)$. In any of the above cases, the scroll $S$ is {\em stable}, since $\mathcal F$ is, and it is {\em special}, since $$h^1(S, \OO_S(1)) = h^1(\Pp(\Ff), \OO_{\Pp(\Ff)}(1)) = h^1(C,\mathcal F) =2.$$Let $P_S(t) \in \mathbb{Q}[t]$ be the Hilbert polynomial of $S$ and let $$\mathcal H_{d,g,k_2-1}$$be the union of components of the Hilbert scheme, parametrizing closed subschemes in $\Pp^{k_2-1}$ having Hilbert polynomial $P_S(t)$, whose general point corresponds to a smooth, linearly normal surface scroll in $\Pp^{k_2-1}$. \begin{proposition}\label{prop:Normal} If $\N_{S/\Pp^{k_2-1}}$ denotes the normal bundle of $S$ in $\Pp^{k_2-1}$, then: \begin{equation}\label{eq:tgS3bis} \chi(S,\N_{S/\Pp^{k_2-1}}) = 7g-7 + k_2(k_2-2) = 7g-7 + (d-2g+4) (d-2g+2). \end{equation}In particular, for any irreducible component $\mathcal H$ of $\mathcal H_{d,g,k_2-1}$, one has $$\dim \; \mathcal H \geq \chi(S,\N_{S/\Pp^{k_2-1}}) = 7g-7 + k_2(k_2-2),$$where the latter is the so-called {\em expected dimension} of the Hilbert scheme. \end{proposition} \begin{proof} {\em Euler's sequence} restricted to $S$ is \begin{equation}\label{eq:EulerS} 0 \to \Oc_S \to H^0(\Oc_S(H))^{\vee} \otimes \Oc_S(H) \to \T_{\Pp^{k_2-1}}|_S \to 0.
\end{equation}Moreover, one has also the {\em normal bundle sequence} \begin{equation}\label{eq:tang} 0 \to \T_S \to \T_{\Pp^{k_2-1}}|_S \to \N_{S/\Pp^{k_2-1}} \to 0, \end{equation}where $\T_S$ denotes the tangent bundle of $S$. Since $S$ is a scroll of genus $g$, we have \begin{equation}\label{eq:tgS} \chi(\mathcal O_S) = 1-g, \quad \chi(\T_S) = 6 - 6g \end{equation}(the latter equality is well known and can be easily computed: using the structural scroll morphism $ S \cong \mathbb{P}(\mathcal F) \stackrel{\rho}{\longrightarrow} C$ and the standard scroll exact sequence $0 \to \T_{rel} \to \T_S \to \rho^*(\T_C) \to 0$, where $\T_{rel}$ is the {\em relative tangent sheaf}, together with the fact that $S$ is a scroll, one gets $\T_{rel} = \omega_S^{\vee} \otimes \rho^* (\omega_C) \cong \Oc_S(2 H - \rho^*(\det {\mathcal F})) $, and so $\chi(S,\T_S) = \chi (S,\T_{rel}) + \chi (S,\rho^*(\T_C)) = \chi(S, \Oc_S(2 H - \rho^*(\det {\mathcal F}) )) + \chi (C, \T_C)=2(3-3g)$). From Euler's sequence above, we get \begin{equation}\label{eq:tgS2} \chi(\T_{\Pp^{k_2-1}}|_S) = k_2 (k_2-2) + g-1, \end{equation} since $\chi(S, \Oc_S(H)) = \chi (C, \mathcal F) = d-2g+2 = k_2-2$, as it follows from the fact that $S \cong \mathbb{P}(\mathcal F)$ is a scroll, from Leray's isomorphism and from the projection formula. Thus, from \eqref{eq:tang}, \eqref{eq:tgS} and \eqref{eq:tgS2}, we get $$\chi(S, \N_{S/\Pp^{k_2-1}}) = \chi(\T_{\Pp^{k_2-1}}|_S) - \chi(\T_S) = k_2(k_2-2) + g-1 - (6-6g) = 7(g-1) + k_2(k_2-2)$$as in \eqref{eq:tgS3bis}. The last assertion in the statement of Proposition \ref{prop:Normal} is a consequence of \cite[Corollary 3.2.7]{Ser} and the fact that $h^2(\N_{S/\Pp^{k_2-1}})=0$, as it follows from $h^2(\Oc_S(H))=0$, \eqref{eq:EulerS} and \eqref{eq:tang}. \end{proof} With this set--up, the aim of this section is to prove Theorem \ref{thm:Hilb}. This will be done in the following subsections.
\subsection{The components $\mathcal H_{{\rm sup},\nu}$'s}\label{ss:Hsup} In this subsection we will give the proof of Theorem \ref{thm:Hilb} (ii). We start by giving a parametric construction of the components $\mathcal H_{{\rm sup},\nu}$'s, for every possible $(d,g,\nu)$ arising from \eqref{eq:ourbounds} and the conditions in Theorems \ref{thm:Hilb} (ii) and \ref{thm:veryampleness}\;(ii). To this aim, consider: \begin{itemize} \item $C \in \mathcal M^1_{g,\nu}$ general \item $\Ff \in B_{\rm sup}$ general on $C$ \item $\Phi \in {\rm PGL}(k_2, \mathbb{C}) = {\rm Aut}(\Pp^{k_2-1})$. \end{itemize}The triple $(C, \Ff, \Phi)$ determines the smooth scroll $\Phi(S) \subset \Pp^{k_2-1}$, where $S$ is associated to $(C, \Ff)$. For each triple $(d,g,\nu)$, scrolls $\Phi(S)$ as above fill--up an irreducible subset $\mathcal X_{\nu}$ of $\mathcal H_{d,g,k_2-1}$, as $\mathcal M^1_{g,\nu}$, $B_{\rm sup}$ on $C$ and ${\rm PGL}(k_2, \mathbb{C})$ are all irreducible. Therefore, $\mathcal X_{\nu}$ is contained in (at least) one irreducible component of $\mathcal H_{d,g,k_2-1}$; any such irreducible component dominates $\mathcal M^1_{g,\nu}$ (as $\mathcal X_{\nu}$ does, by construction) and has dimension at least $\dim\;\mathcal X_{\nu}$. Thanks to the parametric representation of $\mathcal X_{\nu}$, we can easily compute its dimension. \begin{proposition}\label{cl:dimX} $\dim\; \mathcal X_{\nu} = 8g-d-12 + k_2^2 = 8g-d - 12 + (d - 2g + 4)^2.$ In particular, if $d\leq 3g-4$, then the closure of $\mathcal X_{\nu} $ cannot be an irreducible component of $\mathcal H_ {d,g,k_2 -1}$. \end{proposition} \begin{proof} Let $(C, \Ff, \Phi)$ and $(C', \Ff', \Phi')$ be two triples such that $\Phi(S) = \Phi'(S')$. Since $\Phi$ and $\Phi'$ are both projective transformations, the previous equality implies $S' = ((\Phi')^{-1} \circ \Phi)(S)$, i.e.
$S$ and $S'$ are projectively equivalent via $\Psi:= ((\Phi')^{-1} \circ \Phi)$ and the triples $(C, \Ff, {\rm Id})$ and $(C', \Ff', \Psi)$ map to the same point of $\mathcal X_{\nu}$. This, in particular, implies that the abstract ruled surfaces $\Pp(\Ff)$ on $C$ and $\Pp(\Ff')$ on $C'$ are isomorphic via $\Psi$. Thus, $\Psi|_C: C \to C'$ has to be an isomorphism, i.e. $C$ and $C'$ correspond to the same point of $\mathcal M^1_{g,\nu}$ and $\Psi|_C \in {\rm Aut}(C)$. On the other hand, since $C \in \mathcal M^1_{g,\nu}$ is general, with $\nu \geq 3$, one has ${\rm Aut}(C) = \{Id_C\}$ (cf.\;computations in\;\cite[pp.\;275-276]{GH}). Therefore, with notation as in \cite{Ma}, $\Psi \in {\rm Aut}_C(\Pp(\Ff))$, which is the subgroup of ${\rm Aut}(\Pp(\Ff))$ consisting of automorphisms of $\Pp(\Ff)$ over $C$ (i.e. fixing $C$ pointwise). From \cite[Lemma\;3]{Ma}, one has the exact sequence of algebraic groups $$\{Id\} \to \frac{{\rm Aut}(\Ff)}{\mathbb C^*} \to {\rm Aut}_C(\Pp(\Ff)) \to \Delta \to \{Id\}$$where $\Delta$ is a finite subgroup of the $2$-torsion part of ${\rm Pic}^0(C)$. Since $\Ff$ is stable, hence simple (i.e. ${\rm Aut}(\Ff) \cong \mathbb C^*$), we deduce that ${\rm Aut}_C(\Pp(\Ff))$ is a finite group. This means that $$\dim\; \mathcal X_{\nu}= \dim \; \mathcal M^1_{g,\nu} + \dim \; B_{\rm sup} + \dim\; {\rm PGL}(k_2, \mathbb{C});$$the latter sum is: $$(2g+2\nu - 5) + (6g-d-2\nu -6) + (k_2^2-1) = 8g - d -12 + k_2^2 = 8g-d - 12 + (d - 2g + 4)^2.$$Notice moreover that, if $d=3g-5, \; 3g-4$, one has $$\dim\; \mathcal X_{\nu} < \chi(\mathcal N_{S/\Pp^{k_2-1}}) = 7g-7 + k_2(k_2-2)$$as in \eqref{eq:tgS3bis}; this means that, in these cases, the closure of $\mathcal X_{\nu}$ cannot be a component of the Hilbert scheme, as it follows from Proposition \ref{prop:Normal}.
\end{proof} Let $[S] \in \mathcal X_{\nu} \subset \mathcal H_{d,g,k_2-1}$ be the point corresponding to the scroll $S \subset \Pp^{k_2-1}$; then $$T_{[S]} (\mathcal H_{d,g,k_2-1}) \cong H^0(S,\mathcal N_{S/\Pp^{k_2-1}}).$$ We first focus on the case $d \geq 3g-3$ and prove that $\mathcal X_{\nu}$ fills--up a component of $\mathcal H_{d,g,k_2-1}$ with properties as in Theorem \ref{thm:Hilb} (ii). To prove this, we are reduced to computing the cohomology of $\mathcal N_{S/\Pp^{k_2-1}}$ for $[S]$ a general point of $\mathcal X_{\nu}$. This will be done in the following proposition. \begin{proposition}\label{prop:Normalsup} Let $S \subset \Pp^{k_2-1}$ be a smooth, linearly normal, special scroll which corresponds to a general point of $\mathcal X _\nu$ as above for $d\geq 3g-3$. Then, one has: \begin{itemize} \item[(i)] $h^0( S, \N_{S/\Pp^{k_2-1}}) = 8g-d-12 + k_2^2 = 8g-d-12+(d-2g+4)^2$; \item[(ii)] $h^1( S, \N_{S/\Pp^{k_2-1}}) = d - 3g + 3 $; \item[(iii)] $h^2( S, \N_{S/\Pp^{k_2-1}}) = 0$. \end{itemize} \end{proposition} \begin{proof} Observe that (iii) has already been proved in Proposition \ref{prop:Normal}. We moreover observed therein that $$\chi(S, \N_{S/\Pp^{k_2-1}}) = h^0(S, \N_{S/\Pp^{k_2-1}}) - h^1(S, \N_{S/\Pp^{k_2-1}})= 7(g-1) + k_2(k_2-2)$$as in \eqref{eq:tgS3bis}. Therefore, the rest of the proof is concentrated on computing $h^1(S, \N_{S/\Pp^{k_2-1}})$. Since $S \cong \Pp(\Ff)$ is a scroll corresponding to a general point of $\mathcal X_{\nu}$, $\mathcal F$ corresponds to a general point of $B_{\rm sup}$ on $C$. Let $\Gamma$ be the unisecant curve of $S$ of degree $2g-2-\nu$ corresponding to the quotient line bundle $\Ff \twoheadrightarrow K_C-A$ as in \eqref{exactB1} (cf.\;\cite[Ch.\;V.\;Proposition\;2.6]{H}).
\begin{claim}\label{cl:flam1502} One has $h^1(S, \N_{S /\Pp^{k_2-1}} (-\Gamma)) = h^2(S, \N_{S /\Pp^{k_2-1}} (-\Gamma)) = 0$, hence \begin{equation}\label{eq:tgS15} h^1(S, \N_{S /\Pp^{k_2-1}}) = h^1(\Gamma, \N_{S /\Pp^{k_2-1}}|_{\Gamma}). \end{equation} \end{claim} \begin{proof}[Proof of Claim \ref{cl:flam1502}] Look at the exact sequence \[0 \to \N_{S /\Pp^{k_2-1}} (-\Gamma) \to \N_{S /\Pp^{k_2-1}} \to \N_{S /\Pp^{k_2-1}}|_{\Gamma} \to 0.\] From \eqref{eq:tang} tensored by $\Oc_S(-\Gamma)$ we see that $h^2(S, \N_{S /\Pp^{k_2-1}} (-\Gamma)) = 0$ follows from $h^2(S, \T_{\Pp^{k_2-1}}|_S (-\Gamma)) = 0$ which, by Euler's sequence restricted to $S$, follows from $h^2(S, \Oc_S(H - \Gamma)) = h^0(S, \Oc_S(K_S - H + \Gamma)) = 0$, since $K_S - H + \Gamma$ intersects the ruling of $S$ negatively. As for $h^1(S, \N_{S /\Pp^{k_2-1}} (-\Gamma))= 0$, this follows from $h^1(S, \T_{\Pp^{k_2-1}}|_S (-\Gamma)) = h^2(S, \T_S (-\Gamma)) = 0$. By Euler's sequence restricted to $S$, the first vanishing follows from $h^2(S, \Oc_S(-\Gamma)) = h^1(S, \Oc_S(H-\Gamma))=0$. Since $K_S+\Gamma$ meets the ruling negatively, one has $h^0(S, \Oc_S(K_S +\Gamma)) = h^2(S, \Oc_S(-\Gamma)) =0$. Moreover $h^1(S, \Oc_S(H-\Gamma)) = h^1(C, N)=0$, as it follows from \eqref{exactB1} and the fact that $N \in {\rm Pic}^{d-2g+2+\nu}(C)$ is non special, being general of its degree (cf. Theorem \ref{thm:main2}\;(ii)). In order to prove $h^2(S, \T_S (-\Gamma)) = 0$, consider the exact sequence \[0 \to \T_{rel} \to \T_S \to \rho^*(\T_C) \to 0\]arising from the structure morphism $S\cong \Pp(\Ff) \stackrel{\rho}{\to} C$. The vanishing we need follows from $h^2(S, \T_{rel} \otimes \Oc_S(-\Gamma)) = h^2 (S, \Oc_S(-\Gamma) \otimes \rho^*(\T_C)) = 0$: the first vanishing holds since $\T_{rel} \cong \Oc_S (2H - df)$, where $f = \rho^{-1}(q)$ is a ruling of $S$, so that $\Oc_S(K_S + \Gamma) \otimes \T_{rel}^{*}$ restricts negatively to the ruling and hence cannot be effective.
Similar considerations yield the second vanishing $h^2 (S, \Oc_S(-\Gamma) \otimes \rho^*(\T_C)) = 0$. \end{proof} Consider now the exact sequence \begin{equation}\label{eq:B} 0 \to \N_{\Gamma/S} \stackrel{\alpha}{\longrightarrow} \N_{\Gamma /\Pp^{k_2-1}} {\longrightarrow} \N_{S / \Pp^{k_2-1}}|_{\Gamma} \to 0. \end{equation} \begin{claim}\label{cl:flam2611} The map $$H^1(\Gamma, \N_{\Gamma/S}) \stackrel{H^1(\alpha)}{\longrightarrow} H^1(\Gamma, \N_{\Gamma/\Pp^{k_2-1}})$$arising from \eqref{eq:B} is injective. \end{claim} \begin{proof}[Proof of Claim \ref{cl:flam2611}] Consider $\Gamma \subset \langle \Gamma \rangle = \Pp^{g-\nu} \subset \Pp^{k_2-1}$, where $\langle \Gamma \rangle$ denotes the linear span of the section $\Gamma$ and where $\dim \; \langle \Gamma \rangle = h^0(K_C-A) - 1 = h^1(A) - 1 = g-\nu$, as it follows from the Riemann-Roch theorem, since $h^0(A) = 2$. From the inclusions $\Gamma \subset \Pp^{g-\nu} \subset \Pp^{k_2-1}$ we get the sequence \begin{equation}\label{eq:tg*} 0 \to \N_{\Gamma/\Pp^{g-\nu}} \to \N_{\Gamma/ \Pp^{k_2-1}} \to \N_{\Pp^{g-\nu}/\Pp^{k_2-1}}|_{\Gamma} \to 0. \end{equation}Take the Euler sequence of $\Pp^{g-\nu}$ restricted to $\Gamma$, i.e. $$0 \to \Oc_{\Gamma} \to H^0(\Oc_{\Gamma}(1))^{\vee} \otimes \Oc_{\Gamma}(1) \cong (K_C-A)^{\oplus(g-\nu+1)} \to \T_{\Pp^{g-\nu}}|_{\Gamma} \to 0;$$ taking cohomology and dualizing, we get that $$H^1(\T_{\Pp^{g-\nu}}|_{\Gamma})^{\vee} \cong {\rm Ker}\left(H^0(K_C-A) \otimes H^0(A) \stackrel{\mu_{0,A}}{\longrightarrow} H^0(K_C)\right)$$as in \eqref{eq:kers}.
Therefore, from \eqref{eq:kers} and \eqref{eq:h12A} one gets $$h^1(\T_{\Pp^{g-\nu}}|_{\Gamma}) = g + 2 - 2 \nu.$$Consider now the exact sequence defining the normal bundle of $\Gamma$ in its linear span: $$0 \to \T_{\Gamma} {\longrightarrow} \T_{\Pp^{g-\nu}}|_{\Gamma} {\longrightarrow} \N_{\Gamma/\Pp^{g-\nu}} \to 0;$$the associated coboundary map $H^0( \N_{\Gamma/\Pp^{g-\nu}}) \stackrel{\partial}{\longrightarrow} H^1 (\T_{\Gamma})$ identifies with the differential at the point $[\Gamma]$ of the natural map $$\Psi: {\rm Hilb}_{g,2g-2-\nu, g-\nu} \to \mathcal M_g,$$where ${\rm Hilb}_{g,2g-2-\nu, g-\nu}$ denotes the Hilbert scheme of curves of genus $g$ and degree $2g-2-\nu$ in $\Pp^{g-\nu}$. By construction, $$\dim\; {\rm coker}(d\Psi_{[\Gamma]}) = \dim \; \mathcal M_g - \dim \;\mathcal M^1_{g,\nu} = 3g-3 - (2g+2\nu -5) = g+2 - 2\nu = h^1(\T_{\Pp^{g-\nu}}|_{\Gamma}),$$i.e. the map $$H^1( \T_{\Gamma}) \stackrel{H^1(\lambda)}{\longrightarrow} H^1(\T_{\Pp^{g-\nu}}|_{\Gamma})$$is surjective. Since $h^2(\T_{\Gamma}) = 0$, this implies $h^1(\N_{\Gamma/\Pp^{g-\nu}}) = h^2(\N_{\Gamma/\Pp^{g-\nu}}) = 0$. Therefore, from \eqref{eq:tg*}, one has \begin{equation}\label{eq:normals1} H^1(\N_{\Gamma/\Pp^{k_2-1}}) \cong H^1(\N_{\Pp^{g-\nu}/\Pp^{k_2-1}}|_{\Gamma}) = H^1(\Oc_{\Gamma}(1)^{\oplus(k_2-1 - g + \nu)}) \cong H^1((K_C-A)^{\oplus(k_2-1 - g + \nu)}). \end{equation} Since the scroll $S$ arises from a general $\Ff \in B_{\rm sup}$ (on a general $C \in \mathcal M^1_{g,\nu}$), $\Ff$ fits in \eqref{exactB1}, with $N$ general of its degree.
In particular one has $$0 \to H^0(N) \to H^0(\Ff) \to H^0(K_C-A) \to 0.$$Therefore, one has also $$0\to H^0(C, K_C-A)^{\vee} \to H^0(C, \Ff)^{\vee} \to H^0(C, N)^{\vee} \to 0.$$Since $H^0(S, \Oc_S(1)) \cong H^0(C, \Ff)$ and $\Oc_{\Gamma}(1)\cong K_C-A$, the Euler sequences of the projective spaces $\Pp^{g-\nu}$ and $\Pp^{k_2-1}$ restricted to $\Gamma$ give the following commutative diagram: \begin{displaymath} \begin{array}{ccccccc} & & & 0 & & 0 & \\ & & & \downarrow & & \downarrow & \\ 0 \to & \Oc_{\Gamma} & \to & H^0(C, K_C-A)^{\vee} \otimes (K_C-A) & \to & \T_{\Pp^{g-\nu}}|_{\Gamma} & \to 0 \\ & || & & \downarrow & & \downarrow & \\ 0 \to & \Oc_{\Gamma} & \to & H^0(C, \Ff)^{\vee} \otimes (K_C-A) & \to & \T_{\Pp^{k_2-1}}|_{\Gamma} & \to 0 \\ & & & \downarrow & & \downarrow & \\ & & & H^0(C, N)^{\vee} \otimes (K_C-A)&\cong & \N_{\Pp^{g-\nu}\vert \Pp^{k_2-1}}|_{\Gamma} & \\ & & & \downarrow & & \downarrow & \\ & & & 0 & & 0 & \end{array} \end{displaymath}This shows in particular that $$\N_{\Pp^{g-\nu}\vert \Pp^{k_2-1}}|_{\Gamma} \cong H^0(C, N)^{\vee} \otimes (K_C-A)$$and so, in \eqref{eq:normals1}, one has more precisely $$H^1(\N_{\Gamma/\Pp^{k_2-1}}) \cong H^1(\N_{\Pp^{g-\nu}/\Pp^{k_2-1}}|_{\Gamma}) \cong H^0(N)^{\vee} \otimes H^1(K_C-A).$$ On the other hand, $\N_{\Gamma/S} \cong K_C-A-N$ (cf. \cite[Ch.\;V.\;Proposition\;2.6]{H}), so $$h^0(\Gamma, \N_{\Gamma/S})= 0, \; h^1(\Gamma, \N_{\Gamma/S}) = d- 3g+3 + 2 \nu.$$Therefore, the map $$H^1(\Gamma, \N_{\Gamma/S}) \stackrel{H^1(\alpha)}{\longrightarrow} H^1(\Gamma, \N_{\Gamma/\Pp^{k_2-1}})$$arising from \eqref{eq:B} identifies with the natural map $$H^1(K_C-A-N) \stackrel{H^1(\alpha)}{\longrightarrow} H^0(N)^{\vee} \otimes H^1(K_C-A)$$whose dual is $$H^0(N) \otimes H^0(A) \stackrel{H^1(\alpha)^{\vee}}{\longrightarrow} H^0(N+A),$$i.e. $H^1(\alpha)^{\vee} = \mu_{A,N}$ is the natural multiplication map as in \eqref{muA}.
Since $N$ is non special and by definition of $A$, one has $$h^0(N) = d - 3g + 3 + \nu, \; h^0(A) = 2;$$moreover, from \eqref{eq:kers}, one has $$\dim \; {\rm ker} (\mu_{A,N}) = h^0(N-A) = d - 3g + 3.$$Therefore, $$\dim \; {\rm Im}\;\mu_{A,N} = 2 (d - 3g + 3 + \nu) - (d - 3g + 3) = d - 3g + 3 + 2 \nu = h^0(N+A),$$i.e. $\mu_{A,N} = H^1(\alpha)^{\vee}$ is surjective. This implies that $H^1(\alpha)$ is injective, as wanted. \end{proof}Considering once again \eqref{eq:B} and \eqref{eq:normals1}, the injectivity of $H^1(\alpha)$ and $h^2(\N_{\Gamma/S}) =0$ give $$h^1(\N_{S / \Pp^{k_2-1}}|_{\Gamma}) = h^1(\N_{\Gamma /\Pp^{k_2-1}}) - h^1(\N_{\Gamma/S}) = 2 (k_2 - 1 - g + \nu) - h^1(K_C-A-N) = 2 (k_2 - 1 - g + \nu) - (d + 2 \nu - 3 g + 3) = d - 3g + 3.$$From \eqref{eq:tgS15}, part (ii) of Proposition \ref{prop:Normalsup} follows and the proof is completed. \end{proof} To conclude the proof of Theorem \ref{thm:Hilb} (ii), for $d \geq 3g-3$, we need to show that $\mathcal X_{\nu}$ fills--up a dense subset of a unique component, say $\mathcal H_{{\rm sup},\nu}$, with all the properties mentioned therein. To deduce this, it suffices to observe first that $$ 8g-d-12 + k_2^2 = \dim\; \mathcal X_{\nu} \leq \dim \; T_{[S]} (\mathcal H_{d,g,k_2-1}) = h^0(S,\N_{S / \Pp^{k_2-1}})= 8g-d-12 + k_2^2,$$the latter equality following from Proposition \ref{prop:Normalsup} (i). Moreover, as $$8g-d-12 + k_2^2 = \chi(\N_{S / \Pp^{k_2-1}})+ d-3g+3,$$ it follows that the component $\mathcal H_{{\rm sup},\nu}$ (arising as the closure of $\mathcal X_{\nu}$ in $\mathcal H_{d,g,k_2-1}$) is a {\em superabundant} (resp., {\em regular}) component of $\mathcal H_{d,g,k_2-1}$ for $d\geq 3g-2$ (resp. $d=3g-3$). By construction of $\mathcal H_{{\rm sup},\nu}$, it follows that it dominates $\mathcal M^1_{g,\nu}$. This implies that $\mathcal H_{{\rm sup},\nu} \neq \mathcal H_{{\rm sup},\nu'}$ for $\nu \neq \nu'$.
Thus the proof of Theorem \ref{thm:Hilb} (ii) is completed for $d \geq 3g-3$. Now we take into account the cases $d = 3g-4,\;3g-5$; recall that $\mathcal X_{\nu}$ has to be strictly contained in at least one irreducible component $\mathcal H$ of $\mathcal H_{d,g,k_2-1}$. To investigate such a component $\mathcal H$, we will use the following lemma. \begin{lemma}\label{lem_special} For $3g-5 \leq d \leq 3g-4$, assume that $\mathcal H$ is an irreducible component of $\mathcal H _{d,g, k_2 -1}$, whose general point corresponds to a smooth, stable scroll. Let $\mathcal F _u$ be a rank 2 vector bundle associated to a general element of $\mathcal H$, where $\mathcal F_u$ arises as an extension of the form \eqref{degree}, with $L$ necessarily special, on a suitable smooth curve $C$ of genus $g$. Then, one must have $h^1(L) =1$. \end{lemma} \begin{proof} By the definition of $\mathcal H _{d,g, k_2 -1}$, the general point $[S]$ of $\mathcal H$ represents a smooth, linearly normal scroll $S$ in $\Pp^{k_2-1}$, i.e. it is of speciality exactly $2$; the scroll $S$ is associated to a degree $d$, very ample, rank 2 vector bundle $\mathcal F_u$ on a smooth curve $C$ of genus $g$. With a small abuse of notation, in what follows we will denote simply by $u \in \mathcal H$ the corresponding point $[S]$. Since $\mathcal F_u$ is special and stable, by Theorem \ref{CF} it arises as an extension \eqref{degree}. Suppose that $h^1 (L) > 1$; then one must have $h^1(L) =2$. Since $\mathcal F_u$ is stable with $d \geq 3g-5$, by \eqref{eq:neccond} one has $\delta:= \deg\; L > \frac{3g-5}{2}$. Then $|K_C -L|$ is a $g^1_k$ with $k < \frac{g+1}{2}$, where $k:=2g-2-\deg \; L$.
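Indeed, the bound on $k$ follows at once from the bound on $\delta$:
\begin{equation*}
k \,=\, 2g-2-\delta \,<\, 2g-2-\frac{3g-5}{2} \,=\, \frac{g+1}{2}.
\end{equation*}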
Thus there exists an open dense subset $\mathcal H_0$ of $\mathcal H$ which admits a map: $$\eta : \mathcal H_0 \to {\mathcal P}ic ^k (p)$$ given by $\eta (u) := K_C -L$, where ${\mathcal P}ic ^k (p)$ is the relative Picard variety of a suitable family $p: \mathcal C \to S$ of smooth curves of genus $g$. Since $h^1(L) =2$, the image of $\eta$ is contained in $\mathcal W ^1_k$, where $\mathcal W ^1_k$ is a subvariety of ${\mathcal P}ic ^k (p)$ parametrizing pairs $(C, M)$ with $h^0 (M) \geq 2$. It is known that $\dim \; \mathcal W ^1_k =2g +2k -5$ for $k < \frac{g+1}{2}$ (see \cite[Proposition (6.8)]{ACG}). The fiber of $\eta$ has dimension at most $$\dim \; {\rm Pic}^{d-\delta} (C)+ \dim \; \mathbb P (\ext^1(L, N)) + \dim \;{\rm PGL}(k_2, \mathbb{C}) = 6g -d-2k-6+k_2 ^2 -1,$$as it follows from \eqref{eq:m}. In sum, we get: $$\dim \; \mathcal H \leq (2g +2k-5) + (6g -d-2k-6 +k_2 ^2 -1) =8g-d-12 +k_2 ^2.$$ This cannot occur for $d\leq 3g-4$: indeed, by Proposition \ref{prop:Normal}, any irreducible component of $\mathcal H_{d,g,k_2-1}$ has dimension at least $\chi (\N_{S / \Pp^{k_2-1}}) = 7g-7+k_2(k_2-2)$, and $$7g-7+k_2(k_2-2) - (8g-d-12 +k_2^2) = 3g-3-d > 0 \;\; \mbox{for} \;\; d \leq 3g-4.$$ \end{proof} \begin{corollary} If $d=3g-5, \; 3g-4$, $\mathcal X_{\nu}$ is strictly contained in a component of $\mathcal H_{d,g,k_2-1}$ whose general point is associated to an extension \eqref{degree} with $h^1 (L)=1$ on a suitable smooth curve of genus $g$. \end{corollary} \subsection{The component $\mathcal H_{\rm reg}$}\label{ss:Hreg} As we did above for the components $\mathcal H_{{\rm sup},\nu}$'s, we first give a parametric construction of the component $\mathcal H_{\rm reg}$. Take integers $d,\;g,\;\nu$ as in \eqref{eq:ourbounds} and in Theorem \ref{thm:veryampleness}\;(i).
As observed therein, the construction of $B_{\rm reg}$, conducted in Sect.\;\ref{ss:regular} for $C \in \mathcal M^1_{g,\nu}$ general, holds {\em verbatim} for $C$ with general moduli and, in particular, it coincides with the (unique) component $\mathcal B$ of $B_d^{k_2}\cap U^s_C(2,d)$ as in the statement of Theorem \ref{TeixidorRes}; moreover, the very-ampleness conditions in Theorem \ref{thm:veryampleness}\;(i) hold also for $C$ general. To construct $\mathcal H_{\rm reg}$, take therefore: \begin{itemize} \item $C \in \mathcal M_g$ general \item $\Ff \in B_{\rm reg}$ general on $C$ \item $\Phi \in {\rm PGL}(k_2, \mathbb{C}) = {\rm Aut}(\Pp^{k_2-1})$. \end{itemize}As in the previous subsection, the triple $(C, \Ff, \Phi)$ determines the smooth scroll $\Phi(S) \subset \Pp^{k_2-1}$, where $S$ is associated to $(C, \Ff)$. Such scrolls $\Phi(S)$ fill--up an irreducible subset $\mathcal Y$ of $\mathcal H_{d,g,k_2-1}$, as $\mathcal M_g$, $B_{\rm reg}$ on $C$ and ${\rm PGL}(k_2, \mathbb{C})$ are all irreducible. Therefore, $\mathcal Y$ is contained in (at least) one irreducible component of $\mathcal H_{d,g,k_2-1}$; any such component dominates $\mathcal M_g$ (as $\mathcal Y$ does, by construction) and it is of dimension at least $\dim\;\mathcal Y$. Moreover, since $ \mathcal M^1_{g,\nu} \subsetneq \mathcal M_g$ is also irreducible and, for $C' \in \mathcal M^1_{g,\nu}$ general, $B_{\rm reg}$ on $C'$ is irreducible, of the same dimension as $B_{\rm reg}$ on $C$, the triples $(C', \Ff', \Phi)$, for $C' \in \mathcal M^1_{g,\nu}$ general, $\Ff' \in B_{\rm reg}$ general on $C'$ and $\Phi \in {\rm PGL}(k_2, \mathbb{C})$, fill--up an irreducible, closed subset $\mathcal Y' \subsetneq \mathcal Y$, where $\mathcal Y'$ dominates $ \mathcal M^1_{g,\nu}$ (but not $\mathcal M_g$) by construction.
Thanks to the parametric representation of $\mathcal Y$, reasoning as in the proof of Proposition \ref{cl:dimX}, one can easily compute $\dim\;\mathcal Y$, as ${\rm Aut}(C) = \{Id_C\}$ for $C$ with general moduli. Thus, one gets: $$\dim\; \mathcal Y = \dim \; \mathcal M_{g} + \dim \; B_{\rm reg} + \dim\; {\rm PGL}(k_2, \mathbb{C});$$the latter quantity is $$(3g-3) + (8g-2d-11) + (k_2^2-1) = 11g - 2d -15 + k_2^2 = 11g - 2d - 15 + (d-2g+4)^2 = $$ $$= 11g - 2d - 15 + 2 (d-2g+4) + (d-2g+2)(d-2g+4)= 7g - 7 +(d-2g+4) (d-2g+2) = 7g - 7 + k_2 (k_2-2).$$ To prove that $\mathcal Y$ fills--up a dense subset of a unique component of $\mathcal H_{d,g,k_2-1}$, with properties as in Theorem \ref{thm:Hilb} (i), we are reduced to computing the cohomology of the normal bundle $\mathcal N_{S/\Pp^{k_2-1}}$ for $S$ corresponding to a general point of $\mathcal Y$. This will be done in the following proposition. \begin{proposition}\label{prop:Normalreg} Let $S \subset \Pp^{k_2-1}$ correspond to a general point of $\mathcal Y$ as above. Then, one has: \begin{itemize} \item[(i)] $h^0( S, \N_{S/\Pp^{k_2-1}}) = 7g-7 + k_2(k_2-2) = 7g-7 + (d-2g+4) (d-2g+2)$; \item[(ii)] $h^1( S, \N_{S/\Pp^{k_2-1}}) = 0$; \item[(iii)] $h^2( S, \N_{S/\Pp^{k_2-1}}) = 0$. \end{itemize} \end{proposition} \begin{proof} The proof of (iii) has already been given in Proposition \ref{prop:Normal}. Therefore, $\chi(\N_{S/\Pp^{k_2-1}}) = h^0(\N_{S/\Pp^{k_2-1}}) - h^1(\N_{S/\Pp^{k_2-1}} ) $ is given in \eqref{eq:tgS3bis}. The proof is thus reduced to showing that $h^1(S, \N_{S/\Pp^{k_2-1}}) =0$. Since $S \cong \Pp(\Ff)$ corresponds to a general point of $\mathcal Y$, $\mathcal F$ corresponds to a general point of $B_{\rm reg}$ on $C$ with general moduli.
To compute $h^1(S, \N_{S/\Pp^{k_2-1}})$, we therefore cannot proceed as in the proof of Proposition \ref{prop:Normalsup} (where we used the section of minimal degree $\Gamma$ corresponding to the quotient line bundle $K_C-A$ and the fact that $h^1(C,N) =0$ for $N$ as in \eqref{exactB1}). Indeed, in the present case the section $\Gamma$ corresponds to the quotient line bundle $\Ff \to\!\!\!\!\! \to K_C-p$ as in \eqref{exactB0}, for which $h^1(K_C-D) = h^1(K_C-p) =1$. To sum up, one cannot reason as in the previous case; instead, consider the natural exact sequence on $S$: \begin{equation}\label{tgrel} 0 \to \T_{rel} \to \T_S \to \rho^*(\T_C) \to 0, \end{equation}arising from the structure morphism $S\cong \Pp(\Ff) \stackrel{\rho}{\to} C$. One has $h^2(\T_S) =0$, as it follows from \begin{equation}\label{tgrel2} 0 \to \rho_* (\T_{rel}) \to \rho_*(\T_S) \to \T_C \to 0, \end{equation}obtained by pushing forward \eqref{tgrel} to $C$, and from Leray's isomorphisms. From the exact sequence defining the normal bundle: \begin{equation}\label{eq:normal} 0 \to \T_S \stackrel{\gamma_S}{\longrightarrow} \T_{\Pp^{k_2-1}}|_S \to \N_{S/\Pp^{k_2-1}} \to 0 \end{equation}and the fact that $h^2(\T_S) =0$, one has: $$(*)\;\;\; h^1(\N_{S/\Pp^{k_2-1}}) =0 \;\; \Leftrightarrow \;\; H^1(\T_S) \stackrel{H^1(\gamma_S)}{\longrightarrow} H^1(\T_{\Pp^{k_2-1}}|_S) \;\; \mbox{is surjective};$$therefore, we are reduced to showing that the map $H^1(\gamma_S)$ is surjective. On the other hand, since \eqref{tgrel}, \eqref{tgrel2} and Leray's isomorphisms give $h^0(\rho^*(\T_C) ) = h^0(\T_C) = h^2(\T_{rel}) = h^2(\rho_* (\T_{rel}))= 0$, one has $$H^1(\T_S) \cong H^1(\rho_*(\T_S)) = H^1(\rho_* (\T_{rel})) \oplus H^1(\T_C);$$moreover, from $K_S = - 2 H \otimes \rho^*(\omega_C \otimes \det (\mathcal F))$ (cf.
\cite[Ch.\;V]{H}), one gets $\T_{rel} \cong \Oc_S ( 2 H \otimes \rho^*(\det (\mathcal F)^*))$; thus, by the projection formula, $\rho_*(\T_{rel}) = Sym^2(\mathcal F) \otimes \det (\mathcal F)^*$. To sum up, one has: \begin{equation}\label{eq:isom1} H^1(S, \T_S) \cong H^1\left(C, Sym^2(\mathcal F) \otimes \det (\mathcal F)^*\right) \oplus H^1\left(C, \T_C\right). \end{equation} Similarly, the Euler sequence of $\mathbb{P}^{k_2-1}$ restricted to $S$ reads: \begin{equation}\label{eq:EulerSS} 0 \to \Oc_S \to H^0(\mathcal F)^{\vee} \otimes \Oc_S(H) \stackrel{\tau_S}{\longrightarrow} \T_{\Pp^{k_2-1}}|_S \to 0, \end{equation}as follows from the definition of $\Oc_S(H)$ and the fact that $S \subset \mathbb{P}^{k_2-1}$ is linearly normal. Applying $\rho_*$ to \eqref{eq:EulerSS}, one has: \begin{equation}\label{eq:EulerC} 0 \to \Oc_C \to H^0(\mathcal F)^{\vee} \otimes \mathcal F \stackrel{\rho_*(\tau_S)}{\longrightarrow} \rho_*(\T_{\Pp^{k_2-1}}|_S) \to 0, \end{equation}with $H^i(S, \T_{\Pp^{k_2-1}}|_S) \cong H^i(C, \rho_*(\T_{\Pp^{k_2-1}}|_S))$, for $i \geq 0$.
Since the above identifications have all been obtained by using \eqref{tgrel} and \eqref{eq:EulerSS}, which are both compatible with \eqref{eq:normal}, one has: {\small $$(**)\;\;\; h^1(\N_{S/\Pp^{k_2-1}}) =0 \;\; \Leftrightarrow \;\; H^1\left(C, Sym^2(\mathcal F) \otimes \det (\mathcal F)^*\right) \oplus H^1\left(C, \T_C\right) \stackrel{H^1(\rho_*(\gamma_S))}{\longrightarrow} H^1(\rho_*(\T_{\Pp^{k_2-1}}|_S)) \to 0.$$ } From \eqref{eq:EulerSS} and $h^2(\Oc_S)=0$, one has $$H^0(\mathcal F)^{\vee} \otimes H^1(\Oc_S(H)) \stackrel{H^1(\tau_S)}{\longrightarrow} H^1(\T_{\Pp^{k_2-1}}|_S) \to 0$$and, as above, $H^1(\tau_S)$ identifies with the surjective map \begin{equation}\label{eq:surjectivity1} H^0(\mathcal F)^{\vee} \otimes H^1(\mathcal F) \stackrel{\tiny H^1(\rho_*(\tau_S))}{\longrightarrow} H^1(\rho_*(\T_{\Pp^{k_2-1}}|_S)) \to 0. \end{equation} Therefore, to show the surjectivity of $H^1(\rho_*(\gamma_S))$ as in $(**)$, it suffices to show that there exists a natural surjective map \begin{equation}\label{eq:surjectivity2} H^1\left(C, Sym^2(\mathcal F) \otimes \det (\mathcal F)^*\right) \oplus H^1\left(C, \T_C\right) \stackrel{\psi_C}{\longrightarrow} H^0(\mathcal F)^{\vee} \otimes H^1(\mathcal F) \end{equation} compatible with the maps in the previous diagrams. By duality, this is equivalent to proving the existence of an injective map \begin{equation}\label{eq:injectivity1} H^0(\mathcal F) \otimes H^0(\omega_C \otimes \mathcal F^*) \stackrel{\psi^{\vee}_C}{\hookrightarrow} H^0\left(C, \omega_C \otimes Sym^2(\mathcal F^*) \otimes \det (\mathcal F) \right) \oplus H^0\left(C, \omega_C^{\otimes 2}\right) \end{equation} compatible with the dual maps of the previous diagrams. Since $\mathcal F$ fits in an exact sequence of the form \eqref{exactB0}, for $p$ and $D$ general on a curve $C$ with general moduli, i.e.
$\mathcal F = \mathcal F_u$ for $u \in \mathcal W_1 \subsetneq \ext^1(K_C-p, K_C-D)$, by semicontinuity on $\mathcal W_1$ and the fact that $$H^0(\mathcal F_u) \cong H^0(K_C-D) \oplus H^0(K_C-p) \;\;\;{\rm and} \;\;\; H^1(\mathcal F_u) \cong H^1(K_C-D) \oplus H^1(K_C-p)$$for any $u \in \mathcal W_1$, we will prove the existence of such an injective map \eqref{eq:injectivity1} for the split bundle $\mathcal F_0 := (K_C-D) \oplus (K_C-p) \in \mathcal W_1$. Concerning the domain of the map $\psi^{\vee}_C$, i.e. $H^0(\mathcal F_0) \otimes H^0(\omega_C \otimes \mathcal F_0^*)$, as in \cite[Proof of Prop.\;3.9]{CFK} one has \[\begin{array}{ccl} H^0(\mathcal F_0) \otimes H^0(\omega_C \otimes \mathcal F_0^*) & \cong & \left(H^0(K_C-D) \otimes H^0(D)\right) \oplus \left(H^0(K_C-D) \otimes H^0(p)\right) \oplus\\ & & \left(H^0(K_C-p) \otimes H^0(D)\right) \oplus \left(H^0(K_C-p) \otimes H^0(p)\right). \end{array} \]On the other hand, since $$\det (\mathcal F_0) = 2 K_C - p - D \;\; {\rm and} \;\; Sym^2(\mathcal F_0^*)= (p + D - 2 K_C) \oplus (2p - 2 K_C) \oplus (2D - 2K_C),$$one has $$\omega_C \otimes Sym^2(\mathcal F_0^*) \otimes \det (\mathcal F_0) \cong K_C \oplus (K_C + p - D) \oplus (K_C+D-p).$$Therefore, concerning the target of the map $\psi^{\vee}_C$, one has: \[\begin{array}{ccl} {\small H^0\left(\omega_C \otimes Sym^2(\mathcal F_0^*) \otimes \det (\mathcal F_0) \right) \oplus H^0\left(\omega_C^{\otimes 2}\right)} & \cong & H^0(K_C) \oplus H^0(K_C + p - D) \oplus\\ & & H^0(K_C+D-p) \oplus H^0(2K_C).
\end{array} \] By the above decomposition of $H^0(\mathcal F_0) \otimes H^0(\omega_C \otimes \mathcal F_0^*)$ and of $H^0\left(\omega_C \otimes Sym^2(\mathcal F_0^*) \otimes \det (\mathcal F_0) \right) \oplus H^0\left(\omega_C^{\otimes 2}\right)$, one considers the following natural maps: \begin{eqnarray*} \mu_{0,D}: & H^0(D)\otimes H^0(K_C-D)\to H^0(K_C),\\ \mu_{p,K_C-D}:& H^0(p) \otimes H^0(K_C-D)\to H^0(K_C-D+p),\\ \mu_{D,K_C-p}: & H^0(D) \otimes H^0(K_C-p)\to H^0(K_C+D-p),\\ \mu_{0, p}: & H^0(p)\otimes H^0(K_C-p) \to H^0(K_C), \end{eqnarray*}(which are simply defined by multiplication of global sections of line bundles and are all injective, as $h^0(D) = h^0(p) =1$) and the natural injection $$\iota : H^0(K_C) \hookrightarrow H^0(2K_C),$$induced by any choice of an effective divisor in $|K_C|$. Looking at the Chern classes of the line bundles involved, one naturally defines $$\psi_C^{\vee} := \mu_{0, D} \oplus \mu_{p,K_C-D} \oplus \mu_{D,K_C-p} \oplus (\iota \circ \mu_{0, p}),$$which is therefore injective. Moreover, it is compatible with the dual maps $H^1(\rho_*(\gamma_S))^{\vee}$ and $H^1(\rho_*(\tau_S))^{\vee}$, as $\mathcal F_0$ splits. The previous argument shows $(ii)$, completing the proof. \end{proof} To conclude the proof of Theorem \ref{thm:Hilb} (i), namely that $\mathcal Y$ fills up a unique component, say $\mathcal H_{\rm reg}$, with all the properties mentioned therein, it suffices to observe that $$ 7g-7 + k_2(k_2-2) = \dim\; \mathcal Y \leq \dim \; T_{[S]} (\mathcal H_{d,g,k_2-1}) = h^0(S,\N_{S / \Pp^{k_2-1}})$$ and to use Proposition \ref{prop:Normalreg} (i). The fact that $\mathcal H_{\rm reg}$ is a {\em regular} component of $\mathcal H_{d,g,k_2-1}$ follows from the fact that $\chi(S, \N_{S / \Pp^{k_2-1}}) = h^0(S,\N_{S / \Pp^{k_2-1}})$, as in \eqref{eq:tgS3bis}, i.e. $\mathcal H_{\rm reg}$ is reduced and of the expected dimension.
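As a sanity check on the dimension count, the identity $11g - 2d - 15 + k_2^2 = 7g - 7 + k_2(k_2-2)$, with $k_2 = d - 2g + 4$, used above can also be verified mechanically; a quick sketch in Python (purely a check of the arithmetic, not part of the argument):

```python
# Check the identity dim Y = (3g-3) + (8g-2d-11) + (k2^2-1) = 7g-7 + k2(k2-2),
# with k2 = d - 2g + 4, over a grid of integers (both sides are polynomials
# of low degree in g and d, so agreement on a grid confirms the identity).
def dim_Y(g, d):
    k2 = d - 2 * g + 4
    # dim M_g + dim B_reg + dim PGL(k_2, C)
    return (3 * g - 3) + (8 * g - 2 * d - 11) + (k2**2 - 1)

def dim_Y_closed(g, d):
    k2 = d - 2 * g + 4
    return 7 * g - 7 + k2 * (k2 - 2)

assert all(dim_Y(g, d) == dim_Y_closed(g, d)
           for g in range(2, 30) for d in range(1, 100))
print("identity verified")
```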
\section{Introduction} Contemporary science and engineering are built on first-principle models of physical, biological, and social systems. A primary scientific model, such as Newton's laws or Maxwell's equations, is first established in a field like electromagnetism and then supports a variety of applications in mechanical or electrical engineering. In this approach, empirical data are used to validate the first-principle models and to estimate some of their unknown or imprecise parameters directly. In many areas, however, the first principles are unknown, or the systems under study are too complex to be modeled mathematically. With the expanding use of computers, such systems generate large amounts of data. In the absence of first-principle models, these available data are used to derive models that estimate useful relationships between system variables (e.g., between unknown inputs and outputs). The need to understand large, complex, information-rich databases is common to practically all fields of business, engineering, and science. In the business world, customer and company data are a source of strength, and the ability to extract the useful knowledge hidden in these data, and to act on it, is increasingly important in today's competitive world. The whole process of applying computer-based methodology, including new techniques for discovering knowledge in data, is called data mining. Data mining is an iterative process in which progress is made by discovery, whether through automatic or manual means. It is most useful in exploratory analysis scenarios, where there are no preconceived ideas about what the output will be: it is the search for new, valuable, non-trivial information in big data. It is a collaboration between human and computer, in which the best results are achieved by balancing the knowledge of experts, who describe the problems and objectives, with the search capabilities of computers. Do you believe in accidents in life?
As an analyst, your answer is likely negative, or will become so from now on. Consider a simple event: tossing a coin. It is a random event; no one can predict which face will appear after the toss. That may be true, but the fact that no one can predict it does not mean that it is impossible in principle. If we knew factors such as the speed of the throw, the rotation angle, the properties of the constituent materials, the mass distribution, and the wind velocity and direction, then with some effort and time we would be quite able to predict the result of tossing a coin, since the physical laws governing the motion are known. Let us now move on to another example, one where we can predict the outcome: a cup will break if it falls from a sufficient height onto a hard floor. In the few seconds during which the cup falls, we analyze the relevant factors and know that we will get a broken cup. How do we have the proven ability to do such a thing? We have never seen this particular cup break before, and the physical relations that describe the breaking of a cup are unknown to most of us. Of course, the cup may remain intact by ``luck'' in individual cases, but this is unusual, and its failure to break is not mere ``luck'' either: it too follows the laws of physics, for example when the energy of the collision is absorbed by the ground. Well, how can people know what will happen in some cases and not in others? The most common explanation is to label some situations as predictable and to describe the others as mere chance. But let us put forward the following assumptions: -The vast majority of the processes we observe in our environment are not the result of chance.
-Our inability to accurately characterize and extrapolate processes stems from our inability to identify or measure the underlying factors, or the correlations between them. In the case of the falling cup, we were able to quickly identify the most important features, such as the material, the height of the fall, and the nature of the floor, and in a very short time guess the likelihood of the cup breaking by comparison with previous experience; but we cannot do this with the coin. We can watch the coin being thrown any number of times, but we will never succeed in identifying the necessary factors quickly enough to extrapolate the result of a random toss. So, what happened in our heads when we predicted the breaking of the cup after the impact? We measured the characteristics of the event, that is, we collected the data describing the fall of the cup, and then arrived at a conclusion by comparison very quickly. The comparison is made with previous falls of cups, glasses, and similar objects, on the basis of similarity measures, and two things are necessary: first, we need data on past events to be available; second, we need to recognize the similarity between the present event and the previously recorded data. Finally, we are able to guess, or predict, by looking for the most similar event in the past and taking its outcome as the likely outcome of the current event. This search for the event closest to the current one is a search among stored examples for the best match, and it gives us the closest and most accurate results. Not every fall will break the cup, but most of us can make the right guess. The ability to predict the future relatively correctly enables us to anticipate potential problems, avoid them, and make wiser decisions, on the principle that similar events have similar outcomes.
All of this is done using data mining, machine learning, and statistical methods that mimic human learning, add other ways of seeing what we do not see, and analyze it so quickly that we can anticipate the future and exploit it in real time. The basic definition of decision making is: a study to identify and choose between options and alternatives depending on one's requirements and values; a process to reduce hesitation about the available options in order to reach a scientific and reasonable choice, because our conclusions may sometimes be unreliable due to insufficient research. Adequate knowledge of the alternatives reduces risk and uncertainty, but does not eliminate it. \section{Decision Support Systems} Decision support systems target decision-making directly, while business intelligence provides accurate and timely information that supports the decision indirectly. The means used in business intelligence are the same as those used in decision support systems, such as data mining and predictive analysis; software companies carried the development of decision support systems, begun as projects by undergraduate and graduate students, into business intelligence products. For example, a decision support system designed at Harvard in the 1990s was used to predict which departments would need to be opened in the coming years, or to calculate the best price for a new product. Like those of business intelligence, the objectives of decision support systems are mostly analytical[1]. \subsection{Decisions Types} \textbf{Whether Decisions:} Yes/no, either/or decisions. The person decides whether the decision is achievable or not. Part of the decision-making process examines the pros and cons, and if the answer is ``no,'' another alternative is studied. \textbf{Which Decisions:} These decisions choose among a range of alternatives, comparing the options against a set of criteria to find the most suitable one.
\textbf{Contingent Decisions:} Decisions that are already known or made but set aside until certain conditions are met. Most people make conditional decisions but hold them until they have the opportunity to apply them, depending on time, price, availability, motivation, energy, etc. \section{Types of Decision Modelling} \subsection{Kepner Tregoe Model} This model is a systematic way of solving problems. It has four steps: \\1. Identify and evaluate the situation. \\2. Analyze the problem. \\3. Evaluate the decision. \\4. Analyze potential problems. \subsection{Decision Step Model} Sometimes called the logical decision model or the 8-step decision model; it admits many variations, but the steps are always carried out sequentially, step by step. The situation and the problem are important factors in determining the type of decision model to be used[2]. \subsection{Six Thinking Hats Model} This model was developed to uncover all angles and viewpoints in a complicated situation. The concept is six hats in different colors (green, yellow, blue, white, red, black), each representing a mode of thinking or emotion. \subsection{Carnegie Decision Model} Developed at Carnegie University, this model is a satisficing strategy, ``satisficing'' being a blend of ``satisfy'' and ``suffice.'' Alternatives are examined until a minimally acceptable option is found on which collective agreement can be reached[3]. \subsection{Iterative Decision Model} This model is usually used for decisions involving techniques and steps that are built up incrementally and tested from time to time. \subsection{Vroom Yetton Model} This model focuses not only on making the most efficient and best decision, but also on the way the decision is reached. It contains a set of seven yes/no questions, after which decision criteria are computed to select the appropriate case.
\subsection{Probability Decision Model} This model is a framework centered around two axes or dimensions: first, agreement on the goal; second, technical knowledge, i.e. the understanding of the cause-and-effect relationships necessary to achieve the goal. A two-dimensional matrix is built, in which the first dimension measures consensus on the goal and the second measures technical knowledge[4]. \section{Data Mining in Business} Large amounts of information are collected every day, and analyzing these data has become a necessity. Data mining can meet this need by providing the means to discover knowledge in data; it is thus a natural result of the information revolution. The massive growth of information means we live in the information age. The information around us is abundant and scattered, and we as analysts need to organize it into structures in order to extract knowledge from it, which is the primary purpose of data analysis. That is, we live today in the age of knowledge, and data mining is well suited to deriving this knowledge. There are several synonyms for data mining: \\-Knowledge mining from data. \\-Knowledge extraction. \\-Data/pattern analysis. \\-Data archaeology. \\-Data dredging. \section{Classification} Classification is a form of data analysis that extracts models describing important data classes. These models, called classifiers, predict categorical (discrete, unordered) class labels. Most classification methods have been developed by researchers in machine learning, pattern recognition, and statistics. Most of these algorithms assume the data fit in main memory and are best suited to small data sizes. Recent research in data mining has therefore worked on developing classification and prediction algorithms that can handle large volumes of data.
Classification has a significant number of applications, such as fraud detection, forecasting, production, medical diagnosis, and marketing.[6] The k-nearest-neighbor method was first described in the early 1950s but was not widely used until the 1960s, when increased computing power made it practical; it is considered a lazy learner. Decision trees, SVMs, and neural networks, by contrast, are eager learners: given a training set, they build a classification model before receiving test data. When a training tuple is provided, a lazy learner simply stores it, performing only a small amount of processing, and waits until it is given a test record; only when it sees the test record does it classify it based on its similarity to the stored training records. A lazy learner thus does less work when presented with a training tuple and more work when it classifies or makes a numeric prediction. Lazy learning is computationally expensive at classification time and requires efficient storage and indexing techniques; it has been studied for implementation on parallel hardware, and it provides little insight into the structure of the data. On the other hand, lazy learning naturally supports incremental learning, and it can model complex decision spaces that cannot be described by the simpler hypothesis spaces of other learning algorithms, such as the hyper-rectangular regions produced by decision trees.[4] \section{Artificial Neural Network} Artificial neural networks are a branch of artificial intelligence that mimics biological neural networks for machine learning. Neural networks have many important characteristics, including their ability to learn complex models and apply them to new data.[3] An artificial neural network processes information in the following steps: \\1-Processing is done in simple processing elements called neurons. \\2-Signals are passed between neurons via interconnection lines.
\\3-Each interconnection carries a weight, which multiplies the value passed into the neuron. \\4-Each neuron applies an activation function to its net input to determine its output. \section{Decision Trees} A decision tree is a structure of nodes linked by paths under certain conditions, leading to the conclusions needed for classification. The tree is built from the root down to the branches, which are connected by conditions that enable us to reach a solution, describe the problem more easily, and illustrate the complex cases that need descriptive analysis. The most famous decision tree algorithms are CART, C4.5, and ID3. The root is the attribute with the highest gain; the tree then forks on the multiple values of the selected attribute, down to the leaves, each of which represents one of the classes that form the output of the classification process. A decision tree of this kind consists of three types of nodes: \\1-Decision nodes, represented by squares. \\2-Chance nodes, represented by circles. \\3-End nodes, represented by triangles. Decision trees thus have three types of nodes, and two types of branches. Branches emerging from a decision node are decision branches; each branch represents one of the available alternatives or event paths at that node. The set of choices must be mutually exclusive (selecting one rules out the others) and collectively exhaustive (all available alternatives must be included in the set). Each terminal node has a final value associated with it, called the resulting value.
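The three node types just listed lend themselves to a simple ``rollback'' evaluation: end nodes return their resulting value, chance nodes a probability-weighted average over their event branches, and decision nodes the best of their alternatives. A toy sketch in Python (the tuple encoding and the numeric weights are purely illustrative):

```python
# Rollback evaluation of a decision tree with decision, chance, and end nodes.
# Hypothetical node encoding: ("end", value),
# ("chance", [(prob, child), ...]), ("decision", [child, ...]).
def rollback(node):
    kind = node[0]
    if kind == "end":
        return node[1]  # terminal (resulting) value
    if kind == "chance":
        # expected value over the event branches
        return sum(p * rollback(child) for p, child in node[1])
    # decision node: pick the alternative with the best value
    return max(rollback(child) for child in node[1])

# Launch a product (uncertain payoff) vs. do nothing.
tree = ("decision", [
    ("chance", [(0.5, ("end", 100.0)), (0.5, ("end", -20.0))]),  # launch
    ("end", 0.0),                                                # do nothing
])
print(rollback(tree))  # 40.0: the launch branch has the higher expected value
```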
Each final value is calculated as the result of a scenario: the sequence of decisions and events on the particular path that leads from the primary decision node to that final node.[7] To determine a final value, values (weights, or cash flows according to some references) are assigned to the decision branches and the event branches; the weights along the branches leading to a terminal node are then accumulated to give its final value. In some problems, a model with more detailed values is needed to determine the final values. \section{Decision Trees Algorithms} Most decision tree algorithms adopt the splitting strategy known as ``divide and conquer.'' In this chapter, we discuss the ``C4.5'' algorithm, an extension of the ``ID3'' algorithm that improves on its disadvantages. \subsection{ID3 algorithm} In decision tree learning, ID3 is an algorithm invented by Ross Quinlan and used to generate a decision tree from a given data set. ID3 is typically used in machine learning and in natural language processing. Decision tree techniques construct a tree to model the classification process; once the tree is built, it is applied to each tuple in the database to obtain its classification. The ID3 algorithm is based on information entropy. Its basic idea is that all samples are assigned to different classes according to the various values of the attribute set; its essence is to determine the best classification attribute from the attribute set.[6][9] The algorithm selects information gain as the criterion for attribute selection: the attribute with the highest information gain is chosen as the splitting attribute for the current node, so that the information needed to classify the resulting subsets is as small as possible.
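The entropy-based attribute selection described above can be sketched in a few lines; the following Python sketch (helper names and the toy data are illustrative, not from the text) computes ID3's information gain and, for comparison, the gain ratio preferred by C4.5:

```python
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy of a list of class labels.
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # ID3's criterion: entropy before the split minus the weighted
    # entropy of the subsets produced by splitting on `attr`.
    n = len(labels)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def gain_ratio(rows, labels, attr):
    # C4.5 normalizes the gain by the split information of the attribute.
    n = len(labels)
    counts = Counter(row[attr] for row in rows)
    split_info = -sum((c / n) * log2(c / n) for c in counts.values())
    return info_gain(rows, labels, attr) / split_info if split_info else 0.0

# Toy example: 'outlook' separates the classes perfectly, 'windy' not at all.
rows = [{"outlook": "sunny", "windy": 0}, {"outlook": "sunny", "windy": 1},
        {"outlook": "rain", "windy": 0}, {"outlook": "rain", "windy": 1}]
labels = ["no", "no", "yes", "yes"]
print(info_gain(rows, labels, "outlook"))  # 1.0: a perfect split
print(info_gain(rows, labels, "windy"))   # 0.0: no information gained
```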
Branches are then constructed according to the different values of the attribute, and the process is called recursively on each branch to create further nodes and branches, until all the samples in a branch belong to the same class. The selection of the splitting attributes thus depends on the concepts of entropy and information gain. \subsection{C4.5 algorithm} The decision trees created by the C4.5 algorithm can be used for classification, and C4.5 is therefore often referred to as a statistical classifier. The default criterion for selecting the splitting attribute in C4.5 is the information gain ratio, rather than the plain information gain of ID3. It is an effective classification method based on decision trees, which enables us to infer rules that are easily understood from the resulting tree. The tree is constructed based on the gain values obtained by measuring the gain ratio for each attribute of the input set, which guides the process of building the tree.[8] \section{Dataset Description} The dataset comes from a global competition that took place between October 2014 and February 2015 on Kaggle (The Home of Data Science), a platform that hosts competitions and business contracts for companies looking for analyses and predictions of their data. The site hosts several forecasting competitions, such as movie recommendation and behavior forecasting across social media sites. We selected this contest from the business intelligence category. The dataset belongs to AVAZU, an advertising company, which describes it as follows: ``Click-through rate (CTR) is an important criterion for evaluating ad performance, so click prediction systems are essential and widely used for sponsored search and real-time bidding.''
``This competition provides 11 days of AVAZU data to build and test prediction models. Can you find a strategy that beats the standard classification algorithms?'' Training set: 10 days of click-through data in chronological order, about 44 million records. Test set: one day of ads, with about a million records, for testing the prediction model. The company gave no other information and left the rest to the data scientists. The data values are hashed with the MD5 algorithm, which is a one-way hash and therefore cannot be decrypted, although some of the encoded site values have been matched to the names of the countries where the ads are displayed. The company removed the geographic area and modified the data to maintain confidentiality. The large volume of data prevents the use of all dataset records, so we restrict ourselves to part of one day of data: the data extend from the 20th to the 31st of October 2014, and even a single day amounts to roughly 4 million records, so only the hours 12 AM and 01 AM of the night of the 30th (the day closest to the test day) were taken, giving 178,640 records to be studied. \section{Using Data Mining in Business Intelligence} It is necessary for entrepreneurs to gain a better understanding of the context of their business, such as customers, markets, resources, equipment, and competitors. Without data mining, many entrepreneurs cannot effectively analyze markets, compare customer feedback on similar products, discover the strengths and weaknesses of competitors, retain their most valuable customers, or make smart business decisions. It is clear that data mining is at the heart of business intelligence, whose analytical tools depend on multidimensional data mining.
An example illustrating the use of data mining in business intelligence: analysis of a company's loyalty model requires a classification model for customers who may leave the company, for example customers who may leave an ISP and move to another. This kind of analysis is important for companies that want to win customers from their competitors. In this case, data scientists build new models every day to capture all the variables that may change; the model is then applied to all existing customers. According to the model, every customer who is likely to leave the company and has not received a special offer in the last 30 days is given a special offer (e.g., buying a mobile phone at a lower price). This offer increases the loyalty of the customer and reduces the likelihood of a move to another company. Based on this analysis of the loyalty model, the company's management can develop a strategy of campaigns that continually focus on increasing the loyalty of at-risk customers. \section{Conclusion and Future Work} Decision support systems and business intelligence have shown significant similarity since the emergence of the latter: decision support systems target decision-making directly, while business intelligence provides accurate and timely information that supports the decision indirectly, using the same means, such as data mining and predictive analysis, as illustrated by the examples discussed in this article.
This article discussed recent general classification and prediction algorithms, such as SVM, FFM, and C4.5, which are used to build better models to support decision making in business intelligence (BI), including essential steps for preprocessing the data: for example, weighting attributes using the information gain ratio, and estimating and filling missing values using algorithms such as K-Means, K-Nearest Neighbor, linear regression, and neural networks (back-propagation). \newpage
\section{Introduction\label{sec:intro}} From the particle physics point of view, the simplest, most popular, and arguably most robust mechanism leading to the correct amount of cold dark matter (DM) in the early Universe is thermal freeze-out (see, e.g.,\cite{Kolb:1990vq,Gondolo:1990dk,Jungman:1995df,Dodelson:2003ft}). Briefly stated, one assumes that the DM consists of one or more matter species that were originally in thermal equilibrium with the Standard Model (SM) after the Big Bang and that, as the Universe expanded and cooled down, ``froze'' out of equilibrium when their number density became too low for annihilation and creation processes to take place. As is well known, in the context of the freeze-out mechanism the measurement of the relic abundance provided by WMAP and Planck, $\Omega_{\textrm{PL}}h^2=0.1188\pm0.0010$\cite{Komatsu:2010fb,Ade:2015xua}, implies a rather specific value for the thermally averaged annihilation cross section of the DM into SM particles: $\langle\sigv\rangle\approx 3\times 10^{-26}\,\textrm{cm}^3/\textrm{s}\approx 1\,\textrm{pb}$. Nevertheless, the thermal mechanism fails to provide any additional information on the nature of the DM itself, since a cross section of that size can result from a discouragingly wide range of DM mass values, spin quantum numbers, and DM-SM coupling strengths. Thus, in the absence of more information, one has almost always to resort to some theoretical assumptions in order to narrow down the search for DM. Since the 1990s, expectations about the scale of the new physics beyond the SM (BSM) have been driven by the theorists' discomfort with the hierarchy problem. This is the well-known fact that in a low-energy effective theory that includes one or more light fundamental scalars (as the SM with a Higgs boson likely is), one expects enormous quantum corrections to the scalar's mass from the physics in the UV (the Planck scale, in the absence of anything else).
Given the broad separation between the characteristic energies in play, this means that in order to get electroweak symmetry breaking (EWSB) one should fine-tune the fundamental (unknown) Lagrangian parameters at the level -- again in the absence of anything lighter than the Planck scale -- of one part in $\sim 10^{28}$, unless, of course, additional degrees of freedom are present, preferably close to the Higgs mass itself (say $\sim 100-1000\gev$). Remarkably, simply on dimensional grounds, if one of these expected TeV-scale BSM particles were to be the DM, its coupling to the SM extracted from the freeze-out mechanism would be of the size of the electroweak coupling constant, $g\approx (16\pi m_{\textrm{DM}}^2\cdot1\,\textrm{pb})^{1/4}\approx 0.1-1$. This fascinating coincidence, which, in light of its singling out specifically weakly interacting massive particles, or WIMPs, is known as the ``WIMP miracle,'' maintains its attractiveness to this day, even if the LHC has failed to discover new particles below the scale of approximately 2\tev\cite{Atlas_LHC,CMS_LHC}. Arguably the most complete and well-motivated of the known BSM theories still remains low-scale supersymmetry (SUSY) (see, e.g.,\cite{Martin:1997ns}, for a popular review). From the theoretical point of view, not only does SUSY provide possibly the most elegant solution to the hierarchy problem (if one allows for the possibility that, given the current LHC bounds, the theory might have to be amended to regain full naturalness); it also leads to a more precise UV unification of the gauge couplings than in the SM alone; and it provides a solid rationale for the measured values of the Higgs boson and top quark masses and, by extension, for radiative EWSB. From the phenomenological point of view, the Minimal Supersymmetric Standard Model (MSSM) contains all the necessary ingredients for successful baryogenesis and provides a framework for cosmic inflation. 
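The dimensional estimate for the WIMP coupling quoted above is easy to check numerically. The following minimal sketch (ours, for illustration only; the unit conversion $1\,\textrm{pb}\approx 2.57\times 10^{-9}\,\textrm{GeV}^{-2}$ and the sample masses are the only inputs) confirms that a thermal cross section of $1\,\textrm{pb}$ maps TeV-scale DM onto electroweak-size couplings:

```python
import math

# Illustrative check of the "WIMP miracle" estimate g ~ (16*pi*m_DM^2 * sigma)^(1/4).
# Unit conversion: 1 pb ~ 2.568e-9 GeV^-2 in natural units.
PB_IN_GEV2 = 2.568e-9

def wimp_coupling(m_dm_gev, sigma_pb=1.0):
    """Dimensional estimate of the DM-SM coupling from the freeze-out cross section."""
    return (16.0 * math.pi * m_dm_gev**2 * sigma_pb * PB_IN_GEV2) ** 0.25

for m in (100.0, 1000.0):
    # Both values land in the range 0.1-1 quoted in the text.
    print(f"m_DM = {m:6.0f} GeV  ->  g ~ {wimp_coupling(m):.2f}")
```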
It thus makes sense that, of all possible candidates for WIMPs, through the years a great deal of attention has been dedicated to the particles of the MSSM. In this review we give a compact summary of the subject of DM in the traditional MSSM. After briefly surveying the particles with the potential of providing a good DM candidate, we argue that the nearly pure higgsino neutralino survives to this day as perhaps the only one that is not in substantial tension with any phenomenological constraint. Interestingly, it does so in a relatively model-independent way, without the need to resort to narrow or secluded regions of the parameter space. We will thus review the higgsino's prospects for detection in direct underground DM searches, in indirect searches for DM in gamma-ray and neutrino telescopes, and at the LHC. Incidentally, we will show that, in those models where SUSY breaking is transmitted to the visible sector at the scale of Grand Unification (GUT), the detection prospects of higgsino DM become tightly bound to the typical mass of the sfermions in the spectrum and, as a direct consequence, to the size of the Higgs boson mass. In recent months several comprehensive reviews on the status of WIMP dark matter have appeared in the literature\cite{Gelmini:2016emn,Arcadi:2017kky,Plehn:2017fdg,Roszkowski:2017nbc}, one of which, co-authored by one of us, dedicated a full chapter to the MSSM neutralino, with particular attention to the detection prospects of a $\sim1\tev$ higgsino. While that work is broader in scope, casting light on the experimental opportunities provided by neutralinos in the context of the wider picture of thermal DM models, DM constraints, and existing experimental anomalies, we concentrate here instead on the specific physical characteristics of higgsinos, underlining what we believe makes them currently stand out as the most interesting elements in the DM panorama of the MSSM. 
In this we are not dissimilar, perhaps, to recently published studies in a similar vein\cite{Baer:2016ucr,Krall:2017xij}. The structure of the review is as follows. In \refsec{sec:mssmdm} we recall the particles of the MSSM that can provide a good DM candidate, classifying them according to their transformation properties under the SM gauge symmetry group. In \refsec{sec:pheno} we single out the higgsino as the most promising candidate of the list and review its detection prospects in different and complementary experimental venues. We dedicate an additional subsection to the calculation of typical fine tuning and to expectations for the scale of the superpartners in models constrained at the GUT scale. We summarize the main points and conclude in \refsec{sec:sum}. \section{Dark matter in the MSSM}\label{sec:mssmdm} One of the features making the MSSM very attractive from a phenomenological point of view is that its gauge symmetry structure originates directly from the supersymmetrization of the SM itself. As such, the fundamental gauge symmetry is SU(3)$\times$SU(2)$\times$U(1), and the dimensionless couplings are of the strong, electroweak, or SM Yukawa type. One of the consequences is that a potentially viable DM particle is also expected to interact with SM-like strength. Since cosmological observations have long excluded the possibility of DM particles being charged under color\cite{PhysRevD.41.3594} and, on the other hand, the DM is by definition ``dark,'' or practically electrically neutral\cite{Smith:1979rz,Jungman:1995df}, one is led to conclude that all viable DM candidates in the MSSM can be classified solely on the basis of the SU(2) representation they belong to. Moreover, the available representations are limited to those that can be found in the SM: SU(2) singlets, doublets, and the adjoint. 
Before we proceed to briefly review these three groups individually, we remind the reader that in order to make the lightest SUSY particle (LSP) stable on cosmological time scales, one introduces in the MSSM an additional discrete symmetry, R-parity\cite{Farrar:1978xj,Dimopoulos:1981zb,Weinberg:1981wj,Sakai:1981pk,Dimopoulos:1981dw}, under which only the superpartners of the SM fermions, gauge bosons, and Higgs scalar fields are odd. The origin of R-parity is still an active subject of research, and addressing the issue goes beyond the scope of the present review. We just point out that R-parity violation is strongly constrained phenomenologically by the proton decay rate and electroweak precision measurements\cite{Barbier:2004ez}. The only particles of the MSSM that are electrically and color-neutral are the neutrinos, their scalar superpartners, called \textit{sneutrinos}, and, finally, the \textit{neutralinos}. Neutralinos, $\chi_{i=1,..,4}$, are Majorana fermion mass eigenstates emerging, after EWSB, from the diagonalization of the mass matrix of four electrically and color-neutral SUSY states (see\cite{Ellis:1983ew,Griest:1988ma,Griest:1988yr,PhysRevD.41.3565} for early studies and\cite{Jungman:1995df} for a comprehensive, classic review). Two of these particles are \textit{gauginos}, fermionic superpartners of the SM gauge bosons. The \textit{bino}, $\tilde{B}$, in particular, is the partner of the U(1) gauge boson, while the neutral \textit{wino}, $\tilde{W}$, is the partner of the SU(2) gauge boson $W_3$. The other two states are the neutral \textit{higgsinos}, $\tilde{H}_u$ and $\tilde{H}_d$, which belong to a vector-like pair of Higgs doublet superfields. If the lightest neutralino, hereafter denoted simply by $\chi$, is the LSP, it can be the DM particle. 
At the tree level, the neutralino mass matrix takes the well-known form \bea\label{neutmatr} \mathbf{M_\chi}= \begin{bmatrix} M_1 & 0 & -\frac{g' v_d}{\sqrt{2}} & \frac{g' v_u}{\sqrt{2}} \\ 0 & M_2 & \frac{g v_d}{\sqrt{2}} & -\frac{g v_u}{\sqrt{2}} \\ -\frac{g' v_d}{\sqrt{2}} & \frac{g v_d}{\sqrt{2}} & 0 & -\mu \\ \frac{g' v_u}{\sqrt{2}} & -\frac{g v_u}{\sqrt{2}} & -\mu & 0 \end{bmatrix}, \eea where $g$ and $g'$ are the SU(2) and U(1) gauge couplings, respectively, $v_u$ and $v_d$ are the vacuum expectation values (vevs) of the neutral components of the scalar Higgs doublets, $M_1$ and $M_2$ are the soft SUSY-breaking bare masses of the bino and wino, respectively, and $\mu$ is the vector-like mass parameter of the Higgs doublet superfields. In the remainder of this section we give an overview of the aforementioned DM candidates of the MSSM, highlighting the strongest phenomenological constraints that can be applied in each case. We will not, however, discuss the neutrinos. It has long been known\cite{Tremaine:1979we,White:1984yj} that the SM neutrinos do not provide, on their own, a viable candidate for cold DM. Their masses are at most of order an eV, so that they are relativistic at the time of decoupling and therefore incur strong constraints from structure formation\cite{Abazajian:2005xn,dePutter:2012sh,Lukash:2012tq}. On the other hand, heavy right-handed neutrinos, whose existence might be postulated on the grounds of the observed neutrino masses, and which could provide a naturally expected extension of the traditional MSSM, also do not provide a good candidate for DM, because they are not protected by R-parity and are therefore not stable over cosmological time scales in most scenarios. \subsection{SU(2) singlets\label{sec:singlet}} \textbf{(Nearly) pure bino.} The first SU(2) singlet DM candidate we present is the bino. 
Because of EWSB, a pure bino state does not exist in the MSSM, but the lightest neutralino behaves like a pure bino to a very good approximation, after the diagonalization of $\mathbf{M_\chi}$, if $|M_1|\ll M_2, \mu$. The interactions of the bino-like neutralino with the SM fields are easily found by directly supersymmetrizing the SM gauge-fermion-fermion interaction and applying the R-parity conservation constraint. The resulting vertex takes the form bino-sfermion-fermion, $\mathcal{L}\supset -X_L \tilde{f}_L \bar{\chi} P_L f-X_R \tilde{f}_R \bar{\chi} P_R f$, where the tree-level couplings, $X_{L,R}=\sqrt{2}\,g'\,Y_{L,R}$, are expressed in terms of the hypercharge assignments $Y_{L,R}$ of the fermion Weyl spinors. The pair-annihilation of bino-like neutralinos in the early Universe proceeds at leading order through the $t$-channel diagram shown in \reffig{fig:dmrelic}(a). The region of the MSSM parameter space where $\abund\approx 0.12$ is obtained in this way is historically known as the \textit{bulk}\cite{Drees:1992am,Baer:1995nc}. One can calculate the thermal cross section for binos, given approximately by\cite{ArkaniHamed:2006mb} \be\label{bulksigv} \langle\sigv\rangle_{\tilde{B}} \approx \sum_{\tilde{f}}\frac{g'^4 Y_{\tilde{f}}^4}{2\pi}\, \frac{\mchi^2\left(m_{\tilde{f}}^4+\mchi^4\right)}{\left(m_{\tilde{f}}^2+\mchi^2\right)^4} \left(\frac{T_F}{m_{\chi}}\right)\,, \ee in terms of the neutralino (bino) mass, \mchi, the sfermion mass $m_{\tilde{f}}$, the hypercharge $Y_{\tilde{f}}$, and the freeze-out temperature $T_F$, which parameterizes the velocity dependence of the $p$-wave cross section and is set here approximately at $T_F \approx (0.04-0.05)\,\mchi$. The bulk has long been known to be strongly constrained by direct SUSY searches at colliders. 
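To get a feel for the numbers, it is instructive to evaluate \refeq{bulksigv} directly. The sketch below is our own illustration: the choice $g'\approx 0.36$, the common slepton mass, and $T_F\approx 0.045\,\mchi$ are assumptions, and four degenerate selectron/smuon states are summed over with $|Y_{\tilde{f}_L}|=1/2$, $|Y_{\tilde{f}_R}|=1$.

```python
import math

PB_IN_GEV2 = 2.568e-9    # 1 pb in natural units (GeV^-2)
G_PRIME = 0.357          # U(1)_Y gauge coupling (illustrative value)

def sigv_bino_pb(m_chi, m_sf, hypercharge, tf_over_m=0.045):
    """Eq. (bulksigv): p-wave bino annihilation through one sfermion species, in pb."""
    pref = G_PRIME**4 * hypercharge**4 / (2.0 * math.pi)
    kin = m_chi**2 * (m_sf**4 + m_chi**4) / (m_sf**2 + m_chi**2) ** 4
    return pref * kin * tf_over_m / PB_IN_GEV2

def sigv_sleptons_pb(m_chi, m_sf):
    """Sum over four degenerate states: two with |Y|=1/2 (L) and two with |Y|=1 (R)."""
    return 2 * sigv_bino_pb(m_chi, m_sf, 0.5) + 2 * sigv_bino_pb(m_chi, m_sf, 1.0)

print(sigv_sleptons_pb(50.0, 100.0))   # O(1) pb only for sleptons this light
print(sigv_sleptons_pb(50.0, 400.0))   # falls far below 1 pb for heavier sleptons
```

The output illustrates the point made in the text: the cross section reaches the thermal value only for slepton masses near the (long-excluded) $\sim 100\gev$ range.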
To give a semi-quantitative estimate of these constraints, let us assume that only selectrons and smuons belong to the light SUSY spectrum, a reasonable ansatz in light of the strong LHC bounds on colored particles\cite{Sirunyan:2017kqq,Aaboud:2017aeu,Aaboud:2017vwy}. Assuming all four left- and right-handed slepton states have the same mass, and inserting $Y_{\tilde{f}_L}=-1/2$, $Y_{\tilde{f}_R}=-1$ in \refeq{bulksigv}, one finds that the cross section is typically much smaller than $\sim 1\,\textrm{pb}$, except in the range $\mchi< m_{\tilde{f}}\lesssim 100\gev$. A charged slepton mass of this size has long been excluded by direct searches at LEP\cite{Patrignani:2016xqp}. If, instead of selectrons and smuons, the light sfermions happen to be staus, the parameter space opens up a little, $m_{\tilde{\tau}_1}\lesssim 150\gev$ for $\mchi\approx 50\gev$, due to the non-negligible mixing between left and right chiral slepton states, which introduces an $s$-wave component to the annihilation cross section (see, e.g.,\cite{Fukushima:2014yia}). Nevertheless, LHC bounds on electroweak production\cite{Aad:2015eda}, implying $m_{\tilde{\tau}_1}\gsim 109\gev$, are by now becoming strongly constraining for these scenarios too, which will soon be probed even more deeply\cite{ATL-PHYS-PUB-2016-021}. Finally, as we have mentioned, SUSY parameter space where bulk sfermions are charged under color is strongly excluded by LHC direct searches. \begin{figure}[t] \centering \subfloat[]{ \label{fig:a} \includegraphics[width=0.25\textwidth]{Figs/Bino_t.pdf} } \hspace{1cm} \subfloat[]{ \label{fig:b} \includegraphics[width=0.255\textwidth]{Figs/Hino_W1.pdf} } \hspace{1cm} \subfloat[]{ \label{fig:c} \includegraphics[width=0.26\textwidth]{Figs/Hino_W2.pdf} } \caption{(a) The dominant early-Universe annihilation channel for a nearly pure bino-like neutralino. 
(b), (c) Examples of annihilation and co-annihilation tree-level channels into gauge bosons for a predominantly higgsino-like neutralino.} \label{fig:dmrelic} \end{figure} If the bino-like neutralino and some other sparticles (sfermions $\tilde{f}$ or other gauginos) are nearly degenerate in mass, the mechanism of co-annihilation\cite{Griest:1990kh,Ellis:1998kh,Ellis:1999mm} provides a way to evade the strong collider bounds. In this case the cross section of \refeq{bulksigv} should be replaced by an effective quantity that takes into account the thermal average of all annihilations and co-annihilations of the kind $\chi\chi,\chi\tilde{f},\tilde{f}\tilde{f}\rightarrow \textrm{SM}\,\textrm{SM}$, some of which are likely to be much more efficient than $\chi\chi\rightarrow \textrm{SM}\,\textrm{SM}$ alone. However, without any guidance from the theory in the UV, co-annihilation of the bino with other sparticles can only be achieved in narrow slices of the parameter space, which require some tuning of the initial parameters to engineer the desired coincidence of neutralino and sfermion mass. And in models that are instead defined in terms of a limited number of free parameters in the UV, like the CMSSM\cite{Kane:1993td}, in which slepton or stop co-annihilation with the bino can occur naturally for particular choices of the initial conditions, the preferred regions of the parameter space are incurring increasingly strong limits from direct LHC searches\cite{Roszkowski:2014wqa,Bechtle:2015nua,Han:2016gvr,Athron:2017qdc,Roszkowski:2017nbc}. Besides, with gaugino universality at the GUT scale it is a struggle to fully accommodate the value of the Higgs mass measured at the LHC\cite{Roszkowski:2014wqa,Ellis:2018jyl} (this problem is resolved if the gluino mass is a free parameter, e.g.,\cite{Akula:2013ioa}). 
Thus, even if co-annihilation of the bino with other sparticles can still lead to viable regions of the parameter space in the most generic parametrizations of the MSSM\cite{Roszkowski:2014iqa}, it is also perhaps not exceedingly attractive from the point of view of naturalness. \bigskip \noindent \textbf{R sneutrino.} The second SU(2) singlet DM candidate of the MSSM is the scalar ``right-handed'' sneutrino. The right-handed sneutrino does not properly belong to the MSSM, which in its original formulation features massless neutrinos, but it naturally emerges in SM extensions with right-handed neutrinos, which can give rise to the neutrino mass via small Yukawa couplings (if the right-handed neutrino is Dirac) or through the see-saw mechanism (if the right-handed neutrino is Majorana; see, e.g.,\cite{Mohapatra:1999em} and references therein). The phenomenology of right-handed sneutrinos as DM, however interesting, is very model-dependent. In traditional see-saw models with a large Majorana mass scale the right-handed sneutrino is too heavy to be the DM. On the other hand, for a sneutrino of the ``Dirac'' type, or, alternatively, of the Majorana type but with a bare mass of the order of the superpartners' mass\cite{ArkaniHamed:2000bq,Borzumati:2000mc}, the only really model-independent vertex with the SM involves a very small Yukawa coupling, $\mathcal{L}\supset -y_{\nu_R}\,\bar{e}_L \tilde{H}_u^{\pm}\,\tilde{\nu}_R -y_{\nu_R}\,\bar{\nu}_L \tilde{H}_u^{0}\,\tilde{\nu}_R$. Thus, the induced $t$-channel processes similar to \reffig{fig:dmrelic}(a), with sneutrinos (charginos) in place of neutralinos (sfermions), and a tiny coupling constant, are not efficient enough to yield the correct \abund. On the other hand, the correct relic density can certainly be obtained thanks to the mixing with the left-handed sneutrino, and SUSY breaking can generate $A$-terms of the order of the SUSY scale, which provide large couplings to the SM Higgs boson. 
The phenomenology of these cases can be very rich and exceeds the scope of this review. We direct the reader to the vast literature on sneutrino DM for further details (see, e.g.,\cite{TuckerSmith:2001hy,TuckerSmith:2004jv,Asaka:2005cn,Arina:2007tm}, for early studies and bounds, and\cite{Arina:2015uea} for a recent LHC analysis). \subsection{SU(2) doublets\label{sec:doublet}} We have seen that singlet DM candidates in the MSSM are accompanied by some uncomfortable features: they are either strongly constrained by collider bounds, viable only in fine-tuned regions of the parameter space, or highly model-dependent in their phenomenology. We therefore move on to reviewing the next set of candidates, the SU(2) doublets.\smallskip \noindent \textbf{(Nearly) pure higgsino.} The most popular SU(2) doublet DM candidate, and the one that appears to us most attractive from a phenomenological point of view, is the higgsino, which is the main subject of this review. As was the case for the bino, there is no pure higgsino state after EWSB, but one obtains an almost pure higgsino-like neutralino by diagonalizing $\mathbf{M_\chi}$ in \refeq{neutmatr} in the limit $|\mu|\ll M_1, M_2$. As supersymmetry assigns a Weyl spinor to each complex state in the scalar Higgs doublets, one counts four physical higgsino states, which, after EWSB, give rise to two Majorana neutralinos, $\chi_1$ (or $\chi$) and $\chi_2$, and a Dirac chargino, $\chi^{\pm}$. When $|\mu|\ll M_1\approx M_2$, the tree-level mass splitting between the two higgsino-like neutralinos is approximately of size $m_Z^2/M_{1,2}$\cite{Martin:1997ns}, and the splitting between the higgsino-like chargino and the lightest neutralino is approximately half of that. Moreover, radiative corrections also induce a non-negligible and irreducible mass splitting (of a few hundred MeV) between the charged and neutral states (see, e.g.,\cite{Drees:1996pk,Nagata:2014wma}). 
To correctly compute the thermally-averaged effective cross section that yields the DM relic abundance, one must take into account all possible annihilations and co-annihilations of higgsino states. For \mchi\ above the $W$ threshold the dominant final state is into $W$ and $Z$ bosons (Figs.~\ref{fig:dmrelic}(b) and \ref{fig:dmrelic}(c) give examples of possible diagrams for these processes), to which higgsino-like neutralinos and charginos couple through the electroweak charged and neutral currents\cite{Jungman:1995df}, \be \mathcal{L}\supset \left(-\frac{g}{2}\,W^+_{\mu}\bar{\chi}\gamma^{\mu}\chi^- -\frac{g}{4\cos\theta_W}\,Z_{\mu}\bar{\chi}_1\gamma^{\mu}\chi_2 +\textrm{h.c.}\right)-\frac{g}{2\cos\theta_W}\,Z_{\mu}\bar{\chi}^+\gamma^{\mu}\left(1 -2\sin^2\theta_W\right) \chi^-. \ee The effective cross section can be obtained at the leading order in the limit of all four states being degenerate (see, e.g.,\cite{ArkaniHamed:2006mb}): \be\label{higgsinosigv} \langle\sigv \rangle_{\tilde{H}}^{(\textrm{eff})} \approx \frac{21\,g^4+3\,g^2 g'^2+11\,g'^4}{512\,\pi\,\mchi^2}\,. \ee For heavy, very pure higgsinos, one should include in the calculation of $\langle\sigv \rangle_{\tilde{H}}^{(\textrm{eff})}$ corrections due to the Sommerfeld enhancement, a well-known non-perturbative effect originating from the fact that if a DM particle is much heavier than the electroweak gauge bosons and relatively slow, the weak force becomes effectively long-range and the impact of the non-relativistic potential on the interaction cross section becomes significant\cite{Hisano:2002fk,Hisano:2004ds}. 
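As a numerical sanity check of the degenerate-limit formula, before any Sommerfeld correction, one can evaluate it directly; in this sketch of ours the couplings $g\approx 0.65$ and $g'\approx 0.36$ are illustrative input values:

```python
import math

PB_IN_GEV2 = 2.568e-9      # 1 pb in GeV^-2
G, G_PRIME = 0.65, 0.357   # illustrative SU(2) and U(1)_Y couplings

def sigv_higgsino_pb(m_chi):
    """Tree-level effective higgsino cross section (degenerate limit), in pb."""
    num = 21 * G**4 + 3 * G**2 * G_PRIME**2 + 11 * G_PRIME**4
    return num / (512.0 * math.pi * m_chi**2) / PB_IN_GEV2

print(sigv_higgsino_pb(1000.0))   # ~1 pb: the thermal value near 1 TeV
print(sigv_higgsino_pb(300.0))    # well above 1 pb: an under-abundant higgsino
```

The output reproduces the statement below: only for $\mchi$ near $1\tev$ does the perturbative estimate return the thermal $\sim 1\,\textrm{pb}$.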
However, in the case of the higgsino the splitting between its charged and neutral components is almost always large enough to effectively wash out substantial non-perturbative effects originating from the resummation of ladder diagrams\cite{Hisano:2006nn,Cirelli:2007xd,Hryczuk:2010zi}, so that in a first approximation \refeq{higgsinosigv} provides a fairly accurate estimate of $\langle\sigv \rangle_{\tilde{H}}^{(\textrm{eff})}$. One can see that the cross section is typically much larger than $\sim 1\,\textrm{pb}$, unless $\mchi\approx 1\tev$ (the precise numerical value is closer to 1.1\tev, as we shall see). Thus, a $\sim 1\tev$ higgsino is on its own a good candidate for the DM in the Universe\cite{Profumo:2004at}, while a higgsino much lighter than 1\tev\ requires one to assume the existence of an additional DM component (e.g., an axion\cite{Baer:2011hx,Baer:2011uz}), needed to get $\abund\approx 0.12$. As we shall see in the next sections, a $\sim 1\tev$ higgsino is generally associated with a large SUSY-breaking scale, and for this reason it is not currently very constrained from a phenomenological point of view. However, its characteristic properties give us hope for a timely detection in direct and indirect DM searches and even, if $\mchi\ll 1\tev$, in collider searches. \bigskip \noindent \textbf{L sneutrino.} We conclude this subsection by reviewing the properties of the only other SU(2) doublet DM candidate in the MSSM: the ``left-handed'' sneutrino, the scalar superpartner of the SM left-handed neutrino. The left-handed sneutrino is a complex scalar field with SU(2)$\times$U(1) quantum numbers equal to the higgsino's. Like the higgsino, it has charged and neutral current couplings to the $W$ and $Z$ bosons, $\mathcal{L}\sim -ig/\sqrt{2}\left(W^+_{\mu}\tilde{\nu}^{\ast}_L \partial^{\mu}\tilde{e}^-_L +W^-_{\mu}\tilde{e}^+_L \partial^{\mu}\tilde{\nu}_L\right)-ig/(2\cos\theta_W)\,Z_{\mu}\,\tilde{\nu}^{\ast}_L \partial^{\mu}\tilde{\nu}_L$\,. 
The mass splitting of the charged and neutral components of the SU(2) doublet is, however, much larger for sneutrinos/sleptons than for higgsinos, being generated through electroweak D-term contributions\cite{Martin:1997ns}: $m^2_{\tilde{e}_L}-m^2_{\tilde{\nu}_L}\approx -m_W^2\cos 2\beta$, where $\tanb\equiv v_u/v_d$. Thus, one should resist the temptation of interpreting \refeq{higgsinosigv} as an accurate estimate of the effective cross section for sneutrinos too, since the co-annihilation of charged and neutral states becomes somewhat less efficient. It turns out\cite{Arina:2007tm} that the mass required to produce $\langle\sigv \rangle_{\tilde{\nu}_L}^{(\textrm{eff})}\approx 1\,\textrm{pb}$ is about $m_{\tilde{\nu}_L}\approx 600-700\gev$. Sneutrinos lighter than that imply the existence of an additional component of DM. A very important constraint on left-handed sneutrinos as DM arises because they, unlike the Majorana higgsino-like neutralinos, are not their own antiparticle, so that their elastic scattering with nuclei in direct detection experiments proceeds also through $t$-channel exchange of a $Z$ boson. By virtue of the sneutrino's neutral current coupling, the spin-independent cross section is approximately given by a Fermi-like contact interaction, $\sigsip\approx \mu_{\textrm{red}}^2 G_F^2/8\pi\approx 10^{-3}\,\textrm{pb}=10^{-39}\,\textrm{cm}^2$, where the reduced mass $\mu_{\textrm{red}}\approx m_p$ for $m_{\tilde{\nu}_L}\gg m_p$. Cross sections of this size have long been excluded by underground detector searches\cite{Falk:1994es,Hall:1997ah}. \subsection{SU(2) adjoint triplet\label{sec:triplet}} \textbf{(Nearly) pure wino.} The only SU(2) triplet DM candidate in the MSSM is the wino-like neutralino, dominated by the fermionic superpartner of the $W_3$ weak gauge boson. 
The wino belongs to the adjoint representation of the gauge group (hypercharge $Y=0$), and the wino-like neutralino emerges, after EWSB, from the diagonalization of \refeq{neutmatr} in the limit $|M_2|\ll M_1, \mu$. One finds a Majorana neutralino, $\chi$, and a Dirac chargino, $\chi^{\pm}$, mass-degenerate at the tree level. In the context of UV complete models of SUSY breaking, spectra with a light wino can arise, for example, in scenarios where SUSY breaking is transmitted via anomaly mediation\cite{Randall:1998uk,Giudice:1998xp}. If the wino LSP is heavier than the electroweak gauge bosons, its dominant annihilation (and chargino co-annihilation) channel in the early Universe is into $W$ (but not $Z$) bosons, to which it couples as $\mathcal{L}\sim -g\,W^{\pm}_{\mu}\bar{\chi}\gamma^{\mu}\chi^{\mp}$. The thermal annihilation cross section is dominated by co-annihilations of the three wino states, similarly to what happens for the doublet higgsinos. Annihilation into fermion--antifermion final states through $t$-channel sfermion exchange, reminiscent of the bino bulk mechanism, has instead long been excluded by LEP limits on the charged slepton masses. Unlike for higgsinos, in the wino case the mass splitting between the charged and neutral fermion components of the SU(2) multiplet is generated exclusively by radiative corrections, $\Delta M_{\widetilde{W}}=(g^2/4\pi)\,m_W \sin^2 (\theta_W/2)\approx 166\mev$\cite{Cirelli:2005uq}. Note that the mass splitting is typically much smaller than for higgsinos, so that one cannot neglect the effects of the Sommerfeld resummation on the calculation of the thermal cross section. When one includes the Sommerfeld enhancement numerically, the correct relic density is obtained for $\mchi\approx 2.7-2.8\tev$\cite{Hisano:2006nn,Cirelli:2007xd,Hryczuk:2010zi}. For a lighter mass, winos do not saturate the relic abundance. 
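The quoted radiative splitting is straightforward to reproduce numerically; in this illustrative check the inputs $g=0.652$, $m_W=80.4\gev$, and $\sin^2\theta_W=0.231$ are assumed reference values:

```python
import math

# Numerical check of the one-loop wino mass splitting quoted above,
# Delta M = (g^2 / 4 pi) * m_W * sin^2(theta_W / 2).
g, m_w, sin2_tw = 0.652, 80.4, 0.231

theta_w = math.asin(math.sqrt(sin2_tw))
delta_m_mev = (g**2 / (4.0 * math.pi)) * m_w * math.sin(theta_w / 2.0) ** 2 * 1000.0
print(f"Delta M_wino ~ {delta_m_mev:.0f} MeV")   # close to the quoted 166 MeV
```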
The Sommerfeld enhancement induces more dramatic modifications of the effective DM annihilation cross section when the average kinetic energy of the WIMP corresponds to speeds of the order of $10^{-3} c$, as in the present-day Universe. This fact has led to the derivation of powerful indirect astrophysical constraints on the annihilation cross section of wino-like neutralinos\cite{Cohen:2013ama,Fan:2013faa,Hryczuk:2014hpa,Beneke:2016jpw,Cuoco:2017iax}. By taking into account the effects of Sommerfeld-enhanced contributions to the annihilation of winos into monochromatic gamma rays, as well as bounds on the present-day cross section to $W^+W^-$ from the diffuse gamma radiation from the Galactic Center and Dwarf Spheroidal satellite galaxies (dSphs), measured by the ground-based and space telescopes H.E.S.S.\cite{Abramowski:2013ax,Abdallah:2016ygi} and Fermi-LAT/MAGIC\cite{Ahnen:2016qkx}, and from cosmic-ray (CR) antiproton data from AMS-02\cite{Cuoco:2017iax,Aguilar:2016kjl}, one can derive strong independent constraints (albeit affected by significant systematic uncertainties) which steeply raise the stakes on the wino as a viable DM particle, especially in scenarios where it saturates the relic abundance. \subsection{Mixed cases\label{sec:mixed}} The four neutralinos of the MSSM are all Majorana fermions that, after EWSB, remain neutral under $U(1)_{\textrm{em}}$ and color. In the absence of a well-separated hierarchy among $M_1$, $M_2$, and $\mu$, the lightest mass eigenstate will be an admixture of the SU(2) gauge multiplets discussed in Secs.~\ref{sec:singlet}-\ref{sec:triplet} but, unlike those cases, it will present properties that differ significantly from a pure gauge eigenstate. When $|M_1|\approx |\mu|$ the neutralino is in a highly mixed bino/higgsino state. 
Mixed neutralinos of this kind (sometimes also called ``well-tempered''\cite{ArkaniHamed:2006mb}), originally observed in mSUGRA parameter space\cite{Chan:1997bi,Feng:1999zg,Feng:2000gh} but able to arise under different boundary conditions (e.g.,\cite{Baer:2006te,Baer:2008ih}), enjoyed some popularity, especially before the advent of the LHC, because they can easily lead to $\abund\approx 0.12$ for values of the $\mu$ parameter as low as a few hundred~GeV, which are favored to solve the hierarchy problem. However, the rapid progress made in the bounds on the spin-independent cross section of the neutralino scattering off nuclei in direct WIMP detection searches, combined with a failure to directly observe scalar fermions and heavy Higgs bosons at the LHC, has rendered scenarios where the lightest neutralino is a rich admixture of gaugino and higgsino much less appealing, if not excluded altogether (see, e.g.,\cite{Badziak:2017the}, for a very recent update of the constraints on bino-higgsino scenarios, and\cite{Beneke:2016jpw} for wino-higgsino scenarios). To briefly set the issue on quantitative grounds, let us estimate the strength of the coupling with which neutralino admixtures of higgsino and gaugino contribute to the spin-independent cross section. We recall that, in the limit of the squarks and heavy Higgs bosons being much heavier than $\mhl=125\gev$, which has become a reasonable assumption after the first two runs of the LHC, the main interaction between the neutralino and heavy nuclei in underground detectors proceeds as in \reffig{fig:sigsip}, via $t$-channel exchange of the 125\gev\ Higgs boson and an effective coupling to gluons through heavy quark loops. As the neutralino-Higgs-neutralino tree-level vertex stems directly from applying the gauge covariant derivative to the Higgs doublets, it is non-zero only for a gaugino/higgsino admixture. 
For \tanb\ sufficiently large to ensure a predominantly SM-like Higgs boson,\footnote{$\tanb>3-4$ is a condition often fulfilled, for instance, in scenarios where EWSB is obtained radiatively via the renormalization group evolution of soft SUSY-breaking parameters constrained at some high scale, as it prevents certain soft masses from running tachyonic at the low scale.} the coupling to the nucleon can thus be expressed entirely in terms of the higgsino fraction (or \textit{purity}), $f_h$, which depends on the elements of the unitary matrix, $N$, diagonalizing \refeq{neutmatr}. \begin{figure}[t] \centering \includegraphics[width=0.2\textwidth]{Figs/ID_higgs.pdf} \caption{The main interaction between the neutralino and heavy nuclei in underground detectors in the limit of squarks and heavy Higgs bosons being much heavier than $\mhl=125\gev$ and in general outside of LHC reach.} \label{fig:sigsip} \end{figure} If $\textrm{diag}[m_{\chi_1},m_{\chi_2},m_{\chi_3},m_{\chi_4}]=N\,\mathbf{M_\chi}N^{\dag}$, one can define $f_h\equiv |N_{13}|^2+|N_{14}|^2$ and express the coupling of interest as $\mathcal{L}\sim (g\sqrt{f_h\left(1-f_h\right)}/4)\bar{\chi}\chi h$\,. Note, incidentally, that deriving an explicit form for the elements of matrix $N$ in terms of bare masses $M_1$, $M_2$, and $\mu$ is not a trivial task even at the tree level, and useful formulas in this regard can be found in several papers, for example\cite{ElKheishen:1992yv,Choi:2001ww,Choi:2004rf,Beylin:2008zz}. By simple inspection of \refeq{neutmatr}, however, one can infer a rough approximation for the higgsino fraction in the limit of nearly pure higgsinos, $|\mu|\ll M_2\approx M_1$: \be\label{hinofrac} 1-f_h\approx\frac{m_W^2}{(M_{1,2}-|\mu|)^2}\,. \ee Equation~(\ref{hinofrac}) becomes quite accurate for $f_h \gsim 0.999$. 
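The interplay between \refeq{neutmatr} and the estimate of \refeq{hinofrac} can be illustrated with a short numerical diagonalization. This is a sketch of ours: the couplings, $\tanb=10$, and the $v\approx 174\gev$ normalization of the vevs are assumptions consistent with the matrix as written above.

```python
import numpy as np

# Illustrative tree-level diagonalization of the neutralino mass matrix, Eq. (neutmatr),
# in the basis (bino, wino, higgsino_d, higgsino_u); v = sqrt(v_u^2 + v_d^2) ~ 174 GeV.
g, gp = 0.65, 0.357
tanb, v = 10.0, 174.0
beta = np.arctan(tanb)
vu, vd = v * np.sin(beta), v * np.cos(beta)

def lsp_and_purity(M1, M2, mu):
    M = np.array([
        [M1, 0.0, -gp * vd / np.sqrt(2), gp * vu / np.sqrt(2)],
        [0.0, M2, g * vd / np.sqrt(2), -g * vu / np.sqrt(2)],
        [-gp * vd / np.sqrt(2), g * vd / np.sqrt(2), 0.0, -mu],
        [gp * vu / np.sqrt(2), -g * vu / np.sqrt(2), -mu, 0.0],
    ])
    vals, vecs = np.linalg.eigh(M)
    lsp = np.argmin(np.abs(vals))                   # lightest mass eigenstate
    f_h = vecs[2, lsp] ** 2 + vecs[3, lsp] ** 2     # |N_13|^2 + |N_14|^2
    return abs(vals[lsp]), f_h

m_lsp, f_h = lsp_and_purity(M1=3000.0, M2=3000.0, mu=1000.0)
print(m_lsp, f_h)                               # LSP near |mu|, purity close to 1
print(1.0 - 80.4**2 / (3000.0 - 1000.0) ** 2)   # rough estimate of f_h, Eq. (hinofrac)
```

For this sample hierarchy the numerical purity and the rough estimate of \refeq{hinofrac} agree at the per-mille level, consistent with the statement above.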
The spin-independent cross section of the neutralino with protons (nucleons), $\sigsip=\left(4\mu_{\textrm{red}}^2/\pi\right) \left|\mathcal{A}_p\right|^2$, can be parameterized for moderate-to-large \tanb\ simply as\cite{Jungman:1995df} \be\label{hinosigsip} \mathcal{A}_p(f_h)\approx a_{\textrm{eff}}\,\frac{f_{TG}}{9}\,\frac{m_p}{v}\,\frac{g\sqrt{f_h\left(1-f_h\right)}}{m_h^2}\,, \ee in terms of the gluon fractional content of the proton, $f_{TG}$ (we use the default value for \texttt{micrOMEGAs~v4.3.1}\cite{Belanger:2013oya}, $f_{TG}=0.92$), and a phenomenological fudge factor, $a_{\textrm{eff}}\approx 0.9-1$, which takes into account the dependence of $\mathcal{A}_p$ on twist-two operators\cite{Drees:1993bu} and higher-order loop corrections\cite{Hisano:2004pv}. We show in \reffig{fig:purity_sigsip} a plot of \sigsip\ as a function of purity $f_h$ for a $\mchi=1\tev$ neutralino (to a first approximation the DM mass affects the cross section only through the reduced mass leading to $\mu_{\textrm{red}}\approx m_p$). One can see that, for admixtures dominated by the higgsino fraction, the most recent XENON-1T 90\%~C.L. upper bound\cite{Aprile:2017iyp} on \sigsip\ enforces $f_h>98\%$, so that viable DM candidates ought to be very close to a pure higgsino state. Since the purity of well-tempered higgsino-dominated neutralinos stays well below~90\% in those models attempting to provide a satisfactory solution to the hierarchy problem while saturating the relic abundance\cite{ArkaniHamed:2006mb}, we conclude that, barring increasingly narrow corners of the parameter space\cite{Badziak:2017the}, these scenarios have become very hard to rescue or justify in light of the most recent direct detection bounds. 
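For concreteness, the parameterization of \refeq{hinosigsip} can also be evaluated numerically; in this sketch the values of $a_{\textrm{eff}}$, $f_{TG}$, the Higgs vev $v=246\gev$, and the sample purities are illustrative assumptions:

```python
import math

# Illustrative evaluation of Eq. (hinosigsip): spin-independent neutralino-proton
# cross section via light-Higgs exchange, in the heavy-neutralino limit mu_red ~ m_p.
GEV2_TO_CM2 = 3.894e-28   # conversion: 1 GeV^-2 in cm^2
g, m_h, v_ew = 0.65, 125.0, 246.0
m_p, f_tg, a_eff = 0.938, 0.92, 0.95

def sigma_si_cm2(f_h):
    amp = a_eff * (f_tg / 9.0) * (m_p / v_ew) * g * math.sqrt(f_h * (1.0 - f_h)) / m_h**2
    mu_red = m_p
    return 4.0 * mu_red**2 / math.pi * amp**2 * GEV2_TO_CM2

print(sigma_si_cm2(0.98))   # a few times 1e-45 cm^2, near current direct-detection limits
print(sigma_si_cm2(0.90))   # larger mixing -> larger cross section
```

The steep dependence on $\sqrt{f_h(1-f_h)}$ is what drives the purity bound discussed above: reducing $f_h$ from $0.98$ to $0.90$ raises \sigsip\ by roughly a factor of a few.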
\bigskip \begin{figure}[t] \centering \includegraphics[width=0.55\textwidth]{Figs/TestSigsip_complete.pdf} \caption{The neutralino-proton spin-independent cross section, \sigsip, for a typical case of predominantly higgsino-like neutralino DM with $\mchi=1.0\tev$ as a function of higgsino purity $f_{\textrm{higgsino}}$ ($\equiv f_h$). } \label{fig:purity_sigsip} \end{figure} To conclude this subsection, we finally recall that in cases where $|M_1|<|\mu|\lesssim 1-2\tev$, one obtains scenarios where the mixed neutralino is predominantly bino-like, but also acquires couplings that originate from its admixture with higgsino states, so that additional mechanisms for obtaining $\langle\sigv\rangle\approx1\,\textrm{pb}$ with respect to \refsec{sec:singlet} are possible. These mechanisms, often called \textit{funnels}, involve resonant or close-to-resonant $s$-channel annihilation of two neutralino LSPs via a nearly on-shell mediator which could be the $Z$ boson (if $\mchi\approx m_Z/2$)\cite{Griest:1988ma}, the SM Higgs boson (if $\mchi= 60-65\gev$)\cite{Ellis:1989pg}, or one of the heavy Higgs bosons of the MSSM\cite{Drees:1992am}. Note that the $Z$-funnel parameter space is strongly constrained by the LHC. The coupling of the lightest neutralino to the $Z$ boson is due exclusively to the isospin neutral current, cf. \refsec{sec:doublet}, which means that in mixed bino-higgsino scenarios it is directly proportional to the higgsino fraction. As a consequence, $f_h$ cannot take excessively small values or, in other words, $\mu$ cannot be much larger than $M_1\approx m_Z/2$. The relative proximity of a mostly higgsino-like chargino and a mostly bino-like neutralino subjects this region of the parameter space to strong bounds from direct LHC multi-lepton searches\cite{Calibbi:2014lga}. 
Light and heavy Higgs boson funnels are less constrained from direct LHC SUSY searches than the $Z$ funnel, since the direct coupling to the lightest neutralino is dependent on $\sqrt{f_h}$ and the mediator can be quite heavy. However, there exist complementary observables which can constrain these regions, like the branching ratio \brbsmumu\cite{Kowalska:2013hha} and direct searches for heavy Higgs bosons in the $\tau\tau$ channel\cite{Arbey:2013jla}. Moreover, as was the case for the co-annihilations of the bino, most phenomenological scenarios require \textit{ad hoc} arrangement of the parameters to obtain the right ratio of neutralino to scalar mass, although this is not necessarily the case for some parameter-space regions of GUT-constrained scenarios like the CMSSM, in which the renormalization group evolution (RGE) of soft masses from a handful of free parameters can lead more naturally to the right mass coincidence (see, e.g.,\cite{Lahanas:1999uy,Ellis:2001msa} for early studies). \section{Phenomenology of higgsino dark matter}\label{sec:pheno} The discussion of \refsec{sec:mssmdm} has led us to conclude that the sole DM candidate of the MSSM emerging almost unscathed from the wealth of observational data of recent years is the nearly pure higgsino. We therefore dedicate this section to the analysis of the prospects for detection of a higgsino-like neutralino in direct DM detection searches, collider searches, and indirect astrophysical signals, and spend a few words on alternative strategies in other experimental venues. We will also give some predictions for the scale of the superpartner particles in traditional models and briefly discuss the issue of fine tuning. 
\subsection{Prospects for detection in direct and indirect searches}\label{sec:prosp} We begin in \reffig{fig:spinCS}(a), where we plot the rescaled spin-independent neutralino-nucleon cross section versus neutralino mass for a nearly pure higgsino under CMSSM/mSUGRA boundary conditions\cite{Kane:1993td}.\footnote{We remind the reader that this means scanning simultaneously over 4 free parameters: \mzero, the universal soft SUSY-breaking scalar mass at the GUT scale; \mhalf, the universal GUT-scale gaugino mass; \azero, the universal GUT-scale soft trilinear coupling; and \tanb, the ratio of the Higgs doublets' vevs. We scan them in this study over broad ranges: $\mzero,\mhalf\in[0.1\tev,30\tev]$, $\azero\in[-30\tev,30\tev]$, $\tanb\in[1,62]$. Additionally, one chooses the sign of $\mu$, which we set here to positive, as its sign does not much affect the region of parameter space with nearly pure higgsino DM (see, e.g.,\cite{Kowalska:2013hha,Roszkowski:2014wqa}). Note that the chosen input mass ranges encompass the parameter space region shown in \reffig{fig:spinCS} in its entirety. In it one finds $\mhalf\lesssim 0.6\,\mzero$, with $5\tev\lesssim \mzero\lesssim 25\tev$, $2.5\tev \lesssim \mhalf\lesssim 15\tev$ due to the Higgs mass measurement, see discussion on pages~14-15.} The color code depicts the higgsino DM relic abundance. For the points of the parameter space corresponding to \abund\ below the Planck measurement\cite{Ade:2015xua}, $\Omega_{\textrm{PL}}h^2 \approx 0.12$, we directly rescale \sigsip\ by $\xi=\abund/\Omega_{\textrm{PL}}h^2$, assuming implicitly that the fraction of higgsino DM we measure locally today traces closely its early time large-scale freeze-out value. Solid tilted lines show recent direct upper bounds from the PandaX-II\cite{Tan:2016zwf} (maroon) and XENON1T\cite{Aprile:2017iyp} (blue) underground experiments. The latter is not much more constraining than an earlier bound from the now decommissioned LUX\cite{Akerib:2016vxi}. 
Dot-dashed lines show the projected reach of several upcoming and planned experiments. We also show in \reffig{fig:spinCS}(a) as a thin black line the current lower bound on mass from direct searches for compressed electroweakinos in final states with two low-momentum leptons at the LHC (Refs.\cite{Aaboud:2017leg,CMS-PAS-SUS-16-048}, following a proposal and case studies by\cite{Giudice:2010wb,Schwaller:2013baa}), which is sensitive to higgsino DM for mass splitting $m_{\chi_2}-m_{\chi_1}=3-30\gev$. One should also be aware of the estimated reach of the ILC in testing higgsinos\cite{Fujii:2017ekh}, which we do not show in the plot for lack of space. It extends to approximately 240\gev\ (480\gev), independently of the mass splitting, for a centre-of-mass energy $\sqrt{s}=500\gev$ ($\sqrt{s}=1\tev$). \begin{figure}[t] \centering \subfloat[]{ \includegraphics[width=0.50\textwidth]{Figs/mchi_sigsip_DM_red.png} } \subfloat[]{ \includegraphics[width=0.50\textwidth]{Figs/mchi_sigsdp_DM_red.png} } \caption{(a) Spin-independent neutralino-nucleon cross section \sigsip\ rescaled by the relic abundance, as a function of neutralino mass \mchi, for a nearly pure higgsino with CMSSM/mSUGRA boundary conditions subject to $m_h\approx 125\gev$ and LHC Higgs bounds. Solid lines show the 90\%~C.L. upper bounds from PandaX-II\cite{Tan:2016zwf} (maroon) and XENON1T\cite{Aprile:2017iyp} (LUX\cite{Akerib:2016vxi}) (blue). Dot-dashed lines show the projected reach for DEAP-3600\cite{Amaudruz:2014nsa} (orange), XENON1T/nT\cite{Aprile:2015uzo} (blue), DarkSide G2\cite{Aalseth:2015mba} (maroon), LZ\cite{Szydagis:2016few} (black), DARWIN\cite{Aalbers:2016jon} (purple). Thin solid black line shows the current lower bound on mass from direct searches at the LHC\cite{Aaboud:2017leg,CMS-PAS-SUS-16-048}. (b) Rescaled spin-dependent neutralino-nucleon cross section \sigsdp\ as a function of neutralino mass \mchi, for a nearly pure higgsino in the CMSSM/mSUGRA. Solid lines show the 90\%~C.L.
indirect upper bounds from IceCube\cite{Aartsen:2016zhm} (green) and Antares\cite{Adrian-Martinez:2016gti} (red). Dashed lines show projections for LZ\cite{Akerib:2015cja} (violet), XENON1T\cite{Aprile:2015uzo} (purple), Pico-500\cite{PICO} (blue), and DARWIN\cite{Aalbers:2016jon} (black).} \label{fig:spinCS} \end{figure} In \reffig{fig:spinCS}(b) we show the rescaled spin-dependent neutralino-proton elastic scattering cross section, $\xi$\sigsdp, versus neutralino mass. We show with solid lines existing indirect upper bounds from observations of neutrinos from the Sun in the neutrino telescopes IceCube\cite{Aartsen:2016zhm} (green) and Antares\cite{Adrian-Martinez:2016gti} (red), interpreted for a predominantly $W^+W^-$ annihilation final state, which give a good approximation for the nearly pure higgsino case\cite{Roszkowski:2014iqa,Catalan:2015cna}. Dashed lines of different colors give various projections for the future direct reach in \sigsdp\ of underground detectors. The relic density and DM observables are here calculated with \texttt{micrOMEGAs~v4.3.1}\cite{Belanger:2013oya}. The supersymmetric spectrum is calculated with \texttt{SPheno v4.0.3}\cite{Porod:2003um,Porod:2011nf}, and all model points are subject to LHC Higgs constraints from \texttt{HiggsSignals/HiggsBounds}\cite{Bechtle:2013xfa,Bechtle:2008jh,Bechtle:2011sb,Bechtle:2013wla} and to the Higgs mass measurement\cite{Aad:2015zhl}. The Higgs mass is calculated, like the SUSY spectrum, with the latest version of \texttt{SPheno}, which yields, in the regime where soft SUSY-breaking masses are well above $\sim 1\tev$, a value in excellent agreement with other numerical packages, \texttt{SusyHD}\cite{Vega:2015fna} and \texttt{FlexibleSUSY}\cite{Athron:2016fuq}. The calculated value is subject to an overall estimated theory uncertainty of approximately 2\gev\cite{Staub:2017jnp}, which we take into account in \reffig{fig:spinCS}. 
Note that when the SUSY spectrum lies in the several~TeV regime or above, all electroweak precision and flavor observables, including the anomalous magnetic moment of the muon, are expected to roughly maintain their SM values. We have chosen to show in \reffig{fig:spinCS} the higgsino parameter space under CMSSM boundary conditions, which provide a reasonable ansatz for models with scalar universality inspired by supergravity and, more generally, a lean framework for scenarios in which supersymmetry breaking is transmitted to the visible sector at some high scale (the GUT scale) and EWSB is obtained radiatively through the minimization of the MSSM scalar potential. In models defined in this way one observes, for a higgsino-like neutralino, a strong correlation between the Higgs boson mass and the allowed minimum value of \sigsip. We show this in \reffig{fig:higgscorr}, where we plot the lower bound on \sigsip\ as a function of Higgs mass for a higgsino LSP of arbitrary mass. The correlation between minimum cross section and Higgs mass translates in \reffig{fig:spinCS}(a) into a lower bound on \sigsip\ when $\mchi\approx1\tev$. \begin{figure}[t] \centering \includegraphics[width=0.55\textwidth]{Figs/higgs_sigsip.pdf} \caption{Lower bound on \sigsip\ as a function of Higgs mass for a higgsino LSP of arbitrary mass in generic models where the breaking of supersymmetry is transmitted at the GUT scale and the physical spectrum and EWSB are obtained after RGE to the low scale.} \label{fig:higgscorr} \end{figure} To qualitatively understand what is happening, let us recall from \refsec{sec:mixed} that in order to push down \sigsip\ for a predominantly higgsino-like neutralino one must increase purity $f_h$ or, in other words, raise the wino and bino masses, cf.~\refeq{hinofrac}.
Very heavy winos and binos at the GUT scale, which carry SU(2) isospin and hypercharge, feed through the RGE into the low-scale value of the soft SUSY-breaking up-type Higgs doublet mass, and also tend to push down the right-handed stop mass. This happens even in scenarios where the gluino mass is not universal and can be found relatively close to the higgsino, like those analyzed in\cite{Kowalska:2014hza}. In order to keep the Higgs doublet soft mass under control, so as to obtain a higgsino-like LSP after EWSB, and avoid tachyonic physical states, numerical scans are in this situation driven to large negative \azero\ and/or larger soft scalar masses. Both solutions have the net effect of pushing up the Higgs boson mass and give rise to the behavior we observe in \reffig{fig:higgscorr}.\footnote{The attractiveness, from the phenomenological point of view, of a lower bound on the neutralino scattering cross section determined by the Higgs mass measurement was pointed out early on in Bayesian analyses of the CMSSM/NUHM\cite{Kowalska:2013hha,Roszkowski:2014wqa,Roszkowski:2017nbc}. The exact minimal cross section depends strongly on the calculation of the Higgs mass itself, and on how it translates into mass predictions for the sparticles. In \texttt{SPheno~v4.0.3}, $\mhl\approx 125\gev$ leads to less optimistic expectations for the mean SUSY scale than in the versions of \texttt{SOFTSUSY}\cite{Allanach:2001kg} or \texttt{FeynHiggs}\cite{Hahn:2013ria} used in\cite{Kowalska:2013hha,Roszkowski:2014wqa}. Hence the parameter space in \reffig{fig:spinCS}(a) extends to lower \sigsip\ values than in those studies.} There is no apparent lower bound on the scattering cross section if we relax the requirement of radiative EWSB from boundary conditions generated at the GUT scale.
This is the case, for example, in models where the typical mass of scalar particles is decoupled by several orders of magnitude from the electroweak vev (see, e.g.,\cite{Hall:2011jd,Fox:2014moa,Benakli:2015ioa}), and one does not expect to infer strict relations between the mechanism of SUSY-breaking and EWSB. The relic density alone then determines the mass of the higgsino-like DM, and purity $f_h$ can be extremely close to 1. We generically indicate with a black arrow in \reffig{fig:spinCS}(a) the parameter space for higgsino DM in those models, which can extend well below the neutrino background floor\cite{Hill:2013hoa,Nagata:2014wma}, making this part of the higgsino parameter space particularly hard to probe. For underabundant higgsinos, $\mu\ll1\tev$, interesting avenues for detection can be provided, for very small mass splitting, $m_{\chi^{\pm}}-\mchi\approx 150\mev$, by future collider searches for disappearing tracks\cite{Mahbubani:2017gjh,Fukuda:2017jmk}. If there is a sizable CP-violating phase, future electron dipole moment experiments might be sensitive to parameter space with purity in excess of 99.99\%\cite{Nagata:2014wma}. Possible new avenues for detection are given by the cooling curves of white dwarfs\cite{Krall:2017xij}. Additional opportunities for the future detection of higgsino-like compressed spectra, in particular for long-lived particles with a relatively short lifetime, can then arise at electron-proton colliders\cite{Curtin:2017bxr}. \begin{figure}[t] \centering \subfloat[]{ \includegraphics[width=0.46\textwidth]{Figs/mchi_sigV_DM_new.pdf} } \hspace{0.02\textwidth} \subfloat[]{ \includegraphics[width=0.46\textwidth]{Figs/mchi_CSggmono.pdf} } \caption{(a) Indirect detection bounds and projections in gamma-ray searches in space and terrestrial telescopes for $\sim 1\tev$ higgsino DM under CMSSM/mSUGRA boundary conditions. Solid black line shows 90\%~C.L.
upper bounds on the present-day annihilation cross section to $W^+W^-$ from the statistical combination of Fermi-LAT and MAGIC observations of dSphs\cite{Ahnen:2016qkx}; solid magenta line shows the recent bound from 10-year observation of the Galactic Center at H.E.S.S.\cite{Abdallah:2016ygi} under the Einasto profile assumption; solid green line shows the upper bound from antiproton cosmic-ray (CR) data at AMS-02\cite{Aguilar:2016kjl} according to\cite{Cuoco:2017iax} for the NFW profile; and dashed blue line shows the projected reach of CTA 500h under the Einasto profile assumption\cite{Roszkowski:2014iqa}. (b) In magenta, the current 95\%~C.L. upper bound on the annihilation cross section (times velocity) to gamma-ray lines, $\sigma_{\gamma\gamma} v$, from H.E.S.S.\cite{Rinchiuso:2017kfn} under the Einasto profile assumption, compared to the cross section of our $\sim 1\tev$ higgsino points.} \label{fig:hinosigv} \end{figure} We finally show in \reffig{fig:hinosigv} the status of indirect detection bounds and projections in gamma-ray searches in space and terrestrial telescopes for $\sim 1\tev$ higgsino DM under CMSSM/mSUGRA boundary conditions (we implicitly assume that the chances for detection maximize if higgsinos saturate the relic abundance). In \reffig{fig:hinosigv}(a), solid black line shows the most recent 90\%~C.L. upper bound on the present-day \sigv\ from the statistical combination of Fermi-LAT and MAGIC observations of dSphs\cite{Ahnen:2016qkx}, and the magenta line draws the recent bound from 10-year observation of the Galactic Center at H.E.S.S.\cite{Abdallah:2016ygi} under the Einasto profile assumption. We adopt the bounds in the $W^+W^-$ final state interpretation, which give a good approximation for the $\sim 1\tev$ higgsino. For the $W^+W^-$ final state we show in solid green the determination by\cite{Cuoco:2017iax} of the 95\%~C.L. 
upper bound on \sigv\ from antiproton CR data at AMS-02\cite{Aguilar:2016kjl}, under the NFW profile assumption. Note that the bound is subject to uncertainties related to the choice of diffusion model for CR propagation in the Galaxy. Some of these choices can in fact weaken it\cite{Cuoco:2017iax}, and push it up to approximately the level of the H.E.S.S. limit. Finally, dashed blue line shows the projected statistical reach of CTA 500h, under the Einasto profile assumption\cite{Roszkowski:2014iqa,Carr:2015hta}. Note that including the systematic uncertainty from diffuse astrophysical radiation will most likely weaken the extent of the projected reach\cite{Silverwood:2014yza,Catalan:2015cna}. Also note in \reffig{fig:hinosigv}(a) that some model points are characterized by \sigv\ significantly above the thermal relic expectation, due to the presence of a heavy pseudoscalar Higgs boson with mass $\ma \approx 2\,\mchi$\cite{Roszkowski:2014wqa,Roszkowski:2014iqa}. Regions of the parameter space that allow for this serendipitous coincidence thus see their indirect detection prospects improve significantly. We show in \reffig{fig:hinosigv}(b), as a magenta solid line, the current 95\%~C.L. upper bound on the annihilation cross section (times velocity) to gamma-ray lines from the final 254h data at H.E.S.S.\cite{Rinchiuso:2017kfn} under the Einasto profile assumption. The line is compared to the cross section of our $\sim 1\tev$ higgsino points, which lie well below the limit. \subsection{The soft SUSY scale and fine tuning\label{sec:hinoHiggs}} We conclude with a few words about the expected scale of the supersymmetric particles associated with higgsino DM. In truth, little is known in this regard, as the issue is highly model-dependent and there is no single way of inferring the scale of SUSY breaking.
Of course, expressions similar to Eqs.~(\ref{hinofrac})-(\ref{hinosigsip}) can give us a lower bound on the scale of the electroweak gauginos for every given upcoming new constraint on \sigsip, but to be precise one should then take into account the rich parametric dependence of the full formulas. Equivalently, the Higgs mass measurement tells us that in all likelihood stops and gluinos sit well above the LHC reach, but little more than that is known, as expectations depend strongly on parameters like \tanb\ and the trilinear coupling $A_t$. Thus, without pretence of presenting any universally valid result, but just to show an example of a model where the measurement of the Higgs mass actually does provide predictions for the maximally allowed typical scale of the superpartners, we present in \reffig{fig:msusy}(a) the distribution of the mean stop mass, $\msusy=(\ensuremath{m_{\tilde{t}_1}}\,\ensuremath{m_{\tilde{t}_2}})^{1/2}$, under CMSSM/mSUGRA boundary conditions in the (\mchi, $\xi$\sigsip) plane with higgsino DM. One can see that with approximately the next round of XENON-1T data we will start to probe the 10\tev\ range of the superpartners if the DM is entirely composed of higgsinos. Note also that, for higgsino mass $\mchi\lesssim 140\gev$, the LHC is already excluding, with direct soft-lepton bounds on electroweakinos, the parameter space corresponding to $\msusy\lesssim 8-10\tev$. \begin{figure}[t] \centering \subfloat[]{ \label{fig:b} \includegraphics[width=0.50\textwidth]{Figs/mchi_sigsip_MSUSY_red.png} } \subfloat[]{ \label{fig:c} \includegraphics[width=0.50\textwidth]{Figs/mchi_sigsip_FT_new_red.png} } \caption{(a) A plot of $\msusy=(\ensuremath{m_{\tilde{t}_1}}\,\ensuremath{m_{\tilde{t}_2}})^{1/2}$ in the (\mchi, $\xi$\sigsip) plane with higgsino DM under CMSSM/mSUGRA boundary conditions.
(b) EWSB fine tuning for points with higgsino DM in the (\mchi, $\xi$\sigsip) plane.} \label{fig:msusy} \end{figure} Finally, like all BSM models developed at least in part to deal with the hierarchy problem, after the first two runs of the LHC, models with higgsino DM have become marred by a certain amount of EWSB fine tuning. The severity of this issue depends, of course, on the specific features of each model: how EWSB is obtained and its relation to the mass of the Higgs boson. In the context of the CMSSM, the fine tuning associated with higgsino DM is shown in \reffig{fig:msusy}(b), where we plot in the (\mchi, $\xi$\sigsip) plane the size of the usual Barbieri-Giudice measure\cite{Ellis:1986yg,Barbieri:1987fn} (following the prescription of\cite{Ross:2017kjc}).\footnote{We remind the reader that the Barbieri-Giudice measure is generally defined as $\max_{p_i} |\partial \log M_Z^2/\partial \log p_i|$, where the $p_i$ are the model's input parameters at the typical scale of the messengers for SUSY breaking. In the CMSSM these are the GUT-defined parameters \mzero, \mhalf, \azero, $B_0$, $\mu_0$.} No point shows EWSB fine tuning of less than a part in 100, as a direct consequence of the Higgs mass measurement, and one can observe the well-known fact that higgsino points favored by expectations of naturalness correspond to $\mchi<1\tev$ and lead to $\abund\ll 0.12$.
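The Barbieri-Giudice measure recalled in the footnote can be sketched numerically by finite differences. The toy relation $M_Z^2\approx -2\,(m_{H_u}^2+\mu^2)$ used below is the approximate large-\tanb\ tree-level limit with low-scale inputs, an illustrative assumption rather than the GUT-scale computation performed in the text.

```python
import math

# Toy illustration (assumption, not the paper's computation): the
# Barbieri-Giudice measure max_i |d log M_Z^2 / d log p_i| evaluated by
# symmetric finite differences on the approximate large-tan(beta)
# tree-level relation M_Z^2 ~ -2 (m_Hu^2 + mu^2).

def mz2(params):
    return -2.0 * (params["m_hu2"] + params["mu"]**2)

def bg_measure(f, params, eps=1e-6):
    """max_i |p_i/f * df/dp_i| via symmetric finite differences."""
    base = f(params)
    deltas = []
    for key, p in params.items():
        up = dict(params, **{key: p * (1 + eps)})
        dn = dict(params, **{key: p * (1 - eps)})
        dfdp = (f(up) - f(dn)) / (2 * eps * p)
        deltas.append(abs(p * dfdp / base))
    return max(deltas)

MZ = 91.19
# mu = 1 TeV higgsino; tune m_Hu^2 so that M_Z comes out right
params = {"mu": 1000.0, "m_hu2": -(MZ**2 / 2.0 + 1000.0**2)}
print(f"fine tuning ~ {bg_measure(mz2, params):.0f}")
```

For a $1\tev$ higgsino this toy measure is dominated by the $\mu$ sensitivity, $|\partial\log M_Z^2/\partial\log\mu|=4\mu^2/M_Z^2$, of order several hundred, in line with the part-in-$10^2$-$10^3$ tuning quoted above.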
For the specific case of the $\sim 1\tev$ higgsino, a failure to observe a signal in, say, the next round of XENON-1T data will imply a fine tuning greater than one part in $10^3$, with rapid increase with each successive milestone exclusion.\footnote{There exist ways of embedding the MSSM in UV completions that can lead to lower fine tuning for higgsino DM, see, e.g.,\cite{Kowalska:2014hza,Ross:2016pml}.} However, we emphasize that a large fine tuning is by no means exclusive to the CMSSM, to higgsino DM, or even to SUSY in general (see, e.g.,\cite{Barnard:2017kbb} for fine tuning in a non-SUSY scenario). As a matter of fact, the majority of phenomenological DM models found in the literature do not even attempt to construct a UV completion that could directly relate their free parameters to the physics of the high scale. It is very possible that once a discovery is finally made many of the open questions will start to find their answers. In the case of their eventual discovery, higgsinos appear to be in the perfect position to usher in a new era of understanding. \section{Summary and conclusions}\label{sec:sum} The appealing theoretical features of the MSSM have made it, through the years, a natural favorite among the theoretical frameworks incorporating a possible DM particle. In this review, we have given a summary of the current status of phenomenological constraints on the DM candidates of the MSSM and have highlighted the growing consensus that, although available parameter space remains open for most of the DM candidates, only one of them, the higgsino-like neutralino, is almost entirely free of tension from the increasing amount of observational data.
Much of what makes higgsinos very attractive is the fact that the current constraints are not evaded through specific arrangements of some model parameters, but rather solely as a consequence of the higgsino isospin quantum numbers, which require a fairly large mass to produce \abund\ in agreement with observations, and of the mass splittings among its neutral and charged components, which stem directly from EWSB. As these are not exotic features, however, one reasonably expects that the higgsino parameter space will not remain unexplored indefinitely. We have thus reviewed the excellent prospects for detection of higgsinos in the traditional experimental venues of direct DM detection in underground searches, indirect detection from astrophysical observations, and collider searches, all of which show reasons for optimism. The prospects are particularly enticing in supergravity-inspired scenarios with radiative EWSB, where the overall consistency of the theoretical picture implies a lower bound on the spin-independent cross section for higgsinos, determined indirectly but convincingly by the measured value of the Higgs boson mass. For those models that might instead be characterized by very large scales for the superpartners (in agreement with the 125\gev\ Higgs mass when \tanb\ is close to 1), the prospects for detection are trickier to assess, but not without hope. We have drawn the reader's attention to a few references that promote alternative avenues for the exploration of these more elusive scenarios. Promising avenues are given by the experimental determination of dipole moments, disappearing-track signatures in colliders, and the measurement of cooling curves in white dwarfs and neutron stars. Overall, we hope this might serve as a concise but comprehensive report on the consistency of the higgsino DM picture, and on the multiple opportunities that arise for its observation in the not so distant future.
\bigskip \begin{center} \textbf{ACKNOWLEDGMENTS} \end{center} We would like to thank Luc Darm\'e for his comments on the manuscript and discussions. The use of the CIS computer cluster at the National Centre for Nuclear Research in Warsaw is gratefully acknowledged. The authors declare that there is no conflict of interest regarding the publication of this article. \bigskip \bibliographystyle{utphysmcite}
\section{Introduction} \subsection{Empirical measures and quadrature} Consider a discrete-time stochastic process $(X_k)_{k\ge 0}$ taking its values in some phase space $\Omega$, assumed to be a Polish space endowed with its Borel $\sigma$-algebra. We are concerned with the random atomic measure \[ \hat\mu_n = \frac1n \sum_{k=1}^n \delta_{X_k}, \] called the \emph{empirical measure} of the process, and its convergence. We shall either assume that the $(X_k)_{k\ge0}$ are independent and identically distributed with some law $\mu$, or assume some weak long-range dependence and convergence of the law of $X_k$ to $\mu$ as $k\to\infty$. To quantify the convergence, we are interested in distances on the set $\operatorname{\mathcal{P}}(\Omega)$ of probability measures defined by duality. Given a class $\fspace{F}$ of functions $f:\Omega\to\mathbb{R}$ (sometimes called ``test functions'' or ``observables''), one defines for $\nu_0,\nu_1 \in \operatorname{\mathcal{P}}(\Omega)$: \[ \lVert \nu_0-\nu_1\rVert_{\fspace{F}} = \sup_{f\in\fspace{F}} \big\lvert \nu_0(f)-\nu_1(f) \big\rvert\] (note that we write indifferently $\nu_0(f)$ or $\int f \dd\nu_0$). One particularly important case is obtained by taking $\fspace{F}=\operatorname{Lip}_1(\Omega)$, the set of $1$-Lipschitz functions. The corresponding metric is the $1$-Wasserstein metric $\operatorname{W}_1= {\lVert \cdot \rVert_{\operatorname{Lip}_1}}$, which by virtue of \emph{Kantorovich duality} can be written equivalently as \[\operatorname{W}_1(\nu_0,\nu_1) := \inf_{X\sim \nu_0, Y\sim \nu_1} \operatorname{\mathbb{E}} \big[ \lVert X-Y\rVert \big]\] where $\lVert \cdot\rVert$ here is the Euclidean norm and the infimum is over all pairs of random variables with the given measures as their respective laws.
It has long been known \cite{ajtai1984optimal} that, when the $(X_k)_{k\ge0}$ are independent and uniformly distributed on $[0,1]^d$, we have \begin{equation} \operatorname{\mathbb{E}}\big[ \operatorname{W}_1(\hat\mu_n,\lambda) \big] \asymp \begin{dcases*} \frac{1}{\sqrt{n}} & if $d=1$, \\[2\jot] \sqrt{\frac{\log n}{n}} & if $d=2$, \\[2\jot] \frac{1}{n^{\frac1d}} & if $d\ge 3$, \end{dcases*} \label{eq:speed} \end{equation} where $\asymp$ expresses upper and lower bounds up to multiplicative constants and $\lambda$ denotes the Lebesgue measure. This problem and generalizations have been studied in several works, e.g. \cite{talagrand1992matching, talagrand1994sharper, boissard2014mean, dereich2013constructive, fournier2015rate, ambrosio2016pde, weed2017sharp}. The bounds \eqref{eq:speed} are interesting theoretically, but are rather negative for the practical application to quadrature. Computations of integrals are in many cases impractical using deterministic methods, and one often has to resort to Monte Carlo methods, i.e. approximate the unknown $\mu(f)$ by $\hat\mu_n(f)$. When one has to compute the integrals of a large number of functions $(f_m)_{1\le m \le M}$ with respect to a fixed measure $\mu$, one would rather draw the random quadrature points $X_1,\dots, X_n$ once and for all, and use them for all functions $f_m$; while usual Monte Carlo bounds will ensure each individual estimate $\hat\mu_n(f_m)$ has a small probability to be far from $\mu(f_m)$, if $M$ is large compared to $n$ these bounds will not ensure that \emph{all} estimates are good with high probability. On the contrary, convergence in $\operatorname{W}_1$ (or in duality with some other class $\fspace{F}$) ensures good estimates simultaneously for all $f_m$, as long as they belong to the given class, independently of $M$.
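The quadrature point can be made concrete with a small sketch (illustrative assumptions: target $\mu=\operatorname{Unif}[0,1]$, a handful of hand-picked $1$-Lipschitz integrands): a single estimate of $\operatorname{W}_1(\hat\mu_n,\mu)$ bounds the quadrature error of every $1$-Lipschitz $f$ simultaneously.

```python
import math
import random
from bisect import bisect_right

# Sketch (assumptions: mu = Unif[0,1], hand-picked 1-Lipschitz integrands):
# one W_1(mu_hat, mu) value bounds |mu_hat(f) - mu(f)| for all of them.

def w1_to_uniform(xs, grid=4000):
    """W_1(empirical, Unif[0,1]) = int_0^1 |F_n(t) - t| dt, midpoint rule."""
    xs = sorted(xs)
    n = len(xs)
    return sum(abs(bisect_right(xs, (i + 0.5) / grid) / n - (i + 0.5) / grid)
               for i in range(grid)) / grid

rng = random.Random(1)
xs = [rng.random() for _ in range(5000)]
w1 = w1_to_uniform(xs)

# each pair: a 1-Lipschitz f on [0,1] and its exact integral against mu
tests = [
    (lambda t: t, 0.5),
    (lambda t: abs(t - 0.3), 0.5 * (0.3**2 + 0.7**2)),
    (lambda t: math.sin(t), 1.0 - math.cos(1.0)),
]
for f, exact in tests:
    err = abs(sum(f(x) for x in xs) / len(xs) - exact)
    print(f"error {err:.2e}  <=  W1 ~ {w1:.2e}")
    assert err <= w1 + 1e-3  # duality bound, up to grid discretization
```

The final assertion is exactly the duality bound $|\hat\mu_n(f)-\mu(f)|\le\operatorname{W}_1(\hat\mu_n,\mu)$ for $1$-Lipschitz $f$; the $10^{-3}$ slack only accounts for the grid discretization of the integral.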
This makes such convergence potentially useful; but the \emph{rate} given above, $n^{-\frac1d}$, is hopelessly slow in high dimension, which is precisely the setting where Monte Carlo methods are most needed. We shall prove that if the functions of interest are regular, then this ``curse of dimensionality'' can be overcome. We shall be interested in the duality with $\Cku{s}$, the set of functions with $\Ck{s}$ norm at most $1$ (precise definitions are given below; when $s=1$ this is the set of $1$-Lipschitz functions); but other spaces could be considered, e.g. Sobolev or Besov spaces. Another issue is that in many cases, drawing independent samples $(X_k)_{k\ge 0}$ of law $\mu$ is not feasible, and one is led to instead rely on a Markov chain having $\mu$ as its stationary measure; this is the Markov Chain Monte Carlo method (MCMC). While the empirical measures of Markov chains have been considered by Fournier and Guillin \cite{fournier2015rate}, these authors need quite strong assumptions: a spectral gap in the $L^2$ space (or similarly large spaces), and a ``warm start'' hypothesis ($X_0$ should have a law absolutely continuous with respect to $\mu$). In good cases, one can achieve this by a burn-in period (start with arbitrary $X_0$, and consider $(X_{k_0+k})_{k\ge 0}$ for some large $k_0$); but in some cases, each $X_k$ has a singular law with respect to $\mu$ (for example the natural random walk generated by an Iterated Function System). We shall consider Markov chains satisfying a certain geometric contraction property, but again the method can certainly be adapted to other assumptions. \subsection{Markov chains} Our main result handles Markov chains of arbitrary starting distribution and with a spectral gap in $\operatorname{Lip}$ (e.g. positively curved chains in the sense of Ollivier \cite{ollivier2009ricci}).
\begin{theomain}\label{theomain:Markov} Assume that $(X_k)_{k\ge0}$ is a Markov chain defined on a bounded domain $\Omega$ of $\mathbb{R}^d$, whose iterated transition kernel $(m^t_x)_{x\in\Omega,t\in\mathbb{N}}$ defined by \[ m_x^t(A) = \operatorname{\mathbb{P}}(X_{k+t}\in A \mid X_k=x)\] is exponentially contracting in the Wasserstein metric $\operatorname{W}_1$, i.e. there are constants $D\ge 1$ and $\theta\in(0,1)$ such that \[ \operatorname{W}_1(m_x^t,m_y^t) \le D\theta^t \lVert x-y\rVert. \] Denote by $\mu$ the (unique) stationary measure of the transition kernel. Then for some constant $C=C(\Omega,d,D,s)$ and all large enough $n$, letting $\bar n=(1-\theta)n$, we have \begin{equation} \operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \big] \le C \begin{dcases*} \frac{(\log \bar n)^{\frac{d}{2s+1}}}{\sqrt{\bar n}} & when $s > d/2$\\[2\jot] \frac{\log \bar n}{\sqrt{\bar n}} & when $s=d/2$ \\[2\jot] \frac{(\log \bar n)^{d-2s+\frac sd}}{\bar n^{\frac sd}} & when $s < d/2$ \end{dcases*} \label{eq:theo-Markov} \end{equation} \end{theomain} Let us stress several strengths of this result: \begin{itemize} \item for $s=1$, recalling $\lVert \cdot\rVert_{\Cku{1}}=\lVert \cdot\rVert_{\operatorname{Lip}_1}=\operatorname{W}_1$, the bounds are only a power of logarithm factor away from the optimal bounds for IID random variables, \item for $s$ large enough, we almost obtain the optimal convergence rate $\asymp 1/\sqrt{n}$, \item we assume neither reversibility, stationarity, nor warm start hypotheses (the distribution of $X_0$ can be arbitrary), \item the rate of convergence does not depend on the specific features of the Markov chain, only on $D$ and $\theta$. \end{itemize} Note that for fixed $\theta$, $\bar n$ has the same order as $n$, but if $\theta$ is close to $1$, $1/(1-\theta)$ is the typical time scale for the decay of correlations.
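As a toy numerical check of the hypothesis (the chain, seed and sample sizes below are illustrative choices, not taken from the text), consider $X_{k+1}=(X_k+B_k)/2$ with $B_k$ i.i.d. Bernoulli$(1/2)$: coupling two copies through the same $B_k$ gives $\operatorname{W}_1(m_x^t,m_y^t)\le(1/2)^t\,\lvert x-y\rvert$ (so $D=1$, $\theta=1/2$), the stationary measure is Lebesgue on $[0,1]$ (binary expansions), and each $X_k$ started from a point mass has an atomic, hence $\mu$-singular, law: precisely the Iterated Function System situation mentioned above.

```python
import random
from bisect import bisect_right

# Toy sketch (assumed illustrative chain): X_{k+1} = (X_k + B_k)/2 is
# exponentially contracting in W_1 with theta = 1/2, and its empirical
# measure converges to Lebesgue measure on [0,1].

def chain_samples(n, x0=0.0, seed=0):
    """n steps of the W_1-contracting chain started from the point x0."""
    rng = random.Random(seed)
    xs, x = [], x0
    for _ in range(n):
        x = (x + rng.randint(0, 1)) / 2.0
        xs.append(x)
    return xs

def w1_to_uniform(xs, grid=4000):
    """W_1(empirical, Unif[0,1]) = int_0^1 |F_n(t) - t| dt, midpoint rule."""
    xs = sorted(xs)
    n = len(xs)
    return sum(abs(bisect_right(xs, (i + 0.5) / grid) / n - (i + 0.5) / grid)
               for i in range(grid)) / grid

for n in (100, 1000, 10000):
    print(n, w1_to_uniform(chain_samples(n)))
```

The printed values decay roughly like $1/\sqrt{n}$, consistent with the $s=1$ case of \eqref{eq:theo-Markov} up to logarithmic factors.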
One thus cannot expect less than $(1-\theta)n$ Markov samples to achieve the bound obtained for $n$ independent samples. Examples of Markov chains which are exponentially contracting in $\operatorname{W}_1$ (equivalently, that have a spectral gap in the space of Lipschitz observables) are numerous; it is a slightly more general condition than ``positive curvature'' in the sense of Ollivier \cite{ollivier2009ricci}, see e.g. \cite{joulin2010curvature} and \cite{K:concentration} for concrete examples, or in the context of dynamical systems \cite{kloeckner2015contraction} and \cite{kloeckner2017optimal}. Under the assumptions of Theorem \ref{theomain:Markov}, it is well-known that the uniform estimates \begin{equation} \sup_{f\in\mathscr{F}} \operatorname{\mathbb{P}}\big(\lvert \hat\mu_n(f)-\mu(f)\rvert >\varepsilon\big) \to 0 \qquad\text{and}\qquad \sup_{f\in\mathscr{F}}\operatorname{\mathbb{E}}\big[ \lvert \hat\mu_n(f)-\mu(f)\rvert \big] \to 0 \label{eq:convergence} \end{equation} hold, here with $\mathscr{F}=\operatorname{Lip}_1$ (or any smaller class), with a Gaussian rate. The problem of convergence in duality to the class $\mathscr{F}$ is thus to invert the supremum and the probability (or expectation), i.e. to bound from above \[\operatorname{\mathbb{P}}\big(\sup_{f\in\mathscr{F}} \lvert \hat\mu_n(f)-\mu(f)\rvert >\varepsilon\big) \qquad\text{or}\qquad \operatorname{\mathbb{E}}\big[ \sup_{f\in\mathscr{F}} \lvert \hat\mu_n(f)-\mu(f)\rvert \big].\] We shall disregard the potential issue of non-measurability: as we shall only deal with classes $\mathscr{F}$ having a countable subset which is dense in the uniform norm, we can always replace the supremum with a supremum over a countable set of functions. The idea of the proof of Theorem \ref{theomain:Markov} is to take an arbitrary $f\in\Cku{s}(\Omega)$ and decompose it using Fourier series.
The regularity hypothesis gives us a control both on the uniform approximation by a truncated Fourier series and on the Fourier coefficients. Combining these controls, we bound from above $\lvert \hat\mu_n(f)-\mu(f)\rvert$ by a quantity that does not depend on $f$ at all, but depends on the Fourier basis elements $(e_k)_{k\in\mathbb{Z}^d}$ up to some index size. Taking a supremum and an expectation, this leaves us with the simple task of optimizing where to truncate the Fourier series. This decomposition method can in principle be used under various assumptions on the process $(X_k)_{k\ge0}$, the point being to identify a decomposition suited to the assumption; in particular, one can easily adapt the method to study geometrically ergodic Markov chains. I chose to present Theorem \ref{theomain:Markov} in part because its hypothesis is relevant to several Markov chains I am interested in, and in part because it presents specific difficulties: a blunt computation leads to non-optimal powers of $n$. To obtain good rates, we use the contraction hypothesis to frame part of the argument in the space $\operatorname{Hol}_\alpha$, where the Fourier basis has smaller norm; and instead of bounding the Fourier coefficients of a Lipschitz function directly, we use Parseval's formula and the injection $\Ck{s}\to H^s$, which turns out to give a better estimate. A different functional decomposition, or a different path through the computations, might improve the power in the logarithmic factor. We restrict to the compact case, but the method can in principle be adapted, or a truncation argument used, to deal with non-compactly supported measures. In order to introduce the decomposition method and show its flexibility, we shall state two simpler results below. \subsection{Explicit bounds in the i.i.d. case, for the Wasserstein metric} The decomposition method enables one to get a very explicit version of \eqref{eq:speed} with a few computations but very little sophistication.
\begin{theomain} \label{theomain:W1} If $\mu$ is any probability measure on $[0,1]^d$ and $(X_k)_{k\ge 0}$ are i.i.d. random variables with law $\mu$, then for all $n\in\mathbb{N}$ we have \begin{equation} \operatorname{\mathbb{E}}\big[\operatorname{W}_1(\hat\mu_n,\mu) \big] \le \begin{dcases*} \frac{1}{2(\sqrt{2}-1)}\cdot \frac{1}{\sqrt{n}} & when $d=1$\\[2\jot] \frac{\log_2(n)+8}{\sqrt{8n}} & when $d=2$ \\[2\jot] \frac{C_d}{n^{\frac1d}} & when $d\ge 3$ \end{dcases*} \label{eq:theo-W1} \end{equation} where $C_3\le 6.3$, $C_d \le 3 \sqrt{d}$ for all $d\ge 4$, and $C_d/\sqrt{d} \to 2$ as $d\to\infty$. \end{theomain} The order of magnitude of these bounds is sharp in many regimes: \begin{itemize} \item in dimension $1$, the order of magnitude $1/\sqrt{n}$ is optimal; however the constant $1/(2(\sqrt{2}-1))$ is \emph{not} asymptotically optimal when $\mu$ is Lebesgue measure, \item when $d=2$ and $\mu$ is Lebesgue measure, as previously mentioned the correct order is $\sqrt{\log n/n}$, but to the best of my knowledge it is an open question to determine whether this better order holds for arbitrary measures (a positive answer is strongly expected). See Section \ref{sec:four-corners} for an example showing that in a more general setting the order $\log n/\sqrt{n}$ cannot be improved, \item when $d\ge 3$, both orders of magnitude $n^{-1/d}$ as $n\to\infty$ and $\sqrt{d}$ as $d\to\infty$ are sharp up to multiplicative constants (see Remark \ref{rema:lowerbound}). The asymptotic constant $2$ is certainly larger than the asymptotic constant \[ \lim_{d\to\infty} \lim_{n\to\infty} \frac{n^{\frac1d}}{\sqrt{d}} \operatorname{\mathbb{E}}\big[\operatorname{W}_1(\hat\mu_n,\lambda) \big] \] which has been computed for the related, but slightly different \emph{matching problem} by Talagrand \cite{talagrand1992matching}; but our bound holds for all $n$ and all $d$ (and also all $\mu$).
An even more general bound has been given by Boissard and Le Gouic \cite{boissard2014mean}, but their constant is larger by a factor of approximately $10$. \end{itemize} Let us stress that the main purpose of this result will be to expose our method in an elementary setting: indeed many previous similar bounds are available in this case. For example, more general non-asymptotic results have been obtained by Fournier and Guillin \cite{fournier2015rate}, building on previous work by Dereich, Scheutzow and Schottstedt \cite{dereich2013constructive}. They are more general in that they consider the $q$-Wasserstein metric for any $q>0$ (while we will only be able to consider $q\le 1$), and apply to non-compactly supported measures $\mu$ under moment assumptions. However their constants, though non-asymptotic, have not been made explicit, and their behavior when the dimension grows has not been studied. \subsection{Regular observables and independent samples} In the i.i.d. case, we can improve Theorem \ref{theomain:Markov} by removing most of the logarithmic factors. \begin{theomain}\label{theomain:reg} If $\mu$ is any probability measure on $[0,1]^d$ and $(X_k)_{k\ge 0}$ are i.i.d. random variables with law $\mu$, then for all $s\ge 1$, for some constant $C=C(d,s) > 0$ (not depending upon $\mu$), and all integers $n\ge 2$ we have \begin{equation} \operatorname{\mathbb{E}}\big[\lVert \hat\mu_n -\mu \rVert_{\Cku{s}} \big] \le C \begin{dcases*} \frac{1}{\sqrt{n}} & when $s > \frac{d}{2}$ \\[2\jot] \frac{\log n}{\sqrt{n}} & when $s = \frac{d}{2}$ \\[2\jot] \frac{1}{n^{s/d}} & when $s < \frac{d}{2}$ \end{dcases*} \label{eq:theo-reg} \end{equation} \end{theomain} It is possible to prove this result with previous, more classical methods. Indeed, combining the ``entropy bound'' for the class $\Cku{s}$ \cite[Thm 2.7.1]{vaart1996weak} and the ``chaining method'' (see e.g. \cite[Ex 5.11, p.
138]{handel2016probability}) leads to Theorem \ref{theomain:reg}; I am indebted to Jonathan Weed for pointing this out to me. The proof by the decomposition method we provide here is very simple, but non-elementary as it relies on a wavelet decomposition. It is well-known that all functions in $\Cku{s}$ can be written as a linear combination of a few elements of a wavelet basis, with small coefficients, up to a small error. Then controlling $\lvert \hat\mu_n(f)-\mu(f)\rvert$ for all $f\in \Cku{s}$ simultaneously reduces to controlling this quantity for the few needed elements of the wavelet basis. \subsection{Concentration inequalities} Up to now, we have restricted ourselves to estimates in expectation, while in many practical situations one needs concentration estimates. This is in fact not a restriction, as we shall explain briefly in Section \ref{sec:conc}: the classical bounded difference method enables one to get concentration near the expectation. In particular, we get the following. \begin{coromain}\label{coromain:conc} Under the assumptions of Theorem \ref{theomain:Markov}, for some $\epsilon$ depending on $\theta,D,\operatorname{diam} \Omega$, for all large enough $n$ and all $M\ge C=C(\Omega, d, D, \theta)$ we have: \begin{itemize} \item when $s>d/2$ \begin{equation} \operatorname{\mathbb{P}}\Bigg[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \ge M \frac{(\log n)^{\frac{d}{2s+1}}}{\sqrt{n}} \Bigg] \le e^{-\epsilon (M-C)^2(\log n)^{\frac{d}{2s+1}}} \end{equation} \item when $s=d/2$ \begin{equation} \operatorname{\mathbb{P}}\Bigg[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \ge M \frac{\log n}{\sqrt{n}} \Bigg] \le e^{-\epsilon(M-C)^2 (\log n)^2} \end{equation} \item when $s < d/2$ \begin{equation} \operatorname{\mathbb{P}}\Bigg[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \ge M \frac{(\log n)^{d-2s+\frac sd}}{n^{\frac sd}} \Bigg] \le e^{-\epsilon(M-C)^2 n^{1-2s/d}}.
\end{equation} \end{itemize} \end{coromain} (The last inequality is not optimal as we relaxed the poly-logarithmic factor for simplicity.) For example, when $s\ge d/2$ we deduce that $ \frac{\sqrt{n}}{\log n} \lVert \hat\mu_n-\mu\rVert_{\Cku{s}} $ is bounded almost surely. \paragraph{Structure of the paper} Sections \ref{sec:W1}, \ref{sec:main} and \ref{sec:MC} are independent and contain the proofs of the main Theorems (\ref{theomain:W1}, \ref{theomain:reg} and \ref{theomain:Markov} respectively: we start with the most elementary proof, follow with the simplest one, and end with the most sophisticated). Section \ref{sec:conc}, dealing with concentration estimates, is mostly independent of the previous ones, which are only used to deduce Corollary \ref{coromain:conc}. We shall write $a \lesssim b$ for $a \le C b$, the dependency of the constant $C$ being left implicit unless necessary; the constants denoted by $C$ will be allowed to change from line to line. \section{Wasserstein convergence and dyadic decomposition}\label{sec:W1} The goal of this Section is to prove (a refinement of) Theorem \ref{theomain:W1}. We consider a sequence $(X_k)_{k\ge 1}$ of independent, identically distributed random points whose common law shall be denoted by $\mu$; we assume that $\mu$ is supported on the cube $[0,1]^d$ and consider the convergence of the empirical measure $\hat\mu_n := \sum_{k=1}^n \frac1n \delta_{X_k}$ in the $q$-Wasserstein distance where $q\in(0,1]$, i.e.
\[ \operatorname{W}_{q}(\mu_0,\mu_1) := \sup_{f\in \operatorname{Hol}^{q}_1} \big\lvert \mu_0(f) - \mu_1(f) \big\rvert \] where $\operatorname{Hol}^{q}_1$ is the set of functions $f: [0,1]^d \to \mathbb{R}$ such that for all $x,y\in [0,1]^d$: \[ \lvert f(x) - f(y) \rvert \le \lVert x-y\rVert^q. \] While we are mostly interested in the Euclidean norm $\lVert\cdot\rVert$, our method is sharper in the case of the supremum norm\footnote{The same notation is used for the uniform norm of functions, but the type of the argument will prevent any confusion.} $\lVert\cdot\rVert_\infty$, with respect to which the analogues of the aforementioned objects are denoted by $\operatorname{W}_{q,\infty}$ and $\operatorname{Hol}^{q,\infty}_1$. We will work with $\lVert\cdot\rVert_\infty$, and then deduce directly the corresponding result for the Euclidean norm by using that $\lVert\cdot\rVert \le \sqrt{d} \lVert\cdot\rVert_\infty$ (and thus $\operatorname{W}_{q} \le d^{\frac{q}{2}}\operatorname{W}_{q,\infty}$). Our most precise result is the following. \begin{theo}\label{theo:Wq} For all $q\in(0,1]$ and all $n$, it holds: \[\operatorname{\mathbb{E}}\big[ \operatorname{W}_{q,\infty}(\hat\mu_n,\mu) \big] \le \begin{dcases*} \frac{2^{\frac{d}{2}-2q}}{1-2^{\frac{d}{2}-q}}\cdot\frac{1}{\sqrt{n}} & when $d < 2q$,\\[2\jot] \Big(2 + \frac{\log_2(n)}{2^{q+1}q} \Big) \frac{1}{\sqrt{n}} & when $d = 2q$ \\[2\jot] 2 \Big(\frac{\frac{d}{2}-q}{2q(1-2^{q-\frac{d}{2}})}\Big)^{\frac{2q}{d}} \Big( 1 + \frac{q}{2^q(\frac{d}{2}-q)} \Big) \frac{1}{n^{\frac{q}{d}}} & when $d > 2q$. \end{dcases*} \] \end{theo} We deduce several more compact formulas below, including Theorem \ref{theomain:W1}. Observe that for fixed $q$ and large $d$, the complicated front constant converges to $2$.
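As a quick sanity check of the $d=1$ bound, one can compare the right-hand side $1/(2(\sqrt2-1)\sqrt n)$ of Theorem \ref{theomain:W1} with a Monte Carlo estimate of $\operatorname{\mathbb{E}}[\operatorname{W}_1(\hat\mu_n,\mu)]$, using the exact one-dimensional formula $\operatorname{W}_1(\hat\mu_n,\mu)=\int_0^1\lvert \hat F_n(t)-F(t)\rvert\,dt$ in terms of cumulative distribution functions. The sketch below (sample size and trial count are arbitrary choices) does this for $\mu$ the uniform law on $[0,1]$:

```python
import math
import random

def w1_empirical_vs_uniform(xs):
    """Exact W1 distance between the empirical measure of xs (points in
    [0,1]) and the uniform law: the integral of |F_hat(t) - t| on [0,1],
    computed piecewise since F_hat is a step function."""
    xs = sorted(xs)
    n = len(xs)
    pts = [0.0] + xs + [1.0]
    total = 0.0
    for i in range(len(pts) - 1):
        a, b = pts[i], pts[i + 1]
        c = i / n  # value of the empirical CDF F_hat on (a, b)
        if c <= a:        # integrate t - c on [a, b]
            total += ((a - c) + (b - c)) * (b - a) / 2
        elif c >= b:      # integrate c - t on [a, b]
            total += ((c - a) + (c - b)) * (b - a) / 2
        else:             # the identity line crosses the level c
            total += ((c - a) ** 2 + (b - c) ** 2) / 2
    return total

random.seed(0)
n, trials = 400, 300
bound = 1.0 / (2 * (math.sqrt(2) - 1) * math.sqrt(n))
mean_w1 = sum(w1_empirical_vs_uniform([random.random() for _ in range(n)])
              for _ in range(trials)) / trials
# the empirical mean sits well below the theoretical bound
assert mean_w1 <= bound
```

For the uniform law the empirical mean comes out noticeably below the bound, consistent with the remark above that the constant $1/(2(\sqrt2-1))$ is not asymptotically optimal for Lebesgue measure.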
\begin{rema}\label{rema:lowerbound} It is not difficult to see that for $\mu$ the Lebesgue measure and an optimal, deterministic approximation $\tilde\mu_n$ with $n=k^d$ Dirac masses, one has \[\operatorname{W}_{q,\infty}(\tilde\mu_n,\mu)\ge \frac{d}{(d+q)2^q} \frac{1}{n^{\frac qd}}\] so that in high dimension, for the $\ell^\infty$ norm and in the worst case $q=1$, our estimate is off by a factor of approximately $4$ compared to a best approximation. With the Euclidean norm, an easy lower bound in the case of the Lebesgue measure is obtained by observing that a mass at most \[ \frac{\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2}+1)} R^d n\] is at distance $R$ or less from one of the $n$ points (be they random or not). This leads, for \emph{any} measure $\tilde\mu_n$ supported on $n$ points, to \[\operatorname{W}_1(\tilde\mu_n,\mu) \ge n\int_0^{R_0} d \frac{\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2}+1)} R^d \dd R = n\frac{d\pi^{\frac{d}{2}}}{(d+1)\Gamma(\frac{d}{2}+1)} R_0^{d+1}\] where $R_0$ is defined by $n\frac{\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2}+1)} R_0^d=1$. Finally, \[W_1(\tilde\mu_n,\mu) \ge \underbrace{\frac{d\Gamma(\frac{d}{2}+1)^{\frac1d}}{(d+1)\sqrt{\pi}}}_{\underset{d\to\infty}{\sim} \sqrt{\frac{d}{2e\pi}}} \cdot \frac{1}{n^{\frac1d}}\] and again our order of magnitude $C_d\asymp \sqrt{d}$ is the correct one. The results of \cite{talagrand1992matching} show that, at least for the bipartite matching problem, these seemingly crude lower bounds are in fact attained asymptotically, taking renormalized limits as $n\to\infty$ and then $d\to\infty$. This indicates that our constants are not optimal, and it would be interesting to have a non-asymptotic bound with optimal asymptotic behavior. \end{rema} \subsection{Decomposition of H\"older functions} The method to prove Theorem \ref{theo:Wq} consists in a multiscale decomposition of the functions $f\in\operatorname{Hol}^{q,\infty}_1$.
In its spirit, it seems quite close to arguments of \cite{boissard2014mean}, \cite{dereich2013constructive} and \cite{fournier2015rate}; our interest is mostly in setting this multiscale analysis in a functional decomposition framework. We fix a positive integer $J$ to be optimized later, representing the depth of the decomposition. For each $j \in \{0,\dots,J\}$, set $\Lambda_j = \{j\} \times \{0,\dots, 2^{j}-1\}^d$; then define $\Lambda = \bigcup_{j=0}^J \Lambda_j$, acting as the set of indices for the decomposition. For each $j \in \{0,\dots,J\}$, let $\{C_\lambda : \lambda\in\Lambda_j\}$ be the regular decomposition of $[0,1]^d$ into cubes of side-length $2^{-j}$; the boundary points are attributed in an arbitrary (measurable) manner, with the constraint that $\{C_\lambda : \lambda\in\Lambda_j\}$ is a partition of $[0,1]^d$ that refines the previous partition $\{C_\lambda : \lambda\in\Lambda_{j-1}\}$. Denote by $x_\lambda$ the center of the cube $C_\lambda$, and by $\psi_\lambda := \boldsymbol{1}_{C_\lambda}$ the characteristic function of $C_\lambda$ (so that for each $j$, $\sum_{\lambda\in\Lambda_j} \psi_\lambda = \boldsymbol{1}_{[0,1]^d}$). \begin{lemm}\label{lemm:HolDec} For every function $f\in \operatorname{Hol}^{q,\infty}_1$ and all $J$, there exist coefficients $\alpha(\lambda)\in\mathbb{R}$ such that \begin{equation} f = \sum_{j=1}^J \sum_{\lambda\in\Lambda_j} \alpha(\lambda) \psi_\lambda + c + g \label{eq:HolDec} \end{equation} where $c$ is a constant and $g$ is a function $[0,1]^d\to \mathbb{R}$, such that \begin{align*} \lvert \alpha(\lambda)\rvert &\le 2^{-(j+1)q} \qquad \forall \lambda\in\Lambda_j\\ \lVert g\rVert_\infty &\le 2^{-(J+1)q}. \end{align*} \end{lemm} \begin{proof} Replacing $f$ with $f-c$ where $c=f(x_{0,0})$, we may assume that $f$ vanishes at the center $x_{0,0}$ of $C_{0,0}=[0,1]^d$.
Observe that $f\in\operatorname{Hol}^{q,\infty}_1$ then implies that $\lVert f\rVert_\infty \le 2^{-q}$ and $\lvert f(x_\lambda)\rvert \le 2^{-2q}$ for all $\lambda\in\Lambda_1$. For $\lambda\in\Lambda_1$, we define $\alpha(\lambda) = f(x_\lambda)$ and set $f_1 = \sum_{\lambda\in\Lambda_1} \alpha(\lambda) \psi_\lambda$; we have $\lvert \alpha(\lambda)\rvert\le 2^{-2q}$, and the function $f-f_1$ is $\operatorname{Hol}^{q,\infty}_1$ on each $C_\lambda$ and vanishes at $x_\lambda$. Since $C_\lambda$ is a $\lVert\cdot\rVert_\infty$ ball of center $x_\lambda$ and radius $1/4$, it follows that $\lVert f-f_1\rVert_\infty \le 2^{-2q}$ on each $C_\lambda$, and thus on the whole of $[0,1]^d$. Moreover for all $\lambda\in\Lambda_2$ it holds $\lvert (f-f_1)(x_\lambda)\rvert\le 2^{-3q}$. Similarly, we define $f_j:[0,1]^d\to\mathbb{R}$ recursively by setting $\alpha(\lambda)= (f-f_{j-1})(x_\lambda)$ for all $\lambda\in\Lambda_j$ and $f_j=f_{j-1} + \sum_{\lambda\in\Lambda_j} \alpha(\lambda) \psi_\lambda$. Then $\lvert\alpha(\lambda)\rvert \le 2^{-(j+1)q}$ for all $\lambda\in\Lambda_j$ and $\lVert f-f_J\rVert_\infty\le 2^{-(J+1)q}$. \end{proof} \subsection{Wasserstein distance estimation} With the notation of Lemma \ref{lemm:HolDec}, for any $f\in\operatorname{Hol}^{q,\infty}_1$ we have: \begin{align*} \big\lvert \hat\mu_n(f) - \mu(f) \big\rvert &\le 2\lVert g\rVert_\infty + \sum_{j=1}^J \sum_{\lambda\in\Lambda_j} \lvert \alpha(\lambda)\rvert \lvert \hat\mu_n(\psi_\lambda) - \mu(\psi_\lambda) \rvert \\ &\le 2^{1-(J+1)q} + \sum_{j=1}^J 2^{-(j+1)q} \sum_{\lambda\in\Lambda_j} \lvert \hat\mu_n(\psi_\lambda) - \mu(\psi_\lambda) \rvert \end{align*} where the last right-hand term does not depend on $f$ in any way.
We can thus take a supremum and an expectation to obtain \begin{align*} \operatorname{\mathbb{E}}\big[ \operatorname{W}_{q,\infty}(\hat\mu_n,\mu) \big] &\le 2^{1-(J+1)q} + \sum_{j=1}^J 2^{-(j+1)q} \sum_{\lambda\in\Lambda_j} \operatorname{\mathbb{E}}\big[ \lvert \hat\mu_n(\psi_\lambda) - \mu(\psi_\lambda) \rvert \big] \end{align*} \begin{rema} This is the core of the decomposition method. Observe that we used no hypothesis on the $(X_k)$ yet; the method can be applied to any stochastic process for which one can control $\operatorname{\mathbb{E}}[\lvert \hat\mu_n(\psi_\lambda) - \mu(\psi_\lambda) \rvert ]$. \end{rema} Setting $p_\lambda = \mu(\psi_\lambda)$, the random variable $n\hat\mu_n(\psi_\lambda)$ is binomial with parameters $n$ and $p_\lambda$. A standard estimation of the mean absolute deviation yields \begin{align*} \operatorname{\mathbb{E}}\big[ \lvert n\hat\mu_n(\psi_\lambda) - n\mu(\psi_\lambda) \rvert \big] &\le \sqrt{n p_\lambda(1-p_\lambda)} \\ \sum_{\lambda\in \Lambda_j} \operatorname{\mathbb{E}}\big[ \lvert \hat\mu_n(\psi_\lambda) - \mu(\psi_\lambda) \rvert \big] &\le \frac{1}{\sqrt{n}}\sum_{\lambda\in \Lambda_j} \sqrt{p_\lambda} \end{align*} By concavity of the square-root function, we have \begin{equation} 2^{-dj} \sum_{\lambda\in \Lambda_j} \sqrt{p_\lambda} \le \sqrt{2^{-dj} \sum_{\lambda\in \Lambda_j} p_\lambda} = 2^{-\frac{dj}{2}} \label{eq:concavity} \end{equation} and we deduce \begin{align} \sum_{\lambda\in \Lambda_j} \operatorname{\mathbb{E}}\big[ \lvert \hat\mu_n(\psi_\lambda) - \mu(\psi_\lambda) \rvert \big] &\le \frac{2^{\frac{dj}{2}}}{\sqrt{n}} \nonumber\\ \operatorname{\mathbb{E}}\big[ \operatorname{W}_{q,\infty}(\hat\mu_n,\mu) \big] &\le 2^{1-(J+1)q} + \sum_{j=1}^J \frac{2^{j(\frac{d}{2}-q)-q}}{\sqrt{n}}, \label{eq:esp-sum} \end{align} leaving us with the simple task of optimizing the choice of $J$. \subsection{Optimization of the depth parameter} We shall distinguish three cases: $d<2q$, $d=2q$ and $d>2q$.
The first case is only possible for $d=1$, but we leave it phrased this way because for some measures $\mu$ the dimension $d$ of the ambient space can be replaced by the ``dimension'' of the measure itself, see Section \ref{sec:four-corners} for an example. \subsubsection{Small dimension} If $d<2q$, then the sum in \eqref{eq:esp-sum} is bounded independently of $J$ and we can let $J\to\infty$ to obtain: \begin{align} \operatorname{\mathbb{E}}\big[ \operatorname{W}_{q,\infty}(\hat\mu_n,\mu) \big] &\le \frac{2^{-q}}{\sqrt{n}}\sum_{j=1}^\infty 2^{j(\frac{d}{2}-q)} \nonumber\\ &\le \frac{2^{\frac{d}{2}-2q}}{1-2^{\frac{d}{2}-q}}\cdot\frac{1}{\sqrt{n}} \label{eq:small-d} \end{align} In particular, for $d=1$, $q=1$: \begin{equation} \operatorname{\mathbb{E}}\big[ \operatorname{W}_{1}(\hat\mu_n,\mu) \big] \le \frac{1}{2(\sqrt{2}-1)}\cdot\frac{1}{\sqrt{n}} \end{equation} \begin{rema} For $\frac{d}{2}-q$ close to $0$, the constant in \eqref{eq:small-d} goes to infinity; in this regime, for moderate $n$ letting $J\to\infty$ is sub-optimal and one should optimize $J$ in \eqref{eq:esp-sum} as we shall do in the next cases. \end{rema} \subsubsection{Critical dimension} If $d=2q$ (or in fact $d\le 2q$) we can rewrite \eqref{eq:esp-sum} as \[\operatorname{\mathbb{E}}\big[ \operatorname{W}_{q,\infty}(\hat\mu_n,\mu) \big] \le 2^{1-(J+1)q} + \frac{2^{-q} J}{\sqrt{n}}.\] To optimize $J$, we formally differentiate the right-hand side with respect to $J$, equate to zero and solve for $J$. Recalling that $J$ is an integer, and keeping only the leading term (when $n\to\infty$) to simplify, this leads us to choose \[ J = \Big\lfloor \frac{\log_2 n}{2q} \Big\rfloor \] in particular implying $2^{1-(J+1)q} \le 2/\sqrt{n}$.
We deduce the claimed bound \begin{equation} \operatorname{\mathbb{E}}\big[ \operatorname{W}_{q,\infty}(\hat\mu_n,\mu) \big] \le \Big(2 + \frac{\log_2(n)}{2^{q+1}q} \Big) \frac{1}{\sqrt{n}} \lesssim \frac{\log n}{\sqrt{n}} \label{eq:critical-d} \end{equation} immediately implying the bound of Theorem \ref{theomain:W1} for $d=2$ and $q=1$ (where a $\sqrt{2}$ comes from the comparison between the supremum and Euclidean norms): \begin{equation} \operatorname{\mathbb{E}}\big[ \operatorname{W}_{1}(\hat\mu_n,\mu) \big] \le \frac{\log_2(n)+8}{\sqrt{8n}} \end{equation} \subsubsection{Large dimension} If $d>2q$, equation \eqref{eq:esp-sum} becomes \[ \operatorname{\mathbb{E}}\big[ \operatorname{W}_{q,\infty}(\hat\mu_n,\mu) \big] \le 2^{1-(J+1)q} + \frac{2^{J(\frac{d}{2}-q)}-1}{1-2^{q-\frac{d}{2}}} \cdot \frac{1}{2^q\sqrt{n}} \le 2^{1-(J+1)q} + \frac{2^{J(\frac{d}{2}-q)}}{2^q(1-2^{q-\frac{d}{2}})} \cdot \frac{1}{\sqrt{n}} \] Following the same optimization process as in the critical dimension case, we choose $J$ such that \[ \frac12 n^{\frac1d} \Big(\frac{2q(1-2^{q-\frac{d}{2}})}{\frac{d}{2}-q} \Big)^{\frac2d} \le 2^J \le n^{\frac1d} \Big(\frac{2q(1-2^{q-\frac{d}{2}})}{\frac{d}{2}-q} \Big)^{\frac2d}\] leading to \begin{equation*} \operatorname{\mathbb{E}}\big[ \operatorname{W}_{q,\infty}(\hat\mu_n,\mu) \big] \le 2 \Big(\frac{\frac{d}{2}-q}{2q(1-2^{q-\frac{d}{2}})}\Big)^{\frac{2q}{d}} \Big( 1 + \frac{q}{2^q(\frac{d}{2}-q)} \Big) \frac{1}{n^{\frac{q}{d}}} \end{equation*} For $q=1$ and $d\ge 3$, we obtain $\operatorname{\mathbb{E}}\big[ \operatorname{W}_{1,\infty}(\hat\mu_n,\mu) \big] \le C'_d n^{-\frac1d}$ where \begin{align*} C'_d = 2\Big(\frac{\frac{d}{2}-1}{2-2^{2-\frac{d}{2}}}\Big)^{\frac{2}{d}} \Big( 1 + \frac{1}{d-2} \Big) \end{align*} We have notably $C'_4=3$.
Relaxing our bound for $d\ge 4$ to \[C'_d\le 2\Big(\frac{d}{4}\Big)^{\frac{2}{d}} \Big( 1 + \frac{1}{d-2} \Big)\] it is easier to see that it is decreasing (and still takes the value $3$ at $d=4$). We also see that $C'_d\to 2$ as $d\to\infty$. The last part of Theorem \ref{theomain:W1} follows with $C_d = \sqrt{d} C'_d$, and a numerical computation shows $C_3\le 6.3$. \subsection{The four-corners Cantor measure}\label{sec:four-corners} We conclude this section with an example showing that the critical case order $\log n/\sqrt{n}$ is sharp if one generalizes its scope. The \emph{four-corners} Cantor set $K$ is the compact subset of the plane defined as the attractor of the Iterated Function System $(T_1,T_2,T_3,T_4)$ where the $T_i$ are homotheties of ratio $1/4$ centered at $(0,0)$, $(0,1)$, $(1,1)$ and $(1,0)$ (see Figure \ref{fig:4corners}). It has a natural measure $\mu_K$, which can be defined as the fixed point of the map \begin{align*} \mathcal{T} \colon \operatorname{\mathcal{P}}([0,1]^2) &\to \operatorname{\mathcal{P}}([0,1]^2) \\ \nu &\mapsto \frac14 (T_1)_*\nu + \frac14 (T_2)_*\nu + \frac14 (T_3)_*\nu + \frac14 (T_4)_*\nu \end{align*} ($\mathcal{T}$ is contracting in the complete metric $\operatorname{W}_1$, so that it has a unique fixed point). The measure $\mu_K$ can also be described as follows. In the $4$-adic decomposition of the square, at depth $j>0$ there are $16^j$ squares, among which $4^j$ intersect $K$ in their interior; $\mu_K$ gives each of these squares a mass $1/4^j$. \begin{figure} \centering \includegraphics[scale=.3]{4corners.pdf} \caption{Second stage of the construction of the four-corners Cantor set (contained in the filled black area).} \label{fig:4corners} \end{figure} $K$ has Hausdorff dimension $1$ (and positive, finite $1$-dimensional Hausdorff measure), and one should expect $\mu_K$ to have dimension $d=1$ in any reasonable sense of the term.
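For illustration, $\mu_K$ can be sampled by the natural random walk generated by the IFS (the ``chaos game''), whose transition kernel is exponentially contracting in $\operatorname{W}_1$ with $\theta=1/4$; note that each iterate has a law singular with respect to $\mu_K$, so no warm-start hypothesis can hold, as discussed in the introduction. A minimal pure-Python sketch (the burn-in length and seed are arbitrary choices):

```python
import random

def sample_four_corners(n, burn_in=50, seed=0):
    """Approximate samples of the four-corners measure mu_K via the chaos
    game: iterate x -> T_i(x), with T_i a homothety of ratio 1/4 centered
    at a uniformly chosen corner of the unit square.  After burn_in steps
    the iterate is within 4**(-burn_in) of the attractor K."""
    rng = random.Random(seed)
    corners = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
    x, y = 0.5, 0.5
    out = []
    for k in range(burn_in + n):
        cx, cy = rng.choice(corners)
        x, y = cx + (x - cx) / 4, cy + (y - cy) / 4
        if k >= burn_in:
            out.append((x, y))
    return out

pts = sample_four_corners(1000)
# already after one step, every point lies in one of the four corner squares
assert all(x <= 0.25 or x >= 0.75 for x, _ in pts)
assert all(y <= 0.25 or y >= 0.75 for _, y in pts)
```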
It is thus interesting to have a look at $\operatorname{W}_{q}(\hat\mu_n,\mu_K)$ in the critical case $q=1/2$. \begin{prop} If $(X_k)_{k\ge0}$ are i.i.d. of law $\mu_K$, then \[\operatorname{\mathbb{E}}\big[ \operatorname{W}_{\frac12}(\hat\mu_n,\mu_K) \big] \asymp \frac{\log n}{\sqrt{n}}.\] \end{prop} \begin{proof} The proof of the upper bound follows the proof of Theorem \ref{theo:Wq}, using a $4$-adic decomposition and discarding all $\lambda$ such that $C_\lambda$ does not intersect $K$ in its interior. This replaces $d$ by $1$ as there are $4^j$ relevant squares of size $4^{-j}$ (indeed the only place where $d$ is used is in \eqref{eq:concavity}, only through the number of dyadic squares to be considered), so that with $q=1/2$ we end up in the critical case. To prove the lower bound, we first record the proportions $p_1,p_2,p_3,p_4$ of the random points $X_k$ lying in each of the four relevant depth-one squares (of side-length $1/4$). For large $n$, each $p_i$ is close to $1/4$ with typical fluctuations of the order of $1/\sqrt{n}$. The discrepancy of mass in each of these squares compared to the mass $1/4$ given to each of them by $\mu_K$ induces a cost of at least $1/\sqrt{2n}$, since the distance between depth-one squares is at least $1/2$ and $q=1/2$. The same reasoning applies at depth two inside each depth-one square, but with $np_i \simeq n/4$ points, thus fluctuations are of the order of $1/\sqrt{n/4}=2/\sqrt{n}$, inducing a total cost of the order of $1/\sqrt{2n}$ (distances are now $1/4\times 1/2$, and a square root is taken since $q=1/2$). The fact that the number of points is $n p_i$ rather than precisely $n/4$ is not an issue, an uneven distribution improving the bound. At each depth $j$ up to $\log_4 n$, there is a typical induced cost of the order of $1/\sqrt{n}$ from the uneven distribution of points among the $4$ subsquares of each depth $j$ square, yielding the desired bound of the order of $\log n/\sqrt{n}$.
\end{proof} \section{Wavelet decomposition and convergence against regular test functions} \label{sec:main} \subsection{Wavelet decomposition}\label{sec:wavelets} Let us give a short account of the results about wavelets we will use (see e.g. Meyer's book \cite{meyer1992wavelets} for proofs and references). It will be convenient to use wavelets of compact support with arbitrary regularity $\Ck{r}$, whose construction is due to Daubechies \cite{daubechies1988orthonormal}. The construction yields \emph{compactly supported} functions $\phi,\psi^\epsilon:\mathbb{R}^d\to\mathbb{R}$, where $\epsilon$ takes any of $2^d-1$ values ($\epsilon\in E:=\{0,1\}^d\setminus\{(0,0,\dots,0)\}$), with particular properties, of which we will describe only those we use. One defines from these ``father and mother'' wavelets a larger family of \emph{wavelets} by \begin{align} \phi_\tau(x) &=\phi(x-\tau), && (\tau \in\mathbb{Z}^d) \nonumber\\ \psi_\lambda(x) &= 2^{\frac{dj}{2}}\psi^\epsilon(2^j x - \tau), && (\lambda=(j,\tau,\epsilon) \in \Lambda = \mathbb{Z}\times \mathbb{Z}^d\times E); \label{eq:psi} \end{align} one important property of the construction is that the union of $(\phi_\tau)_{\tau\in\mathbb{Z}^d}$ and $(\psi_\lambda)_{\lambda\in\Lambda}$ forms an \emph{orthonormal basis} of $L^2(\mathbb{R}^d)$. For $f\in L^2(\mathbb{R}^d)$ we can thus write \[f = \sum_{\tau \in \mathbb{Z}^d} \langle f,\phi_\tau\rangle \phi_\tau + \sum_{j=0}^\infty \sum_{\lambda\in\Lambda_j} \langle f,\psi_\lambda\rangle \psi_\lambda \] where $\Lambda_j = \{j\}\times \mathbb{Z}^d\times E$ and $\langle\cdot,\cdot\rangle$ denotes the $L^2$ scalar product (with respect to Lebesgue measure). One stunning property is that many functional spaces can be \emph{characterized} in terms of the wavelet coefficients $\alpha(\lambda)=\langle f,\psi_\lambda\rangle$ and $\beta(\tau)=\langle f,\phi_\tau\rangle$. We shall only use upper bounds on the $\alpha(\lambda)$ and $\beta(\tau)$ in a specific case.
The H\"older space $\Ck{s}$ is defined as the space of $k$ times continuously differentiable with $\gamma$-H\"older partial derivatives of order $k$, with $k$ a non-negative integer, $\gamma\in(0,1]$ and $k+\gamma=s$ (e.g. $\Ck{1}$ is the space of Lipschitz functions, $\Ck{3/2}$ the space of once continuously differentiable functions with $1/2$-H\"older first-order partial derivatives, $\Ck{5}$ is the space of four-times continuously differentiable functions with Lipschitz fourth-order partial derivatives, etc.). Note that ``$1$-H\"older'', meaning ``Lipschitz'', could be slightly enlarged to ``Zygmund'' (and should, if one is interested in two-sided bounds), but we need not enter this subtlety here. The space $\Ck{s}$ is endowed with the norm \[\lVert f\rVert_{\Ck{s}} = \max_{j\in\{0,\dots,k\}} \max_{\omega\in \{1,\dots,d\}^{j} } \Big\lVert \frac{\partial^j f}{\partial x_{\omega_1}\cdots\partial x_{\omega_j}}\Big\lVert_\star \] where the decomposition $s=k+\gamma$ is defined as above and $\lVert\cdot\rVert_\star$ is the uniform norm if $j<k$ and is the $\gamma$-H\"older constant if $j=k$. We denote by $\Cku{s}$ the set of functions with $\Ck{s}$ norm at most $1$. If the regularity of the wavelets is larger than the regularity of the considered H\"older space ($r>s$) then \begin{alignat*}{2} \lvert \beta(\tau) \rvert &\le C_{d,s}\lVert f\rVert_\infty &&\quad\forall \tau\in\mathbb{Z}^d \\ \lvert \alpha(\lambda)\rvert &\le C_{d,s} \lVert f \rVert_{\Ck{s}} 2^{-\frac{dj}{2}} 2^{-js} &&\quad \forall \lambda\in\Lambda_j, \end{alignat*} where the constant $C_{d,s}$ depends implicitely on the choice of father and mother wavelets $\phi$ and $\psi^\epsilon$; but we can fix for each $s$ such a choice with suitable regularity, e.g. $r=s+1$ and the constants then truly depends only on $d$ and $s$. The $\Ck{s}$ norm in the $\alpha(\lambda)$ coefficient could be relaxed to the ``regularity part'' of the norm but we do not use this. 
Note that the explicit computation of these constants would in particular need a very fine analysis of the chosen wavelet construction, and I do not know whether such a task has been conducted. \subsection{Decomposition of regular functions} Let us now use wavelet decomposition to prove good convergence properties for the empirical measure against smooth enough test functions; the strategy is similar to the one used in Section \ref{sec:W1}. We assume here that $(X_k)_{k\ge0}$ is a sequence of i.i.d. random variables whose law $\mu$ is supported on a bounded set $\Omega\subset \mathbb{R}^d$ (e.g. $\Omega=[0,1]^d$); note that $\Cku{s}=\Cku{s}(\mathbb{R}^d)$ makes no reference to $\Omega$. We consider a fixed family of wavelets of regularity $r>s$ as in Section \ref{sec:wavelets}; all constants $C$ below implicitly depend on $d$, $s$ and $\Omega$ (only through its diameter). Since the wavelets have compact support, there exists a constant $C$ such that for each $j$: \begin{itemize} \item for each point $x\in\mathbb{R}^d$, there are at most $C$ different $\lambda$ corresponding to a $\psi_\lambda$ that does not vanish at $x$; the set of those $\lambda$ is denoted by $\Lambda_j(x)\subset\Lambda_j$, \item the union $\Lambda_j(\Omega) := \bigcup_{x\in\Omega} \Lambda_j(x)$ has at most $C 2^{dj}$ elements. \end{itemize} We denote by $Z$ the set of parameters $\tau\in\mathbb{Z}^d$ corresponding to a $\phi_\tau$ whose support intersects $\Omega$ (observe that $Z$ is finite). We fix a function $f\in\Cku{s}$ and decompose it in our wavelet basis: \[f = \sum_{\tau \in \mathbb{Z}^d} \beta(\tau) \phi_\tau + \sum_{j=0}^\infty \sum_{\lambda\in\Lambda_j} \alpha(\lambda) \psi_\lambda \] with \begin{alignat*}{2} \lvert \beta(\tau) \rvert &\lesssim 1 &&\quad \forall \tau\in\mathbb{Z}^d\\ \lvert \alpha(\lambda)\rvert &\lesssim 2^{-\frac{dj}{2}} 2^{-js} &&\quad \forall \lambda\in\Lambda_j.
\end{alignat*} Cutting the second term of the decomposition at some depth $J$, we get: \[f = \sum_{\tau \in Z} \beta(\tau) \phi_\tau + \sum_{j=0}^J \sum_{\lambda\in\Lambda_j} \alpha(\lambda) \psi_\lambda + g\] where \[ g = \sum_{\tau \notin Z} \beta(\tau) \phi_\tau + \sum_{j>J} \sum_{\lambda\in\Lambda_j} \alpha(\lambda) \psi_\lambda.\] Using the bound on the $\alpha$ coefficients and the formula \eqref{eq:psi} for $\psi_\lambda$, we get: \[\lVert g\boldsymbol{1}_{\Omega}\rVert_\infty \lesssim 2^{-s J}\] and it follows that: \[ \lvert \hat\mu_n(f) - \mu(f) \rvert \lesssim 2^{-Js} + \sum_{\tau\in Z} \lvert \hat\mu_n(\phi_\tau)-\mu(\phi_\tau)\rvert + \sum_{j=0}^J \sum_{\lambda\in\Lambda_j(\Omega)} 2^{-(\frac{d}{2}+s)j} \lvert \hat\mu_n(\psi_\lambda) - \mu(\psi_\lambda) \rvert \] where the right-hand side does not depend on $f$. Taking a supremum and an expectation, we obtain: \begin{equation} \operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu\rVert_{\Cku{s}} \big] \lesssim 2^{-sJ} + \sum_{\tau\in Z} \operatorname{\mathbb{E}}\big[ \lvert \hat\mu_n(\phi_\tau)-\mu(\phi_\tau)\rvert\big] + \sum_{j=0}^J \sum_{\lambda\in\Lambda_j(\Omega)} 2^{-(\frac{d}{2}+s)j} \operatorname{\mathbb{E}}\big[ \lvert \hat\mu_n(\psi_\lambda) - \mu(\psi_\lambda) \rvert\big] \label{eq:wavelet1} \end{equation} and to conclude, we simply need to estimate the last two terms above.
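Each of these terms is the expected deviation of an empirical average of bounded i.i.d. random variables. As a simple sanity check of the expected $1/\sqrt n$ behaviour (illustration only, with $f(x)=x$ and $X\sim\mathrm{Unif}[0,1]$ standing in for a fixed basis function), one can verify numerically the bound $\operatorname{\mathbb{E}}\lvert Z\rvert\le\sqrt{\operatorname{\mathbb{E}} Z^2}=\sqrt{\operatorname{Var}(f(X))/n}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_abs_dev(n, reps=4000):
    # Monte Carlo estimate of E|mu_hat_n(f) - mu(f)| for f(x) = x, X ~ Unif[0, 1]
    samples = rng.random((reps, n))
    return np.mean(np.abs(samples.mean(axis=1) - 0.5))

# E|Z| <= sqrt(E Z^2) = sqrt(Var(f(X)) / n) = sqrt(1 / (12 n))
for n in (100, 400, 1600):
    assert mean_abs_dev(n) <= (1.0 / (12 * n)) ** 0.5
```

Quadrupling $n$ roughly halves the expected deviation, consistent with the $C/\sqrt n$ rate established below.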
\subsection{Convergence for basis elements} \begin{lemm}\label{lemm:varsum} We have \begin{align*} \sum_{\tau\in Z} \operatorname{\mathbb{E}}\big[ \lvert \hat\mu_n(\phi_\tau)-\mu(\phi_\tau)\rvert\big] &\lesssim \frac{1}{\sqrt{n}} \\ \text{and}\qquad \sum_{\lambda\in\Lambda_j(\Omega)} \operatorname{\mathbb{E}}\big[ \lvert \hat\mu_n(\psi_\lambda) - \mu(\psi_\lambda) \rvert\big] &\lesssim \frac{2^{dj}}{\sqrt{n}} \end{align*} \end{lemm} \begin{proof} For each $\tau\in Z$, the random variable $\hat \mu_n(\phi_\tau)$ is the average of $n$ independent, identically distributed, bounded random variables of expectation $\mu(\phi_\tau)$, so that $\operatorname{\mathbb{E}}\big[ \lvert \hat \mu_n(\phi_\tau)-\mu(\phi_\tau)\rvert\big] \le C/\sqrt{n}$. Since $Z$ is finite, the first claim is proved. To prove the second claim, we cannot argue in the exact same way because $\psi_\lambda$ depends on $j$. To ease notation we introduce $\bar\psi_\lambda := 2^{-\frac{dj}{2}} \psi_\lambda$ and $Y_\lambda := \hat\mu_n(\bar\psi_\lambda)-\mu(\bar\psi_\lambda)$, and recall that $\bar\psi_\lambda$ is bounded independently of $j$. Also, a bounded number of different $\bar\psi_\lambda$ ($\lambda\in\Lambda_j$) are non-zero at any point $x\in\Omega$; we denote by $p_\lambda$ the mass given by $\mu$ to the support of $\psi_\lambda$ and observe that $Y_\lambda$ is the average of $n$ i.i.d. centered random variables of variance less than $Cp_\lambda + \mu(\bar\psi_\lambda)^2$. We have \[\operatorname{Var}(Y_\lambda) \le \frac1n\big(C p_\lambda + \mu(\bar\psi_\lambda)^2\big) \qquad \sum_{\lambda\in\Lambda_j(\Omega)} p_\lambda \lesssim 1 \qquad \sum_{\lambda\in\Lambda_j(\Omega)} \mu(\bar\psi_\lambda) \lesssim 1 \] so that \begin{align*} \sum_{\lambda\in\Lambda_j(\Omega)} \operatorname{Var}(Y_\lambda) &\le \frac1n \Big( C \sum_{\lambda\in\Lambda_j(\Omega)} p_\lambda + \big(\sum_{\lambda\in\Lambda_j(\Omega)} \mu(\bar\psi_\lambda)\big)^2 \Big)\\ &\lesssim \frac1n.
\end{align*} It then follows that \begin{align*} \sum_{\lambda\in\Lambda_j(\Omega)} \operatorname{\mathbb{E}}\big[ \lvert \hat\mu_n(\psi_\lambda) - \mu(\psi_\lambda) \rvert\big] &= 2^{\frac{dj}{2}} \sum_{\lambda\in\Lambda_j(\Omega)} \operatorname{\mathbb{E}}\big[ \lvert Y_\lambda \rvert\big] \\ &\le 2^{\frac{dj}{2}}\sum_{\lambda\in\Lambda_j(\Omega)} \sqrt{\operatorname{\mathbb{E}}\big[ Y_\lambda^2 \big]} \\ &\le 2^{\frac{dj}{2}}\sqrt{\lvert \Lambda_j(\Omega)\rvert} \sqrt{\sum_{\lambda\in\Lambda_j(\Omega)} \operatorname{Var}(Y_\lambda)}\\ &\lesssim \frac{2^{dj}}{\sqrt{n}}. \end{align*} \end{proof} \begin{rema} Lemma \ref{lemm:varsum} is the only place where we use that the $(X_k)_{k\in\mathbb{N}}$ are i.i.d. The method can therefore be applied to any stochastic process satisfying the conclusion of Lemma \ref{lemm:varsum}. \end{rema} \subsection{Conclusion of the proof} Plugging Lemma \ref{lemm:varsum} into \eqref{eq:wavelet1} yields \begin{align*} \operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu\rVert_{\Cku{s}} \big] &\lesssim 2^{-Js} + \frac{1}{\sqrt{n}} \sum_{j=0}^J \big(2^{\frac{d}{2}-s}\big)^j \end{align*} and we get the same trichotomy as before. If $s > d/2$, then we can let $J\to\infty$ to obtain \[\operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu\rVert_{\Cku{s}} \big] \le \frac{C}{\sqrt{n}},\] if $s=d/2$ we can take $J$ such that $2^{-Js}\simeq 1/\sqrt{n}$ and get \[\operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu\rVert_{\Cku{s}} \big] \le C \frac{\log n}{\sqrt{n}},\] and if $s< d/2$ we can choose $J$ such that $2^J\simeq n^{\frac1d}$ to get \[\operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu\rVert_{\Cku{s}} \big] \le \frac{C}{n^{s/d}},\] ending the proof of Theorem \ref{theomain:reg}.
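The trade-off governing the choice of $J$ can be explored numerically. The following sketch (all constants set to $1$, purely illustrative) minimizes the right-hand side $2^{-sJ} + n^{-1/2}\sum_{j=0}^J 2^{(d/2-s)j}$ over integer $J$ and reads off the resulting rate as a log-log slope, recovering $n^{-1/2}$ when $s>d/2$ and $n^{-s/d}$ when $s<d/2$.

```python
import math

def bound(J, n, s, d):
    # right-hand side of the estimate, with all constants set to 1
    tail = 2.0 ** (-s * J)
    series = sum(2.0 ** ((d / 2 - s) * j) for j in range(J + 1))
    return tail + series / math.sqrt(n)

def best(n, s, d):
    # optimize the truncation depth J for a given sample size n
    return min(bound(J, n, s, d) for J in range(1, 200))

def rate(s, d, n1=10 ** 4, n2=10 ** 8):
    # empirical log-log slope of the optimized bound between n1 and n2
    return (math.log(best(n2, s, d)) - math.log(best(n1, s, d))) / (math.log(n2) - math.log(n1))

print(rate(2, 2))   # s > d/2: close to -1/2
print(rate(1, 4))   # s < d/2: close to -s/d = -1/4
```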
\section{Markov chains}\label{sec:MC} In this section we assume $(X_k)_{k\ge 0}$ is a Markov chain on a bounded domain; since we will use Fourier series, it will make things simpler to embed this domain into a torus, so we assume $\Omega\subset \mathbb{T}^d = \mathbb{R}^d/\mathbb{Z}^d$ (we do not lose generality in doing so, as scaling down $\Omega$ makes it possible to make the embedding isometric). We still denote by $\lVert x-y\rVert$ the distance between two points induced by the Euclidean norm. Our main assumption is that the iterated transition kernel of $(X_k)_{k\ge0}$, defined by \[ m_x(A) = \operatorname{\mathbb{P}}(X_{k+1}\in A \mid X_k=x) \qquad m_x^t(A) = \operatorname{\mathbb{P}}(X_{k+t}\in A \mid X_k=x)\] is exponentially contracting in $\operatorname{W}_1$, i.e. there are constants $D\ge 1$ and $\theta\in(0,1)$ such that \begin{equation} \operatorname{W}_1(m_x^t,m_y^t) \le D\theta^t \lVert x-y\rVert. \label{eq:contraction} \end{equation} Let us denote by $\op{L}$ the averaging operator, i.e. \[\op{L} f (x) = \int f(y) \dd m_x(y)\] and by $\op{L}^*$ its dual acting on probability measures, i.e. $\op{L}^*\nu$ is the law of $X_{k+1}$ conditioned on $X_k$ having law $\nu$. The linearity of $\operatorname{W}_1$ enables one to rewrite \eqref{eq:contraction} as \begin{equation} \operatorname{W}_1(\op{L}^{*t}\nu_0,\op{L}^{*t}\nu_1) \le D\theta^t \operatorname{W}_1(\nu_0,\nu_1) \label{eq:contraction2} \end{equation} so that there is a unique stationary measure $\mu$, and the law of $X_k$ converges exponentially fast (in $\operatorname{W}_1$) to $\mu$, whatever the law of $X_0$ is. We shall prove Theorem \ref{theomain:Markov}, which we restate for convenience.
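Before doing so, it may help to keep in mind a minimal example of a kernel satisfying \eqref{eq:contraction}: the auto-regressive chain $X_{k+1}=\theta X_k+(1-\theta)U_k$ with i.i.d. uniform noise $U_k$, for which driving two copies by the same noise contracts distances by exactly $\theta$ per step, so that \eqref{eq:contraction} holds with $D=1$. The Python sketch below (on $[0,1]\subset\mathbb{R}$ rather than the torus, illustration only) verifies this synchronous coupling.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.7
t, n_samples = 10, 1000

# two copies of the chain started at x = 0 and y = 1, driven by the SAME noise
x = np.zeros(n_samples)
y = np.ones(n_samples)
for _ in range(t):
    u = rng.random(n_samples)
    x = theta * x + (1 - theta) * u
    y = theta * y + (1 - theta) * u

# under this coupling X_t - Y_t = theta^t (x_0 - y_0) deterministically,
# so W_1(m_x^t, m_y^t) <= theta^t |x - y|, i.e. the contraction holds with D = 1
assert np.allclose(np.abs(x - y), theta ** t)
```

Any coupling gives an upper bound on $\operatorname{W}_1$, which is all that \eqref{eq:contraction} requires.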
\begin{theo}\label{theo:Markov-precise} For some constant $C=C(\Omega,d,D,s)$ and all large enough $n$, letting $\bar n=(1-\theta)n$, we have \begin{equation} \operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \big] \le C \begin{dcases*} \frac{(\log \bar n)^{\frac{d}{2s+1}}}{\sqrt{\bar n}} & when $s > d/2$\\[2\jot] \frac{\log \bar n}{\sqrt{\bar n}} & when $s=d/2$ \\[2\jot] \frac{(\log \bar n)^{d-2s+\frac sd}}{\bar n^{\frac sd}} & when $s < d/2$ \end{dcases*} \label{eq:theo-Markov} \end{equation} \end{theo} Following the decomposition method, we shall find a suitable decomposition basis for any $f\in\Cku{s}$, seeking a compromise between the precision of a truncated decomposition and the number of basis elements. Here using wavelets seems inefficient, as we do not have a precise enough analogue of Lemma \ref{lemm:varsum}, which uses independence to take advantage of the localization property of wavelets; without this, the number and size of the $\psi_\lambda$ are overwhelming. We shall use Fourier series instead, as they will be more easily controlled under our assumptions. For simplicity we consider complex-valued functions here, and denote the Fourier basis by $e_k(x) := e^{2i\pi k\cdot x}$ where $k\in\mathbb{Z}^d$ and the dot $\cdot$ denotes the canonical inner product. The key is thus to control $\lvert \hat\mu_n(e_k) - \mu(e_k) \rvert$; our hypothesis may seem perfectly suited to this since $e_k$ is Lipschitz, but its Lipschitz constant grows too rapidly with $k$ for a direct approach to be efficient. We shall combine the following two observations (the first of which is pretty trivial, the second of which is folklore). \begin{lemm}\label{lemm:Hol-ek} For all $\alpha\in(0,1)$, we have the following control of $e_k$'s $\alpha$-H\"older constant: \[\operatorname{Hol}_\alpha(e_k) \lesssim \lvert k\rvert_\infty^\alpha\] where $\lvert k\rvert_\infty = \max \big\{\lvert k_i\rvert : i\in\{1,\dots,d\} \big\}$.
\end{lemm} \begin{proof} We have $\operatorname{Lip}(e_k) \le 2\pi \sqrt{d} \lvert k\rvert_\infty$ and $\lVert e_k\rVert_\infty\le 1$ so that for all $x\neq y\in\mathbb{T}^d$: \[ \frac{\lvert e_k(x)-e_k(y) \rvert}{\lVert x-y\rVert^\alpha} \le \min\Big( \frac{2}{\lVert x-y\rVert^\alpha}, 2\pi\sqrt{d}\lvert k\rvert_\infty \lVert x-y\rVert^{1-\alpha} \Big) \le 2\pi^\alpha d^{\frac\alpha2}\lvert k\rvert_\infty^\alpha. \] \end{proof} \begin{lemm}\label{lemm:correlations} For all $\alpha\in(0,1]$, denoting by $\operatorname{W}_\alpha$ the $\alpha$-Wasserstein metric (i.e. the $1$-Wasserstein metric associated with the modified distance $\lVert \cdot\rVert^\alpha$), we have \begin{equation} \operatorname{W}_\alpha(\op{L}^{*t}\nu_0,\op{L}^{*t}\nu_1) \le D^\alpha\theta^{\alpha t} \operatorname{W}_\alpha(\nu_0,\nu_1) \label{eq:contraction-alpha} \end{equation} As a consequence, for all $\alpha$-H\"older functions $f:\Omega\to \mathbb{C}$ and all $\ell,m\in\mathbb{N}$ it holds that \begin{align*} \big\lvert \operatorname{\mathbb{E}}[f(X_\ell)]-\mu(f) \big\rvert &\lesssim \operatorname{Hol}_\alpha(f) \, \theta^{\alpha \ell} \\ \big\lvert \operatorname{\mathbb{E}}[f(X_m)f(X_\ell)] - \operatorname{\mathbb{E}}[f(X_m)] \operatorname{\mathbb{E}}[f(X_\ell)] \big\rvert &\lesssim \operatorname{Hol}_\alpha(f)^2 \, \theta^{\alpha \lvert m-\ell \rvert} \end{align*} where the implied constants depend only on $\Omega$ and the constants $D$ and $\theta$ in \eqref{eq:contraction}.
\end{lemm} \begin{proof} By linearity we only have to check \eqref{eq:contraction-alpha} when $\nu_0=\delta_x$ and $\nu_1=\delta_y$ for some $x,y\in\Omega$, and by concavity \[ \operatorname{W}_\alpha(\op{L}^{*t}\delta_x,\op{L}^{*t}\delta_y)\le \big( \operatorname{W}_1(\op{L}^{*t}\delta_x,\op{L}^{*t}\delta_y) \big)^\alpha \le D^\alpha\theta^{\alpha t} \lVert x-y\rVert^\alpha = D^\alpha \theta^{\alpha t}\operatorname{W}_\alpha(\delta_x,\delta_y).\] To prove convergence toward the average and decay of correlations, we first use the contraction and the fact that $\mu$ is the stationary measure to get \begin{align*} \big\lvert \op{L}^t f(x) - \mu(f)\big\rvert &= \Big\lvert \int \op{L}^tf \dd\delta_x -\int f \dd\mu \Big\rvert \\ &= \Big\lvert \int f \dd\big( \op{L}^{*t}\delta_x\big) -\int f \dd \big(\op{L}^{*t} \mu \big) \Big\rvert \\ &\le \operatorname{Hol}_\alpha(f) \operatorname{W}_\alpha(\op{L}^{*t} \delta_x,\op{L}^{*t}\mu) \\ &\le \operatorname{Hol}_\alpha(f) \, D^\alpha\theta^{\alpha t} \operatorname{W}_\alpha(\delta_x,\mu) \\ \big\lvert \op{L}^t f(x) - \mu(f)\big\rvert &\lesssim \operatorname{Hol}_\alpha(f) \, \theta^{\alpha t}. \end{align*} Assuming without loss of generality $\mu(f)=0$, we have $\lVert f\rVert _\infty\lesssim \operatorname{Hol}_\alpha(f)$ ($\mu(f)=0$ implies that $f$ takes both non-positive and non-negative values, and $\Omega$ is bounded). Assume further $m\ge \ell$ and write $m=\ell+t$.
Combining all previous observations we get: \begin{align*} \lVert \op{L}^t f\rVert_\infty &\lesssim \operatorname{Hol}_\alpha(f) \, \theta^{\alpha t},\\ \big\lvert \operatorname{\mathbb{E}}[f(X_m)] \big\rvert &= \big\lvert \operatorname{\mathbb{E}}\big[\op{L}^m f(X_0) \big] \big\rvert \\ &\lesssim \operatorname{Hol}_\alpha(f) \, \theta^{\alpha m},\\ \big\lvert \operatorname{\mathbb{E}}[f(X_\ell)] \big\rvert &\lesssim \operatorname{Hol}_\alpha(f) \, \theta^{\alpha \ell},\\ \big\lvert\operatorname{\mathbb{E}}[f(X_m)f(X_\ell)] \big\rvert &= \big\lvert \operatorname{\mathbb{E}}\big[\op{L}^t f(X_\ell) \, f(X_\ell) \big] \big\rvert \\ &\lesssim \lVert \op{L}^t f\rVert_\infty \operatorname{\mathbb{E}}[\lvert f(X_\ell)\rvert] \\ &\lesssim \operatorname{Hol}_\alpha(f)^2 \theta^{\alpha t} \end{align*} and the conclusion follows. \end{proof} We deduce the following from these two lemmas. \begin{coro} For all $k,\alpha$ and all $n\ge 1/(1-\theta^\alpha)$ it holds that \[\operatorname{\mathbb{E}}\big[\lvert \hat\mu_n(e_k)-\mu(e_k)\rvert^2\big] \lesssim \frac{\lvert k\rvert_\infty^{2\alpha}}{(1-\theta^\alpha)n} \] \end{coro} \begin{proof} We have: \begin{align*} \operatorname{\mathbb{E}}\big[\lvert \hat\mu_n(e_k)-\mu(e_k)\rvert^2\big] &= \operatorname{\mathbb{E}}\Big[ \Big(\frac1n\sum_{\ell=1}^n e_k(X_\ell)-\mu(e_k) \Big)^2 \Big] \\ &=\frac{1}{n^2}\sum_{1\le \ell,m\le n} \operatorname{\mathbb{E}}[ e_k(X_\ell) e_k(X_m) ] - \frac2n \sum_{\ell=1}^n \operatorname{\mathbb{E}}[ e_k(X_\ell) ] \mu(e_k) + \mu(e_k)^2 \\ &\le \frac{1}{n^2} \Big( \sum_{1\le \ell,m\le n} \operatorname{\mathbb{E}}[ e_k(X_\ell)] \operatorname{\mathbb{E}}[ e_k(X_m) ] + C\operatorname{Hol}_\alpha(e_k)^2 \, \theta^{\alpha \lvert \ell-m\rvert} \Big) \\ &\qquad\qquad - \frac2n \sum_{\ell=1}^n \operatorname{\mathbb{E}}[ e_k(X_\ell) ] \mu(e_k) + \mu(e_k)^2 \\ &\le \frac{C\operatorname{Hol}_\alpha(e_k)^2}{n^2} \sum_{1\le \ell,m\le n} \, \theta^{\alpha \lvert \ell-m\rvert} + \frac{1}{n^2} \Big(\sum_{\ell=1}^n
\big(\operatorname{\mathbb{E}}[ e_k(X_\ell) ] -\mu(e_k)\big) \Big)^2 \\ &\lesssim \frac{\operatorname{Hol}_\alpha(e_k)^2}{n^2}\cdot \sum_{\ell=1}^n 2\sum_{t=0}^\infty \theta^{\alpha t} + \frac{\operatorname{Hol}_\alpha(e_k)^2}{n^2}\Big(\sum_{\ell=1}^n \theta^{\alpha\ell} \Big)^2 \\ &\lesssim \frac{\operatorname{Hol}_\alpha(e_k)^2}{n^2}\cdot \frac{n}{1-\theta^\alpha} + \frac{\operatorname{Hol}_\alpha(e_k)^2}{n^2(1-\theta^{\alpha})^2} \\ &\lesssim \frac{\lvert k\rvert_\infty^{2\alpha}}{(1-\theta^\alpha)n} \end{align*} whenever $n\ge 1/(1-\theta^\alpha)$. \end{proof} Fix some threshold $J\ge 3$ and some exponent $\alpha\in(0,1]$, to be determined explicitly later on. Let $f:\mathbb{T}^d \to \mathbb{R}$ be in $\Cku{s}$. From the multidimensional version of Jackson's theorem \cite{schultz1969multivariate}, we know that there is a trigonometric polynomial $T_J(f)$, a linear combination of the $e_k$ for $\lvert k\rvert_\infty\le J$, such that \[\lVert f-T_J(f)\rVert_\infty \lesssim \frac{1}{J^s}.\] We have no clear control on the coefficients of this optimal trigonometric polynomial, which need not be the Fourier coefficients of $f$.
But it is also known that the truncated Fourier series of $f$ is within a factor $\simeq (\log J)^d$ of the best approximation (see \cite{mason1980near-best} for an optimal constant), so that denoting by $F_J(f) := \sum_{\lvert k\rvert_\infty \le J} \hat f_k e_k$ the $J$-truncation of the Fourier series of $f$, we get \[\lVert f-F_J(f)\rVert_\infty \lesssim \frac{(\log J)^d}{J^s}.\] We can assume $\hat f_0=0$ by translating $f$, and what precedes yields: \begin{align} \lvert \hat\mu_n(f)-\mu(f)\rvert &\le \lvert \hat\mu_n(f)-\hat\mu_n(F_J(f))\rvert + \lvert \hat\mu_n(F_J(f)) -\mu(F_J(f))\rvert + \lvert \mu(F_J(f))-\mu(f)\rvert \nonumber\\ &\le 2\lVert f-F_J(f)\rVert_\infty + \sum_{0<\lvert k\rvert_\infty\le J} \lvert \hat f_k\rvert \lvert \hat\mu_n(e_k) - \mu(e_k)\rvert \label{eq:Markov-line2}\\ &\lesssim \frac{(\log J)^d}{J^s} + \Big( \sum_{0<\lvert k\rvert_\infty\le J} \lvert \hat f_k\rvert^2 \lvert k\rvert_\infty^{2s} \Big)^{\frac12} \Bigg(\sum_{0<\lvert k\rvert_\infty\le J} \frac{\lvert \hat\mu_n(e_k) - \mu(e_k)\rvert^2}{\lvert k\rvert_\infty^{2s}} \Bigg)^{\frac12} \nonumber\\ &\lesssim \frac{(\log J)^d}{J^s} + \lVert f\rVert_{H^s} \Bigg( \sum_{0<\lvert k\rvert_\infty\le J} \frac{\lvert \hat\mu_n(e_k) - \mu(e_k)\rvert^2}{\lvert k\rvert_\infty^{2s}}\Bigg)^{\frac12} \nonumber\\ \lvert \hat\mu_n(f)-\mu(f)\rvert &\lesssim \frac{(\log J)^d}{J^s} + \Bigg( \sum_{0<\lvert k\rvert_\infty\le J} \frac{\lvert \hat\mu_n(e_k) - \mu(e_k)\rvert^2}{\lvert k\rvert_\infty^{2s}}\Bigg)^{\frac12} \label{eq:Markov-main} \end{align} where the right-hand side does not depend on $f$ in any way (note that $\lVert \cdot\rVert_{H^s}$ is the Sobolev norm, controlled by the $\Ck{s}$ norm). \begin{rema} At line \eqref{eq:Markov-line2}, one could be tempted to bound $\lvert \hat f_k\rvert$ directly instead of using the Cauchy-Schwarz inequality, in order to make better use of our assumption on $f$.
This would be effective if $\lvert \hat\mu_n(e_k)-\mu(e_k)\rvert$ were of the order of $1/n$, but it is actually of the order of $1/\sqrt{n}$, ultimately leading to a weaker bound than the one we aim for. \end{rema} Taking a supremum and an expectation in \eqref{eq:Markov-main} and using concavity, we obtain: \begin{align*} \operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \big] &\lesssim \frac{(\log J)^d}{J^s} + \Bigg( \sum_{0<\lvert k\rvert_\infty\le J} \frac{\operatorname{\mathbb{E}}\big[ \lvert \hat\mu_n(e_k) - \mu(e_k)\rvert^{2} \big]}{\lvert k\rvert_\infty^{2s}}\Bigg)^{\frac12} \\ &\lesssim \frac{(\log J)^d}{J^s} + \Bigg( \sum_{0<\lvert k\rvert_\infty\le J} \frac{ \lvert k\rvert_\infty^{2\alpha}}{(1-\theta^\alpha)n \lvert k\rvert_\infty^{2s}}\Bigg)^{\frac12} \\ &\lesssim \frac{(\log J)^d}{J^s} + \Bigg( \sum_{\ell=1}^J \frac{\ell^{d-1+2\alpha-2s}}{(1-\theta^\alpha)n} \Bigg)^{\frac12} \end{align*} Choose now $\alpha =1/\log J$ so that $\ell^{2\alpha}\lesssim 1$ for all $\ell\in\{1,\dots, J\}$, use $1-\theta^\alpha \ge \alpha(1-\theta)$ and set $\bar n := (1-\theta)n$ to obtain \begin{equation} \operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \big] \lesssim \frac{(\log J)^d}{J^s} + \sqrt{\frac{\log J}{\bar n}} \Big(\sum_{\ell=1}^J \ell^{d-1-2s} \Big)^{\frac12} \end{equation} For $s < d/2$, we get: \begin{align} \operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \big] &\lesssim \frac{(\log J)^d}{J^s} + \frac{(\log J)^{\frac12} J^{\frac{d}{2}-s}}{\sqrt{\bar n}} \label{eq:Markov-explicit-bound} \end{align} Trying to balance the contributions of the two terms, we first see that taking $J \simeq \bar n^{\frac1d}$ would optimize the power of $\bar n$ in the final expression; refining to $J=(\log \bar n)^\beta \bar n^{\frac1d}$, expanding and ignoring lower-order terms shows that the choice $\beta=2-\frac1d$ optimizes the final power of $\log \bar n$, and we thus set \[ J = \big\lfloor (\log \bar n)^{2-\frac1d}
\bar n^{\frac1d} \big\rfloor \] Any large enough $n$ (the bound depending on both $\theta$ and $d$) satisfies the requirement $n \ge 1/(1-\theta^\alpha)$, since the right-hand side is of the order of $\log n$. We then obtain: \[ \operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \big] \lesssim \frac{(\log \bar n)^{d-2s+\frac sd}}{\bar n^{\frac sd}} \qquad (n \text{ large enough}).\] For $2s=d$ we get \[ \operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \big] \lesssim \frac{(\log J)^d}{J^s} + \frac{\log J}{\sqrt{\bar n}} \] and taking $J = \lfloor \bar n^{\frac{1}{2s}} (\log \bar n)^{(d-1)/s} \rfloor$ yields \[ \operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \big] \lesssim \frac{\log \bar n}{\sqrt{\bar n}}.\] Finally, for $s > d/2$ we get \[ \operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \big] \lesssim \frac{(\log J)^d}{J^s} + \frac{(\log J)^{\frac12} }{\sqrt{\bar n}} \] and taking $J = \lfloor \bar n^{\frac{1}{2s}} (\log \bar n)^{\frac{d}{s+1/2}} \rfloor$ yields \[\operatorname{\mathbb{E}}\big[ \lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \big] \lesssim \frac{(\log \bar n)^{\frac{d}{2s+1}}}{\sqrt{\bar n}}, \] ending the proof of Theorem \ref{theomain:Markov}. \section{Concentration near the expectation}\label{sec:conc} Let us detail how classical bounded martingale difference methods can be used to prove that the empirical measure concentrates very strongly around its expectation. When $(X_k)_{k\ge 0}$ are independent identically distributed, this has long been known (see \cite{talagrand1992matching}, and also \cite{weed2017sharp} for more general Wasserstein metrics $\operatorname{W}_p$, $p\ge1$). In the case of Markov chains, such arguments have been developed notably in \cite{chazotte2009concentration} and, in a dynamical context, \cite{chazottes2012optimal}.
Our approach is very similar and thus cannot pretend to novelty, but we write it down to show how to handle function spaces more general than just Lipschitz and H\"older. The fundamental result to be used is the Azuma-Hoeffding inequality, which we recall. \begin{theo*}[Azuma-Hoeffding inequality] Let $Y$ be a random variable, let \[\{\varnothing,\Omega\}=\mathscr{B}_0\subset \mathscr{B}_1\subset \dots \subset \mathscr{B}_n = \mathscr{B}(\Omega)\] be a filtration and for each $k\in\llbracket 1,n\rrbracket$ set $\Delta_k = \operatorname{\mathbb{E}}[Y | \mathscr{B}_k]-\operatorname{\mathbb{E}}[Y | \mathscr{B}_{k-1}]$. Assume that for all $k$ and some numbers $a_k\in\mathbb{R}$, $c_k>0$ we have $\Delta_k \in[a_k,a_k+c_k]$ almost surely. Then for all $t>0$, \[\operatorname{\mathbb{P}}\big[ Y\ge \operatorname{\mathbb{E}}[Y] +t \big] \le \exp\Big(-\frac{2t^2}{\sum_k c_k^2} \Big).\] \end{theo*} \subsection{The independent case} In the case of i.i.d. random variables, the Azuma-Hoeffding inequality famously yields the following concentration inequality. \begin{theo*}[McDiarmid's inequality] Let $F:\Omega^n\to\mathbb{R}$ be a function such that for some $c_1,\dots,c_n$, all $k\in\llbracket 1,n\rrbracket$ and all $(x_1,\dots,x_n,x_k')\in \Omega^{n+1}$ it holds that \[\big\lvert F(x_1,\dots,x_k,\dots,x_n) - F(x_1,\dots,x_k',\dots,x_n) \big\rvert \le c_k.\] Let $(X_k)_{1\le k\le n}$ be a sequence of independent random variables.
Then for all $t>0$ it holds that \[\operatorname{\mathbb{P}}\big[ F(X_1,\dots,X_n) \ge \operatorname{\mathbb{E}}[F(X_1,\dots,X_n)] +t \big] \le \exp\Big(-\frac{2 t^2}{\sum_k c_k^2} \Big).\] \end{theo*} Applying this to \[F(X_1,\dots,X_n) = \lVert \hat\mu_n - \mu \rVert_{\fspace{F}} = \sup_{f\in\fspace{F}} \Big\lvert \frac1n \sum_{k=1}^n f(X_k) -\mu(f) \Big\rvert\] we can take \[c_k=\frac1n \sup_{f\in\fspace{F},x,x'\in\Omega} \lvert f(x)-f(x')\rvert =: \frac1n \operatorname{osc}(\fspace{F})\] and we obtain \[\operatorname{\mathbb{P}}\big[ F(X_1,\dots,X_n) \ge \operatorname{\mathbb{E}}[F(X_1,\dots,X_n)] +t \big] \le \exp\Big(-\frac{2n t^2}{\operatorname{osc}(\fspace{F})^2}\Big).\] For example if $\fspace{F}\subset \operatorname{Lip}_1(\Omega)$ (e.g. $\fspace{F}=\Cku{s}$) we have $\operatorname{osc}(\fspace{F}) \le \operatorname{diam}\Omega$; if moreover $\Omega = [0,1]^d$ we thus obtain \begin{equation} \operatorname{\mathbb{P}}\Big[ \lVert \hat\mu_n - \mu\rVert_{\fspace{F}} \ge \operatorname{\mathbb{E}}\big[\lVert \hat\mu_n - \mu\rVert_{\fspace{F}} \big] +t \Big] \le \exp\Big(-\frac{2}{d}\cdot n t^2\Big). \label{eq:conciid} \end{equation} This, combined with Theorem \ref{theomain:reg}, yields good concentration estimates.
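As a sanity check of \eqref{eq:conciid} in the simplest possible case, the Python snippet below takes $\fspace F$ reduced to the single $1$-Lipschitz function $f(x)=x$ on $\Omega=[0,1]$ (so $d=1$ and $\operatorname{osc}(\fspace F)\le1$) and compares the empirical tail of the deviation with the McDiarmid bound $\exp(-2nt^2)$; the true tail is, of course, far below the bound.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, t = 100, 20000, 0.1

# F(X_1, ..., X_n) = |mu_hat_n(f) - mu(f)| for f(x) = x, X ~ Unif[0, 1]
samples = rng.random((reps, n))
F = np.abs(samples.mean(axis=1) - 0.5)

empirical_tail = np.mean(F >= np.mean(F) + t)
mcdiarmid_bound = np.exp(-2 * n * t ** 2)   # osc(F) = 1, d = 1 in the bound above

assert empirical_tail <= mcdiarmid_bound
```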
\begin{coro} If $(X_k)_{k\ge 0}$ are i.i.d. random variables with law $\mu$, then for all $s\ge 1$, for some constant $C=C(d,s)>0$ (not depending upon $\mu$), all integers $n\ge 2$ and all $M \ge C$ we have: \begin{itemize} \item if $s>d/2$ \begin{equation} \operatorname{\mathbb{P}}\Big[\lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \ge \frac{M}{\sqrt{n}} \Big] \le e^{-\frac2d (M-C)^2}; \end{equation} \item if $s=d/2$ \begin{equation} \operatorname{\mathbb{P}}\Big[\lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \ge \frac{M \log n}{\sqrt{n}} \Big] \le e^{-\frac2d (M-C)^2 (\log n)^2}; \end{equation} \item if $s<d/2$ \begin{equation} \operatorname{\mathbb{P}}\Big[\lVert \hat\mu_n - \mu \rVert_{\Cku{s}} \ge \frac{M}{n^{\frac{s}{d}}} \Big] \le e^{-\frac2d (M-C)^2 n^{1-2s/d}}; \end{equation} \end{itemize} \end{coro} Similarly, with Theorem \ref{theomain:W1} we can obtain entirely explicit, non-asymptotic concentration bounds. \subsection{Markov Chains} To tackle Markov chains we will need an assumption to replace independence; we choose a framework that covers the case of $\operatorname{W}_1$, but also more general dual metrics $\lVert\cdot\rVert_\fspace{F}$. Assume that $\Omega$ is endowed with a metric $d$ with finite diameter ($d$ is assumed to be lower-semi-continuous, but not necessarily to induce the given topology on $\Omega$). We still denote by $\operatorname{Lip}_1(\Omega)$ the space of functions $\Omega\to\mathbb{R}$ which are $1$-Lipschitz with respect to $d$.
Let $(X_k)_{k\ge 0}$ be a Markov chain on $\Omega$ which is exponentially contracting (see the beginning of Section \ref{sec:MC}) with constant $D$ and rate $\theta$, in the metric $d$ instead of the Euclidean norm; this can be rewritten in a coupling formulation as follows: for all $x,x'\in\Omega$ and all $i\in\mathbb{N}$ there are random variables $(X'_k)_{k\ge i}$ with the same law as $(X_k)_{k\ge i}$ and such that for all $t$: \[\operatorname{\mathbb{E}}[d(X_{i+t},X'_{i+t}) \mid X_i=x, X'_i=x'] \le D \theta^t d(x,x').\] Note that the flexibility in the choice of $d$ makes it possible to include uniformly ergodic Markov chains in this framework, simply by taking $d=\boldsymbol{1}_{\neq}$, i.e. $d(x,y)=0$ if $x=y$ and $d(x,y)=1$ otherwise. Given a multivariate function $\Phi:\Omega^n\to\mathbb{R}$, we define as usual the coordinate-wise Lipschitz constants of $\Phi$ by \[\Lambda_i(\Phi) = \sup_{x_1,\dots,x_n\in\Omega, x'_i\neq x_i} \frac{\lvert \Phi(x_1,\dots, x_i,\dots, x_n)-\Phi(x_1,\dots, x'_i,\dots,x_n) \rvert}{d(x_i,x'_i)}\] and we say that $\Phi$ is separately Lipschitz if $\Lambda_i(\Phi)<\infty$ for all $i$ (when $d=\boldsymbol{1}_\neq$, the coordinate-wise Lipschitz constants become the coordinate-wise oscillations). \begin{theo}\label{theo:conc} Let $(X_k)_{k\ge1}$ be a Markov chain whose kernel is exponentially contracting with constant $D\ge 1$ and rate $\theta\in(0,1)$, with respect to a lower-semi-continuous distance $d$ on $\Omega$ giving it finite diameter $\operatorname{diam}(\Omega)$. Let $n\in\mathbb{N}$ and $\Phi :\Omega^n\to\mathbb{R}$ be separately Lipschitz with constants $\Lambda_i(\Phi)\le \Lambda$. Then \[\operatorname{\mathbb{P}}\Big[ \Phi(X_1,\dots,X_n) \ge \operatorname{\mathbb{E}}[\Phi(X_1,\dots,X_n)] + t \Big] \le \exp\Big(-\frac{(1-\theta)^2 t^2}{2n D^2 \operatorname{diam}(\Omega)^2\Lambda^2}\Big) \] \end{theo} \begin{proof} We set $X=(X_1,\dots,X_n)$ and $X_{i:j} = (X_i,\dots,X_j)$ (meaning the empty family whenever $j< i$).
We shall apply the Azuma-Hoeffding inequality with the filtration $\mathscr{B}_k=\sigma(X_{1:k})$, leaving us with the task of bounding the oscillations $c_k$ of the random variable \[\Delta_k = \operatorname{\mathbb{E}}[\Phi(X) | X_{1:k}]-\operatorname{\mathbb{E}}[\Phi(X) | X_{1:k-1}].\] Given an arbitrary $x_{1:k}=(x_1,\dots,x_k)\in\Omega^k$ and $x_k'\in\Omega$ we set \[V_k(x_{1:k},x'_k) = \operatorname{\mathbb{E}}[\Phi(X) | X_{1:k}=x_{1:k}]-\operatorname{\mathbb{E}}[\Phi(X) | X_{1:k-1}=x_{1:k-1}, X_k=x'_k]\] so that $c_k = \sup V_k - \inf V_k\le 2\lVert V_k\rVert_\infty$. Let $(X'_i)_{i\ge k}$ be a copy of $(X_i)_{i\ge k}$ as in the definition of exponential contraction; then \begin{align*} V_k(x_{1:k},x'_k) &= \operatorname{\mathbb{E}}\big[\Phi(x_{1:k-1},X_{k:n}) \big| X_k=x_k\big]-\operatorname{\mathbb{E}}\big[\Phi(x_{1:k-1},X'_{k:n}) \big| {X'}_{k}=x'_{k}\big] \\ &= \sum_{i=k}^{n} \operatorname{\mathbb{E}}\Big[\Phi(x_{1:k-1},X_{k:i},X'_{i+1:n}) -\Phi(x_{1:k-1},X_{k:i-1},X'_{i:n}) \Big| X_k=x_k, X'_k=x'_k\Big] \\ \lvert V_k(x_{1:k},x'_k) \rvert &\le \sum_{i=k}^{n} \operatorname{\mathbb{E}}\big[\Lambda d(X_i,X'_{i}) \big| X_k=x_k, X'_k=x'_k\big] \\ &\le D\Lambda d(x_k,x'_k) \sum_{i=k}^{\infty} \theta^{i-k} \\ c_k &\le 2D\Lambda \operatorname{diam}(\Omega)/(1-\theta). \end{align*} Applying the Azuma-Hoeffding inequality finishes the proof. \end{proof} \begin{rema} The above inequality is probably not optimal; one can expect to improve the rate, either by moving the constant $2$ from the denominator to the numerator, or by replacing $(1-\theta)^2$ by $(1-\theta)$ (probably with another constant). \end{rema} As soon as $\fspace{F}\subset \operatorname{Lip}_1(\Omega)$ (e.g.
$\fspace{F}=\Cku{s}$), Theorem \ref{theo:conc} applies to \[\Phi(X) = \lVert \hat\mu_n -\mu\rVert_{\fspace{F}} = \sup_{f\in\fspace{F}} \Big\lvert \frac1n\sum_{k=1}^n f(X_k) -\mu(f) \Big\rvert\] with $\Lambda=\frac1n$, yielding \begin{equation} \operatorname{\mathbb{P}}\Big[ \lVert \hat\mu_n -\mu\rVert_{\fspace{F}} \ge \operatorname{\mathbb{E}}\big[\lVert \hat\mu_n -\mu\rVert_{\fspace{F}}\big] + t \Big] \le \exp\Big(-\frac{(1-\theta)^2}{2 D^2 \operatorname{diam}(\Omega)^2} \cdot nt^2\Big) \end{equation} i.e., as in the independent case, subgaussian concentration. Corollary \ref{coromain:conc} follows. \bibliographystyle{amsalpha}
\section{Introduction} The field of nuclear security addresses the dangers of nuclear weapons, including the proliferation of weapons technology, the safeguarding of fissile materials, and the risk of nuclear terrorism. The latter topic encompasses cargo security, which specifically focuses on preventing the smuggling of nuclear materials and fully assembled nuclear devices through ports of entry and other pathways. Estimates of the immediate economic costs alone of a nuclear explosion in a major port exceed \$1 trillion, \textit{before} accounting for the substantial human costs \cite{randecon,abtes}. Given the relative anonymity of cargo shipping and its resulting vulnerability to smuggling, the lack of systems that can efficiently and reliably deter nuclear smuggling remains a relevant security gap. This paper details the demonstration of a new radiography technique for quantitatively identifying materials in cargo that is capable of distinguishing between different high-atomic-number materials. Specifically, the technique can separate benign high-$Z$ materials such as lead and tungsten from special nuclear materials (SNM). \subsection{Detecting Nuclear Material in Cargo} Approximately 40000--57000 maritime shipping containers enter the United States every day~\cite{kouzes}. This fast throughput rate and the fact that many containers are densely packed to weights of up to 20 metric tons make cargo containers particularly vulnerable to the smuggling of nuclear materials or weapons. A system designed to detect nuclear smuggling must simultaneously achieve the following: scan cargo at $\lesssim$1 minute per container, produce a low rate of false positives, and provide a clear indicator of the presence of nuclear materials in diverse cargo configurations (i.e., a low rate of false negatives). Additionally, port operations restrict the footprint of scanning systems, as well as the permissible radiation dose to the cargo and surrounding area \cite{cbo,wco}.
Cargo screening technologies can be classified into three categories: passive interrogation, active interrogation, and radiography. Passive interrogation involves the detection of the natural radioactivity of various particles --- neutrons and photons in particular --- from fissile materials. In this context the materials of interest are primarily weapons-grade uranium (WGU), which consists primarily of $^{235}$U, and weapons-grade plutonium (WGPu). The latter consists primarily of $^{239}$Pu, but its other isotopes ($^{240}$Pu in particular) play a key role in its passive signature. Passive detection systems offer simplicity and relatively low cost, and such systems have been deployed widely in the United States and elsewhere. These systems primarily consist of portals, which use various scintillators to detect photons in combination with $^3$He neutron detectors to uncover SNM by their radioactive emissions from nuclear decay and spontaneous fission. The addition of shielding around smuggled material, however, circumvents passive detection. The passive signal from WGU is very weak and easily shielded, while even an assembled plutonium device (with its strong spontaneous fission neutron signature) may be shielded with combinations of low- and high-$Z$ material to block both neutron and photon signals. The limitations of passive detection techniques necessitate alternative approaches. Active interrogation systems expose the cargo to a beam of one or more types of particles (such as photons, neutrons, or muons) to trigger secondary processes unique to fissionable and fissile materials, producing signals which are strong enough to overcome shielding attempts by a competent smuggler. Examples of such systems include prompt neutrons from photofission (PNPF)~\cite{pnpf-short}, EZ3D~\cite{ez3d_patent}, and nuclear resonance fluorescence~\cite{NRF_bertozzi}.
Furthermore, other groups related to this research effort have advanced the detection of delayed neutrons from induced fission as a way of identifying fissile materials~\cite{mayer}. While such techniques have promise due to their specificity for SNM detection, no system has been sufficiently developed for deployment at this time. For a high level discussion of several active detection methodologies see Runkle, \textit{et al}.~\cite{runkle2012rattling}. \subsection{Radiography for SNM Detection} While searching for shielded fissile and fissionable materials via active methods is promising, shielding scenarios which completely block the signal are nevertheless possible. Radiographic imaging of cargo provides a means of detecting such scenarios. A variety of radiographic techniques have been proposed in the past, including using medium energy ($\sim$GeV) protons~\cite{king1999800}, muons~\cite{schultz2004,morris2008,borozdin2003}, neutrons~\cite{jill,sowerby,cutmore}, as well as $\sim$keV photons (X-rays) and $\sim$MeV photons (gamma rays)~\cite{chen2007,gilbert_spectral_analysis}. Additionally, radiographic imaging of cargo for SNM overlaps well with other goals of cargo inspection (such as detection of non-nuclear smuggling), adding value to the technique. This work builds upon prior studies of using 4.4 and 15.1~MeV monochromatic photons from the $^{11}$B$(d,n\gamma)^{12}$C reaction to radiograph objects and differentiate between their material types~\cite{oday2015,buck}. A parallel effort by other groups has used the same reaction, with Cherenkov detectors, to pursue a similar goal~\cite{rose}. The prior work, however, did not achieve a precise determination of the atomic number or areal density of the scanned objects. This work demonstrates the ability to infer the effective atomic number ($Z$) and areal density ($\rho_A$) of a given spatial pixel across a cargo sample, providing essential information for the identification of materials present in the cargo.
This reconstruction is shown to be accurate enough to distinguish between uranium and lead, a critical result for SNM detection in that it permits distinguishing nuclear threats from benign materials. With this capability, the system is robust against false alarm scenarios in which benign high-$Z$ materials (e.g., lead, tungsten, precious metals) appear similar to SNM and thus require further inspection. In its simplest form, radiography combines measurements of the transmitted photon flux $\phi$ for a given material sample, knowledge of the incident flux $\phi_0$, and an assumption of the mass attenuation coefficient $\mu$ of the material to infer an approximate areal density $\rho_A$: \begin{equation} \mu \rho_A = \ln{(\phi_0/\phi)} ~. \label{eq:dmu} \end{equation} This calculation can be performed for every pixel in a radiographic scan to image the sample. By assuming that $\mu$ does not vary through the scan plane, a {\it relative} value of $\rho_A$ can be reconstructed. It should be noted that $\mu$ depends on the elemental composition of the material, and thus is a function of effective atomic number $Z$. As such, a measurement such as this cannot allow a simultaneous determination of effective atomic number $Z$ and areal density $\rho_A$, a requirement for distinguishing SNM from benign cargo. This goal can be achieved by using the energy dependence of $\mu$, and performing multiple measurements at various energies. The main processes which contribute to photon attenuation at 4.4 and 15.1 MeV are Compton scattering and pair production. The mass attenuation coefficient can be approximated as $\mu = \mu_{c} + \mu_{pp}$, where $\mu_{c}$ and $\mu_{pp}$ are the coefficients for Compton scattering and pair production, respectively. Each of these coefficients depends on $Z$ and the incident photon energy $E$ in different ways.
Specifically, \begin{align*} \mu_c &= Z N_A \sigma_{c}(E,Z) / A \\ \mu_{pp} &= N_A \sigma_{pp}(E,Z) / A, \end{align*} where $N_A$ is Avogadro's number, $A$ is the atomic weight of the material under inspection, and the $\sigma(E,Z)$ are the cross sections of the relevant attenuation processes. For photon energies satisfying $E \gg 511$ keV, the cross sections may be approximated as $\sigma_c \propto 1/E$ and $\sigma_{pp} \propto Z^2 f(E)$, where $f(E)$ is a function of energy with negligible dependence on atomic number \cite{Leo}. Using these as inputs to the mass attenuation coefficients to compute the transmission ratios (Equation~\ref{eq:dmu}) at two different energies ($E_0$ and $E_1$) results in \begin{align} R = \frac{\ln {(\phi(E_1)/\phi_0(E_1))}}{\ln{(\phi(E_0)/\phi_0(E_0) )}} &= \frac{ Z^2 f(E_1) N_A \cdot \mathrm{const_1}/A }{ Z N_A \cdot \mathrm{const_2}/(A E_0)} \nonumber \\ &= Z \, E_0 f(E_1)\, C, \label{eq:R} \end{align} where $C$ is a constant equal to the ratio $\mathrm{const_1/const_2}$. This treatment assumes that at $E_0$ the mass attenuation is entirely dominated by Compton scattering, while at $E_1$ pair production dominates. Assuming these requirements are met, an experimental measurement of $R$ could be used to directly determine the atomic number $Z$ (and the total attenuation used to infer the areal density $\rho_A$). While this simple model requires broad approximations, Equation~\ref{eq:R} captures the essential mechanism by which dual energy radiography may provide precise identification of the effective $Z$ of inspected materials. \subsection{Monoenergetic Gamma Rays from Nuclear Reactions} Dual energy radiography is by no means a new concept. Current systems implement this technique by using bremsstrahlung beams with varying endpoints. Linear accelerator (linac) based bremsstrahlung dual energy systems typically vary the electron beam energy between two fixed values (e.g., 6 and 9~MeV)~\cite{chen2007,gilbert_spectral_analysis}.
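To make the scaling of Equation~\ref{eq:R} concrete, the following sketch evaluates the simplified two-process model numerically; the constants \texttt{c\_compton} and \texttt{c\_pair} are arbitrary placeholders standing in for the cross section normalizations, not fitted values.

```python
# Toy dual-energy model behind Equation (R): Compton-dominated attenuation
# at E0 = 4.4 MeV and pair-production-dominated attenuation at E1 = 15.1 MeV.
# The constants c_compton and c_pair are illustrative placeholders only.

def log_attenuation_ratio(Z, A, rho_A, E0=4.4, c_compton=1.0, c_pair=0.02):
    """R = ln(T(E1)) / ln(T(E0)) in the simplified two-process model."""
    mu_E0 = c_compton * Z / (A * E0)   # mu_c  ~ Z / (A * E)
    mu_E1 = c_pair * Z ** 2 / A        # mu_pp ~ Z^2 * f(E1) / A
    return (mu_E1 * rho_A) / (mu_E0 * rho_A)  # rho_A cancels exactly

# R is linear in Z and independent of areal density, so it tags the material:
R_pb = log_attenuation_ratio(Z=82, A=207.2, rho_A=57.7)   # lead
R_u = log_attenuation_ratio(Z=92, A=238.0, rho_A=120.0)   # uranium
```

In this idealized limit the ratio depends only on $Z$, which is the essential leverage that dual energy radiography exploits; the full analysis replaces these toy cross sections with a Geant4-based simulation library.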
The transmitted signals are compared in a way that allows quantitative determination of the effective $Z$ of a given pixel in the cargo image~\cite{tsinghua}. Bremsstrahlung based systems, while capable of rapidly producing images with excellent spatial resolution using commercially-produced equipment, have notable disadvantages. Most commercial linacs have a duty factor of $\sim$0.1\%, producing pulses of several {\textmu}s length at $\mathcal{O}(10^2\:\text{Hz})$. The resulting large instantaneous flux prevents measurement of the transmitted spectrum, so that only an integrated measurement of the total deposited energy is possible. This significantly reduces the information content of the signal, increasing the number of photons (and thus radiation dose) required to reconstruct the material type. Furthermore, most of the energy of the beam flux is at low energies ($\lesssim$1 MeV). Photons at these energies contribute to radiation dose, but provide little to no transmitted information due to the strong attenuation at low energies. For example, a Geant4~\cite{geant} simulation shows that a system based on a 6 MeV electron beam would produce approximately 90\% of the counts and 65\% of the radiation dose from photons $\leq$3 MeV. This translates to a low information-to-dose ratio. Finally, a significant number of photons undergo scattering in the cargo but still reach the detectors, which reduces the image contrast and dilutes the pixel-specific $Z$-dependent information content. Many of these factors can be overcome by replacing a linac-based system with one which uses nuclear reactions to produce monochromatic photons. The technique of using monoenergetic gamma rays at several energies, referred to as multiple monoenergetic gamma radiography (MMGR), provides several advantages over traditional bremsstrahlung radiography.
Knowledge of the photon energies and measurement of the transmitted spectra allow the suppression of events in the signal which have undergone scattering, thus leaving only the photons which have undergone direct line-of-sight transmission. This creates a clean transmitted signal associated with each pixel, highly dependent on the effective $Z$ and areal density of the intervening material. This work utilizes the $^{11}$B(d,n$\gamma$)$^{12}$C reaction to produce 4.4 and 15.1~MeV photons, which arise from the short-lived excited levels of $^{12}$C in the final state of the reaction. The large spread in energy between the two gamma rays in the source spectrum provides strong leverage for material identification. \section{Experimental Methods} \label{sec:exp} To test the capability of the MMGR technique, a mock cargo scanning setup was constructed at the MIT-Bates Research and Engineering Center, a schematic of which is shown in Figure \ref{fig:schem}. This setup expanded upon previous test experiments~\cite{buck,jill} to permit the 2D imaging of mock cargo materials. This included the installation of a motion system to move mock cargo materials through the beam, an array of 32 detectors to provide position resolution perpendicular to the direction of the motion, and the addition of a number of beam and data diagnostics to monitor the system over the course of a scan. This section describes the key elements of the experiment and the mock cargo scenario for which data were collected. \begin{figure}[htb] \includegraphics[width=\columnwidth]{figures/experimentOverview-eps-converted-to.pdf} \caption{Schematic of the mock cargo scanning experiment, viewed from above \cite{buck}.
The arrow associated with the mock cargo label indicates the direction of motion of the cargo across the scan.} \label{fig:schem} \end{figure} \subsection{Gamma Ray Beam} The 4.4 and 15.1 MeV photons used to radiograph materials were generated by impinging a 3 MeV deuteron ($d^+$) beam on a thick natural boron target, containing 80.1\% $^{11}$B. Given the relative cross sections of the $^{11}$B(d,n$\gamma$)$^{12}$C reaction for the 4.4 and 15.1 MeV gammas at this energy, the beam on target produced the two gammas in approximately a 4:1 ratio~\cite{COOPER201345,CLASS1965433}. The deuterons were accelerated using an Accsys Technologies DL-3 Radio Frequency Quadrupole (RFQ) accelerator, and the target was mounted to the output port of the RFQ. The accelerator operated at a frequency of 300 Hz, producing deuteron beam pulses of approximately 20~{\textmu}s (0.6\% duty factor) as shown in Figure \ref{fig:pulse}. Thus, while the time average deuteron current during experiments was approximately 10~{\textmu}A, the instantaneous current during the beam pulses reached $\sim$1.7~mA. The reactions at the target generated gamma rays approximately isotropically \cite{CLASS1965433}. Thus, high-density concrete collimators were used to create a fan beam extending vertically with an illumination width of 2.38 cm in the horizontal direction at the location of the mock cargo. Additionally, 53 cm of borated high density polyethylene (HDPE) was placed directly downstream of the target (encompassing the entire fan beam) to block neutrons and low energy photons from secondary reactions in the target. Figure~\ref{fig:slab} shows the spectrum of the beam measured at low event rate to show the key features, including the 4.4 MeV and 15.1 MeV gammas. An additional contribution is visible at 1.7 MeV (from $^{11}$B(d,p$\gamma$)$^{12}$B), and peaks between 6 and 9 MeV result from thermal neutron capture in the detectors and surrounding materials.
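The quoted instantaneous current follows directly from the duty factor; the arithmetic can be checked in a few lines (parameters as quoted above):

```python
# Pulsed-beam arithmetic for the DL-3 RFQ parameters quoted in the text.
pulse_rate_hz = 300.0     # accelerator repetition rate
pulse_length_s = 20e-6    # approximate deuteron pulse length
duty_factor = pulse_rate_hz * pulse_length_s   # 0.006, i.e. 0.6%

avg_current_uA = 10.0     # time-averaged deuteron current
instantaneous_current_mA = avg_current_uA / duty_factor / 1000.0
# ~1.67 mA, consistent with the ~1.7 mA quoted in the text
```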
See O'Day, \textit{et al.}\cite{buck} for an extended discussion of the beam components. \begin{figure}[thb] \centering \includegraphics[width=\columnwidth]{figures/detectorSpectrum.eps} \caption{Sample spectrum measured in the NaI (Tl) detectors with an iron sample in the beam and $\sim$1~{\textmu}A deuteron current, so as to show the features with high resolution. See O'Day, \textit{et al.}\cite{buck} for a discussion of the labeled elements of the spectrum.} \label{fig:slab} \end{figure} \subsection{Detectors} The transmitted spectra were measured using a vertical array of 32 Saint-Gobain 2X4H16/2SS NaI(Tl) scintillator detector packages \cite{stg}. The detectors consisted of $2\,'' \times 4\,'' \times 16\,''$ thallium-doped sodium iodide crystals instrumented with $2\,''$ photomultiplier tubes. The large size and appreciable energy resolution of these detectors allowed the selection of directly transmitted monoenergetic photons, providing critical information for precision material identification. The high voltage and gain controls were manually adjusted for each detector to approximately match their responses, although further energy calibration (gain) corrections were applied in analysis (see Section \ref{sec:gain}). The array was constructed so that the long axis of the detectors was parallel to the beam axis and the short axis was along the vertical direction to maximize the vertical spatial resolution of the detector array. The detector array was placed such that the upstream faces of the detectors were 9.35~m from the boron target (or approximately 5.81~m from the mock cargo). This resulted in approximately 3~cm vertical resolution for the cargo imaging. The horizontal extent of the detectors perpendicular to the beam was wider than the collimation, and thus did not significantly affect the imaging resolution. 
\subsection{Data Acquisition} The detector pulses were processed using CAEN V1725 digitizer modules operating in digital pulse processing pulse shape discrimination (DPP-PSD) mode \cite{caen}. The system was configured such that the trigger threshold for each detector approximately corresponded to a 1 MeV energy deposition. The pulse integration window for each trigger was 1 {\textmu}s. Note that unlike standard radiography systems, which operate in charge integrating mode, the system described here recorded individual waveforms with timing and pulse shape information available for each detection. This allowed the use of several analysis techniques described in Section \ref{sec:analysis} to increase the resolution of the system in effective $Z$ and areal density. The digitizer output was processed using an extension to the ADAQ analysis framework to produce data files for analysis \cite{adaq}. \subsection{Mock Cargo Test Configuration} \label{sec:mock} To utilize this system as a cargo scanning prototype, a motion system installed between the concrete collimators (as shown in Figure~\ref{fig:schem}) moved materials samples placed on a cart (shown in Figure~\ref{fig:images}) across the fan beam over the course of an experimental run. Data were collected as a function of time, which, when paired with the known motion of the materials and the vertical resolution of the detector array, allowed the 2D imaging of the mock cargo materials. The materials tested were chosen so as to span a large range of effective $Z$ ($\sim$5--82) and to include a stand-in for SNM (natural uranium rods with aluminum cladding---see Appendix~\ref{app:eff}). The areal densities of the materials were chosen so as to approximate typical total areal densities present in commercial cargo containers. Table~\ref{tab:mats} summarizes the parameters of the materials samples.
Section~\ref{sec:res} presents results from two distinct experimental runs using the same materials samples: one in which the cargo was moved across the beam at 0.0077 cm/s and one in which the cargo was moved at 0.0308 cm/s (4$\times$ the speed of the first test). These are referred to as the 7400~s and 2000~s scans respectively. Additionally, the data could be sampled to considerably finer time resolution (4 ns), permitting the oversampling of the data relative to the collimator width to improve the horizontal position resolution of the reconstruction. Analyses were conducted using 1 cm and 1 mm pixel widths, as discussed in Sections~\ref{sec:analysis} and \ref{sec:res}. Note that while these scan times are considerably longer than would be feasible for a deployed cargo scanner, the relevant quantity is the integrated deuteron beam current delivered on target per unit scan distance, since scan times may be reduced by increasing the beam current. In the 7400~s run, the beam charge delivered was 1.3~mC/cm of scan length (at 10 {\textmu}A of average beam current). This would correspond to scan times of $\sim$100~s for full sized containers with 1~mA average beam current. Such currents would likely be achievable using a purpose-designed accelerator (operating with continuous wave current). \begin{table*}[thb!]
\begin{center} \begin{tabular}{l|r|r|c|r|r} Material & Effective $Z$ & Density & Width $\times$ Height & Depth & Areal Density ($\rho_A$) \\ & & (g/cm$^3$) & (cm$\times$cm) & (cm) & (g/cm$^2$) \\ \hline\hline Borated HDPE & $\sim$5.2 & 1.02 & $20.3\times23.1$ & 45.30 & 46.2 \\ Aluminum (Al) & 13 & 2.70 & $20.3\times24.5$ & 20.25 & 54.7 \\ Copper (Cu) & 29 & 8.96 & $10.1\times10.1$ & 5.45 & 48.8 \\ Tin (Sn) & 50 & 7.31 & $10.1\times10.1$ & 6.74 & 49.3 \\ Tungsten (W) & 74 & 19.30 & $10.1\times10.1$ & 2.56 & 49.4 \\ Lead (Pb) & 82 & 11.35 & $20.3\times20.3$ & 5.08 & 57.7 \\ Uranium rods & $\sim$65 & 12.72 & $13.8\times21.2$ & 5.50 & $\sim$55\\ \end{tabular} \end{center} \caption{Parameters of the materials samples used for the imaging test, the arrangement of which is shown in Figure~\ref{fig:images}. The effective $Z$ value listed for the borated HDPE is computed as the average elemental composition of the material weighted by the contribution to the electron density by each element, since Compton scattering dominates the photon interactions at the energies of interest for the light nuclei comprising the material \cite{xcom}. Values listed for the uranium rods are averaged over the arrangement of the 10 rods. See Appendix~\ref{app:eff} for explanation of the effective $Z$ and areal density for the uranium rods.} \label{tab:mats} \end{table*} \section{Analysis} \label{sec:analysis} To reconstruct the effective $Z$ and areal density of the mock cargo, the transmitted gamma ray spectra were compared to the expected spectra based on a detailed simulation model of the experiment. The data spectra were collected over fixed increments of the scan length for each detector channel to create ``pixels'' for the material reconstruction. 
Similarly to standard radiography techniques, the analysis consisted of comparing the transmitted spectra with materials in the beam to that of the ``open'' beam, i.e., when no materials were present in the beam other than the fixed components of the setup described in Section~\ref{sec:exp}. This comparison provides information on the total attenuation of the beam due to the material as well as the energy dependence of this attenuation, which provides sufficient information to reconstruct the total areal density and effective atomic number of the materials. The simulation model was used to generate a library of materials over the complete space of $Z=4\text{--}92$ and areal density $\rho_A=20\text{--}250$~g/cm$^2$. The use of the simulation library allowed for the reconstruction of the cargo materials without empirical calibration based on additional datasets and provided a means of directly accounting for detector response and efficiency, collimation of the beam, multiple/down-scattering of transmitted photons, and other elements of the physical setup. This section describes the procedures applied to prepare the data spectra for comparison with the simulated transmission library, the simulation model, and the analysis used to extract the effective $Z$ and areal density of the mock cargo. \subsection{Spectrum Corrections} \label{sec:corr} Several corrections were applied to the raw spectra, both to ensure consistency across an imaging scan and to select the relevant data for comparison with the simulation model. Unlike traditional gamma/x-ray cargo radiography systems, which utilize integration mode detectors to cope with the high photon flux of bremsstrahlung beams \cite{r60}, the lower absolute photon flux of the nuclear reaction based photon beam here permits use of detectors in counting mode. This makes it possible to record the complete energy dependence of the transmitted spectra.
This spectral information allows individual recorded events to be associated with the initial photon energy and thus more accurate determination of the attenuation of the beam due to the cargo at the specific beam energies. To produce spectra representative of the transmission of the monoenergetic photons, several corrections must be applied to the raw spectra. The corrections are described as follows in the order they were applied to the raw data. \subsubsection{Gain Drift Correction} \label{sec:gain} As the experiment operated in a non-climate controlled warehouse, the NaI (Tl) detectors were subject to gain drift on the order of several percent over the course of each scan. Since the analysis depends on the measurement of counts recorded in specific energy regions of the data spectra, a fixed calibration of ADC counts to deposited energy for a detector would cause systematic error for each energy bin. To prevent this, the raw spectrum of each detector at each position step in the scan was used to determine the ADC-to-energy calibration at that specific step using the monoenergetic peaks present in the spectra to produce energy spectra that could be compared on equal footing. \subsubsection{Beam Timing Cut} The pulsed nature of the deuteron beam provided a means of suppressing many of the background contributions to the raw detected spectra. Since the lifetimes of the excited states of $^{12}$C that gave rise to the 4.4 and 15.1 MeV photons in the target are $\mathcal{O}\left(10^{-13}\:\text{s}\right)$, events from the gamma rays of interest were recorded promptly in coincidence with beam pulses. Gating on the beam pulse timing allowed for suppression of background events due to longer-lived excited states and thermal neutrons. In particular, bremsstrahlung photons arising from the beta decay of $^{12}$B (produced by neutron capture on $^{11}$B in the target) contributed significantly to the raw signal up to 6.9 MeV. 
Given that the beta decay of $^{12}$B has a lifetime of $\sim$20 ms, however, $\sim$99.9\% of its contribution to the raw spectra may be eliminated by selecting only events in the beam pulse time windows. Since no timing information was recorded for the beam pulses during data taking and the exact frequency of the accelerator deviated slightly from 300 Hz, the pulse frequency was reconstructed by computing the mean time of concentrations of events in the detectors over many pulses. A symmetric 20 {\textmu}s window around the reconstructed pulse center was selected for the timing cut, as shown in Figure~\ref{fig:pulse}. Figure~\ref{fig:timec} shows the spectrum of the open beam inside and outside the timing cut, showing that the inclusion of events outside the time cut would contribute $\sim$10\% error to the estimated counts in the 4.4 MeV region. Notably, the 4.4 MeV signal remains visible in the off-pulse spectrum. This is due to the fact that the beta decay of $^{12}$B frequently creates the $^{12}$C 4.4 MeV excited state \cite{b12b,a12}. Additionally visible are small peaks from the capture of thermal neutrons on hydrogen (2.2 MeV) and a longer lived excited state of $^{12}$B (1.7 MeV)~\cite{a11}. \begin{figure}[thb] \centering \includegraphics[width=\columnwidth]{figures/pulse.pdf} \caption{Reconstructed beam pulse shape, time behavior of the after pulse events, and cut window applied to select only prompt events associated with beam pulses (dashed lines).} \label{fig:pulse} \end{figure} \begin{figure}[thb] \centering \includegraphics[width=\columnwidth]{figures/timespect.pdf} \caption{Spectra of the open beam during and outside the beam pulse window (lower histogram, color online), with pile-up corrections applied (see Section~\ref{sec:pup}). 
Events outside the pulses contributed approximately 13\% of all raw counts, after correction for pile-up (Section \ref{sec:pup}).} \label{fig:timec} \end{figure} \subsubsection{Beam Current Correction} Any unaccounted-for variation in the deuteron current (and thus the beam flux) between the open beam and the subsequent measurements would cause an error in the transmission measurement directly proportional to the current variation. As noted in Section~\ref{sec:exp}, a charge integrator was used to monitor the beam current incident on the boron target over the course of each imaging scan. The beam current varied by up to $\sim$10\% during the imaging tests, primarily due to instabilities in the deuteron source and accelerator. The data from this channel were used to renormalize the data spectra at each position step. Note that only the relative beam current at each scan step is required, rather than an absolute calibration, since the analysis utilizes only the relative transmission between cargo-in-beam and the open beam. While an approximate calibration of the current was known, any uncertainty in its value does not significantly affect the reconstruction of the materials. \subsubsection{Pile-Up/Dead-Time Correction} \label{sec:pup} The large size of the NaI (Tl) detectors in combination with the high instantaneous current of the pulsed RFQ beam resulted in a significant number of ``pile-up'' events in the raw spectra (i.e., single spectrum counts representing the energy deposition of two or more individual photons in the same pulse integration period). These pile-up events significantly distort the open beam spectrum. In particular, such events add an excess of counts at higher energies from the summation of two lower energy depositions. While standard radiography systems operating with integration mode detectors are not subject to this issue, the analysis described here --- which utilizes spectral information --- must carefully account for this effect.
A pulse shape discrimination (PSD) algorithm was used to identify such pile-up events in data. The ``tail-over-total'' PSD method, frequently used to separate gamma ray and neutron events in organic scintillators due to their differing scintillation decay time scales \cite{psd}, may also be used to identify pile-up events. In this method, the charge integrated by the ADC for a PMT waveform is separated into ``head'' and ``tail'' portions at a fixed time following the trigger. An energy deposition of any value resulting from a single event should exhibit roughly the same ``tail-over-total'' ratio, corresponding to the decay time of the scintillator. Integration windows with pile-up events will show an excess in the tail portion of the pulse integration due to the contribution of the second pulse. For the imaging scans, the pulse integration period was fixed at 1 {\textmu}s, and the tail region was defined to be approximately the last 50\% of the pulse following the trigger. The 2D histogram of the tail fraction and the total energy deposition for a single detector over the course of one of the data runs is shown in Figure~\ref{fig:psd}. For each energy bin in each detector, a Gaussian profile was fit to the central tail/total ratio peak, and events outside a $3\sigma$ cut were rejected as pile-up. The resulting cut region on the PSD parameter for an example detector is also shown in Figure~\ref{fig:psd}. \begin{figure}[thb] \centering \includegraphics[width=\columnwidth]{figures/psd.pdf} \caption{Two-dimensional ``tail-over-total'' PSD histogram ($\log_{10}$ counts) for the raw spectrum of the open beam. The bright band at a ratio of $\sim$0.55 represents windows with a single detected photon.
The curved bands away from the main band show pile-up events, in which additional energy is added to the common monoenergetic depositions (above the main band) or in which a monoenergetic event occurs close enough to the end of a trigger window for its tail to cause another trigger (below the main band). The region between the magenta lines indicates the pile-up cut region.} \label{fig:psd} \end{figure} Rejecting events identified as pile-up, however, introduces an effective deadtime to the measurement (since no counts are accepted in an integration window with pile-up). Since the presence of material in the beam significantly affects the pile-up rate, the pile-up rejected spectra on their own are not representative of the actual transmitted flux compared to the open beam. For the open beam, approximately 70\% of recorded energy depositions were rejected due to pile-up (after correction for the Gaussian cut boundaries). Since each of these pile-up events represents at least two individual photons, use of the uncorrected pile-up rejected spectrum would result in underestimating the true flux by $>$500\%, and would introduce similarly large errors in the transmission ratio. Due to the long integration window, pile-up deadtime dominated the total effective deadtime. Additional data acquisition and processing time added $\ll$1\% to the effective deadtime, and was not included as a significant correction. To account for this, a pile-up correction was devised. Given that individual true events are independent, the true energy spectrum of counts that are recorded in the pile-up portion of the data matches the energy spectrum of the counts that occurred without pile-up, up to secondary effects such as small additional energy pile-up and associated trigger bias. Thus the desired correction may be approximated as a scaling factor that is applied to the pile-up rejected (``clean'') spectrum. 
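A minimal sketch of such a scaling correction, using the closed form derived in Appendix~\ref{app:pcor}; the spectrum and surviving fraction below are illustrative values, not measured data.

```python
import numpy as np

def pileup_correct(clean_spectrum, f):
    """Scale the pile-up-rejected spectrum N_C by (1 - ln f)/f, where f is
    the fraction of recorded counts surviving the pile-up cut."""
    return np.asarray(clean_spectrum, dtype=float) * (1.0 - np.log(f)) / f

# With ~70% of open-beam depositions rejected (f ~ 0.3), the clean spectrum
# must be scaled up by roughly a factor of 7.3.
factor = pileup_correct([1.0], 0.3)[0]
```

Because the correction is a pure scaling, it preserves the shape of the clean spectrum, consistent with the observation that pile-up events sample the same underlying energy distribution.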
Since the true rate of individual events $r$ is unknown, and the resolving time of the detector $\tau$ may also be unknown, it is most useful to express the standard formulation of pile-up \cite{knoll} as a function of the fraction $f$ of total counts captured in the pile-up rejected spectrum. For the long, fixed ADC integration window used in this experiment, the deadtime was of a non-paralyzable nature. As derived in Appendix~\ref{app:pcor}, the true spectrum $N$ may thus be reconstructed from the pile-up rejected spectrum $N_C$ as: \begin{equation} \label{eq:pcor} N = N_C\left( \frac{1-\ln f}{f} \right). \end{equation} This correction was applied to each time/position step of the image scans to account for variation in the pile-up rate over the course of the experiments. \subsection{Simulation Model} A complete simulation model of the experiment was constructed, including all relevant aspects of the experiment, to compute simulated transmission spectra for a wide variety of materials that could be directly compared to the data. By using such a model, the need for empirical calibration of the system with sample data from many materials was avoided. The simulation model was constructed using the Geant4 toolkit \cite{geant}, and included all important physical materials present in the experiment (the neutron shield in the beamline, the collimators, the detectors, etc.) at positions surveyed during data taking. Photons originating at the boron target were propagated through the geometry (including any simulated cargo material) to the model detectors, which included simulated responses --- resolution, efficiency, etc. --- modeled according to dedicated empirical tests. The simulated beam was generated by simulating the five major monoenergetic beam components shown in Figure~\ref{fig:slab} (1.7, 4.4, 6.7, 8.9, and 15.1 MeV) for fixed geometries, and determining their relative contribution to the beam using an empirical fit to data taken in dedicated experiments.
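A component fit of this kind can be sketched as a linear least-squares problem; the template and data arrays below are hypothetical stand-ins for the simulated, response-convolved spectra, not the actual fit used in the analysis.

```python
import numpy as np

def fit_beam_components(templates, data):
    """Fit amplitudes of monoenergetic beam templates to a measured
    spectrum. `templates` has shape (n_components, n_bins); negative
    fit artifacts are clamped to zero."""
    amps, *_ = np.linalg.lstsq(templates.T, data, rcond=None)
    return np.clip(amps, 0.0, None)

# Hypothetical example with two well-separated template shapes:
templates = np.array([[1.0, 0.2, 0.0],
                      [0.0, 0.3, 1.0]])
data = 2.0 * templates[0] + 3.0 * templates[1]
amplitudes = fit_beam_components(templates, data)
```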
The resulting simulated spectra for each monoenergetic contribution were convolved with the simulated detector response, and then the relative strength of each contribution was fit so as to best match the corresponding data. The results of the fit for the open beam are shown in Figure~\ref{fig:smatch}. Note that while the analysis depends only on relative transmission, this beam model accounts for the contributions in the 4.4 MeV region due to downscatter and incomplete energy deposition in the detectors from higher energy photons to increase the accuracy of the analysis. Figure~\ref{fig:cucomp} shows the simulation prediction for the transmission spectrum for a copper sample of $\rho_A \approx 49$ g/cm$^2$. This prediction is based on propagation of the reconstructed open beam (Figure~\ref{fig:smatch}) through a simulated material sample using Geant4. These predicted spectra were compared to the data spectra to reconstruct materials, as described in Section~\ref{sec:recon}. \begin{figure}[thb] \centering \includegraphics[width=\columnwidth]{figures/dsim_all.pdf} \caption{The simulation spectrum fit to an open beam data sample taken using a single NaI (Tl) detector (color online). The individual simulated contribution magnitudes, detector model parameters, and background model were fit to best match the data for each sample.} \label{fig:smatch} \end{figure} \begin{figure}[thb] \centering \includegraphics[width=\columnwidth]{figures/cumatch.pdf} \caption{Comparison of the data spectrum from a pixel in the copper region of the 2000~s scan to the simulation prediction based on the fit to the open beam spectrum of Figure~\ref{fig:smatch}, given knowledge of the material in the beam at the time.
} \label{fig:cucomp} \end{figure} Approximately 5000 simulated transmission experiments were generated to create a library of expected detected spectra over the complete two-dimensional space of $Z= 4\text{--}92$ and $\rho_A = 20\text{--}250$ g/cm$^2$, in addition to the open beam configuration. As described in Section~\ref{sec:recon}, this simulation library was compared to the transmission data to estimate the areal density and effective $Z$ of each pixel in a scan. \subsection{Radiographic Reconstruction} \label{sec:recon} With data and simulation spectra prepared, the transmission ratios for each data and simulation spectrum were computed in the regions of the 4.4 MeV and 15.1 MeV peaks. For each spectrum (with the simulation and data treated in the same manner), the counts between 2.8 MeV and 5.0 MeV (so as to encompass the full deposition and escape peaks of the 4.4 MeV photons) were integrated to compute $N_4$, while all counts above 10.1 MeV were integrated to produce $N_{15}$ (since essentially all counts above this energy were due to incomplete energy depositions of 15.1 MeV photons). The simulation was run with no mock cargo to produce an open beam spectrum for the transmission calculation, while the data runs used the open beam spectra collected during the first and last portions of the run (normalized to the integrated current of one pixel). The transmission ratios in each energy bin $E$ were defined as \begin{equation} R_E = \frac{N_E}{N_{E,\text{open}}}, \label{eq:trat} \end{equation} where the $N_{E,\text{open}}$ are the integrated counts for the appropriate open beam spectra, after applying the corrections discussed in Section \ref{sec:corr}. With the transmission ratios for the data and simulation spectra in the regions of interest determined, a figure of merit $F$ was constructed to determine the simulated combination of $Z$ and $\rho_A$ that best matched the data spectrum. 
The figure of merit used ratios of the transmission ratios (Equation \ref{eq:trat}) in the 4.4 and 15.1 MeV regions to form a quantity robust against a number of systematic uncertainties. For each pixel, the data spectrum was compared to each element of the simulated material library with effective $Z$ and $\rho_A$ according to \begin{multline} \label{eq:metric} F\left(Z,\rho_A\right) = \left( \frac{R_{15,\text{data}}}{R_{15,\text{sim}}\left(Z,\rho_A\right)} - 1 \right)^2 \\ + \left( \frac{ \sfrac{R_{4,\text{data}}}{R_{15,\text{data}}} } {\sfrac{R_{4,\text{sim}}\left(Z,\rho_A\right)}{R_{15,\text{sim}}\left(Z,\rho_A\right)}} - 1 \right)^2. \end{multline} This metric was motivated by two observations: the signal in the 15.1~MeV region was very clean due to the absence of high energy backgrounds in the data, and the ratio-of-ratios between the 4.4 and 15.1~MeV regions provides strong material discrimination while canceling certain systematic uncertainties. The values of $Z$ and $\rho_A$ corresponding to the minimum of $F$ were assigned as the reconstructed values for each pixel, noting that a perfect match between the measured and simulated transmission results in $F=0$. Figure~\ref{fig:fmin} shows examples of the reconstruction for a low-$Z$ material (Al) and a high-$Z$ material (W). \begin{figure*}[thb] \centering \includegraphics[width=\columnwidth]{figures/AlID.pdf} \includegraphics[width=\columnwidth]{figures/WID.pdf} \caption{Two examples of the data/simulation comparison metric $F$ (Equation~\ref{eq:metric}) as a function of $Z$ and $\rho_A$ over the simulation library. Examples from the aluminum (left) and tungsten (right) regions of the 7400~s test are shown.
The crossing dashed lines indicate the minimum of $F$ in each plot, showing the reconstructed $Z$ and $\rho_A$ values.} \label{fig:fmin} \end{figure*} \section{Results} \label{sec:res} To produce images of the mock cargo in effective $Z$ and areal density, the data spectra were grouped into 1 cm pixels along the scan length. For each pixel, the radiographic analysis described in Section~\ref{sec:analysis} was applied to produce the images in effective $Z$ and areal density $\rho_A$. Figure~\ref{fig:images} shows the results for the 7400~s scan. In addition to the pixel-by-pixel estimate, regions were defined according to the boxes shown in the reconstruction images to quantitatively evaluate the performance of the analysis for each material. For each region, the average reconstructed $Z$ and $\rho_A$ were computed for comparison to the known true values. These results are summarized in Tables~\ref{tab:long} and \ref{tab:short} for the 7400~s and 2000~s scans, respectively. The standard deviation of the single-pixel estimates over each region was used as a measure of the uncertainty on the values reconstructed for individual pixels, and these were in turn used to compute the uncertainties on the overall mean reconstructed values. The reconstructed values are very close to the true values in both of the imaging tests, showing the robustness of the system and analysis to environmental drift effects (e.g., temperature changes) and to the statistics of the data (the 7400~s run included approximately 4 times as many counts in the transmission spectra per pixel as the 2000~s run). It should also be noted that while the absolute values are close to the true values, there are residual statistically significant differences. This indicates that further improvements to the reconstruction algorithms or control of systematic uncertainties are possible, and thus should be part of future work.
Despite this limitation, the monoenergetic beam transmission provides effective atomic number identification with a specificity of $\pm$3 in $Z$, and thus permits separation of different high-$Z$ materials, which is typically not possible in existing radiography systems. For example, the tungsten and lead samples are well separated in reconstructed atomic number, while the areal density reconstruction is also accurate for each material to within a few g/cm$^2$. This suggests that pure special nuclear materials ($Z\geq92$) could be separated from benign high-$Z$ materials such as lead and tungsten, which would be invaluable for reducing false alarms in a system designed to detect nuclear smuggling. \begin{figure*}[thb] \centering \includegraphics[width=2.1\columnwidth]{figures/images2.pdf} \caption{Images of the mock cargo in reconstructed effective $Z$ and areal density for the 7400~s test ($\sim$1.3 mC integrated deuteron beam current on target per cm of the scan). The magenta boxes show the regions used to define each material sample for the computation of the values in Table~\ref{tab:long}. } \label{fig:images} \end{figure*} \begin{table*}[thb!] \setlength{\tabcolsep}{6pt} \begin{center} \begin{tabular}{l |r r r | r r r} Material & Actual $Z$ & Reconstructed $Z$ & Single Pixel Unc. & Actual $\rho_A$ & Reconstructed $\rho_A$ & Single Pixel Unc.
\\ & & & & (g/cm$^2$) & (g/cm$^2$) & (g/cm$^2$) \\ \hline\hline Borated HDPE & $\sim$5.2 & $4.7\pm0.1$ & $(1.4)$ & $46.2$ & $60.5\pm0.5$ & $(5.3)$ \\ Aluminum (Al) & 13 & $12.0\pm0.2$ & $(2.1)$ & $54.7$ & $55.8\pm0.6$ & $(7.2)$ \\ Copper (Cu) & 29 & $27.5\pm1.1$ & $(5.6)$ & $48.8$ & $48.0\pm1.2$ & $(6.0)$ \\ Tin (Sn) & 50 & $49.1\pm2.9$ & $(14.0)$ & $49.3$ & $45.0\pm1.3$ & $(6.2)$ \\ Tungsten (W) & 74 & $75.3\pm3.3$ & $(15.1)$ & $49.4$ & $50.0\pm1.7$ & $(8.0)$ \\ Lead (Pb) & 82 & $83.2\pm0.9$ & $(9.3)$ & $57.7$ & $56.8\pm0.4$ & $(4.5)$ \\ Uranium rods & $\sim$65 & $55.5\pm2.3$ & $(19.3)$ & $\sim$55 & $61.3\pm0.8$ & $(7.0)$ \\ \end{tabular} \end{center} \caption{Reconstructed effective $Z$ and areal density values for the mock cargo materials for the 7400~s test ($\sim$1.3 mC integrated deuteron beam current on target per cm of the scan). Quoted uncertainty after the ``$\pm$'' represents the uncertainty on the average over the material region, while the single pixel uncertainty is the standard deviation of the single pixel (1~cm$\times$1 detector) estimates over the region. See Figure~\ref{fig:images} for the definition of each sample region.} \label{tab:long} \end{table*} \begin{table*}[htb!] \setlength{\tabcolsep}{6pt} \begin{center} \begin{tabular}{l |r r r | r r r} Material & Actual $Z$ & Reconstructed $Z$ & Single Pixel Unc. & Actual $\rho_A$ & Reconstructed $\rho_A$ & Single Pixel Unc. 
\\ & & & & (g/cm$^2$) & (g/cm$^2$) & (g/cm$^2$) \\ \hline\hline Borated HDPE & $\sim$5.2 & $5.9\pm0.2$ & $(2.4)$ & $46.2$ & $61.7\pm0.7$ & $(7.9)$ \\ Aluminum (Al) & 13 & $12.4\pm0.4$ & $(4.6)$ & $54.7$ & $53.8\pm0.9$ & $(10.0)$ \\ Copper (Cu) & 29 & $28.2\pm1.3$ & $(6.6)$ & $48.8$ & $47.8\pm1.2$ & $(5.9)$ \\ Tin (Sn) & 50 & $49.7\pm3.3$ & $(16.1)$ & $49.3$ & $46.5\pm1.5$ & $(7.1)$ \\ Tungsten (W) & 74 & $75.8\pm3.8$ & $(18.4)$ & $49.4$ & $47.7\pm1.8$ & $(8.9)$ \\ Lead (Pb) & 82 & $80.9\pm1.7$ & $(18.1)$ & $57.7$ & $65.1\pm2.2$ & $(23.4)$ \\ Uranium rods & $\sim$65 & $59.4\pm2.5$ & $(20.5)$ & $\sim$55 & $63.2\pm2.0$ & $(16.5)$ \\ \end{tabular} \end{center} \caption{Reconstructed effective $Z$ and areal density values for the mock cargo materials for the 2000~s test ($\sim$0.33 mC integrated deuteron beam current on target per cm of the scan). Quoted uncertainty after the ``$\pm$'' represents the uncertainty on the average over the material region, while the single pixel uncertainty is the standard deviation of the single pixel (1 cm$\times$1 detector) estimates over the region.} \label{tab:short} \end{table*} The results for the mixed material uranium rods merit further discussion. Due to the fact that the rods consist of aluminum and uranium, and additionally because they are not uniform in areal density as presented to the beam, evaluation of the reconstruction of the material parameters for the rods is not as straightforward as for the pure materials. Appendix~\ref{app:eff} details rough estimates of the expected effective $Z$ and areal density for the arrangement of the rods, up to the limited information available about the exact composition of the rods. Due to the fact that the 1~cm pixel size in Figure \ref{fig:images} obscures the structures predicted by the results in Figure~\ref{fig:rodad}, it is useful to consider 1 mm pixels for the uranium rod sample despite the reduction in statistics. 
Figure~\ref{fig:urods} shows the reconstructed $Z$ and $\rho_A$ for the rods with 1 mm pixels. While the areal density is slightly overestimated in the 1 mm pixels (due to low statistics), the images in Figure~\ref{fig:urods} clearly show the structure of the rod arrangement (Figure~\ref{fig:rodarr}), and indicate that extra spacing between the rightmost rods, in combination with the uncertainty on the rod composition, is likely responsible for the discrepancies between the reconstructed $Z$ and $\rho_A$ values and the estimates from Appendix~\ref{app:eff}. This mixed material example demonstrates the limitations of 2D radiographic imaging in determining the material composition of cargo, but with sufficient position resolution the presence of high-$Z$ material is still clearly evident. \begin{figure*}[thb] \centering \includegraphics[width=1.5\columnwidth]{figures/urodsonly.pdf} \caption{Images of the uranium rods in reconstructed effective $Z$ and areal density for the 7400~s test ($\sim$1.3 mC integrated deuteron beam current on target per cm of the scan) with 1 mm wide pixels.} \label{fig:urods} \end{figure*} \section{Conclusions and Future Work} The results presented here establish the use of multiple monoenergetic gamma ray radiography (MMGR) to image materials in both their effective atomic number and areal density. Most notably, the technique distinguishes pure materials even at high-$Z$ (e.g., separating Pb and W or Pb and U), a critical requirement of any system designed to detect SNM and differentiate it from benign high-$Z$ materials, which could otherwise result in false positive detections. The specific information transmitted by the monochromatic beam, combined with high resolution detectors to clearly identify directly transmitted photons, provides the capability to identify materials while minimizing the radiation dose delivered to the cargo.
The results for the natural uranium rods demonstrate the fundamental limitations of this technique, and indeed of radiography of any kind, as a method for detecting SNM. The several mm of aluminum cladding significantly reduces the effective $Z$ of the configuration, somewhat masking the presence of the uranium. With sufficient position resolution, however, the presence of very high-$Z$ ($>$80) material can be flagged for this configuration. This suggests that monoenergetic gamma ray radiography may be paired with a secondary technique (such as a system designed to detect induced photofission neutrons\cite{rose,pnpf-short}) to disambiguate such situations. Additionally, future work will explore the resolving power of radiography using multiple projections for mixed material configurations. A concern that has been raised regarding high energy gamma radiography techniques is the resulting radiological activation of inspected materials. Photons at 15.1 MeV have enough energy to induce $(\gamma,n)$ photodisintegration reactions in many elements, and may indirectly become a source of neutrons. The capture of these secondary neutrons can transmute stable isotopes into metastable ones and induce long-lived radioactivity in the inspected materials. Calculations show, however, that the exposure to the neutrons produced by the above reaction amounts to just one hour of exposure to cosmogenic neutrons from the natural background, and as such any contribution to induced radioactivity is negligible compared to naturally occurring activation. This calculation is detailed in Appendix~\ref{app:neutrons}. The experimental setup used here would require significant modifications for deployment as a cargo scanning system, several of which are the subject of ongoing work. As discussed at the end of Section~\ref{sec:exp}, the scan times of several thousand seconds used in this work would be reduced to $\lesssim$2 minutes by operating at mA-scale current.
The results presented here demonstrate the ability of a radiography system to function in counting mode at such currents using the pile-up correction technique detailed in Section~\ref{sec:pup}. In such a system, another technique would likely need to be devised to account for the background subtraction conducted here using timing information. Additionally, the use of alternate nuclear reactions such as $^{12}$C$(p,p'\gamma)^{12}$C and $^{16}$O$(p,p'\gamma)^{16}$O, which produce monoenergetic gamma rays between 4.4 and 8.9 MeV, would open a variety of options for different accelerators and significantly reduce the neutron background arising from other processes in a system using $^{11}$B(d,n$\gamma$)$^{12}$C. Work is ongoing to establish the applicability of the techniques for precision material identification described in this paper using the lower energy photons available from such reactions. \section*{Acknowledgements} This work is supported in part by the U.S. Department of Homeland Security Domestic Nuclear Detection Office under a competitively awarded collaborative research ARI-LA Award, ECCS-1348328, and is part of a collaboration between the Massachusetts Institute of Technology, Georgia Institute of Technology, University of Michigan, and Pennsylvania State University. This support does not constitute an express or implied endorsement on the part of the United States Government. The authors are grateful to Richard C. Lanza, who developed some of the initial ideas behind this work, for his support, encouragement, and valuable advice. BSH gratefully acknowledges the support of the Stanton Foundation Nuclear Security Fellowship. The authors wish to thank the MIT-Bates Research and Engineering Center staff for their invaluable contributions to the construction and operation of the experiment; in particular Peter Binns, Hamid Moazeni, and Ernest Ihloff. They thank Taylor Sims for his work on the experiment during data taking.
Additionally, they thank Igor Jovanovic and Jayson Vavrek for valuable comments on the manuscript.
\section{Entropy of a disordered star polymer} The goal of this section is to calculate the total number of blobs $N_{\mathrm{blob}}$ in the star polymer, since the free energy $F$ is proportional to that number: \begin{equation} \beta F = N_{\mathrm{blob}} \label{eq:blob_F} \end{equation} where $\beta=1/k_{\mathrm{B}}T$. \subsection*{Hypotheses} In the case of a regular star we picture the semidilute region as made of concentric spherical shells of thickness $\xi(r)$, each one crossed by $f$ arms. This means that each shell is made of $f$ blobs of dimension $\xi(r)$.\\ From these assumptions we can write an expression for $\xi$: the volume of one shell, divided by $f$, must be the volume of a blob; dropping numerical prefactors, we can write $$ \frac{r^2 \xi}{f} = \xi^3 $$ and then the expression for the blob size $\xi$: \begin{equation} \xi(r) = f^{-1/2}r\label{eq:base xi}. \end{equation} To account for the different lengths of the arms, we consider the number of arms $f$ as a decreasing function of the radius: $f(r)$. We give an ansatz for this function: \begin{equation} f(r)=f_{0} \left(\frac{r}{b}\right)^{-\gamma}\label{eq:ansatz} \end{equation} where $b$ is the radius of the core of the star and $\gamma\geq0$. Combining eq. \ref{eq:base xi} and eq. \ref{eq:ansatz} we obtain the relation for the blob size as a function of the radius: \begin{equation} \xi(r) = f_{0}^{-1/2}b\left(\frac{r}{b}\right)^{1+\frac{\gamma}{2}}. \label{eq:xi} \end{equation} \subsection*{Positions of the shells} We are interested in the radius of each shell $r_m$, because we can relate it to the monomer concentration in the way explained below.
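As a quick numerical sanity check (with arbitrary, purely illustrative parameter values), one can verify that the blob size of eq. \ref{eq:xi}, combined with the ansatz \ref{eq:ansatz}, satisfies the shell condition $r^2\xi/f=\xi^3$ exactly at every radius:

```python
import math

# Illustrative check that xi(r) = f0^{-1/2} b (r/b)^{1+gamma/2} (eq. xi),
# together with the ansatz f(r) = f0 (r/b)^{-gamma}, satisfies the
# shell condition r^2 xi / f = xi^3. Parameter values are arbitrary.
f0, b, gamma = 50.0, 2.0, 0.5

def f_arms(r):
    # number of arms crossing the shell at radius r (eq. ansatz)
    return f0 * (r / b) ** (-gamma)

def xi(r):
    # blob size at radius r (eq. xi)
    return f0 ** -0.5 * b * (r / b) ** (1.0 + gamma / 2.0)

for r in (2.0, 4.0, 10.0, 40.0):
    shell_volume_per_arm = r ** 2 * xi(r) / f_arms(r)
    assert math.isclose(shell_volume_per_arm, xi(r) ** 3, rel_tol=1e-9)
```

The identity holds for any $r$, since both sides scale as $f_0^{-3/2}b^3(r/b)^{3+3\gamma/2}$.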
Since the thickness of a shell is $\xi(r)$, we can relate the radii of two consecutive shells with the following rule: $$ r_{m+1}-r_{m}=\xi(r_{m}) $$ which can be approximated by a simple differential equation $$ \frac{dr(m)}{dm}=\xi(r(m)) $$ whose solution, using the expression \ref{eq:xi}, is \begin{equation} r(m)=b\left[1-\frac{\gamma}{2}f_{0}^{-1/2}m\right]^{-\frac{2}{\gamma}}\label{eq:r} \end{equation} for $\gamma \neq 0$.\\ If $\gamma=0$, that is the case of the regular star, the solution is exponential: \begin{equation} r(m)=b\, e^{f_0^{-1/2}m}\label{eq:rexp}. \end{equation} \subsection*{Monomer concentration} Since a blob behaves by definition as a swollen polymer, its dimension is related to the number of monomers $g$ it contains by $\xi=ag^{\nu}$, where $a$ is the distance between two consecutive monomers. We set $\nu=3/5$. Therefore we invert this relation and use eq. \ref{eq:xi} to obtain $g(r)$: \begin{equation} g(r)=\left(\frac{\xi(r)}{a}\right)^{5/3}=\left(f_{0}^{-1/2}\frac{b}{a}\right)^{5/3}\left[\frac{r}{b}\right]^{\frac{5}{3}\left(1+\frac{\gamma}{2}\right)}\label{eq:g} \end{equation} Given the number of monomers in one blob $g(r)$, we can calculate the radial monomer concentration $c(r)$ from the total number of monomers in one shell divided by the volume of the shell: $$ c(r)=\frac{f(r)g(r)}{4\pi r^{2}\xi(r)}. $$ Using eq. \ref{eq:ansatz}, \ref{eq:xi} and \ref{eq:g} it becomes \begin{equation} c(r)=\frac{1}{4\pi r^{2}}\frac{1}{b} f_{0}^{2/3}\left(\frac{b}{a}\right)^{5/3} \left[\frac{r}{b}\right]^{\frac{2}{3}(1-\gamma)}\label{eq:concentration}.
\end{equation} \subsection*{Radius of the star polymer} Since we know the total number of monomers $N_{\mathrm{tot}}$ in the star polymer, we can integrate the concentration to obtain the expression of the radius $R$ of the star polymer: $$ N_{\mathrm{tot}}=\int_{b}^{R} 4\pi r^{2} c(r)dr $$ $$ = f_{0}^{2/3}\left(\frac{b}{a}\right)^{5/3} \int_{b}^{R}\left[\frac{r}{b}\right]^{\frac{2}{3}(1-\gamma)} \frac{dr}{b}= $$ \begin{equation} = f_{0}^{2/3}\left(\frac{b}{a}\right)^{5/3} \frac{3}{5-2\gamma} \left[ \left(\frac{R}{b}\right)^\frac{5-2\gamma}{3} -1 \right]. \label{eq:N(R)} \end{equation} Now we observe that we must exclude $\gamma>5/2$, because it corresponds to an inverse relation $R(N_{\mathrm{tot}})$ that diverges at a certain finite value of $N_{\mathrm{tot}}$. So we set $\gamma<5/2$ and then invert the relation \ref{eq:N(R)} to obtain $R(N_{\mathrm{tot}})$: $$ R(N_{\mathrm{tot}})=b\left(1+\frac{5-2\gamma}{3} \left( \frac{a}{b} \right)^{5/3} f_0^{-2/3} N_{\mathrm{tot}}\right)^{\frac{3}{5-2\gamma}}. $$ We neglect the term $+1$ in the parentheses since we consider the second term much greater than 1: \begin{equation} R(N_{\mathrm{tot}})\simeq b\left(\frac{5-2\gamma}{3} \left( \frac{a}{b} \right)^{5/3} f_0^{-2/3} N_{\mathrm{tot}}\right)^{\frac{3}{5-2\gamma}}. \label{eq:R} \end{equation} \subsection*{Number of shells} We evaluate $r(S)$ using eq. \ref{eq:r}, where $S$ is the total number of shells in the star polymer; then we compare it to the expression \ref{eq:R} for $R$: $$ r(S)\equiv R. $$ Now we can finally obtain an expression for $S$: $$ b\left[1-\frac{\gamma}{2}f_{0}^{-1/2}S\right]^{-\frac{2}{\gamma}}=b\left(\frac{5-2\gamma}{3} \left( \frac{a}{b} \right)^{5/3} f_0^{-2/3} N_{\mathrm{tot}} \right)^{\frac{3}{5-2\gamma}}. $$ This leads to \begin{equation} S=\frac{2}{\gamma} f_{0}^{1/2} \left[1- \left(\frac{5-2\gamma}{3} \left( \frac{a}{b} \right)^{5/3} f_0^{-2/3} N_{\mathrm{tot}} \right)^{-\frac{\gamma}{2} \frac{3}{5-2\gamma}} \right].
\end{equation} We observe that this expression reduces to the one for the regular star \cite{witten1986colloid} in the limit $\gamma \rightarrow 0$: \begin{equation} S \sim f_{0}^{1/2} \log \left[ \left( \frac{5}{3} \left( \frac{a}{b} \right)^{5/3} f_0^{-2/3} N_{\mathrm{tot}} \right)^{\frac{3}{5}} \right] \label{eq:limit} \end{equation} where we used the limit formula $$ \lim_{\gamma \rightarrow 0} \frac{x^\gamma-1}{\gamma} = \log x. $$ \subsection*{Number of blobs} In order to finally obtain the expression for the number of blobs in the star, we need to calculate the series \begin{equation} N_{\mathrm{blob}} = \sum_{m=1}^S f(r_m). \label{eq:Nblob} \end{equation} For $f(r_m)$ we combine the expression of the scaling of $f$ $$ f(r)=f_{0} \left(\frac{r}{b}\right)^{-\gamma} $$ and the expression of the radius of the $m$-th shell $$ r(m)=b\left[1-\frac{\gamma}{2}f_{0}^{-1/2}m\right]^{-\frac{2}{\gamma}}. $$ We obtain $$ f(r(m)) = f_0\left[1-\frac{\gamma}{2}f_{0}^{-1/2}m\right]^2. $$ Plugging this last equation into equation \ref{eq:Nblob} we find: $$ N_{\mathrm{blob}} = \sum_{m=1}^S f_0\left[1-\frac{\gamma}{2}f_{0}^{-1/2}m\right]^2 $$ $$ = \sum_{m=1}^S f_0\left[1-\gamma f_{0}^{-1/2}m + \frac{\gamma^2}{4} f_0^{-1} m^2 \right]. $$ In the end we approximate the series with an integral, so that we obtain: \begin{equation} N_{\mathrm{blob}}\simeq f_0 \left[S-\gamma f_{0}^{-1/2} \frac{S^2}{2} + \frac{\gamma^2}{4} f_0^{-1} \frac{S^3}{3} \right]. \label{perturbation} \end{equation} Again we observe that in the limit $\gamma \rightarrow 0$ this expression reduces to the one for the regular star $$ N_{\mathrm{blob}}\simeq (f_0)^{3/2}, $$ since we drop the logarithmic term in equation \ref{eq:limit}.\\ \subsection*{Discussion on the ansatz} In order to justify the ansatz \ref{eq:ansatz} on the scaling of the number of blobs per shell, in this section we show that $f(r)$ can be linked to the distribution $p(N)$ of the number of monomers in a single arm.
In particular we will find that, given a certain value of the exponent $\gamma$, there is a power law distribution $p(N)$ that produces $f(r)=f_{0} (r/b)^{-\gamma}$. \\ Let's interpret $f(r)$ as the fraction of arms that reach \textit{at least} a distance $r$ from the core (up to the overall normalization $f_0$); namely, $f(r)$ is the probability that one arm arrives at least at a distance $r$. This probability is nothing but the complementary cumulative distribution of the probability $p(r)$ that one arm reaches the distance $r$: $$ f(r)=1-\int_0^r p(r')dr'. $$ We can obtain the probability $p(r)$ by differentiating the cumulative distribution: \begin{equation} p(r) = - \frac{\partial}{\partial r} f(r). \end{equation} Since we are interested in the probability as a function of the number of monomers $N$, we need to change variable with the relation $$ p(N)dN = p(r)dr, $$ which gives \begin{equation} p(N)=p(r) \left. \left(\frac{dN}{dr} \right)^{-1} \right\vert_{r(N)}\label{eq:P}. \end{equation} To find the relation $N(r)$ we integrate the number $g(r)$ of monomers in one blob: $$ N(r) = \int_b^r g(r')dr', $$ where $g(r)=(f(r)^{-1/2}r/a)^{1/\nu}$. Furthermore we observe that $$ \frac{dN}{dr}=g(r). $$ Thanks to these relations, the expression \ref{eq:P} becomes \begin{equation} p(N)=-f'(r)\, \left(f(r)^{-1/2}r/a\right)^{-1/\nu}\label{eq:NN} \end{equation} Plugging the ansatz \ref{eq:ansatz} into these relations, we find the expression of the number of monomers of one arm contained within a radius $r$, $$ N(r) \sim r^{\frac{1}{\nu}\left(1+\frac{\gamma}{2}\right)+1}, $$ and the expression of the probability that one arm has $N$ monomers \begin{equation} p(N) \sim r(N)^{-1-\gamma -\frac{1}{\nu}\left(1+\frac{\gamma}{2}\right)}\label{eq:p+ansatz}. \end{equation} Inverting the relation $N(r)$ and substituting $r(N)$ into the expression \ref{eq:p+ansatz} we finally obtain the power law distribution $p(N)$ that corresponds to the ansatz $f(r)=f_{0} (r/b)^{-\gamma}$: \begin{equation} p(N) \sim N^{-1-\frac{\gamma}{\frac{1}{\nu}\left( 1+\frac{\gamma}{2}\right)+1}}.
\end{equation} \section{Figures} \begin{figure}[h] \includegraphics[scale=0.7]{fancy_energy_landscape.eps} \caption{Each replica must have its energy overlapping with the energy of the contiguous ones, so that the different replicas can exchange configurations. In the picture we show the MC trajectories of energy in the typical situation at equilibrium (we show only half of the replicas for visual clarity). Simulations were performed with $N=513$, $R=1.44$, $\lambda=1.42$ for $3\times10^{9}$ Monte Carlo sweeps. } \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.49\textwidth]{sigma_comparison_s7.eps} \includegraphics[width=0.49\textwidth]{sigma_comparison_s16.eps} \end{center} \caption{The curves of internal energy for different realizations are in good agreement with each other for $\sigma=7,16$, so we conclude that the internal energy is self-averaging. Four simulations were performed for each of the two values of the variance, and a different realization of the disorder was used in each run (each realization is coded with a different color). These simulations were performed with $N=257$, $\eta=17/257$, $R=0.77$, $\lambda=1.42$, $1.8\cdot10^9$ Monte Carlo sweeps. } \label{fig:self_averaging} \end{figure} \end{document} \section{Introduction} Simple heteropolymer models provide a candidate explanation for the formation of intermediate- and large-scale domains in prokaryotic and eukaryotic chromatin~\cite{imakaev2015modeling,Lagomarsino2015,Nicodemi2014,Dame2011,schwarzer2017two}. Such domains may be defined as extended contiguous regions along the DNA chain in which the DNA interacts preferentially with sites of the same domain. As such, they appear as square blocks in the contact matrix of the polymer, which is measurable by chromosome capture and sequencing techniques~\cite{Nicodemi2014}.
While other mechanisms, such as loop extrusion~\cite{Fudenberg2016,Goloborodko2016a}, likely contribute to driving domain formation, the interaction between chromosome-bound proteins is considered to be one of the main drivers for this behavior. For example, in mammals, the protein CTCF has been shown to form dimers \cite{pant2004mutation} that can stabilize chromatin loops. In bacteria, the proteins H-NS and MatP have the same bridging capabilities~\cite{Dame2011,RN49}. One main question is what drives domain identity, size and stability, and to what extent intra-specific interactions are needed to form domains. In other words, while it is reasonable to think that domain formation is mediated by proteins that are bound to chromatin and that interact with each other, we do not know how many species are needed to program a certain number of domains into a polymer~\cite{Nicodemi2014}. Since there are thousands of domains at different scales in mammalian genomes, trivially associating one--to--one interactions would require the presence of thousands of different types of intra-specific DNA-binding proteins. It is more reasonable to think that only a small number of proteins is responsible for the interactions between the chromatin sites. Focusing on the direct interaction between chromatin structure factors, various kinds of heteropolymer models~\cite{Nicodemi2014,brackley2016simulated, nazarov2015statistical,junier2010spatial} have been proposed to explain different aspects of domain formation, specification and stability. Perhaps the simplest one is a polymer chain in which equally-spaced monomers attract monomers of the same type~\cite{junier2010spatial,scolari2015combined}. This is a specific type of co--polymer model in which only one of the two chemical species exerts attractive interactions (and the linear density of this species is typically considered to be low).
This model shows that multiple-domain states are possible without any intra-specific interaction~\cite{scolari2015combined}. In such states, the polymer is collapsed into a multiple rosette configuration. Analytical arguments support the hypothesis that such a multi-domain phase is stable, owing to the trade-off between the surface-tension cost of keeping a core of bridging proteins and the entropy cost of the arms of the rosette states. Here, we use replica-exchange Monte Carlo (MC) simulations to explore the equilibrium states of the disordered version of this model, where the interacting monomers are not equally spaced, but arranged randomly along the backbone in a fixed (quenched) configuration. We ask about the role played by these disordered interactions in the thermodynamic stability of the collapsed states with one and multiple domains. We also address the possible role of the disorder in localizing the domains in a specific region of the chain, which may lead to pre-programmed spatial domains without intra-specific interactions. \section{Model} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{fig_01.pdf} \caption{Sketch of the model used in this work. The polymer is made of two types of monomers. Short-ranged attractive monomers (red) are separated by regions of non-attractive ones (light-blue). The attractive monomers are placed at spacings extracted from a Gaussian distribution and held fixed during each simulation (quenched disorder). Monomers are described as hard-sphere beads, joined by inextensible links.} \label{fig:model} \end{figure} We study a coarse-grained model consisting of a polymer made of $N$ consecutive monomers represented as hard-sphere beads of radius $R_{\mathrm{HC}}$ (see Fig.\ref{fig:model}).
Each monomer represents a region of the chromosome, and the size can be defined at will to describe the fiber at any resolution (e.g., from the finest experimental resolution of $\sim$ kb to describe topological associating domains, to that of Mb to describe chromosomal compartments). In this model, bead $i$ can interact with bead $j$ with an attractive short-ranged square well potential $u_{ij}$: \begin{displaymath} u_{ij} = \begin{cases} \infty & \mbox{if } r_{ij} < R_{\mathrm{HC}} \\ B_{ij} & \mbox{if } R_{\mathrm{HC}} < r_{ij} < R \\ 0 & \mbox{if } r_{ij}>R \ , \end{cases} \end{displaymath} where $r_{ij}$ is the distance between the beads, $R_{\mathrm{HC}}$ is the hard-core radius, $R$ is the range of the interaction and $B_{ij}$ is the interaction energy, which depends on the types of the monomers $i$ and $j$. In order to represent bridging interactions, we place $p$ attractive monomers along the chain (see Fig.\ref{fig:model}). Therefore, the interaction energy is \begin{displaymath} B_{ij} = \begin{cases} -\varepsilon & \mbox{if } i \mbox{ and } j \mbox{ are attractive monomers} \\ 0 & \mbox{otherwise} \end{cases} \end{displaymath} where $\varepsilon>0$ since the interaction between bridging points is always attractive. Using square-well potentials makes the MC calculations easier and faster than using smooth short-ranged potentials. The uncrossability of the polymer chain is guaranteed by the hard-core repulsion, whose range is $R_{\mathrm{HC}}=0.472\lambda$. The distance $\lambda$ between consecutive beads is kept fixed by the MC moves, and sets the microscopic length scale, with respect to which all the lengths of the model are measured. We first studied regular co--polymers, in which the interacting monomers are placed at regular intervals along the chain, separated by monomers which repel each other only by hard--core repulsion. Subsequently, we studied a disordered model in which the positions of these interacting monomers are displaced by a Gaussian--distributed quantity.
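For concreteness, the pair energy $u_{ij}$ can be sketched in a few lines. This is a minimal, illustrative implementation, not the actual code of ref. \cite{tiana2015montegrappa}; the parameter values follow those quoted for the simulations of Fig.~\ref{fig:ordered_collapses} ($R_{\mathrm{HC}}=0.472\lambda$, $\lambda=1.42$, $R=0.77$, $\varepsilon=2.4$):

```python
import numpy as np

# Illustrative sketch of the square-well pair energy u_ij of the model
# (not the Montegrappa implementation). Parameter values as in Fig. 2:
LAM = 1.42            # link length lambda (microscopic length scale)
R_HC = 0.472 * LAM    # hard-core radius
R_WELL = 0.77         # range R of the attractive well
EPS = 2.4             # well depth epsilon

def pair_energy(ri, rj, attr_i, attr_j):
    """Square-well interaction u_ij between beads at positions ri, rj."""
    d = np.linalg.norm(np.asarray(ri, dtype=float) - np.asarray(rj, dtype=float))
    if d < R_HC:
        return np.inf                    # hard-core overlap is forbidden
    if d < R_WELL and attr_i and attr_j:
        return -EPS                      # B_ij = -eps for two bridging monomers
    return 0.0                           # all other pairs do not interact

def total_energy(coords, attractive):
    """Sum of u_ij over all bead pairs (attractive[i] flags bridging beads)."""
    E = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            E += pair_energy(coords[i], coords[j], attractive[i], attractive[j])
    return E
```

Note that with these values the well ($R=0.77$) is shorter than the link length ($\lambda=1.42$), so bonded neighbours never attract each other and need no special treatment in the pair loop.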
The simulations are performed with an off-lattice MC algorithm whose degrees of freedom are the angles and dihedrals of the chain, updated with flip and pivot moves through a Metropolis acceptance rule, to ensure an effective sampling of the canonical ensemble. The algorithm is implemented in a freely-distributed code \cite{tiana2015montegrappa}. To improve the efficiency of the algorithm in sampling equilibrium conformations also at low temperatures, the MC algorithm is used in its parallel-tempering variant, in which 16 replicas of the system are simulated in parallel at increasing temperatures, and the conformations at adjacent temperatures are exchanged every 1000 MC steps with a Metropolis-like acceptance rule \cite{swendsen1986replica}. The thermodynamic quantities are then calculated with a weighted--histogram technique \cite{ferrenberg1989optimized}. \section{Results} \subsection*{Multi-domain states in the absence of disorder are stable} In the case of equally-spaced bridging points, theoretical arguments support the claim that multi-domain states are thermodynamically stable~\cite{scolari2015combined}. To test this hypothesis, we simulated polymers from $N=129$ up to $N=513$ monomers with the parallel-tempering algorithm until the quantities of interest reached convergence, keeping the density of interacting monomers $\eta=p/N$ constant. As shown in Fig.~\ref{fig:ordered_collapses}, polymers with $N=129$ monomers collapse into a single domain, while the polymers of length $N=256$ and $N=513$ collapse into a multiple-domain state similar to rosettes. Rosettes are formed by consecutive strands of the chain. In this range of $N$, the number of domains seems to depend linearly on $p$, as suggested in ref.~\cite{scolari2015combined}. The collapse for all values of $N$ happens near a temperature of $T\simeq 0.47\varepsilon$ (see Fig.~\ref{fig:ordered_collapses}).
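The replica-exchange move used every 1000 sweeps can be sketched as follows. This is a minimal illustration of the standard Metropolis-like swap rule, not the actual implementation of the cited code; the function name and signature are our own.

```python
import math
import random

def swap_accept(E_i, E_j, T_i, T_j, rng=random.random):
    """Metropolis-like acceptance for exchanging the configurations of
    two replicas at temperatures T_i and T_j with energies E_i and E_j.

    The swap is accepted with probability min(1, exp(delta)), where
    delta = (1/T_i - 1/T_j) * (E_i - E_j).
    """
    delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
    if delta >= 0.0:
        return True                      # always accept favorable swaps
    return rng() < math.exp(delta)       # otherwise accept stochastically
```

For example, a swap that hands a high-energy configuration to the hotter replica is always accepted, which is what lets low-temperature replicas escape metastable states.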
No phase similar to a random globule, in which the interactions are not correlated with the distance of the interacting monomers along the chain, is observed. All the rosette-like and multiple-rosette configurations appear to be thermodynamically stable below the coil-globule transition temperature (see Supplementary Figure S1). \begin{figure} \centering \includegraphics[width=0.5\textwidth]{fig_02.pdf} \caption{ Energy density for co--polymers with an ordered pattern of interacting monomers. In these simulations the density of interacting monomers $\eta=p/N$ was kept constant at the value $1/16$. Simulations were performed with $\varepsilon=2.4$, $R=0.77$, $\lambda=1.42$ for $3\times10^{9}$ Monte Carlo sweeps. } \label{fig:ordered_collapses} \end{figure} For longer chains ($N>129$), after a first collapse at higher temperature, the polymer displays a second collapse at lower temperature, from a phase with a higher number of domains to a phase with a lower number of domains (e.g., see Fig.~\ref{fig:ordered_collapses}, red solid curve). While the first energy jump displays features similar to a first-order phase transition, as suggested in~\cite{scolari2015combined}, the fusion of two domains resembles a nucleation-like phenomenon, and we speculate that this could be similar to a second-order phase transition. The low-temperature phases are difficult to sample for longer polymers, and thus we could not equilibrate the chain with $N=513$ below $T=0.14\varepsilon$. Although in this range of low temperatures we have observed conformations with three and two rosettes, we are not able to assess whether they are equilibrium states. Similarly, we could not equilibrate the system at even lower temperatures, at which we expect the equilibrium state to be a single rosette, this being certainly the zero--temperature equilibrium state of the system.
Summing up, our results indicate that new stable multi-domain phases become available with increasing system size, and that, as the temperature is decreased, the system can cross several hierarchical levels of organization, with a decreasing number of domains as equilibrium states, before collapsing into a single domain. \subsection*{Disorder enhances the stability of multi-domain configurations} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{./fig_03.pdf} \caption{Disorder in the positioning of the interacting monomers shifts the domain-formation transition towards higher temperatures. The plot shows collapse curves of the internal energy of a polymer of length $N=257$ monomers. Each dashed curve relates to a different value of the variance $\sigma$. Two snapshots at the same temperature are highlighted, comparing the case of regularly-spaced interacting monomers (A) to the disordered case (B): while the former is clearly in a coil state, the latter appears collapsed into a two-domain state. This is also visible in the specific heat vs $kT/\varepsilon$ plot (inset), in which the peak corresponding to the transition point smoothens and shifts towards higher temperatures in the presence of disorder. The simulations were performed with $N=257$, $\eta=17/257$, $R=0.77$, $\lambda=1.42$, $1.8\cdot10^9$ MC moves. } \label{fig:disordered_collapses} \end{figure} The model with interacting monomers placed every $\eta^{-1}$ monomers is then extended, introducing a quenched Gaussian displacement of zero mean and variance $\sigma$. For $\sigma=0$ we recover the ordered case, while for $\sigma \gtrsim \eta^{-1}$ we expect a uniform distribution of interacting monomers, not reminiscent of the ordered arrangement. Before studying the equilibrium properties of the disordered system, we must show that they are self--averaging, that is, that the average over the disorder is representative of a typical situation.
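For concreteness, one quenched realization of this disordered placement can be generated as in the following sketch. The function name, the clamping at the chain ends, and the treatment of $\sigma$ as a standard deviation are our own illustrative choices.

```python
import random

def attractive_positions(N, spacing, sigma, seed=0):
    """One quenched realization of the disordered co-polymer: attractive
    monomers sit at 0, spacing, 2*spacing, ..., each displaced by a
    Gaussian of width sigma (sigma=0 recovers the ordered case)."""
    rng = random.Random(seed)
    positions = set()                         # a set merges coinciding beads
    for x0 in range(0, N, spacing):
        x = round(x0 + rng.gauss(0.0, sigma))
        positions.add(min(max(x, 0), N - 1))  # clamp to the chain
    return sorted(positions)
```

For $N=257$ and spacing $\eta^{-1}=16$ the ordered case yields the $p=17$ attractive monomers used in the simulations; the same set of positions is then held fixed throughout a simulation (quenched disorder).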
As a rule, extensive quantities like the internal energy are self--averaging because of an argument given by Brout \cite{Brout:1959tz}; however, this argument cannot be applied straightforwardly to disordered polymers, and we checked explicitly in two cases ($\sigma=7$ and $\sigma=16$, using four realizations of the disorder) that the energy curves and the number of rosettes do not depend on the specific realization of the disorder (see Fig. S2 in the Supplementary Material). We then performed equilibrium simulations with $\eta^{-1}=16$ and $\sigma$ varying from 0 to 32 for $N=257$. In all these cases, as shown in Fig.~\ref{fig:disordered_collapses}, we observe a transition from a random coil at high temperatures to multi-rosette states. The transition becomes less sharp with increasing $\sigma$. Moreover, the disorder has the unexpected effect of stabilizing the multi-domain phase, as the transition temperatures become higher. This effect is accompanied by a broadening of the range of temperatures in which the multiple-domain phase is stable, roughly proportional to $\sigma$. The inset of Fig.~\ref{fig:disordered_collapses} shows the specific heat of the system, whose peaks are associated with the transitions. Two peaks in the specific heat are typically visible in this plot, corresponding, respectively, to the collapse from the coil to the two-domain state (high temperature) and to the transition from the two-domain to the one-domain state (low temperature). These peaks shift apart with increasing $\sigma$. The disorder smooths the collapse curve of the higher-temperature transition only: the height of the corresponding specific-heat peak decreases with $\sigma$, while the peak becomes wider. This suggests that the transition from the coil to multiple-domain states may no longer be switch-like, due to the emergence of domains of different size at different temperatures.
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{fig_04.pdf} \caption{The model shows power-law--like scaling of the contact probability with arc-length distance. The plot shows the mean logarithm of the contact probability obtained from the simulations. The average is performed over configurations and over all pairs of monomers $i$ and $j$ at fixed inter-monomer distance $|i-j|$. The different curves correspond to the case of ordered interacting monomers ($\sigma=0$, orange points) and disordered interacting monomers, $\sigma=16$, at temperatures $T=0.49$ (red), $T=0.50$ (cyan), $T=0.52$ (green and purple, for two different realizations of the disorder), $T=0.53$ (blue). The solid lines are linear fits, giving slopes (exponents) 0.53 for $\sigma=0$, 0.72 for $\sigma=16$ at $T=0.49$, 0.87 at $T=0.50$, 1.01 and 1.05 at $T=0.52$ and 1.86 at $T=0.53$.} \label{fig:p} \end{figure} We also considered the average contact probability between monomers as a function of their distance $|i-j|$ along the chain, which is typically measured from genome contact maps~\cite{imakaev2015modeling,Nicodemi2014}. Fig.~\ref{fig:p} compares this function for the cases of equally-spaced and disordered interacting monomers. In the ordered case ($\sigma=0$), the regular spacing between interacting monomers induces oscillations in the function, but the overall trend agrees with a power law with exponent close to 0.5 for values of $|i-j|$ up to distances comparable to $N$ (and therefore affected by finite-size effects). Disordered chains display exponents that increase with temperature, between 0.7 and 1 in the multi-rosette phase, and up to 1.9 in the coil region (this value is comparable to the expectation for a self-avoiding chain). The exponents appear to depend weakly on the specific realization of the disorder (cf. purple and green points in Fig.~\ref{fig:p}).
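The exponents quoted above come from linear fits of $\log P$ versus $\log|i-j|$. A minimal sketch of such a fit (our own illustrative helper, not the analysis code used for the paper) is:

```python
import math

def contact_exponent(seps, probs):
    """Least-squares slope of log P vs log s for P(s) ~ s^(-alpha);
    returns the (positive) exponent alpha."""
    xs = [math.log(s) for s in seps]
    ys = [math.log(p) for p in probs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope   # P decays, so the fitted slope is negative
```

Applied to a synthetic $P(s)\propto s^{-1.05}$, the fit recovers the exponent $1.05$, matching one of the values quoted in the caption of Fig.~\ref{fig:p}.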
\subsection*{Scaling argument for the entropy of a disordered star polymer} In order to give some theoretical support to the observation that multi-rosette configurations are thermodynamically stable in the presence of disorder, we generalized the scaling argument given in ref.~\cite{scolari2015combined}. In a configuration made of $q$ rosettes, each domain has a core made of $p/q$ monomers and a corona made of $p/q$ loops. Each rosette is approximated as a star polymer made of $f=p/q$ arms. This description allows a simple estimate of the entropic contribution of the corona to the free energy. In the absence of disorder, the leading term in this contribution is $f^{3/2}$. The energetic contribution to the free energy is the surface tension of each core, which is proportional to the surface of a single core, $(p/q)^{2/3}$, multiplied by the number of domains. Therefore, the free energy in the absence of disorder reads \begin{equation} \Delta F \simeq p^{3/2}q^{-1/2}+\varepsilon\, p^{2/3} q^{1/3} \ , \end{equation} which can be minimized with respect to the number of domains $q$, to find the number of rosettes at equilibrium \begin{equation} q_{\mathrm{eq}} \sim \varepsilon^{-6/5} \ . \end{equation} We now estimate how $\Delta F$ changes for disordered distributions of bridging points along the polymer. At fixed rosette state, changes in the positions of the attractive monomers along the chain do not affect the energetic term, so we need to compute only the entropic term for a rosette with loops of random length. To do this, we approximate the disordered rosette as a star polymer with arms of random length, and use the blob model for star polymers~\cite{daoud1982star, witten1986colloid} to describe the system with a mean-field ansatz. Here we omit intermediate calculations, which can be found in the supplementary material, section S1.
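As a consistency check, the minimization of $\Delta F$ can be reproduced numerically. The sketch below (a plain grid search with illustrative parameter values, treating $q$ as continuous) recovers the $\varepsilon^{-6/5}$ scaling; for this toy expression the minimizer also grows linearly with $p$, consistent with the observed linear dependence of the number of domains on $p$.

```python
def free_energy(q, p, eps):
    # Delta F ~ p^(3/2) q^(-1/2) + eps * p^(2/3) * q^(1/3)
    return p ** 1.5 * q ** -0.5 + eps * p ** (2.0 / 3.0) * q ** (1.0 / 3.0)

def q_eq(p, eps):
    """Grid-search minimizer of the scaling free energy over q
    (treated here as a continuous variable; in reality q is a
    small integer)."""
    qs = (0.01 * k for k in range(1, 200000))   # q in (0, 2000)
    return min(qs, key=lambda q: free_energy(q, p, eps))
```

Setting $\mathrm{d}\Delta F/\mathrm{d}q=0$ analytically for this expression gives $q_{\mathrm{eq}} = (3/2\varepsilon)^{6/5}\,p$, which the grid search reproduces to within the grid resolution.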
To account for the different lengths of the arms, we impose that the number of arms $f$ is a decreasing function of the radius, \begin{equation} f(r)=f_{0} \left(\frac{r}{b}\right)^{-\gamma} \ , \label{eq:ansatz} \end{equation} where $b$ is the radius of the core of the star and $\gamma \geq 0$. It is possible to show that this is equivalent to a power-law distribution of the distance between consecutive attractive monomers (see supplementary material, section S1, last paragraph). This assumption does not correspond to the Gaussian displacements of the bridging points from equally-spaced positions used in our simulations, and is motivated mainly by the ease of carrying out the calculation. We can now plug Eq.~\ref{eq:ansatz} into a scaling argument similar to the one found in ref.~\cite{witten1986colloid}. This calculation gives a leading term in the entropy that is identical to the one in the absence of disorder, \begin{equation} \Delta F_{\mathrm{entropic}}\simeq f_0 \left[S-\gamma f_{0}^{-1/2} \frac{S^2}{2} + \frac{\gamma^2}{4} f_0^{-1} \frac{S^3}{3} \right] \end{equation} with \begin{displaymath} S\simeq \frac{2}{\gamma} f_{0}^{1/2}, \end{displaymath} which implies \begin{equation} \Delta F_{\mathrm{entropic}}\simeq f_0^{3/2}. \end{equation} Thus, this argument supports the existence of stable states in the presence of disorder in the bridging points: it predicts that the disorder does not change the leading term in the entropy of the rosettes, so that the collapse is qualitatively the same. Since the leading-order term of the entropy is unaffected even for the extreme case of power-law spacing between attracting monomers along the chain, we expect this prediction to apply also to more compact distributions of the spacing between possible bridging points, such as the one used in our simulations. Indeed, we find that the collapsed phase of the polymer of length $N=257$ exhibits two domains for all values of $\sigma$ we tested, just like the model in the absence of disorder.
In order to rationalize why the simulations show a shift of the transition towards higher temperatures, which is not predicted by the above argument, we notice that it only considers the star-polymer contribution to the free energy. We can also compare the typical value of the loop entropy in the presence and absence of disorder, at fixed $\eta$. In the absence of disorder, the total entropy of $p$ loops of length $N/p$ is \begin{equation} S_{\mathrm{tot}}\sim p \log(N/p). \end{equation} For sufficiently small disorder (i.e. when $\sigma$ is much smaller than $\eta^{-1}$), there are $p$ loops of random length $l_i=|x_i-x_{i+1}|$, where $x_i$ and $x_{i+1}$ are the positions of two consecutive attractive monomers. Since the distribution of $x_i$ is Gaussian, the distribution of $l_i$ is still Gaussian, with mean $\langle l_i \rangle=N/p$. Thus we can compute the total entropy for the system in the presence of disorder: \begin{equation} S^{\mathrm{dis}}_{\mathrm{tot}}\sim \sum^p_{i=1} \log(l_i). \label{eq:Sdis} \end{equation} We can rewrite eq.~\ref{eq:Sdis} to obtain a relation with $S_{\mathrm{tot}}$: $$ S^{\mathrm{dis}}_{\mathrm{tot}}\simeq p \sum^p_{i=1} \frac{1}{p}\log(l_i) \simeq p \langle \log(l_i)\rangle < p \log \langle l_i \rangle \ , $$ where in the last step we used Jensen's inequality for concave functions. This means that $$ S^{\mathrm{dis}}_{\mathrm{tot}} < S_{\mathrm{tot}}, $$ namely that the entropy cost of the $p$ loops decreases in the disordered model, so that the transition temperature increases. Along the same lines, allowing for collapsed states with multiple domains, one can also speculate that the transition becomes broader because different regions of the polymer, with different local densities of attractive monomers, start to collapse at different temperatures.
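The Jensen-inequality step can be checked numerically. The sketch below (illustrative parameter values; $\sigma$ is used as a standard deviation and short loops are clamped to length 1) averages $p\,[\langle \log l_i \rangle - \log\langle l_i \rangle]$ over quenched realizations; by Jensen's inequality it is never positive, and it is strictly negative whenever $\sigma>0$.

```python
import math
import random

def loop_entropy_deficit(N=257, p=16, sigma=4.0, trials=2000, seed=0):
    """Disorder-averaged p*(<log l> - log<l>) for p loops whose lengths
    are N/p plus the difference of two Gaussian displacements.
    Jensen's inequality makes each realization's term <= 0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        ls = [max(1.0, N / p + rng.gauss(0.0, sigma) - rng.gauss(0.0, sigma))
              for _ in range(p)]
        mean_l = sum(ls) / p
        mean_log = sum(math.log(l) for l in ls) / p
        total += p * (mean_log - math.log(mean_l))
    return total / trials
```

With $\sigma=0$ all loops have equal length and the deficit vanishes, recovering $S^{\mathrm{dis}}_{\mathrm{tot}}=S_{\mathrm{tot}}$.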
\subsection*{Localization of domains caused by disorder} \begin{figure*} \centering \includegraphics[scale=0.5]{fig_05.pdf} \caption{A disordered distribution of interactions can localize the domains along the chain. The figure shows contact matrices for different conformations of a polymer made of $N=257$ monomers and $p=17$ interacting monomers (in green), spaced $16$ monomers apart in the ordered case. The bottom contact maps are the equilibrium averages of the system. Each column is obtained with a specific realization of the disorder, the two columns with $\sigma=16$ being obtained with two different realizations of the placement of interacting beads. These simulations were performed with $N=257$, $\eta=17/257$, $R=0.77$, $\lambda=1.42$, $1.8\cdot10^9$ Monte Carlo sweeps. The replicas used in this image are at the temperature $k_BT/\varepsilon=0.3$.} \label{fig:contact_matrices} \end{figure*} In long ordered co--polymers with equally-spaced attracting monomers, the positions of the domains are invariant under translations along the chain, and the domains are free to move along it (see Fig.~\ref{fig:contact_matrices}, left column). Different equilibrium conformations can break this symmetry, displaying domains at specific positions, but the equilibrium contact map averages the domains out, re--establishing the translational symmetry (cf. the bottom--left contact map in Fig.~\ref{fig:contact_matrices}). Only a small effect due to the finiteness of the chain is observable at the polymer ends; this effect would be further reduced in longer, more realistic polymers. Disorder has the effect of localizing the domains, preventing their averaging out. The three rightmost columns of Fig.~\ref{fig:contact_matrices} show the results of simulations performed with one realization of disorder with $\sigma=3$ and two realizations with $\sigma=16$, choosing $N=257$ and $p=17$.
Disorder breaks the translational symmetry of the system, favouring the stabilization of domains in specific regions of the chain. As a consequence, the average map is no longer uniform. For example, at $\sigma=3$, contact maps show with high probability a two-block structure (Fig.~\ref{fig:contact_matrices}, second column) that contributes strongly to the average map (bottom panel). As shown in the case $\sigma=16$ (last two columns of Fig.~\ref{fig:contact_matrices}), the degree of localization depends on the specific realization of the disorder. The figure shows two contact maps of conformations obtained with two different realizations of the same distribution of disorder. In the first realization, the two-block structure has well-defined borders that correspond to the regions around monomers $25$ and $110$. Instead, the second realization of the same distribution does not show a clear compartmentalization into two fixed spatial domains, and reallocation of bridging points is observed around a coarse-grained, nearly equally-spaced structure of organizing centers. The degree of localization of the domains does not seem to depend trivially on the organization of the interacting monomers into linear clusters along the chain (green dots along the diagonal of Fig.~\ref{fig:contact_matrices}). In the case $\sigma=3$, the displacement from the ordered case is small, but there is still a higher degree of localization than in the case $\sigma=16$ shown in the rightmost column, where the partitioning of interacting monomers is more marked. Thus, the degree of localization appears to result from a complex balance between energy and entropy, and cannot be easily predicted from the locations of the interacting monomers.
\section{Discussion and Conclusions} Our extensive MC simulations give access to the equilibrium properties of polymers up to $N=513$, characterized by a small linear density of fixed attractive monomers, which can be equally spaced or disordered. Both in the ordered and in the disordered case, the phase diagram of the polymer displays a high-temperature coil phase and a sharp transition to globular phases with multi-rosette structures. The states with different numbers of rosettes are clearly separated from each other by jumps in the internal energy which resemble first-order transitions. At the highest temperatures we observe the states with the largest number of rosettes, and this number decreases as the temperature is lowered, down to the one-rosette zero-temperature state. Although the system size in our simulations is limited by the high computational cost of equilibrating the system, we can speculate that a hierarchy of states exists with varying numbers of rosettes. The maximum observed number, $n_{\mathrm{max}}\simeq N/128$, is reached just below the coil-globule transition temperature. The observed rosettes have the specific feature of involving monomers that are close along the chain. The formation of rosette-like domains is a form of microphase separation (MPS), which in the thermodynamic limit is known to take place in ordered co-polymers and to produce well-defined structures with few allowed symmetries (lamellar, hexagonal and cubic) in the vicinity of the homogeneous phase~\cite{Leibler:1980hc}. If disorder is added in the positions of the chemical species, mean-field calculations by de Gennes show that the MPS phase is suppressed in favour of a glassy state~\cite{deGennes:1979jp}. Beyond mean field, Shakhnovich and coworkers showed that fluctuations reduce the glassy temperature, re-establishing the MPS \cite{Sfatos:1993}, by a perturbative approach in the number of neighbouring monomers common to different conformations of the chain (disregarded in the mean field).
Our results suggest that correlations between neighbouring monomers play an important role in defining the phase diagram of this polymer. Although the size of our polymers is limited by computational constraints, the rosette-like domains we observed are formed by consecutive segments of the chain, suggesting that the effect of correlations could be much larger than that predicted by the perturbative approach. In fact, while the latter predicts a phase diagram with a second-order transition from a disordered globule to MPS, we observe what looks like a first-order transition from a random coil directly to a domain--separated phase. Moreover, at increasing disorder, the range of temperatures at which MPS occurs increases not only because the freezing temperature decreases, as predicted in ref.~\cite{Sfatos:1993}, but also because the high-energy states are affected (as observed in ref.~\cite{Tiana:2011fj}), increasing the coil--globule transition temperature. Whether this behaviour is a result of the finiteness of the chain, or is a feature that survives in the thermodynamic limit, we cannot tell based on our simulations. However, the scaling arguments that support the simulations are not expected to fail in the large--$N$ limit, suggesting that the phase diagram we propose is stable with respect to $N$. An important effect of the disorder is that of localizing the structural domains along the chain, analogously to what happens with spin diffusion in the presence of impurities~\cite{Anderson:1958vr}. While the pure system, at least in the thermodynamic limit, is invariant under translation of the domains, and consequently its average equilibrium contact map is uniform, in the presence of quenched disorder the domains can become localized, resulting in a block-like equilibrium contact map.
The detailed pattern of blocks, and even how well-defined they are, does not appear to be a self-averaging quantity, and depends on the specific positioning of the interacting monomers. These properties do not seem to be easily predicted from the knowledge of the exact realization of the disorder, in agreement with the general observation that the identification of the equilibrium states of disordered systems is an NP-hard problem~\cite{Barahona:1982gj}. The results obtained with this simple co--polymer model can be useful to gain some insight into the structural organization of chromosomes~\cite{ringrose2017epigenetics}, which display a hierarchical set of nested domains~\cite{Zhan:2017kj}. Little is known about the actual molecular mechanisms responsible for the formation of domains at different length scales in the chromatin fiber, and several models have been proposed to account for such an organization. Some years ago it was suggested that they are the result of the rapid collapse of the fiber into a non-equilibrium crumpled globule~\cite{mirny2011fractal}. A model that generates blocks similar to the smallest--scale domains observed in chromatin is the loop--extrusion model, based on the hypothesis that the interactions between regions of the fiber are mediated by an active, ATP-fueled protein complex~\cite{Fudenberg2016,Goloborodko2016a}. In other, equilibrium, models, such as the one we study here, the number of domains is determined by the number of different interacting species~\cite{Jost:2014co,Bianco:2017gp}, and the formation of domains is essentially energy-driven. With the present simple model we showed that very complicated ingredients are not necessary: the balance between entropy and energy is enough to generate stable domains even with a single type of interacting protein.
Finally, a feature of chromosomes that emerges from experimental data, and that was widely studied in the past years, is that the contact probability between pairs of regions of the same chromosome roughly scales with their genomic distance as a power law controlled by an atypical exponent that is variable, but typically lower than 1.5, a behavior that is unexpected for simple homopolymers at equilibrium~\cite{LiebermanAiden:2009jz,Sanborn:2015dr}. Also in this case, several physical mechanisms were proposed \cite{LiebermanAiden:2009jz,Barbieri:2012iw,Zhan:2016ds,Fudenberg2016}. Our results suggest that even a model as simple as the one we propose here produces equilibrium contact-probability functions that can be fitted with power laws of the genomic distance, with exponents that are lower than those of homopolymers and in overall agreement with the trends of the experimental data. In our model, the slopes of this contact probability depend on the disorder strength and on the stabilization energy of the domains.
\section{Introduction} \label{intro} In the Music Information Retrieval (MIR) field, many research problems of interest involve the automatic description of properties of musical signals, employing concepts that are understood by humans. For this, tasks are derived that can be solved by automated systems. In such cases, algorithmic processes are employed to map raw music audio information to humanly understood descriptors (e.g.\ genre labels or descriptive tags). To achieve this, historically, the raw audio would first be transformed into a \emph{representation} based on \emph{hand-crafted features}, which are engineered by humans to reflect dedicated semantic signal properties. The feature representation would then serve as input to various statistical or Machine Learning (ML) approaches~\cite{Casey2008Content-basedChallenges}.\par The framing as described above can generally be applied to many applied ML problems: complex real-world problems are abstracted into a relatively simpler form, by establishing tasks that can be computationally addressed by automatic systems. In many cases, the task involves making a prediction based on a certain observation. For this, modern ML methodologies can be employed that can automatically infer the logic for the prediction directly from (a numeric representation of) the given data, by optimizing an objective function defined for the given task. However, music is a multimodal phenomenon that can be described in many parallel ways, ranging from objective descriptors to subjective preference. As a consequence, while many music-related tasks are well understood by humans, it is often hard to pinpoint and describe where the truly `relevant' information is in the music data used for these tasks, and how this can properly be translated into the numeric representations that should be used for prediction.
While research into such proper translations can be conducted per individual task, it is likely that informative factors in music data will be shared across tasks. As a consequence, when seeking to identify informative factors that are not explicitly restricted to a single task, Multi-Task Learning (MTL) is a promising strategy. In MTL, a single learning framework hosts multiple tasks at once, allowing models to perform better by sharing commonalities between the involved tasks~\cite{RichCaruana1997Multitask}. MTL has been successfully used in a range of applied ML works~\cite{Bengio2012RepresentationPerspectives,Liu2015Multi-taskSelection, Bingel2017IdentifyingNetworks,li2014heterogeneous,Zhang2015DeepAnalysis,ZhangFacialLearning, DBLP:journals/corr/KaiserGSVPJU17,DBLP:conf/iccv/ChangLPK17}, also including the music domain~\cite{Weston2011Multi-TaskingRetrieval,Aytar2016SoundNet:Video}. Following successes in the fields of Computer Vision (CV) and Natural Language Processing (NLP), deep learning approaches have recently also gained increasing interest in the MIR field, in which \emph{deep representations} of music audio data are learned directly from the data, rather than being hand-crafted. Many works employing such approaches reported considerable performance improvements in various music analysis, indexing and classification tasks~\cite{Hamel2010LearningNetworks,Boulanger-Lewandowski2012ModelingTranscription,schlueter2014_icassp,Choi2016AutomaticNetworks,Oord2013DeepRecommendation,chandna2017monoaural,Jeong2016LearningClassification,Han2016DeepMusic}.\par In many deep learning applications, rather than training a complete network from scratch, pre-trained networks are commonly used to generate deep representations, which can be either directly adopted or further adapted for the task at hand.
In CV and NLP, (parts of) certain pre-trained networks~\cite{Simonyan2014VeryRecognition,he2016deep,Szegedy2015GoingConvolutions,Mikolov2013EfficientSpace} have now been adopted and adapted in a very large number of works. These `standard' deep representations have typically been obtained by training a network for a single learning task, such as visual object recognition, employing large amounts of training data. The hypothesis on why these representations are effective in a broader spectrum of tasks than they were originally trained for, is that \emph{deep transfer learning (DTL)} is happening: information initially picked up by the network is beneficial also for new learning tasks performed on the same type of raw input data. Clearly, the validity of this hypothesis is linked to the extent to which the new task can rely on similar data characteristics as the task on which the pre-trained network was originally trained.\par Although a number of works deployed DTL for various learning tasks in the music domain~\cite{Dieleman2011Audio-basedNetwork,Choi2017TransferTasks,van2014transfer,Liang2014Content-AwareNetworks}, to our knowledge, transfer learning and the employment of pre-trained networks are not yet as standard in the MIR domain as in the CV domain. Again, this may be due to the broad and partially subjective range and nature of possible music descriptions. Following the considerations above, it may then be useful to combine deep transfer learning with multi-task learning. Indeed, in order to increase robustness to a larger scope of new learning tasks and datasets, the concept of MTL has also been applied in training deep networks for representation learning, both in the music domain~\cite{Aytar2016SoundNet:Video,Weston2011Multi-TaskingRetrieval} and in general~\cite[p.~2]{Bengio2012RepresentationPerspectives}. As the model learns several tasks and datasets in parallel, it may pick up commonalities among them.
As a consequence, the expectation is that a network learned with MTL will yield robust performance across different tasks, by transferring shared knowledge~\cite{RichCaruana1997Multitask,Bengio2012RepresentationPerspectives}. A simple illustration of the conceptual difference between traditional DTL and deep transfer learning based on MTL (further referred to as \emph{multi-task based deep transfer learning (MTDTL)}) is shown in Fig.~\ref{fig:toyexample}.\par \begin{figure} \centering \includegraphics[height=0.3\textheight]{graphics/toy_example.pdf} \caption{Simplified illustration of the conceptual difference between traditional deep transfer learning (DTL) based on a single learning task (above) and multi-task based deep transfer learning (MTDTL) (below). The same color used for a learning and a target task indicates that the tasks have commonalities, which implies that the learned representation is likely to be informative for the target task. At the same time, this representation may not be that informative to another future task, leading to a low transfer learning performance. The hypothesis behind MTDTL is that relying on more learning tasks increases the robustness of the learned representation and its usability for a broader set of target tasks.} \label{fig:toyexample} \end{figure} The mission of this paper is to investigate which conditions in the setup of MTDTL are important to yield effective deep music representations. Here, we understand an `effective' representation to be a representation that is suitable for a wide range of new tasks and datasets. Ultimately, we aim to provide a methodological framework to systematically obtain and evaluate such transferable representations. We pursue this mission by exploring the effectiveness of MTDTL and traditional DTL, as well as concatenations of multiple deep representations, obtained from networks that were independently trained on separate single learning tasks.
We consider these representations for multiple choices of learning tasks and multiple target datasets. Our work will address the following research questions: \begin{itemize} \item \textbf{RQ1:} Given a set of learning sources that can be used to train a network, what is the influence of the number and type of the sources on the effectiveness of the learned deep representation? \item \textbf{RQ2:} How do various degrees of information sharing in the deep architecture affect the effectiveness of a learned deep representation? \end{itemize} By answering \textbf{RQ1} we arrive at an understanding of important factors regarding the composition of a set of learning tasks and datasets (which in the remainder of this work will be denoted as \emph{learning sources}) to achieve an effective deep music representation, specifically on the number and nature of learning sources. The answer to \textbf{RQ2} provides insight into \emph{how to choose the optimal multi-task network architecture} in an MTDTL context. For example, in MTL, multiple sources are considered under a joint learning scheme, which partially shares inferences obtained from different learning sources in the learning pipeline. In MTL applications using deep neural networks, this means that certain layers will be shared between all sources, while at other stages, the architecture will `branch' out into source-specific layers~\cite{RichCaruana1997Multitask,Bingel2017IdentifyingNetworks,li2014heterogeneous,Zhang2015DeepAnalysis,ZhangFacialLearning,misra2016cross,Aytar2016SoundNet:Video}. However, investigation is still needed on where in the layered architecture branching should ideally happen---if a branching strategy turns out beneficial in the first place.\par To reach the aforementioned answers, it is necessary to conduct a systematic assessment to examine relevant factors. For \textbf{RQ1}, we investigate different numbers and combinations of learning sources.
For \textbf{RQ2}, we study different architectural strategies. However, we wish to ultimately investigate the effectiveness of the representation with respect to new, target learning tasks and datasets (which in the remainder of this paper will be denoted by \emph{target datasets}). While this may cause a combinatorial explosion with respect to possible experimental configurations, we will make strategic choices in the design and evaluation procedure of the various representation learning strategies.\par The scientific contribution of this work can be summarized as follows: \begin{itemize} \item[$\bullet$] We provide insight into the effectiveness of various deep representation learning strategies under the multi-task learning context. \item[$\bullet$] We offer in-depth insight into ways to evaluate desired properties of a deep representation learning procedure. \item[$\bullet$] We propose and release several pre-trained music representation networks, based on different learning strategies for multiple semantic learning sources. \end{itemize} The rest of this work is organized as follows: a formalization of this problem, as well as the global outline of how learning will be performed based on different learning tasks from different sources, will be presented in Section~\ref{learning_framework}. Detailed specifications of the deep architectures we considered for the learning procedure will be discussed in Section~\ref{dl_specifications}. Our strategy to \emph{evaluate} the effectiveness of different representation network variants by employing various \emph{target datasets} will be the focus of Section~\ref{eval}. Experimental results will be discussed in Section~\ref{res:intro}, after which general conclusions will be presented in Section~\ref{concl}. \section{Framework for Deep Representation Learning} \label{learning_framework} In this section, we formally define the deep representation learning problem.
As Fig.~\ref{fig:problem} illustrates, any domain-specific MTDTL problem can be abstracted into a formal task, which is instantiated by a specific dataset with specific observations and labels. Multiple tasks and datasets are involved to emphasize different aspects of the input data, such that the learned representation is more adaptable to different future tasks. The learning part of this scheme can be understood as the MTL phase, which is introduced in Section~\ref{learning_framework:prob_def}. Subsequently, in Section~\ref{learning_framework:learning_sources}, we discuss the learning sources involved in this work, which consist of various tasks and datasets, allowing us to investigate their effects on transfer learning. Further, in Section~\ref{learning_framework:learning_sources:learnfromfactors}, we introduce the label preprocessing procedure applied in this work, which ensures that the learning sources are more regularized and their comparative analysis is clearer. \begin{figure}[!htp] \centering \subfloat[Multi-Task Transfer Learning in General Problem Domain]{% \includegraphics[width=0.7\textwidth]{graphics/Problem_General.pdf}% \label{fig:problem:general}% }% \hfill% \subfloat[Multi-Task Transfer Learning in Music Information Retrieval Domain]{% \includegraphics[width=0.7\textwidth]{graphics/Problem_Example_full.pdf}% \label{fig:problem:example}% }% \hfill% \caption{Schematic overview of what this work investigates. The upper scheme illustrates a general problem solving framework in which multi-task transfer learning is employed. The tasks $t \in \{t_0, t_1, \cdots, t_M\}$ are derived from a certain problem domain and instantiated by datasets, which often are represented as sample pairs of observations and corresponding labels $(X_{t}, y_{t})$. Sometimes, the original dataset is processed further into simpler representation forms $(X_{t}, z_{t})$, to filter out undesirable information and noise.
Once a model or system $f_{t}(X_{t})$ has learned the necessary mappings within the learning sources, this knowledge can be transferred to another set of target datasets, leveraging commonalities already obtained by the pre-training. Below the general framework, we show a concrete example, in which the broad MIR problem domain is abstracted into various sub-problems with corresponding tasks and datasets.} \label{fig:problem} \end{figure} \subsection{Problem Definition} \label{learning_framework:prob_def} A machine learning problem, focused on solving a specific task $t$, can be formulated as a minimization problem, in which a model function $f_t$ must be learned that minimizes a loss function $\mathcal{L}$ for a given dataset $\mathcal{D}_{t} = \{\,(x^{(i)}_{t}, y^{(i)}_{t}) \mid i \in \{1, \cdots, I\} \,\}$, comparing the model's predictions for the input $x_t$ to the actual task-specific learning labels $y_t$. This can be formulated using the following expression: \begin{equation} \label{eq:1} \hat\theta = \argmin\;\mathbb{E}_{\mathcal{D}_{t}}\mathcal{L}(y_t, f_t(x_t;\theta)) \end{equation} where $x_{t}\in\mathbb{R}^d$ is, traditionally, a hand-crafted $d$-dimensional feature vector and $\theta$ is a set of model parameters of $f$. When deep learning is employed, the model function $f$ denotes a learnable network. Typically, the network model $f$ is learned in an end-to-end fashion, from raw data at the input to the learning label. In the speech and music fields, however, true end-to-end learning is still not common practice. Instead, raw data is typically transformed first, before serving as network input.
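The formulation in (\ref{eq:1}) amounts to empirical risk minimization over the dataset. As an illustrative, hypothetical toy instance (a linear model with squared-error loss, optimized by plain gradient descent rather than the networks used in this work), the sketch below recovers the minimizing parameters $\hat\theta$:

```python
import random

random.seed(0)

# hypothetical toy dataset D_t: labels generated by a known linear map
true_w = [0.5, -1.0, 2.0]
xs = [[random.gauss(0, 1) for _ in range(3)] for _ in range(64)]
data = [(x, sum(wi * xi for wi, xi in zip(true_w, x))) for x in xs]

def f(x, theta):
    # the model function f_t(x; theta): here a simple linear model
    return sum(wi * xi for wi, xi in zip(theta, x))

def empirical_loss(theta):
    # E_D[L(y, f(x; theta))] with a squared-error loss L
    return sum((y - f(x, theta)) ** 2 for x, y in data) / len(data)

# gradient descent toward argmin_theta of the empirical loss
theta = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(200):
    grads = [0.0, 0.0, 0.0]
    for x, y in data:
        err = f(x, theta) - y
        for j in range(3):
            grads[j] += 2.0 * err * x[j] / len(data)
    theta = [t - lr * g for t, g in zip(theta, grads)]
```

After the loop, `theta` approximates the generating parameters, illustrating the minimization in (\ref{eq:1}) in its simplest form.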
More specifically, in the music domain, common input to function $f$ would be $X\in\mathbb{R}^{c\times{n}\times{b}}$, replacing the originally hand-crafted feature vector $x\in\mathbb{R}^d$ from (\ref{eq:1}) by a time-frequency representation of the observed music data, usually obtained through the Short-Time Fourier Transform (STFT), with potential additional filter bank applications (e.g.\ mel-filter bank). The dimensions $c$, $n$, $b$ indicate channels of the audio signal, time steps, and frequency bins respectively. If such a network is still trained for a specific single machine learning task $t$, we can now reformulate (\ref{eq:1}) as follows: \begin{equation} \label{eq:2} \hat\theta = \argmin \; \mathbb{E}_{\mathcal{D}_{t}}\mathcal{L}(y_{t}, f_{t}(X_{t};\theta)). \end{equation} In MTL, in the process of learning the network model $f$, different tasks will need to be solved in parallel. In the case of deep neural networks, this is usually realized by having a network in which lower layers are shared for all tasks, but upper layers are task-specific. Given $m$ different tasks $t$, each having the learning label $y_{t}$, we can formulate the learning objective of the neural network in an MTL scenario as follows: \begin{equation} \label{eq:4} \hat\theta^{s}, \hat\theta^{*} = \argmin \; \mathbb{E}_{t\in{\mathcal{T}}}\mathbb{E}_{\mathcal{D}_{t}}\mathcal{L}(y_{t}, f_{t}(X_{t};\theta^{s},\theta^{t})) \end{equation} Here, $\mathcal{T}=\{t_{1},t_{2},...,t_{m}\}$ is a given set of tasks to be learned and $\theta^{*}=\{\theta^{1},\theta^{2},...,\theta^{m}\}$ indicates a set of model parameters $\theta^{t}$ with respect to each task. Since the deep architecture initially shares lower layers and branches out to task-specific upper layers, the parameters of shared layers and task-specific layers are referred to separately as $\theta^{s}$ and $\theta^{t}$, respectively. Updates for all parameters can be achieved through standard back-propagation.
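The shared/task-specific parameter split of the MTL objective above can be sketched in a minimal stdlib-Python forward pass, with hypothetical toy dimensions: $\theta^{s}$ acts as a shared trunk, each task $t$ owns a softmax head $\theta^{t}$, and the joint loss is the average cross-entropy over tasks:

```python
import math
import random

random.seed(0)

def linear(x, W, b):
    # dense layer: y = Wx + b
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

D_IN, D_SHARED, K, TASKS = 8, 4, 3, ["tag", "bpm"]  # toy sizes, toy task names

# theta_s: parameters of the shared lower layers; theta[t]: task-specific head
theta_s = (rand_mat(D_SHARED, D_IN), [0.0] * D_SHARED)
theta = {t: (rand_mat(K, D_SHARED), [0.0] * K) for t in TASKS}

def f_t(x, t):
    h = [max(0.0, v) for v in linear(x, *theta_s)]  # shared trunk with ReLU
    return softmax(linear(h, *theta[t]))            # task-specific branch

def joint_loss(x, targets):
    # the expectation over t in T: average cross-entropy across all tasks
    losses = []
    for t in TASKS:
        p = f_t(x, t)
        losses.append(-sum(zi * math.log(pi) for zi, pi in zip(targets[t], p)))
    return sum(losses) / len(losses)

x = [random.gauss(0, 1) for _ in range(D_IN)]
targets = {t: [1.0 if i == 0 else 0.0 for i in range(K)] for t in TASKS}
loss = joint_loss(x, targets)
```

A gradient step on `loss` would update both the shared parameters $\theta^{s}$ (receiving gradients from every task) and each head $\theta^{t}$ (receiving gradients only from its own task), which is exactly what back-propagation does in the MTL setting described above.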
Further specifics on network architectures and training configurations will be given in Section~\ref{dl_specifications}.\par Given the formalizations above, the first step in our framework is to select a suitable set $\mathcal{T}$ of learning tasks. These tasks can be seen as multiple concurrent descriptions or transformations of the same input fragment of musical audio: each will reflect certain semantic aspects of the music. However, unlike the approach in a typical MTL scheme, solving multiple specific learning tasks is actually not our main goal; instead, we wish to learn an effective \emph{representation} that captures as many semantically important factors in the low-level music representation as possible. Thus, rather than using learning labels $y_{t}$, our representation learning process will employ reduced learning labels $z_{t}$, which capture a reduced set of semantic factors from $y_{t}$. We then can reformulate (\ref{eq:4}) as follows: \begin{equation} \label{eq:5} \hat\theta^{s}, \hat\theta^{*} = \argmin \; \mathbb{E}_{t\in{\mathcal{T}}}\mathbb{E}_{\mathcal{D}_{t}}\mathcal{L}(z_{t}, f_{t}(X_{t};\theta^{s},\theta^{t})) \end{equation} where $z_t\in\mathbb{R}^{k}$ is a $k$-dimensional vector that represents the reduced learning label for a specific task $t$. Each $z_t$ will be obtained through task-specific factor extraction methods, as described in Section~\ref{learning_framework:learning_sources:learnfromfactors}. \subsection{Learning Sources} \label{learning_framework:learning_sources} In the MTDTL context, a training dataset can be seen as the `source' to learn the representation, which will later be transferred to the future `target' dataset. Learning sources of different natures can be imagined; globally, they can be categorized as \emph{Algorithm} or \emph{Annotation}.
In the \emph{Algorithm} category, traditional feature extraction or representation transformation algorithms are employed to automatically extract semantically interesting aspects from the input data. The \emph{Annotation} category includes different types of label annotations of the input data by humans.\par The dataset used as resource for our learning experiments is the Million Song Dataset (MSD)~\cite{Bertin-Mahieux2011}. In its original form, it contains metadata and precomputed features for a million songs, with several associated data resources, e.g.\ considering \texttt{Last.fm} social tags and listening profiles from \texttt{the Echo Nest}. While the MSD does not distribute audio for copyright reasons, 30-second audio previews can be obtained for the songs in the dataset through the API of the \texttt{7digital} service. These 30-second previews will form the source for our raw audio input.\par Using the MSD data, we consider several subcategories of learning sources within the \emph{Algorithm} and \emph{Annotation} categories; below, we give an overview of these, and specify what information we considered exactly for the learning labels in our work. \subsubsection{Algorithm} \label{learning_framework:learning_sources:algorithm} \begin{itemize} \item \textbf{\textit{Self.}} The music track is the learning source itself; in other words, intrinsic information in the input music track should be captured through a learning procedure, without employing further data. Various unsupervised or auto-regressive learning strategies can be employed under this category, with variants of Autoencoders, including the Stacked Autoencoder~\cite{bengio2007greedy,Vincent2008ExtractingAutoencoders}, Restricted Boltzmann Machines (RBM)~\cite{smolensky1986information}, Deep Belief Networks (DBN)~\cite{Hinton2006ANets} and Generative Adversarial Networks (GAN)~\cite{goodfellow2014generative}.
As another example within this category, variants of Siamese networks for similarity learning can be considered~\cite{Han2015MatchNet:Matching,Arandjelovic2017LookLearn,Huang2017SimilarityGames}. In our case, we will employ the Siamese architecture to learn a metric that measures whether two input music clips belong to the same track, or to two different tracks. This can be formulated as follows: \begin{equation} \label{eq:self} \hat\theta^{self}, \hat\theta^{s} = \argmin \; \mathbb{E}_{X_l, X_r \sim \mathcal{D}_{self}} \mathcal{L}(y_{self}, f_{self}(X_{l},X_{r};\theta^{self},\theta^{s})) \end{equation} \begin{equation} \label{eq:self_h} y_{self}= \begin{cases} 1, & \text{if } X_{l} \text{ and } X_{r} \text{ sampled from same track} \\ 0 & \text{otherwise} \end{cases} \end{equation} where $X_{l}$ and $X_{r}$ are a pair of randomly sampled short music snippets (taken from the 30-second MSD audio previews) and $f_{self}$ is a network for learning a metric between given input representations in terms of the criteria imposed by $y_{self}$. It is composed of one or more fully-connected layers and one output layer with softmax activation. A global outline illustration of our chosen architecture is given in Fig.~\ref{fig:match_arch}. Further specifications of the representation network and sampling strategies will be given in Section~\ref{dl_specifications}. \begin{figure}[htp] \centering \includegraphics[height=0.33\textheight]{graphics/siamese_arch.pdf} \caption{Siamese architecture adopted for the \emph{self} learning task. For further details of the Representation Network, see Section~\ref{dl_specifications:base_architecture} and Fig.~\ref{fig:base_arch}.} \label{fig:match_arch} \end{figure} \item \textbf{\textit{Feature.}} Many algorithms exist already for extracting features out of musical audio, or for transforming musical audio representations.
By running such algorithms on musical audio, learning labels are automatically computed, without the need for soliciting human annotations. Algorithmically computed outcomes will likely not be perfect, and may include noise or errors. At the same time, we consider them a relatively efficient way to extract semantically relevant and more structured information out of a raw input signal.\par In our case, under this category, we use Beats Per Minute (BPM) information, released as part of the MSD's precomputed features. The BPM values were computed by an estimation algorithm, as part of the \texttt{Echo Nest} API.\par \end{itemize} \subsubsection{Annotation} \label{learning_framework:learning_sources:annotation} \begin{itemize} \item \textbf{\textit{Metadata.}} Typically, metadata will come `for free' with music audio, specifying side information, such as a release year, the song title, the name of the artist, the corresponding album name, and the corresponding album cover image. Considering that this information describes categorization facets of the musical audio, metadata can be a useful information source to learn a music representation. In our experiments, we use release year information, which is readily provided as metadata with each song in the MSD.\par \item \textbf{\textit{Crowd.}} Through interaction with music streaming or scrobbling services, large numbers of users, also designated as the \textit{crowd}, have left explicit or implicit information regarding their perspectives on musical content. For example, they may have created social tags, ratings, or social media mentions of songs. With many services offering API access to these types of descriptors, crowd data therefore offers scalable, spontaneous and diverse (albeit noisy) human perspectives on music signals.\par In our experiments, we use social tags from \texttt{Last.fm}\footnote{\url{https://labrosa.ee.columbia.edu/millionsong/lastfm}} and user listening profiles from the \texttt{Echo Nest}.
\item \textbf{\textit{Professional.}} As mentioned in \cite{Casey2008Content-basedChallenges}, annotation of music tracks is a complicated and time-consuming process: annotation criteria frequently are subjective, and considerable domain knowledge and annotation experience may be required before accurate and consistent annotations can be made. Professional experts in categorization have this experience, and thus are capable of providing clean and systematic information about musical content. It is not trivial to get such professional annotations at scale; however, these types of annotations may be available in existing professional libraries.\par In our case, we use professional annotations from the Centrale Discotheek Rotterdam (CDR), the largest music library in The Netherlands, holding all music ever released in the country in physical and digital form in its collection. The CDR collection can be digitally accessed through the online Muziekweb\footnote{\url{https://www.muziekweb.nl/}} platform. For each musical album in the CDR collection, genre annotations were made by a professional annotator, according to a fixed vocabulary of 367 hierarchical music genres.\par As another professional-level `description', we adopted the lyrics information per track, which is provided in Bag-of-Words format with the MSD. To filter out trivial terms such as stop-words, we applied TF-IDF~\cite{salton1983introduction}.\par \item \textbf{\textit{Combination.}} Finally, learning labels can be derived from combinations of the above categories. In our experiment, we used a combination of artist information and social tags, by making a bag of tags at the artist level as a learning label.\par \end{itemize} Not all songs in the MSD actually include learning labels from all the sources mentioned above. Clearly, it is another advantage of MTL that one can use such unbalanced datasets in a single learning procedure, to maximize the coverage of the dataset.
On the other hand, if one uses an unbalanced number of samples across different learning sources, it is not trivial to compare the effect of individual learning sources. We therefore choose to work with a subset of the dataset, in which equal numbers of samples across learning sources can be used. As a result, we collected 46,490 clips of tracks with corresponding learning source labels. A 41,841 / 4,649 split was made for training and validation for all sources from both MSD and CDR. Since we mainly focus on transfer learning, we used the validation set mostly for monitoring the training, to keep the network from overfitting.\par \begin{table} \centering \caption{Properties of learning sources.} \label{tab:intertask} \begin{tabular}{llllrl} \hline\noalign{\smallskip} Identifier & \multicolumn{2}{c}{Category} & Data & Dimensionality & Preprocessing \\ \noalign{\smallskip}\hline\noalign{\smallskip} \textit{self} & \multirow{ 2}{*}{Algorithm} & Self & MSD - Track & 1 & \\ \textit{bpm} & & Feature & MSD - BPM & 1 & GMM \\ \noalign{\smallskip}\hline\noalign{\smallskip} \textit{year} & \multirow{ 6}{*}{Annotation} & Metadata & MSD - Year & 1 & GMM \\ \textit{tag} & & Crowd & MSD - Tag & 174,156 & pLSA \\ \textit{taste} & & Crowd & MSD - Taste & 949,813 & pLSA \\ \textit{cdr\_tag} & & Professional & CDR - Tag & 367 & pLSA \\ \textit{lyrics} & & Professional & MSD - Lyrics & 5,000 & pLSA, TF-IDF\\ \textit{artist} & & Combination & MSD - Artist \& Tag & 522,366 & pLSA \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{table} \scriptsize \centering \caption{Examples of Latent Topics extracted with pLSA from MSD social tags} \label{tab:topic_term} \begin{tabular}{ll} \hline\noalign{\smallskip} Topic & Strongest social tags\\ \noalign{\smallskip}\hline\noalign{\smallskip} tag1 & \texttt{indie rock}, \texttt{indie}, \texttt{british}, \texttt{Scottish}\\ tag2 & \texttt{pop}, \texttt{pop rock}, \texttt{dance}, \texttt{male vocalists}\\ tag3 &
\texttt{soul}, \texttt{rnb}, \texttt{funk}, \texttt{Neo-Soul}\\ tag4 & \texttt{Melodic Death Metal}, \texttt{black metal}, \texttt{doom metal}, \texttt{Gothic Metal}\\ tag5 & \texttt{fun}, \texttt{catchy}, \texttt{happy}, \texttt{Favorite}\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \subsection{Latent Factor Preprocessing} \label{learning_framework:learning_sources:learnfromfactors} Most learning sources are noisy. For instance, social tags include tags for personal playlist management, long sentences, or simply typos, which do not actually show relevant nuances in describing the music signal. The algorithmically extracted BPM information also is imperfect, and likely contains octave errors, in which BPM is under- or overestimated by a factor of 2. To deal with this noise, several previous works using the MSD~\cite{Choi2016AutomaticNetworks,Choi2017TransferTasks} applied a frequency-based filtering strategy along with top-down domain knowledge. However, this shrinks the available sample size. As an alternative way to handle noisiness, several other previous works~\cite{Lamere2008SocialRetrieval,Weston2011Multi-TaskingRetrieval,Hamel2013TRANSFERSIMILARITY,Law2010LearningLabels,van2014transfer,Oord2013DeepRecommendation} apply latent factor extraction using various low-rank approximation models to preprocess the label information. We also choose to do this in our experiments. A full overview of chosen learning sources, their category, origin dataset, dimensionality and preprocessing strategies is shown in Table~\ref{tab:intertask}. In most cases, we apply probabilistic latent semantic analysis (pLSA), which extracts latent factors as a multinomial distribution of latent topics~\cite{DBLP:conf/uai/Hofmann99}. Table~\ref{tab:topic_term} illustrates several examples of strong social tags within extracted latent topics. 
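For the scalar-valued sources (BPM and release year), a Gaussian Mixture Model is used to turn each value into a categorical distribution over mixture components, and the representation is then trained by minimizing the KL divergence between model inference and these label factors. A minimal stdlib-Python sketch with hypothetical, hand-set mixture parameters (this work fits $k=50$ components by EM; 4 fixed components are used here for brevity):

```python
import math

def gmm_responsibilities(x, means, sigmas, weights):
    # z_t: posterior probability of each Gaussian component given scalar x
    dens = [w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
            for m, s, w in zip(means, sigmas, weights)]
    total = sum(dens)
    return [d / total for d in dens]

def kl_divergence(z, p, eps=1e-12):
    # KL(z || p): loss between label factor distribution z and model output p
    return sum(zi * math.log((zi + eps) / (pi + eps)) for zi, pi in zip(z, p))

# hypothetical 4-component mixture over the BPM range
means, sigmas, weights = [70, 100, 130, 160], [10, 10, 10, 10], [0.25] * 4
z = gmm_responsibilities(128.0, means, sigmas, weights)  # soft label for 128 BPM
p = [0.1, 0.2, 0.6, 0.1]                                 # some model inference
loss = kl_divergence(z, p)
```

Note how a BPM of 128 yields a soft label concentrated on the component centered at 130, so nearby tempi produce similar training targets; this is the smoothing effect the factor preprocessing is after.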
For situations in which learning labels are a scalar, non-binary value (BPM and release year), we applied a Gaussian Mixture Model (GMM) to transform each value into a categorical distribution over Gaussian components. In the case of the \textit{Self} category, which basically is a binary membership test, no factor extraction was needed. After preprocessing, learning source labels $y_t$ are now expressed in the form of probability distributions $z_t$. Then, the learning of a deep representation can take place by minimizing the Kullback\textendash Leibler (KL) divergence between model inferences $f_t(X)$ and label factor distributions $z_t$. Along with the noise reduction, another benefit of such preprocessing is the regularization of the scale of the objective function between different tasks involved in the learning, when the resulting factors have the same size. This regularity between the objective functions is particularly helpful for comparing different tasks and datasets. For this purpose, we used a fixed single value $k=50$ for the number of factors (pLSA) and the number of Gaussians (GMM). In the remainder of this paper, the datasets and tasks processed in the above manner will be denoted by \textit{learning sources}, for coherent presentation and usage of the terminology.\par \section{Representation Network Architectures} \label{dl_specifications} In this section, we present the detailed specification of the deep representation neural network architecture used in this work. We will discuss the base architecture of the network, and further discuss the shared architecture with respect to different fusion strategies that one can take in the MTDTL context. Also, we introduce details on the preprocessing applied to the input data served into the networks.\par \subsection{Base Architecture} \label{dl_specifications:base_architecture} \begin{table} \centering \caption{Configuration of the base CNN.
\texttt{conv} and \texttt{max-pool} indicate a 2-dimensional convolution and max-pooling layer, respectively. We set the stride size to 2 on the time dimension of \texttt{conv1}, to compress dimensionality at the early stage. Otherwise, all strides are set to 1 across all the convolution layers. \texttt{gap} corresponds to the global average pooling used in~\cite{he2016deep}, which averages out all the spatial dimensions of the filter responses. \texttt{fc} is an abbreviation of fully-connected layer. We use \texttt{dropout} with $p=0.5$ only for the \texttt{fc-feature} layer, where the intermediate latent representation is extracted and evaluated. For simplicity, we omit the batch-size dimension of the input shape.} \label{tab:netarch} \begin{tabular}{lllll} \hline\noalign{\smallskip} Layer & Input Shape & Weight Shape & Sub-Sampling & Activation\\ \noalign{\smallskip}\hline\noalign{\smallskip} \texttt{conv1} & $2\times216\times128$ & $2\times16\times5\times5$ & $2\times1$ & \texttt{ReLU}\\ \texttt{max-pool1} & $16\times108\times128$ & & $2\times2$ & \\ \texttt{conv2} & $16\times54\times64$ & $16\times32\times3\times3$ & & \texttt{ReLU}\\ \texttt{max-pool2} & $32\times54\times64$ & & $2\times2$ & \\ \texttt{conv3} & $32\times27\times32$ & $32\times64\times3\times3$ & & \texttt{ReLU} \\ \texttt{max-pool3} & $64\times27\times32$ & & $2\times2$ & \\ \texttt{conv4} & $64\times13\times16$ & $64\times64\times3\times3$ & & \texttt{ReLU}\\ \texttt{max-pool4} & $64\times13\times16$ & & $2\times2$ &\\ \texttt{conv5} & $64\times6\times8$ & $64\times128\times3\times3$ & & \texttt{ReLU}\\ \texttt{max-pool5} & $128\times6\times8$ & & $2\times2$ & \\ \texttt{conv61} & $128\times3\times4$ & $128\times256\times3\times3$ & & \texttt{ReLU}\\ \texttt{conv62} & $256\times3\times4$ & $256\times256\times1\times1$ & & \texttt{ReLU} \\ \texttt{gap} & $256$ & & \\ \texttt{fc-feature} & $256$ & $256\times256$ & & \texttt{ReLU} \\ \texttt{dropout} & $256$ & & \\ \texttt{fc-output} & $256$
& learning source specific & & \texttt{Softmax} \\ \noalign{\smallskip}\hline \end{tabular} \end{table} As the deep base architecture for feature representation learning, we choose a Convolutional Neural Network (CNN) architecture inspired by~\cite{Simonyan2014VeryRecognition}, as described in Fig.~\ref{fig:base_arch} and Table~\ref{tab:netarch}. The CNN is one of the most popular architectures in many music-related machine learning tasks~\cite{Oord2013DeepRecommendation,Choi2016AutomaticNetworks,Han2016DeepMusic,Schluter2016LearningExamples,DBLP:conf/icassp/HersheyCEGJMPPS17,DBLP:conf/nips/LeePLN09,Dieleman2011Audio-basedNetwork,DBLP:conf/icmla/HumphreyB12,DBLP:conf/interspeech/NakashikaGT12,DBLP:conf/ismir/UllrichSG14,DBLP:conf/mlsp/Piczak15,DBLP:conf/ica/SimpsonRP15,DBLP:conf/interspeech/PhanHMM16,DBLP:conf/cbmi/PonsLS16,DBLP:conf/fedcsis/StasiakM16,DBLP:conf/icassp/SuZZG16}. Many of these works adopt an architecture having cascading blocks of 2-dimensional filters and max-pooling, derived from well-known works in image recognition~\cite{Simonyan2014VeryRecognition,Krizhevsky2012ImageNetNetworks}. Although variants of CNN using 1-dimensional filters were also suggested by~\cite{Dieleman2014END-TO-ENDAUDIO,Oord2016WaveNet:Audio,Aytar2016SoundNet:Video,Jaitly2011LEARNINGHinton} to learn features directly from a raw audio signal in an end-to-end manner, not many works have managed to use them successfully on music classification tasks~\cite{Lee2017Sample-LevelWaveforms}.\par The main difference between the base architecture and~\cite{Simonyan2014VeryRecognition} is the use of Global Average Pooling (GAP) and Batch Normalization (BN) layers. BN is applied to accelerate the training and reduce internal covariate shift, for every convolution layer and the \texttt{fc-feature} layer~\cite{Ioffe}.
Also, global spatial pooling is adopted as the last pooling layer of the cascading convolution blocks, which is known to effectively summarize the spatial dimensions in both the image~\cite{he2016deep} and music~\cite{Han2016DeepMusic} domains. This also keeps the number of parameters in the \texttt{fc-feature} layer small. We applied the Rectified Linear Unit (ReLU)~\cite{Nair2010RectifiedMachines} to all convolution layers and the \texttt{fc-feature} layer. For the \texttt{fc-output} layer, softmax activation is used. For each convolution layer, we applied zero-padding such that the input and the output have the same spatial shape. As for regularization, we apply dropout~\cite{Srivastava2014Dropout:Overfitting} on the \texttt{fc-feature} layer. We added $L2$ regularization across all the parameters with the same weight $\lambda=10^{-6}$. \subsubsection{Audio Preprocessing} \label{dl_specifications:audiopreproc} We aim to learn a music representation from as-raw-as-possible input data to fully leverage the capability of the neural network. For this purpose, we use the dB-scale mel-scale magnitude spectrum of an input audio fragment, extracted by applying 128-band mel-filter banks on the Short-Time Fourier Transform (STFT). Mel-spectrograms have generally been a popular input representation choice for CNNs applied in music-related tasks~\cite{Nam2012LearningRetrieval,Hamel2013TRANSFERSIMILARITY,Oord2013DeepRecommendation,Choi2016AutomaticNetworks,Choi2017TransferTasks,Han2016DeepMusic}; besides, it was also reported recently that their frequency-domain summarization, based on psycho-acoustics, is efficient and not easily learnable through data-driven approaches~\cite{Choi2017ATagging,Doerfler2017BasicDesign}. We choose a 1024-sample window size and 256-sample hop size, translating to about 46 ms and 11.6 ms respectively for a sampling rate of 22 kHz.
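As a sanity check on these parameters (and on the input tensor size used in the sampling procedure below), the arithmetic can be sketched as follows, assuming a centered STFT where the frame count for $n$ samples is $1+\lfloor n/\mathrm{hop}\rfloor$; the exact sampling rate of 22,050 Hz is our assumption for the quoted "22 kHz":

```python
SR = 22050          # assumed exact rate behind the quoted ~22 kHz
WIN, HOP = 1024, 256
N_MELS = 128
CLIP_SECONDS = 2.5  # crop length used during sampling
CHANNELS = 2        # stereo channels are kept

n_samples = int(CLIP_SECONDS * SR)   # 55125 samples per crop
# assuming a centered STFT (signal padded by WIN // 2 on both sides),
# the number of frames is 1 + floor(n_samples / hop)
n_frames = 1 + n_samples // HOP

window_ms = 1000 * WIN / SR          # ~46 ms analysis window
hop_ms = 1000 * HOP / SR             # ~11.6 ms hop

input_shape = (CHANNELS, n_frames, N_MELS)
```

Under these assumptions the input tensor comes out as $2\times216\times128$, matching the \texttt{conv1} input shape in the base CNN table.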
We also applied standardization to each frequency band of the mel spectrum, making use of the mean and variance of all individual mel spectra in the training set. \subsubsection{Sampling} \label{dl_specifications:sampling} During the learning process, in each iteration, a random batch of songs is selected. Audio corresponding to these songs originally is 30 seconds in length; for computational efficiency, we randomly crop 2.5 seconds out of each song each time. Keeping the stereo channels of the audio, the size of a single input tensor $X^*$ used for the experiment ended up being $2\times216\times128$, where the first dimension indicates the number of channels, and the following dimensions are time steps and mel-bins, respectively. Along with the computational efficiency, a number of studies in the MIR field reported that using a small chunk of the input not only inflates the dataset, but also yields good performance on high-level tasks such as music auto-tagging~\cite{Lee2017Sample-LevelWaveforms,Han2016DeepMusic,Dieleman2014END-TO-ENDAUDIO}. For the \textit{self} case, we generate batches with equal numbers of songs for both membership categories in $y_{self}$.\par \begin{figure}[htp] \centering \includegraphics[height=0.33\textheight]{graphics/default_arch.pdf}% \caption{Default CNN architecture for supervised single-source representation learning. Details of the Representation Network are presented at the left of the global architecture diagram. The numbers inside the parentheses indicate either the number of filters, or the number of units with respect to the type of layer.} \label{fig:base_arch} \end{figure} \subsection{Multi-Source Architectures with Various Degrees of Shared Information} \label{dl_specifications:fusion} When learning a music representation based on various available learning sources, different strategies can be taken regarding the choice of architecture.
We will investigate the following setups: \begin{itemize} \item{ As a base case, a \emph{\textbf{Single-Source Representation (SS-R)}} can be learned for a single source only. As mentioned earlier, this would be the typical strategy leading to pre-trained networks, that later would be used in transfer learning. In our case, our base architecture from Section~\ref{dl_specifications:base_architecture} and Fig.\ \ref{fig:base_arch} will be used, for which the layers in the Representation Network also are illustrated in Fig.\ \ref{fig:base}. Out of the \texttt{fc-feature} layer, a $d$-dimensional representation is obtained. } \item{ If multiple perspectives on the same content, as reflected by the multiple learning labels, should also be reflected in the ultimate learned representation, one can learn \emph{SS-R} representations for each learning source, and simply concatenate them afterwards. With $d$ dimensions per source and $m$ sources, this leads to a $d \times m$ \emph{\textbf{Multiple Single-Source Concatenated Representation (MSS-CR)}}. In this case, independent networks are trained for each of the sources, and no shared knowledge will be transferred between sources. A layer setup of the corresponding Representation Network is illustrated in Fig.\ \ref{fig:mst_cr}. } \item{ When applying MTL learning strategies, the deep architecture should involve shared knowledge layers, before branching out to various individual learning sources, whose learned representations will be concatenated in the final $d \times m$-dimensional representation. We call these \emph{\textbf{Multi-Source Concatenated Representations (MS-CR)}}. 
As the branching point can be chosen at different stages, we will investigate the effect of various prototypical branching point choices: at the second convolution layer (\emph{MS-CR@2}, Fig.~\ref{fig:split_2}), the fourth convolution layer (\emph{MS-CR@4}, Fig.~\ref{fig:split_4}), and the sixth convolution layer (\emph{MS-CR@6}, Fig.~\ref{fig:split_6}). The later the branching point occurs, the more shared knowledge the network will employ. } \item{ In the most extreme case, branching would only occur at the very last fully connected layer, and a \textbf{Multi-Source Shared Representation (MS-SR)} (or, more specifically, \emph{MS-SR@FC}) is learned, as illustrated in Fig.~\ref{fig:split_fc}. As the representation is obtained from the \texttt{fc-feature} layer, no concatenation takes place here, and a $d$-dimensional representation is obtained. } \end{itemize} A summary of these different representation learning architectures is given in Table~\ref{tab:fusion}. Beyond the strategies we choose, further approaches can be thought of to connect representations learned for different learning sources in neural network architectures. For example, for different tasks, representations can be extracted from different intermediate hidden layers, benefiting from the hierarchical feature encoding capability of the deep network~\cite{Choi2017TransferTasks}. 
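A structural sketch of the MS-CR family in PyTorch (the framework used in this work) may clarify the branching idea. The number of convolution blocks and the channel widths below are illustrative stand-ins, not the exact architecture of Table~\ref{tab:netarch}; the point is the configurable split between shared and source-specific layers.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One convolution block of the representation network (illustrative sizes)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),  # zero-padding keeps spatial shape
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
    def forward(self, x):
        return self.net(x)

class MSCR(nn.Module):
    """Multi-Source Concatenated Representation with a configurable branching point.

    `branch_at` counts how many conv blocks are shared before the network
    branches into source-specific sub-networks (cf. MS-CR@2/@4/@6).
    """
    def __init__(self, n_sources, n_blocks=4, branch_at=2, d=256):
        super().__init__()
        chans = [2] + [16 * 2 ** i for i in range(n_blocks)]  # stereo input; widths illustrative
        self.shared = nn.Sequential(*[ConvBlock(chans[i], chans[i + 1])
                                      for i in range(branch_at)])
        self.branches = nn.ModuleList([
            nn.Sequential(*[ConvBlock(chans[i], chans[i + 1])
                            for i in range(branch_at, n_blocks)],
                          nn.AdaptiveAvgPool2d(1),   # global spatial pooling
                          nn.Flatten(),
                          nn.Linear(chans[-1], d),   # fc-feature layer
                          nn.ReLU())
            for _ in range(n_sources)
        ])
    def forward(self, x):
        h = self.shared(x)
        # Concatenate the per-source d-dimensional representations -> d * m dims.
        return torch.cat([b(h) for b in self.branches], dim=1)

model = MSCR(n_sources=2, branch_at=2)
z = model(torch.randn(1, 2, 216, 128))   # one 2.5 s stereo input tensor
```

Setting `branch_at=0` with independent trunks would correspond to MSS-CR, while branching only at the final fully connected layer corresponds to MS-SR@FC.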
However, considering that learned representations are usually taken from a specific fixed layer of the shared architecture, we focus on the strategies as we outlined above.\par \begin{table} \centering \caption{Properties of the various categories of representation learning architectures.} \label{tab:fusion} \begin{tabular}{ccccc} \hline\noalign{\smallskip} & Multi Source & Shared Network & Concatenation & Dimensionality\\ \noalign{\smallskip}\hline\noalign{\smallskip} \textbf{SS-R} & No & No & No & $d$ \\ \textbf{MSS-CR} & Yes & No & Yes & $d\times{m}$ \\ \textbf{MS-CR} & Yes & Partial & Yes & $d\times{m}$ \\ \textbf{MS-SR} & Yes & Yes & No & $d$ \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{figure}[htp] \centering \subfloat[SS-R: Base setup.]{% \includegraphics[height=0.33\textheight]{graphics/split_base_black_no_margin.pdf}% \label{fig:base}% }% \hfill% \subfloat[MSS-CR: Concatenation of multiple independent SS-R networks.]{% \includegraphics[height=0.33\textheight]{graphics/split_base_black_no_margin.pdf}% \includegraphics[height=0.33\textheight]{graphics/split_base_black_no_margin.pdf}% \label{fig:mst_cr}% }% \hfill% \subfloat[MS-CR@2: network branches to source-specific layers from 2nd convolution layer.]{% \includegraphics[height=0.33\textheight]{graphics/split_2_black.pdf}% \label{fig:split_2}% }% \hfill% \subfloat[MS-CR@4: network branches to source-specific layers from 4th convolution layer.]{% \includegraphics[height=0.33\textheight]{graphics/split_4_black.pdf}% \label{fig:split_4}% }% \hfill% \subfloat[MS-CR@6: network branches to source-specific layers from 6th convolution layer.]{% \includegraphics[height=0.33\textheight]{graphics/split_6_black.pdf}% \label{fig:split_6}% }% \hfill% \subfloat[MS-SR@FC: heavily shared network, source-specific branching only at final FC layer.]{% \includegraphics[height=0.33\textheight]{graphics/split_fc_black.pdf}% \label{fig:split_fc}% }% \hfill \caption{The various model architectures considered in 
the current work. Beyond single-source architectures, multi-source architectures with various degrees of shared information are studied. For simplification, multi-source cases are illustrated here for two sources. The \texttt{fc-feature} layer from which representations will be extracted is the FC(256) layer in the illustrations (see Table~\ref{tab:netarch}).} \label{fig:split} \end{figure} \subsection{MTL Training Procedure} \label{dl_specifications:train} \begin{algorithm}[h] \nl Initialize $\Theta$: \{$\theta^{t}$, $\theta^{s}$\} randomly\; \nl \For{epoch in 1...N}{ \nl \For{iteration in 1...L}{ \nl Pick a learning source $t$ randomly\; \nl Pick batch of samples from learning source $t$\; ($X_l$, $X_r$) for \textit{self}\; $X$ otherwise\; \nl Derive learning label $z_{t}$\; \nl Sub-sample chunk $X^*$ from track $X$\; \nl Forward-pass:\; $\mathcal{L}(y_{self}, \Theta, X_l^*, X_r^*)=$Eq. \ref{eq:self} for \textit{self}\; $\mathcal{L}(z_{t}, \Theta, X^*)=$Eq. \ref{eq:2} otherwise\; \nl Backward-pass: $\nabla(\Theta)$\; \nl Update model: $\Theta \gets \Theta - \epsilon \nabla(\Theta)$\; } } \caption{{Training a Multi-Source CNN} \label{Algorithm}} \label{alg:train} \end{algorithm} Similar to~\cite{Weston2011Multi-TaskingRetrieval,Liu2015Multi-taskSelection}, we choose to train the MTL models with a stochastic update scheme as described in Algorithm~\ref{alg:train}. At every iteration, a learning source is selected randomly. After the learning source is chosen, a batch of observation-label pairs $(X, z_{t})$ is drawn. For the audio previews belonging to the songs within this batch, an input representation $X^*$ is cropped randomly from its super-sample $X$. The updates of the parameters $\Theta$ are conducted through back-propagation using the Adam algorithm~\cite{Kingma2014Adam:Optimization}. 
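The control flow of this stochastic update scheme can be sketched in plain Python. All helpers below are toy stand-ins for the real data pipeline and network (scalar "parameters", a quadratic placeholder loss); only the source-sampling and update structure mirrors Algorithm~\ref{alg:train}.

```python
import random

# --- stand-ins for the real data pipeline and network (illustrative only) ---
def draw_batch(source):
    return [("X*", "z_t")]                  # batch of (chunk, learning label) pairs

def loss(theta, source, batch):
    return theta["shared"] ** 2 + theta[source] ** 2

def backward(loss_value):
    return loss_value                       # placeholder "gradient"

def train_multi_source(sources, n_epochs, iters_per_epoch, lr=0.00025):
    """Stochastic MTL update scheme: at every iteration one learning source is
    drawn at random, and both the shared and the source-specific parameters
    are updated on a batch from that source."""
    theta = {"shared": 1.0, **{t: 1.0 for t in sources}}   # toy scalar parameters
    for _ in range(n_epochs):
        for _ in range(iters_per_epoch):
            t = random.choice(sources)       # pick a learning source at random
            batch = draw_batch(t)            # batch of samples from source t
            g = backward(loss(theta, t, batch))
            theta["shared"] -= lr * g        # gradient step on shared layers...
            theta[t] -= lr * g               # ...and on the branch for source t
    return theta

params = train_multi_source(["tag", "bpm", "artist"], n_epochs=2, iters_per_epoch=10)
```

In the real implementation, the gradient step is of course taken by Adam over the network parameters rather than by this scalar placeholder.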
For each neural network we train, we set $L=lm$, where $l$ is the number of iterations needed to visit all the training samples with fixed batch size $b=128$, and $m$ is the number of learning sources used in the training. Across the training, we used a fixed learning rate $\epsilon=0.00025$. After a fixed number of epochs $N$ is reached, we stop the training.\par \subsection{Implementation Details} \label{dl_specifications:imple} We used \textit{PyTorch}~\cite{paszke2017automatic} to implement the CNN models and parallel data serving. For evaluation of models and cross-validation, we made extensive use of functionality in \textit{Scikit-Learn}~\cite{Pedregosa2012Scikit-learn:Python}. Furthermore, \textit{Librosa}~\cite{Mcfee2015Librosa:Python} was used to process the audio files and extract features from them, including mel spectrograms. The training was conducted with 8 Graphical Processing Unit (GPU) computation nodes, composed of 2 NVIDIA GRID K2 GPUs and 6 NVIDIA GTX 1080Ti GPUs.\par \begin{figure}[htp] \centering \includegraphics[width=1\textwidth]{graphics/framework.pdf} \caption{Overall system framework. The first row of the figure illustrates the learning scheme, where representation learning happens by minimizing the KL divergence between the network inference $f_t(X)$ and the preprocessed learning label $z_t$. The preprocessing is conducted by the blue blocks, which transform the original noisy labels $y_t$ to $z_t$, reducing noise and summarizing the high-dimensional label space into a smaller latent space. The second row describes the entire evaluation scenario. The representation is first extracted from the representation network, which is transferred from the upper row. The sequence of representation vectors is aggregated as the concatenation of their means and standard deviations.
The purple block indicates a machine learning model employed to evaluate the representation's effectiveness.} \label{fig:framework} \end{figure} \section{Evaluation} \label{eval} So far, we discussed the details regarding the learning phase of this work, which corresponds to the upper row of Fig.~\ref{fig:framework}. This included various choices of sources for the representation learning, and various choices of architecture and fusion strategies. In this section, we present the evaluation methodology we followed, as illustrated in the second row of Fig.~\ref{fig:framework}. First, we will discuss the chosen target tasks and datasets in Section~\ref{eval:tasks}, followed in Section~\ref{eval:baseline} by the baselines against which our representations will be compared. Section~\ref{eval:expdesign} explains our experimental design, and finally we discuss the implementation of our evaluation experiments in Section~\ref{eval:imple}. \subsection{Target Datasets} \label{eval:tasks} In order to gain insight into the effectiveness of learned representations with respect to multiple potential future tasks, we consider a range of \emph{target datasets}. In this work, our target datasets are chosen to reflect various semantic properties of music, purposefully chosen semantic biases, or popularity in the MIR literature. Furthermore, the representation network should not be configured or learned to explicitly solve the chosen target datasets. While for the learning sources, we could provide categorizations on where and how the learning labels were derived, and also consider algorithmic outcomes as labels, existing popular research datasets mostly fall in the \textit{Professional} or \textit{Crowd} categories. 
In our work, we choose 7 evaluation datasets commonly used in MIR research, which reflect three conventional types of MIR tasks, namely classification, regression and recommendation: \begin{table}[h] \centering \caption{Properties of target datasets used in our experiments. Because of time constraints, we sampled the Lastfm dataset as described in Section~\ref{eval:tasks}; the original size appears between parentheses. In case particular data splits are defined by an original author or follow up study, we apply the same split, including the reference in which the split is introduced. Otherwise, we applied either a random split stratified by the label (Ballroom), or simple filtering based on reported faulty entries (IRMAS).} \label{tab:extertask} \begin{tabular}{lrllll} \hline\noalign{\smallskip} Task & \multicolumn{2}{c}{Data} & \#Tracks & \#Class & Split Method \\ \noalign{\smallskip}\hline\noalign{\smallskip} Classification & FMA\cite{Defferrard2016FMA:Analysis} & Genre & 25,000 & 16 & Artist Filtered~\cite{Defferrard2016FMA:Analysis} \\ Classification & GTZAN\cite{Tzanetakis2002MusicalIEEE} & Genre & 1,000 & 10 & Artist Filtered~\cite{DBLP:journals/tmm/KereliukSL15} \\ Classification & Ext. Ballroom\cite{Gouyon2006AnAlgorithms,DBLP:conf/mlsp/MarchandP16} & Genre & 3,390 & 13 & N/A \\ Classification & IRMAS\cite{Bosch2012ASignals} & Instrument & 6,705 & 11 & Song Filtered \\ Regression & Music Emotion\cite{Soleymani20131000Music} & Arousal & 744 & & Genre Stratified\cite{Soleymani20131000Music}\\ Regression & Music Emotion\cite{Soleymani20131000Music} & Valence & 744 & & Genre Stratified\cite{Soleymani20131000Music} \\ Recommendation & Lastfm\mbox{*}\cite{Celma:Springer2010} & Listening Count & 27,093 (961,416) & & N/A \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{itemize} \item \textbf{\textit{Classification.}} Different types of classification tasks exist in MIR. 
In our experiments, we consider several datasets used for genre classification and instrument classification. For genre classification, we chose the GTZAN~\cite{Tzanetakis2002MusicalIEEE} and FMA~\cite{Defferrard2016FMA:Analysis} datasets as main exemplars. Even though GTZAN is known for its caveats~\cite{Sturm2014TheRetrieval}, we deliberately used it, because its popularity can be beneficial when comparing with previous and future work. We note though that there may be some overlap between the tracks of GTZAN and the subset of the MSD we use in our experiments; the extent of this overlap is unknown, due to the lack of a confirmed and exhaustive track listing of the GTZAN dataset. We choose to use a fault-filtered data split for the training and evaluation, as suggested in~\cite{DBLP:journals/tmm/KereliukSL15}. The split originally includes a training, validation and evaluation split; in our case, we also included the validation split as training data. Among the various packages provided by the FMA, we chose the top-genre classification task of FMA-Medium~\cite{Defferrard2016FMA:Analysis}. This is a classification dataset with an unbalanced genre distribution. We used the data split provided by the dataset for our experiment, where the training and validation sets are combined as the training set. Considering another type of genre classification, we selected the Extended Ballroom dataset~\cite{Gouyon2006AnAlgorithms, DBLP:conf/mlsp/MarchandP16}. Because the classes in this dataset are highly separable with regard to their BPM~\cite{Sturm2016TheSystems}, we specifically included this `purposefully biased' dataset as an example of how a learned representation may effectively capture temporal dynamics properties present in a target dataset, as long as the learning sources also reflect these properties. Since no pre-defined split is provided or suggested by other literature, we used stratified random sampling based on the genre label.
The last dataset we considered for classification is the training set of the IRMAS dataset~\cite{Bosch2012ASignals}, which consists of short music clips annotated with the predominant instruments present in the clip. Compared to the genre classification task, instrument classification is generally considered less subjective, requiring features that separate timbral characteristics of the music signal as opposed to high-level semantics like genre. We split the dataset to make sure that observations from the same music track are not split across the training and test sets. As performance metric for all these classification tasks, we used classification accuracy. \item \textbf{\textit{Regression.}} As exemplars of regression tasks, we evaluate our proposed deep representations on the dataset used in the MediaEval Music Emotion prediction task~\cite{Soleymani20131000Music}. It contains frame-level and song-level labels of a two-dimensional representation of emotion, with valence and arousal as dimensions~\cite{Posner2005ThePsychopathology}. Valence is related to the positivity or negativity of the emotion, and arousal is related to its intensity~\cite{Soleymani20131000Music}. The song-level annotation of the V-A coordinates was used as the learning label. In similar fashion to the approach taken in~\cite{Choi2017TransferTasks}, we trained separate models for the two emotional dimensions. As for the dataset split, we used the split provided by the dataset, which is a random split stratified by the genre distribution. As evaluation metric, we measured the coefficient of determination $R^{2}$ of each model. \item \textbf{\textit{Recommendation.}} Finally, we employed the `Last.fm - 1K users' dataset~\cite{Celma:Springer2010} to evaluate our representations in the context of a content-aware music recommendation task (denoted as \emph{Lastfm} in the remainder of the paper).
This dataset contains 19 million records of listening events across $961,416$ unique tracks collected from $992$ unique users. In our experiments, we mimicked a cold-start recommendation problem, in which items not seen before should be recommended to the right users. For efficiency, we filtered out users who listened to fewer than $5$ tracks and tracks known to fewer than $5$ users. As for the audio content of each track, we obtained the mapping between the MusicBrainz Identifier (MBID) and the Spotify identifier (SpotifyID) using the \texttt{MusicBrainz API}\footnote{\url{https://musicbrainz.org/}}. After cross-matching, we collected 30-second previews of all tracks using the \texttt{Spotify API}\footnote{\url{https://developer.spotify.com/documentation/web-api/}}. We found that there is a substantial amount of missing mapping information between the SpotifyID and MBID in the \texttt{MusicBrainz} database, with only approximately 30\% of mappings available. Also, because of the substantial number of inactive users and unpopular tracks in the dataset, we ultimately acquired a dataset of $985$ unique users and $27,093$ unique tracks with audio content. Similar to \cite{Liang2014Content-AwareNetworks}, we considered the \textit{outer matrix} performance for un-introduced songs; in other words, the model's recommendation accuracy on items newly introduced to the system~\cite{Liang2014Content-AwareNetworks}. This was done by holding out certain tracks when learning user models, and then predicting user preference scores based on all tracks, including those that were held out, resulting in a ranked track list per user. As evaluation metric, we consider Normalized Discounted Cumulative Gain ($nDCG@500$), only treating held-out tracks that were indeed liked by a user as relevant items. Further details on how hold-out tracks were chosen are given in Section~\ref{eval:imple}.
\end{itemize} A summary of all evaluation datasets, their origins and properties, can be found in Table~\ref{tab:extertask}. \subsection{Baselines} \label{eval:baseline} We examined three baselines to compare with our proposed representations: \begin{itemize} \item\textbf{\textit{Mel-Frequency Cepstral Coefficients (MFCC).}} These are some of the most popular audio representations in MIR research. In this work, we extract and aggregate MFCC following the strategy in~\cite{Choi2017TransferTasks}. In particular, we extracted 20 coefficients and also used their first- and second-order derivatives. After obtaining the sequence of MFCCs and its derivatives, we performed aggregation by taking the average and standard deviation over the time dimension, resulting in a 120-dimensional vector representation. \item\textbf{\textit{Random Network Feature (Rand).}} We extracted the representation at the \texttt{fc-feature} layer without any representation network training. With random initialization, this representation therefore gives a random baseline for a given CNN architecture. We refer to this baseline as \textit{Rand}. \item\textbf{\textit{Latent Representation from Music Auto-Tagger (Choi).}} The work in~\cite{Choi2017TransferTasks} focused on a music auto-tagging task, and can be considered as yielding a state-of-the-art deep music representation for MIR. While the model's focus on learning a representation for music auto-tagging can be considered as our \textit{SS-R} case, there are a number of issues that complicate direct comparisons between this work and ours. First, the network in~\cite{Choi2017TransferTasks} is trained with about 4 times more data samples than in our experiments. Second, it employed a much smaller network than our architecture. Further, intermediate representations were extracted, which is out of the scope of our work, as we only consider representations at the \texttt{fc-feature} layer. 
Nevertheless, despite these caveats, the work is still very much in line with ours, making it a clear candidate for comparison. Throughout the evaluation, we could not fully reproduce the performance reported in the original paper~\cite{Choi2017TransferTasks}. When reporting our results, we will therefore report the performance we obtained with the published model, referring to this as \textit{Choi}. \end{itemize} \subsection{Experimental Design} \label{eval:expdesign} \begin{figure} \centering \includegraphics[scale=.4]{graphics/_alias.pdf} \caption{Aliasing among main effects in the final experimental design.} \label{fig:exp_design} \end{figure} In order to investigate our research questions, we carried out an experiment to study the effect of the number and type of learning sources on the effectiveness of deep representations, as well as the effect of the various architectural learning strategies described in Section~\ref{dl_specifications:fusion}. For the experimental design we consider the following factors: \begin{itemize} \item Representation strategy, with 6 levels: \emph{SS-R}, \emph{MS-SR@FC}, \emph{MS-CR@6}, \emph{MS-CR@4}, \emph{MS-CR@2}, and \emph{MSS-CR}. \item Eight 2-level factors indicating the presence or absence of each of the 8 learning sources: \emph{self}, \emph{year}, \emph{bpm}, \emph{taste}, \emph{tag}, \emph{lyrics}, \emph{cdr\_tag} and \emph{artist}. \item Number of learning sources present in the learning process (1 to 8). Note that this is actually calculated as the sum of the eight factors above. \item Target dataset, with 7 levels: Ballroom, FMA, GTZAN, IRMAS, Lastfm, Arousal and Valence. \end{itemize} Given a learned representation, fitting dataset-specific models is much more efficient than learning the representation, so we decided to evaluate each representation on all 7 target datasets.
The experimental design is thus restricted to combinations of representation and learning sources, and for each such combination we will produce 7 observations. However, given that \emph{SS-R} relies on a single learning source by definition, that there is only one possible combination for $n=8$ sources, and the high unbalance in the number of possible source combinations\footnote{For instance, from the 255 possible combinations of up to 8 sources, there are 70 combinations of $n=4$ sources, but 28 with $n=2$, or only 8 for $n=7$. Simple random sampling from the 255 possible combinations would lead to a very unbalanced design, that is, a highly non-uniform distribution of observation counts across the levels of the factor ($n$ in this case). A balanced design is desired to prevent aliasing and maximize statistical power. See section 15.2 in~\cite{Montgomery2012design} for details on unbalanced designs.}, we proceeded in three phases: \begin{enumerate} \item We first trained the \emph{SS-R} representations for each of the 8 sources, repeating each 6 times. This resulted in 48 experimental runs. \item We then proceeded to train all five multi-source strategies with all sources, that is, $n=8$. We repeated this 5 times, leading to 25 additional experimental runs. \item Finally, we ran all five multi-source strategies with $n=2,\dots,7$. The full design matrix would contain 5 representations and 8 sources, for a total of 1,230 possible runs. Such an experiment was unfortunately infeasible to run exhaustively given available resources, so we decided to follow a fractional design.
However, rather than using a pre-specified optimal design with a fixed amount of runs~\cite{Goos2011optimal}, we decided to run sequentially for as long as time would permit us, generating at each step a new experimental run on demand in a way that would maximize desired properties of the design up to that point, such as balance and orthogonality\footnote{An experimental design is orthogonal if the effects of any factor balance out across the effects of the other factors. In a non-orthogonal design effects may be aliased, meaning that the estimate of one effect is partially biased with the effect of another, the extent of which ranges from 0 (no aliasing) to 1 (full aliasing). Aliasing is sometimes referred to as confounding. See sections 8.5 and 9.5 in~\cite{Montgomery2012design} for details on aliasing.}. We did this with the greedy Algorithm~\ref{alg:design}. From the set of still remaining runs $\mathcal{A}$, a subset $\mathcal{O}$ is selected such that the expected unbalance in the augmented design $\mathcal{B}\cup\{o\}$ is minimal. In this case, the unbalance of a design is defined as the maximum unbalance found between the levels of any factor, except for those already exhausted\footnote{For instance, let a design have 20 runs for \emph{SS-R}, 16 for \emph{MS-SR@FC}, and 18 for all other representations. The unbalance in the representation factor is thus $20-16=4$. The total unbalance of the design is defined as the maximum unbalance found across all factors.}. From $\mathcal{O}$, a second subset $\mathcal{P}$ is selected such that the expected aliasing in the augmented design is minimal, here defined as the maximum absolute aliasing between main effects\footnote{See section 2.3.7 in~\cite{Goos2011optimal} for details on how to compute an alias matrix.}. Finally, a run $p$ is selected at random from $\mathcal{P}$, the corresponding representation is learned, and the algorithm iterates again after updating $\mathcal{A}$ and $\mathcal{B}$. 
Following this on-demand methodology, we managed to run another 352 experimental runs out of the 1,230 possible. \end{enumerate} \begin{algorithm}[h] \nl Initialize $\mathcal{A}$ with all possible 1,230 runs to execute\; \nl Initialize $\mathcal{B}\gets\emptyset$ for the set of already executed runs\; \nl \While{time allows}{ \nl Select $\mathcal{O}\subseteq \mathcal{A}$ s.t. $\forall o\in \mathcal{O}$, the unbalance in $\mathcal{B}\cup \{o\}$ is minimal\; \nl Select $\mathcal{P}\subseteq \mathcal{O}$ s.t. $\forall p\in \mathcal{P}$, the aliasing in $\mathcal{B}\cup \{p\}$ is minimal\; \nl Select $p\in \mathcal{P}$ at random\; \nl Update $\mathcal{A}\gets \mathcal{A}-\{p\}$\; \nl Update $\mathcal{B}\gets \mathcal{B}\cup\{p\}$\; \nl Learn the representation coded by $p$\; } \caption{Sequential generation of experimental runs.} \label{alg:design} \end{algorithm} After going through the three phases above, the final experiment contained $48+25+352=425$ experimental runs, each producing a different deep music representation. We further evaluated each representation on all 7 target datasets, leading to a grand total of $425\times 7=2{,}975$ datapoints. Fig.~\ref{fig:exp_design} plots the alias matrix of the final experimental design, showing that the aliasing among main factors is indeed minimal. The final experimental design matrix can be downloaded along with the rest of the supplemental material. Each considered representation network was trained using the CNN representation network model from Section~\ref{dl_specifications}, based on the specific combination of learning sources and deep architecture as indicated by the experimental run. In order to reduce variance, we fixed the number of training epochs to $N = 200$ across all runs, and applied the same base architecture, except for the branching point. This entire training procedure took approximately 5 weeks with the computational hardware resources introduced in Section~\ref{dl_specifications:imple}.
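The balance-driven selection step of this sequential procedure can be sketched as follows. The unbalance criterion follows the footnoted definition (maximum difference in run counts between the levels of any factor); the aliasing tie-break is omitted for brevity, and the toy run pool below is a simplified stand-in for the real design space.

```python
import random
from collections import Counter

def unbalance(design, factors):
    """Unbalance of a design: the maximum difference in run counts between the
    levels of any factor (e.g. 20 SS-R runs vs. 16 MS-SR@FC runs -> 4)."""
    worst = 0
    for name, levels in factors.items():
        counts = Counter(run[name] for run in design)
        per_level = [counts.get(level, 0) for level in levels]
        worst = max(worst, max(per_level) - min(per_level))
    return worst

def greedy_step(executed, remaining, factors):
    """One iteration of the sequential design: keep the candidate runs whose
    addition minimizes the unbalance, then draw one of them at random."""
    scores = [(unbalance(executed + [r], factors), r) for r in remaining]
    best = min(score for score, _ in scores)
    return random.choice([r for score, r in scores if score == best])

# Toy run pool: 5 multi-source strategies x number of sources n = 2..7.
reps = ["MS-SR@FC", "MS-CR@6", "MS-CR@4", "MS-CR@2", "MSS-CR"]
factors = {"rep": reps, "n": list(range(2, 8))}
remaining = [{"rep": r, "n": n} for r in reps for n in range(2, 8)]
executed = []
for _ in range(5):
    run = greedy_step(executed, remaining, factors)
    remaining.remove(run)
    executed.append(run)
```

After five greedy steps, the executed runs cover all five strategies, illustrating how the criterion spreads observations evenly across factor levels.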
\subsection{Implementation Details} \label{eval:imple} In order to assess how our learned deep music representations perform on the various target datasets, transfer learning will now be applied, to consider our representations in the context of these new target datasets. As a consequence, new machine learning pipelines are set up, focused on each of the target datasets. In all cases, we applied the pre-defined split where feasible. Otherwise, we randomly split the dataset into an 80\% training and 20\% test set. For every dataset, we repeated the training and evaluation 5 times, using different train/test splits. In most of our evaluation cases, validation will take place on the test set; in the case of the recommendation problem, the test set represents a set of tracks to be held out during user model training, and re-inserted for validation. In all cases, we will extract representations from evaluation dataset audio as detailed in Section~\ref{eval:imple:feat_preproc}, and then learn relatively simple models based on them, as detailed in Section~\ref{eval:imple:model}. Employing the metrics mentioned in the previous section, we will then take average performance scores over the 5 different train-test splits for final performance reporting. \subsubsection{Feature Extraction and Preprocessing} \label{eval:imple:feat_preproc} Taking raw audio from the evaluation datasets as input, we take non-overlapping slices out of this audio with a fixed length of 2.5 seconds. Based on this, we apply the same preprocessing transformations as discussed in Section~\ref{dl_specifications:audiopreproc}. Then, we extract a deep representation from this preprocessed audio, employing the architecture specified by the given experimental run. As in Section~\ref{dl_specifications:fusion}, representations are extracted from the \texttt{fc-feature} layer of each trained CNN model.
Depending on the choice of architecture, the final representation may consist of concatenations of representations obtained by separate representation networks. Input audio may originally be (much) longer than 2.5 seconds; therefore, we aggregate information in feature vectors over multiple time slices by taking their \textit{mean} and \textit{standard deviation} values. As a result, we get a representation with averages per learned feature dimension, and another representation with standard deviations per feature dimension. These will be concatenated, as illustrated in Fig.~\ref{fig:framework}.\par \subsubsection{Target Dataset-Specific Models} \label{eval:imple:model} As our goal is not to over-optimize dataset-specific performance, but rather to perform a comparative analysis between different representations (resulting from different learning strategies), we keep the models simple, and use fixed hyper-parameter values for each model across the entire experiment. To evaluate the trained representations, we used different models according to the target dataset. For the classification and regression tasks, we used a Multi-Layer Perceptron (MLP) model~\cite{DBLP:journals/ai/Hinton89}. More specifically, the MLP model has two hidden layers, each with a dimensionality of $256$. As for the non-linearity, we choose ReLU~\cite{Nair2010RectifiedMachines} for all nodes, and the model is trained with the Adam optimization technique~\cite{Kingma2014Adam:Optimization} for 200 iterations. In the evaluation, we used \textit{Scikit-Learn}'s implementation for ease of distributed computing on multiple CPU computation nodes.
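A minimal sketch of this downstream pipeline, using Scikit-Learn's MLP with the stated hyper-parameters (two hidden layers of 256 ReLU units, Adam, 200 iterations). The slice-level features here are random stand-ins for the frozen deep representations; only the aggregation and model setup mirror the text.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def aggregate(slice_features):
    """Track-level vector: concatenation of mean and std over the time slices."""
    return np.concatenate([slice_features.mean(axis=0),
                           slice_features.std(axis=0)])

# Illustrative stand-in: 100 tracks, each with a variable number of 2.5 s
# slices whose d=256 dimensional representations come from the frozen network.
X = np.stack([aggregate(rng.normal(size=(rng.integers(5, 12), 256)))
              for _ in range(100)])        # -> (100, 512)
y = rng.integers(0, 10, size=100)          # e.g. 10 genre labels

clf = MLPClassifier(hidden_layer_sizes=(256, 256),  # two hidden layers of 256 units
                    activation="relu",
                    solver="adam",
                    max_iter=200,
                    random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)                      # classification accuracy metric
```

For the regression targets (Arousal, Valence), `MLPRegressor` with the same layer setup would be used analogously, scored with $R^2$.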
For the recommendation task, we chose a model similar to those suggested in~\cite{Liang2014Content-AwareNetworks,Hu2008CollaborativeYifan}, in which the learning objective function $\mathcal{L}$ is defined as \begin{equation} \label{eq:recsys} \hat{U}, \hat{V}, \hat{W} = \argmin \; ||P-UV^{T}||_{C} + \frac{\lambda^{V}}{2}||V-XW|| + \frac{\lambda^{U}}{2}||U|| + \frac{\lambda^{W}}{2}||W|| \end{equation} \noindent where $P\in\mathbb{R}^{u\times{i}}$ is a binary matrix indicating whether there is an interaction between users $u$ and items $i$, and $U\in\mathbb{R}^{u\times{r}}$ and $V\in\mathbb{R}^{i\times{r}}$ are $r$-dimensional user factors and item factors for the low-rank approximation of $P$. $P$ is derived from the original interaction matrix $R\in\mathbb{R}^{u\times{i}}$, which contains the number of interactions between users $u$ and items $i$, as follows:\par \begin{equation} P_{u, i} = \begin{cases} 1, & \text{if } R_{u, i} > 0\\ 0 & \text{otherwise} \end{cases} \end{equation} $W\in\mathbb{R}^{d\times{r}}$ is a free parameter for the projection from the $d$-dimensional feature space to the factor space. $X\in\mathbb{R}^{i\times{d}}$ is the feature matrix where each row corresponds to a track. Finally, $||\cdot||_{C}$ is the Frobenius norm weighted by the confidence matrix $C\in\mathbb{R}^{u\times{i}}$, which controls the credibility of the model on the given interaction data, given as follows:\par \begin{equation} \label{eq:recsys:confidence} C = 1 + \alpha R \end{equation} where $\alpha$ controls the credibility. As for the hyper-parameters, we set $\alpha=0.1$, $\lambda^{V}=0.00001$, $\lambda^{U}=0.00001$, and $\lambda^{W}=0.1$. For the number of factors we chose $r=50$, to focus only on the relative impact of the representation over the different conditions. We implemented an update rule with the Alternating Least Squares (ALS) algorithm similar to~\cite{Liang2014Content-AwareNetworks}, and updated the parameters over 15 iterations.
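The construction of the preference and confidence matrices above is direct to sketch; $R$ below is a toy play-count matrix (the real one holds the Lastfm listening counts), with $\alpha=0.1$ as in our hyper-parameter setting.

```python
import numpy as np

# Toy play-count matrix R (users x tracks); stand-in for the Lastfm data.
R = np.array([[3., 0., 1.],
              [0., 5., 0.]])

P = (R > 0).astype(float)   # binary preference matrix: any interaction -> 1
alpha = 0.1
C = 1.0 + alpha * R         # confidence weights for the weighted Frobenius norm
```

Unobserved entries thus still enter the objective, but with the minimal confidence weight of 1, while repeated listens raise the confidence linearly.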
\section{Results and Discussion} \label{res:intro} In this section, we present results and discussion related to the proposed deep music representations. In Section \ref{res:single_multi_rep}, we will first compare the performance across the \emph{SS-R}s, to show how different individual learning sources work for each target dataset. Then, we will present general experimental results related to the performance of the multi-source representations. In Section \ref{res:task_num}, we discuss the effect of the number of learning sources exploited in the representation learning, in terms of their general performance, reliability, and model compactness. In Section \ref{res:single_vs_multi}, we discuss the effectiveness of different representations in MIR. Finally, we present some initial evidence for multifaceted semantic explainability of the proposed MTDTL in Section~\ref{res:mulexpfac}.\footnote{For reproducibility, we release all relevant materials including code, models and extracted features at \url{https://github.com/eldrin/MTLMusicRepresentation-PyTorch}.} \subsection{Single-Source and Multi-Source Representation} \label{res:single_multi_rep} \begin{figure} \centering \includegraphics[scale=.58]{graphics/_y_by_source.pdf} \caption{Performance of single source representations. Each point indicates the performance of a representation learned from the single source. Solid points indicate the average performance per source. The baselines are illustrated as horizontal lines.} \label{fig:single_task_rep} \end{figure} Fig.~\ref{fig:single_task_rep} presents the performance of \emph{SS-R} representations on each of the 7 target datasets. We can see that all sources tend to outperform the \textit{Rand} baseline on all datasets, except for a handful of cases involving the sources \emph{self} and \emph{bpm}.
Looking at the top performing sources, we find that \emph{tag}, \emph{cdr\_tag} and \emph{artist} perform better than or on par with the most sophisticated baseline, \textit{Choi}, except for the IRMAS dataset. The other sources are found somewhere between these two baselines, except for the datasets Lastfm and Arousal, where they perform better than \textit{Choi} as well. Finally, the \textit{MFCC} baseline is generally outperformed in all cases, with the notable exception of the IRMAS dataset, where only \textit{Choi} performs better. Zooming in on dataset-specific trends, the \emph{bpm} learning source shows a highly skewed performance across target datasets: it clearly outperforms all other learning sources in the Ballroom dataset, but it achieves the worst or second worst performance in the other datasets. This confirms the finding of~\cite{Sturm2016TheSystems} that the Ballroom dataset is well-separable based on BPM information alone. Indeed, representations trained on the \emph{bpm} learning source seem to contain a latent representation close to the BPM of an input music signal. In contrast, we can see that the \emph{bpm} representation achieves the worst results in the Arousal dataset, where both temporal dynamics and BPM are considered as important factors determining the intensity of emotion. On the IRMAS dataset, we see that all the \emph{SS-R}s perform worse than the \textit{MFCC} and \textit{Choi} baselines. Given that these both take into account low-level features, either by design or by exploiting low-level layers of the neural network, this suggests that predominant instrument sounds are harder to distinguish based solely on semantic features, which is the case for the representations studied here. Also, we find that there is little variability across \emph{SS-R} runs within the training setup we applied. Specifically, in 50\% of the cases, the within-\emph{SS-R} variability is less than 15\% of the within-dataset variability.
In 90\% of the cases, it is less than 30\% of the within-dataset variability. \begin{figure} \centering \includegraphics[scale=.58]{graphics/_y_by_arc.pdf} \caption{Performance by representation strategy. Solid points represent the mean per representation. The baselines are illustrated as horizontal lines.} \label{fig:overallperformance} \end{figure} We now consider how the various representations based on multiple learning sources perform, in comparison to those based on single learning sources. The boxplots in Fig.~\ref{fig:overallperformance} show the distributions of performance scores for each architectural strategy and per target dataset. For comparison, the gray boxes summarize the distributions depicted in Fig.~\ref{fig:single_task_rep}, based on the \emph{SS-R} strategy. In general, we can see that these \emph{SS-R}s obtain the lowest scores, followed by \emph{MS-SR@FC}, except for the IRMAS dataset. Given that these representations have the same dimensionality, these results suggest that adding a single source-specific layer on top of a heavily shared model may help improve the adaptability of the neural network models, especially when there is no prior knowledge regarding the well-matching learning sources for the target datasets. The \emph{MS-CR} and \emph{MSS-CR} representations obtain the best results in general, which is somewhat expected because of their larger dimensionality. \subsection{Effect of Number of Learning Sources and Fusion Strategy} \label{res:task_num} \begin{figure} \centering\includegraphics[scale=.58]{graphics/_y_by_n.pdf} \caption{(Standardized) Performance by number of learning sources. Solid points represent the mean per architecture and number of sources. The black horizontal line marks the mean performance of the \emph{SS-R} representations.
The colored lines show linear fits.} \label{fig:performance_by_n} \end{figure} While the plots in Fig.~\ref{fig:overallperformance} suggest that \emph{MSS-CR} and \emph{MS-CR} are the best strategies, the high observed variability makes such a statement still rather uncertain. In order to gain better insight into the effects of dataset, architecture strategy, and number and type of learning sources, we further analyzed the results using a hierarchical or multilevel linear model on all observed scores~\cite{Gelman2006hierarchical}. The advantage of such a model is essentially that it accounts for the structure in our experiment, where observations nested within datasets are not independent. From Fig.~\ref{fig:overallperformance} we can anticipate a very large dataset effect because of the inherently different levels of difficulty, as well as a high level of heteroskedasticity. We therefore analyzed standardized performance scores rather than raw scores. In particular, the $i$-th performance score $y_i$ is standardized with the within-dataset mean and standard deviation scores, that is, $y^*_i=(y_i - \bar{y}_{d[i]})/s_{d[i]}$, where $d[i]$ denotes the dataset of the $i$-th observation. This way, the dataset effect is effectively $0$ and the variance is homogeneous. In addition, this will allow us to compare the relative differences across strategies and numbers of sources using the same scale in all datasets. We also transformed the variable $n$ that refers to the number of sources to $n^*$, which is set to $n^*=0$ for \emph{SS-R}s and to $n^*=n-2$ for the other strategies. This way, the intercepts of the linear model will represent the average performance of each representation strategy in its simplest case, that is, \emph{SS-R} ($n=1$) or non-\emph{SS-R} with $n=2$.
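The standardization and the recoding of the number of sources can be sketched as follows (variable and function names are ours):

```python
import numpy as np

def standardize_within_dataset(scores, dataset_ids):
    """y*_i = (y_i - mean_{d[i]}) / s_{d[i]}: zero mean and unit variance
    within each target dataset, removing the dataset effect."""
    scores = np.asarray(scores, dtype=float)
    dataset_ids = np.asarray(dataset_ids)
    out = np.empty_like(scores)
    for d in np.unique(dataset_ids):
        m = dataset_ids == d
        out[m] = (scores[m] - scores[m].mean()) / scores[m].std(ddof=1)
    return out

def recode_n_sources(n, is_ssr):
    """n* = 0 for SS-R, n - 2 otherwise, so that model intercepts refer to
    the simplest case of each strategy."""
    return np.where(is_ssr, 0, np.asarray(n) - 2)
```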
We fitted a first analysis model as follows: \begin{align} y^*_i &= \beta_{0r[i]d[i]} + \beta_{1r[i]d[i]}\cdot n^*_i + e_i &e_i&\sim N(0,\sigma^2_e) \label{eq:m11}\\ \beta_{0rd} &= \beta_{0r} + u_{0rd} &u_{0rd}&\sim N(0,\sigma^2_{0r}) \label{eq:m12}\\ \beta_{1rd} &= \beta_{1r} + u_{1rd} &u_{1rd}&\sim N(0,\sigma^2_{1r}) \label{eq:m13}, \end{align} where $\beta_{0r[i]d[i]}$ is the intercept of the corresponding \underline{r}epresentation strategy within the corresponding \underline{d}ataset. Each of these coefficients is defined as the sum of a global fixed effect $\beta_{0r}$ of the representation, and a random effect $u_{0rd}$ which allows for random within-dataset variation\footnote{We note that hierarchical models do not fit each of the individual $u_{0rd}$ coefficients (a total of 42 in this model), but the amount of variability they produce, that is, $\sigma^2_{0r}$ (6 in total).}. This way, we separate the effects of interest (i.e.\ each $\beta_{0r}$) from the dataset-specific variations (i.e.\ each $u_{0rd}$). The effect of the number of sources is similarly defined as the sum of a fixed representation-specific coefficient $\beta_{1r}$ and a random dataset-specific coefficient $u_{1rd}$. Because the slope depends on the representation, we are thus implicitly modeling the interaction between strategy and number of sources, which can be appreciated in Fig.~\ref{fig:performance_by_n}, especially with \emph{MS-SR@FC}. \begin{figure} \centering\includegraphics[scale=.4]{graphics/_eff1.pdf} \caption{Fixed effects and bootstrap 95\% confidence intervals estimated for the first analysis model. The left plot depicts the effects of the representation strategy ($\beta_{0r}$ intercepts) and the right plot shows the effects of the number of sources ($\beta_{1r}$ slopes).} \label{fig:effects1} \end{figure} Fig.~\ref{fig:effects1} shows the estimated effects and bootstrap 95\% confidence intervals. The left plot confirms the observations in Fig.~\ref{fig:overallperformance}.
In particular, they confirm that \emph{SS-R} performs significantly worse than \emph{MS-SR@FC}, which in turn is statistically worse than the others. When carrying out pairwise comparisons, \emph{MSS-CR} outperforms all other strategies except \emph{MS-CR@2} ($p=0.32$), which outperforms all others except \emph{MS-CR@6} ($p=0.09$). The right plot confirms the qualitative observation from Fig.~\ref{fig:performance_by_n} by showing a significantly positive effect of the number of sources except for \emph{MS-SR@FC}, where it is not statistically different from 0. The intervals suggest a very similar effect in the best representations, with average increments of about $0.16$ per additional source (recall that scores are standardized). To gain better insight into differences across representation strategies, we used a second hierarchical model where the representation strategy was modeled as an ordinal variable $r^*$ instead of the nominal variable $r$ used in the first model. In particular, $r^*$ represents the size of the network, so we coded \emph{SS-R} as $0$, \emph{MS-SR@FC} as $0.2$, \emph{MS-CR@6} as $0.4$, \emph{MS-CR@4} as $0.6$, \emph{MS-CR@2} as $0.8$, and \emph{MSS-CR} as $1$ (see Fig.~\ref{fig:split}). In detail, this second model is as follows: \begin{align} y^*_i &= \beta_{0} + \beta_{1d[i]}\cdot r^*_i + \beta_{2d[i]}\cdot n^*_i + \beta_{3d[i]}\cdot r^*_i\cdot n^*_i + e_i &e_i&\sim N(0,\sigma^2_e) \label{eq:m21}\\ \beta_{1d} &= \beta_{10} + u_{1d} &u_{1d}&\sim N(0,\sigma^2_1) \label{eq:m22}\\ \beta_{2d} &= \beta_{20} + u_{2d} &u_{2d}&\sim N(0,\sigma^2_2) \label{eq:m23}\\ \beta_{3d} &= \beta_{30} + u_{3d} &u_{3d}&\sim N(0,\sigma^2_3) \label{eq:m24}. \end{align} In contrast to the first model, there is no representation-specific fixed intercept but an overall intercept $\beta_0$. The effect of the network size is similarly modeled as the sum of an overall fixed slope $\beta_{10}$ and a random dataset-specific effect $u_{1d}$.
Likewise, this model includes the main effect of the number of sources (fixed effect $\beta_{20}$), as well as its interaction with the network size (fixed effect $\beta_{30}$). Fig.~\ref{fig:effects2} shows the fitted coefficients, confirming the statistically significant positive effect of the size of the networks and, to a smaller degree but still significant, of the number of sources. The interaction term is not statistically significant, probably because of the unclear benefit of the number of sources in \emph{MS-SR@FC}. \begin{figure} \centering\includegraphics[scale=.4]{graphics/_eff2.pdf} \caption{Fixed effects and bootstrap 95\% confidence intervals estimated for the second analysis model, depicting the overall intercept ($\beta_0$), the slope of the network size ($\beta_{10}$), the slope of the number of sources ($\beta_{20}$), and their interaction ($\beta_{30}$).} \label{fig:effects2} \end{figure} Overall, these analyses confirm that all multi-source strategies outperform the single-source representations, with a direct relation to the number of parameters in the network. In addition, there is a clearly positive effect of the number of sources, with a minor interaction between both factors. Fig.~\ref{fig:performance_by_n} also suggests that the variability of performance scores decreases with the number of learning sources used. This implies that if more learning sources are available, one can expect less variability across instantiations of the network. Most importantly, the variability obtained for a single learning source ($n=1$) is always larger than the variability with 2 or more sources. The Ballroom dataset shows much smaller variability when BPM is included in the combination. For this specific dataset, this indicates that once \emph{bpm} is used to learn the representation, the expected performance is stable and does not vary much, even if we keep including more sources. Section~\ref{res:single_vs_multi} provides more insight in this regard.
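Ignoring the dataset-specific random effects $u_{1d}$, $u_{2d}$, $u_{3d}$ (which require a dedicated mixed-model routine), the fixed-effects part of the second model reduces to ordinary least squares on the design $[1,\, r^*,\, n^*,\, r^* n^*]$. A minimal sketch under that simplification, with the ordinal coding of $r^*$ from the text:

```python
import numpy as np

# Ordinal coding of the representation strategies by network size (r*)
R_STAR = {'SS-R': 0.0, 'MS-SR@FC': 0.2, 'MS-CR@6': 0.4,
          'MS-CR@4': 0.6, 'MS-CR@2': 0.8, 'MSS-CR': 1.0}

def fit_fixed_effects(y_star, r_star, n_star):
    """OLS fit of y* = b0 + b1*r* + b2*n* + b3*r*n* (fixed effects only;
    the full analysis additionally fits random dataset effects)."""
    r_star = np.asarray(r_star, float)
    n_star = np.asarray(n_star, float)
    X = np.column_stack([np.ones_like(r_star), r_star, n_star,
                         r_star * n_star])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y_star, float), rcond=None)
    return beta  # [b0, b1, b2, b3]
```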
\subsection{Single-Source vs. Multi-Source} \label{res:single_vs_multi} \begin{figure} \centering\includegraphics[scale=.58]{graphics/_y_by_best.pdf} \caption{(Standardized) performance by number of learning sources. Solid points mark representations including the source performing best with \emph{SS-R} in the dataset; empty points mark representations without it. Solid and dashed lines represent the corresponding linear fits; shaded areas represent 95\% confidence intervals.} \label{fig:performance_w_wo_best_stl} \end{figure} \begin{figure} \centering\includegraphics[scale=.4]{graphics/_cor.pdf} \caption{Correlation between (standardized) \emph{SS-R} performance and variance component.} \label{fig:rank_cor} \end{figure} The evidence so far tells us that, \emph{on average}, learning from multiple sources leads to better performance than learning from a single source. However, it could be possible that the \emph{SS-R} representation with the best learning source for the given target dataset still performs better than a multi-source alternative. In fact, in Fig.~\ref{fig:performance_by_n} there are many cases where the best \emph{SS-R} representation (black circles at $n=1$) already performs quite well compared to the more sophisticated alternatives. Fig.~\ref{fig:performance_w_wo_best_stl} presents similar scatter plots, but now explicitly differentiating between representations using the single best source (filled circles, solid lines) and not using it (empty circles, dashed lines). The results suggest that even if the strongest learning source for the specific dataset is not used, the others largely compensate for it in the multi-source representations, catching up with and even surpassing the best \emph{SS-R} representations. The exception to this rule is again \emph{bpm} in the Ballroom dataset, where it definitely makes a difference.
As the plot shows, the variability for low numbers of learning sources is larger when not using the strongest source, but as more sources are added, this variability reduces. To further investigate this issue, for each target dataset, we also computed the variance component due to each of the learning sources, excluding \emph{SS-R} representations~\cite{Searle2006variance}. A large variance due to one of the sources means that, on average and for that specific dataset, there is a large difference in performance between having that source or not. Table~\ref{tab:var} shows all variance components, highlighting the largest per dataset. Apart from \emph{bpm} in the Ballroom dataset, there is no clear evidence that one single source is especially good in all datasets, which suggests that in general there is not a single source that one would use by default. Notably though, the sources \emph{artist}, \emph{tag} and \emph{self} tend to have large variance components. \begin{table}[ht] \centering \caption{Variance components (as percent of total) of the learning sources, within each of the target datasets, and for non-\emph{SS-R} representations.
Largest per dataset in bold face.}\label{tab:var} \begin{tabular}{rrrrrrrr} \hline & Ballroom & FMA & GTZAN & IRMAS & Lastfm & Arousal & Valence \\ \hline \emph{self} & 2 & \textbf{32} & \textbf{39} & 18 & 29 & 6 & 10 \\ \emph{year} & $<$1 & 6 & $<$1 & 1 & 2 & 2 & $<$1 \\ \emph{bpm} & \textbf{96} & 3 & $<$1 & 8 & 16 & $<$1 & \textbf{42} \\ \emph{taste} & $<$1 & $<$1 & $<$1 & $<$1 & $<$1 & $<$1 & 6 \\ \emph{tag} & 1 & 17 & 21 & 16 & 20 & \textbf{33} & 14 \\ \emph{lyrics} & $<$1 & $<$1 & $<$1 & 3 & $<$1 & 11 & $<$1 \\ \emph{cdr\_tag} & $<$1 & 9 & 12 & 16 & 2 & 16 & 14 \\ \emph{artist} & 1 & \textbf{32} & 28 & \textbf{37} & \textbf{32} & 31 & 15 \\ \hline \end{tabular} \end{table} In addition, we observe that the sources with largest variance are not necessarily the sources that obtain the best results by themselves in an \emph{SS-R} representation (see Fig.~\ref{fig:single_task_rep}). We examined this relationship further by calculating the correlation between variance components and (standardized) performance of the corresponding \emph{SS-R}s. The Pearson correlation is $0.38$, meaning that there is a mild association. Fig.~\ref{fig:rank_cor} further shows this with a scatterplot, with a clear distinction between poorly-performing sources (\emph{year}, \emph{taste} and \emph{lyrics} at the bottom) and well-performing sources (\emph{tag}, \emph{cdr\_tag} and \emph{artist} at the right). This result implies that even if some \emph{SS-R} is particularly strong for a given dataset, when considering more complex fusion architectures, the presence of that one source is not necessarily required because the other sources make up for its absence. This is especially important in practical terms, because different tasks generally have different best sources, and practitioners rarely have sufficient domain knowledge to select them up front. Also, and unlike the Ballroom dataset, many real-world problems are not easily solved with a single feature. 
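The per-source variance components and the reported correlation can be approximated as below. This is a simplified ANOVA-style sketch with our own function names (between-group variance of the include/exclude split, as a percentage of total variance); the actual analysis follows~\cite{Searle2006variance}.

```python
import numpy as np

def variance_components(scores, source_mask):
    """source_mask: (n_obs, n_sources) boolean array, True where the
    representation was trained with that source. Returns, per source, the
    share of total score variance (in percent) explained by the
    include/exclude split."""
    scores = np.asarray(scores, dtype=float)
    grand, total = scores.mean(), scores.var()
    comps = []
    for j in range(source_mask.shape[1]):
        between = 0.0
        for v in (False, True):
            grp = scores[source_mask[:, j] == v]
            if grp.size:
                between += grp.size / scores.size * (grp.mean() - grand) ** 2
        comps.append(100.0 * between / total)
    return np.array(comps)

def pearson(x, y):
    """Pearson correlation, as used to relate variance components to the
    (standardized) SS-R performance."""
    return np.corrcoef(x, y)[0, 1]
```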
Therefore, choosing a more general representation based on multiple sources is a much simpler way to proceed, which still yields comparable or better results. In other words, if ``a single deep representation to rule them all'' is pre-trained, it is advisable to base this representation on multiple learning sources. At the same time, given that \emph{MSS-CR} representations also generally show strong performance (albeit at the cost of high dimensionality), and that they come `for free' as soon as \emph{SS-R} networks are trained, we could alternatively imagine an ecosystem in which the community pre-trains and releases many \emph{SS-R} networks for different individual sources in a distributed way, and practitioners then collect these into \emph{MSS-CR} representations, without the need for retraining.\par \subsection{Compactness} \label{res:task_num:compactness} \begin{figure} \centering\includegraphics[width=0.7\textwidth]{graphics/model_complexity_by_n_task.pdf} \caption{Number of network parameters by number of learning sources.} \label{fig:complexity} \end{figure} Under an MTDTL setup with branching (the \emph{MS-CR} architectures), as more learning sources are used, not only will the representation grow larger, but so will the deep network necessary to learn it: see Fig.~\ref{fig:complexity} for an overview of the necessary model parameters for the different architectures. When using all the learning sources, \emph{MS-CR@6}, which for a considerable part encompasses a shared network architecture and branches out relatively late, requires a network around 6.3 times larger than the one needed for \emph{SS-R}.
In contrast, \emph{MS-SR@FC}, which is the most heavily shared MTDTL case, uses a network that is only 1.2 times larger than the network needed for \emph{SS-R}.\par Also, while the representations resulting from the \emph{MSS-CR} and various \emph{MS-CR} architectures depend linearly on the chosen number of learning sources $m$ (see Table~\ref{tab:fusion}), for \emph{MS-SR@FC}, which has a fixed dimensionality of $d$ independent of $m$, we do notice increasing performance as more learning sources are used, except for the IRMAS dataset. This implies that under MTDTL setups, the network does learn as much as possible from the multiple sources, even with fixed network capacity.\par \subsection{Multiple Explanatory Factors} \label{res:mulexpfac} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{graphics/semantic_topic_scatter.pdf} \caption[The LOF Caption]{Potential semantic explainability of MTDTL music representations. Here, we provide a visualization using t-SNE~\cite{VanDerMaaten2008VisualizingT-sne}, plotting 2-dimensional coordinates of each sample from the GTZAN dataset, as resulting from an \emph{MS-CR} representation trained on 5 sources\footnotemark. In the zoomed-in panes, we overlay the strongest topic model terms in $z_t$, for various types of learning sources.} \label{fig:topic_semantic} \end{figure} \footnotetext{The specific model used in the visualization is the $232$nd model from the experimental design we introduce in Section~\ref{eval:expdesign}, which performs better than 95\% of the other models on the GTZAN target dataset.} By training representation models on multiple learning sources in the way we did, our hope is that the representation will reflect latent semantic facets that will ultimately allow for semantic explainability. In Fig.~\ref{fig:topic_semantic}, we show a visualization that suggests this indeed may be possible. More specifically, we consider one of our \emph{MS-CR} models trained on 5 learning sources.
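The 2-dimensional coordinates for such a figure can be obtained with scikit-learn's t-SNE; a minimal sketch on random stand-in features (the perplexity value here is illustrative, not necessarily the one used for the figure):

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_2d(representations, perplexity=10, seed=0):
    """Project (n_samples, d) representation vectors onto 2-D coordinates
    for visual inspection of semantic neighborhoods."""
    X = np.asarray(representations, dtype=float)
    return TSNE(n_components=2, perplexity=perplexity, init='pca',
                random_state=seed).fit_transform(X)
```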
For each learning source-specific block of the representation, using the learning source-specific \texttt{fc-out} layers, we can predict a factor distribution $z_t$ for each of the learning sources. Then, from the predicted $z_t$, one can either map this back onto the original learning labels $y_t$, or simply consider the strongest predicted topics (which we visualized in Fig.~\ref{fig:topic_semantic}), to relate the representation to human-understandable facets or descriptions.\footnote{Note that, as soon as a pre-trained representation network model is adapted to a new dataset through transfer learning, the \texttt{fc-out} layer cannot be used to obtain such explanations from the learning sources used in the representation learning, since the layers will then be fine-tuned to another dataset. However, we hypothesize that semantic explainability may still be preserved if fine-tuning is conducted jointly with the original learning sources used at pre-training time, in a multi-objective strategy.}\par \section{Conclusion} \label{concl} In this paper, we have investigated the effect of different strategies to learn music representations with deep networks, considering multiple learning sources and different network architectures with varying degrees of shared information. Our main research questions are how the number and combination of learning sources (\textbf{RQ1}) and different configurations of the shared architecture (\textbf{RQ2}) affect the effectiveness of the learned deep music representation. To this end, we conducted an experiment training 425 neural network models with different combinations of learning sources and architectures.
After an extensive empirical analysis, we can summarize our findings as follows: \begin{itemize} \item{\textbf{RQ1} The number of learning sources positively affects the effectiveness of a learned deep music representation, although representations based on a single learning source will already be effective in specialized cases (e.g. BPM and the Ballroom dataset).} \item{\textbf{RQ2} In terms of architecture, the amount of shared information has a negative effect on performance: larger models with less shared information (e.g.~\emph{MS-CR@2}, \emph{MSS-CR}) tend to outperform models where sharing is higher (e.g.~\emph{MS-CR@6}, \emph{MS-SR@FC}), all of which outperform the base model (\emph{SS-R}).} \end{itemize} \par Our findings give various pointers to useful future work. First of all, `generality' is difficult to define in the music domain, maybe more so than in CV or NLP, in which lower-level information atoms may be less multifaceted in nature (e.g.\ lower-level representations of visual objects naturally extend to many vision tasks, while an equivalent in music is harder to pinpoint). In case of clear task-specific data skews, practitioners should be pragmatic about this. Also, we only investigated one special case of transfer learning, which might not generalize well if one considers adapting the pre-trained network through further fine-tuning on a target dataset. Since there are various choices to make here, which would bring a substantial amount of variability, we leave these aspects for future work. We believe open-sourcing the models we trained throughout this work will be helpful for such follow-up work. Another limitation of the current work is the selective set of label types in the learning sources. For instance, there are also a number of MIR-related tasks that use time-variant labels, such as automatic music transcription, segmentation, beat tracking and chord estimation.
We believe that such tasks should be investigated as well in the future, to build a more complete overview of the MTDTL problem. Finally, in our current work, we still largely considered MTDTL as a `black box' operation, trying to learn \emph{how} MTDTL can be effective. However, the original reason for starting this work was not only to yield an effective general-purpose representation, but one that also would be semantically interpretable according to different semantic facets. We showed some early evidence that our representation networks may be capable of picking up such facets; however, considerable future work will be needed on more in-depth analysis techniques to understand \emph{what} the deep representations actually learned. \begin{acknowledgements} This work was carried out on the Dutch national e-infrastructure with the support of SURF Cooperative. We further thank the CDR for having provided their album-level genre annotations for our experiments. We thank Keunwoo Choi for the discussion and all the help regarding the implementation of his work. We also thank David Tax for the valuable inputs and discussion. Finally, we thank the editors and reviewers for their effort and constructive help to improve this work. \end{acknowledgements} \textit{Conflict of interest: Jaehun Kim, Juli\'{a}n Urbano, Cynthia C.~S. Liem and Alan Hanjalic state that there are no conflicts of interest.} \bibliographystyle{unsrtnat} \section{Introduction} \label{intro} In the Music Information Retrieval (MIR) field, many research problems of interest involve the automatic description of properties of musical signals, employing concepts that are understood by humans. For this, tasks are derived that can be solved by automated systems. In such cases, algorithmic processes are employed to map raw music audio information to humanly understood descriptors (e.g.\ genre labels or descriptive tags).
To achieve this, historically, the raw audio would first be transformed into a \emph{representation} based on \emph{hand-crafted features}, which are engineered by humans to reflect dedicated semantic signal properties. The feature representation would then serve as input to various statistical or Machine Learning (ML) approaches~\cite{Casey2008Content-basedChallenges}.\par The framing as described above can generally be applied to many applied ML problems: complex real-world problems are abstracted into a relatively simpler form, by establishing tasks that can be computationally addressed by automatic systems. In many cases, the task involves making a prediction based on a certain observation. For this, modern ML methodologies can be employed that can automatically infer the logic for the prediction directly from (a numeric representation of) the given data, by optimizing an objective function defined for the given task. However, music is a multimodal phenomenon that can be described in many parallel ways, ranging from objective descriptors to subjective preference. As a consequence, in many cases, while music-related tasks are well understood by humans, it is often hard to pinpoint and describe where the truly `relevant' information is in the music data used for the tasks, and how this can properly be translated into numeric representations that should be used for prediction. While research into such proper translations can be conducted per individual task, it is likely that informative factors in music data will be shared across tasks. As a consequence, when seeking to identify informative factors that are not explicitly restricted to a single task, Multi-Task Learning (MTL) is a promising strategy. In MTL, a single learning framework hosts multiple tasks at once, allowing models to perform better by sharing commonalities between the involved tasks~\cite{RichCaruana1997Multitask}.
MTL has been successfully used in a range of applied ML works~\cite{Bengio2012RepresentationPerspectives,Liu2015Multi-taskSelection, Bingel2017IdentifyingNetworks,li2014heterogeneous,Zhang2015DeepAnalysis,ZhangFacialLearning, DBLP:journals/corr/KaiserGSVPJU17,DBLP:conf/iccv/ChangLPK17}, also including the music domain~\cite{Weston2011Multi-TaskingRetrieval,Aytar2016SoundNet:Video}. Following successes in the fields of Computer Vision (CV) and Natural Language Processing (NLP), deep learning approaches have recently also gained increasing interest in the MIR field, in which case \emph{deep representations} of music audio data are directly learned from the data, rather than being hand-crafted. Many works employing such approaches reported considerable performance improvements in various music analysis, indexing and classification tasks~\cite{Hamel2010LearningNetworks,Boulanger-Lewandowski2012ModelingTranscription,schlueter2014_icassp,Choi2016AutomaticNetworks,Oord2013DeepRecommendation,chandna2017monoaural,Jeong2016LearningClassification,Han2016DeepMusic}.\par In many deep learning applications, rather than training a complete network from scratch, pre-trained networks are commonly used to generate deep representations, which can be either directly adopted or further adapted for the current task at hand. In CV and NLP, (parts of) certain pre-trained networks~\cite{Simonyan2014VeryRecognition,he2016deep,Szegedy2015GoingConvolutions,Mikolov2013EfficientSpace} have now been adopted and adapted in a very large number of works. These `standard' deep representations have typically been obtained by training a network for a single learning task, such as visual object recognition, employing large amounts of training data.
The hypothesis on why these representations are effective in a broader spectrum of tasks than they originally were trained for, is that \emph{deep transfer learning (DTL)} is happening: information initially picked up by the network is beneficial also for new learning tasks performed on the same type of raw input data. Clearly, the validity of this hypothesis is linked to the extent to which the new task can rely on similar data characteristics as the task on which the pre-trained network was originally trained.\par Although a number of works deployed DTL for various learning tasks in the music domain~\cite{Dieleman2011Audio-basedNetwork,Choi2017TransferTasks,van2014transfer,Liang2014Content-AwareNetworks}, to our knowledge, transfer learning and the employment of pre-trained networks are not yet as standard in the MIR domain as in the CV domain. Again, this may be due to the broad and partially subjective range and nature of possible music descriptions. Following the considerations above, it may then be useful to combine deep transfer learning with multi-task learning. Indeed, in order to increase robustness to a larger scope of new learning tasks and datasets, the concept of MTL has also been applied in training deep networks for representation learning, both in the music domain~\cite{Aytar2016SoundNet:Video,Weston2011Multi-TaskingRetrieval} and in general~\cite[p.~2]{Bengio2012RepresentationPerspectives}. As the model learns several tasks and datasets in parallel, it may pick up commonalities among them. As a consequence, the expectation is that a network learned with MTL will yield robust performance across different tasks, by transferring shared knowledge~\cite{RichCaruana1997Multitask,Bengio2012RepresentationPerspectives}.
A simple illustration of the conceptual difference between traditional DTL and deep transfer learning based on MTL (further referred to as \emph{multi-task based deep transfer learning (MTDTL)}) is shown in Fig.~\ref{fig:toyexample}.\par \begin{figure} \centering \includegraphics[height=0.3\textheight]{graphics/toy_example.pdf} \caption{Simplified illustration of the conceptual difference between traditional deep transfer learning (DTL) based on a single learning task (above) and multi-task based deep transfer learning (MTDTL) (below). The same color used for a learning task and a target task indicates that the tasks have commonalities, which implies that the learned representation is likely to be informative for the target task. At the same time, this representation may not be as informative for another future task, leading to low transfer learning performance. The hypothesis behind MTDTL is that relying on more learning tasks increases the robustness of the learned representation and its usability for a broader set of target tasks.} \label{fig:toyexample} \end{figure} The mission of this paper is to investigate the effect of conditions around the setup of MTDTL that are important to yield effective deep music representations. Here, we understand an `effective' representation to be a representation that is suitable for a wide range of new tasks and datasets. Ultimately, we aim to provide a methodological framework to systematically obtain and evaluate such transferable representations. We pursue this mission by exploring the effectiveness of MTDTL and traditional DTL, as well as concatenations of multiple deep representations, obtained by networks that were independently trained on separate single learning tasks. We consider these representations for multiple choices of learning tasks and multiple target datasets.
Our work will address the following research questions: \begin{itemize} \item \textbf{RQ1:} Given a set of learning sources that can be used to train a network, what is the influence of the number and type of the sources on the effectiveness of the learned deep representation? \item \textbf{RQ2:} How do various degrees of information sharing in the deep architecture affect the effectiveness of a learned deep representation? \end{itemize} By answering \textbf{RQ1}, we arrive at an understanding of important factors regarding the composition of a set of learning tasks and datasets (which in the remainder of this work will be denoted as \emph{learning sources}) to achieve an effective deep music representation, specifically regarding the number and nature of learning sources. The answer to \textbf{RQ2} provides insight into \emph{how to choose the optimal multi-task network architecture} in an MTDTL context. For example, in MTL, multiple sources are considered under a joint learning scheme that partially shares inferences obtained from different learning sources in the learning pipeline. In MTL applications using deep neural networks, this means that certain layers will be shared between all sources, while at other stages, the architecture will `branch' out into source-specific layers~\cite{RichCaruana1997Multitask,Bingel2017IdentifyingNetworks,li2014heterogeneous,Zhang2015DeepAnalysis,ZhangFacialLearning,misra2016cross,Aytar2016SoundNet:Video}. However, investigation is still needed on where in the layered architecture the branching should ideally happen---if a branching strategy turns out beneficial in the first place.\par To arrive at the aforementioned answers, it is necessary to conduct a systematic assessment of the relevant factors. For \textbf{RQ1}, we investigate different numbers and combinations of learning sources. For \textbf{RQ2}, we study different architectural strategies.
However, we ultimately wish to investigate the effectiveness of the representation with respect to new, target learning tasks and datasets (which in the remainder of this paper will be denoted by \emph{target datasets}). While this may cause a combinatorial explosion with respect to possible experimental configurations, we will make strategic choices in the design and evaluation procedure of the various representation learning strategies.\par The scientific contribution of this work can be summarized as follows: \begin{itemize} \item[$\bullet$] We provide insight into the effectiveness of various deep representation learning strategies under the multi-task learning context. \item[$\bullet$] We offer in-depth insight into ways to evaluate desired properties of a deep representation learning procedure. \item[$\bullet$] We propose and release several pre-trained music representation networks, based on different learning strategies for multiple semantic learning sources. \end{itemize} The rest of this work is organized as follows: a formalization of the problem, as well as the global outline of how learning will be performed based on different learning tasks from different sources, will be presented in Section~\ref{learning_framework}. Detailed specifications of the deep architectures we considered for the learning procedure will be discussed in Section~\ref{dl_specifications}. Our strategy to \emph{evaluate} the effectiveness of different representation network variants by employing various \emph{target datasets} will be the focus of Section~\ref{eval}. Experimental results will be discussed in Section~\ref{res:intro}, after which general conclusions will be presented in Section~\ref{concl}. \section{Framework for Deep Representation Learning} \label{learning_framework} In this section, we formally define the deep representation learning problem.
As Fig.~\ref{fig:problem} illustrates, any domain-specific MTDTL problem can be abstracted into a formal task, which is instantiated by a specific dataset with specific observations and labels. Multiple tasks and datasets are involved to emphasize different aspects of the input data, such that the learned representation is more adaptable to different future tasks. The learning part of this scheme can be understood as the MTL phase, which is introduced in Section~\ref{learning_framework:prob_def}. Subsequently, in Section~\ref{learning_framework:learning_sources}, we discuss the learning sources involved in this work, which consist of various tasks and datasets, allowing us to investigate their effects on transfer learning. Further, in Section~\ref{learning_framework:learning_sources:learnfromfactors}, we introduce the label preprocessing procedure applied in this work, which regularizes the learning sources so that their comparative analysis becomes clearer. \begin{figure}[!htp] \centering \subfloat[Multi-Task Transfer Learning in General Problem Domain]{% \includegraphics[width=0.7\textwidth]{graphics/Problem_General.pdf}% \label{fig:problem:general}% }% \hfill% \subfloat[Multi-Task Transfer Learning in Music Information Retrieval Domain]{% \includegraphics[width=0.7\textwidth]{graphics/Problem_Example_full.pdf}% \label{fig:problem:example}% }% \hfill% \caption{Schematic overview of what this work investigates. The upper scheme illustrates a general problem solving framework in which multi-task transfer learning is employed. The tasks $t \in \{t_0, t_1, \cdots, t_M\}$ are derived from a certain problem domain and instantiated by datasets, which often are represented as sample pairs of observations and corresponding labels $(X_{t}, y_{t})$. Sometimes, the original dataset is processed further into simpler representation forms $(X_{t}, z_{t})$, to filter out undesirable information and noise.
Once a model or system $f_{t}(X_{t})$ has learned the necessary mappings within the learning sources, this knowledge can be transferred to another set of target datasets, leveraging commonalities already obtained by the pre-training. Below the general framework, we show a concrete example, in which the broad MIR problem domain is abstracted into various sub-problems with corresponding tasks and datasets.} \label{fig:problem} \end{figure} \subsection{Problem Definition} \label{learning_framework:prob_def} A machine learning problem, focused on solving a specific task $t$, can be formulated as a minimization problem, in which a model function $f_t$ must be learned that minimizes a loss function $\mathcal{L}$ for a given dataset $\mathcal{D}_{t} = \{\,(x^{(i)}_{t}, y^{(i)}_{t}) \mid i \in \{1, \cdots, I\} \,\}$, comparing the model's predictions for the input $x_t$ with the actual task-specific learning labels $y_t$. This can be formulated using the following expression: \begin{equation} \label{eq:1} \hat\theta = \argmin\;\mathbb{E}_{\mathcal{D}_{t}}\mathcal{L}(y_t, f_t(x_t;\theta)) \end{equation} where $x_{t}\in\mathbb{R}^d$ is, traditionally, a hand-crafted $d$-dimensional feature vector and $\theta$ is the set of model parameters of $f$. When deep learning is employed, the model function $f$ denotes a learnable network. Typically, the network model $f$ is learned in an end-to-end fashion, from raw data at the input to the learning label. In the speech and music fields, however, true end-to-end learning is still not common practice. Instead, raw data is typically transformed first, before serving as network input.
More specifically, in the music domain, common input to function $f$ would be $X\in\mathbb{R}^{c\times{n}\times{b}}$, replacing the originally hand-crafted feature vector $x\in\mathbb{R}^d$ from (\ref{eq:1}) by a time-frequency representation of the observed music data, usually obtained through the Short-Time Fourier Transform (STFT), with potential additional filter bank applications (e.g.\ mel-filter bank). The dimensions $c$, $n$, $b$ indicate the channels of the audio signal, time steps, and frequency bins, respectively. If such a network is still trained for a specific single machine learning task $t$, we can now reformulate (\ref{eq:1}) as follows: \begin{equation} \label{eq:2} \hat\theta = \argmin \; \mathbb{E}_{\mathcal{D}_{t}}\mathcal{L}(y_{t}, f_{t}(X_{t};\theta)). \end{equation} In MTL, in the process of learning the network model $f$, different tasks will need to be solved in parallel. In the case of deep neural networks, this is usually realized by having a network in which lower layers are shared for all tasks, but upper layers are task-specific. Given $m$ different tasks $t$, each having the learning label $y_{t}$, we can formulate the learning objective of the neural network in an MTL scenario as follows: \begin{equation} \label{eq:4} \hat\theta^{s}, \hat\theta^{*} = \argmin \; \mathbb{E}_{t\in{\mathcal{T}}}\mathbb{E}_{\mathcal{D}_{t}}\mathcal{L}(y_{t}, f_{t}(X_{t};\theta^{s},\theta^{t})) \end{equation} Here, $\mathcal{T}=\{t_{1},t_{2},...,t_{m}\}$ is a given set of tasks to be learned and $\theta^{*}=\{\theta^{1},\theta^{2},...,\theta^{m}\}$ indicates the set of model parameters $\theta^{t}$ with respect to each task. Since the deep architecture initially shares lower layers and then branches out to task-specific upper layers, the parameters of shared layers and task-specific layers are referred to separately as $\theta^{s}$ and $\theta^{t}$, respectively. Updates for all parameters can be achieved through standard back-propagation.
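To make the hard parameter sharing behind (\ref{eq:4}) concrete, the following minimal pure-Python sketch evaluates a toy two-task objective: a shared trunk is computed once per input, and each task adds its own head and loss, after which the per-task losses are averaged. Layer sizes, the ReLU/softmax choices, and all parameter values here are illustrative only, not the actual network configuration used in this work.

```python
import math

def shared_trunk(x, theta_s):
    # shared lower layers (theta_s): here a single toy linear map + ReLU
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in theta_s]

def task_head(h, theta_t):
    # task-specific upper layer (theta_t): linear map + softmax
    logits = [sum(w * hi for w, hi in zip(row, h)) for row in theta_t]
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(y, p):
    # per-task loss L(y_t, f_t(...)); cross-entropy as a stand-in example
    return -sum(yi * math.log(max(pi, 1e-12)) for yi, pi in zip(y, p))

def mtl_loss(x, labels, theta_s, theta_tasks):
    # expectation over tasks t in T of L(y_t, f_t(x; theta_s, theta_t))
    h = shared_trunk(x, theta_s)  # computed once, reused by every task head
    losses = [cross_entropy(labels[t], task_head(h, theta_tasks[t]))
              for t in range(len(theta_tasks))]
    return sum(losses) / len(losses)
```

In the full model, the trunk and heads correspond to the shared convolutional stack and the source-specific \texttt{fc-output} layers of Section~\ref{dl_specifications}; back-propagating this averaged loss updates $\theta^{s}$ and all $\theta^{t}$ jointly.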
Further specifics on network architectures and training configurations will be given in Section~\ref{dl_specifications}.\par Given the formalizations above, the first step in our framework is to select a suitable set $\mathcal{T}$ of learning tasks. These tasks can be seen as multiple concurrent descriptions or transformations of the same input fragment of musical audio: each will reflect certain semantic aspects of the music. However, unlike the approach in a typical MTL scheme, solving multiple specific learning tasks is actually not our main goal; instead, we wish to learn an effective \emph{representation} that captures as many semantically important factors in the low-level music representation as possible. Thus, rather than using the learning labels $y_{t}$ directly, our representation learning process will employ reduced learning labels $z_{t}$, which capture a reduced set of semantic factors from $y_{t}$. We then can reformulate (\ref{eq:4}) as follows: \begin{equation} \label{eq:5} \hat\theta^{s}, \hat\theta^{*} = \argmin \; \mathbb{E}_{t\in{\mathcal{T}}}\mathbb{E}_{\mathcal{D}_{t}}\mathcal{L}(z_{t}, f_{t}(X_{t};\theta^{s},\theta^{t})) \end{equation} where $z_t\in\mathbb{R}^{k}$ is a $k$-dimensional vector that represents the reduced learning label for a specific task $t$. Each $z_t$ will be obtained through task-specific factor extraction methods, as described in Section~\ref{learning_framework:learning_sources:learnfromfactors}. \subsection{Learning Sources} \label{learning_framework:learning_sources} In the MTDTL context, a training dataset can be seen as the `source' to learn the representation, which will later be transferred to a future `target' dataset. Learning sources of different nature can be imagined, which can be globally categorized as \emph{Algorithm} or \emph{Annotation}.
As for the \emph{Algorithm} category, by employing traditional feature extraction or representation transformation algorithms, we are able to automatically extract semantically interesting aspects from input data. The \emph{Annotation} category, in turn, includes different types of label annotations of the input data by humans.\par The dataset used as a resource for our learning experiments is the Million Song Dataset (MSD)~\cite{Bertin-Mahieux2011}. In its original form, it contains metadata and precomputed features for a million songs, with several associated data resources, e.g.\ \texttt{Last.fm} social tags and listening profiles from \texttt{the Echo Nest}. While the MSD does not distribute audio due to copyright reasons, through the API of the \texttt{7digital} service, 30-second audio previews can be obtained for the songs in the dataset. These 30-second previews will form the source for our raw audio input.\par Using the MSD data, we consider several subcategories of learning sources within the \emph{Algorithm} and \emph{Annotation} categories; below, we give an overview of these, and specify what information we considered exactly for the learning labels in our work. \subsubsection{Algorithm} \label{learning_framework:learning_sources:algorithm} \begin{itemize} \item \textbf{\textit{Self.}} The music track is the learning source itself; in other words, intrinsic information in the input music track should be captured through a learning procedure, without employing further data. Various unsupervised or auto-regressive learning strategies can be employed under this category, with variants of Autoencoders, including the Stacked Autoencoder~\cite{bengio2007greedy,Vincent2008ExtractingAutoencoders}, Restricted Boltzmann Machines (RBM)~\cite{smolensky1986information}, Deep Belief Networks (DBN)~\cite{Hinton2006ANets} and Generative Adversarial Networks (GAN)~\cite{goodfellow2014generative}.
As another example within this category, variants of the Siamese network for similarity learning can be considered~\cite{Han2015MatchNet:Matching,Arandjelovic2017LookLearn,Huang2017SimilarityGames}. In our case, we will employ the Siamese architecture to learn a metric that measures whether two input music clips belong to the same track, or to two different tracks. This can be formulated as follows: \begin{equation} \label{eq:self} \hat\theta^{self}, \hat\theta^{s} = \argmin \; \mathbb{E}_{X_l, X_r \sim \mathcal{D}_{self}} \mathcal{L}(y_{self}, f_{self}(X_{l},X_{r};\theta^{self},\theta^{s})) \end{equation} \begin{equation} \label{eq:self_h} y_{self}= \begin{cases} 1, & \text{if } X_{l} \text{ and } X_{r} \text{ sampled from same track} \\ 0 & \text{otherwise} \end{cases} \end{equation} where $X_{l}$ and $X_{r}$ are a pair of randomly sampled short music snippets (taken from the 30-second MSD audio previews) and $f_{self}$ is a network for learning a metric between the given input representations in terms of the criteria imposed by $y_{self}$. It is composed of one or more fully-connected layers and one output layer with softmax activation. A global outline illustration of our chosen architecture is given in Fig.~\ref{fig:match_arch}. Further specifications of the representation network and sampling strategies will be given in Sections~\ref{dl_specifications:base_architecture} and~\ref{dl_specifications:sampling}. \begin{figure}[htp] \centering \includegraphics[height=0.33\textheight]{graphics/siamese_arch.pdf} \caption{Siamese architecture adopted for the \emph{self} learning task. For further details of the Representation Network, see Section~\ref{dl_specifications:base_architecture} and Fig.~\ref{fig:base_arch}.} \label{fig:match_arch} \end{figure} \item \textbf{\textit{Feature.}} Many algorithms exist already for extracting features out of musical audio, or for transforming musical audio representations.
By running such algorithms on musical audio, learning labels are automatically computed, without the need for soliciting human annotations. Algorithmically computed outcomes will likely not be perfect, and may include noise or errors. At the same time, we consider them a relatively efficient way to extract semantically relevant and more structured information out of a raw input signal.\par In our case, under this category, we use Beats Per Minute (BPM) information, released as part of the MSD's precomputed features. The BPM values were computed by an estimation algorithm, as part of the \texttt{Echo Nest} API.\par \end{itemize} \subsubsection{Annotation} \label{learning_framework:learning_sources:annotation} \begin{itemize} \item \textbf{\textit{Metadata.}} Typically, metadata will come `for free' with music audio, specifying side information, such as the release year, the song title, the name of the artist, the corresponding album name, and the corresponding album cover image. Considering that this information describes categorization facets of the musical audio, metadata can be a useful information source to learn a music representation. In our experiments, we use release year information, which is readily provided as metadata with each song in the MSD.\par \item \textbf{\textit{Crowd.}} Through interaction with music streaming or scrobbling services, large numbers of users, also designated as the \textit{crowd}, have left explicit or implicit information regarding their perspectives on musical content. For example, they may have created social tags, ratings, or social media mentions of songs. With many services offering API access to these types of descriptors, crowd data offers scalable, spontaneous and diverse (albeit noisy) human perspectives on music signals.\par In our experiments, we use social tags from \texttt{Last.fm}\footnote{\url{https://labrosa.ee.columbia.edu/millionsong/lastfm}} and user listening profiles from the \texttt{Echo Nest}.
\item \textbf{\textit{Professional.}} As mentioned in \cite{Casey2008Content-basedChallenges}, the annotation of music tracks is a complicated and time-consuming process: annotation criteria frequently are subjective, and considerable domain knowledge and annotation experience may be required before accurate and consistent annotations can be made. Professional experts in categorization have this experience, and thus are capable of providing clean and systematic information about musical content. It is not trivial to get such professional annotations at scale; however, these types of annotations may be available in existing professional libraries.\par In our case, we use professional annotations from the Centrale Discotheek Rotterdam (CDR), the largest music library in The Netherlands, holding all music ever released in the country in physical and digital form in its collection. The CDR collection can be digitally accessed through the online Muziekweb\footnote{\url{https://www.muziekweb.nl/}} platform. For each musical album in the CDR collection, genre annotations were made by a professional annotator, according to a fixed vocabulary of 367 hierarchical music genres.\par As another professional-level `description', we adopted lyrics information per track, which is provided in Bag-of-Words format with the MSD. To filter out trivial terms such as stop-words, we applied TF-IDF~\cite{salton1983introduction}.\par \item \textbf{\textit{Combination.}} Finally, learning labels can be derived from combinations of the above categories. In our experiment, we used a combination of artist information and social tags, by making a bag of tags at the artist level as a learning label.\par \end{itemize} Not all songs in the MSD actually include learning labels from all the sources mentioned above. It is thus another advantage of MTL that one can use such unbalanced datasets in a single learning procedure, to maximize the coverage of the dataset.
However, if one uses an unbalanced number of samples across different learning sources, it is not trivial to compare the effects of individual learning sources. We therefore choose to work with a subset of the dataset, in which equal numbers of samples across learning sources can be used. As a consequence, we collected 46,490 clips of tracks with corresponding learning source labels. A 41,841 / 4,649 split was made for training and validation for all sources from both MSD and CDR. Since we mainly focus on transfer learning, we used the validation set mostly for monitoring the training, to keep the network from overfitting.\par \begin{table} \centering \caption{Properties of learning sources.} \label{tab:intertask} \begin{tabular}{llllrl} \hline\noalign{\smallskip} Identifier & \multicolumn{2}{c}{Category} & Data & Dimensionality & Preprocessing \\ \noalign{\smallskip}\hline\noalign{\smallskip} \textit{self} & \multirow{ 2}{*}{Algorithm} & Self & MSD - Track & 1 & \\ \textit{bpm} & & Feature & MSD - BPM & 1 & GMM \\ \noalign{\smallskip}\hline\noalign{\smallskip} \textit{year} & \multirow{ 6}{*}{Annotation} & Metadata & MSD - Year & 1 & GMM \\ \textit{tag} & & Crowd & MSD - Tag & 174,156 & pLSA \\ \textit{taste} & & Crowd & MSD - Taste & 949,813 & pLSA \\ \textit{cdr\_tag} & & Professional & CDR - Tag & 367 & pLSA \\ \textit{lyrics} & & Professional & MSD - Lyrics & 5,000 & pLSA, TF-IDF\\ \textit{artist} & & Combination & MSD - Artist \& Tag & 522,366 & pLSA \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{table} \scriptsize \centering \caption{Examples of Latent Topics extracted with pLSA from MSD social tags} \label{tab:topic_term} \begin{tabular}{ll} \hline\noalign{\smallskip} Topic & Strongest social tags\\ \noalign{\smallskip}\hline\noalign{\smallskip} tag1 & \texttt{indie rock}, \texttt{indie}, \texttt{british}, \texttt{Scottish}\\ tag2 & \texttt{pop}, \texttt{pop rock}, \texttt{dance}, \texttt{male vocalists}\\ tag3 &
\texttt{soul}, \texttt{rnb}, \texttt{funk}, \texttt{Neo-Soul}\\ tag4 & \texttt{Melodic Death Metal}, \texttt{black metal}, \texttt{doom metal}, \texttt{Gothic Metal}\\ tag5 & \texttt{fun}, \texttt{catchy}, \texttt{happy}, \texttt{Favorite}\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \subsection{Latent Factor Preprocessing} \label{learning_framework:learning_sources:learnfromfactors} Most learning sources are noisy. For instance, social tags include tags for personal playlist management, long sentences, or simply typos, which do not actually show relevant nuances in describing the music signal. The algorithmically extracted BPM information also is imperfect, and likely contains octave errors, in which BPM is under- or overestimated by a factor of 2. To deal with this noise, several previous works using the MSD~\cite{Choi2016AutomaticNetworks,Choi2017TransferTasks} applied a frequency-based filtering strategy along with top-down domain knowledge. However, this shrinks the available sample size. As an alternative way to handle noisiness, several other previous works~\cite{Lamere2008SocialRetrieval,Weston2011Multi-TaskingRetrieval,Hamel2013TRANSFERSIMILARITY,Law2010LearningLabels,van2014transfer,Oord2013DeepRecommendation} apply latent factor extraction using various low-rank approximation models to preprocess the label information. We also choose to do this in our experiments. A full overview of chosen learning sources, their category, origin dataset, dimensionality and preprocessing strategies is shown in Table~\ref{tab:intertask}. In most cases, we apply probabilistic latent semantic analysis (pLSA), which extracts latent factors as a multinomial distribution of latent topics~\cite{DBLP:conf/uai/Hofmann99}. Table~\ref{tab:topic_term} illustrates several examples of strong social tags within extracted latent topics. 
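Both preprocessing routes can be sketched in miniature as follows: a minimal pLSA fitted by EM for count-based sources such as tags, plus, for the scalar sources discussed next, the mapping of a value to posterior probabilities over given Gaussian components, together with a KL-divergence objective. All sizes, parameter values, and initialization details here are illustrative only; the actual pipeline uses $k=50$ factors and GMMs fitted to the training labels.

```python
import math
import random

def _norm(v):
    s = sum(v)
    return [x / s for x in v]

def plsa(counts, n_topics, n_iter=100, seed=0):
    """Minimal pLSA via EM. counts[d][w] holds term counts per item (e.g. song).
    Returns (p_z_given_d, p_w_given_z); each row of p_z_given_d is a
    k-dimensional topic distribution, usable as a reduced label z_t."""
    rng = random.Random(seed)
    n_docs, n_words = len(counts), len(counts[0])
    p_z_d = [_norm([rng.random() + 0.1 for _ in range(n_topics)]) for _ in range(n_docs)]
    p_w_z = [_norm([rng.random() + 0.1 for _ in range(n_words)]) for _ in range(n_topics)]
    for _ in range(n_iter):
        acc_z_d = [[1e-12] * n_topics for _ in range(n_docs)]  # tiny floor avoids 0/0
        acc_w_z = [[1e-12] * n_words for _ in range(n_topics)]
        for d in range(n_docs):
            for w in range(n_words):
                if counts[d][w] == 0:
                    continue
                # E-step: posterior over topics for this (item, term) pair
                post = _norm([p_z_d[d][k] * p_w_z[k][w] for k in range(n_topics)])
                # M-step accumulation, weighted by the observed count
                for k in range(n_topics):
                    acc_z_d[d][k] += counts[d][w] * post[k]
                    acc_w_z[k][w] += counts[d][w] * post[k]
        p_z_d = [_norm(r) for r in acc_z_d]
        p_w_z = [_norm(r) for r in acc_w_z]
    return p_z_d, p_w_z

def gmm_posterior(x, means, stds, weights):
    """Map a scalar label (e.g. a BPM value) to a categorical distribution over
    Gaussian components; densities are unnormalized (constants cancel)."""
    dens = [w * math.exp(-0.5 * ((x - m) / s) ** 2) / s
            for w, m, s in zip(weights, means, stds)]
    return _norm(dens)

def kl_divergence(z, p, eps=1e-12):
    """KL(z || p): objective between target factors z_t and model output."""
    return sum(zi * math.log((zi + eps) / (pi + eps)) for zi, pi in zip(z, p))
```

For instance, `gmm_posterior(118.0, [80, 120, 160], [15, 15, 15], [1/3] * 3)` (a toy 3-component mixture) puts most of its mass on the component centered at 120 BPM, and `kl_divergence` vanishes when the model output matches the target distribution exactly.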
For situations in which learning labels are a scalar, non-binary value (BPM and release year), we applied a Gaussian Mixture Model (GMM) to transform each value into a categorical distribution over Gaussian components. In the case of the \textit{Self} category, which essentially poses a binary membership test, no factor extraction was needed. After preprocessing, the learning source labels $y_t$ are expressed in the form of probability distributions $z_t$. Then, the learning of a deep representation can take place by minimizing the Kullback\textendash Leibler (KL) divergence between model inferences $f_t(X)$ and label factor distributions $z_t$. Along with the noise reduction, another benefit of such preprocessing is the regularization of the scale of the objective function between the different tasks involved in the learning, when the resulting factors have the same size. This regularity between the objective functions is particularly helpful for comparing different tasks and datasets. For this purpose, we used a fixed single value $k=50$ for the number of factors (pLSA) and the number of Gaussians (GMM). In the remainder of this paper, the datasets and tasks processed in the above manner will be denoted as \textit{learning sources}, for coherent presentation and usage of the terminology.\par \section{Representation Network Architectures} \label{dl_specifications} In this section, we present the detailed specification of the deep neural network architecture used for representation learning in this work. We will discuss the base architecture of the network, and further discuss the shared architecture with respect to the different fusion strategies that one can take in the MTDTL context. We also introduce details of the preprocessing applied to the input data fed into the networks.\par \subsection{Base Architecture} \label{dl_specifications:base_architecture} \begin{table} \centering \caption{Configuration of the base CNN.
\texttt{conv} and \texttt{max-pool} indicate a 2-dimensional convolution and max-pooling layer, respectively. We set the stride to 2 on the time dimension of \texttt{conv1}, to compress dimensionality at an early stage. All other convolution strides are set to 1. \texttt{gap} corresponds to the global average pooling used in~\cite{he2016deep}, which averages out all the spatial dimensions of the filter responses. \texttt{fc} is an abbreviation of fully-connected layer. We use \texttt{dropout} with $p=0.5$ only for the \texttt{fc-feature} layer, where the intermediate latent representation is extracted and evaluated. For simplicity, we omit the batch-size dimension of the input shape.} \label{tab:netarch} \begin{tabular}{lllll} \hline\noalign{\smallskip} Layer & Input Shape & Weight Shape & Sub-Sampling & Activation\\ \noalign{\smallskip}\hline\noalign{\smallskip} \texttt{conv1} & $2\times216\times128$ & $2\times16\times5\times5$ & $2\times1$ & \texttt{ReLU}\\ \texttt{max-pool1} & $16\times108\times128$ & & $2\times2$ & \\ \texttt{conv2} & $16\times54\times64$ & $16\times32\times3\times3$ & & \texttt{ReLU}\\ \texttt{max-pool2} & $32\times54\times64$ & & $2\times2$ & \\ \texttt{conv3} & $32\times27\times32$ & $32\times64\times3\times3$ & & \texttt{ReLU} \\ \texttt{max-pool3} & $64\times27\times32$ & & $2\times2$ & \\ \texttt{conv4} & $64\times13\times16$ & $64\times64\times3\times3$ & & \texttt{ReLU}\\ \texttt{max-pool4} & $64\times13\times16$ & & $2\times2$ &\\ \texttt{conv5} & $64\times6\times8$ & $64\times128\times3\times3$ & & \texttt{ReLU}\\ \texttt{max-pool5} & $128\times6\times8$ & & $2\times2$ & \\ \texttt{conv61} & $128\times3\times4$ & $128\times256\times3\times3$ & & \texttt{ReLU}\\ \texttt{conv62} & $256\times3\times4$ & $256\times256\times1\times1$ & & \texttt{ReLU} \\ \texttt{gap} & $256\times3\times4$ & & \\ \texttt{fc-feature} & $256$ & $256\times256$ & & \texttt{ReLU} \\ \texttt{dropout} & $256$ & & \\ \texttt{fc-output} & $256$
& learning source specific & & \texttt{Softmax} \\ \noalign{\smallskip}\hline \end{tabular} \end{table} As the deep base architecture for feature representation learning, we choose a Convolutional Neural Network (CNN) architecture inspired by~\cite{Simonyan2014VeryRecognition}, as described in Fig.~\ref{fig:base_arch} and Table~\ref{tab:netarch}. The CNN is one of the most popular architectures in many music-related machine learning tasks~\cite{Oord2013DeepRecommendation,Choi2016AutomaticNetworks,Han2016DeepMusic,Schluter2016LearningExamples,DBLP:conf/icassp/HersheyCEGJMPPS17,DBLP:conf/nips/LeePLN09,Dieleman2011Audio-basedNetwork,DBLP:conf/icmla/HumphreyB12,DBLP:conf/interspeech/NakashikaGT12,DBLP:conf/ismir/UllrichSG14,DBLP:conf/mlsp/Piczak15,DBLP:conf/ica/SimpsonRP15,DBLP:conf/interspeech/PhanHMM16,DBLP:conf/cbmi/PonsLS16,DBLP:conf/fedcsis/StasiakM16,DBLP:conf/icassp/SuZZG16}. Many of these works adopt an architecture having cascading blocks of 2-dimensional filters and max-pooling, derived from well-known works in image recognition~\cite{Simonyan2014VeryRecognition,Krizhevsky2012ImageNetNetworks}. Although variants of the CNN using 1-dimensional filters were also suggested by~\cite{Dieleman2014END-TO-ENDAUDIO,Oord2016WaveNet:Audio,Aytar2016SoundNet:Video,Jaitly2011LEARNINGHinton} to learn features directly from a raw audio signal in an end-to-end manner, few works have managed to use them successfully for music classification tasks~\cite{Lee2017Sample-LevelWaveforms}.\par The main differences between the base architecture and~\cite{Simonyan2014VeryRecognition} are the use of Global Average Pooling (GAP) and Batch Normalization (BN) layers. BN is applied to accelerate the training and stabilize the internal covariate shift for every convolution layer and the \texttt{fc-feature} layer~\cite{Ioffe}.
Also, global spatial pooling is adopted as the last pooling layer of the cascading convolution blocks, which is known to effectively summarize the spatial dimensions both in the image~\cite{he2016deep} and the music domain~\cite{Han2016DeepMusic}. This approach also ensures that the \texttt{fc-feature} layer does not have a huge number of parameters. We applied the Rectified Linear Unit (ReLU)~\cite{Nair2010RectifiedMachines} to all convolution layers and the \texttt{fc-feature} layer. For the \texttt{fc-output} layer, softmax activation is used. For each convolution layer, we applied zero-padding such that the input and the output have the same spatial shape. As for regularization, we chose to apply dropout~\cite{Srivastava2014Dropout:Overfitting} on the \texttt{fc-feature} layer. We added $L2$ regularization across all the parameters with the same weight $\lambda=10^{-6}$. \subsubsection{Audio Preprocessing} \label{dl_specifications:audiopreproc} We aim to learn a music representation from as-raw-as-possible input data to fully leverage the capability of the neural network. For this purpose, we use the dB-scale mel-scale magnitude spectrum of an input audio fragment, extracted by applying 128-band mel-filter banks on the Short-Time Fourier Transform (STFT). Mel-spectrograms have generally been a popular input representation choice for CNNs applied in music-related tasks~\cite{Nam2012LearningRetrieval,Hamel2013TRANSFERSIMILARITY,Oord2013DeepRecommendation,Choi2016AutomaticNetworks,Choi2017TransferTasks,Han2016DeepMusic}; besides, it was also reported recently that their frequency-domain summarization, based on psycho-acoustics, is efficient and not easily learnable through data-driven approaches~\cite{Choi2017ATagging,Doerfler2017BasicDesign}. We choose a 1024-sample window size and a 256-sample hop size, translating to about 46 ms and 11.6 ms respectively for a sampling rate of 22 kHz.
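As a quick sanity check of these analysis parameters, the following sketch recovers the $2\times216\times128$ input tensor shape used in the next subsection from a 2.5-second crop. It assumes the exact rate of 22,050 Hz and centered framing (the signal padded by half a window on both sides, giving one frame per hop plus one, as common analysis libraries do); both are our reading of the setup rather than details stated explicitly here.

```python
def stft_frames(n_samples, hop=256, win=1024, centered=True):
    """Number of STFT frames for a signal of n_samples.
    `centered` assumes padding by win // 2 on both sides (an assumption)."""
    if centered:
        return 1 + n_samples // hop
    return 1 + (n_samples - win) // hop

def input_shape(seconds=2.5, sr=22050, hop=256, n_mels=128, channels=2):
    """Shape (c, n, b) of the network input for one stereo audio crop."""
    n_samples = int(seconds * sr)
    return (channels, stft_frames(n_samples, hop), n_mels)

# window / hop durations at 22,050 Hz
win_ms = 1024 / 22050 * 1000   # ~46.4 ms
hop_ms = 256 / 22050 * 1000    # ~11.6 ms
```

With these assumptions, `input_shape()` indeed evaluates to `(2, 216, 128)`; without centering, the same crop would yield only 212 frames.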
We also applied standardization to each frequency band of the mel spectrum, using the mean and variance of all individual mel spectra in the training set. \subsubsection{Sampling} \label{dl_specifications:sampling} During the learning process, in each iteration a random batch of songs is selected. The audio corresponding to these songs is originally 30 seconds long; for computational efficiency, we randomly crop 2.5 seconds out of each song each time. Keeping the stereo channels of the audio, a single input tensor $X^*$ used in our experiment thus ends up with size $2\times216\times128$, where the first dimension indicates the number of channels, and the following dimensions denote time steps and mel-bins, respectively. Along with the computational efficiency, several works in the MIR field have reported that using a small chunk of the input not only inflates the dataset, but also yields good performance on high-level tasks such as music auto-tagging~\cite{Lee2017Sample-LevelWaveforms,Han2016DeepMusic,Dieleman2014END-TO-ENDAUDIO}. For the \textit{self} case, we generate batches with equal numbers of songs for both membership categories in $y_{self}$.\par \begin{figure}[htp] \centering \includegraphics[height=0.33\textheight]{graphics/default_arch.pdf}% \caption{Default CNN architecture for supervised single-source representation learning. Details of the Representation Network are presented at the left of the global architecture diagram. The numbers inside the parentheses indicate either the number of filters, or the number of units with respect to the type of layer.} \label{fig:base_arch} \end{figure} \subsection{Multi-Source Architectures with Various Degrees of Shared Information} \label{dl_specifications:fusion} When learning a music representation based on various available learning sources, different strategies can be taken regarding the choice of architecture.
We will investigate the following setups: \begin{itemize} \item{ As a base case, a \emph{\textbf{Single-Source Representation (SS-R)}} can be learned for a single source only. As mentioned earlier, this would be the typical strategy leading to pre-trained networks, which later would be used in transfer learning. In our case, our base architecture from Section~\ref{dl_specifications:base_architecture} and Fig.\ \ref{fig:base_arch} will be used, for which the layers in the Representation Network are also illustrated in Fig.\ \ref{fig:base}. Out of the \texttt{fc-feature} layer, a $d$-dimensional representation is obtained. } \item{ If multiple perspectives on the same content, as reflected by the multiple learning labels, should also be reflected in the ultimate learned representation, one can learn \emph{SS-R} representations for each learning source, and simply concatenate them afterwards. With $d$ dimensions per source and $m$ sources, this leads to a $d \times m$-dimensional \emph{\textbf{Multiple Single-Source Concatenated Representation (MSS-CR)}}. In this case, independent networks are trained for each of the sources, and no shared knowledge is transferred between sources. A layer setup of the corresponding Representation Network is illustrated in Fig.\ \ref{fig:mst_cr}. } \item{ When applying MTL strategies, the deep architecture should involve shared knowledge layers, before branching out to various individual learning sources, whose learned representations will be concatenated in the final $d \times m$-dimensional representation. We call these \emph{\textbf{Multi-Source Concatenated Representations (MS-CR)}}.
As the branching point can be chosen at different stages, we will investigate the effect of various prototypical branching point choices: at the second convolution layer (\emph{MS-CR@2}, Fig.~\ref{fig:split_2}), the fourth convolution layer (\emph{MS-CR@4}, Fig.~\ref{fig:split_4}), and the sixth convolution layer (\emph{MS-CR@6}, Fig.~\ref{fig:split_6}). The later the branching point occurs, the more shared knowledge the network will employ. } \item{ In the most extreme case, branching would only occur at the very last fully connected layer, and a \textbf{Multi-Source Shared Representation (MS-SR)} (or, more specifically, \emph{MS-SR@FC}) is learned, as illustrated in Fig.~\ref{fig:split_fc}. As the representation is obtained from the \texttt{fc-feature} layer, no concatenation takes place here, and a $d$-dimensional representation is obtained. } \end{itemize} A summary of these different representation learning architectures is given in Table~\ref{tab:fusion}. Beyond the strategies we choose, further approaches can be thought of to connect representations learned for different learning sources in neural network architectures. For example, for different tasks, representations can be extracted from different intermediate hidden layers, benefiting from the hierarchical feature encoding capability of the deep network~\cite{Choi2017TransferTasks}. 
However, considering that learned representations are usually taken from a specific fixed layer of the shared architecture, we focus on the strategies as we outlined above.\par \begin{table} \centering \caption{Properties of the various categories of representation learning architectures.} \label{tab:fusion} \begin{tabular}{ccccc} \hline\noalign{\smallskip} & Multi Source & Shared Network & Concatenation & Dimensionality\\ \noalign{\smallskip}\hline\noalign{\smallskip} \textbf{SS-R} & No & No & No & $d$ \\ \textbf{MSS-CR} & Yes & No & Yes & $d\times{m}$ \\ \textbf{MS-CR} & Yes & Partial & Yes & $d\times{m}$ \\ \textbf{MS-SR} & Yes & Yes & No & $d$ \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{figure}[htp] \centering \subfloat[SS-R: Base setup.]{% \includegraphics[height=0.33\textheight]{graphics/split_base_black_no_margin.pdf}% \label{fig:base}% }% \hfill% \subfloat[MSS-CR: Concatenation of multiple independent SS-R networks.]{% \includegraphics[height=0.33\textheight]{graphics/split_base_black_no_margin.pdf}% \includegraphics[height=0.33\textheight]{graphics/split_base_black_no_margin.pdf}% \label{fig:mst_cr}% }% \hfill% \subfloat[MS-CR@2: network branches to source-specific layers from 2nd convolution layer.]{% \includegraphics[height=0.33\textheight]{graphics/split_2_black.pdf}% \label{fig:split_2}% }% \hfill% \subfloat[MS-CR@4: network branches to source-specific layers from 4th convolution layer.]{% \includegraphics[height=0.33\textheight]{graphics/split_4_black.pdf}% \label{fig:split_4}% }% \hfill% \subfloat[MS-CR@6: network branches to source-specific layers from 6th convolution layer.]{% \includegraphics[height=0.33\textheight]{graphics/split_6_black.pdf}% \label{fig:split_6}% }% \hfill% \subfloat[MS-SR@FC: heavily shared network, source-specific branching only at final FC layer.]{% \includegraphics[height=0.33\textheight]{graphics/split_fc_black.pdf}% \label{fig:split_fc}% }% \hfill \caption{The various model architectures considered in 
the current work. Beyond single-source architectures, multi-source architectures with various degrees of shared information are studied. For simplicity, multi-source cases are illustrated here for two sources. The \texttt{fc-feature} layer from which representations will be extracted is the FC(256) layer in the illustrations (see Table~\ref{tab:netarch}).} \label{fig:split} \end{figure} \subsection{MTL Training Procedure} \label{dl_specifications:train} \begin{algorithm}[h] \nl Initialize $\Theta$: \{$\theta^{t}$, $\theta^{s}$\} randomly\; \nl \For{epoch in 1...N}{ \nl \For{iteration in 1...L}{ \nl Pick a learning source $t$ randomly\; \nl Pick batch of samples from learning source $t$\; ($X_l$, $X_r$) for \textit{self}\; $X$ otherwise\; \nl Derive learning label $z_{t}$\; \nl Sub-sample chunk $X^*$ from track $X$\; \nl Forward-pass:\; $\mathcal{L}(y_{self}, \Theta, X_l^*, X_r^*)=$Eq. \ref{eq:self} for \textit{self}\; $\mathcal{L}(z_{t}, \Theta, X^*)=$Eq. \ref{eq:2} otherwise\; \nl Backward-pass: $\nabla(\Theta)$\; \nl Update model: $\Theta \gets \Theta - \epsilon \nabla(\Theta)$\; } } \caption{Training a Multi-Source CNN} \label{alg:train} \end{algorithm} Similar to~\cite{Weston2011Multi-TaskingRetrieval,Liu2015Multi-taskSelection}, we choose to train the MTL models with a stochastic update scheme as described in Algorithm~\ref{alg:train}. At every iteration, a learning source is selected randomly. After the learning source is chosen, a batch of observation-label pairs $(X, z_{t})$ is drawn. For the audio previews belonging to the songs within this batch, an input representation $X^*$ is cropped randomly from its super-sample $X$. The updates of the parameters $\Theta$ are conducted through back-propagation using the Adam algorithm~\cite{Kingma2014Adam:Optimization}.
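Abstracting away the network specifics, the stochastic update scheme of Algorithm~\ref{alg:train} can be sketched as follows; plain gradient descent stands in for Adam here, and all function and parameter names are our own placeholders:

```python
import random
import numpy as np

def train_mtl(theta, batchers, grad_fn, n_epochs, n_iters, lr=0.00025):
    """Stochastic multi-source training: each iteration draws a batch
    from one randomly chosen learning source t and applies one gradient
    step on the (shared) parameter vector theta."""
    for epoch in range(n_epochs):
        for it in range(n_iters):
            t = random.choice(list(batchers))   # pick a learning source
            X, z = batchers[t]()                # batch + its learning label
            theta = theta - lr * grad_fn(theta, t, X, z)
    return theta
```

In the real model, $\Theta$ splits into shared parameters $\theta^{s}$ and source-specific parameters $\theta^{t}$, and only the branch of the sampled source receives source-specific updates.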
For each neural network we train, we set $L=lm$, where $l$ is the number of iterations needed to visit all training samples with a fixed batch size $b=128$, and $m$ is the number of learning sources used in the training. Throughout the training, we used a fixed learning rate $\epsilon=0.00025$. After a fixed number of epochs $N$ is reached, we stop the training.\par \subsection{Implementation Details} \label{dl_specifications:imple} We used \textit{PyTorch}~\cite{paszke2017automatic} to implement the CNN models and parallel data serving. For model evaluation and cross-validation, we made extensive use of functionality in \textit{Scikit-Learn}~\cite{Pedregosa2012Scikit-learn:Python}. Furthermore, \textit{Librosa}~\cite{Mcfee2015Librosa:Python} was used to process audio files and extract features, including mel spectrograms. The training is conducted on 8 Graphical Processing Unit (GPU) computation nodes, composed of 2 NVIDIA GRID K2 GPUs and 6 NVIDIA GTX 1080Ti GPUs.\par \begin{figure}[htp] \centering \includegraphics[width=1\textwidth]{graphics/framework.pdf} \caption{Overall system framework. The first row of the figure illustrates the learning scheme, where the representation is learned by minimizing the KL divergence between the network inference $f_t(X)$ and the preprocessed learning label $z_t$. The preprocessing is conducted by the blue blocks, which transform the original noisy labels $y_t$ into $z_t$, reducing noise and summarizing the high-dimensional label space into a smaller latent space. The second row describes the entire evaluation scenario. The representation is first extracted from the representation network, which is transferred from the upper row. The sequence of representation vectors is aggregated as the concatenation of their means and standard deviations.
The purple block indicates a machine learning model employed to evaluate the representation's effectiveness.} \label{fig:framework} \end{figure} \section{Evaluation} \label{eval} So far, we discussed the details regarding the learning phase of this work, which corresponds to the upper row of Fig.~\ref{fig:framework}. This included various choices of sources for the representation learning, and various choices of architecture and fusion strategies. In this section, we present the evaluation methodology we followed, as illustrated in the second row of Fig.~\ref{fig:framework}. First, we will discuss the chosen target tasks and datasets in Section~\ref{eval:tasks}, followed in Section~\ref{eval:baseline} by the baselines against which our representations will be compared. Section~\ref{eval:expdesign} explains our experimental design, and finally we discuss the implementation of our evaluation experiments in Section~\ref{eval:imple}. \subsection{Target Datasets} \label{eval:tasks} In order to gain insight into the effectiveness of learned representations with respect to multiple potential future tasks, we consider a range of \emph{target datasets}. In this work, our target datasets are chosen to reflect various semantic properties of music, purposefully chosen semantic biases, or popularity in the MIR literature. Furthermore, the representation network should not be configured or learned to explicitly solve the chosen target datasets. While for the learning sources, we could provide categorizations on where and how the learning labels were derived, and also consider algorithmic outcomes as labels, existing popular research datasets mostly fall in the \textit{Professional} or \textit{Crowd} categories. 
In our work, we choose 7 evaluation datasets commonly used in MIR research, which reflect three conventional types of MIR tasks, namely classification, regression and recommendation: \begin{table}[h] \centering \caption{Properties of target datasets used in our experiments. Because of time constraints, we sampled the Lastfm dataset as described in Section~\ref{eval:tasks}; the original size appears in parentheses. In case particular data splits are defined by an original author or a follow-up study, we apply the same split, including the reference in which the split is introduced. Otherwise, we applied either a random split stratified by the label (Ballroom), or simple filtering based on reported faulty entries (IRMAS).} \label{tab:extertask} \begin{tabular}{lrllll} \hline\noalign{\smallskip} Task & \multicolumn{2}{c}{Data} & \#Tracks & \#Class & Split Method \\ \noalign{\smallskip}\hline\noalign{\smallskip} Classification & FMA\cite{Defferrard2016FMA:Analysis} & Genre & 25,000 & 16 & Artist Filtered~\cite{Defferrard2016FMA:Analysis} \\ Classification & GTZAN\cite{Tzanetakis2002MusicalIEEE} & Genre & 1,000 & 10 & Artist Filtered~\cite{DBLP:journals/tmm/KereliukSL15} \\ Classification & Ext. Ballroom\cite{Gouyon2006AnAlgorithms,DBLP:conf/mlsp/MarchandP16} & Genre & 3,390 & 13 & N/A \\ Classification & IRMAS\cite{Bosch2012ASignals} & Instrument & 6,705 & 11 & Song Filtered \\ Regression & Music Emotion\cite{Soleymani20131000Music} & Arousal & 744 & & Genre Stratified\cite{Soleymani20131000Music}\\ Regression & Music Emotion\cite{Soleymani20131000Music} & Valence & 744 & & Genre Stratified\cite{Soleymani20131000Music} \\ Recommendation & Lastfm\mbox{*}\cite{Celma:Springer2010} & Listening Count & 27,093 (961,416) & & N/A \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{itemize} \item \textbf{\textit{Classification.}} Different types of classification tasks exist in MIR.
In our experiments, we consider several datasets used for genre classification and instrument classification. For genre classification, we chose the GTZAN~\cite{Tzanetakis2002MusicalIEEE} and FMA~\cite{Defferrard2016FMA:Analysis} datasets as main exemplars. Even though GTZAN is known for its caveats~\cite{Sturm2014TheRetrieval}, we deliberately used it, because its popularity can be beneficial when comparing with previous and future work. We note though that there may be some overlap between the tracks of GTZAN and the subset of the MSD we use in our experiments; the extent of this overlap is unknown, due to the lack of a confirmed and exhaustive track listing of the GTZAN dataset. We choose to use the fault-filtered data split for training and evaluation suggested in~\cite{DBLP:journals/tmm/KereliukSL15}. This split originally consists of a training, validation and evaluation part; in our case, we also included the validation part as training data. Among the various packages provided by the FMA, we chose the top-genre classification task of FMA-Medium~\cite{Defferrard2016FMA:Analysis}. This is a classification dataset with an unbalanced genre distribution. We used the data split provided by the dataset for our experiment, where the training and validation sets are combined into the training set. Considering another type of genre classification, we selected the Extended Ballroom dataset~\cite{Gouyon2006AnAlgorithms, DBLP:conf/mlsp/MarchandP16}. Because the classes in this dataset are highly separable with regard to their BPM~\cite{Sturm2016TheSystems}, we specifically included this `purposefully biased' dataset as an example of how a learned representation may effectively capture temporal dynamics properties present in a target dataset, as long as the learning sources also reflected these properties. Since no pre-defined split is provided or suggested in the literature, we used stratified random sampling based on the genre label.
The last dataset we considered for classification is the training set of the IRMAS dataset~\cite{Bosch2012ASignals}, which consists of short music clips annotated with the predominant instruments present in the clip. Compared to genre classification, instrument classification is generally considered less subjective, requiring features that separate timbral characteristics of the music signal, as opposed to high-level semantics like genre. We split the dataset such that observations from the same music track are not split across the training and test sets. As the performance metric for all these classification tasks, we used classification accuracy. \item \textbf{\textit{Regression.}} As exemplars of regression tasks, we evaluate our proposed deep representations on the dataset used in the MediaEval Music Emotion prediction task~\cite{Soleymani20131000Music}. It contains frame-level and song-level labels of a two-dimensional representation of emotion, with valence and arousal as dimensions~\cite{Posner2005ThePsychopathology}. Valence is related to the positivity or negativity of the emotion, and arousal is related to its intensity~\cite{Soleymani20131000Music}. The song-level annotation of the V-A coordinates was used as the learning label. In similar fashion to the approach taken in~\cite{Choi2017TransferTasks}, we trained separate models for the two emotional dimensions. As for the dataset split, we used the split provided by the dataset, which is a random split stratified by the genre distribution. As the evaluation metric, we measured the coefficient of determination $R^{2}$ of each model. \item \textbf{\textit{Recommendation.}} Finally, we employed the `Last.fm - 1K users' dataset~\cite{Celma:Springer2010} to evaluate our representations in the context of a content-aware music recommendation task (which will be denoted as \emph{Lastfm} in the remainder of the paper).
This dataset contains 19 million records of listening events across $961,416$ unique tracks collected from $992$ unique users. In our experiments, we mimicked a cold-start recommendation problem, in which items not seen before should be recommended to the right users. For efficiency, we filtered out users who listened to fewer than $5$ tracks and tracks known to fewer than $5$ users. As for the audio content of each track, we obtained the mapping between the MusicBrainz Identifier (MBID) and the Spotify identifier (SpotifyID) using the \texttt{MusicBrainz API}\footnote{\url{https://musicbrainz.org/}}. After cross-matching, we collected 30-second previews of all tracks using the \texttt{Spotify API}\footnote{\url{https://developer.spotify.com/documentation/web-api/}}. We found that there is a substantial amount of missing mapping information between the SpotifyID and MBID in the \texttt{MusicBrainz} database, where only approximately 30\% of the mappings are available. Also, because of the substantial amount of inactive users and unpopular tracks in the dataset, we ultimately acquired a dataset of $985$ unique users and $27,093$ unique tracks with audio content. Similar to \cite{Liang2014Content-AwareNetworks}, we considered the \textit{outer matrix} performance for un-introduced songs; in other words, the model's recommendation accuracy on items newly introduced to the system~\cite{Liang2014Content-AwareNetworks}. This was done by holding out certain tracks when learning user models, and then predicting user preference scores based on all tracks, including those that were held out, resulting in a ranked track list per user. As the evaluation metric, we consider Normalized Discounted Cumulative Gain ($nDCG@500$), only treating held-out tracks that were indeed liked by a user as relevant items. Further details on how hold-out tracks were chosen are given in Section~\ref{eval:imple}.
\end{itemize} A summary of all evaluation datasets, their origins and properties, can be found in Table~\ref{tab:extertask}. \subsection{Baselines} \label{eval:baseline} We examined three baselines to compare with our proposed representations: \begin{itemize} \item\textbf{\textit{Mel-Frequency Cepstral Coefficients (MFCC).}} These are some of the most popular audio representations in MIR research. In this work, we extract and aggregate MFCC following the strategy in~\cite{Choi2017TransferTasks}. In particular, we extracted 20 coefficients and also used their first- and second-order derivatives. After obtaining the sequence of MFCCs and its derivatives, we performed aggregation by taking the average and standard deviation over the time dimension, resulting in a 120-dimensional vector representation. \item\textbf{\textit{Random Network Feature (Rand).}} We extracted the representation at the \texttt{fc-feature} layer without any representation network training. With random initialization, this representation therefore gives a random baseline for a given CNN architecture. We refer to this baseline as \textit{Rand}. \item\textbf{\textit{Latent Representation from Music Auto-Tagger (Choi).}} The work in~\cite{Choi2017TransferTasks} focused on a music auto-tagging task, and can be considered as yielding a state-of-the-art deep music representation for MIR. While the model's focus on learning a representation for music auto-tagging can be considered as our \textit{SS-R} case, there are a number of issues that complicate direct comparisons between this work and ours. First, the network in~\cite{Choi2017TransferTasks} is trained with about 4 times more data samples than in our experiments. Second, it employed a much smaller network than our architecture. Further, intermediate representations were extracted, which is out of the scope of our work, as we only consider representations at the \texttt{fc-feature} layer. 
Nevertheless, despite these caveats, the work still is very much in line with ours, making it a clear candidate for comparison. Throughout the evaluation, we could not fully reproduce the performance reported in the original paper~\cite{Choi2017TransferTasks}. When reporting our results, we therefore will report the performance we obtained with the published model, referring to this as \textit{Choi}. \end{itemize} \subsection{Experimental Design} \label{eval:expdesign} \begin{figure} \centering \includegraphics[scale=.4]{graphics/_alias.pdf} \caption{Aliasing among main effects in the final experimental design.} \label{fig:exp_design} \end{figure} In order to investigate our research questions, we carried out an experiment to study the effect of the number and type of learning sources on the effectiveness of deep representations, as well as the effect of the various architectural learning strategies described in Section~\ref{dl_specifications:fusion}. For the experimental design, we consider the following factors: \begin{itemize} \item Representation strategy, with 6 levels: \emph{SS-R}, \emph{MS-SR@FC}, \emph{MS-CR@6}, \emph{MS-CR@4}, \emph{MS-CR@2}, and \emph{MSS-CR}. \item Eight 2-level factors indicating the presence or absence of each of the 8 learning sources: \emph{self}, \emph{year}, \emph{bpm}, \emph{taste}, \emph{tag}, \emph{lyrics}, \emph{cdr\_tag} and \emph{artist}. \item Number of learning sources present in the learning process (1 to 8). Note that this is actually calculated as the sum of the eight factors above. \item Target dataset, with 7 levels: Ballroom, FMA, GTZAN, IRMAS, Lastfm, Arousal and Valence. \end{itemize} Given a learned representation, fitting dataset-specific models is much more efficient than learning the representation, so we decided to evaluate each representation on all 7 target datasets.
The experimental design is thus restricted to combinations of representation and learning sources, and for each such combination we will produce 7 observations. However, given that \emph{SS-R} by definition relies on a single learning source, that there is only one possible combination for $n=8$ sources, and that the number of source combinations per $n$ is highly unbalanced\footnote{For instance, from the 255 possible combinations of up to 8 sources, there are 70 combinations of $n=4$ sources, but only 28 with $n=2$ and 8 with $n=7$. Simple random sampling from the 255 possible combinations would lead to a very unbalanced design, that is, a highly non-uniform distribution of observation counts across the levels of the factor ($n$ in this case). A balanced design is desired to prevent aliasing and maximize statistical power. See section 15.2 in~\cite{Montgomery2012design} for details on unbalanced designs.}, we proceeded in three phases: \begin{enumerate} \item We first trained the \emph{SS-R} representations for each of the 8 sources, each repeated 6 times. This resulted in 48 experimental runs. \item We then proceeded to train all five multi-source strategies with all sources, that is, $n=8$. We repeated this 5 times, leading to 25 additional experimental runs. \item Finally, we ran all five multi-source strategies with $n=2,\dots,7$. The full design matrix would contain 5 representations and 8 sources, for a total of 1,230 possible runs. Such an experiment was unfortunately infeasible to run exhaustively given the available resources, so we decided to follow a fractional design.
However, rather than using a pre-specified optimal design with a fixed amount of runs~\cite{Goos2011optimal}, we decided to run sequentially for as long as time would permit us, generating at each step a new experimental run on demand in a way that would maximize desired properties of the design up to that point, such as balance and orthogonality\footnote{An experimental design is orthogonal if the effects of any factor balance out across the effects of the other factors. In a non-orthogonal design effects may be aliased, meaning that the estimate of one effect is partially biased with the effect of another, the extent of which ranges from 0 (no aliasing) to 1 (full aliasing). Aliasing is sometimes referred to as confounding. See sections 8.5 and 9.5 in~\cite{Montgomery2012design} for details on aliasing.}. We did this with the greedy Algorithm~\ref{alg:design}. From the set of still remaining runs $\mathcal{A}$, a subset $\mathcal{O}$ is selected such that the expected unbalance in the augmented design $\mathcal{B}\cup\{o\}$ is minimal. In this case, the unbalance of a design is defined as the maximum unbalance found between the levels of any factor, except for those already exhausted\footnote{For instance, let a design have 20 runs for \emph{SS-R}, 16 for \emph{MS-SR@FC}, and 18 for all other representations. The unbalance in the representation factor is thus $20-16=4$. The total unbalance of the design is defined as the maximum unbalance found across all factors.}. From $\mathcal{O}$, a second subset $\mathcal{P}$ is selected such that the expected aliasing in the augmented design is minimal, here defined as the maximum absolute aliasing between main effects\footnote{See section 2.3.7 in~\cite{Goos2011optimal} for details on how to compute an alias matrix.}. Finally, a run $p$ is selected at random from $\mathcal{P}$, the corresponding representation is learned, and the algorithm iterates again after updating $\mathcal{A}$ and $\mathcal{B}$. 
Following this on-demand methodology, we managed to run another 352 experimental runs out of the 1,230 possible. \end{enumerate} \begin{algorithm}[h] \nl Initialize $\mathcal{A}$ with all possible 1,230 runs to execute\; \nl Initialize $\mathcal{B}\gets\emptyset$ for the set of already executed runs\; \nl \While{time allows}{ \nl Select $\mathcal{O}\subseteq \mathcal{A}$ s.t. $\forall o\in \mathcal{O}$, the unbalance in $\mathcal{B}\cup \{o\}$ is minimal\; \nl Select $\mathcal{P}\subseteq \mathcal{O}$ s.t. $\forall p\in \mathcal{P}$, the aliasing in $\mathcal{B}\cup \{p\}$ is minimal\; \nl Select $p\in \mathcal{P}$ at random\; \nl Update $\mathcal{A}\gets \mathcal{A}-\{p\}$\; \nl Update $\mathcal{B}\gets \mathcal{B}\cup\{p\}$\; \nl Learn the representation coded by $p$\; } \caption{Sequential generation of experimental runs.} \label{alg:design} \end{algorithm} After going through the three phases above, the final experiment contained $48+25+352=425$ experimental runs, each producing a different deep music representation. We further evaluated each representation on all 7 target datasets, leading to a grand total of $425\times 7=2{,}975$ datapoints. Fig.~\ref{fig:exp_design} plots the alias matrix of the final experimental design, showing that the aliasing among main factors is indeed minimal. The final experimental design matrix can be downloaded along with the rest of the supplemental material. Each considered representation network was trained using the CNN representation network model from Section~\ref{dl_specifications}, based on the specific combination of learning sources and deep architecture indicated by the experimental run. In order to reduce variance, we fixed the number of training epochs to $N = 200$ across all runs, and applied the same base architecture, except for the branching point. This entire training procedure took approximately 5 weeks with the computational hardware resources introduced in Section~\ref{dl_specifications:imple}.
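The balance criterion driving Algorithm~\ref{alg:design} can be sketched as follows; this is a simplification that ignores the aliasing step and the exclusion of exhausted levels, and all names are our own:

```python
import random
from collections import Counter

def unbalance(runs, factor_levels):
    # maximum difference in observation counts across the levels of any factor;
    # runs are dicts mapping factor name -> level
    worst = 0
    for factor, levels in factor_levels.items():
        counts = Counter(run[factor] for run in runs)
        tallies = [counts.get(level, 0) for level in levels]
        worst = max(worst, max(tallies) - min(tallies))
    return worst

def next_run(remaining, executed, factor_levels):
    # greedily choose a candidate run whose addition keeps the
    # augmented design as balanced as possible (random tie-break)
    scored = [(unbalance(executed + [r], factor_levels), r) for r in remaining]
    best = min(score for score, _ in scored)
    return random.choice([r for score, r in scored if score == best])
```

The full procedure additionally filters the balance-optimal candidates by the expected aliasing between main effects before the random pick.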
\subsection{Implementation Details} \label{eval:imple} In order to assess how our learned deep music representations perform on the various target datasets, transfer learning will now be applied, to consider our representations in the context of these new target datasets. As a consequence, new machine learning pipelines are set up, focused on each of the target datasets. In all cases, we applied the pre-defined split where feasible. Otherwise, we randomly split the dataset into an 80\% training and 20\% test set. For every dataset, we repeated the training and evaluation 5 times, using different train/test splits. In most of our evaluation cases, validation will take place on the test set; in case of the recommendation problem, the test set represents a set of tracks to be held out during user model training, and re-inserted for validation. In all cases, we will extract representations from evaluation dataset audio as detailed in Section~\ref{eval:imple:feat_preproc}, and then learn relatively simple models based on them, as detailed in Section~\ref{eval:imple:model}. Employing the metrics mentioned in the previous section, we will then take average performance scores over the 5 different train-test splits for final performance reporting. \subsubsection{Feature Extraction and Preprocessing} \label{eval:imple:feat_preproc} Taking raw audio from the evaluation datasets as input, we take non-overlapping slices out of this audio with a fixed length of 2.5 seconds. Based on this, we apply the same preprocessing transformations as discussed in Section~\ref{dl_specifications:audiopreproc}. Then, we extract a deep representation from this preprocessed audio, employing the architecture specified by the given experimental run. As in Section~\ref{dl_specifications:fusion}, representations are extracted from the \texttt{fc-feature} layer of each trained CNN model.
Depending on the choice of architecture, the final representation may consist of concatenations of representations obtained by separate representation networks. Input audio may originally be (much) longer than 2.5 seconds; therefore, we aggregate information in feature vectors over multiple time slices by taking their \textit{mean} and \textit{standard deviation} values. As a result, we get a representation with averages per learned feature dimension, and another representation with standard deviations per feature dimension. These are concatenated, as illustrated in Fig.~\ref{fig:framework}.\par \subsubsection{Target Dataset-Specific Models} \label{eval:imple:model} As our goal is not to over-optimize dataset-specific performance, but rather to perform a comparative analysis between different representations (resulting from different learning strategies), we keep the models simple, and use fixed hyper-parameter values for each model across the entire experiment. To evaluate the trained representations, we used different models according to the target dataset. For classification and regression tasks, we used a Multi-Layer Perceptron (MLP) model~\cite{DBLP:journals/ai/Hinton89}. More specifically, the MLP model has two hidden layers with a dimensionality of $256$. As for the non-linearity, we choose ReLU~\cite{Nair2010RectifiedMachines} for all nodes, and the model is trained with the Adam optimization technique~\cite{Kingma2014Adam:Optimization} for 200 iterations. For the evaluation, we used \textit{Scikit-Learn}'s implementation for ease of distributed computing on multiple CPU computation nodes.
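Assuming per-slice feature vectors have already been extracted, the mean/standard-deviation aggregation and the downstream classifier might look as follows with \textit{Scikit-Learn}; the hyper-parameter values follow the text, while the helper names are ours:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def aggregate(slice_features):
    # (n_slices, d) -> (2d,): concatenated per-dimension mean and std
    f = np.asarray(slice_features)
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])

def make_probe():
    # two 256-unit hidden layers, ReLU, Adam, 200 iterations (cf. text)
    return MLPClassifier(hidden_layer_sizes=(256, 256), activation='relu',
                         solver='adam', max_iter=200)
```

A track represented by $n$ slices of $d$-dimensional features thus becomes a single $2d$-dimensional vector before being fed to the dataset-specific model.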
For the recommendation task, we chose a model similar to that suggested in~\cite{Liang2014Content-AwareNetworks,Hu2008CollaborativeYifan}, in which the learning objective is defined as \begin{equation} \label{eq:recsys} \hat{U}, \hat{V}, \hat{W} = \argmin_{U, V, W} \; ||P-UV^{T}||_{C}^{2} + \frac{\lambda^{V}}{2}||V-XW||^{2} + \frac{\lambda^{U}}{2}||U||^{2} + \frac{\lambda^{W}}{2}||W||^{2} \end{equation} \noindent where $P\in\mathbb{R}^{u\times{i}}$ is a binary matrix indicating whether there is interaction between users $u$ and items $i$, and $U\in\mathbb{R}^{u\times{r}}$ and $V\in\mathbb{R}^{i\times{r}}$ are $r$-dimensional user factors and item factors for the low-rank approximation of $P$. $P$ is derived from the original interaction matrix $R\in\mathbb{R}^{u\times{i}}$, which contains the number of interactions from users $u$ with items $i$, as follows:\par \begin{equation} P_{u, i} = \begin{cases} 1, & \text{if } R_{u, i} > 0\\ 0, & \text{otherwise} \end{cases} \end{equation} $W\in\mathbb{R}^{d\times{r}}$ is a free parameter for the projection from the $d$-dimensional feature space to the factor space. $X\in\mathbb{R}^{i\times{d}}$ is the feature matrix, where each row corresponds to a track. Finally, $||\cdot||_{C}$ is the Frobenius norm weighted by the confidence matrix $C\in\mathbb{R}^{u\times{i}}$, which controls the credibility of the model on the given interaction data, given as follows:\par \begin{equation} \label{eq:recsys:confidence} C = 1 + \alpha R \end{equation} where $\alpha$ controls the credibility. As hyper-parameters, we set $\alpha=0.1$, $\lambda^{V}=0.00001$, $\lambda^{U}=0.00001$, and $\lambda^{W}=0.1$. For the number of factors, we chose $r=50$ to focus only on the relative impact of the representation across the different conditions. We implemented an update rule with the Alternating Least Squares (ALS) algorithm similar to~\cite{Liang2014Content-AwareNetworks}, and updated parameters for 15 iterations.
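The construction of $P$ and $C$ from the raw count matrix $R$ is straightforward; a minimal sketch mirroring the two equations above (the function name is ours, and matrices are represented as plain lists of lists for illustration):

```python
def preference_and_confidence(R, alpha=0.1):
    """Binarize the interaction-count matrix R into the preference
    matrix P, and build the confidence matrix C = 1 + alpha * R."""
    P = [[1 if r > 0 else 0 for r in row] for row in R]
    C = [[1 + alpha * r for r in row] for row in R]
    return P, C

# Two users, two items: P keeps only whether an interaction occurred,
# while C grows with the interaction count.
P, C = preference_and_confidence([[0, 3], [5, 0]])
```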
\section{Results and Discussion} \label{res:intro} In this section, we present the results and discussion related to the proposed deep music representations. In Section \ref{res:single_multi_rep}, we first compare the performance across the \emph{SS-R}s, to show how well different individual learning sources work for each target dataset. Then, we present general experimental results related to the performance of the multi-source representations. In Section \ref{res:task_num}, we discuss the effect of the number of learning sources exploited in the representation learning, in terms of general performance, reliability, and model compactness. In Section \ref{res:single_vs_multi}, we discuss the effectiveness of different representations in MIR. Finally, we present some initial evidence for the multifaceted semantic explainability of the proposed MTDTL in Section~\ref{res:mulexpfac}.\footnote{For reproducibility, we release all relevant materials, including code, models and extracted features, at \url{https://github.com/eldrin/MTLMusicRepresentation-PyTorch}.} \subsection{Single-Source and Multi-Source Representation} \label{res:single_multi_rep} \begin{figure} \centering \includegraphics[scale=.58]{graphics/_y_by_source.pdf} \caption{Performance of single-source representations. Each point indicates the performance of a representation learned from a single source. Solid points indicate the average performance per source. The baselines are illustrated as horizontal lines.} \label{fig:single_task_rep} \end{figure} Fig.~\ref{fig:single_task_rep} presents the performance of the \emph{SS-R} representations on each of the 7 target datasets. We can see that all sources tend to outperform the \textit{Rand} baseline on all datasets, except for a handful of cases involving the \emph{self} and \emph{bpm} sources.
Looking at the top-performing sources, we find that \emph{tag}, \emph{cdr\_tag} and \emph{artist} perform better than or on par with the most sophisticated baseline, \textit{Choi}, except on the IRMAS dataset. The other sources fall somewhere between these two baselines, except on the Lastfm and Arousal datasets, where they also perform better than \textit{Choi}. Finally, the \textit{MFCC} baseline is generally outperformed in all cases, with the notable exception of the IRMAS dataset, where only \textit{Choi} performs better. Zooming in to dataset-specific trends, the \emph{bpm} learning source shows a highly skewed performance across target datasets: it clearly outperforms all other learning sources on the Ballroom dataset, but achieves the worst or second-worst performance on the other datasets. This confirms the finding in~\cite{Sturm2016TheSystems} that the Ballroom dataset is well separable based on BPM information alone. Indeed, representations trained on the \emph{bpm} learning source seem to contain a latent representation close to the BPM of the input music signal. In contrast, the \emph{bpm} representation achieves the worst results on the Arousal dataset, where both temporal dynamics and BPM are considered important factors determining the intensity of emotion. On the IRMAS dataset, all the \emph{SS-R}s perform worse than the \textit{MFCC} and \textit{Choi} baselines. Given that both baselines take low-level features into account, either by design or by exploiting low-level layers of the neural network, this suggests that predominant instrument sounds are harder to distinguish based solely on semantic features, as is the case for the representations studied here. Also, we find that there is little variability across runs for each \emph{SS-R} within the training setup we applied. Specifically, in 50\% of cases the within-\emph{SS-R} variability is less than 15\% of the within-dataset variability.
In 90\% of the cases, this variability stays within 30\% of the within-dataset variability. \begin{figure} \centering \includegraphics[scale=.58]{graphics/_y_by_arc.pdf} \caption{Performance by representation strategy. Solid points represent the mean per representation. The baselines are illustrated as horizontal lines.} \label{fig:overallperformance} \end{figure} We now consider how the various representations based on multiple learning sources perform, in comparison to those based on single learning sources. The boxplots in Fig.~\ref{fig:overallperformance} show the distributions of performance scores for each architectural strategy and per target dataset. For comparison, the gray boxes summarize the distributions depicted in Fig.~\ref{fig:single_task_rep}, based on the \emph{SS-R} strategy. In general, we can see that these \emph{SS-R}s obtain the lowest scores, followed by \emph{MS-SR@FC}, except for the IRMAS dataset. Given that these representations have the same dimensionality, these results suggest that adding a single source-specific layer on top of a heavily shared model may help improve the adaptability of the neural network models, especially when there is no prior knowledge of which learning sources match the target dataset well. The \emph{MS-CR} and \emph{MSS-CR} representations obtain the best results in general, which is somewhat expected because of their larger dimensionality. \subsection{Effect of Number of Learning Sources and Fusion Strategy} \label{res:task_num} \begin{figure} \centering\includegraphics[scale=.58]{graphics/_y_by_n.pdf} \caption{(Standardized) Performance by number of learning sources. Solid points represent the mean per architecture and number of sources. The black horizontal line marks the mean performance of the \emph{SS-R} representations.
The colored lines show linear fits.} \label{fig:performance_by_n} \end{figure} While the plots in Fig.~\ref{fig:overallperformance} suggest that \emph{MSS-CR} and \emph{MS-CR} are the best strategies, the high observed variability makes it difficult to state this with confidence. To gain better insight into the effects of dataset, architecture strategy, and number and type of learning sources, we further analyzed the results using a hierarchical (multilevel) linear model on all observed scores~\cite{Gelman2006hierarchical}. The advantage of such a model is essentially that it accounts for the structure of our experiment, where observations nested within datasets are not independent. From Fig.~\ref{fig:overallperformance} we can anticipate a very large dataset effect, because of the inherently different levels of difficulty, as well as a high level of heteroskedasticity. We therefore analyzed standardized performance scores rather than raw scores. In particular, the $i$-th performance score $y_i$ is standardized with the within-dataset mean and standard deviation, that is, $y^*_i=(y_i - \bar{y}_{d[i]})/s_{d[i]}$, where $d[i]$ denotes the dataset of the $i$-th observation. This way, the dataset effect is effectively $0$ and the variance is homogeneous. In addition, this allows us to compare the relative differences across strategies and numbers of sources on the same scale in all datasets. We also transformed the variable $n$, which refers to the number of sources, into $n^*$, set to $n^*=0$ for \emph{SS-R}s and to $n^*=n-2$ for the other strategies. This way, the intercepts of the linear model represent the average performance of each representation strategy in its simplest case, that is, \emph{SS-R} ($n=1$) or non-\emph{SS-R} with $n=2$.
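The within-dataset standardization $y^*_i=(y_i - \bar{y}_{d[i]})/s_{d[i]}$ can be sketched in a few lines of pure Python. Whether the population or sample standard deviation was used is not stated in the text, so the choice of `pstdev` here is an assumption:

```python
from statistics import mean, pstdev

def standardize_within_dataset(scores, dataset_ids):
    """y*_i = (y_i - mean of dataset d[i]) / sd of dataset d[i]."""
    groups = {}
    for y, d in zip(scores, dataset_ids):
        groups.setdefault(d, []).append(y)
    stats = {d: (mean(ys), pstdev(ys)) for d, ys in groups.items()}
    return [(y - stats[d][0]) / stats[d][1] for y, d in zip(scores, dataset_ids)]

# Each dataset ends up centered at 0 with unit spread, removing the dataset effect.
z = standardize_within_dataset([1.0, 3.0, 10.0, 20.0], ['a', 'a', 'b', 'b'])
```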
We fitted a first analysis model as follows: \begin{align} y^*_i &= \beta_{0r[i]d[i]} + \beta_{1r[i]d[i]}\cdot n^*_i + e_i &e_i&\sim N(0,\sigma^2_e) \label{eq:m11}\\ \beta_{0rd} &= \beta_{0r} + u_{0rd} &u_{0rd}&\sim N(0,\sigma^2_{0r}) \label{eq:m12}\\ \beta_{1rd} &= \beta_{1r} + u_{1rd} &u_{1rd}&\sim N(0,\sigma^2_{1r}) \label{eq:m13}, \end{align} where $\beta_{0r[i]d[i]}$ is the intercept of the corresponding \underline{r}epresentation strategy within the corresponding \underline{d}ataset. Each of these coefficients is defined as the sum of a global fixed effect $\beta_{0r}$ of the representation, and a random effect $u_{0rd}$, which allows for random within-dataset variation\footnote{We note that hierarchical models do not fit each of the individual $u_{0rd}$ coefficients (a total of 42 in this model), but the amount of variability they produce, that is, $\sigma^2_{0r}$ (6 in total).}. This way, we separate the effects of interest (i.e.\ each $\beta_{0r}$) from the dataset-specific variations (i.e.\ each $u_{0rd}$). The effect of the number of sources is similarly defined as the sum of a fixed representation-specific coefficient $\beta_{1r}$ and a random dataset-specific coefficient $u_{1rd}$. Because the slope depends on the representation, we are thus implicitly modeling the interaction between strategy and number of sources, which can be appreciated in Fig.~\ref{fig:performance_by_n}, especially with \emph{MS-SR@FC}. \begin{figure} \centering\includegraphics[scale=.4]{graphics/_eff1.pdf} \caption{Fixed effects and bootstrap 95\% confidence intervals estimated for the first analysis model. The left plot depicts the effects of the representation strategy ($\beta_{0r}$ intercepts) and the right plot shows the effects of the number of sources ($\beta_{1r}$ slopes).} \label{fig:effects1} \end{figure} Fig.~\ref{fig:effects1} shows the estimated effects and bootstrap 95\% confidence intervals. The left plot confirms the observations in Fig.~\ref{fig:overallperformance}.
In particular, they confirm that \emph{SS-R} performs significantly worse than \emph{MS-SR@FC}, which in turn performs significantly worse than the others. When carrying out pairwise comparisons, \emph{MSS-CR} outperforms all other strategies except \emph{MS-CR@2} ($p=0.32$), which outperforms all others except \emph{MS-CR@6} ($p=0.09$). The right plot confirms the qualitative observation from Fig.~\ref{fig:performance_by_n} by showing a significantly positive effect of the number of sources, except for \emph{MS-SR@FC}, where it is not statistically different from 0. The intervals suggest a very similar effect in the best representations, with average increments of about $0.16$ per additional source (recall that scores are standardized). To gain better insight into differences across representation strategies, we used a second hierarchical model in which the representation strategy was modeled as an ordinal variable $r^*$ instead of the nominal variable $r$ used in the first model. In particular, $r^*$ represents the size of the network, so we coded \emph{SS-R} as $0$, \emph{MS-SR@FC} as $0.2$, \emph{MS-CR@6} as $0.4$, \emph{MS-CR@4} as $0.6$, \emph{MS-CR@2} as $0.8$, and \emph{MSS-CR} as $1$ (see Fig.~\ref{fig:split}). In detail, this second model is as follows: \begin{align} y^*_i &= \beta_{0} + \beta_{1d[i]}\cdot r^*_i + \beta_{2d[i]}\cdot n^*_i + \beta_{3d[i]}\cdot r^*_i\cdot n^*_i + e_i &e_i&\sim N(0,\sigma^2_e) \label{eq:m21}\\ \beta_{1d} &= \beta_{10} + u_{1d} &u_{1d}&\sim N(0,\sigma^2_1) \label{eq:m22}\\ \beta_{2d} &= \beta_{20} + u_{2d} &u_{2d}&\sim N(0,\sigma^2_2) \label{eq:m23}\\ \beta_{3d} &= \beta_{30} + u_{3d} &u_{3d}&\sim N(0,\sigma^2_3) \label{eq:m24}. \end{align} In contrast to the first model, there is no representation-specific fixed intercept but an overall intercept $\beta_0$. The effect of the network size is similarly modeled as the sum of an overall fixed slope $\beta_{10}$ and a random dataset-specific effect $u_{1d}$.
Likewise, this model includes the main effect of the number of sources (fixed effect $\beta_{20}$), as well as its interaction with the network size (fixed effect $\beta_{30}$). Fig.~\ref{fig:effects2} shows the fitted coefficients, confirming the statistically positive effect of the size of the networks and, to a smaller degree but still significant, of the number of sources. The interaction term is not statistically significant, probably because of the unclear benefit of the number of sources in \emph{MS-SR@FC}. \begin{figure} \centering\includegraphics[scale=.4]{graphics/_eff2.pdf} \caption{Fixed effects and bootstrap 95\% confidence intervals estimated for the second analysis model, depicting the overall intercept ($\beta_0$), the slope of the network size ($\beta_{10}$), the slope of the number of sources ($\beta_{20}$), and their interaction ($\beta_{30}$).} \label{fig:effects2} \end{figure} Overall, these analyses confirm that all multi-source strategies outperform the single-source representations, with a direct relation to the number of parameters in the network. In addition, there is a clearly positive effect of the number of sources, with a minor interaction between both factors. Fig.~\ref{fig:performance_by_n} also suggests that the variability of performance scores decreases with the number of learning sources used. This implies that if there are more learning sources available, one can expect less variability across instantiations of the network. Most importantly, variability obtained for a single learning source ($n=1$) is always larger than the variability with 2 or more sources. The Ballroom dataset shows much smaller variability when BPM is included in the combination. For this specific dataset, this indicates that once \emph{bpm} is used to learn the representation, the expected performance is stable and does not vary much, even if we keep including more sources. Section~\ref{res:single_vs_multi} provides more insight in this regard. 
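For concreteness, the variable codings used in the two analysis models can be written down directly (a trivial sketch; the dictionary and function names are ours):

```python
# Ordinal network-size coding r* of the representation strategies,
# and the shifted source count n* used in both analysis models.
SIZE_CODE = {'SS-R': 0.0, 'MS-SR@FC': 0.2, 'MS-CR@6': 0.4,
             'MS-CR@4': 0.6, 'MS-CR@2': 0.8, 'MSS-CR': 1.0}

def n_star(strategy, n):
    """n* = 0 for SS-R (where n = 1); n* = n - 2 otherwise, so intercepts
    reflect the simplest case of each strategy (n = 1 or n = 2)."""
    return 0 if strategy == 'SS-R' else n - 2
```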
\subsection{Single-Source vs. Multi-Source} \label{res:single_vs_multi} \begin{figure} \centering\includegraphics [scale=.58]{graphics/_y_by_best.pdf} \caption{(Standardized) performance by number of learning sources. Solid points mark representations including the source performing best with \emph{SS-R} in the dataset; empty points mark representations without it. Solid and dashed lines represent the corresponding linear fits; shaded areas represent 95\% confidence intervals.} \label{fig:performance_w_wo_best_stl} \end{figure} \begin{figure} \centering\includegraphics[scale=.4]{graphics/_cor.pdf} \caption{Correlation between (standardized) \emph{SS-R} performance and variance component.} \label{fig:rank_cor} \end{figure} The evidence so far tells us that, \emph{on average}, learning from multiple sources leads to better performance than learning from a single source. However, it could still be that the \emph{SS-R} representation with the best learning source for the given target dataset performs better than a multi-source alternative. In fact, in Fig.~\ref{fig:performance_by_n} there are many cases where the best \emph{SS-R} representation (black circles at $n=1$) already performs quite well compared to the more sophisticated alternatives. Fig.~\ref{fig:performance_w_wo_best_stl} presents similar scatter plots, but now explicitly differentiating between representations using the single best source (filled circles, solid lines) and not using it (empty circles, dashed lines). The results suggest that even if the strongest learning source for the specific dataset is not used, the others largely compensate for it in the multi-source representations, catching up with and even surpassing the best \emph{SS-R} representations. The exception to this rule is again \emph{bpm} in the Ballroom dataset, where it definitely makes a difference.
As the plots show, the variability for low numbers of learning sources is larger when the strongest source is not used, but as more sources are added, this variability reduces. To further investigate this issue, for each target dataset we also computed the variance component due to each of the learning sources, excluding \emph{SS-R} representations~\cite{Searle2006variance}. A large variance due to one of the sources means that, on average and for that specific dataset, there is a large difference in performance between having that source or not. Table~\ref{tab:var} shows all variance components, highlighting the largest per dataset. Apart from \emph{bpm} in the Ballroom dataset, there is no clear evidence that one single source is especially good in all datasets, which suggests that in general there is not a single source that one would use by default. Notably though, the sources \emph{artist}, \emph{tag} and \emph{self} tend to have large variance components. \begin{table}[ht] \centering \caption{Variance components (as percent of total) of the learning sources, within each of the target datasets, and for non-\emph{SS-R} representations.
Largest per dataset in bold face.}\label{tab:var} \begin{tabular}{rrrrrrrr} \hline & Ballroom & FMA & GTZAN & IRMAS & Lastfm & Arousal & Valence \\ \hline \emph{self} & 2 & \textbf{32} & \textbf{39} & 18 & 29 & 6 & 10 \\ \emph{year} & $<$1 & 6 & $<$1 & 1 & 2 & 2 & $<$1 \\ \emph{bpm} & \textbf{96} & 3 & $<$1 & 8 & 16 & $<$1 & \textbf{42} \\ \emph{taste} & $<$1 & $<$1 & $<$1 & $<$1 & $<$1 & $<$1 & 6 \\ \emph{tag} & 1 & 17 & 21 & 16 & 20 & \textbf{33} & 14 \\ \emph{lyrics} & $<$1 & $<$1 & $<$1 & 3 & $<$1 & 11 & $<$1 \\ \emph{cdr\_tag} & $<$1 & 9 & 12 & 16 & 2 & 16 & 14 \\ \emph{artist} & 1 & \textbf{32} & 28 & \textbf{37} & \textbf{32} & 31 & 15 \\ \hline \end{tabular} \end{table} In addition, we observe that the sources with the largest variance are not necessarily the sources that obtain the best results by themselves in an \emph{SS-R} representation (see Fig.~\ref{fig:single_task_rep}). We examined this relationship further by calculating the correlation between variance components and the (standardized) performance of the corresponding \emph{SS-R}s. The Pearson correlation is $0.38$, indicating a mild association. Fig.~\ref{fig:rank_cor} further shows this with a scatterplot, with a clear distinction between poorly performing sources (\emph{year}, \emph{taste} and \emph{lyrics}, at the bottom) and well-performing sources (\emph{tag}, \emph{cdr\_tag} and \emph{artist}, at the right). This result implies that even if some \emph{SS-R} is particularly strong for a given dataset, when considering more complex fusion architectures, the presence of that one source is not strictly required, because the other sources make up for its absence. This is especially important in practical terms, because different tasks generally have different best sources, and practitioners rarely have sufficient domain knowledge to select them up front. Also, and unlike the Ballroom dataset, many real-world problems are not easily solved with a single feature.
Therefore, choosing a more general representation based on multiple sources is a much simpler way to proceed, which still yields comparable or better results. In other words, if ``a single deep representation to rule them all'' is pre-trained, it is advisable to base this representation on multiple learning sources. At the same time, given that \emph{MSS-CR} representations also generally show strong performance (albeit at high dimensionality), and that they come `for free' as soon as \emph{SS-R} networks are trained, we could alternatively imagine an ecosystem in which the community pre-trains and releases many \emph{SS-R} networks for different individual sources in a distributed way, and practitioners then collect these into \emph{MSS-CR} representations, without the need for retraining.\par \subsection{Compactness} \label{res:task_num:compactness} \begin{figure} \centering\includegraphics[width=0.7\textwidth]{graphics/model_complexity_by_n_task.pdf} \caption{Number of network parameters by number of learning sources.} \label{fig:complexity} \end{figure} Under an MTDTL setup with branching (the \emph{MS-CR} architectures), as more learning sources are used, not only will the representation grow larger, but so will the deep network needed to learn it: see Fig.~\ref{fig:complexity} for an overview of the necessary model parameters for the different architectures. When using all the learning sources, \emph{MS-CR@6}, which for a considerable part encompasses a shared network architecture and branches out relatively late, requires a network around 6.3 times larger than that needed for \emph{SS-R}.
In contrast, \emph{MS-SR@FC}, which is the most heavily shared MTDTL case, uses a network that is only 1.2 times larger than the network needed for \emph{SS-R}.\par Also, while the dimensionality of the representations resulting from the \emph{MSS-CR} and various \emph{MS-CR} architectures depends linearly on the chosen number of learning sources $m$ (see Table~\ref{tab:fusion}), for \emph{MS-SR@FC}, which has a fixed dimensionality of $d$ independent of $m$, we still notice increasing performance as more learning sources are used, except for the \emph{IRMAS} dataset. This implies that under MTDTL setups, the network learns as much as possible from the multiple sources, even with fixed network capacity.\par \subsection{Multiple Explanatory Factors} \label{res:mulexpfac} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{graphics/semantic_topic_scatter.pdf} \caption[The LOF Caption]{Potential semantic explainability of MTDTL music representations. Here, we provide a visualization using t-SNE~\cite{VanDerMaaten2008VisualizingT-sne}, plotting 2-dimensional coordinates of each sample from the GTZAN dataset, as resulting from an \emph{MS-CR} representation trained on 5 sources\footnotemark. In the zoomed-in panes, we overlay the strongest topic model terms in $z_t$, for various types of learning sources.} \label{fig:topic_semantic} \end{figure} \footnotetext{The specific model used in the visualization is the $232$nd model from the experimental design introduced in Section~\ref{eval:expdesign}, which performs better than 95\% of the other models on the GTZAN target dataset.} By training representation models on multiple learning sources in the way we did, our hope is that the representation will reflect latent semantic facets that ultimately allow for semantic explainability. In Fig.~\ref{fig:topic_semantic}, we show a visualization suggesting this may indeed be possible. More specifically, we consider one of our \emph{MS-CR} models trained on 5 learning sources.
For each learning source-specific block of the representation, using the learning source-specific \texttt{fc-out} layers, we can predict a factor distribution $z_t$ for each of the learning sources. Then, from the predicted $z_t$, one can either map this back to the original learning labels $y_t$, or simply consider the strongest predicted topics (which we visualized in Fig.~\ref{fig:topic_semantic}), to relate the representation to human-understandable facets or descriptions.\footnote{Note that, as soon as a pre-trained representation network is adapted to a new dataset through transfer learning, the \texttt{fc-out} layer can no longer be used to obtain such explanations from the learning sources used in the representation learning, since the layers will then be fine-tuned to another dataset. However, we hypothesize that semantic explainability may still be preserved if fine-tuning is conducted jointly with the original learning sources used during pre-training, in a multi-objective strategy.}\par \section{Conclusion} \label{concl} In this paper, we investigated the effect of different strategies to learn music representations with deep networks, considering multiple learning sources and different network architectures with varying degrees of shared information. Our main research questions are how the number and combination of learning sources (\textbf{RQ1}) and different configurations of the shared architecture (\textbf{RQ2}) affect the effectiveness of the learned deep music representation. To answer these questions, we conducted an experiment training 425 neural network models with different combinations of learning sources and architectures.
After an extensive empirical analysis, we can summarize our findings as follows: \begin{itemize} \item{\textbf{RQ1} The number of learning sources positively affects the effectiveness of a learned deep music representation, although representations based on a single learning source can already be effective in specialized cases (e.g.\ BPM and the Ballroom dataset).} \item{\textbf{RQ2} In terms of architecture, the amount of shared information has a negative effect on performance: larger models with less shared information (e.g.~\emph{MS-CR@2}, \emph{MSS-CR}) tend to outperform models with more sharing (e.g.~\emph{MS-CR@6}, \emph{MS-SR@FC}), all of which outperform the base model (\emph{SS-R}).} \end{itemize} \par Our findings give various pointers for useful future work. First of all, `generality' is difficult to define in the music domain, maybe more so than in CV or NLP, where lower-level information atoms may be less multifaceted in nature (e.g.\ lower-level representations of visual objects naturally extend to many vision tasks, while an equivalent in music is harder to pinpoint). In case of clear task-specific data skews, practitioners should be pragmatic about this. Also, we only investigated one special case of transfer learning, whose findings may not generalize well to other setups, such as adapting the pre-trained network through further fine-tuning on the target dataset. Since there are various choices to make here, each bringing a substantial amount of variability, we leave these aspects for future work. We believe that open-sourcing the models we trained throughout this work will be helpful for such follow-up work. Another limitation of the current work is the selective set of label types in the learning sources. For instance, there are a number of MIR-related tasks that use time-variant labels, such as automatic music transcription, segmentation, beat tracking and chord estimation.
We believe that such tasks should be investigated as well in the future, to build a more complete overview of the MTDTL problem. Finally, in the current work we still largely treated MTDTL as a `black box' operation, trying to learn \emph{how} MTDTL can be effective. However, the original reason for starting this work was not only to yield an effective general-purpose representation, but one that would also be semantically interpretable according to different semantic facets. We showed some early evidence that our representation networks may be capable of picking up such facets; however, considerable future work will be needed on more in-depth analysis techniques to understand \emph{what} the deep representations actually learned. \begin{acknowledgements} This work was carried out on the Dutch national e-infrastructure with the support of SURF Cooperative. We further thank the CDR for having provided their album-level genre annotations for our experiments. We thank Keunwoo Choi for the discussion and all the help regarding the implementation of his work. We also thank David Tax for the valuable inputs and discussion. Finally, we thank the editors and reviewers for their effort and constructive help to improve this work. \end{acknowledgements} \textit{Conflict of interest: Jaehun Kim, Juli\'{a}n Urbano, Cynthia C.~S. Liem and Alan Hanjalic state that there are no conflicts of interest.} \bibliographystyle{unsrtnat}